
MIE344: Ergonomic Design of Information Systems

Week 3: Writing Findings

Last Week: Usability Inspection Methods


- Heuristic evaluation
- Does the interface comply with a set of heuristics?
- It requires an explicit use of heuristics (each usability issue is accompanied by a
heuristic that it violates).
- Cognitive walkthrough
- Each task is broken up into individual actions/steps.
- Four questions per action help evaluate the intuitiveness of each step.
- Expert evaluation
- Relies on the evaluators’ expertise, which comes from:
- Watching usability testing sessions
- Internalized knowledge of heuristics, design guidelines, cognitive
walkthrough questions, etc.
- Includes no explicit references to the sources of the expertise.

Other Usability Inspection Methods


Pluralistic Walkthrough

Overview
- It involves a few types of participants:
- Representative users
- Product designers and developers
- Usability professionals
- Other stakeholders
- Can be used with paper prototypes.
- Participants are given tasks.
- All participants are asked to assume the role of “the user”.
- Participants mark on their hardcopies the action they would take in pursuing the task.
- Developers also serve as “living publications.” If participants need information they
would look for in the manual, they can ask aloud.

What it is not
- Simply multiple reviewers applying their ideas, findings and feedback towards a single
document.
- I.e., it is not your Project 1 where you have to combine your team’s individual
findings.
- Surprisingly, pluralistic walkthroughs have relatively little theoretical basis for their use.
- Rather than being founded on theoretical fundamentals, the method is founded on practicality.

Process
1. The walkthrough moderator provides the tasks
2. Participants are asked to mark on their hard copy of the first screen the action(s) they
would take in attempting the task
3. After everyone has written their independent responses, the walkthrough moderator
announces the “right answer”
4. Users verbalize their responses first and discuss potential usability problems
5. Product developers explain why the design is the way it is
6. Usability experts facilitate the discussion and help come up with solutions
7. Participants can be given a usability questionnaire after each task and at the end of the
day.

Limitations
- Walkthrough must progress as slowly as the slowest participant (because we must wait
for everybody before the discussion).
- Participants may not get a good feel for the flow.
- Multiple correct paths cannot be simulated. Only one path can be evaluated at a time.
- This precludes participants from exploring, which might result in learning.
- Participants who picked a correct path that was not selected for the walkthrough
must “reset.”
- Participants who performed a “wrong” action must mentally reset as well.
- Product designers and developers have to be thick-skinned and treat users’ comments
with respect.

Benefits
- It is a cost- and time-effective method.
- It provides early performance and satisfaction data.
- It often allows for redesign on the fly.
- “I got it right but...”
- Participant responses that were correct but made with uncertainty can be
discussed.
- It increases developers’ sensitivity to users’ concerns, which leads to increased buy-in.

Competitive Evaluation
- It involves evaluating 2 or more products with similar functionality. For example:
- “Us” vs. a key competitor / key competitors
- Only key competitors
- Goals can vary. For example:
- Find usability issues in our product and tell us if there is anything that the
competitors are doing better.
- We don’t care about the usability issues of the other products.
- Compare approaches and make recommendations for our new product.
- We want to know best practices and things we should avoid.

Competitive Evaluation: Pet Insurance Example


- Client’s request:
- We are redesigning the way people obtain pet insurance quotes and enrol on
our site. Help us understand what our competitors are doing.
- 4 competitors: A, B, C and D
- E.g. Pets Best
- Determine scope:
- Quote
- Enrollment
- Review the sites and define dimensions for comparison:
- Access to the quote path
- Process flow
- Plan education within the quote path
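
One lightweight way to run such a review is to keep a competitor-by-dimension matrix as you work through the sites. Below is a minimal sketch in Python; the competitor labels and notes are invented for illustration, and only the dimensions come from the list above:

```python
# Hypothetical competitor-by-dimension notes for the pet insurance review
# (competitor labels and notes are invented for illustration).
dimensions = ["access to quote path", "process flow", "plan education"]

notes = {
    "Competitor A": {
        "access to quote path": "quote CTA visible on every page",
        "process flow": "single-page quote form",
        "plan education": "tooltips only, no plan comparison",
    },
    "Competitor B": {
        "access to quote path": "quote link buried in the footer",
        "process flow": "five-step wizard with progress bar",
        "plan education": "side-by-side plan comparison table",
    },
}

# Print the matrix one competitor at a time.
for competitor, row in notes.items():
    print(competitor)
    for dim in dimensions:
        print(f"  {dim}: {row[dim]}")
```
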
Effectiveness of Usability Inspection Methods

Evaluating the Evaluation Methods
- It all boils down to: How well do they predict actual usability problems that users will
encounter?
- To assess this we compare findings from inspection methods to findings from user
testing.

Usability Evaluation Methods in Context

Inspection Methods vs Empirical Methods (UT)

- Inspection methods are not a substitute for user testing, but how can we make them
more effective?

Number of Evaluators
- 4-5 evaluators find ~75% of all problems that an inspection can find, but:
- 1 evaluator is better than none
- 2 are better than 1
- Etc.
- But at some point, the costs will outweigh the benefits.
- Is it better to have 1 evaluator for 18 hours or 2 evaluators for 9 hours each?
- Is it better to have 2 evaluators for 9 hours each or 3 evaluators for 6 hours each?
- What about 6 evaluators for 3 hours each?
- Or 9 evaluators for 2 hours each?
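
One common way to reason about this trade-off is Nielsen and Landauer's problem-discovery model, in which each evaluator independently finds a fixed fraction λ of all findable problems. The sketch below is an illustration, not part of the course material; λ = 0.31 is the average Nielsen reported, and it reproduces the ~75%+ figure for 4-5 evaluators:

```python
# Expected share of findable problems uncovered by n independent evaluators,
# assuming each finds a fixed fraction lam of all findable problems
# (Nielsen & Landauer's model: found(n) = 1 - (1 - lam)^n).

def proportion_found(n_evaluators: int, lam: float = 0.31) -> float:
    """Expected proportion of findable problems found by n evaluators."""
    return 1.0 - (1.0 - lam) ** n_evaluators

for n in range(1, 10):
    print(f"{n} evaluator(s): {proportion_found(n):.0%} of findable problems")

# With lam = 0.31: 4 evaluators ~77%, 5 evaluators ~84%; each additional
# evaluator adds progressively less coverage.
```
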

Overlap Between Evaluators


- CUE = Comparative Usability Evaluation
- 8 studies comparing the outcomes of usability evaluations that used different
methods and teams
- CUE-3 by Rolf Molich (2001):
- 11 usability professionals independently evaluated avis.com using
inspection methods.
- They found 220 problems in total (including 33 severe problems).
- On average, each evaluator found 16% of the total number of problems...
- ...and 24% of the severe problems.
- Average overlap between any 2 evaluators was only 9%.
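
To make the overlap figure concrete, here is a minimal sketch with hypothetical finding sets (not CUE-3 data; measuring overlap as intersection over union for each pair is an assumption, since the slides do not define the measure):

```python
from itertools import combinations

# Hypothetical problem IDs found by three evaluators (illustration only).
findings = {
    "evaluator_A": {1, 2, 3, 7, 9},
    "evaluator_B": {2, 4, 7, 11},
    "evaluator_C": {5, 7, 12, 13, 14},
}

# Pool every problem reported by any evaluator.
all_problems = set().union(*findings.values())

# Share of the pooled problem list that each evaluator found alone.
for name, found in findings.items():
    print(f"{name}: {len(found) / len(all_problems):.0%} of all problems")

# Mean pairwise overlap: intersection over union for each evaluator pair.
overlaps = [len(a & b) / len(a | b)
            for a, b in combinations(findings.values(), 2)]
print(f"Average pairwise overlap: {sum(overlaps) / len(overlaps):.0%}")
```
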

Evaluators’ Expertise
- Evaluators with user testing experience are better than evaluators with no such
experience (even when using heuristics).
- Double experts (usability and domain) find 1.5 times more problems than usability
experts only.
- Pet insurance example (illustrated on the slide).

Prescribed Tasks vs Self-Guided Exploration


- Prescribed tasks focus experts on particular areas of the interface.
- Preferred by evaluators.
- Other areas are often left unaddressed.
- Self-guided exploration ensures broad coverage among evaluators.
- E.g., exploring different pet profiles in the pet insurance example (FIV+, 14 yrs old, etc.).
- Possible solution:
- Provide evaluators with typical use cases but encourage self-guided exploration.

How to Write Good Findings


Anatomy of a Finding

Problem Description: What is the Problem?


- Describe the problem:
- Use words
- Illustrate the problem (e.g., screenshot)

Why are Screenshots Important?


- Your audience may not be very familiar with all the nuances of the interface (e.g.,
executives).
- If they SEE the problems (as opposed to just reading about them), you are more likely to
get their buy-in.
- If there are multiple entry points / paths for a task or multiple screens with similar
functionality, screenshots help with ambiguities.
- Stakeholders can come to you a month or two later and ask questions. A screenshot will
jog your memory.

Problem Justification: Why is this a Problem?


- Can it impact efficiency?
- Does it increase users’ mental workload and make them slower?
- Does it introduce additional unnecessary steps?
- Can it impact effectiveness?
- Can it lead to errors?
- Can it prevent users from being able to access features?
- Can it impact user satisfaction?
- Is it unpleasing?
- Can it reduce credibility of the product?
- Can it impact learnability?
- Explain how the problem may impact the users.

- Is this good enough?

- No, not enough explanation.

Recommendation: How to fix the Problem?


- Provide a solution (or alternative solutions) to the problem.
- Make it actionable.
- Can’t we just cut to the chase?


- No. The problem has to be separated from the solution:
- It helps us focus on the users and how they may be impacted
- If proposed solutions are not feasible, it will be easier for the
designers/developers to come up with an alternative if they understand
the problem.
- “Express your Annoyance Tactfully” (Molich)
- CUE-6 by Rolf Molich (2006):
- 13 professional usability teams independently evaluated the same product:
- 7 used usability testing
- 3 used inspection methods
- 3 used a combination
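
Pulling the anatomy together, a finding keeps its description, justification, and recommendation separate. Here is a minimal sketch of that structure; the field names and the example finding are hypothetical, and the labeling example echoes the one used later in these notes:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """One usability finding: the problem, why it matters, how to fix it."""
    description: str     # What is the problem?
    screenshot: str      # Path to the screenshot that illustrates it
    justification: str   # Why is this a problem? (impact on the users)
    recommendation: str  # How to fix it, phrased to be actionable

# Hypothetical example finding.
finding = Finding(
    description='The "Miscellaneous" category label hides accessory products.',
    screenshot="screens/catalog_nav.png",
    justification=(
        'Users looking for accessories may not think to open "Miscellaneous", '
        "which can keep them from finding products (effectiveness) and adds "
        "unnecessary steps (efficiency)."
    ),
    recommendation='Change "Miscellaneous" to "Accessories."',
)
print(finding.recommendation)
```
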

How to Write Good Recommendations


Goal of Usability Evaluations
- What do usability consultants offer stakeholders?

Writing Recommendations: Current Literature


- There is extant literature focusing on usability methods and conducting usability tests.
- Disproportionately less attention has been given to the generation of
recommendations.
- Recent works focus on providing guidelines for generating recommendations.
- E.g., Molich, Jeffries, and Dumas’ paper from 2007 ‘Making usability
recommendations useful and usable.’
- However, to date, most literature and discussions have not provided specific
recommendations on making recommendations, and for those that have, the focus has
primarily been on text-based recommendations.

Getting Started: Setting the Tone


- Severity ratings provide prioritization of usability issues.
- The wording of recommendations should reflect and reinforce the severity
ratings.
- Low Severity issues:
- “Consider” (e.g., Consider changing “Miscellaneous” to “Accessories.”)
- “May” (e.g., Removing button X may reduce user confusion when completing
task Y.)
- High Severity issues:
- Don’t use directives (e.g., You must or It is imperative)
- When appropriate: We strongly recommend...
- For most recommendations, simply state the recommendation without
qualifiers.
- E.g.:
- Change “Miscellaneous” to “Accessories.”
- Remove button X to reduce user confusion when completing task Y.
- Medium Severity issues: It’s a judgement call…
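
As a quick illustration of these tone rules, here is a minimal sketch; the helper functions and phrasing templates are assumptions layered on the guidance above, not a prescribed format:

```python
def word_low(gerund_phrase: str) -> str:
    """Low severity: hedge the recommendation ("Consider ...", "... may ...")."""
    return f"Consider {gerund_phrase}."

def word_high(imperative_phrase: str, emphatic: bool = False) -> str:
    """High severity: plain statement without qualifiers by default;
    "We strongly recommend ..." when appropriate; never a directive
    such as "You must"."""
    if emphatic:
        return f"We strongly recommend that you {imperative_phrase}."
    return f"{imperative_phrase[0].upper()}{imperative_phrase[1:]}."

print(word_low('changing "Miscellaneous" to "Accessories"'))
print(word_high("remove button X to reduce user confusion when completing task Y"))
```
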

Actionable Recommendations
- Actionable recommendations are usable by stakeholders/clients.
- Usability findings tell stakeholders what’s wrong. Recommendations tell stakeholders
how to fix the problem.
- What makes for a good, actionable recommendation?
How Bad Can Bad Recommendations Be?
- Generic or vague recommendations can be worse than omitting recommendations
altogether.
- Vague recommendations may not be actionable. As a result, the overall value of
conducting the usability test is greatly diminished, despite key usability findings.
- Implementing changes from vague recommendations can actually create more usability
issues or cause a Web site or application to become less usable!

Text-Only Recommendations
- Some actionable recommendations can be communicated effectively through text.
- Effective when used to recommend changes to:
- Labeling and terminology
- Removal of screen elements
- Messaging/descriptions
- Information hierarchy/taxonomies

When Text is Not Enough


- Some actionable recommendations may not be effectively communicated through text
only.
- Visual mockups can help illustrate recommendations when text recommendations
simply cannot provide clear direction without becoming too complex.
- E.g., "Provide white space between section A and section B to visually group
the information."
- Effective when used to recommend changes to:
- Affordance
- Visual weight/balance
- Visual grouping
- Mapping

Usability Evaluators ≠ Designers?


- Knowledge of color theory and use of expensive design tools (e.g., Adobe Photoshop)
are not required to create mockups that illustrate recommendations.
- Low-fidelity wireframes can illustrate workflows or process maps.
- “Quick and dirty” wireframes can be quickly created using free or low-cost software.
- SnagIt
- FastStone (free alternative to SnagIt)
- GIMP (free alternative to Photoshop)

Advantages of Mockups in Recommendations


- Mockups illustrate recommendations that may be difficult to describe with just words.
- E.g., describing “appropriate levels of white space” is difficult, and the
recommendation can quickly become long winded.
- Mockups help ensure multiple recommendations gel together
- Mockups help ensure that implemented recommendations do not create additional
usability problems
- For example, increasing a button’s size may shift the visual attention from other
screen elements
- Mockups have more visual impact:
- Showing how elements can be visually grouped instead of simply writing about
visual grouping can drive change. It will also make your reports easier to present
and more memorable.
- Mockups aid the developers and coders who may be in charge of implementing change
- Developers and coders are oftentimes not usability experts. Mockups reduce
both ambiguity and creative license for developers and coders when
implementing change.

Limits and Precautions of Using Mockups


- Mockups only serve to illustrate recommendations.
- Regardless of whether clients have separate designers for a product, mockups
illustrate design that is driven by usability principles and best practices.
- Mockups do not include graphic treatment.
- Client expectations and interpretations of mockups need to be clearly and
appropriately set. Mockups visually illustrate recommendations and should not
necessarily be adopted as is.
- Even though mockups can be created quickly, they do require a bit more time than text.
- Include this time when planning a project so that there is ample time to create a
compelling report.

What Makes Recommendations Great?


- Great recommendations are not developed only after a usability evaluation or test.
- Great recommendations are, in part, developed by having discussions with the
stakeholder/client to understand:
- Business needs and objectives
- Does the recommendation help achieve pre-defined success criteria (e.g.,
user registration, sales)?
- Technical constraints
- Can the recommendation be implemented within the current software
architecture?
- Implementation goals
- Does the recommendation modify existing screens (evolutionary change)
or does it define new user interactions, workflows, and/or mental models
(revolutionary change)?

Offering Tiered Recommendations


- If stakeholders/clients discuss the possibility of expanding functionality or redesigning
the application/Web site, offer two levels of recommendations:
- Short-term recommendations
- Actionable recommendations that can be immediately implemented without
requiring a change to the software architecture or new code.
- Long-term recommendations
- Actionable recommendations that would not be able to be implemented given
the current software architecture or code, but can be incorporated into a
redesign or next application rollout.

Putting it all Together


- Addressing usability issues one at a time can be easier from a to-do list perspective.
- A holistic view must be taken to ensure that all the recommendations, when
implemented, address the usability issues identified during the evaluation.
- E.g., increasing the size of all five buttons on a screen reduces the effect of
increasing any one button's prominence and visibility.
- Mockups that incorporate all recommendations made to an application or Web site’s
page can help ensure that recommendations don’t conflict or create additional usability
issues.
- Mockups also provide an excellent opportunity to show a before and after of
screens/pages to stakeholders.

Usability Evaluation in Context
