Usability Testing
*Techniques for building in usability?
* Ethnographic research
* Participatory design
* Focus group research
* Surveys
* Walkthroughs
* Open and closed card sorting
* Paper prototyping
* Expert or heuristic evaluations
* Usability testing
* Follow-up studies
*Ethnographic Research
* Qualitative method
*Participatory Design
* Users are on board with the design team, tapping users’ knowledge, skill sets, and emotional reactions to the design
*Focus Group Research
* Used at the early stages of a project to evaluate preliminary concepts with representative users (brainstorming sessions)
*Surveys
* Used at the initial stage of product implementation
* Quantitative data
*Walkthroughs
* Often used at the prototype stage
*Paper Prototyping
* Used to design “findability” of content or functionality
*Expert or Heuristic Evaluations
* Review of a product or a system, usually by an expert
* “Double specialist”
*Usability Testing
* Techniques to collect empirical data while observing end users using the product
Usability Testing
*Why testing? (goals)
* Informing design
* Improving profitability
*Basics of methodology (classical approach)
* Hypothesis must be formulated
* Randomly chosen participants must be assigned to experimental conditions
* Tight controls
* Control groups
* Sample must be of sufficient size to measure statistically significant differences (see the sketch below)
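To make the classical approach concrete, here is a minimal sketch of how data from such an experiment might be analysed, assuming Python with SciPy, a hypothetical task-completion-time metric in seconds, and a conventional significance level of 0.05; the numbers are invented for illustration.

```python
# Classical approach sketch: two randomly assigned groups (control vs.
# experimental design), a formulated hypothesis, and a significance test.
# The completion times below are hypothetical.
from scipy import stats

control = [74, 81, 69, 90, 77, 85, 72, 88, 79, 83]       # current design
experimental = [61, 70, 58, 75, 66, 72, 63, 69, 71, 64]  # redesigned interface

# H0: mean completion time is the same for both designs.
# H1: the redesigned interface changes mean completion time.
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference is statistically significant at alpha = 0.05")
else:
    print("No statistically significant difference; consider a larger sample")
```

With samples this small, a non-significant result is common even when a real difference exists; in practice a power analysis would be used before the test to decide how large the sample must be.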
*Basic Elements of Usability Testing
* Development of research questions or test objectives rather than a hypothesis
*Limitations
* Testing is always an artificial situation
*Types of test
* Exploratory (formative)
* Assessment (summative)
* Validation (verification)
*Exploratory or Formative Study
* When: quite early in the development cycle
* Objective: to examine the effectiveness of the preliminary design
* Example: a Web designer interested in identifying how well the interface:
* Supports users’ tasks within a goal
* Communicates the intended workflow
* Allows the user to navigate from screen to screen and within a screen
*Exploratory or Formative Study (II)
* Task-oriented user perspective:
* Overall organization of subject matter
* Whether to use a graphic or verbal approach
* How well the proposed format supports findability
* Anticipated points of assistance and messaging
* How to address reference information
*Exploratory or Formative Study (III)
* Typical user-oriented questions:
* What do users conceive and think about using the product?
* Does the product’s basic functionality have value to the user?
* How easily do users make inferences about how to use this user interface, based on their previous experience?
* What type of prerequisite information does a person need to use the product?
* Which functions of the product are “walk up and use” and which will probably require either help or written documentation?
* How should the table of contents be organized to accommodate both novice and experienced users?
*Exploratory or Formative Study (IV)
* Overview of methodology
* Prototypes (horizontal representation)
* Walkthroughs (vertical representation)
*Assessment or Summative Test
* When: either early or midway into the product development cycle, usually after the fundamental or high-level design.
* Objective: to expand the findings of the exploratory test by evaluating the usability of lower-level operations and aspects of the product.
* Example: how well a user can actually perform a full-blown, realistic task.
*Assessment or Summative Test (II)
* Methodology:
* The user will always perform tasks rather than simply walking through and commenting upon screens, pages, and so on.
* The test moderator will lessen his or her interaction with the participant because there is less emphasis on thought processes and more on actual behaviors.
* Quantitative measures will be collected (see the sketch below).
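As an illustration of the quantitative measures mentioned above, the following minimal sketch computes common assessment-test metrics (task success rate, mean time on task, mean errors) from hypothetical session records; the field names and data are assumptions, not a prescribed logging format.

```python
# Hypothetical per-participant session records for one assessment-test task.
from statistics import mean

sessions = [
    {"participant": "P1", "completed": True,  "time_s": 142, "errors": 1},
    {"participant": "P2", "completed": True,  "time_s": 171, "errors": 0},
    {"participant": "P3", "completed": False, "time_s": 300, "errors": 4},
    {"participant": "P4", "completed": True,  "time_s": 158, "errors": 2},
]

completed = [s for s in sessions if s["completed"]]
success_rate = len(completed) / len(sessions)
mean_time = mean(s["time_s"] for s in completed)   # time on task, successful attempts only
mean_errors = mean(s["errors"] for s in sessions)  # errors across all participants

print(f"Task success rate: {success_rate:.0%}")
print(f"Mean time on task (successful attempts): {mean_time:.0f} s")
print(f"Mean errors per participant: {mean_errors:.1f}")
```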
*Validation or Verification Test
* When: late in the development cycle
*Validation or Verification Test (II)
* Methodology
* Prior to the test, standards or benchmarks for the tasks are either developed or identified.
* Participants are given tasks to perform with either very little or no interaction with the test moderator.
* Collection of quantitative data is the central focus, although reasons for substandard performance are identified (see the sketch below).
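A minimal sketch of the benchmark comparison described above, assuming hypothetical benchmark values and measured results; real validation criteria would come from the product’s own requirements, fixed before the test is run.

```python
# Benchmarks agreed on before the validation test (hypothetical values).
benchmarks = {
    "min_success_rate": 0.90,  # at least 90% of participants complete the task
    "max_mean_time_s": 120,    # mean completion time of at most 2 minutes
    "max_mean_errors": 1.0,    # at most 1 error per participant on average
}

# Measured results from the test sessions (hypothetical values).
measured = {"success_rate": 0.85, "mean_time_s": 104, "mean_errors": 0.7}

checks = {
    "success_rate": measured["success_rate"] >= benchmarks["min_success_rate"],
    "mean_time_s": measured["mean_time_s"] <= benchmarks["max_mean_time_s"],
    "mean_errors": measured["mean_errors"] <= benchmarks["max_mean_errors"],
}

for metric, passed in checks.items():
    status = "PASS" if passed else "FAIL (investigate reasons for substandard performance)"
    print(f"{metric}: {status}")
```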
*Comparison Test
* When: not associated with any specific point in the product life cycle
* Methodology
* The design team is forced to stretch its conceptions of what will work rather than just continuing along in a predictable pattern.
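For a comparison test, the analysis typically contrasts the same measure across two or more design alternatives. The sketch below assumes hypothetical 1-to-7 ease-of-use ratings for two designs and uses a Mann-Whitney U test from SciPy because such ratings are ordinal; it is one possible analysis, not the only one, and qualitative observations still explain why one alternative worked better.

```python
# Hypothetical post-task ease-of-use ratings (1 = very hard, 7 = very easy)
# gathered from two groups, each using one design alternative.
from scipy import stats

design_a = [5, 6, 4, 5, 7, 6, 5, 4, 6, 5]
design_b = [3, 4, 5, 3, 4, 2, 4, 3, 5, 4]

u_stat, p_value = stats.mannwhitneyu(design_a, design_b, alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value:.4f}")
# A small p-value suggests participants rated one design reliably higher.
```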