
CHAPTER 7 – FORMATIVE EVALUATION IN HCI

Evaluation is the process of determining the value or worth of a program, course, or other
initiative, toward the ultimate goal of making decisions about adopting, rejecting, or revising
the innovation.
The aim of evaluation is to test the functionality and usability of the design and to identify and
rectify any problems. A design can be evaluated before any implementation work has started,
to minimize the cost of early design errors. Query techniques, such as interviews and
questionnaires, provide subjective information from the user.
GOALS

 Assess the extent of system functionality
 Assess the effect of the interface on the user
 Identify specific problems

Assessing the extent of system functionality involves making the appropriate functionality
available within the system and making it clearly reachable by the user in terms of the actions
required.

Every evaluation will have:


 Users with a given level of experience
 The type of tasks to be undertaken
 A system
 An environment

Categories of evaluation

i. Formative Evaluation
Takes place before implementation in order to influence the product that will be produced.
Formative evaluation is a type of usability evaluation that helps to "form" the design for a
product or service. Formative evaluations involve evaluating a product or service during
development, often iteratively, with the goal of detecting and eliminating usability problems.

One important aspect of formative evaluation is that the audience for the observations and
recommendations is the project team itself; the findings are used immediately to improve the
design of the product or service and to refine the development specifications. Results can be less
formal than in summative evaluation, as suits the needs of designers, developers, project
managers, and other project participants.

Heuristic evaluation, user interface inspections, thinking-aloud testing, pluralistic usability
walkthrough, and cognitive walkthrough are some methods that can be used for formative
evaluation.

ii. Summative evaluation


Takes place after implementation with the aim of testing the proper functioning of the final
system.

Can only be started when a design is reasonably complete and involves judging the design
against quantitative goals or competitive products.

Evaluation Methods:
Designers may fail to evaluate adequately because they become entranced with their creations.
• Experienced designers know that extensive testing is necessary.
• Many factors influence the evaluation plan: the stage of design, the novelty of the project, the
expected number of users, the criticality of the interface, the time available for evaluation, and
the experience of the design team.
• Evaluations might range from a two-year study to a few days of testing.
• The cost might range from 1% to 10% of a project budget.
• Customers increasingly expect usability.
FORMATIVE EVALUATION METHODS

A) USER INTERFACE INSPECTIONS


Five commonly used user interface inspection methods are:
i. Heuristic evaluation - Heuristic evaluation is perhaps the best-known inspection
method, requiring a group of evaluators to review a product against a set of general
principles.

ii. Perspective-based user interface inspection - The perspective-based user interface
inspection is based on the principle that different perspectives (viewpoints) will find
different problems in a user interface.
iii. Cognitive walkthrough- The cognitive walkthrough is a usability evaluation method in
which one or more evaluators work through a series of tasks and ask a set of questions
from the perspective of the user. The focus of the cognitive walkthrough is on
understanding the system's learnability for new or infrequent users.
iv. Pluralistic walkthrough- The pluralistic walkthrough (also called a participatory
design review, user-centered walkthrough, storyboarding, table-topping, or
group walkthrough) is a usability inspection method used to identify usability issues in a
piece of software or website in an effort to create a maximally usable human-computer
interface.
v. Formal usability inspections- Formal usability inspections are structured activities
with defined steps and trained inspectors. This method is most appropriate for more
complex software where product teams want to track usability defects and establish a
process to detect and eliminate major usability bugs.
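
Since formal usability inspections emphasize tracking usability defects, the following is a minimal sketch (in Python) of how findings from a heuristic evaluation or formal inspection might be logged; the heuristic names follow Nielsen's well-known set, but the record fields and severity scale are illustrative assumptions rather than part of any standard tool.

```python
# Minimal sketch of a usability-defect log for an inspection.
# Field names and the 1-4 severity scale are illustrative assumptions.
from dataclasses import dataclass
from collections import Counter

# A few Nielsen-style heuristics evaluators might review against.
HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
]

@dataclass
class Finding:
    screen: str        # where the problem was observed
    heuristic: str     # which principle it violates
    description: str   # what the evaluator saw
    severity: int      # 1 = cosmetic ... 4 = usability catastrophe

def summarize(findings: list[Finding]) -> Counter:
    """Count findings per severity so the team can prioritize fixes."""
    return Counter(f.severity for f in findings)

log = [
    Finding("Checkout", HEURISTICS[0],
            "No progress indicator while payment is processed", 3),
    Finding("Search", HEURISTICS[3],
            "Filter icon differs from the rest of the product", 1),
]
print(summarize(log))  # e.g. Counter({3: 1, 1: 1})
```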

B) THINKING ALOUD TESTING

A direct observation method of user testing that involves asking users to think out loud as they
are performing a task. Users are asked to say whatever they are looking at, thinking, doing, and
feeling at each moment. This method is especially helpful for determining users' expectations
and identifying what aspects of a system are confusing.
In a thinking aloud test, you ask test participants to use the system while continuously thinking
out loud, that is, simply verbalizing their thoughts as they move through the user interface. The
method aims to show how people interact with products and why they use products in the exact
way they do. The main assumption behind the method is that people are able to give an accurate
description and explanation of their actions when they speak about them.
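
As an illustration only, the sketch below shows one way a note-taker might capture timestamped observations from a thinking aloud session and pull out the moments of confusion; the record fields and category labels are assumptions made for this example, not a standard instrument.

```python
# Illustrative sketch: logging observations from a thinking aloud session.
# Field names and category labels ("expectation", "confusion", ...) are assumptions.
from dataclasses import dataclass

@dataclass
class Observation:
    minute: float   # minutes into the session
    task: str       # task the participant was attempting
    quote: str      # what the participant said
    category: str   # e.g. "expectation", "confusion", "positive"

def confusions(log: list[Observation]) -> list[Observation]:
    """Moments of confusion usually point at the unclear parts of the interface."""
    return [o for o in log if o.category == "confusion"]

session = [
    Observation(2.5, "Create account", "I expected the button to be at the top", "expectation"),
    Observation(4.0, "Create account", "Why is it asking for my phone number again?", "confusion"),
]
for o in confusions(session):
    print(f"[{o.minute:>4} min] {o.task}: {o.quote}")
```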

C) PLURALISTIC USABILITY WALKTHROUGHS

A usability test method employed to generate early design evaluation by assigning a group of
users a series of paper-based tasks that represent the proposed product interface and including
participation from developers of that interface.

A systematic group evaluation of a design in which usability practitioners serving as
walkthrough administrators guide users through tasks simulated on hard-copy panels and
facilitate feedback about those tasks while developers and other members of the product team
address concerns or questions about the interface.

 Usually conducted early in the development cycle or when production time is limited.
 Appropriate for a group of 6-10 representative users.

Procedure

 The walkthrough administrator must prepare the product designers and developers in
advance of the walkthrough. They need to be instructed to be thick-skinned, and to treat
all user comments with positive regard. It helps to tell the product developers that we will
NOT be making a recommendation for a product change in response to every user
comment; we will be filtering all their comments through our design sense (that of the
development team and the usability professional).
 Participants are presented with instructions and rules, in addition to task and scenario
descriptions.
 Walkthrough administrator asks participants to write on the hard copy of the first panel
the actions they would take in attempting the specified task.
 After all participants have written their responses, the walkthrough administrator (or a
developer) announces the answer.
 The users verbalize their responses and discuss potential usability problems, while the
product developers remain quiet and the usability professionals facilitate the discussion
among the users.
 As the discussion winds down, the developers are invited to join in, often with an
explanation of why the design was the way it was.
 After each task, the participants are given a brief questionnaire regarding the usability of
the interface.
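
Because every task ends with a brief questionnaire, a simple way to make the group's feedback actionable is to tally the ratings across the 6-10 participants. The sketch below assumes a 1-5 ease-of-use scale and invented task names purely for illustration.

```python
# Minimal sketch: averaging post-task questionnaire ratings from a pluralistic
# walkthrough. The 1-5 scale and the task names are illustrative assumptions.
from statistics import mean

# ratings[task] = each participant's answer to "How easy was this task?" (1 = hard, 5 = easy)
ratings = {
    "Find a product":       [4, 5, 3, 4, 4, 5],
    "Add item to cart":     [2, 3, 2, 1, 3, 2],
    "Check out as a guest": [3, 4, 3, 3, 2, 4],
}

for task, scores in ratings.items():
    flag = "  <- review with developers" if mean(scores) < 3 else ""
    print(f"{task:<22} mean = {mean(scores):.1f}{flag}")
```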

Participants and Other Stakeholders

 Usability practitioner serves as walkthrough administrator, introducing tasks and encouraging
group discussion about the design.
 Product developers (designers, engineers, etc.) answer questions about design and suggest
solutions to interface problems users have encountered.
 Other members of the product team who are involved in making decisions that affect the user
interface.
 Users representative of the target audience are the primary participants. Theirs are the data we
are most interested in.
Materials Needed

 Printed screen-shots put together in packets in the same order in which the screens would be
encountered as users carry out a particular task.
 Writing utensils for marking up the screen-shots and filling out questionnaires after each task.
 Room large enough to accommodate 6-10 users and a similar or smaller number of developers.
Who Can Facilitate

The walkthrough administrator (generally an experienced usability practitioner) must be able to
guide users through tasks and facilitate collaboration between users and developers. It is usually
best to avoid having a product developer/designer do it, as they tend to get defensive.

Common Problems

Suitable for 6-10 representative users and products with linear tasks.
All users must complete each task before the discussion and the next task can begin, potentially
affecting participants' understanding of the design flow.

D) COGNITIVE WALKTHROUGH

The cognitive walkthrough is a usability evaluation method in which one or more evaluators
work through a series of tasks and ask a set of questions from the perspective of the user.

The focus of the cognitive walkthrough is on understanding the system's learnability for new or
infrequent users. The cognitive walkthrough was originally designed as a tool to evaluate
walk-up-and-use systems like postal kiosks, automated teller machines (ATMs), and interactive
exhibits in museums where users would have little or no training. However, the cognitive
walkthrough has been employed successfully with more complex systems like CAD software
and software development tools to understand the first experience of new users.

Materials Needed

 A representation of the user interface
 A user profile or persona
 A task list that includes all the tasks that you will use in the walkthrough, as well as an action
sequence that details the specific task flow from beginning to end
 A problem reporting form and cards for listing design ideas for later use
Who Should Be Involved?

The cognitive walkthrough can be conducted by an individual or group. In a group evaluation,
the important roles are:

 Facilitator: The facilitator is generally the organizer and is responsible for making sure that the
walkthrough team is prepared for the session and follows the ground rules for the walkthrough.
 Evaluators: Representatives from the product team. These representatives could be usability
practitioners, requirements engineers, business analysts, developers, writers, and trainers.
 Notetaker: The notetaker records the output of the cognitive walkthrough.
 Product expert: Since the cognitive walkthrough can be conducted early in the design stage
(after requirements and a functional specification, for example), a product expert is desirable to
answer questions that members of the walkthrough team may have about the system's features or
feedback.
 Domain experts: A domain expert is often, but not always a product expert. For example, if you
were evaluating a complex engineering tool, you might include a domain expert in addition to
product experts.
Procedure

1. Define the users of the product and conduct a context of use analysis.
2. Determine what tasks and task variants are most appropriate for the walkthrough.
3. Assemble a group of evaluators (you can also perform an individual cognitive
walkthrough).
4. Develop the ground rules for the walkthrough. Some ground rules you might consider
are:
o No discussions about ways to redesign the interface during the walkthrough.
o Designers and developers will not defend their designs.
o Participants are not to engage in Twittering, checking emails, or other behaviors that
would distract from the evaluation.
o The facilitator will remind everyone of the ground rules and note infractions during the
walkthrough.
5. Conduct the actual walkthrough
A. Provide a representation of the interface to the evaluators.
B. Walk through the action sequences for each task from the perspective of the
"typical" users of the product. For each step in the sequence, see if you can tell a
credible story based on the following questions (Wharton, Rieman, Lewis, &
Polson, 1994, p. 106):
a. Will the user try to achieve the right effect?
b. Will the user notice that the correct action is available?
c. Will the user associate the correct action with the effect that the user is
trying to achieve?
d. If the correct action is performed, will the user see that progress is being
made toward the solution of the task?
C. Record success stories, failure stories, design suggestions, and problems that were
not the direct output of the walkthrough, assumptions about users, comments
about the tasks, and other information that may be useful in design. Use a
standard form for this process; a minimal sketch of such a form follows this procedure.
6. Bring all the analysts together to develop a shared understanding of the identified
strengths and weaknesses.
7. Brainstorm on potential solutions to any problems identified.
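
The four questions in step 5B lend themselves to a simple per-step record. The sketch below is one hypothetical way to structure the notetaker's form in code; the class and field names are chosen for this example only and do not come from the walkthrough literature.

```python
# Hypothetical structure for the notetaker's form in a cognitive walkthrough:
# one record per action step, answering the four questions from step 5B.
from dataclasses import dataclass

QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect?",
    "If the correct action is performed, will the user see progress?",
)

@dataclass
class StepRecord:
    task: str
    action: str
    answers: tuple[bool, bool, bool, bool]  # one credible-story answer per question
    notes: str = ""

    def failure_story(self) -> list[str]:
        """Return the questions that could not be answered with a credible 'yes'."""
        return [q for q, ok in zip(QUESTIONS, self.answers) if not ok]

step = StepRecord(
    task="Buy a stamp at the kiosk",
    action="Press the unlabeled green button to confirm payment",
    answers=(True, False, False, True),
    notes="Button has no label; users may not connect it with 'confirm'.",
)
print(step.failure_story())
```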
Common Problems

The cognitive walkthrough does not provide much guidance about choosing tasks that represent
what real users will do (Jeffries, Miller, Wharton, & Uyeda, 1991). The 1994 practitioner guide
suggests that tasks be chosen on the basis of market studies, needs analysis, and requirements,
which are all second-hand sources of information. Wharton, Bradford, Jeffries, and Franzke
(1992, p. 387) made some specific recommendations regarding tasks:

 Start with a simple task and move to more complex tasks.
 Consider how many tasks you can complete in a single walkthrough session. A common theme
in the research and case study literature is that only a few tasks can be examined in any cognitive
walkthrough session. A recommendation is to consider evaluating 1-4 tasks in any given session,
depending on complexity.
 Choose realistic tasks that include core features of the product. Core features are ones that are
fundamental to the product and used across different tasks.
 Consider tasks that involve multiple core features so you can get input on transitions among the
core features.

Solutions from the cognitive walkthrough may be suboptimal. The cognitive walkthrough
emphasizes solutions for specific problems encountered in the action sequence of a task, but does
not deal with more general or higher-level solutions that might be applicable across different
tasks.

Analyses tend to draw attention to superficial aspects of design (such as labels and verbiage)
rather than deep aspects such as the appropriateness of the task structures and ease of error
recovery.
Benefits, Advantages and Disadvantages

Advantages

 May be done without first-hand access to users.
 Unlike some usability inspection methods, takes explicit account of the user's task.
 Provides suggestions on how to improve the learnability of the system.
 Can be applied during any phase of development.
 Is quick and inexpensive to apply if done in a streamlined form.
Disadvantages

 The value of the data is limited by the skills of the evaluators.
 Tends to yield a relatively superficial and narrow analysis that focuses on the words and graphics
used on the screen.
 The method does not provide an estimate of the frequency or severity of identified problems.
 Following the method exactly as outlined in the research is labor-intensive.
