Chapter 7 - Formative Evaluation
Evaluation is the process of determining the value or worth of a program, course, or other initiative, toward the ultimate goal of making decisions about adopting, rejecting, or revising the innovation.
The aim of evaluation is to test the functionality and usability of a design and to identify and rectify any problems. A design can be evaluated before any implementation work has started, which minimizes the cost of early design errors. Query techniques, such as interviews and questionnaires, provide subjective information from the user.
Categories of evaluation
i. Formative Evaluation
Formative evaluation takes place before implementation in order to influence the product that will be produced. It is a type of usability evaluation that helps to "form" the design of a product or service. Formative evaluations involve evaluating a product or service during development, often iteratively, with the goal of detecting and eliminating usability problems. One important aspect of formative evaluation is that the audience for the observations and recommendations is the project team itself, which uses them to immediately improve the design of the product or service and to refine the development specifications. Results can be reported less formally than in summative evaluation, as suits the needs of designers, developers, project managers, and other project participants.
ii. Summative Evaluation
Can only be started when a design is reasonably complete, and involves judging the design against quantitative goals or competitive products.
Evaluation Methods:
Designers may fail to evaluate adequately because they are entranced with their creations.
• Experienced designers know that extensive testing is necessary.
• Many factors influence the evaluation plan: stage of design, novelty of the project, expected number of users, criticality of the interface, time available for evaluation, and experience of the design team.
• Evaluations might range from a few days to two years.
• The range of costs might be 1% to 10% of a project budget.
• Customers increasingly expect usability.
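The 1%-10% rule of thumb above can be illustrated with a small sketch. The project budget used here is purely hypothetical; only the percentage range comes from the text.

```python
# Illustration of the rule of thumb that evaluation may cost
# between 1% and 10% of the overall project budget.
def evaluation_budget_range(project_budget, low=0.01, high=0.10):
    """Return the (low, high) evaluation budget for a given project budget."""
    return project_budget * low, project_budget * high

# Hypothetical $500,000 project.
low, high = evaluation_budget_range(500_000)
print(f"Evaluation budget: ${low:,.0f} - ${high:,.0f}")
```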
FORMATIVE EVALUATION METHODS
THINKING ALOUD
A direct observation method of user testing that involves asking users to think out loud as they perform a task. Users are asked to say whatever they are looking at, thinking, doing, and feeling at each moment. This method is especially helpful for determining users' expectations and identifying which aspects of a system are confusing.
In a thinking-aloud test, you ask test participants to use the system while continuously thinking out loud, that is, simply verbalizing their thoughts as they move through the user interface. The method aims to show how people interact with products and why they use products in the exact way they do. The main assumption behind the method is that people are able to give an accurate description and explanation of their actions when they speak about them.
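One common way to work with think-aloud data is to code each verbalization into a category and then tally the codes per interface area to locate confusing spots. The coding scheme, interface areas, and utterances below are all hypothetical; this is only a minimal sketch of the analysis step, not a prescribed method.

```python
from collections import Counter

# Hypothetical coded transcript from one think-aloud session: each entry is
# (interface area, code). The coding scheme is an assumption for illustration.
coded_utterances = [
    ("search box", "expectation"),
    ("search box", "confusion"),
    ("results page", "confusion"),
    ("results page", "action description"),
    ("checkout", "confusion"),
]

# Count confusion remarks per interface area to find the most confusing spots.
confusion_by_area = Counter(
    area for area, code in coded_utterances if code == "confusion"
)

for area, count in confusion_by_area.most_common():
    print(f"{area}: {count} confusion remark(s)")
```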
PLURALISTIC WALKTHROUGH
A usability test method employed to generate early design evaluation by assigning a group of users a series of paper-based tasks that represent the proposed product interface, with participation from the developers of that interface. It is usually conducted early in the development cycle, or when production time is limited, and is appropriate for a group of 6-10 representative users.
Procedure
The walkthrough administrator must prepare the product designers and developers in
advance of the walkthrough. They need to be instructed to be thick-skinned, and to treat
all user comments with positive regard. It helps to tell the product developers that we will
NOT be making a recommendation for a product change in response to every user
comment; we will be filtering all their comments through our design sense (that of the
development team and the usability professional).
Participants are presented with instructions and rules, in addition to task and scenario
descriptions.
The walkthrough administrator asks participants to write on the hard copy of the first panel the actions they would take in attempting the specified task.
After all participants have written their responses, the walkthrough administrator (or a
developer) announces the answer.
The users verbalize their responses and discuss potential usability problems, while the
product developers remain quiet and the usability professionals facilitate the discussion
among the users.
As the discussion winds down, the developers are invited to join in, often with an
explanation of why the design was the way it was.
After each task, the participants are given a brief questionnaire regarding the usability of
the interface.
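The per-task questionnaires mentioned above are typically summarized across the 6-10 participants. The sketch below assumes a 1-5 ease-of-use rating scale and invents both the tasks and the scores purely for illustration.

```python
from statistics import mean, median

# Hypothetical post-task ratings (1 = very hard, 5 = very easy) collected
# from six walkthrough participants; scale and values are assumptions.
ratings = {
    "Task 1: create account": [4, 5, 3, 4, 4, 5],
    "Task 2: find order status": [2, 3, 2, 1, 3, 2],
}

# Summarize each task so the team can spot tasks that rated poorly.
for task, scores in ratings.items():
    print(f"{task}: mean={mean(scores):.2f}, median={median(scores)}")
```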
Materials Needed
Printed screenshots put together in packets, in the same order in which the screens would be encountered as users carry out a particular task.
Writing utensils for marking up the screenshots and filling out questionnaires after each task.
A room large enough to accommodate 6-10 users and a similar or smaller number of developers.
Who Can Facilitate
Common Problems
Suitable only for 6-10 representative users and for products with linear tasks.
All users must complete each task before the discussion and the next task can begin, potentially affecting participants' understanding of the design flow.
D) COGNITIVE WALKTHROUGH
The cognitive walkthrough is a usability evaluation method in which one or more evaluators
work through a series of tasks and ask a set of questions from the perspective of the user.
Roles
Facilitator: The facilitator is generally the organizer and is responsible for making sure that the
walkthrough team is prepared for the session and follows the ground rules for the walkthrough.
Evaluators: Representatives from the product team. These representatives could be usability
practitioners, requirements engineers, business analysts, developers, writers, and trainers.
Notetaker: The notetaker records the output of the cognitive walkthrough.
Product expert: Since the cognitive walkthrough can be conducted early in the design stage (after requirements and a functional specification, for example), it is desirable to have a product expert on hand to answer questions that members of the walkthrough team may have about the system's features or feedback.
Domain experts: A domain expert is often, but not always, a product expert. For example, if you
were evaluating a complex engineering tool, you might include a domain expert in addition to
product experts.
Procedure
1. Define the users of the product and conduct a context of use analysis.
2. Determine what tasks and task variants are most appropriate for the walkthrough.
3. Assemble a group of evaluators (you can also perform an individual cognitive
walkthrough).
4. Develop the ground rules for the walkthrough. Some ground rules you might consider
are:
o No discussions about ways to redesign the interface during the walkthrough.
o Designers and developers will not defend their designs.
o Participants are not to engage in Twittering, checking emails, or other behaviors that
would distract from the evaluation.
o The facilitator will remind everyone of the ground rules and note infractions during the
walkthrough.
5. Conduct the actual walkthrough:
A. Provide a representation of the interface to the evaluators.
B. Walk through the action sequences for each task from the perspective of the
"typical" users of the product. For each step in the sequence, see if you can tell a
credible story based on the following questions (Wharton, Rieman, Lewis, &
Polson, 1994, p. 106):
a. Will the user try to achieve the right effect?
b. Will the user notice that the correct action is available?
c. Will the user associate the correct action with the effect that the user is
trying to achieve?
d. If the correct action is performed, will the user see that progress is being
made toward the solution of the task?
C. Record success stories, failure stories, design suggestions, and problems that were
not the direct output of the walkthrough, assumptions about users, comments
about the tasks, and other information that may be useful in design. Use a
standard form for this process.
6. Bring all the analysts together to develop a shared understanding of the identified
strengths and weaknesses.
7. Brainstorm on potential solutions to any problems identified.
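The "standard form" of step 5C can be sketched as a small data structure that records, for each action step, the answers to the four Wharton et al. (1994) questions plus notes. The class name, field names, and the example step are all assumptions made for illustration; only the four questions come from the source.

```python
from dataclasses import dataclass

# The four questions from Wharton, Rieman, Lewis, & Polson (1994),
# asked at every step of the action sequence.
QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect?",
    "If the correct action is performed, will the user see progress?",
)

@dataclass
class StepRecord:
    """One action step in the walkthrough (hypothetical form layout)."""
    action: str
    answers: tuple  # one bool per question: True = credible success story
    notes: str = ""

    def failures(self):
        """Return the questions that could not be answered 'yes'."""
        return [q for q, ok in zip(QUESTIONS, self.answers) if not ok]

# Hypothetical step: the user must click an unlabeled gear icon.
step = StepRecord(
    action="Open the settings panel via the gear icon",
    answers=(True, False, False, True),
    notes="Icon has no label; meaning unclear to first-time users.",
)
for failure in step.failures():
    print("Failure story:", failure)
```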
Common Problems
The cognitive walkthrough does not provide much guidance about choosing tasks that represent what real users will do (Jeffries, Miller, Wharton, & Uyeda, 1991). The 1994 practitioner guide suggests that tasks be chosen on the basis of market studies, needs analysis, and requirements, which are all second-hand sources of information. Wharton, Bradford, Jeffries, and Franzke (1992, p. 387) made some specific recommendations regarding task selection.
Solutions from the cognitive walkthrough may be suboptimal. The cognitive walkthrough
emphasizes solutions for specific problems encountered in the action sequence of a task, but does
not deal with more general or higher-level solutions that might be applicable across different
tasks.
Analyses tend to draw attention to superficial aspects of design (such as labels and verbiage)
rather than deep aspects such as the appropriateness of the task structures and ease of error
recovery.
Benefits, Advantages and Disadvantages
Advantages