Human-Computer Interaction (HCI) : Evaluation Techniques
Lecture 4:
➢ EVALUATION TECHNIQUES
Goals of Evaluation
1. Assess the extent and accessibility of the system’s functionality: the design of the system should enable users to perform their intended tasks more easily.
2. Assess users’ experience of the interaction: how easy the system is to learn and use, and how satisfied users are with it.
3. Identify any specific problems with the system: This is, of course, related to
both the functionality and usability of the design (depending on the cause of the
problem). However, it is specifically concerned with identifying trouble-spots
which can then be rectified.
➢ EVALUATION THROUGH EXPERT ANALYSIS
There are several approaches to evaluation through expert analysis. Two will be considered:
cognitive walkthrough and heuristic evaluation.
A. Cognitive walkthrough
Cognitive walkthrough is an expert-based technique in which the evaluator steps through the actions required to complete a task, considering at each step whether a typical user would know what to do. To carry out a walkthrough, you need:
1. A specification or prototype of the system. It does not have to be complete, but it should be fairly detailed.
2. A description of the task the user is to perform on the system. This should be a representative task that most users will want to do.
3. A complete, written list of the actions needed to complete the task with the proposed
system.
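To make the written action list concrete, here is a minimal sketch (a hypothetical Python structure and example task, not part of the lecture) of how an evaluator might record each action together with one common formulation of the walkthrough questions asked at every step.

```python
# Sketch of a cognitive walkthrough record (hypothetical names and task).
# For each action in the written action list, the evaluator answers the
# walkthrough questions and notes any likely user problems.
from dataclasses import dataclass, field

WALKTHROUGH_QUESTIONS = [
    "Is the effect of the action the same as the user's goal at that point?",
    "Will users see that the action is available?",
    "Once users have found the action, will they know it is the right one?",
    "After the action is taken, will users understand the feedback they get?",
]

@dataclass
class ActionStep:
    description: str                             # e.g. "Press the 'timed record' button"
    answers: dict = field(default_factory=dict)  # question -> evaluator's judgement

task = "Program the video recorder to record a show"
steps = [ActionStep("Press the 'timed record' button"),
         ActionStep("Enter the start time with the numeric keypad")]

for step in steps:
    for q in WALKTHROUGH_QUESTIONS:
        step.answers[q] = "yes"  # in practice, a yes/no plus explanatory notes

print(f"Walkthrough of task: {task}")
for i, step in enumerate(steps, 1):
    print(f"{i}. {step.description} - {len(step.answers)} questions answered")
```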
B. Heuristic evaluation
The general idea behind heuristic evaluation is that several evaluators independently
critique a system to come up with potential usability problems. Heuristic evaluation,
developed by Jakob Nielsen and Rolf Molich, is a method for structuring the critique of
a system using a set of relatively simple and general heuristics. A set of 10 heuristics
is provided.
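As an illustration of how the independent critiques are pooled, the following sketch (with hypothetical evaluator reports and problem descriptions) counts how often each heuristic is violated across evaluators and how many distinct problems the group found.

```python
# Sketch: aggregating independent heuristic-evaluation reports (made-up data).
# Each evaluator independently lists problems tagged with the heuristic violated;
# pooling the reports shows which heuristics are violated most often.
from collections import Counter

reports = {
    "evaluator_1": [("visibility of system status", "no progress bar on upload"),
                    ("error prevention", "delete has no confirmation")],
    "evaluator_2": [("error prevention", "delete has no confirmation"),
                    ("consistency and standards", "two differently labelled OK buttons")],
    "evaluator_3": [("visibility of system status", "no progress bar on upload")],
}

violations = Counter(h for probs in reports.values() for h, _ in probs)
unique_problems = {p for probs in reports.values() for p in probs}

for heuristic, count in violations.most_common():
    print(f"{count} report(s): {heuristic}")
print(f"{len(unique_problems)} distinct problems found by {len(reports)} evaluators")
```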
➢ EVALUATION THROUGH USER PARTICIPATION
User participation in evaluation tends to occur in the later stages of development when there
is at least a working prototype of the system in place. This may range from a simulation of
the system’s interactive capabilities to a fully implemented system.
A. Styles of Evaluation
Before we consider some of the techniques that are available for evaluation with users,
we will distinguish between two evaluation styles: those performed under laboratory
conditions and those conducted in the work environment or ‘in the field’.
Laboratory studies: In the first type of evaluation studies, users are taken out of their normal
work environment to take part in controlled tests, often in a specialist usability laboratory.
Field studies: The second type of evaluation takes the designer or evaluator out into the user’s
work environment in order to observe the system in action.
B. Experimental evaluation
Any experiment has the same basic form. The evaluator chooses a hypothesis to test, which
can be determined by measuring some attribute of participant behaviour. A number of
experimental conditions are considered which differ only in the values of certain controlled variables.
Any changes in the behavioural measures are attributed to the different conditions.
− Participants
The choice of participants is vital to the success of any experiment. In evaluation
experiments, participants should be chosen to match the expected user population as
closely as possible.
− Variables
There are two main types of variable: those that are ‘manipulated’ or changed (known
as the independent variables) and those that are measured (the dependent variables).
The dependent variable must be measurable in some way, it must be affected by the
independent variable and, as far as possible, unaffected by other factors. Common
choices of dependent variable in evaluation experiments are the time taken to complete
a task, the number of errors made, user preference and the quality of the user’s performance.
− Hypotheses
A hypothesis is a prediction of the outcome of an experiment. It is framed in terms of
the independent and dependent variables, stating that a variation in the independent
variable will cause a difference in the dependent variable. The aim of the experiment
is to show that this prediction is correct.
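To make this experimental form concrete, the sketch below uses fabricated completion times to compare a dependent variable (time to complete a task) across two interface conditions (the independent variable) with an independent-samples t-test from SciPy; a sufficiently small p-value lets us reject the null hypothesis that the conditions do not differ.

```python
# Sketch: testing an experimental hypothesis on fabricated example data.
# Independent variable: interface condition (menu vs. command line).
# Dependent variable: time (seconds) to complete the same task.
from scipy import stats

times_menu    = [34.1, 29.8, 31.5, 36.2, 30.9, 33.4]  # condition A
times_command = [41.0, 38.7, 44.2, 39.5, 42.8, 40.1]  # condition B

t, p = stats.ttest_ind(times_menu, times_command)
print(f"t = {t:.2f}, p = {p:.4f}")
if p < 0.05:
    print("Reject the null hypothesis: the conditions differ.")
else:
    print("No significant difference detected between conditions.")
```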
C. Observational techniques
A popular way to gather information about actual use of a system is to observe users inter-
acting with it.
Think aloud and cooperative evaluation: in think aloud, the user is observed performing
a task and is asked to describe what he is doing and why. It has the advantage of
simplicity. A variation on think aloud is known as cooperative evaluation, in which the
user is encouraged to see himself as a collaborator in the evaluation and not simply as
an experimental participant. The think
aloud process has a number of advantages:
− the process is less constrained and therefore easier for the evaluator to learn to use
− the evaluator can clarify points of confusion at the time they occur and so maximize
the effectiveness of the approach for identifying problem areas.
• Protocol analysis
Methods for recording user actions include the following:
− Paper and pencil: This is primitive, but cheap, and allows the analyst to note interpretations and extraneous events as they occur.
− Audio recording: This is useful if the user is actively ‘thinking aloud’. However, it may be difficult to record sufficient information.
− Video recording: This has the advantage that we can see what the participant is doing.
Analysing protocols, whether video, audio or system logs, is time consuming and tedious
by hand. One solution to this problem is to provide automatic analysis tools to
support the task. These offer a means of editing and annotating video, audio and system
logs and synchronizing these for detailed analysis, e.g. the Experimental Video Annotator
(EVA).
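A building block for such tools is a timestamped system log that can later be synchronized with the audio and video streams. The sketch below (with hypothetical event names, not taken from any particular tool) shows the kind of log an instrumented interface might write for protocol analysis.

```python
# Sketch: writing a timestamped system log for later protocol analysis
# (hypothetical event names; annotation tools can synchronize a log
# like this with video and audio recordings).
import csv
import time

def log_event(writer, event, detail=""):
    # One row per user action: wall-clock timestamp, event type, detail.
    writer.writerow([f"{time.time():.3f}", event, detail])

with open("session_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "event", "detail"])
    log_event(writer, "menu_open", "File")
    log_event(writer, "item_select", "Save As")
    log_event(writer, "error_dialog", "invalid filename")
```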
D. Query techniques
Another set of evaluation techniques relies on asking the user about the interface
directly.
• Interviews: Interviewing users about their experience with an interactive system provides a direct and structured way of gathering information. Interviews have the
advantages that the level of questioning can be varied to suit the context and that the
evaluator can probe the user more deeply on interesting issues as they arise.
One of the problems with most evaluation techniques is that we are reliant on observation and
the users telling us what they are doing and how they are feeling.
• Eye tracking for usability evaluation: Eye tracking has been possible for many years,
but recent improvements in hardware and software have made it more viable as an
approach to measuring usability.
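Typical usability measures derived from eye tracking include the number and duration of fixations. The following sketch (made-up gaze samples and an assumed dispersion threshold, not from the lecture) groups raw gaze points into fixations using a simple dispersion-threshold idea: consecutive samples that stay within a small spatial window count as one fixation.

```python
# Sketch: deriving simple fixation measures from raw gaze samples
# (fabricated data; a simplified dispersion-threshold grouping).
# Each sample is (time_ms, x, y) at a fixed sampling rate.
samples = [(0, 100, 100), (20, 102, 101), (40, 101, 99),     # fixation 1
           (60, 400, 300), (80, 402, 298), (100, 401, 301)]  # fixation 2

DISPERSION = 25  # max pixel spread within one fixation (assumed threshold)

fixations, current = [], [samples[0]]
for s in samples[1:]:
    xs = [p[1] for p in current + [s]]
    ys = [p[2] for p in current + [s]]
    if (max(xs) - min(xs)) + (max(ys) - min(ys)) <= DISPERSION:
        current.append(s)   # still within the spatial window: same fixation
    else:
        fixations.append(current)
        current = [s]       # window exceeded: start a new fixation
fixations.append(current)

for i, fix in enumerate(fixations, 1):
    duration = fix[-1][0] - fix[0][0]
    print(f"Fixation {i}: {len(fix)} samples, {duration} ms")
```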