
Chapter 9: Evaluation Techniques

1. What is Evaluation?

 Evaluation tests the usability and functionality of a system.


 It can take place in:
o A laboratory (controlled environment).
o The field (real-world setting).
o With users (getting direct feedback).
 It should be performed at all stages of the design process to ensure a better user
experience.

2. Goals of Evaluation
The primary objectives of evaluation are to:

1. Check system functionality – Does it work as intended?


2. Assess the effect of the interface on users – Is it easy to use?
3. Identify specific problems – What needs improvement?

3. Evaluating Designs: Three Main Techniques


There are different ways to evaluate a design:

A. Cognitive Walkthrough

 Developed by Polson et al.


 Focuses on how well a design supports learning.
 Conducted by an expert in cognitive psychology.
 The expert "walks through" the design to identify potential problems.

Key Questions Asked:

1. How will the user interact with the system?


2. What cognitive (mental) processes are required?
3. What learning problems might occur?

B. Heuristic Evaluation

 Proposed by Nielsen and Molich.


 Experts use usability heuristics (rules) to evaluate if the design follows best practices.

Common Heuristics (Rules) Checked:
✔ The system is predictable (users know what to expect).
✔ The system is consistent (same actions produce the same results).
✔ The system provides feedback (users know what is happening).

🔹 Heuristic evaluation "debugs" the design by identifying usability issues early.

C. Review-Based Evaluation

 Uses existing research and literature to evaluate the design.


 Helps identify past research findings that support or refute the current design.
 Example: The GOMS model can be used to predict how long users take to complete tasks (see the sketch below).
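To make this concrete, here is a minimal sketch of GOMS-style prediction using the simplified Keystroke-Level Model (KLM). The operator times are commonly cited approximations and the task breakdown is invented for illustration; neither comes from this chapter.

```python
# Keystroke-Level Model (KLM) sketch: predict task completion time by
# summing per-operator times. The durations below are commonly cited
# approximations; treat them as illustrative, not definitive.
OPERATOR_TIMES = {
    "K": 0.20,   # press a key or button
    "P": 1.10,   # point with a mouse at a target
    "H": 0.40,   # move hand between keyboard and mouse
    "M": 1.35,   # mental preparation before an action
}

def predict_task_time(operators):
    """Sum the time for a sequence of KLM operators, e.g. 'MPK'."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# Hypothetical task: think, point at a menu, click, then type 4 characters.
sequence = "MPK" + "K" * 4
print(f"Predicted time: {predict_task_time(sequence):.2f} s")
```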

4. Evaluating Through User Participation


User involvement is crucial in the evaluation process.
There are two main approaches:

A. Laboratory Studies

✅ Advantages:

 Specialized equipment is available.


 The environment is controlled and distraction-free.

❌ Disadvantages:

 Users are not in a real-world setting.


 Hard to observe group interactions.

📌 Best used when: testing in the real-world setting would be dangerous or impractical.

B. Field Studies

✅ Advantages:

 Conducted in a natural environment.


 Context is retained (users behave naturally).
 Long-term studies are possible.

❌ Disadvantages:

 Many distractions.
 Hard to control variables like noise and interruptions.

📌 Best used when: Studying real-world behavior over time.

5. Evaluating Implementations
Once a system is developed, different approaches are used to test how well it works.

🔹 Types of Implementations Evaluated:

 Simulation (a model that behaves like the real system).


 Prototype (a working model of the system).
 Full implementation (a finished product).

A. Experimental Evaluation

 Controlled testing of how users interact with the system.


 Researchers test hypotheses about usability.
 Example: Testing if a larger button size reduces user errors.

6. Experimental Factors
Four key factors must be considered in an experiment:

1. Subjects – Who will be tested? How many?


2. Variables – What will be changed or measured?
3. Hypothesis – What is the expected outcome?
4. Experimental Design – How will the test be conducted?

7. Types of Variables in Experiments

There are two types of variables in experiments:

1. Independent Variable (IV) – The factor that is changed.


o Example: Changing the font size in a user interface.
2. Dependent Variable (DV) – The factor that is measured.
o Example: Measuring how many errors users make.

📌 Example Hypothesis:
📝 "Error rate will increase as font size decreases."

8. Experimental Design
There are two main ways to set up an experiment:
A. Within-Groups Design

 Each user tests all versions of the system.


 Pros: Fewer participants needed.
 Cons: Users may learn from previous tests, affecting results.

B. Between-Groups Design

 Each user tests only one version of the system.


 Pros: No transfer of learning.
 Cons: More users are needed.
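One way to see the difference between the two designs is in how participants are assigned to conditions. The sketch below is illustrative only (participant IDs and version names are made up); note the alternating order in the within-groups case, a simple form of counterbalancing that offsets the learning effect mentioned above.

```python
import random

participants = [f"P{i}" for i in range(1, 9)]
conditions = ["Version A", "Version B"]

# Within-groups design: every participant uses both versions; alternate the
# order (simple counterbalancing) so learning does not always favour the
# version tested second.
within = {
    p: list(conditions) if i % 2 == 0 else list(reversed(conditions))
    for i, p in enumerate(participants)
}

# Between-groups design: each participant is randomly assigned to exactly one
# version, so there is no transfer of learning, but more users are needed.
shuffled = participants[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
between = {p: conditions[0] for p in shuffled[:half]}
between.update({p: conditions[1] for p in shuffled[half:]})

print("Within-groups orders:", within)
print("Between-groups assignment:", between)
```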

9. Data Analysis
Before analyzing data:
✔ Look at the raw data.
✔ Keep a backup of the original data.

Types of Statistical Tests

1. Parametric Tests
o Assume the data follow a normal distribution.
o More powerful, but require specific conditions to hold.
2. Non-Parametric Tests
o Do not assume a normal distribution.
o More robust, but less powerful.
3. Contingency Tables
o Classify counts of data by category (e.g., task success vs. failure for each interface version).
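As a rough illustration of the three kinds of test, here is a sketch using SciPy (assumed installed). The error counts echo the earlier button-size example and the 2x2 table is invented: a t-test is the parametric choice, Mann-Whitney U the non-parametric alternative, and a chi-squared test handles the contingency table of categorical counts.

```python
from scipy import stats

# Invented data: errors made with large vs. small buttons.
large_buttons = [2, 1, 3, 2, 1, 2]
small_buttons = [4, 5, 3, 6, 4, 5]

# Parametric: independent-samples t-test (assumes roughly normal data).
t_stat, p_t = stats.ttest_ind(large_buttons, small_buttons)

# Non-parametric: Mann-Whitney U test (no normality assumption).
u_stat, p_u = stats.mannwhitneyu(large_buttons, small_buttons)

# Contingency table: task completed vs. not completed for two versions.
table = [[18, 2],   # Version A: completed, not completed
         [12, 8]]   # Version B
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}, chi-squared p = {p_chi:.3f}")
```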

10. Evaluating Groups in Experiments


Testing groups of users is harder than testing individuals because:

 Groups need more participants, making it costly.


 More variation in behavior and interaction.
 Hard to schedule group tests.

🔹 Group Tasks for Experiments:

 Creative Tasks – Example: Writing a report.


 Decision Games – Example: A survival game.
 Control Tasks – Example: Managing a factory simulation.

11. Observational Methods


There are different ways to observe users:

A. Think-Aloud Method

 Users speak out loud while performing tasks.


 Pros: Simple and provides good insights.
 Cons: Can change user behavior.

B. Cooperative Evaluation

 Users work with the evaluator during the session, and both can ask each other questions.


 Pros: More interactive and comfortable for users.

C. Protocol Analysis

 Uses audio, video, or computer logging to record user interactions.


 Pros: Captures accurate data.
 Cons: Hard to analyze large amounts of data.
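A minimal sketch of the computer-logging idea: record timestamped user events so they can be analysed (or replayed alongside audio/video) later. The event names and log file path are illustrative assumptions, not part of this chapter.

```python
import json
import time

LOG_FILE = "session_log.jsonl"  # hypothetical log file, one JSON event per line

def log_event(event_type, detail):
    """Append a timestamped interaction event to the session log."""
    record = {"t": time.time(), "event": event_type, "detail": detail}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical events captured during a task.
log_event("click", {"target": "Save button"})
log_event("key", {"text": "report.docx"})
log_event("error", {"message": "file name already exists"})
```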

12. Physiological Methods


Measures how users physically react to a system.

🔹 Eye Tracking

 Measures where users look on the screen.


 Helps determine what parts of the UI are confusing.

🔹 Physiological Measurements

 Measures physical reactions such as heart rate, sweating (skin conductance), and brain activity.

13. Choosing the Right Evaluation Method


Key Considerations

1. Design vs. Implementation – Is it early or late in development?


2. Lab vs. Field Testing – Do we need control or real-world feedback?
3. Subjective vs. Objective Data – Are we gathering user opinions or hard data?
4. Qualitative vs. Quantitative Measures – Do we want detailed insights or numbers?

Final Thoughts
This chapter explains various ways to evaluate a system to ensure usability and effectiveness.
By using techniques such as heuristic evaluation, controlled experiments, and user testing,
designers can create more usable, user-friendly systems.
