UNIT 5 HCI Notes
1. Types of Toolkits
6. Emerging Trends
Using toolkits effectively can greatly enhance the HCI design and
development process, allowing teams to focus more on user needs and
less on low-level technical challenges.
Overview of UIMS (User Interface Management System)
1. Definition:
o A UIMS provides a framework for developing user interfaces
in software applications. It allows designers and developers
to create, modify, and manage user interfaces efficiently.
2. Purpose:
o To separate the user interface from the application logic,
enabling easier modifications and updates to the interface
without affecting the underlying system.
5. User Modeling:
o Features that allow for the customization of the user
experience based on user profiles, preferences, and
behaviors.
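The separation of interface from application logic described above can be sketched in code. The following is a minimal, hypothetical Python example (the class names and the counter scenario are invented for illustration): the logic layer exposes no UI code, and any view can subscribe to it via callbacks, so the interface can be replaced without touching the underlying system.

```python
class CounterLogic:
    """Application logic: holds state, knows nothing about the UI."""
    def __init__(self):
        self.value = 0
        self._listeners = []

    def on_change(self, callback):
        # Views register here; the logic never references them directly.
        self._listeners.append(callback)

    def increment(self):
        self.value += 1
        for cb in self._listeners:
            cb(self.value)


class TextView:
    """One possible interface; a GUI view could subscribe instead."""
    def __init__(self, logic):
        self.last_render = ""
        logic.on_change(self.render)

    def render(self, value):
        self.last_render = f"Count: {value}"


logic = CounterLogic()
view = TextView(logic)
logic.increment()
print(view.last_render)  # Count: 1
```

Because the view only subscribes to change notifications, swapping `TextView` for a graphical widget requires no change to `CounterLogic` — the kind of independent modification a UIMS is meant to enable.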
Advantages of UIMS
1. Modularity:
o Promotes a modular approach to interface design, allowing
different components to be developed and updated
independently.
2. Consistency:
o Helps maintain consistency across different parts of an
application by providing standardized UI elements and
interaction patterns.
3. Efficiency:
o Reduces development time by providing reusable
components and tools that streamline the interface creation
process.
4. Flexibility:
o Enables designers to experiment with different layouts and
interactions, facilitating rapid prototyping and iterative
design.
5. User-Centric Design:
o Encourages the integration of user feedback and testing into
the design process, leading to more usable and satisfying
interfaces.
Relevance in HCI
Conclusion
1. Usability Assessment
Ease of Use: Determine how easily users can learn and operate the
system.
Efficiency: Measure how quickly users can accomplish tasks using
the interface.
Error Rate: Identify the frequency and types of errors users make,
and assess how easily they can recover from them.
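The efficiency and error-rate measures above can be computed directly from session logs. This is a hedged sketch — the log format `(task_time_seconds, errors_made, recovered)` and the sample data are invented for illustration, not part of the notes.

```python
# Hypothetical session log: (task_time_seconds, errors_made, recovered)
sessions = [
    (42.0, 1, True),
    (55.5, 0, True),
    (61.2, 2, False),
    (38.7, 0, True),
]

# Efficiency: average time to complete the task.
avg_time = sum(t for t, _, _ in sessions) / len(sessions)

# Error rate: mean errors per task.
error_rate = sum(e for _, e, _ in sessions) / len(sessions)

# Recovery: of the sessions with at least one error, how many recovered?
with_errors = [r for _, e, r in sessions if e]
recovery_rate = sum(with_errors) / max(len(with_errors), 1)

print(f"avg time: {avg_time:.1f}s, errors/task: {error_rate:.2f}, "
      f"recovery: {recovery_rate:.0%}")
```

Tracking these three numbers across design iterations gives a simple quantitative baseline for the usability assessment goals listed above.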
2. User Satisfaction
3. Functionality Evaluation
4. Accessibility Assessment
6. Comparative Analysis
8. Stakeholder Alignment
User-Centered Design: Ensure that the final product aligns with the
needs and expectations of end-users.
Business Goals: Assess how well the system meets organizational
objectives and stakeholder requirements.
9. Empirical Research
Conclusion
Formative Evaluation:
o Conducted during design and development to identify usability
problems early and guide iterative improvements.
Qualitative Evaluation:
o Focuses on understanding user experiences, behaviors, and
motivations.
o Methods: Interviews, observational studies, and think-aloud
protocols.
Quantitative Evaluation:
o Involves numerical data to measure usability and
performance.
o Methods: Surveys, analytics data, and controlled experiments
with statistical analysis.
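A small sketch of the statistical-analysis step: comparing questionnaire scores for two interface variants with Welch's t-statistic. The scores and variant names are invented for illustration, and a real study would also compute degrees of freedom and a p-value (e.g., with `scipy.stats.ttest_ind`); this hand-rolled version only shows the idea using the standard library.

```python
from math import sqrt
from statistics import mean, variance

# Illustrative satisfaction scores (0-100) for two interface variants.
variant_a = [68, 72, 75, 80, 70, 74]
variant_b = [78, 82, 85, 79, 88, 81]

def welch_t(x, y):
    """Welch's t-statistic for two independent samples."""
    vx, vy = variance(x), variance(y)  # sample variances
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

t = welch_t(variant_a, variant_b)
print(f"mean A = {mean(variant_a):.1f}, mean B = {mean(variant_b):.1f}, "
      f"t = {t:.2f}")
```

A large-magnitude t suggests the difference in mean scores is unlikely to be noise, which is exactly the kind of claim quantitative evaluation supports and qualitative methods do not.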
Laboratory Studies:
o Conducted in controlled environments where variables can be
manipulated.
User-Centered Evaluation:
o Involves real users performing tasks with the system.
Remote Evaluation:
o Conducted when users are not physically present, often using
online tools.
o Allows for a broader and more diverse participant pool.
o Methods: Remote usability testing, online surveys, and
remote interviews.
In-Person Evaluation:
o Conducted face-to-face, allowing for direct observation and
interaction.
o Facilitates immediate feedback and clarification of user
responses.
o Methods: In-person usability testing, workshops, and focus
groups.
Performance-Based Evaluation:
o Measures objective metrics such as task completion time,
error rates, and success rates.
o Provides quantifiable data to assess usability.
Subjective Evaluation:
o Gathers user perceptions, satisfaction, and overall experience.
o Methods: Questionnaires, interviews, and user feedback
sessions.
Conclusion
1. Stage of Development
2. Research Goals
3. User Involvement
Real Users:
o Methods: Usability testing, field studies, diary studies.
Quantitative Data:
o Methods: Surveys with Likert scales, A/B testing, analytics
data.
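A/B testing, mentioned above as a quantitative method, typically compares a success metric between two designs. Below is a hedged sketch of a two-proportion z-test; the counts are invented, and a production analysis would use a statistics library rather than this hand computation.

```python
from math import sqrt

# Hypothetical A/B test: task-success counts for two designs.
success_a, n_a = 180, 250   # design A: 180 of 250 participants succeeded
success_b, n_b = 205, 250   # design B: 205 of 250 succeeded

p_a, p_b = success_a / n_a, success_b / n_b

# Two-proportion z-test with a pooled standard error.
p_pool = (success_a + success_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

print(f"A: {p_a:.0%}, B: {p_b:.0%}, z = {z:.2f}")
```

A z-score beyond roughly ±1.96 corresponds to significance at the 5% level for a two-sided test, giving a principled basis for preferring one design.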
5. Context of Use
Controlled Environment:
o Methods: Laboratory studies.
6. Resources Available
7. Participant Characteristics
Conclusion
o Ensure that you have the right tools and resources in place for
data collection and analysis.
6. E - Evaluate the Results
o Analyze the collected data to answer the evaluation
questions.
o Interpret the findings in relation to the original goals and
questions.
o Present the results to stakeholders, highlighting insights,
usability issues, and recommendations for improvement.
Conclusion
Conclusion
1. User-Centric Focus:
o Emphasizes the user's perspective, particularly for new or
infrequent users, by analyzing how they would interact with
the system without prior knowledge.
2. Task-Based Evaluation:
o Involves selecting specific tasks that users would typically
perform and assessing how easily they can accomplish these
tasks using the system.
3. Step-by-Step Analysis:
o Evaluators go through each step of a task to determine if the
user can understand what to do next, whether they can find
the necessary controls, and if they can successfully execute
the task.
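The step-by-step analysis above can be recorded systematically: for each step of a task, the evaluator answers the standard walkthrough questions, and any "no" flags a potential usability problem. The task, steps, and answers below are invented for illustration.

```python
# The three questions mirror the step-by-step checks described above.
QUESTIONS = (
    "Will the user know what to do at this step?",
    "Will the user notice the correct control?",
    "Will the user understand the feedback after acting?",
)

# Hypothetical walkthrough record: one answer per question per step.
walkthrough = [
    {"step": "Open the settings menu", "answers": (True, True, True)},
    {"step": "Locate the 'Export' option", "answers": (True, False, True)},
    {"step": "Confirm the export dialog", "answers": (True, True, True)},
]

# Any 'False' answer flags a potential usability problem at that step.
problems = [
    (s["step"], q)
    for s in walkthrough
    for q, ok in zip(QUESTIONS, s["answers"])
    if not ok
]
for step, question in problems:
    print(f"Issue at '{step}': {question}")
```

The output is a concrete list of where a first-time user is likely to get stuck, which feeds directly into redesign recommendations.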
Conclusion
1. User-Centric Approach:
1. Define Objectives:
o Establish clear goals for the usability test, such as identifying
specific usability issues or assessing overall user satisfaction.
2. Select Participants:
o Choose representative users who reflect the target audience
for the system. The ideal number of participants typically
ranges from 5 to 10 for qualitative insights.
3. Develop Test Scenarios:
o Create realistic tasks and scenarios that users will complete
during the test. These should reflect actual use cases.
4. Choose a Testing Method:
o Determine whether the testing will be conducted in a lab
(controlled environment) or in the field (real-world context).
Decide on moderated (facilitator present) or unmoderated
(participants work independently) testing.
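The four planning steps above can be captured as a simple test-plan record with basic sanity checks before sessions begin. This is a hypothetical sketch — the field names, scenarios, and validation rules are invented for illustration.

```python
# Hypothetical usability test plan following the four steps above.
test_plan = {
    "objectives": ["Identify navigation problems", "Assess satisfaction"],
    "participants": 6,  # within the 5-10 range suggested for qualitative work
    "scenarios": [
        "Find the return policy and start a return",
        "Change the account email address",
    ],
    "method": {"setting": "lab", "moderated": True},
}

def validate_plan(plan):
    """Basic sanity checks before recruiting and running sessions."""
    assert plan["objectives"], "define at least one objective"
    assert 5 <= plan["participants"] <= 10, "aim for 5-10 participants"
    assert plan["scenarios"], "write at least one realistic task"
    assert plan["method"]["setting"] in ("lab", "field")
    return True

print(validate_plan(test_plan))  # True
```

Writing the plan down in a checkable form makes it easy to confirm, before any participant arrives, that objectives, tasks, and method have all been decided.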
Conclusion