Human Computer Interaction Solved Paper

The document discusses various aspects of Human-Computer Interaction (HCI), including factors influencing HCI such as human, computer, environmental, and usability principles. It also covers dialogue management using Markov Decision Processes (MDP) and the architecture of spoken dialogue systems, highlighting components like Automatic Speech Recognition and Natural Language Understanding. Additionally, it addresses usability evaluation methods, the capabilities of the ICARUS cognitive architecture, options for video communication in a corporate setting, and the defining factors of dialogue systems.

I will give you some questions on the human-computer interaction subject; answer them properly.

Sure! Send over the questions, and I'll provide well-explained answers.

1. A. Summarise the factors in HCI.


1.B. Discuss dialogue as Markov decision process for dialogue management.

1.A. Factors in Human-Computer Interaction (HCI)

Human-Computer Interaction (HCI) focuses on designing systems that enhance user experience by
improving usability, efficiency, and satisfaction. Several factors influence HCI:

1. Human Factors:

Cognitive Abilities: Memory, perception, attention, and learning impact how users interact with systems.
Physical Abilities: Motor skills, vision, and hearing affect input and response time.
Psychological Aspects: User preferences, emotions, and motivation influence interaction.

2. Computer/System Factors:

Input/Output Devices: Keyboards, touchscreens, voice recognition, and haptic feedback affect usability.
Response Time & Performance: System speed, latency, and real-time feedback impact user satisfaction.
Interface Design: Consistency, clarity, and accessibility determine ease of use.

3. Environmental & Contextual Factors:

Physical Environment: Lighting, noise, and workspace setup affect usability.
Social & Cultural Aspects: Different user expectations and practices influence interaction.
Task Context: The nature of tasks (e.g., emergency systems vs. entertainment apps) dictates design needs.

4. Usability Principles:

Learnability: How easily users can understand and use a system.
Efficiency: How quickly users can perform tasks.
Memorability: How easily users can recall interactions after a period of non-use.
Error Handling: How the system prevents and recovers from user errors.
Satisfaction: The overall user experience and comfort level.

Printed using ChatGPT to PDF, powered by PDFCrowd HTML to PDF API. 1/45
1.B. Dialogue as a Markov Decision Process (MDP) for Dialogue Management

A Markov Decision Process (MDP) is a mathematical framework used for decision-making where
outcomes are partly random and partly controlled by an agent. In dialogue management, an MDP helps
model conversations between humans and computers by determining the best system responses based
on user inputs.

Components of MDP in Dialogue Management:

1. States (S): Represent different stages of a conversation (e.g., user asking a question, system
responding).
2. Actions (A): Possible responses the system can choose (e.g., providing an answer, asking for
clarification).

3. Transition Probability (P): The likelihood of moving from one state to another based on user input
and system response.
4. Reward Function (R): Assigns a numerical value to actions based on how well they improve user
satisfaction.
5. Policy (π): Defines the optimal strategy for selecting actions to maximize long-term rewards (i.e.,
an efficient conversation).

Working of MDP in Dialogue Systems:

The system starts in an initial state where the user initiates a conversation.
Based on the user's input, the system selects an action (response) that maximizes the expected reward.
The system transitions to a new state based on the user’s reaction.
The process continues iteratively, adjusting responses using reinforcement learning to improve interaction over time.
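The loop above can be sketched as a toy MDP solved by value iteration; all states, actions, transition probabilities, and rewards below are hypothetical, chosen only to illustrate the components S, A, P, R, and π:

```python
# Toy dialogue MDP (hypothetical numbers): P[s][a] lists (next_state, prob)
# pairs, and R[s][a] is the immediate reward for taking action a in state s.
STATES = ["greet", "ask_slot", "confirm", "done"]
P = {
    "greet":    {"ask": [("ask_slot", 1.0)]},
    "ask_slot": {"ask": [("ask_slot", 0.3), ("confirm", 0.7)]},
    "confirm":  {"confirm": [("ask_slot", 0.2), ("done", 0.8)],
                 "answer":  [("done", 1.0)]},
    "done":     {},                      # terminal state: no actions
}
R = {
    "greet":    {"ask": 0.0},
    "ask_slot": {"ask": -1.0},           # every extra question costs the user time
    "confirm":  {"confirm": -0.5, "answer": 5.0},
    "done":     {},
}
GAMMA = 0.9                              # discount factor for long-term reward

def value_iteration(tol=1e-6):
    """Compute the optimal state values V(s) by repeated Bellman backups."""
    V = {s: 0.0 for s in STATES}
    while True:
        delta = 0.0
        for s in STATES:
            if not P[s]:
                continue
            best = max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                       for a in P[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

def greedy_policy(V):
    """The policy π: in each state, pick the action with best expected return."""
    return {s: max(P[s], key=lambda a: R[s][a] +
                   GAMMA * sum(p * V[s2] for s2, p in P[s][a]))
            for s in STATES if P[s]}
```

With these numbers, the computed policy answers directly from the `confirm` state rather than re-asking, because the reward function makes redundant questions costly.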

Advantages of MDP in Dialogue Management:

Helps adapt responses dynamically based on user behavior.
Improves user experience by optimizing dialogue flow.
Enables context-awareness, allowing systems to maintain conversation history.
Used in chatbots, virtual assistants, and automated customer service applications.

Would you like further clarification on any part?

2. A. Describe the general architecture of a spoken dialogue system.


2. B. Discuss for what applications are natural language interfaces most appropriate.

2.A. General Architecture of a Spoken Dialogue System (SDS)

A Spoken Dialogue System (SDS) is an interactive system that enables users to communicate with
computers using spoken language. These systems are commonly used in virtual assistants (e.g., Siri,
Alexa), customer service chatbots, and automated voice response systems. The architecture of an SDS
consists of several key components:

1. Automatic Speech Recognition (ASR)

Converts spoken language into text.

Uses techniques like Hidden Markov Models (HMMs), Deep Neural Networks (DNNs), or
Transformer-based models.
Handles variations in accents, background noise, and speech speed.

2. Natural Language Understanding (NLU)

Processes and extracts meaning from the recognized text.

Includes tasks like intent recognition, named entity recognition (NER), and syntactic parsing.
Uses machine learning models like BERT, GPT, or rule-based approaches.

3. Dialogue Manager (DM)

Controls the flow of conversation.

Uses a Dialogue Policy to decide the next system response based on user input.
Can be modeled using Finite-State Machines (FSMs), Markov Decision Processes (MDPs), or
Reinforcement Learning (RL).

4. Natural Language Generation (NLG)

Converts structured system responses into natural, human-like language.

Uses templates, rule-based systems, or deep learning models (e.g., GPT).

5. Text-to-Speech Synthesis (TTS)

Converts generated text into spoken language using speech synthesis techniques.
Uses concatenative synthesis, parametric synthesis, or neural TTS models like WaveNet.

6. Knowledge Base (KB) and External Databases

Provides information retrieval, answering queries, and maintaining context.


May include APIs, databases, or external sources like Wikipedia.

7. User Interface and Context Handling

Integrates with user devices (smartphones, smart speakers, etc.).


Maintains session history and adapts responses based on past interactions.

Flow of a Spoken Dialogue System


1. User speaks → ASR converts speech to text.
2. NLU processes input → extracts intent and entities.
3. DM selects response → based on dialogue history and policy.
4. NLG generates text response → converts structured data into sentences.
5. TTS converts text to speech → user hears the response.
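The five-step flow above can be expressed as a minimal pipeline skeleton. Every component here is a stub with hypothetical behaviour, standing in for a real ASR/NLU/DM/NLG/TTS implementation:

```python
# Minimal sketch of the SDS pipeline: ASR -> NLU -> DM -> NLG -> TTS.
def asr(audio: bytes) -> str:
    return "what is the weather"          # stub: speech-to-text

def nlu(text: str) -> dict:
    # Stub intent recognition: a real NLU model would classify intents
    # and extract entities from the transcript.
    intent = "get_weather" if "weather" in text else "unknown"
    return {"intent": intent, "entities": {}}

def dialogue_manager(frame: dict, history: list) -> str:
    history.append(frame)                 # maintain dialogue history
    return "inform_weather" if frame["intent"] == "get_weather" else "clarify"

def nlg(action: str) -> str:
    templates = {"inform_weather": "It is sunny today.",
                 "clarify": "Sorry, could you rephrase that?"}
    return templates[action]

def tts(text: str) -> bytes:
    return text.encode()                  # stub: text-to-speech

def handle_turn(audio: bytes, history: list) -> bytes:
    """One full turn: user audio in, synthesized response audio out."""
    return tts(nlg(dialogue_manager(nlu(asr(audio)), history)))
```

Each stub can be swapped for a real model independently, which is exactly why the SDS architecture is decomposed into these stages.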

2.B. Applications of Natural Language Interfaces (NLI)

A Natural Language Interface (NLI) allows users to interact with computers using everyday language
instead of structured commands. These interfaces are useful in scenarios where users prefer
conversational input over technical commands.

1. Virtual Assistants & Chatbots

Examples: Siri, Alexa, Google Assistant.
Used for setting reminders, answering questions, controlling smart devices, etc.

2. Customer Support & Help Desks

Automates responses to frequently asked questions.
Reduces human workload in call centers.
Used in banking, telecom, and e-commerce.

3. Search Engines & Information Retrieval

Google Search, IBM Watson, and enterprise knowledge bases.
Allows users to ask questions in natural language (e.g., "What is the weather today?").

4. Healthcare Applications

Virtual health assistants for patient queries.
Automated symptom checkers (e.g., Ada Health).
Helps doctors retrieve patient information using voice commands.

5. Data Analytics & Business Intelligence

Allows business users to query databases in natural language.
Example: "Show me last month’s sales report."
Used in tools like Tableau, Power BI, and Google Data Studio.

6. Education & E-Learning

AI tutors and learning assistants (e.g., Duolingo, Socratic).
Helps students interact naturally with learning systems.

7. Smart Homes & IoT Control

Voice-controlled home automation systems.
Example: "Turn off the lights" or "Set the thermostat to 22°C."

8. Accessibility for Disabled Users

Voice-controlled interfaces for visually impaired users.
Hands-free control for mobility-impaired individuals.

Key Benefits of NLIs

✔ User-Friendly: Reduces the need to learn complex commands.
✔ Efficiency: Speeds up information retrieval and task execution.
✔ Accessibility: Enables hands-free and inclusive interactions.
Would you like any specific examples or deeper explanations?

3. A. Describe how do you perform evaluation in usability.


3. B. List out the capabilities of ICARUS.

3.A. Evaluation in Usability

Usability evaluation assesses how effectively, efficiently, and satisfactorily users can interact with a
system. It ensures that the system meets user needs and expectations. There are two primary
approaches: formative evaluation (conducted during design) and summative evaluation (conducted
after development).

1. Usability Evaluation Methods

1. User Testing (Empirical Methods)

Think-Aloud Protocol: Users verbalize their thoughts while interacting with the system.
Remote Usability Testing: Users test the system in real-world conditions via online tools.
A/B Testing: Compares two versions of a design to determine which performs better.
2. Expert Evaluation (Analytical Methods)

Heuristic Evaluation: Experts review the system based on usability principles (e.g., Nielsen’s
heuristics).
Cognitive Walkthrough: Experts simulate user interactions to identify issues in task
completion.

3. Survey & Feedback-Based Evaluation

Questionnaires & Interviews: Collect subjective feedback from users (e.g., System Usability Scale - SUS).
Focus Groups: Group discussions to gain insights into user needs and system effectiveness.
4. Automated & Analytical Methods
Eye-Tracking: Monitors user gaze to analyze attention patterns.
Heatmaps: Visual representations of user interaction on screens.

Log Analysis: Tracks user interactions and error rates.

2. Usability Metrics

1. Effectiveness

Task completion rate (% of users who complete tasks successfully).
Number of errors per task.

2. Efficiency

Time taken to complete tasks.
Number of steps required to perform an action.

3. Satisfaction

User ratings (e.g., on a scale from 1 to 10).
Qualitative feedback (open-ended responses).
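As a concrete example of a satisfaction metric, the System Usability Scale (SUS) mentioned above combines ten 1-5 ratings into a 0-100 score: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. A minimal sketch:

```python
def sus_score(responses):
    """Score a 10-item System Usability Scale questionnaire (1-5 per item)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i=0 is item 1 (odd-numbered)
                for i, r in enumerate(responses))
    return total * 2.5                               # scale 0-40 up to 0-100
```

For example, a neutral response of 3 to every item yields a score of 50.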

3. Steps in Usability Evaluation

1. Define Goals → What usability aspects are being tested?
2. Select Method → Choose testing, expert review, or surveys.
3. Conduct Testing → Perform usability tests with real users.
4. Analyze Results → Identify usability problems and trends.
5. Make Improvements → Modify the system based on feedback.

3.B. Capabilities of ICARUS

ICARUS is a cognitive architecture that models human-like problem-solving and learning in AI systems.
It integrates perception, reasoning, learning, and execution in a unified framework.

Key Capabilities of ICARUS

1. Hierarchical Knowledge Representation

Stores knowledge in structured, goal-directed concepts and skills.

Uses long-term memory for general knowledge and working memory for real-time
problem-solving.

2. Reactive Execution

Adapts to changing environments.
Uses perception modules to respond dynamically to external stimuli.

3. Goal-Driven Behavior

Supports goal prioritization and multi-tasking.
Selects actions based on high-level objectives.

4. Integrated Learning

Learns new problem-solving skills from experience.
Uses explanation-based learning (EBL) to generalize from examples.

5. Cognitive Simulation

Mimics human reasoning and learning.
Models decision-making in complex environments.

6. Application in Robotics & AI

Used in autonomous agents, robotic systems, and AI assistants.
Helps robots adapt to new tasks without explicit programming.

Would you like a deeper explanation of any part?

4. A. A company has offices in New York and London. They are considering using some form of video
to supplement their existing email, fax, and telephone communications. Conclude the options
available and potential advantages and problems they may encounter.
4. B. Discuss the factors defining the dialogue system.

4.A. Options for Video Communication and Their Advantages/Challenges

A company with offices in New York and London can use video communication to enhance
collaboration. The available options and their respective advantages and challenges are discussed
below:

1. Options for Video Communication

1. Video Conferencing (Live Meetings)

Tools: Zoom, Microsoft Teams, Google Meet, Webex.
Used for real-time communication, meetings, and decision-making.

2. Video Messaging (Asynchronous Video Communication)

Tools: Loom, Vidyard, Microsoft Stream.
Employees can record video messages and share them for later viewing.

3. Webinars and Virtual Events

Tools: GoToWebinar, Zoom Webinar, ON24.
Used for company-wide announcements, training, and external presentations.

4. Enterprise Video Platforms (Internal Video Sharing)

Tools: Microsoft Stream, Vimeo Enterprise.
Used for internal training, knowledge sharing, and onboarding videos.
2. Advantages of Video Communication

✅ Improved Collaboration & Engagement

Video enhances clarity in discussions and reduces miscommunication.
Face-to-face interaction builds stronger relationships among employees.

✅ Time & Cost Savings

Reduces travel expenses between the New York and London offices.
Speeds up decision-making compared to email and fax.

✅ Better Knowledge Sharing

Recorded video meetings allow employees in different time zones to catch up.
Training videos improve onboarding efficiency.

✅ Increased Productivity

Employees can use video to quickly explain complex topics instead of long emails.
Reduces time spent on back-and-forth messaging.

3. Potential Challenges

⚠ Time Zone Differences

Live video meetings between New York (EST) and London (GMT) may be difficult to schedule.
Solution: Use asynchronous video messaging for non-urgent communications.

⚠ Technical Issues & Bandwidth Requirements

Video conferencing requires a stable internet connection.
Solution: Invest in high-speed internet and reliable video platforms.

⚠ Security & Data Privacy Concerns

Sensitive company information may be exposed if security is weak.
Solution: Use encrypted video conferencing tools with secure access controls.

⚠ Video Fatigue

Excessive video meetings may lead to employee burnout.
Solution: Use video only when necessary and encourage breaks.

🔹 Conclusion
Using a mix of live video conferencing, asynchronous video messaging, and enterprise video
platforms can significantly improve communication. However, the company should manage time
zones, invest in reliable tools, and balance video use with other communication methods to
maximize benefits.

4.B. Factors Defining a Dialogue System

A dialogue system is an AI-driven system that interacts with users through spoken or written language.
Its effectiveness depends on several key factors:

1. Input Processing

Speech Recognition (ASR): Converts spoken language to text (for voice-based systems).

Natural Language Understanding (NLU): Extracts intent and meaning from user input.

2. Dialogue Management

State Tracking: Maintains context and tracks user history in a conversation.
Decision-Making: Determines the next response using rule-based, statistical, or AI-driven approaches.
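A minimal sketch of the state-tracking and decision-making factors above, using a rule-based frame (the slot names `city` and `date` are hypothetical examples, not from the original text):

```python
# Rule-based dialogue state tracker: fill required slots from each user
# turn, then decide the next system action.
class DialogueState:
    def __init__(self, required_slots):
        self.slots = {s: None for s in required_slots}
        self.history = []                 # full record of user turns

    def update(self, turn: dict):
        """Merge entities from one NLU-parsed user turn into the state."""
        self.history.append(turn)
        for slot, value in turn.get("entities", {}).items():
            if slot in self.slots:
                self.slots[slot] = value

    def next_action(self):
        """Decision-making: request the first missing slot, else proceed."""
        missing = [s for s, v in self.slots.items() if v is None]
        return ("request", missing[0]) if missing else ("confirm_booking", None)
```

A real dialogue manager would replace the `next_action` rule with a learned policy, but the state-tracking role is the same.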

3. Response Generation

Natural Language Generation (NLG): Produces human-like responses.

Text-to-Speech (TTS): Converts generated text into speech for voice systems.

4. Context Awareness

Maintains long-term and short-term memory of previous interactions.


Personalizes responses based on user preferences.

5. Adaptability & Learning

Uses Machine Learning (ML) and Reinforcement Learning (RL) to improve over time.

Learns from user feedback and interaction patterns.

6. Multimodal Capabilities

Supports text, voice, and visual inputs for richer interaction (e.g., chatbots with voice & image
recognition).

7. Error Handling & Recovery

Detects and corrects errors in user input.

Uses confirmation mechanisms to ensure clarity.

8. User Experience (UX) Considerations

Response Time: Should be fast and natural.
Conversational Flow: Should feel intuitive and human-like.
Personalization: Adapts to user preferences.

🔹 Conclusion
A well-designed dialogue system understands user input, maintains context, generates natural
responses, learns over time, and ensures a smooth user experience. These factors are crucial for
developing effective chatbots, virtual assistants, and voice-controlled applications.
Would you like any more details or real-world examples?

5. A. List out all you can about natural language interfaces.


5. B. Discriminate the terms control and feedback and feedthrough in the classification by function
framework.

5.A. Overview of Natural Language Interfaces (NLI)

A Natural Language Interface (NLI) allows users to interact with computers using human language
(spoken or written) rather than structured commands or graphical interfaces.

1. Types of NLIs

1. Text-Based NLIs
Users type queries, and the system processes and responds in text.

Example: Google Search, database query tools (e.g., Wolfram Alpha).

2. Speech-Based NLIs
Users interact via voice, and the system responds through speech or text.

Example: Siri, Alexa, Google Assistant.

3. Multimodal NLIs

Combines text, voice, and gesture recognition for interaction.


Example: Smart assistants with visual interfaces (e.g., Google Nest Hub).

2. Key Components of an NLI

1. Natural Language Processing (NLP) → Converts user input into structured data.
2. Natural Language Understanding (NLU) → Extracts meaning, intent, and context.
3. Dialogue Management (DM) → Determines responses based on conversation flow.
4. Natural Language Generation (NLG) → Generates human-like responses.
5. Speech Recognition & Synthesis (for voice-based NLIs) → Converts speech to text and vice versa.

3. Applications of NLIs

Virtual Assistants (Siri, Alexa, Cortana).

Chatbots (customer service, healthcare assistants).

Search Engines (Google’s conversational search).

Data Query Systems (NL interfaces for databases like SQL-based queries).

Smart Home & IoT (Controlling smart devices via voice).

4. Advantages of NLIs

✔ User-Friendly: No need for technical knowledge.
✔ Efficiency: Faster access to information.
✔ Accessibility: Beneficial for visually or physically impaired users.
✔ Improved Productivity: Reduces the learning curve for complex systems.
5. Challenges of NLIs

⚠ Ambiguity in Language: Words may have multiple meanings.
⚠ Speech Recognition Issues: Accents and background noise can reduce accuracy.
⚠ Context Understanding: NLIs struggle with maintaining long-term conversation context.
⚠ Computational Cost: Requires powerful AI models and processing capabilities.
5.B. Control, Feedback, and Feedthrough in the Classification by Function
Framework

The Classification by Function Framework categorizes human-computer interaction based on how users control the system and receive feedback. The three key concepts—Control, Feedback, and Feedthrough—define different aspects of user interaction.

1. Control

Refers to user input that directs the system’s behavior.

The user performs an action to control the system (e.g., clicking a button, issuing a command).
Example: A user pressing "Play" on a music player to start playback.

2. Feedback

Refers to the system’s response to user actions, indicating that the system has received and
processed the input.
Helps users understand system status and whether their action was successful.

Example:
When clicking "Play," the button changes color, and the song starts playing.

A loading animation appears when a web page is loading.

3. Feedthrough

Refers to indirect feedback about the system’s state or actions that affect the user’s interaction.
Unlike direct feedback (which responds to user input), feedthrough provides contextual information about changes occurring in the system. In groupware settings, where the classification by function framework originates, feedthrough typically means that one user’s actions on a shared artifact become visible to the other users.

Example:
When adjusting a thermostat, a real-time temperature display updates to show the new
setting.

A cursor changes to a spinning wheel while a file is processing.

Comparison Table

| Concept | Definition | Example |
| --- | --- | --- |
| Control | User action to manipulate the system. | Pressing a "Start" button. |
| Feedback | System’s direct response to user action. | Button lights up when clicked. |
| Feedthrough | Contextual information about system state. | A battery icon showing charge level. |
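One way to see the three concepts together is a shared-document sketch (hypothetical classes, for illustration only): the editing user's action is control, the notification back to that user is feedback, and the notification other users receive about the change is feedthrough:

```python
# Control / feedback / feedthrough in a shared artifact.
class User:
    def __init__(self, name):
        self.name, self.messages = name, []

    def notify(self, msg):
        self.messages.append(msg)

class SharedDocument:
    def __init__(self):
        self.text, self.observers = "", []

    def register(self, user):
        self.observers.append(user)

    def edit(self, author, s):
        """Control: a user's action on the artifact."""
        self.text += s
        author.notify(f"inserted {s!r}")                       # feedback to the actor
        for u in self.observers:
            if u is not author:
                u.notify(f"{author.name} inserted {s!r}")      # feedthrough to others
```

The same event thus produces two different kinds of information depending on who receives it.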

🔹 Conclusion
Understanding the distinction between control, feedback, and feedthrough is crucial for designing
user-friendly interfaces. Proper implementation enhances usability, ensuring users can effectively
interact with and understand the system.
Would you like a deeper explanation with real-world applications?

6. A. Define cognitive architecture.


6. B. Discuss what new paradigms you think may be significant in the future of interactive
computing.

6.A. Definition of Cognitive Architecture

A Cognitive Architecture is a computational framework that models human-like intelligence by simulating how the brain processes information, makes decisions, and learns from experience. It provides the underlying structure for artificial intelligence (AI), robotics, and cognitive science applications.

Key Characteristics of Cognitive Architectures

1. Knowledge Representation

Stores structured knowledge (facts, rules, concepts) in long-term and short-term memory.

2. Reasoning & Decision-Making

Uses logical rules, heuristics, and learning models to solve problems.

3. Learning & Adaptation

Improves performance over time through machine learning, reinforcement learning, or experience-based learning.

4. Perception & Action

Integrates sensory input (e.g., vision, speech) to interact with the environment.

5. Modular & Multi-Agent Systems

Can consist of multiple interacting subsystems (e.g., memory, planning, execution).

Examples of Cognitive Architectures

1. SOAR – Focuses on general problem-solving and learning.
2. ACT-R – Models human cognition based on psychological principles.
3. ICARUS – Uses hierarchical reasoning for robotics and AI applications.
4. LIDA – Simulates human-like perception, memory, and consciousness.

Applications of Cognitive Architectures

✔ AI-driven assistants (e.g., Siri, Alexa).


✔ Autonomous robots and self-driving cars.
✔ Decision-support systems in business and healthcare.
✔ Simulating human behavior in virtual environments.
6.B. Future Paradigms in Interactive Computing

As technology evolves, interactive computing will see groundbreaking changes. Below are some
emerging paradigms that will shape the future:

1. Brain-Computer Interfaces (BCI)

Direct interaction between the human brain and computers, enabling thought-controlled devices.
Applications:
Helping disabled individuals control prosthetics.

Enhancing gaming and virtual reality (VR).

2. Extended Reality (XR) – (VR, AR, MR)

Virtual Reality (VR): Fully immersive digital environments.


Augmented Reality (AR): Overlaying digital elements on the real world.
Mixed Reality (MR): Real-time interaction between physical and digital worlds.

Future Applications:
Workplace Collaboration – Virtual offices replacing Zoom meetings.
Education & Training – Medical simulations, immersive learning.

3. Adaptive AI & Personalization

AI systems that learn and adapt in real-time based on user behavior.

Future Use Cases:


Personal assistants predicting user needs.
AI-driven UI that changes based on user preferences.

4. Conversational & Emotion-Aware AI

AI that understands human emotions via voice tone, facial expressions, and behavior.
Future Applications:

AI-powered customer service agents detecting frustration.


Chatbots that provide empathetic responses.

5. Quantum Computing & AI Integration

Quantum AI will process vast amounts of data instantly, enabling new levels of machine learning
and problem-solving.

Potential Applications:
Drug discovery & medical research.
Highly complex simulations (e.g., climate modeling, financial forecasting).

6. Haptic & Gesture-Based Interfaces

Touch-based feedback systems and gesture recognition will replace traditional input methods.

Applications:
Surgeons controlling robotic arms with hand gestures.
Gaming with real-time motion feedback.

7. Decentralized & Privacy-Focused Computing

Blockchain and privacy-enhancing AI will give users control over personal data.

Examples:
Web3 & Decentralized Identity – Users own their online identities.
Federated Learning – AI models trained without sharing personal data.

🔹 Conclusion
The future of interactive computing will focus on seamless, intelligent, and personalized interactions
between humans and machines. Emerging technologies like BCI, XR, quantum AI, and emotion-aware
systems will redefine how we engage with digital systems.
Would you like any real-world examples or deeper insights into any of these paradigms? 🚀

7. A. Illustrate positive and negative issues that have arisen when it has been embodied in a specific
system.
7. B. Produce case studies of participatory design in action and use these to provide a critique of the
approach.

7.A. Positive and Negative Issues in Embodied AI Systems

Embodied AI refers to AI integrated into physical systems, such as robots, virtual agents, and IoT
devices. One example is humanoid robots used in customer service and healthcare. Below are the
positive and negative issues associated with their deployment.

Case Study: Pepper Robot (SoftBank Robotics)

System Overview:
Pepper is a humanoid social robot designed for customer interaction, emotional recognition, and
assistance in businesses and healthcare.

Positive Issues (Benefits)

✅ Enhanced Customer Engagement


Pepper improved customer interaction in retail stores by providing product recommendations
and answering FAQs.

Businesses reported increased customer satisfaction due to Pepper’s interactive capabilities.

✅ Healthcare Assistance
Used in elderly care homes to reduce loneliness by engaging patients in conversation and
entertainment.
Assisted with medication reminders and emergency alerts.

✅ Multimodal Interaction
Supports voice, gestures, and touchscreen input, making it accessible for various users.
Can be programmed for different languages and cultures.

Negative Issues (Challenges & Limitations)

⚠ Limited Emotional Intelligence


While Pepper can recognize emotions, it struggles with deep emotional context.
Failed to handle sensitive conversations, such as complaints or distress calls.

⚠ High Cost & Maintenance


Expensive to implement, making it inaccessible for small businesses.
Requires regular software updates and hardware servicing.

⚠ Privacy & Security Risks


Concerns over data collection and surveillance when deployed in public spaces.
Potential hacking risks if security measures are weak.

🔹 Conclusion:
Embodied AI like Pepper shows great potential in customer service and healthcare, but challenges
like emotional intelligence, cost, and privacy need to be addressed for wider adoption.

7.B. Case Studies of Participatory Design & Critique of the Approach

Participatory Design (PD) is a user-centered approach where end-users actively contribute to the
design and development of a system. Below are case studies demonstrating PD in action, along with a
critique of the approach.

Case Study 1: Designing a Hospital Information System (Denmark)

🔹 Context:
A hospital in Denmark used Participatory Design to create an electronic health record (EHR) system
for doctors and nurses.

🔹 Implementation:
Medical staff participated in workshops, interviews, and prototype testing.
Iterative feedback cycles refined the user interface (UI) and workflows.

🔹 Results:
✅ Improved usability and efficiency, reducing errors in medical records.
✅ Staff adopted the system quickly due to familiarity with its design.
⚠ Challenges:
Time-consuming process, delaying system rollout.
Difficult to balance diverse user needs across different hospital departments.

Case Study 2: Redesigning Public Transport Ticketing System (Sweden)

🔹 Context:
Sweden's public transport authority redesigned ticket vending machines with input from daily
commuters.
🔹 Implementation:
Users tested prototypes at train stations and gave real-time feedback.
Iterations included simplified menus, touchscreen improvements, and language options.

🔹 Results:
✅ Increased ticket sales efficiency and reduced wait times.
✅ Higher user satisfaction due to better accessibility.
⚠ Challenges:
Some technical features were too complex, leading to usability concerns.
Budget constraints prevented some requested features from being implemented.

Critique of Participatory Design

| Advantages | Challenges |
| --- | --- |
| ✅ User-Centric: Ensures real-world usability & relevance. | ⚠ Time-Consuming: Requires multiple iterations and meetings. |
| ✅ Higher Adoption Rates: Users feel ownership over the final product. | ⚠ Conflicting Opinions: Difficult to satisfy all stakeholders. |
| ✅ Fewer Post-Launch Issues: Problems are identified early. | ⚠ Resource-Intensive: Needs significant investment in research & facilitation. |

🔹 Conclusion:
Participatory Design improves usability and adoption but can be slow and resource-heavy.
Organizations must balance user involvement with project timelines and costs to make it effective.

Would you like more case studies or a deeper critique on PD methods? 😊

8. A. Imagine a normal calculator except that it displays A for 0, B for 1, up to J for 9. So the number
372 would appear as DHC. Assess the effect of this on the formal transparency of the calculator.
8. B. Justify: "Is the 'back' button on a browser just like undo?"

8.A. Effect of Letter-Based Display on Formal Transparency of a Calculator

A normal calculator typically represents numbers 0-9 in their standard form. However, in this modified
calculator:

0 → A, 1 → B, 2 → C, ..., 9 → J
Example: 372 appears as DHC

Formal Transparency in HCI

Formal transparency refers to how clearly a system’s workings and outputs are understood by
users. This includes:

1. Interpretability → How easily users can read and understand results.


2. Predictability → Whether users can correctly anticipate system behavior.
3. Learnability → How quickly new users can adapt to the system.

Effects on Formal Transparency

| Factor | Impact of Letter-Based Display |
|---|---|
| Interpretability | Users unfamiliar with the A–J mapping will struggle to understand results. |
| Predictability | Users may not immediately recognize that DHC means 372, leading to confusion. |
| Learnability | Requires memorization of the mapping, slowing down adaptation. |
| Efficiency | Slower calculations due to the need for mental conversion. |
| Error-Proneness | Higher risk of mistakes when reading or entering numbers. |

🔹 Conclusion:
The formal transparency of the calculator is reduced because users must mentally map letters to

numbers, making it less intuitive and efficient. This design might be useful for encryption or
specialized use cases but is not practical for general users.
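The mapping itself is trivial to express in code; the sketch below (plain Python, function names illustrative) makes explicit the extra decode step that the user must otherwise perform mentally:

```python
# Digit-to-letter mapping of the modified calculator: 0→A, 1→B, ..., 9→J.
DIGIT_TO_LETTER = str.maketrans("0123456789", "ABCDEFGHIJ")
LETTER_TO_DIGIT = str.maketrans("ABCDEFGHIJ", "0123456789")

def encode(number):
    """Render a number the way the modified calculator displays it."""
    return str(number).translate(DIGIT_TO_LETTER)

def decode(display):
    """The mental conversion the user must perform to read a result."""
    return int(display.translate(LETTER_TO_DIGIT))

print(encode(372))    # DHC
print(decode("DHC"))  # 372
```

The one-line `decode` step is exactly the cognitive cost the letter display imposes on every reading of the screen.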

8.B. Is the 'Back' Button on a Browser Just Like Undo?

While the "Back" button in a web browser and the "Undo" function in applications share similarities,
they are fundamentally different in functionality.

Similarities

✅ Both reverse previous actions:


"Back" returns to a previously visited web page.
"Undo" reverts the last action in an application.

✅ Both improve usability:


Allow users to correct mistakes or revisit previous states.

Differences

| Feature | Back Button (Browser) | Undo Function |
|---|---|---|
| Scope | Works at page navigation level. | Works at data input or editing level. |
| State Retention | Reloads the previous page but may lose form data. | Restores the exact previous state of text or design. |
| Linear vs. Contextual | Moves through a fixed history of visited pages. | Context-aware—undoes specific user actions. |
| Redo Option | No dedicated "Redo" button (Forward is not the same). | Can usually "Redo" an undone action. |
| Affects Web Requests? | May resend requests, causing unintended actions (e.g., re-submitting a form). | Does not interact with external requests. |

🔹 Conclusion:
The "Back" button is not exactly like "Undo". While both provide reversal functionality, "Undo"
directly affects user-generated content, whereas "Back" alters navigation history without
restoring content states.
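The structural difference can be sketched in a few lines of Python: navigation history is a pointer into a linear list of pages, while undo/redo is a pair of stacks over the content itself. This is a simplified model, not any real browser's or editor's implementation:

```python
class BrowserHistory:
    """'Back' moves a pointer through visited pages; it does not restore
    page state, and visiting a new page discards the forward history."""
    def __init__(self, start):
        self.pages = [start]
        self.pos = 0
    def visit(self, url):
        self.pages = self.pages[: self.pos + 1]  # forward history is lost
        self.pages.append(url)
        self.pos += 1
    def back(self):
        if self.pos > 0:
            self.pos -= 1
        return self.pages[self.pos]

class UndoStack:
    """'Undo' reverses the last edit to the content itself and supports redo."""
    def __init__(self):
        self.done, self.undone = [], []
        self.text = ""
    def type(self, s):
        self.done.append(s)
        self.undone.clear()  # a new edit invalidates redo
        self.text += s
    def undo(self):
        if self.done:
            s = self.done.pop()
            self.undone.append(s)
            self.text = self.text[: -len(s)]
        return self.text
    def redo(self):
        if self.undone:
            s = self.undone.pop()
            self.done.append(s)
            self.text += s
        return self.text
```

Note how `visit` destroys forward history (so "Forward" is not a true redo), while `UndoStack.redo` faithfully replays the undone edit.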
Would you like a deeper explanation with real-world examples? 🚀

9. A. Illustrate the four main types of help that users may require? For each type, give an example of
a situation in which it would be appropriate.
9. B. Describe some of the different approaches to providing user support systems, with examples.

9.A. Four Main Types of Help Users May Require

When interacting with a system, users may require different types of help based on their familiarity, task
complexity, and unexpected issues. The four main types of help are:

| Type of Help | Description | Example Situation |
|---|---|---|
| 1. Procedural Help | Guides users through specific tasks by providing step-by-step instructions. | A user wants to set up email filtering rules in their email client. |
| 2. Referential Help | Provides definitions and explanations of system elements (e.g., commands, icons, or terms). | A user is confused about the meaning of a status icon in a messaging app. |
| 3. Diagnostic Help | Assists users in troubleshooting problems and suggests solutions. | A user experiences Wi-Fi connectivity issues and needs guidance to fix it. |
| 4. Context-Sensitive Help | Offers assistance based on the user’s current task or screen within the system. | A user is filling out a complex online form, and a tooltip explains the required format for a field. |

🔹 Conclusion:
Providing the right type of help at the right time improves usability, reduces frustration, and
enhances productivity.

9.B. Different Approaches to Providing User Support Systems

User support systems help users learn, troubleshoot, and optimize their experience with software or
hardware. Below are different approaches with examples:

1. Online Documentation & User Manuals

📌 Description:
A collection of guides, FAQs, and troubleshooting steps available in a digital or printed format.

✅ Example:
Microsoft Office Help Center – Provides detailed documentation on Word, Excel, and PowerPoint
usage.

⚠ Limitations:
Users may struggle to find relevant information quickly.

2. Chatbots & Virtual Assistants

📌 Description:
AI-driven systems provide automated responses to common queries.

✅ Example:
Apple Siri & Google Assistant – Help users with voice commands for tasks like setting reminders.

⚠ Limitations:
May not handle complex or unusual queries effectively.

3. Interactive Tutorials & Walkthroughs

📌 Description:
Step-by-step guides or onboarding flows that teach new users how to use a system.

✅ Example:
Duolingo’s onboarding tutorial – Introduces users to the language-learning process interactively.

⚠ Limitations:
Users may skip tutorials, leading to confusion later.

4. Tooltips & Context-Sensitive Help

📌 Description:
Pop-up messages that appear when users hover over an element or perform a specific action.

✅ Example:
Google Forms – Provides tooltips explaining different question types.

⚠ Limitations:
Too many pop-ups can become distracting.

5. Help Desks & Customer Support

📌 Description:
Live chat, phone support, or email-based assistance from human agents.

✅ Example:
Amazon Customer Service – Offers live chat and phone support for order issues.

⚠ Limitations:
Can be slow and costly for companies to maintain.

🔹 Conclusion:
Different support approaches cater to different user needs. An effective support system combines
multiple methods to provide quick, clear, and accessible assistance.
Would you like me to suggest best practices for designing a user support system? 😊

10. A. Illustrate the disadvantages of online documentation and how they can be overcome.
10. B. Discuss how systems could be made more accessible to older users.

10.A. Disadvantages of Online Documentation and Solutions

Online documentation is widely used for software guides, troubleshooting, and FAQs, but it has
several disadvantages. Below are some key drawbacks and strategies to overcome them.

1. Hard to Navigate & Information Overload

📌 Issue:
Large volumes of content make it difficult for users to find relevant information quickly.
Complex technical jargon may overwhelm users.

✅ Solution:
Implement searchable indexes, FAQs, and categorized sections.

Use simplified language and include visual aids (e.g., screenshots, videos).

2. Requires Internet Access

📌 Issue:
Users cannot access the documentation if they have no internet connection.

✅ Solution:
Provide an offline PDF version or allow users to download essential guides.
Integrate help features directly within the software (context-sensitive help).

3. Not Interactive or Personalized

📌 Issue:
Static text lacks engagement and does not adapt to user needs.

✅ Solution:
Use chatbots or interactive tutorials to guide users step by step.
Implement a search assistant that suggests relevant documentation.

4. Difficult for Non-Tech-Savvy Users

📌 Issue:
Older adults or beginners may struggle with technical terms and navigating online help.

✅ Solution:
Include video tutorials and step-by-step walkthroughs.

Offer tooltips that explain terms when users hover over them.

🔹 Conclusion:
By improving navigation, accessibility, and interactivity, online documentation can become more
user-friendly and effective.

10.B. How to Make Systems More Accessible for Older Users

Older adults often face challenges with digital interfaces due to vision decline, reduced dexterity,
and unfamiliarity with technology. Systems can be made more accessible by considering the
following factors:

1. Improve Readability

📌 Issue:
Small text and low contrast make reading difficult.

✅ Solution:
Use larger font sizes and high-contrast text (e.g., black text on a white background).
Allow users to customize font size and colors.

2. Simplify Navigation

📌 Issue:
Complex menus and deep hierarchies confuse users.

✅ Solution:
Use clear, simple menus with large, clickable buttons.
Reduce unnecessary steps in processes.

3. Reduce Required Precision for Inputs

📌 Issue:
Small buttons and touch targets are difficult for users with reduced dexterity.

✅ Solution:
Use larger buttons and increase spacing between interactive elements.
Provide voice control options for hands-free interaction.

4. Provide Error Prevention & Clear Feedback

📌 Issue:
Older users may struggle with error messages and recovering from mistakes.

✅ Solution:
Offer confirmation prompts before actions like deleting files.

Use plain-language error messages that suggest solutions.

5. Offer Alternative Interaction Methods

📌 Issue:
Typing and mouse usage may be difficult.

✅ Solution:
Support voice commands (e.g., Siri, Google Assistant).
Enable keyboard shortcuts for common actions.

🔹 Conclusion:
By simplifying UI, improving readability, and offering alternative input methods, systems can
become more inclusive and user-friendly for older adults.

Would you like recommendations on specific UI design principles for accessibility? 😊

PART B -
11. A. Describe Fitts' law with a suitable example.
11. B. Illustrate agent-based approach as a computational model for dialogue management.

11.A. Fitts' Law with Example

Definition of Fitts' Law

Fitts' Law is a predictive model in Human-Computer Interaction (HCI) that describes the time required
to move to a target area. It states:

T = a + b · log₂(D/W + 1)

Where:

T = Time to complete the movement
a, b = Empirical constants (determined experimentally)
D = Distance to the target
W = Width of the target

🔹 Key Insights:
Closer targets can be reached faster.
Larger targets are easier to click.
Narrow & far targets take more time.

Example of Fitts' Law in HCI

📌 Scenario: Clicking a Button on a Website


A large "Submit" button is easier to click than a small hyperlink.
A toolbar placed near the cursor (e.g., radial menus) speeds up interaction.

✅ Real-world Example:
MacOS Dock uses magnification effects to make icons easier to click.

🔹 Conclusion:
Fitts' Law guides UI/UX design by encouraging larger clickable elements and placing frequently
used buttons closer to where users interact.
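The formula is easy to compute directly; the constants `a` and `b` below are hypothetical placeholders, not measured values:

```python
import math

def fitts_time(a, b, distance, width):
    """Predicted movement time: T = a + b * log2(D/W + 1)."""
    return a + b * math.log2(distance / width + 1)

# Hypothetical constants (a = 0.1 s intercept, b = 0.15 s/bit slope):
near_large = fitts_time(0.1, 0.15, distance=100, width=50)  # close, wide target
far_small = fitts_time(0.1, 0.15, distance=800, width=10)   # far, narrow target
assert far_small > near_large  # narrow & far targets take more time
```

Comparing the two calls reproduces the key insight numerically: shrinking the target and moving it farther away raises the index of difficulty, and hence the predicted time.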

11.B. Agent-Based Approach for Dialogue Management

Definition

An agent-based approach in dialogue management uses intelligent agents to handle user
interactions. These agents operate based on predefined rules, machine learning, or reinforcement
learning.
🔹 Key Characteristics:
Agents act autonomously to process user input.
They manage dialogue state, context, and responses.
They adapt to user behavior over time.

Architecture of an Agent-Based Dialogue System

1️⃣ User Input → Speech or text


2️⃣ Natural Language Processing (NLP) Agent → Understands intent & extracts key information
3️⃣ Dialogue Manager Agent → Decides the next step based on the conversation context
4️⃣ Response Generation Agent → Generates an appropriate reply
5️⃣ Speech Synthesis or Text Output → Delivers the response to the user
Example: AI Chatbot for Customer Support

A user asks, "Where is my order?"
The NLP agent recognizes the intent as Order Tracking.
The Dialogue Manager Agent retrieves order details.
The Response Agent replies: "Your order will arrive by Monday."

🔹 Advantages of Agent-Based Dialogue Management:


✅ Scalability – Can handle multiple queries simultaneously.
✅ Context Awareness – Retains conversation history.
✅ Adaptability – Improves over time using AI models.
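The pipeline above can be sketched as a chain of single-responsibility functions. The hard-coded rules and field names here are illustrative stand-ins for real NLP models and backend databases:

```python
# Minimal sketch of the agent pipeline: NLP → Dialogue Manager → Response.
def nlp_agent(utterance):
    """Extracts the intent from user input (rule-based stand-in for NLP)."""
    if "order" in utterance.lower():
        return {"intent": "track_order"}
    return {"intent": "unknown"}

def dialogue_manager(state, intent):
    """Decides the next action from the intent and conversation state."""
    if intent["intent"] == "track_order":
        return "lookup_order"
    return "ask_clarification"

def response_agent(action, state):
    """Generates the reply for the chosen action."""
    if action == "lookup_order":
        return f"Your order will arrive by {state['delivery_day']}."
    return "Sorry, could you rephrase that?"

state = {"delivery_day": "Monday"}  # would come from a database in practice
intent = nlp_agent("Where is my order?")
action = dialogue_manager(state, intent)
print(response_agent(action, state))  # Your order will arrive by Monday.
```

Each function corresponds to one agent in the customer-support example; in a production system each would be an independent, possibly learning, component.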
Would you like more real-world case studies on agent-based dialogue systems? 😊

12. A. Describe the functionalities of SOAR


12. B. Produce a note on dialogue management strategies.

12.A. Functionalities of SOAR

SOAR (State, Operator, And Result) is a cognitive architecture used for problem-solving, decision-
making, and learning in artificial intelligence (AI) and cognitive modeling. It was developed to simulate
human-like intelligence in computational systems.

🔹 Functionalities of SOAR
| Functionality | Description |
|---|---|
| 1. Rule-Based Problem Solving | Uses production rules (IF-THEN logic) to make decisions based on input. |
| 2. Goal-Oriented Behavior | Works by setting subgoals to reach a larger objective, similar to human thinking. |
| 3. Knowledge Representation | Stores knowledge in long-term and short-term memory for reasoning. |
| 4. Decision Making | Uses operator selection to evaluate and apply the best action. |
| 5. Reinforcement Learning | Learns from past actions by storing successful problem-solving approaches. |
| 6. Chunking (Learning Mechanism) | Converts repeated problem-solving steps into new production rules for faster execution in future tasks. |
| 7. Multi-Tasking Capability | Can handle multiple tasks simultaneously and manage competing goals. |

🔹 Example Use Case


📌 Military Training Simulations:
SOAR helps simulate real-time decision-making for pilots in combat scenarios.
It learns from past experiences and improves strategies over time.

📌 AI Assistants:
Used in virtual assistants to enhance context-aware reasoning.

🔹 Conclusion:
SOAR is an efficient problem-solving AI framework used in robotics, simulations, and intelligent
assistants to mimic human decision-making.

12.B. Dialogue Management Strategies

A Dialogue Manager (DM) controls the flow of conversation in spoken dialogue systems, chatbots,
and AI assistants. Dialogue management strategies define how systems respond to user inputs.

🔹 Major Dialogue Management Strategies


| Strategy | Description | Example Use Case |
|---|---|---|
| 1. Finite-State (Rule-Based) Approach | Predefined paths for conversation, where each state leads to a fixed response. | IVR (Interactive Voice Response) systems in call centers. |
| 2. Frame-Based (Slot-Filling) Approach | Collects required information using predefined "slots" (e.g., name, date, time). | Booking systems (e.g., flight reservations). |
| 3. Statistical / Probabilistic Models | Uses machine learning models like Hidden Markov Models (HMMs) to predict the next response. | AI customer support chatbots. |
| 4. Reinforcement Learning (RL) Approach | Adapts to user behavior using rewards and penalties to improve dialogue efficiency. | Personalized virtual assistants (e.g., Alexa, Google Assistant). |
| 5. Hybrid Approach | Combines rule-based and ML-based methods for better adaptability. | Intelligent AI chatbots (e.g., ChatGPT). |

🔹 Example Scenario: Airline Booking System


User: "I want to book a flight to New York."
Frame-Based Approach: "Which date and class would you like?"
Statistical Approach: Predicts missing details based on past bookings.
Reinforcement Learning Approach: Learns user preferences and suggests the best options automatically.

🔹 Conclusion:
Each strategy has advantages and is chosen based on complexity, flexibility, and user interaction
requirements.
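The frame-based (slot-filling) strategy from the airline scenario can be sketched in a few lines of Python; the slot names and prompts are illustrative:

```python
# Sketch: frame-based dialogue management keeps asking until every
# required slot in the frame is filled.
REQUIRED_SLOTS = ["destination", "date", "travel_class"]
PROMPTS = {
    "destination": "Where would you like to fly?",
    "date": "Which date would you like to travel?",
    "travel_class": "Which class would you like?",
}

def next_prompt(frame):
    """Return the question for the first unfilled slot, or None when the
    frame is complete and the booking can proceed."""
    for slot in REQUIRED_SLOTS:
        if slot not in frame:
            return PROMPTS[slot]
    return None

frame = {"destination": "New York"}  # parsed from "book a flight to New York"
print(next_prompt(frame))  # Which date would you like to travel?
frame["date"] = "2025-06-01"
frame["travel_class"] = "economy"
print(next_prompt(frame))  # None: all slots filled, proceed to booking
```

A statistical or RL-based manager would replace the fixed prompt order with a learned policy, but the frame itself remains the shared state.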
Would you like more comparisons between these strategies? 🚀

13. A. Produce one HTA for making a phone call.


13. B. Produce the diagrammatic form from an HTA of vacuum cleaner.

13.A. Hierarchical Task Analysis (HTA) for Making a Phone Call

Hierarchical Task Analysis (HTA) is a method used in Human-Computer Interaction (HCI) to break
down a task into a structured hierarchy of sub-tasks. Below is an HTA for making a phone call using a

mobile phone.

HTA for Making a Phone Call

Goal: Make a phone call

1. Unlock the phone


1.1 Press the power button
1.2 Use a password, fingerprint, or face recognition
2. Open the dialer or contacts app
2.1 Locate and tap the phone app
2.2 Decide how to input the number
Option 1: Manually dial the number
Option 2: Select a saved contact
3. Dial the number or select a contact
3.1 Use the keypad to enter the number OR
3.2 Scroll through contacts and tap a name
4. Initiate the call
4.1 Tap the call button
5. Wait for the recipient to answer
5.1 Listen for ringing or busy tone
5.2 If unanswered, decide whether to retry or leave a voicemail

6. Engage in conversation
7. End the call
7.1 Tap the end-call button

13.B. Diagrammatic Form of HTA for Using a Vacuum Cleaner

A diagrammatic Hierarchical Task Analysis (HTA) visually represents the breakdown of a task. Below is
an HTA for using a vacuum cleaner in a structured format.

HTA for Using a Vacuum Cleaner (Diagrammatic Form)

0. Use a vacuum cleaner
├── 1. Prepare the vacuum
│ ├── 1.1 Plug in the power cord
│ ├── 1.2 Check the dust bag/container
│ ├── 1.3 Adjust settings (if needed)
│ └── 1.4 Position the vacuum at the starting point
├── 2. Start vacuuming
│ ├── 2.1 Turn on the vacuum
│ ├── 2.2 Move the vacuum over the surface
│ ├── 2.3 Adjust for different surfaces (e.g., carpet, hardwood)
│ └── 2.4 Repeat until the area is clean
├── 3. Empty or clean the vacuum (if required)
│ ├── 3.1 Turn off and unplug the vacuum
│ ├── 3.2 Remove and empty the dust bag/container
│ ├── 3.3 Clean the filter if necessary
│ └── 3.4 Reassemble the vacuum
└── 4. Store the vacuum
├── 4.1 Wrap the power cord
├── 4.2 Store the vacuum in its designated place
└── 4.3 Ensure all accessories are stored properly
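Because an HTA is a tree, the same hierarchy can be held as a nested data structure and the numbered outline regenerated programmatically. This Python sketch covers a trimmed subset of the tasks above:

```python
# Sketch: an HTA as nested (name, children) tuples, with a recursive walk
# that reproduces the standard HTA numbering (0 at the root).
HTA = ("Use a vacuum cleaner", [
    ("Prepare the vacuum", [
        ("Plug in the power cord", []),
        ("Check the dust bag/container", []),
    ]),
    ("Start vacuuming", [
        ("Turn on the vacuum", []),
        ("Move the vacuum over the surface", []),
    ]),
    ("Store the vacuum", [
        ("Wrap the power cord", []),
    ]),
])

def hta_lines(node, prefix="0"):
    """Walk the task tree and emit the numbered outline, one line per task."""
    name, children = node
    lines = [f"{prefix}. {name}"]
    for i, child in enumerate(children, 1):
        sub = str(i) if prefix == "0" else f"{prefix}.{i}"
        lines.extend(hta_lines(child, sub))
    return lines

print("\n".join(hta_lines(HTA)))
```

Storing the analysis as data makes it easy to regenerate diagrams, count tasks, or check plan coverage automatically.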

Would you like me to generate a visual diagram for this HTA? 😊

15. A. Report an instruction manual for an application software that you are using for the first time.
15. B. Compare different interaction objects that could implement this kind of dialog. Use examples
from existing toolkits.

15.A. Instruction Manual for First-Time Use of Google Docs

Google Docs - User Guide for Beginners

Google Docs is a cloud-based word processor that allows users to create, edit, and collaborate on
documents online. Below is a step-by-step guide to help first-time users get started.

🔹 1. Getting Started
1.1. Accessing Google Docs

Open a web browser and go to docs.google.com.


Sign in with a Google Account (or create one if you don’t have it).

1.2. Creating a New Document

Click on Blank Document to start a new file.


Or select a template from the gallery for preformatted designs.

🔹 2. Editing and Formatting Text


2.1. Entering and Editing Text

Click anywhere in the document and start typing.


Use Backspace/Delete to remove text.

2.2. Formatting Options

Use the toolbar for common formatting options:


Bold (B), Italics (I), Underline (U)
Font Style & Size – Select from the dropdown menu.
Text Color – Click the A icon to choose a color.

🔹 3. Saving and Auto-Save Feature
Google Docs automatically saves changes in real time.
Check the top-left corner for “Saved to Drive” confirmation.

🔹 4. Collaboration and Sharing


4.1. Sharing a Document

Click Share (top-right).


Enter email addresses or create a shareable link.
Set permissions: Viewer, Commenter, or Editor.

4.2. Real-Time Collaboration

Multiple users can edit the document simultaneously.


Changes are highlighted with the editor’s name.

🔹 5. Downloading and Exporting


Click File > Download As to save in PDF, Word, or other formats.

🔹 Conclusion: Google Docs is an easy-to-use document editor with real-time collaboration and cloud-
based storage.
Would you like more advanced features included, such as voice typing or extensions? 😊

15.B. Comparison of Different Interaction Objects for Dialog Implementation

Interaction objects are UI components that allow users to interact with a system. Different objects can
be used to implement dialog-based interactions in applications.

| Interaction Object | Description | Example from Toolkits | Use Case in Dialogs |
|---|---|---|---|
| Text Field | Allows users to enter text manually. | Java Swing (`JTextField`), HTML (`<input type="text">`) | Used for user input fields in forms or chatbots. |
| Drop-down List | Provides a list of selectable options. | HTML (`<select>`), JavaFX (`ComboBox`) | Used for dialog-based selections, e.g., choosing a file format. |
| Radio Buttons | Allows users to select one option from a group. | Android UI (`RadioButton`), HTML (`<input type="radio">`) | Used in preference selection dialogs. |
| Buttons | Triggers actions on click. | Android (`Button`), JavaFX (`Button`), HTML (`<button>`) | Used for "OK", "Cancel" in confirmation dialogs. |
| Message Boxes | Displays system messages to users. | Java (`JOptionPane`), Tkinter (`messagebox`) | Used for alerts, confirmations, and error messages. |
| Voice Interface | Allows interaction using voice commands. | Google Assistant SDK, Alexa Skills Kit | Used in voice-based dialogue systems. |

🔹 Example: Using Different Interaction Objects in a Software Dialog


📌 Scenario: A file-saving dialog in an application
Text Field → To enter the file name
Drop-down List → To select the file format (e.g., PDF, DOCX, TXT)
Radio Buttons → To choose whether to save as a new file or overwrite
Buttons → "Save" and "Cancel" options

🔹 Conclusion
Different interaction objects serve various dialog needs. The choice depends on user experience,
efficiency, and clarity.
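As a concrete illustration, the HTML equivalents from the table can assemble the file-saving dialog from the scenario. This is a toolkit-neutral sketch; the field names are illustrative:

```python
# Sketch: the file-saving dialog built from the HTML interaction objects
# listed in the table (text field, drop-down, radio buttons, buttons).
def save_dialog_html():
    """Return the markup for a minimal file-saving dialog."""
    return """\
<form>
  <label>File name: <input type="text" name="filename"></label>
  <label>Format:
    <select name="format">
      <option>PDF</option><option>DOCX</option><option>TXT</option>
    </select>
  </label>
  <label><input type="radio" name="mode" value="new" checked> Save as new file</label>
  <label><input type="radio" name="mode" value="overwrite"> Overwrite existing file</label>
  <button type="submit">Save</button>
  <button type="button">Cancel</button>
</form>"""

print(save_dialog_html())
```

The same structure maps one-to-one onto Swing, JavaFX, or Android widgets; only the concrete classes change.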
Would you like a diagrammatic representation of a dialog system using these interaction objects? 🚀

16. A. Report a manual page for making a cup of coffee. Assume your user has no experience but
will recognize a spoon.
16. B. Discuss what knowledge is needed to build an adaptive help system.

16.A. Manual Page for Making a Cup of Coffee

User Manual: How to Make a Cup of Coffee


For Beginners

🔹 Introduction
This manual will guide you step by step to make a cup of coffee. No prior experience is needed, but you
should recognize a spoon.

🔹 Items You Need


✔ A coffee mug
✔ A spoon
✔ Instant coffee powder OR ground coffee
✔ Hot water
✔ Sugar (optional)
✔ Milk (optional)
🔹 Step-by-Step Instructions
Option 1: Using Instant Coffee (Easiest Method)

1️⃣ Take a clean coffee mug.


2️⃣ Take one spoon of instant coffee powder and put it into the mug.
3️⃣ Boil water until hot (not necessary to be boiling).
4️⃣ Pour hot water into the mug (about ¾ full).
5️⃣ Stir well with a spoon.
6️⃣ Add sugar and milk if you want (optional).
7️⃣ Stir again and enjoy your coffee! ☕
Option 2: Using Ground Coffee (Requires a Coffee Maker or Filter)

1️⃣ Boil water in a kettle.


2️⃣ Place a coffee filter in a coffee maker (or use a manual filter).
3️⃣ Add one spoon of ground coffee per cup.
4️⃣ Slowly pour hot water over the coffee filter.
5️⃣ Let the coffee drip into a mug below.
6️⃣ Add sugar or milk as per taste.
7️⃣ Stir and drink! ☕
🔹 Safety Tips
⚠ Be careful with hot water to avoid burns.
⚠ Store coffee powder in a dry place.
🔹 Conclusion
You have now learned two ways to make coffee. Practice and enjoy! 🎉
Would you like a diagram or images for better clarity? 😊
16.B. Knowledge Needed to Build an Adaptive Help System

An adaptive help system provides personalized assistance to users by adjusting based on their
experience, needs, and behavior. To build such a system, different types of knowledge are required.

🔹 Key Knowledge Areas for an Adaptive Help System


| Knowledge Type | Description | Example in Help System |
|---|---|---|
| User Knowledge | Understanding the user’s skill level, preferences, and past interactions. | A system that offers basic tips to beginners and advanced shortcuts to experts. |
| Domain Knowledge | Knowledge of the system, software, or tool being explained. | A help system for photo-editing software needs expertise in image processing. |
| Contextual Knowledge | Understanding the situation in which the help is requested. | A mobile app should give touchscreen-based help, while a PC app should show keyboard shortcuts. |
| Task Knowledge | Knowledge of user goals and tasks they need to complete. | A spreadsheet app offering help on how to create charts when a user selects data. |
| Linguistic Knowledge | Ability to generate clear, human-like explanations. | AI-driven chatbots providing step-by-step guides in natural language. |
| Adaptive Learning | Machine learning models that track user behavior and improve over time. | A help system that remembers user preferences and suggests the most relevant tips. |

🔹 Example: Adaptive Help in Microsoft Word


A beginner sees a tutorial on basic formatting.

An advanced user is shown keyboard shortcuts and macros.


The system remembers frequent actions and suggests faster methods.

🔹 Conclusion
Building an adaptive help system requires knowledge of users, tasks, and AI-driven learning. The
goal is to reduce frustration and improve usability.
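The rule-based core of such a system can be sketched in a few lines: it combines user knowledge (skill level), task knowledge (current task), and a crude adaptive-learning step. The rules and the promotion threshold are invented for illustration:

```python
# Sketch: adaptive help = (user model, task context) → help content.
HELP_RULES = {
    ("beginner", "formatting"): "Tutorial: how to bold, italicise and underline text.",
    ("expert", "formatting"): "Shortcut: Ctrl+B / Ctrl+I / Ctrl+U, or record a macro.",
}

def select_help(user_level, task):
    """Pick the help text matching the user model and the current task."""
    return HELP_RULES.get((user_level, task), "See the help centre for more topics.")

class UserModel:
    """Adaptive-learning stand-in: after 10 uses of a feature, the system
    treats the user as an expert and switches to shortcut-style help."""
    def __init__(self):
        self.counts = {}
        self.level = "beginner"
    def observe(self, task):
        self.counts[task] = self.counts.get(task, 0) + 1
        if self.counts[task] >= 10:
            self.level = "expert"

user = UserModel()
print(select_help(user.level, "formatting"))  # beginner tutorial
for _ in range(10):
    user.observe("formatting")
print(select_help(user.level, "formatting"))  # expert shortcuts
```

A real system would replace the lookup table with domain knowledge and the counter with a learned user model, but the separation of concerns stays the same.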

Would you like an example prototype or UI design for an adaptive help system? 🚀

PART C -
17. A. Summarise the software and toolkits for spoken dialogue systems development.
17. B. Assess skilled and novice operators in a familiar domain, for example, touch and 'hunt-and-peck'
typists, expert and novice game players, or expert and novice users of a computer application.
What differences can you discern between their behaviors?

17.A. Summary of Software and Toolkits for Spoken Dialogue Systems Development

Spoken Dialogue Systems (SDS) enable human-computer interaction through speech. Developers use
software frameworks and toolkits to build these systems efficiently.

🔹 Popular Software & Toolkits for SDS Development


| Toolkit/Software | Description | Key Features |
|---|---|---|
| Google Dialogflow | Cloud-based NLP service for building conversational agents. | Speech-to-text & text-to-speech integration; pre-built agents & machine learning-based intent recognition; multi-language support. |
| Amazon Lex | AI service for voice and chat interfaces (used in Alexa). | Deep learning models for speech recognition; integration with AWS services; supports text and voice interactions. |
| Microsoft Bot Framework | SDK for building intelligent chatbots with voice support. | Integration with Azure Cognitive Services; can be deployed on multiple platforms (Skype, Teams, etc.). |
| Rasa | Open-source framework for conversational AI. | On-premise deployment for privacy; uses machine learning for dialogue flow; supports customizable NLU models. |
| CMU Sphinx (PocketSphinx) | Open-source speech recognition toolkit. | Works offline, lightweight; useful for embedded systems and mobile apps. |
| Julius | Open-source large-vocabulary speech recognition engine. | Real-time recognition; supports multiple languages. |
| Kaldi | Toolkit for speech recognition research. | Highly customizable; used in academic and industry research. |
| Wit.ai (Meta) | AI-powered conversational interface builder. | Integrates with Facebook Messenger; supports voice and text-based interactions. |

🔹 Choosing the Right Toolkit


For Beginners → Google Dialogflow, Amazon Lex
For Developers Seeking Customization → Rasa, Microsoft Bot Framework
For Speech Recognition Research → Kaldi, CMU Sphinx

Would you like a comparison chart for easy selection? 😊


17.B. Assessment of Skilled vs. Novice Operators in a Familiar Domain

The behavior of skilled and novice operators varies in efficiency, error rate, and cognitive load.
Below is an analysis using examples.

🔹 Key Differences Between Skilled and Novice Users


| Aspect | Skilled Operator | Novice Operator |
|---|---|---|
| Speed | Performs tasks quickly and smoothly. | Slower, takes time to understand actions. |
| Error Rate | Few errors due to experience. | High error rate, relies on trial and error. |
| Muscle Memory | Uses automatic, fluid actions. | Conscious effort needed for every step. |
| Cognitive Load | Uses shortcuts and efficient workflows. | Focuses on basic steps, struggles with advanced functions. |
| Adaptability | Adjusts quickly to new features. | Struggles with changes, needs guidance. |
| Confidence Level | High confidence, minimal hesitation. | Low confidence, relies on tutorials. |

🔹 Example 1: Touch Typists vs. Hunt-and-Peck Typists


Touch Typists (Skilled Users)
Type without looking at the keyboard.
Use all ten fingers efficiently.
Achieve speeds of 70+ words per minute.
Rarely make errors due to muscle memory.

Hunt-and-Peck Typists (Novice Users)


Look at the keyboard while typing.
Use only a few fingers, often index fingers.
Type at 20-30 words per minute.
Frequent backspacing due to errors.

🔹 Example 2: Expert vs. Novice Gamers


Expert Gamers
React quickly to game situations.
Memorize controls and maps.
Use advanced strategies efficiently.
Have precise hand-eye coordination.

Novice Gamers
Take time to learn controls.
Often get stuck on levels.
Make random moves instead of strategic play.
Have delayed reaction times.

🔹 Example 3: Expert vs. Novice Computer Users


Expert Users
Use keyboard shortcuts (e.g., Ctrl+C, Ctrl+V).
Navigate the system with minimal clicks.
Can troubleshoot issues independently.

Novice Users

Rely on the mouse for every task.
Click multiple times to find the right option.
Need help from others for troubleshooting.

🔹 Conclusion
The main difference is experience-driven efficiency. Skilled users perform tasks faster with fewer
errors, while novices take longer and require guidance.
Would you like a graphical comparison or a video explanation for better understanding? 🚀

18. A. Describe (in words as well as graphically) the interaction framework introduced in Human-
Computer Interaction. Show how it can be used to explain problems in the dialog between a user
and a computer.
18. B. Summarise functionalities of BIP.

18.A. Interaction Framework in Human-Computer Interaction (HCI)

The Interaction Framework in HCI describes how users and computers communicate through inputs
and outputs. It was introduced by Dix et al. and consists of four components:

1. User – The person interacting with the system.


2. Input – The way users provide commands to the system (keyboard, mouse, voice, etc.).
3. System – The software/hardware processing the user’s commands.
4. Output – The feedback given by the system (display updates, sound, notifications, etc.).

🔹 Graphical Representation of Interaction Framework


+------------+       Input        +------------+
|    User    | -----------------> |   System   |
+------------+                    +------------+
      ^                                  |
      |                                  |
      |                                  v
+------------+       Output       +------------+
|   World    | <----------------- | Interface  |
+------------+                    +------------+

The User performs an action (e.g., clicking a button).


The System processes the request.
The Output appears (e.g., a new window opens).
The User perceives the output and continues interaction.

🔹 How This Explains Problems in User-Computer Dialogue
1. Gulf of Execution

Issue: The user does not know how to interact with the system.
Example: A new user doesn’t understand which button to press in a software application.

Solution: Provide clear affordances (e.g., labeled buttons, tooltips).

2. Gulf of Evaluation

Issue: The user does not understand the system's response.
Example: After submitting a form, the system shows no confirmation message, leaving the user
confused.

Solution: Provide immediate feedback (e.g., “Form submitted successfully”).
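A minimal sketch of such feedback, using a hypothetical `submit_form` handler (not a real framework API): every path returns a message the user can see, so the system's state is never left ambiguous.

```python
def submit_form(fields):
    """Validate a form and always return visible feedback."""
    missing = [name for name, value in fields.items() if not value]
    if missing:
        # Error path: tell the user exactly what to fix.
        return f"Error: please fill in {', '.join(missing)}"
    # Success path: confirm that the action completed.
    return "Form submitted successfully"

print(submit_form({"name": "Ada", "email": ""}))     # error message names the field
print(submit_form({"name": "Ada", "email": "a@b"}))  # confirmation message
```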

Would you like real-world examples of these interaction problems? 😊


18.B. Summary of Functionalities of BIP (Behavior, Interaction, Priority)

BIP (Behavior, Interaction, Priority) is a component-based framework for building complex, reactive
systems. It was developed at the Verimag laboratory to design and analyze real-time, embedded, and
distributed systems.

🔹 Functionalities of BIP

| Functionality | Description | Example |
|---|---|---|
| Component-Based Design | Systems are built from independent components. | A robot control system with separate components for sensors, movement, and decision-making. |
| Formal Modeling | Uses formal methods for system verification and correctness. | Ensures safety in critical systems like medical devices. |
| Interaction Modeling | Defines how components communicate and interact. | Traffic-light system components interact to prevent collisions. |
| Priority Management | Ensures important tasks are executed first. | In an autonomous car, collision detection takes higher priority than playing music. |
| Concurrency Handling | Manages multiple tasks running simultaneously. | Cloud computing systems running multiple services at once. |
| Code Generation | Generates executable code from models. | Converts a visual model into working software. |

🔹 Where BIP is Used


✔ Embedded Systems – Real-time applications in automobiles, robotics.
✔ Cyber-Physical Systems – Ensuring safety and efficiency in complex systems.
✔ Distributed Systems – Managing interactions in IoT networks, cloud services.
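The priority idea can be illustrated with a small sketch (plain Python, not the actual BIP modeling language, which has its own syntax and toolset): when several component interactions are enabled at once, the highest-priority one fires. The component names are invented for the autonomous-car example above.

```python
def choose_interaction(enabled):
    """Among the interactions currently enabled, fire the highest-priority one."""
    return max(enabled, key=lambda i: i["priority"])

# Two interactions enabled simultaneously; priority resolves the conflict.
enabled = [
    {"name": "play_music",        "priority": 1},
    {"name": "collision_braking", "priority": 10},
]
print(choose_interaction(enabled)["name"])  # -> collision_braking
```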
Would you like a real-world case study on how BIP is used in industry? 🚀

19. A. Relate the areas that may benefit most from virtual reality visualization techniques.
19. B. Describe the factors likely to delay the widespread use of video in interfaces.

19.A. Areas That Benefit Most from Virtual Reality (VR) Visualization Techniques

Virtual Reality (VR) transforms industries by providing immersive, interactive, and real-time
simulations. The following areas benefit most from VR visualization techniques:

🔹 Key Areas Benefiting from VR

| Industry/Field | How VR Helps | Example |
|---|---|---|
| Healthcare & Medical Training | Simulates surgeries and diagnoses for training doctors. | VR surgery training for medical students. |
| Education & Training | Creates immersive learning experiences for students. | Virtual history tours in classrooms. |
| Architecture & Real Estate | Helps visualize buildings before construction. | 3D walkthroughs of properties. |
| Manufacturing & Engineering | Tests product designs in a virtual space. | Car prototypes in VR before production. |
| Military & Defense | Provides realistic combat and emergency-response training. | Flight simulators for pilots. |
| Gaming & Entertainment | Enhances interactive experiences for users. | VR games like Half-Life: Alyx. |
| Tourism & Cultural Heritage | Allows users to explore destinations virtually. | Virtual museum tours. |
| Retail & E-commerce | Offers virtual shopping experiences. | Try-before-you-buy AR/VR stores. |
| Psychology & Therapy | Helps treat PTSD, phobias, and anxiety. | VR exposure therapy for fear of heights. |

🔹 Why These Areas Benefit from VR


✔ Realism – Provides a near-real experience for training and simulations.
✔ Cost Efficiency – Reduces real-world training expenses (e.g., flight simulators).
✔ Safety – Prevents harm during high-risk training (e.g., surgery, combat).
Would you like a specific VR case study in one of these areas? 😊

19.B. Factors Likely to Delay the Widespread Use of Video in Interfaces

While video-based interfaces (e.g., video conferencing, video tutorials, interactive video UIs) offer rich
interaction, their widespread adoption faces several challenges:

🔹 Key Factors Delaying Widespread Use of Video in Interfaces

| Factor | Why It Delays Adoption | Example |
|---|---|---|
| High Bandwidth Requirement | Video streaming needs fast internet, which is not always available. | Poor-quality video calls on slow networks. |
| Storage Issues | Videos require large storage space compared to text/images. | Companies struggle with cloud storage costs. |
| Processing Power | Real-time video processing demands high-performance GPUs and CPUs. | Older devices lag during video calls. |
| Accessibility Challenges | Videos are not easily searchable like text. | Visually impaired users may struggle without proper captions. |
| Privacy & Security Concerns | Video-based systems risk data leaks and hacking. | Deepfake videos raise security issues. |
| Cost of Production | High-quality videos require expensive production equipment. | Companies avoid video-based help systems due to cost. |
| User Preferences | Not all users prefer video over text-based interaction. | Many skip video tutorials and prefer written guides. |

🔹 Possible Solutions to Overcome These Issues


✔ 5G & Better Compression → Faster internet speeds for smoother video.
✔ Cloud Storage Optimization → Reducing storage costs for large-scale adoption.
✔ AI-based Search in Videos → Making video content more searchable & accessible.
✔ Stronger Privacy Regulations → Addressing security concerns with end-to-end encryption.
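A rough back-of-the-envelope calculation makes the bandwidth/storage factor concrete. The bitrate and page size below are ballpark assumptions (roughly 720p streaming quality and a plain-text help page), not measured figures:

```python
TEXT_PAGE_BYTES = 4_000        # assumed ~4 KB for a plain-text help page
VIDEO_BITRATE_BPS = 2_500_000  # assumed ~2.5 Mbit/s for 720p streaming video

def video_bytes(seconds, bitrate_bps=VIDEO_BITRATE_BPS):
    """Approximate size in bytes of a video clip at a constant bitrate."""
    return seconds * bitrate_bps // 8

one_minute = video_bytes(60)
print(f"60 s of 720p video: {one_minute / 1e6:.1f} MB")
print(f"Equivalent number of text pages: {one_minute // TEXT_PAGE_BYTES}")
```

Under these assumptions a single minute of video costs as much storage and transfer as thousands of text pages, which is why compression and 5G matter so much for adoption.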
Would you like an example of an interface where video improved usability? 🚀

20. A. Produce a rich picture showing the problem situation for an Institute Management System
(you can use any format that you find helpful).
20. B. Produce a taxonomy using the TDH notation for the objects teapot, mug, soup bowl, plate,
spoon, table knife, cook's knife.

20.A. Rich Picture for an Institute Management System

A Rich Picture is a diagram that represents the problem situation of a system, showing different
stakeholders, processes, and issues. Below is a structured breakdown of what should be included in a
rich picture for an Institute Management System (IMS):

🔹 Key Elements to Include in the Rich Picture


1. Stakeholders
Students (Enrollments, Fees, Attendance, Results)
Faculty (Course Management, Scheduling, Evaluations)
Administration (Finance, HR, Infrastructure, Policies)
Parents (Fee Payments, Student Performance Reports)

Government/Regulatory Bodies (Compliance, Accreditation)


2. Processes
Admissions & Enrollment
Course Scheduling
Attendance & Assessments
Fee Management

Library & Resource Management


Hostel & Transport Facilities
Examination & Results Processing
3. Problems & Challenges
Manual Record-Keeping → Risk of Errors

Data Security Concerns → Privacy Issues


Poor Communication → Lack of Transparency
System Downtime → Affects Student Services

🔹 Rich Picture Diagram (Conceptual Representation)

```text
              +-----------------------+
              | Institute Management  |
              |     System (IMS)      |
              +-----------------------+
                          |
          +---------------+---------------+
          |               |               |
    +----------+     +---------+     +---------+
    | Students |     | Faculty |     |  Admin  |
    +----------+     +---------+     +---------+
          |               |               |
   +------------+    +---------+   +----------------+
   | Enrollment |    | Exams   |   | Fee Management |
   | Attendance |    | Results |   | HR & Payroll   |
   +------------+    +---------+   +----------------+
```

Would you like a more detailed digital version of the rich picture? 😊
20.B. Taxonomy Using TDH Notation for Kitchen Objects

TDH (Task Descriptive Hierarchy) notation, described in Dix et al., is used to classify objects and
actions into a taxonomy of generic categories. For each object, the classification here records:

T (Task-related properties): how the object is used.
D (Descriptive properties): shape, material, and other physical attributes.
H (Hierarchical relationships): how objects are grouped together.

🔹 TDH Notation for Given Objects

| Object | T (Task) | D (Description) | H (Hierarchy) |
|---|---|---|---|
| Teapot | Holds and pours tea | Ceramic/metal, handle, spout | Kitchenware → Drinkware |
| Mug | Holds hot beverages | Cylindrical, handle | Kitchenware → Drinkware |
| Soup Bowl | Holds soup/liquids | Round, deep | Kitchenware → Serveware |
| Plate | Holds solid food | Flat, round | Kitchenware → Serveware |
| Spoon | Scoops/stirs food | Long handle, curved end | Kitchenware → Cutlery |
| Table Knife | Cuts soft food | Serrated edge, handle | Kitchenware → Cutlery |
| Cook's Knife | Cuts meat/vegetables | Sharp blade, handle | Kitchenware → Cutlery |

🔹 TDH Classification Diagram (Example)

```text
Kitchenware
├── Drinkware
│   ├── Teapot       (T: Pouring,           D: Handle/Spout,  H: Drinkware)
│   └── Mug          (T: Holding,           D: Cylindrical,   H: Drinkware)
├── Serveware
│   ├── Soup Bowl    (T: Holding soup,      D: Round/Deep,    H: Serveware)
│   └── Plate        (T: Holding food,      D: Flat/Round,    H: Serveware)
└── Cutlery
    ├── Spoon        (T: Scooping,          D: Curved,        H: Cutlery)
    ├── Table Knife  (T: Cutting soft food, D: Serrated Edge, H: Cutlery)
    └── Cook's Knife (T: Chopping,          D: Sharp Blade,   H: Cutlery)
```
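The same grouping can be held in an ordinary data structure with a lookup helper. This is a plain programming illustration of the hierarchy, not formal TDH notation:

```python
# Branches of the Kitchenware taxonomy, mirroring the diagram above.
TAXONOMY = {
    "Drinkware": ["Teapot", "Mug"],
    "Serveware": ["Soup Bowl", "Plate"],
    "Cutlery":   ["Spoon", "Table Knife", "Cook's Knife"],
}

def classify(obj):
    """Return the branch of Kitchenware an object belongs to."""
    for branch, members in TAXONOMY.items():
        if obj in members:
            return f"Kitchenware -> {branch}"
    return "Unknown"

print(classify("Mug"))           # -> Kitchenware -> Drinkware
print(classify("Cook's Knife"))  # -> Kitchenware -> Cutlery
```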

Would you like a visual diagram for the TDH classification? 🚀

21. A. Justify factors would you take into account if you were designing a website aimed to interview
a person whose age is more than 65.
21. B. Describe in detail the things that the window manager and applications perform during the
processing of the mouse click on a write-up opened in word processor.

21.A. Factors to Consider When Designing a Website for Users Aged 65+

When designing a website for elderly users (65+), several accessibility, usability, and cognitive
factors must be considered to ensure a smooth user experience.

🔹 Key Factors for Website Design for Elderly Users

| Factor | Why It's Important | Best Practices |
|---|---|---|
| Text Size & Readability | Seniors may have reduced vision. | Use large fonts (16pt+), high contrast (black text on white), and simple fonts (Arial, Verdana). |
| Navigation Simplicity | Complex menus may confuse elderly users. | Use clear labels, large buttons, and easy-to-find navigation. |
| Minimal Cognitive Load | Too much information can be overwhelming. | Avoid clutter; keep content short and to the point. |
| Keyboard & Voice Support | Some seniors may struggle with precise mouse movements. | Ensure keyboard navigation and voice-control compatibility. |
| Error Prevention & Recovery | Mistakes should be easy to undo. | Use clear confirmation messages and undo options. |
| Slow & Clear Interactions | Reflexes may be slower. | Avoid time-limited actions and give users extra time to respond. |
| Familiar Design Patterns | Elderly users may not be tech-savvy. | Stick to common UI elements (buttons, menus, layouts) they recognize. |

🔹 Additional Considerations for an Interview Website


✔ Large, Easy-to-Click Buttons – For "Start Interview", "End Call", etc.
✔ Clear Instructions – Step-by-step guides for joining the interview.
✔ Simple Video Chat Interface – Use Zoom-like or Skype-like layouts.
✔ Technical Support Option – Quick help section for troubleshooting.
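Text contrast, one of the readability factors above, can even be checked programmatically. The sketch below implements the WCAG 2.x relative-luminance and contrast-ratio formulas; the sample colours are illustrative:

```python
def _linear(c):
    """Convert an sRGB channel (0-255) to a linear-light value (WCAG 2.x)."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """Relative luminance of an (R, G, B) colour."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colours, from 1:1 up to 21:1."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(f"black on white: {contrast_ratio((0, 0, 0), (255, 255, 255)):.1f}:1")
print(f"grey on white:  {contrast_ratio((150, 150, 150), (255, 255, 255)):.1f}:1")
```

Black on white reaches the maximum 21:1, while mid-grey on white falls below the 4.5:1 minimum that WCAG level AA requires for normal text, which is exactly why high contrast matters for older users.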
Would you like an example wireframe or layout for such a website? 😊

21.B. Mouse Click Processing in a Word Processor (Like MS Word)

When you click the mouse on a write-up in a word processor, several background processes occur.
The window manager and application (MS Word, Google Docs, etc.) both perform distinct roles.

🔹 Step-by-Step Breakdown of Processing a Mouse Click


1. Window Manager's Role (Operating System Level)

The window manager (part of the OS, like Windows/macOS/Linux) first intercepts the mouse click
before passing it to the word processor.

- Detect Mouse Click → the OS captures the event (e.g., `WM_LBUTTONDOWN` on Windows).
- Identify the Active Window → it checks which window was clicked (Word, Notepad, browser, etc.).
- Pass Control to the Application → if Word is active, it sends the event to MS Word's event
  handler.

2. Word Processor’s Role (Application Level)

Once MS Word receives the mouse click event, it processes it as follows:

Cursor Placement
- Determines where the cursor should be placed based on the click location.
- If clicked inside text → moves the insertion point (caret) to the nearest character.
- If clicked outside text → does nothing, or selects the nearest element.

Selection Handling
- Double-click → selects a word.
- Triple-click → selects an entire paragraph.

Text Editing Functions
- If clicked inside formatted text, Word checks the active font, color, and bold/italic status.
- If clicked on a table, it determines which cell was clicked.

Interaction with Other UI Elements
- If clicked on a menu or toolbar button, the event is redirected to that element.
- If clicked on a hyperlink, it may trigger a web browser to open the link.

Rendering & Feedback
- The cursor is redrawn at the new location.
- If necessary, the page scrolls to keep the cursor visible.

🔹 Simplified Flowchart of Mouse Click Processing

```text
User Clicks Mouse
        |
        v
[OS Captures Event]
        |
        v
[Window Manager Identifies Active Window]
        |
        v
[Passes Event to Word Processor]
        |
        v
[Word Determines Click Location]
        ├──> If on Text        → Move Cursor
        ├──> If Double-Click   → Select Word
        ├──> If Triple-Click   → Select Paragraph
        ├──> If on UI Element  → Open Menu/Toolbar
        └──> If on Hyperlink   → Open Browser
        |
        v
[Word Updates UI & Renders Changes]
```
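This routing can be sketched as a toy dispatcher: the "window manager" forwards the event to the active window's handler, which decides what to do based on hit region and click count. The window names, regions, and return strings are invented for illustration and do not correspond to real OS APIs:

```python
def word_processor_handler(event):
    """Application level: decide what a click means inside the document."""
    region, clicks = event["region"], event["clicks"]
    if region == "text":
        if clicks == 2:
            return "select word"
        if clicks == 3:
            return "select paragraph"
        return "move caret"
    if region == "toolbar":
        return "activate toolbar button"
    if region == "hyperlink":
        return "open link in browser"
    return "ignore"

# Registry of open windows and their event handlers.
WINDOWS = {"word": word_processor_handler}

def window_manager_dispatch(active_window, event):
    """OS level: identify the active window and forward the event to it."""
    return WINDOWS[active_window](event)

print(window_manager_dispatch("word", {"region": "text", "clicks": 1}))
# -> move caret
```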

Would you like a diagram to visualize this process? 🚀

List out all the questions together in short.

Human-Computer Interaction (HCI) - Shortened List of Questions

1. HCI Factors & Dialogue Systems

1.A. Summarize the factors in HCI.
1.B. Discuss dialogue as a Markov decision process for dialogue management.

2. Spoken Dialogue Systems & Natural Language Interfaces

2.A. Describe the general architecture of a spoken dialogue system.


2.B. Discuss the applications where natural language interfaces are most appropriate.

3. Usability Evaluation & ICARUS Capabilities

3.A. Describe how usability evaluation is performed.


3.B. List the capabilities of ICARUS.

4. Video Communication & Dialogue System Factors

4.A. Evaluate video communication options for a company with offices in New York and London.
4.B. Discuss factors defining a dialogue system.

5. Natural Language Interfaces & Functional Classification

5.A. List key aspects of natural language interfaces.


5.B. Differentiate between control, feedback, and feedthrough in the classification by function
framework.

6. Cognitive Architecture & Future Paradigms

6.A. Define cognitive architecture.


6.B. Discuss future paradigms in interactive computing.

7. System Embodiment & Participatory Design

7.A. Illustrate positive and negative issues from system embodiment.


7.B. Provide case studies and critique participatory design.

8. Calculator Transparency & Browser Back Button

8.A. Assess the impact of a calculator displaying letters instead of numbers on transparency.
8.B. Justify whether the ‘Back’ button in a browser is similar to ‘Undo.’

9. User Help Types & Support Systems

9.A. Illustrate the four main types of user help with examples.
9.B. Describe different approaches to user support systems with examples.

10. Online Documentation & System Accessibility

10.A. Illustrate disadvantages of online documentation and how to overcome them.


10.B. Discuss how systems can be made more accessible to older users.

PART B - Advanced HCI Concepts

11. Fitts’ Law & Agent-Based Dialogue Management

11.A. Describe Fitts’ law with an example.


11.B. Illustrate the agent-based approach as a computational model for dialogue management.

12. SOAR & Dialogue Management Strategies

12.A. Describe the functionalities of SOAR.


12.B. Write a note on dialogue management strategies.

13. Hierarchical Task Analysis (HTA)

13.A. Produce an HTA for making a phone call.


13.B. Create a diagrammatic HTA for using a vacuum cleaner.

15. Instruction Manuals & Interaction Objects

15.A. Report an instruction manual for using a new application.


15.B. Compare different interaction objects that implement dialog functions.

16. Coffee-Making Guide & Adaptive Help Systems

16.A. Write a manual page for making a cup of coffee for a beginner.
16.B. Discuss the knowledge required to build an adaptive help system.

PART C - Practical Applications in HCI

17. Spoken Dialogue Development & Operator Comparison

17.A. Summarize software and toolkits for spoken dialogue system development.
17.B. Assess differences between skilled and novice users in familiar domains.

18. Interaction Framework & BIP Functionalities

18.A. Describe and graphically illustrate the interaction framework in HCI.


18.B. Summarize the functionalities of BIP.

19. Virtual Reality & Video Interfaces

19.A. Identify areas that benefit most from virtual reality visualization techniques.
19.B. Describe factors delaying the widespread use of video in interfaces.

20. Rich Picture & TDH Taxonomy

20.A. Produce a rich picture for an Institute Management System.


20.B. Create a TDH taxonomy for teapot, mug, soup bowl, plate, spoon, table knife, and cook’s knife.

21. Web Design for Seniors & Mouse Click Processing

21.A. Justify factors to consider when designing a website for users aged 65+.
21.B. Describe how a window manager and an application process a mouse click in a word processor.

This is a shortened list of all the HCI questions while keeping them well-structured and easy to
follow. Let me know if you need any modifications! 🚀

