
UNIT-III HCI (CS348)

A computational user model in Human-Computer Interaction (HCI) refers to a representation
of user behavior, cognition, or preferences
that is implemented in a computational form. These models are used to simulate or
predict how users interact with computer systems, allowing designers and researchers to
understand user needs, predict user actions, and evaluate system usability. Here's an
example of a computational user model in HCI:

Fitts' Law is a well-known empirical model in HCI that describes the relationship
between the time to move to a target (pointing task) and the distance to the target, as
well as the size of the target. A computational user model based on Fitts' Law can be
developed to predict user performance in pointing tasks.

The computational user model based on Fitts' Law calculates the movement time (MT)
required for a user to move a pointing device (e.g., mouse cursor) from a starting
position to a target on the screen. The model incorporates two main factors: the
distance to the target (D) and the size of the target (W).

MT = a + b x log2(D/W + 1)


Where:
MT= Movement time
a and b are empirically derived constants based on experimental data.
D = Distance to the target
W = Width of the target

Example Usage: Consider a user interface with a menu bar located at the top of the
screen. A user wants to click on a specific menu item, which is one of several small
buttons arranged horizontally. Using the computational user model based on Fitts' Law,
designers can predict the movement time required for the user to accurately click on the
desired menu item, based on the distance to the target (from the user's initial cursor
position to the target) and the size of the target (the width of the menu item button).

Implementation: The Fitts' Law model can be implemented in software or programming
languages such as Python or MATLAB. Designers can develop algorithms
to calculate the movement time based on user input (e.g., cursor position, target size)
and apply the Fitts' Law equation to predict user performance in pointing tasks.
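For example, a minimal Python sketch of such a calculator is shown below. The constant
values a = 0.2 s and b = 0.1 s/bit and the function name are illustrative assumptions,
not empirically derived values; in practice a and b must be fitted to experimental data.

import math

def fitts_movement_time(distance, width, a=0.2, b=0.1):
    # Predict movement time (seconds) with Fitts' Law.
    # distance: D, from the cursor's start position to the target centre
    # width:    W, width of the target along the axis of motion
    # a, b:     empirically derived constants (placeholder values here)
    index_of_difficulty = math.log2(distance / width + 1)  # ID in bits
    return a + b * index_of_difficulty

# Example: a menu item 30 px wide whose centre is 600 px from the cursor
print(round(fitts_movement_time(600, 30), 3))  # about 0.639 s with the placeholder constants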

Here's a breakdown of what constitutes a computational user model:


1. Representation of Users: Computational user models typically represent users in terms
of various attributes such as demographics (age, gender, occupation), psychological
characteristics (personality traits, cognitive abilities), and behavioral tendencies
(preferences, habits).
2. Simulation of User Behavior: These models simulate how users might interact with a
system based on the provided representation. This can include actions such as
navigation through interfaces, task completion, decision-making processes, and
reactions to system feedback.
3. Predictive Capabilities: One of the primary purposes of computational user models is
to predict user behavior in different scenarios. By incorporating various factors that
influence user interactions, these models can provide insights into how users are likely
to respond to changes in interface design, functionality, or content.
4. Adaptation and Personalization: Computational user models can be used to
personalize user experiences by dynamically adjusting system behavior based on
individual user characteristics and preferences. This can enhance user satisfaction and
efficiency by tailoring the interface and content to match the specific needs of each
user.
5. Evaluation and Optimization: These models can also be used for evaluating and
optimizing system designs before implementation. By simulating user interactions with
different design alternatives, designers can identify potential usability issues and make
informed decisions to improve the overall user experience.

Computational user models for 2D and 3D pointing refer to models that simulate and
predict how users interact with graphical user interfaces (GUIs) in two-dimensional (2D)
and three-dimensional (3D) environments, respectively. These models are particularly
relevant in the fields of human-computer interaction (HCI) and user experience (UX)
design, where understanding how users interact with interfaces is crucial for designing
efficient and intuitive systems.

Here's an explanation of computational user models for 2D and 3D pointing:

1. 2D Pointing Models:
• In 2D pointing, users interact with graphical elements on a flat surface, such as a
computer screen or a touchscreen device.
• Fitts' law is a fundamental principle used in 2D pointing models, which states that
the time required for a user to move a pointer (e.g., mouse cursor) to a target is a
function of the distance to the target and the size of the target.
• Computational user models for 2D pointing often incorporate Fitts' law to predict
pointing performance metrics such as movement time, accuracy, and error rates.
• These models may also consider other factors such as user characteristics (e.g.,
motor skills, familiarity with the interface), task complexity, and environmental
conditions (e.g., display size, input device).
2. 3D Pointing Models:
• In 3D pointing, users interact with graphical elements in a three-dimensional
space, such as virtual reality (VR) environments or 3D modeling applications.
• 3D pointing models extend the principles of 2D pointing to three-dimensional
space, accounting for additional complexities such as depth perception and
spatial awareness.
• Factors such as the user's viewpoint, hand-eye coordination, and the design of
the 3D interface (e.g., object placement, depth cues) influence pointing
performance in 3D environments.
• Computational user models for 3D pointing may use techniques such as ray
casting or collision detection to simulate user interactions with virtual objects
(a minimal sketch follows this list).
• These models aim to predict performance metrics similar to those in 2D pointing
models, including movement time, accuracy, and error rates, but in the context of
3D interactions.
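To illustrate the ray-casting technique mentioned above, the sketch below tests whether a
pointing ray (an origin plus a direction) hits a spherical target. The geometry is a
standard ray-sphere intersection test; representing targets as spheres and the function
names are assumptions made only for this example.

import numpy as np

def ray_hits_sphere(origin, direction, centre, radius):
    # Return True if a ray cast from origin along direction intersects
    # a spherical target located at centre with the given radius.
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                      # normalise the pointing direction
    oc = np.asarray(centre, dtype=float) - np.asarray(origin, dtype=float)
    t = np.dot(oc, d)                              # how far along the ray the target centre projects
    if t < 0:                                      # the target lies behind the user
        return False
    closest = np.linalg.norm(oc - t * d)           # perpendicular distance from the ray to the centre
    return closest <= radius

# Example: a ray from the user's viewpoint toward a 10 cm target 2 m ahead
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0.05, 0.0, 2.0), 0.1))  # True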
Navigation refers to the act of opening and moving through computer menus, like the Start
menu in Windows, opening software programs, or viewing files in Windows Explorer. More
generally, to navigate is to move your mouse around the screen to access icons and the other
features of an operating system.

Navigation is derived from the Latin navis (“ship”) and agere (“to drive”). Early mariners who
embarked on voyages of exploration gradually developed systematic methods of observing and
recording their position, the distances and directions they traveled, the currents of wind and
water, and the hazards and havens they encountered. The facts accumulated in their journals
made it possible for them to find their way home and for them or their successors to repeat and
extend their exploits. Each successful landfall became a signpost along a route that could be
retraced and integrated into a growing body of reliable information.

Three main types of navigation are celestial, GPS, and map and compass. It is helpful
to learn the basics of all three techniques.

The Global Positioning System (GPS) is a space-based radio-navigation system consisting
of a constellation of satellites broadcasting navigation signals and a network of ground
stations and satellite control stations used for monitoring and control.
Navigation Goals:

• A well designed navigation system facilitates quick & easy navigation between components
whose structure & relationship are easily comprehensible.
• For the user, answers to the following questions must be obvious at all times during an
interaction: Where am I now? Where did I come from? Where can I go from here? How can I get
there quickly?

General system navigation guidelines include the following.
• Control: For multilevel menus, provide one simple action to:
  o Return to the next higher-level menu.
  o Return to the main menu.
  Also provide multiple pathways through a menu hierarchy whenever possible. (A minimal
  navigation-stack sketch illustrating these controls follows this section.)
• Menu Navigation Aids: To aid menu navigation & learning, provide an easily accessible:
  o Menu map or overview of the menu hierarchy.
  o A "look ahead" at the next level of choices, i.e. the alternatives that will be
  presented when a currently viewed choice is selected.
  o Navigation history.

ii. Web Site Navigation:
• In designing a Web site navigation scheme there are two things to take into consideration:
  o Never assume that users know as much about a site as the site designers do.
  o Any page can be an entry point into the website.
• Web site navigational design includes:
  o Web site organization: Divide content into logical fragments, units, or chunks.
  Establish a hierarchy of generality or importance.
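To make the Control guideline above concrete, here is a minimal Python sketch of a
navigation stack that supports the single-action "back" and "home" behaviours; the class
and method names are hypothetical and chosen only for illustration.

class MenuNavigator:
    # Tracks the user's position in a multilevel menu hierarchy.
    def __init__(self, main_menu):
        self.path = [main_menu]        # navigation history, main menu at the bottom

    def open_submenu(self, submenu):
        self.path.append(submenu)      # descend one level

    def back(self):
        if len(self.path) > 1:         # one simple action: return to the next higher-level menu
            self.path.pop()

    def home(self):
        self.path = self.path[:1]      # one simple action: return to the main menu

    def breadcrumb(self):
        return " > ".join(self.path)   # navigation history / menu map shown to the user

nav = MenuNavigator("Main")
nav.open_submenu("Settings")
nav.open_submenu("Display")
print(nav.breadcrumb())   # Main > Settings > Display
nav.back()
print(nav.breadcrumb())   # Main > Settings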

Mobile typing in the context of Human-Computer Interaction (HCI) refers to the process
of inputting text or commands on mobile devices such as smartphones and
tablets. This interaction plays a crucial role in various mobile applications, including
messaging, email, social media, web browsing, and productivity tools. Mobile typing
presents unique challenges and considerations compared to traditional keyboard typing
due to the smaller form factor and touch-based input methods commonly used on
mobile devices. Here's an explanation of mobile typing in HCI:

1. Input Methods: Mobile typing primarily relies on touch-based input methods, including
virtual keyboards, gesture-based typing, voice input, and predictive text input. Each
input method has its advantages and challenges, influencing the user experience and
efficiency of text input on mobile devices.
2. Virtual Keyboards: Virtual keyboards are graphical representations of traditional
QWERTY keyboards displayed on the touchscreen of mobile devices. Users tap on virtual
keys to input characters, numbers, and symbols. Predictive text algorithms may assist
users by suggesting words or correcting spelling errors as they type.
3. Gesture-based Typing: Gesture-based typing methods, such as swipe or glide typing,
allow users to input text by sliding their finger or stylus across the virtual keyboard to
trace the desired word's path. These methods can enhance typing speed and efficiency
for users familiar with the technique.
4. Voice Input: Voice input enables users to dictate text using their device's microphone.
Speech recognition technology converts spoken words into text, allowing users to input
text hands-free. Voice input can be particularly useful in situations where manual typing
is impractical or inconvenient, such as while driving or multitasking.
5. Predictive Text Input: Predictive text input algorithms analyze users' typing patterns
and context to predict the next word they intend to type. Suggestions are presented to
users as they type, allowing them to quickly select the desired word without typing each
character manually. Predictive text input can improve typing speed and accuracy,
especially on mobile devices with small screens (a minimal sketch follows this list).
6. Usability Challenges: Mobile typing presents several usability challenges, including
small screen sizes, limited physical feedback, typing errors, and the need for precision in
touch-based input. Designing intuitive and user-friendly text input interfaces is essential
to mitigate these challenges and enhance the overall user experience.
7. Adaptive Interfaces: HCI research explores adaptive interfaces that dynamically adjust
to users' typing behaviors, preferences, and input methods. Adaptive techniques may
include resizing virtual keyboards, optimizing predictive text suggestions, or customizing
gesture-based typing algorithms based on users' habits and preferences.
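As a rough illustration of the predictive text idea in point 5, the sketch below ranks
next-word candidates using simple bigram counts. Real mobile keyboards use far richer
language models; the toy corpus and function names here are assumptions made for the
example.

from collections import Counter, defaultdict

corpus = "see you soon see you later see you tomorrow talk to you later".split()

# Count how often each word follows a given word (bigram counts)
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(previous_word, k=3):
    # Return up to k most likely next words after previous_word.
    return [word for word, _ in bigrams[previous_word].most_common(k)]

print(suggest("you"))  # ['later', 'soon', 'tomorrow'] with this toy corpus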

Touch interaction in Human-Computer Interaction (HCI) refers to the process of users
interacting with digital interfaces through direct physical contact with a touchscreen
display. Touchscreens have become ubiquitous in modern computing devices, including
smartphones, tablets, laptops, interactive kiosks, and wearable devices. Touch
interaction has revolutionized HCI by offering intuitive and natural ways for users to
manipulate digital content and perform various tasks. Here's an explanation of touch
interaction in HCI:

1. Physical Interaction: Touch interaction allows users to directly manipulate digital
objects by physically touching the screen with their fingers or a stylus. This direct
physical interaction eliminates the need for intermediary input devices like keyboards or
mice, making interactions more immediate and intuitive.
2. Gestures and Manipulations: Touchscreens support a variety of gestures and
manipulations that users can perform to interact with digital content. Common touch
gestures include tapping, swiping, pinching, zooming, dragging, rotating, and multi-
finger gestures. These gestures enable users to navigate interfaces, interact with objects,
manipulate content, and perform actions with ease.
3. Multi-Touch Interaction: Multi-touch interaction allows users to engage with
touchscreens using multiple fingers simultaneously. Multi-touch gestures, such as pinch-
to-zoom or two-finger scrolling, enable more sophisticated interactions and enhance
the user experience by providing greater flexibility and control (a pinch-to-zoom
sketch follows this list).
4. Haptic Feedback: Some touchscreen devices incorporate haptic feedback, which
provides tactile sensations or vibrations in response to user interactions. Haptic
feedback enhances the user experience by providing physical confirmation of actions,
improving usability, and making interactions more engaging and immersive.
5. Contextual Interaction: Touchscreens support contextual interaction, allowing users to
interact with digital content in different ways depending on the context and application.
For example, users can tap icons to launch applications, swipe to scroll through lists, or
pinch to zoom in on images. Contextual interaction enhances usability by providing
intuitive and efficient ways for users to accomplish tasks.
6. Accessibility Features: Touchscreens often include accessibility features designed to
accommodate users with diverse needs and abilities. These features may include
customizable interface layouts, larger touch targets, voice commands, screen readers,
tactile overlays, and other assistive technologies that improve accessibility and usability
for all users.
7. Design Considerations: Designing effective touch interactions requires careful
consideration of factors such as screen size, resolution, responsiveness, touch sensitivity,
visual feedback, ergonomic considerations, and user preferences. User-centered design
methodologies, usability testing, and feedback from target users are essential for
creating intuitive and user-friendly touch interfaces.
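To illustrate the multi-touch gestures described in point 3, the following sketch derives
a zoom factor from two successive pairs of touch points. The coordinate format and
function names are assumptions made only for this example.

import math

def pinch_scale(prev_touches, curr_touches):
    # Each argument is a pair of (x, y) touch points. A result greater than 1
    # means the fingers moved apart (zoom in); less than 1 means zoom out.
    def spread(touches):
        (x1, y1), (x2, y2) = touches
        return math.hypot(x2 - x1, y2 - y1)   # distance between the two fingers
    return spread(curr_touches) / spread(prev_touches)

# Fingers start 100 px apart and end 150 px apart -> zoom in by a factor of 1.5
print(pinch_scale([(100, 200), (200, 200)], [(75, 200), (225, 200)]))  # 1.5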

Formal method case study: Let's consider a case study involving the use of formal
models, specifically matrix algebra, in the context of analyzing social
networks. Social network analysis is a field that examines relationships and interactions
among individuals or entities. Matrix algebra provides a powerful framework for
representing and analyzing such networks.

Case Study: Social Network Analysis


Background: Imagine you are a researcher studying a social network of individuals
within a community. You have collected data on interactions between members of the
community and want to analyze the network's structure to understand key properties
such as centrality, connectivity, and influence.

Approach: You decide to use formal models based on matrix algebra to represent and
analyze the social network data.

Steps:

1. Data Collection: You collect data on interactions between individuals in the community.
This data could include friendship connections, communication channels, collaboration
on projects, or any other form of interaction.
2. Matrix Representation: You represent the social network data using an adjacency
matrix. In this matrix:
• Each row and column represent a member of the community.
• The entries of the matrix indicate the presence or absence of connections
between individuals. For example, a value of 1 might indicate a connection (e.g.,
friendship), while a value of 0 indicates no connection.
3. Analysis: You perform various analyses on the adjacency matrix using matrix algebra
techniques:
• Centrality Analysis: You calculate centrality measures such as degree centrality,
eigenvector centrality, or betweenness centrality. Degree centrality measures the
number of connections each individual has, while eigenvector centrality considers
the importance of a node based on its connections to other important nodes (a minimal
sketch follows these steps).
• Connectivity Analysis: You analyze the network's connectivity properties, such
as the existence of cliques (fully connected subgraphs) or the network's diameter
(maximum distance between any pair of nodes).
• Influence Analysis: Using techniques like PageRank (based on Google's
algorithm for ranking web pages), you assess the influence of individuals within
the network based on their connections and the connections of those they are
connected to.
4. Visualization: You visualize the social network using graphical representations based on
the adjacency matrix. Techniques such as network diagrams or heatmaps can help
visualize the network's structure and relationships between individuals.
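A minimal NumPy sketch of steps 2 and 3 is shown below. The five-person network is
invented purely for illustration, and only degree and eigenvector centrality are computed.

import numpy as np

# Step 2: adjacency matrix for a small, undirected friendship network.
# people[i] and people[j] are connected when A[i, j] == 1.
people = ["Ana", "Ben", "Cal", "Dia", "Eli"]
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0],
])

# Step 3: degree centrality is the number of connections each person has
degree = A.sum(axis=1)

# Eigenvector centrality: the eigenvector of the largest eigenvalue of A
eigenvalues, eigenvectors = np.linalg.eigh(A)   # eigh because A is symmetric
principal = np.abs(eigenvectors[:, -1])         # column for the largest eigenvalue
principal = principal / principal.sum()         # normalise so the scores sum to 1

for name, d, c in zip(people, degree, principal.round(3)):
    print(f"{name}: degree={d}, eigenvector={c}")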

Results: Through your analysis, you uncover valuable insights into the social network's
structure and dynamics:
• You identify influential individuals who play significant roles in connecting different
parts of the community.
• You observe clusters of closely connected individuals, indicating the presence of social
groups or communities within the larger network.
• You detect patterns of communication flow or information diffusion, revealing how
information spreads through the network over time.

Specification and verification of properties in Human-Computer Interaction (HCI) involves
defining and ensuring certain characteristics or properties of interactive systems
to enhance usability, reliability, and security. This process employs formal methods and
techniques to express, analyze, and validate these properties. Here's an explanation of
specification and verification of properties in HCI:

1. Specification of Properties:
• Functional Properties: These properties describe the desired functionality of the
interactive system, such as user interface behavior, input/output interactions, and
system responses to user actions.
• Non-functional Properties: These properties encompass aspects beyond
functionality, including usability, accessibility, performance, reliability, and
security. For example, usability properties may specify requirements related to
learnability, efficiency, memorability, error prevention, and satisfaction.
2. Formalization:
• Once properties are identified, they need to be formalized into precise and
unambiguous specifications that can be analyzed using formal methods.
Formalization often involves mathematical notations, logical expressions, or
formal languages.
• Formalization enables rigorous reasoning about the properties, facilitates
automated analysis, and helps detect inconsistencies or ambiguities early in the
design process.
3. Verification Techniques:
• Model Checking: Model checking is a formal verification technique that
systematically explores all possible states of a system model to verify whether a
given property holds. In HCI, models may represent user interfaces, interaction
sequences, or system behaviors, and model checking can be used to ensure
properties such as absence of deadlocks, compliance with usability guidelines, or
adherence to security protocols (a minimal deadlock-check sketch follows this list).
• Theorem Proving: Theorem proving involves proving the correctness of a system
by constructing formal proofs based on logical axioms and inference rules. This
technique is often used to verify complex properties or to establish formal
guarantees about the behavior of interactive systems.
• Simulation and Testing: Simulation and testing techniques involve executing the
system under various conditions to observe its behavior and evaluate whether
specified properties are satisfied. While not as rigorous as formal methods,
simulation and testing provide practical means to assess system properties and
identify potential issues.
• User Studies and Evaluation: In HCI, verification of properties often involves
empirical methods such as user studies, usability testing, and user feedback.
These methods assess whether the system meets specified usability, accessibility,
and user experience requirements through real-world interactions with users.
4. Iterative Design Process:
• Specification and verification of properties in HCI are typically iterative processes
integrated into the design lifecycle. Designers refine and validate system
specifications based on feedback from verification techniques, user evaluations,
and stakeholder input.
• Iterative refinement ensures that the final interactive system meets the desired
properties and user needs effectively.
5. Application Areas:
• Specification and verification of properties in HCI are applied across various
domains, including software applications, user interfaces, interactive systems,
virtual reality environments, and autonomous systems. These techniques help
ensure that interactive systems are reliable, usable, secure, and aligned with user
requirements and design goals.
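As a rough illustration of the model-checking idea mentioned under verification
techniques, the sketch below exhaustively explores a small interface state machine and
reports states with no outgoing transitions (deadlocks). The state machine and the
property checked are assumptions invented for this example; real model checkers handle
far larger models and richer temporal properties.

# Toy interface model: each state maps user actions to the next state
transitions = {
    "login":   {"submit": "home", "help": "help"},
    "home":    {"search": "results", "logout": "login"},
    "results": {"open": "detail", "back": "home"},
    "detail":  {"back": "results"},
    "help":    {},                      # no way out: a deadlock the check should catch
}

def find_deadlocks(start="login"):
    # Explore every state reachable from start and report those
    # with no outgoing transitions (the user would be stuck there).
    seen, frontier = set(), [start]
    while frontier:
        state = frontier.pop()
        if state in seen:
            continue
        seen.add(state)
        frontier.extend(transitions[state].values())
    return [s for s in seen if not transitions[s]]

print(find_deadlocks())  # ['help'] -> the help screen traps the user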

Formal dialog modeling in Human-Computer Interaction (HCI) involves representing and
analyzing the structure, content, and dynamics of human-computer dialogues using
formal methods and techniques. Dialog modeling aims to capture the interaction
between users and interactive systems in a systematic and precise manner, facilitating
the design, evaluation, and optimization of dialogue-based interfaces. Here's an
explanation of formal dialog modeling in HCI:

1. Representation of Dialogues:
• Formal dialog models represent dialogues as structured interactions between
users and interactive systems, typically using formal languages, grammars, or
mathematical notations.
• Dialogues are decomposed into individual components such as utterances,
actions, states, transitions, and context, which are represented using formal
constructs.
• Various formalisms can be used for dialog modeling, including state machines,
finite automata, Petri nets, formal grammars (e.g., context-free grammars), and
modal logics (a minimal state-machine sketch follows this list).
2. Specification of Dialogue Structure:
• Formal dialog models specify the structure and flow of dialogues, including
possible user inputs, system responses, and transitions between dialogue states.
• Dialog structures may be hierarchical, sequential, or concurrent, reflecting the
organization of dialogue components and the interaction dynamics.
• Specifications may include constraints, rules, and conditions governing dialogue
progression, error handling, context management, and task completion.
3. Analysis and Validation:
• Formal dialog models enable rigorous analysis and validation of dialogue
properties such as completeness, correctness, coherence, consistency, and
usability.
• Techniques such as model checking, theorem proving, simulation, and formal
verification are used to verify whether formal dialog models satisfy desired
properties and requirements.
• Analysis tools and frameworks support automated reasoning about dialog
structures, enabling designers to detect errors, identify ambiguities, and refine
dialogue designs iteratively.
4. Dialog Management and Control:
• Formal dialog models provide a foundation for implementing dialog
management and control mechanisms in interactive systems.
• Dialog managers use formal representations to interpret user inputs, generate
appropriate system responses, maintain dialogue state, and manage context
throughout the interaction.
• By enforcing formal dialog specifications, dialog managers ensure that interactive
systems behave predictably, respond effectively to user inputs, and maintain
coherence and consistency in dialogues.
5. Application Areas:
• Formal dialog modeling is applied in various HCI domains, including spoken
language interfaces, chatbots, virtual assistants, human-robot interaction, and
multimodal interaction systems.
• These techniques are used to design and evaluate dialogue-based interfaces in
applications such as customer service, information retrieval, task assistance,
education, healthcare, and entertainment.
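For instance, a minimal finite-state dialog model of the kind described in point 1 could
be sketched as follows; the ordering scenario, states, and inputs are assumptions made
purely for illustration.

# A tiny finite-state dialog model:
# (current state, user input) -> (next state, system response)
DIALOG = {
    ("greet",    "order"): ("ask_size", "What size would you like?"),
    ("ask_size", "small"): ("confirm",  "One small pizza. Shall I place the order?"),
    ("ask_size", "large"): ("confirm",  "One large pizza. Shall I place the order?"),
    ("confirm",  "yes"):   ("done",     "Order placed. Goodbye!"),
    ("confirm",  "no"):    ("greet",    "Okay, starting over. What would you like?"),
}

def dialog_step(state, user_input):
    # Interpret the user's input in the current state and return
    # the next state together with the system's reply.
    if (state, user_input) in DIALOG:
        return DIALOG[(state, user_input)]
    return state, "Sorry, I didn't understand that."   # error handling keeps the state unchanged

state = "greet"
for utterance in ["order", "large", "yes"]:
    state, reply = dialog_step(state, utterance)
    print(f"user: {utterance!r} -> system: {reply}")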
