

Human Computer Interaction / 15UCS919

COURSE OBJECTIVES
 
To introduce the foundations of Human Computer Interaction.
To explain the models and theories of HCI.
To review the guidelines for user interface design.

COURSE OUTCOMES
 
After the successful completion of this course, the student will be able to
* Describe the foundations of Human Computer Interaction.
* Explain Design rules and prototyping.
* Discuss various models and theories in Human Computer Interaction.
* Apply user interface design concepts in mobile applications.
* Explain how to develop interfaces for web-based applications.

Unit 1: The Human: I/O channels – Memory – Reasoning and problem solving; The Computer: Devices – Memory – Processing and networks; Interaction: Models – Frameworks – Ergonomics – Styles – Elements – Interactivity – Paradigms.
 
[CO1: Describe the foundations of Human Computer Interaction]

Meaning:

1. HCI was previously known as man-machine studies or man-machine interaction. HCI is concerned with designing a fit between the user, the machine, and the required services in order to achieve a certain level of performance, in terms of both the quality and the optimality of those services.

i.e., human-computer interaction is the study, planning, and design of how people and computers work together so that a person's needs are satisfied in the most effective way.


2. HCI is the study of the interaction between users and computers, which is achieved via an interface, i.e., the user interface.

User Interface: The interface is made up of a set of hardware devices and tools on the computer side, and a system of sensory, motor, and cognitive processes on the human side.
Interaction takes place at the interface.
How do we improve interfaces?
* Educate software professionals
* Draw upon the fast-accumulating body of knowledge regarding human-computer interface design
* Integrate UI design methods and techniques into the standard software development methodologies now in place

User Interface:

The user interface is
– the part of a computer and its software that people can see, hear, touch, talk to, or otherwise understand or direct.

The user interface has essentially two components: input and output.

Input is how a person communicates his or her needs to the computer.
– Some common input components are the keyboard, mouse, trackball, one's finger, and one's voice.

Output is how the computer conveys the results of its computations and requirements to the user.
– Today, the most common computer output mechanism is the display screen, followed by mechanisms that take advantage of a person's auditory capabilities: voice and sound.
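
To make the two components concrete, here is a minimal sketch in Python (a hypothetical illustration, not tied to any real UI toolkit): the input component reads from the keyboard, the output component writes to the screen, and the interaction is the cycle between them.

```python
# Minimal sketch of the two UI components: input (keyboard) and output (screen).

def read_input() -> str:
    """Input component: how the person communicates a need to the computer."""
    return input("What do you need? ")

def write_output(message: str) -> None:
    """Output component: how the computer conveys its results to the user."""
    print(message)

def interface_loop() -> None:
    """Interaction takes place at the interface: a read-process-respond cycle."""
    while True:
        request = read_input()
        if request.lower() in ("quit", "exit"):
            write_output("Goodbye.")
            break
        write_output(f"Processing your request: {request!r}")

if __name__ == "__main__":
    interface_loop()
```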

3. HCI is an inter-disciplinary field.

Goal:
• A basic goal of HCI is
– to improve the interactions between users and computers
– by making computers more usable and receptive to the user's needs.
• A long-term goal of HCI is
– to design systems that minimize the barrier between the human's cognitive model of what they want to accomplish and the computer's understanding of the user's task,
i.e., understanding how people think, value, feel, and relate, and using this understanding to inform technology design.

What impact does HCI have?

Society:
Touch screens: direct interaction with objects.
Voice control: for some people, the only way to interact with computers.

Culture:
Smartphones have changed how we spend our "empty times": should we read the news? Answer emails? Chat with friends? Play "2 Dots"? Or should we just be bored?

Social media have influenced how we stay in touch with each other and how we find new friends.
Games, more than entertainment, can be used as social and even productive tools.
 
Economy:

Massive increase in productivity:
HCI found ways to speed up input and reduce its complexity.
People can perform tasks faster than they used to.
Reduced need for training.
More people can use technology than ever before.

Now:
3D fabrication and printing
Smart homes, smartphones, etc.

HCI Systems Architecture

The architecture of an HCI system shows what these inputs and outputs are and how they work together.

Unimodal HCI Systems:

As mentioned earlier, an interface mainly relies on the number and diversity of its inputs and outputs, which are the communication channels that enable users to interact with the computer via that interface. Each different independent single channel is called a modality.

A system that is based on only one modality is called unimodal. Based on the nature of the different modalities, they can be divided into three categories:

1. Visual-Based
2. Audio-Based
3. Sensor-Based
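
As an illustrative sketch only (the class and enum names are assumptions of this example, following the classification above), the modality concept might be represented like this in Python:

```python
from enum import Enum

class ModalityCategory(Enum):
    """The three categories of unimodal HCI, per the classification above."""
    VISUAL = "visual-based"
    AUDIO = "audio-based"
    SENSOR = "sensor-based"

class Modality:
    """A single independent communication channel between user and computer."""
    def __init__(self, name: str, category: ModalityCategory):
        self.name = name
        self.category = category

# A unimodal system relies on exactly one modality:
unimodal_system = [Modality("speech", ModalityCategory.AUDIO)]

# A multimodal system (discussed later) combines two or more:
multimodal_system = [
    Modality("speech", ModalityCategory.AUDIO),
    Modality("gesture", ModalityCategory.VISUAL),
]
```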

Visual-Based HCI: Visual-based human-computer interaction is probably the most widespread area of HCI research. Considering the extent of applications and the variety of open problems and approaches, researchers have tried to tackle different aspects of human responses that can be recognized as visual signals. Some of the main research areas in this section are as follows:

Facial expression analysis generally deals with the visual recognition of emotions.

Body movement tracking and gesture recognition are usually the main focus of this area; they can serve different purposes, but they are mostly used for direct interaction between human and computer in a command-and-action scenario.

Gaze detection is mostly an indirect form of interaction between user and machine, used for a better understanding of the user's attention, intent, or focus in context-sensitive situations.

Audio-Based HCI: Audio-based interaction between a computer and a human is another important area of HCI systems. This area deals with information acquired through different audio signals. While the nature of audio signals may not be as variable as that of visual signals, the information gathered from audio signals can be more reliable and helpful, and in some cases a unique source of information. Research areas in this section can be divided into the following parts:

Speech recognition and speaker recognition have been the main focus of researchers. Recent endeavors to integrate human emotions into intelligent human-computer interaction have initiated efforts to analyze emotions in audio signals. Beyond the tone and pitch of speech data, human auditory signs such as sighs and gasps have helped emotion analysis in designing more intelligent HCI systems.

Music generation and interaction is a very new area of HCI, with applications in the art industry, which is studied in both audio- and visual-based HCI systems.
Sensor-Based HCI: This section is a combination of a variety of areas with a wide range of applications. The commonality of these different areas is that at least one physical sensor is used between user and machine to provide the interaction. These sensors, as listed below, can be very primitive or very sophisticated.

* Pen-Based Interaction
* Mouse & Keyboard
* Joysticks
* Motion Tracking Sensors and Digitizers
* Haptic Sensors
* Pressure Sensors
* Taste/Smell Sensors
 
Pen-based sensors are of specific interest in mobile devices and are related to the areas of pen gesture and handwriting recognition.
 
Keyboards, mice, and joysticks have already been discussed.
 
Motion tracking sensors/digitizers are a state-of-the-art technology that has revolutionized the movie, animation, art, and video-game industries. They come in the form of wearable cloth or joint sensors, and they have made computers much more able to interact with reality and humans able to create their worlds virtually.

Haptic and pressure sensors are of special interest for applications in robotics and virtual reality. New humanoid robots include hundreds of haptic sensors that make the robots sensitive and aware of touch [52] [53]. These types of sensors are also used in medical surgery applications.
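
As a small illustration of how a raw pressure reading might become a touch event (a hypothetical sketch, not any real robot API; the threshold value is arbitrary and would be calibrated per sensor):

```python
def touch_detected(pressure_reading: float, threshold: float = 0.5) -> bool:
    """Turn a raw pressure-sensor reading into a binary touch event."""
    return pressure_reading >= threshold

# Polling several haptic sensors, as on a humanoid robot's skin:
readings = {"left_palm": 0.72, "right_palm": 0.10, "forearm": 0.55}
touched = [name for name, value in readings.items() if touch_detected(value)]
print(f"Touch detected at: {touched}")  # ['left_palm', 'forearm']
```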

In multimodal HCI (MMHCI) systems, these modalities mostly refer to the ways that the system responds to the inputs, i.e., the communication channels. The definition of these channels is inherited from the human types of communication, which are basically the senses: sight, hearing, touch, smell, and taste.
 
The possibilities for interaction with a machine include but are not limited to these types.
Therefore, a multimodal interface acts as a facilitator of human-computer interaction via two or
more modes of input that go beyond the traditional keyboard and mouse. The exact number of
supported input modes, their types and the way in which they work together may vary widely
from one multimodal system to another.
 
Multimodal interfaces incorporate different combinations of speech, gesture, gaze, facial expressions, and other non-conventional modes of input. One of the most commonly supported combinations of input methods is that of gesture and speech.
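
A toy sketch of how a gesture-and-speech combination might be fused into a single action (all event types and names here are hypothetical, in the spirit of the classic "put that there" style of interaction):

```python
from dataclasses import dataclass

@dataclass
class SpeechInput:
    command: str   # what to do, e.g. "delete"

@dataclass
class GestureInput:
    target: str    # what to do it to, e.g. the object the user points at

def fuse(speech: SpeechInput, gesture: GestureInput) -> str:
    """Combine two input modes into one interpreted action."""
    return f"{speech.command} -> {gesture.target}"

print(fuse(SpeechInput("delete"), GestureInput("file_icon")))  # delete -> file_icon
```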

Applications of multimodal systems

Multimodal Systems for Disabled People: One good application of multimodal systems is to address and assist disabled people (such as persons with hand disabilities), who need other kinds of interfaces than ordinary users. In such systems, disabled users can perform work on the PC by interacting with the machine using voice and head movements.
 
Two modalities are then used: speech and head movements. Both modalities are active continuously. The head position indicates the coordinates of the cursor on the screen at the current moment. Speech, on the other hand, provides the needed information about the meaning of the action that must be performed on the object selected by the cursor.
Synchronization between the two modalities is performed by calculating the cursor position at the beginning of speech detection. This is mainly because, while the complete sentence is being pronounced, the cursor can be moved by head motion and may end up pointing at a different graphical object; moreover, the command that must be fulfilled forms in the user's mind shortly before phrase input begins.
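
The synchronization rule just described can be sketched as follows (the head tracker and speech detector are simulated stubs; all names are hypothetical):

```python
import time

class HeadTracker:
    """Simulated head-movement modality: maps head pose to cursor coordinates."""
    def cursor_position(self) -> tuple[int, int]:
        # A real system would compute this from head-pose estimation.
        return (640, 360)

class SpeechChannel:
    """Simulated speech modality: detects onset, then delivers the command."""
    def wait_for_speech_onset(self) -> None:
        time.sleep(0.1)  # stand-in for voice-activity detection

    def recognize(self) -> str:
        return "open"    # stand-in for a speech recognizer's result

def handle_command(tracker: HeadTracker, speech: SpeechChannel) -> None:
    speech.wait_for_speech_onset()
    # Key synchronization rule: latch the cursor position at speech onset,
    # because the head (and thus the cursor) keeps moving while the
    # complete sentence is being pronounced.
    target = tracker.cursor_position()
    command = speech.recognize()
    print(f"Apply '{command}' to the object at {target}")

handle_command(HeadTracker(), SpeechChannel())
```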

Emotion Recognition Multimodal Systems
Map-Based Multimodal Applications
Multimodal Human-Robot Interface Applications
Smart Video Conferencing
Intelligent Homes/Offices
Driver Monitoring
Intelligent Games
E-Commerce
Helping People with Disabilities
