CS8079 - HCI Technical Book

UNIT-I

1. Foundations of HCI

Syllabus
The Human : I/O channels – Memory – Reasoning and problem solving. The Computer :
Devices – Memory – processing and networks. Interaction : Models – frameworks –
Ergonomics – styles – elements – interactivity – Paradigms. Case Studies.

Contents
Introduction to HCI
I/O Channels
Memory
Reasoning and Problem Solving
Computer
Processing and Network
Interaction Models
Ergonomics
Interaction Styles
Paradigms
Case Study
Two Marks Questions with Answers


1.1 Introduction to HCI


 Human - computer interaction is a discipline concerned with the design, implementation and evaluation of interactive computing systems for human use and with the study of the major phenomena surrounding them.
 HCI is the study of people, computer technology and the ways these influence each other. We study HCI to determine how we can make computer technology more usable by people.
 Human : Individual user, a group of users working together, a sequence of
users in an organization
 Computer : Desktop computer, large-scale computer system, Pocket PC,
embedded system, software
 User interface : The parts of the computer with which the user comes into contact
 Interaction : Usually involves a dialog with feedback and control throughout the performance of a task.

1.1.1 HCI Goals


 At the physical level, HCI concerns the selection of the most appropriate input devices and output devices for a particular interface or task.
 Determine the best style of interaction, such as direct manipulation, natural
language (speech, written input), WIMP (windows, icons, menus, pointers), etc.
 The goals of HCI are to produce usable and safe systems, as well as
functional systems. In order to produce computer systems with good usability,
developers must attempt to :
1. Understand the factors that determine how people use technology
2. Develop tools and techniques to enable building suitable systems
3. Achieve efficient, effective, and safe interaction
4. Put people first
 Safety : Protecting the user from dangerous conditions and undesirable situations.
 Users : Operators of nuclear energy plants or bomb-disposal equipment should be able to interact with computer-based systems remotely. The same concern applies to medical equipment in an Intensive Care Unit (ICU).
 Data : Prevent user from making serious errors by reducing risk of wrong
keys/buttons being mistakenly activated.
 Provide user with means of recovering errors.

 Ensure privacy (protect personal information such as habits and address) &
security (protect sensitive information such as passwords, VISA card numbers)
 Usability is one of the key concepts in HCI. It is concerned with making systems easy to learn and use. A usable system is :
1. Easy to learn
2. Easy to remember how to use
3. Effective to use
4. Efficient to use
5. Safe to use
6. Enjoyable to use

1.1.2 Why HCI is Important ?


 HCI has become much more important in recent years as computers and
embedded devices have become commonplace in almost all facets of our lives.
 HCI has expanded rapidly and steadily for three decades, attracting
professionals from many other disciplines and incorporating diverse concepts
and approaches.
 HCI researches the design and use of computer technology, focusing
particularly on the interfaces between people (users) and computers.
 HCI is the study of design, implementation and evaluation.
 Interaction is a concept to be distinguished from another similar term,
interface. Roughly speaking, interaction refers to an abstract model by which
humans interact with the computing device for a given task, and an interface is
a choice of technical realization.
 Usable and efficient interaction with the computing device in turn translates to
higher productivity.
 The spreadsheet interface made business computing a huge success. The
Internet phenomenon could not have happened without the web-browser
interface.
 Smart phones, with their touch-oriented interfaces, have nearly replaced the
previous generation of feature phones.
 Body-based and action-oriented interfaces are now introducing new ways to
play and enjoy computer games.

1.1.3 Factors affecting HCI


 There are a large number of factors which should be considered in the
analysis and design of a system using HCI principles. The main factors are
listed in the table below :
1. Organisation Factors : Training, job design, politics, roles, work organisation
2. Environmental Factors : Noise, heating, lighting, ventilation
3. The User : Cognitive processes and capabilities, Motivation, enjoyment,
satisfaction, personality, experience

4. Comfort Factors : Seating, equipment, layout.
5. User Interface : Input devices, output devices, dialogue structures, use of colour, icons, commands, navigation, graphics, natural language, user support, multimedia.
6. Task Factors : Easy, complex, novel, task allocation, monitoring, skills
7. Constraints : Cost, timescales, budgets, staff, equipment, buildings
8. System Functionality : Hardware, software, application
9. Productivity Factors : Increase output, increase quality, decrease costs,
decrease errors, increase innovation

1.1.4 HCI Principles

1. Know Thy User : This principle simply states that the interaction and interface should cater to the needs and capabilities of the target user of the system in design. In practice, however, HCI designers and implementers often proceed without a full understanding of the user, which this principle warns against.

2. Understand the Task : Task refers to the job to be accomplished by the user
through the use of the interactive system. In fact, understanding the task at hand
is closely related to the interaction modeling and user analysis.

3. Reduce Memory Load : Designing interaction with as little memory load as possible is a principle that also has a theoretical basis. Humans are certainly more efficient in carrying out tasks that require less memory burden, long or short term. Keeping the user’s short-term memory load light is of particular importance with regard to the interface’s role as a quick and easy guide to the completion of the task.

4. Strive for Consistency : In the longer term, one way to unburden the memory
load is to keep consistency. This applies to both within an application and across
different applications and both the interaction model and interface
implementation.

5. Remind Users and Refresh Their Memory : Any significant task will involve the use
of memory, so another good strategy is to employ interfaces that give continuous
reminders of important information and thereby refresh the user’s memory. The
human memory dissipates information quite quickly, and this is especially true
when switching tasks in multitasking situations.

6. Prevent Errors / Reversal of Action : While supporting a quick completion of the task is important, error-free operation is equally important. As such, the interaction and interface should be designed to avoid confusion and mental overload.

1.2 I/O Channels
 Human interaction with the outside world occurs through information being received and sent : these are the input - output channels.
 The human input - output channels include vision, hearing, touch and movement.
 Human vision is a highly complex activity with a range of physical and
perceptual limitations, yet it is the primary source of information for the
average person.
 Vision begins with light. The eye is a mechanism for receiving light and
transforming it into electrical energy. Light is reflected from objects in the
world and their image is focused upside down on the back of the eye.
 Hearing : The sense of hearing is often considered secondary to sight, but we
tend to underestimate the amount of information that we receive through our
ears. The human ear can hear frequencies from about 20 Hz to 15 kHz.
 Touch provides us with vital information about our environment. It tells us
when we touch something hot or cold, and can therefore act as a warning. It
also provides us with feedback when we attempt to lift an object, for example.
 Movement : A simple action such as hitting a button in response to a question
involves a number of processing stages.
 There are five senses : Sight, sound, touch, taste and smell.
 Sight is the predominant sense for the majority of people, and most interactive
systems consequently use the visual channel as their primary means of
presentation, through graphics, text, video and animation.
 However, sound is also an important channel, keeping us aware of our
surroundings, monitoring people and events around us, reacting to sudden
noises, providing clues and cues that switch our attention from one thing to
another. It can also have an emotional effect on us, particularly in the case of
music.
 Music is almost completely an auditory experience, yet is able to alter moods,
conjure up visual images, and evoke atmospheres or scenes in the mind of the
listener.
 Touch, too, provides important information: tactile feedback forms an intrinsic
part of the operation of many common tools - cars, musical instruments, pens,
anything that requires holding or moving. It can form a sensuous bond
between individuals, communicating a wealth of non-verbal information.
 Taste and smell are often less appreciated but they also provide useful
information in daily life : checking if food is bad, detecting early signs of fire,
noticing that manure has been spread in a field, pleasure.

1.2.1 Vision
 Human vision is a highly complex activity. It is the primary source of
information for the normal person.
 Visual perception is divided into two stages :
1. Physical reception of the stimulus from outside world
2. The processing and interpretation of that stimulus
 On the one hand, the physical properties of the eye and the visual system mean that there are certain things that cannot be seen by the human; on the other, the interpretative capabilities of visual processing allow images to be constructed from incomplete information.

1. Human Eye
 Fig. 1.2.1 shows human eye.
 The major structures of the human eye are the sclera, cornea, choroid, ciliary body, iris, pupil, retina, macula and fovea, optic nerve, optic disc, vitreous humor, aqueous humor, canal of Schlemm, lens and conjunctiva.

Fig. 1.2.1 Human eye


 The eye is a mechanism for receiving light and transforming it into electrical
energy. Light is reflected from objects in the world and their image is focused
upside down on the back of the eye.
 The receptors in the eye transform it into electrical signals, which are passed
to brain. The eye has a number of important components as you can see in the
figure.
 Human eyes allow humans to appreciate all the beauty of the world they live
in, to read and gain knowledge, and to communicate their thoughts and
desires to each other through visual expression and visual arts.
 The human eye is wrapped in three layers of tissue : the external layer, formed
by the sclera and cornea; the intermediate layer, divided into two parts:
anterior (iris and ciliary body) and posterior (choroid); and the internal layer,
or the sensory part of the eye, the retina.
 The Müller-Lyer Illusion is one among a number of illusions where a central
aspect of a
simple line image, for example the length, straightness, or parallelism of lines, appears distorted by virtue of other aspects of the image, e.g. other background/foreground lines or other intersecting shapes. These are sometimes called ‘geometrical-optical illusions’.
 Fig 1.2.2 shows Müller-Lyer Illusion.

Fig 1.2.2 Müller-Lyer Illusion


 Which line is longer or shorter ? Most people say that the top line is longer than the bottom. In fact, the two lines are the same length. People try to match areas instead of the length of lines, as shown in Fig. 1.2.3.

Fig 1.2.3 Matching of area

 Limitations of human eyes are color blindness, blind spot, visual acuity
(resolution), overly sensitive to motion etc.
 For each of these limitations, the designer should address the problems it can cause and consider what has to be taken account of in the interface.
 Colour blindness means that a significant proportion (do you know how
many ?) of the population cannot disambiguate red and green.
 So do not use colour as the only distinguishing factor on buttons or displays, and be aware that red and green next to each other may be viewed as the same by some users, influencing visualizations and so on.

2. Human Ear
 Sounds can vary in a number of dimensions : pitch (or frequency), timbre (or
musical tone), and intensity (or loudness).

 Not all sounds or variations in sound are audible to humans. The ear is capable
of hearing frequencies ranging from about 20 Hz up to about 15 kHz, and
differences of around 1.5 Hz can be discerned (though this is less accurate at
high frequencies).
 The human ear is divided into three sections : the outer ear, the middle ear and the inner ear.
 The outer ear consists of the pinna and the auditory canal. The pinna is the visible part attached to the head, and the auditory canal is the passage along which sound waves are passed to the middle ear.
 The pinna and auditory canal serve to amplify some sounds.
 In contrast to vision, sound is a "volatile" medium in the sense that sounds do
not persist in the same way that visual images do.
 The ear receives the vibrations and transmits them through various stages to
the auditory nerves. The ear comprises three sections: outer ear, middle ear
and inner ear.
 Speech sounds can obviously be used to convey information. This is useful not
only for the visually impaired but also for any application where the user’s
attention has to be divided (for example, power plant control, flight control,
etc.).
 Uses of non-speech sounds include the following :
a) Attention : To attract the user’s attention to a critical situation or to the
end of a process.
b) Status information : Continuous background sounds can be used to convey status information. For example, monitoring the progress of a process without the need for visual attention.
c) Confirmation : Sound associated with an action to confirm that the action has been carried out. For example, associating a sound with deleting a file.
d) Navigation : Using changing sound to indicate where the user is in a system. For example, what about sound to support navigation in hypertext ?

3. Touch
 Touch is our primary non-verbal communication channel for conveying intimate emotions and as such is essential for our physical and emotional wellbeing.
 Skin contains three types of sensory receptor : thermoreceptors, nociceptors and mechanoreceptors.
 Thermoreceptors detect changes in temperature (heat and cold).
 Nociceptors detect intense pressure, heat or pain.
 Mechanoreceptors detect mechanical forces such as pressure.

1.3 Memory
 Memory is a vital part of how we perceive the world around us. Human beings
have both short term and long term memory capacities and we can create
better designs by understanding how memory works and how we can work
with that capacity rather than against it.
 Memory is a collection of systems for the storage and recall of information
(personal experiences, emotions, facts, procedures, skills and habits).
 Fig. 1.3.1 shows structure of memory.

Fig. 1.3.1 Structure of memory


 There are three types of memory or memory function :
1. Sensory buffers,
2. Short - term memory or working memory
3. Long - term memory.

1.3.1 Sensory Memory


 Sensory memory is the shortest-term element of memory. It is the ability to
retain impressions of sensory information after the original stimuli have ended.
 It acts as a kind of buffer for stimuli received through the five senses of sight,
hearing, smell, taste and touch, which are retained accurately. Short-term
memory can be accessed rapidly, in the order of 70 ms.
 The sensory memories act as buffers for stimuli received through the senses.
 Information is passed from sensory memory into short-term memory by
attention, thereby filtering the stimuli to only those which are of interest at a
given time.
 An example of this form of memory is when a person sees an object briefly
before it disappears. Once the object is gone, it is still retained in the memory
for a very short period of time. The two most studied types of sensory memory
are iconic memory (visual) and echoic memory (sound).
 Storage of information in sensory memory does not depend on attention to the stimulus.
 Information in sensory memory is stored in a modality-specific form. For instance, auditory information is stored only in echoic memory, and visual information only in iconic memory.

1.3.2 Short Term Memory
 Short - term memory, also known as primary or active memory.
 Short - term memory acts as a kind of “scratch-pad” for temporary recall of the
information which is being processed at any point in time.
 It is the "smallest" part of memory, because it cannot hold much information
at any one time. Its size can be estimated by measuring memory span.
 Information is usually stored in short - term memory in terms of the physical
qualities of the experience, such as what we see, do, taste, touch or hear.

1.3.3 Long Term Memory


 Long - term memory is intended for the storage of information over a long period of time. There are two types of long-term memory : episodic memory and semantic memory.
 In long - term memory, information is primarily stored in terms of its meaning
or semantic codes.
 Constructive Processing : Re-organizing or updating long-term memories on
basis of logic, reasoning, or adding new information.
 Pseudo-Memory : False memories that a person believes are true or accurate.
 Network Model : Memory model that views the structure of long - term
memory as an organizational system of linked information.
 Re-integration : One memory can serve as a cue to trigger another memory.
 Episodic memory represents our memory of events and experiences in a serial
form. It is from this memory that we can reconstruct the actual events that
took place at a given point in our lives.
 Episodic memory is a part of the long-term memory responsible for storing
information about events (i.e. episodes) that we have experienced in our lives.
 It involves conscious thought and is declarative. An example would be a
memory of our 1st day at school.
 The knowledge that we hold in episodic memory focuses on “knowing that”
something is the case (i.e. declarative). For example, we might have an
episodic memory for knowing that we caught the bus to college today.
 Semantic memory is a structured record of facts, concepts and skills that we
have acquired. The information in semantic memory is derived from that in our
episodic memory, such that we can learn new facts or concepts from our
experiences.

 Semantic memory is structured in some way to allow access to information, representation of relationships between pieces of information, and inference.
 Semantic memory includes things that are common knowledge, such as the names of colors, the sounds of letters, the capitals of countries and other basic facts acquired over a lifetime.

Fig. 1.3.2 Semantic network

 One model for the way in which semantic memory is structured is as a network. Items are associated to each other in classes, and may inherit attributes from parent classes. This model is known as a semantic network.
 Semantic memory typically refers to memory for word meanings, facts,
concepts, and general world knowledge.
 Fig. 1.3.2 shows semantic memory network for FARM.
 Notice that the primary category in this network is ''FARM'' and the memory
organization shows smaller subcategories associated with a farm (machines,
animals, crops).
 Each sub-node would have additional information. For example, the sheep node (connected to the animal node) might in turn connect to further facts about sheep, such as its own properties and attributes.

 So memory is organized through groups of categorical information called nodes and their associated pieces of information.
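
The farm network described above can be sketched as a small data structure. The following Python sketch is purely illustrative (the class names and attributes are invented for this example, not taken from the text): each node stores its own facts and inherits others from its parent class.

```python
# Minimal sketch of a semantic network with inheritance (illustrative only).
class Node:
    def __init__(self, name, parent=None, **facts):
        self.name = name          # e.g. "sheep"
        self.parent = parent      # link to a more general class, e.g. "animal"
        self.facts = facts        # attributes stored locally on this node

    def lookup(self, attribute):
        """Return an attribute, inheriting from parent classes if needed."""
        if attribute in self.facts:
            return self.facts[attribute]
        if self.parent is not None:
            return self.parent.lookup(attribute)   # inherit from parent node
        return None

farm   = Node("farm")
animal = Node("animal", parent=farm, has_legs=True)
sheep  = Node("sheep",  parent=animal, covering="wool")

print(sheep.lookup("covering"))   # 'wool'  - stored on the sheep node itself
print(sheep.lookup("has_legs"))   # True    - inherited from the animal node
```

The inheritance in the lookup is what lets the network avoid storing common attributes (such as "has legs") on every individual node.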
 Two common types of semantic information are conceptual and propositional
knowledge. A concept is a mental representation of something, such as a
panther, and knowledge of its similarity to other concepts.
 A proposition is a mental representation of conceptual relations that may be evaluated to have a truth value : for example, the knowledge that a panther is a jungle cat, that it has four legs, or that panthers do not have gills.
 Procedural memory is the part of long - term memory responsible for knowing how to do things, i.e. memory of motor skills. It does not involve conscious thought and is not declarative. For example, procedural memory would involve knowledge of how to ride a bicycle.

1.3.4 Difference between Short - Term Memory and Long - Term Memory

Short term memory | Long term memory
Capacity is limited. | Capacity is much larger.
Information is stored for a shorter time. | Information is stored for a longer time.
Information is usually stored in short-term memory in terms of the physical qualities of the experience, such as what we see, do, taste, touch or hear. | In long-term memory, information is primarily stored in terms of its meaning or semantic codes.
STM is stored and retrieved sequentially. | LTM is stored and retrieved by association.
Short-term memory is utilized to retain information temporarily. | Long-term memory is utilized more or less at all times.

1.4 Reasoning and Problem Solving


 Thinking is usually considered to be the process of mentally representing some
aspects of the world and transforming these representations so that new
representations, useful to our goals, are generated.
 Thinking is often regarded as a conscious process, in which we are aware of
the process of transforming mental representations and can reflect on thought
itself.
 Problem solving and reasoning are two key types of thinking. Problem solving
encompasses the set of cognitive procedures and thought processes that we
apply to reach a goal when we must overcome obstacles to reach that goal.
 Reasoning encompasses the cognitive procedures we use to make inferences
from knowledge and draw conclusions. Reasoning can be part of problem
solving.

1.4.1 Reasoning
 Reasoning is the process by which we use the knowledge we have to draw
conclusions or infer something new about the domain of interest.
 Types of reasoning : Deductive, Inductive and Abductive.

1. Deductive reasoning :
 Deductive reasoning is a basic form of valid reasoning. Deductive reasoning, or
deduction, starts out with a general statement, or hypothesis, and examines
the possibilities to reach a specific, logical conclusion.
 Deductive reasoning derives the logically necessary conclusion from the
given premises. For example : "All men are mortal. Harold is a man.
Therefore, Harold is mortal."
 Example : When it rains, Rakshita’s old car won’t start. It’s raining.
Therefore, Rakshita’s old car won’t start.
 For deductive reasoning to be sound, the hypothesis must be correct. It is
assumed that the premises, "All men are mortal" and "Harold is a man" are
true. Therefore, the conclusion is logical and true.
 In deductive reasoning, if something is true of a class of things in general, it is
also true for all members of that class.

2. Inductive reasoning :
 Inductive reasoning is the opposite of deductive reasoning. Inductive reasoning
makes broad generalizations from specific observations. Basically, there is
data, then conclusions are drawn from the data.
 Example : Rakshita’s old car won’t start. It’s raining. Therefore
Rakshita’s old car won’t start when it’s raining.
 Inductive reasoning involves drawing conclusions from facts, using logic. We
draw these kinds of conclusions all the time. If someone we know to have good
literary taste recommends a book, we may assume that means we will enjoy
the book.
 Induction can be strong or weak. If an inductive argument is strong, the truth
of the premise would mean the conclusion is likely. If an inductive argument is
weak, the logic connecting the premise and conclusion is incorrect.

3. Abductive reasoning :
 The third type of reasoning is abduction. Abduction reasons from a fact to the
action or state that caused it. This is the method we use to derive explanations
for the events we observe.

 For example, suppose we know that Sam always drives too fast when she has
been drinking. If we see Sam driving too fast we may infer that she has been
drinking.

1.4.2 Difference between Deductive and Inductive Reasoning

Deductive reasoning | Inductive reasoning
Deduction moves from idea to observation. | Induction moves from observation to idea.
Deduction moves from more general to more specific. | Induction moves from more specific to more general.
Deductive reasoning is reasoning where true premises develop a true and valid conclusion. | Inductive reasoning is reasoning where the premises support the conclusion.
Deduction is more precise and quantitative. | Induction is more general and qualitative.
If the premises are true, the conclusion must be true. | If the premises are true, the conclusion is probably true.

1.4.3 Problem Solving


 If reasoning is a means of inferring new information from what is already
known, problem solving is the process of finding a solution to an unfamiliar
task, using the knowledge we have.
 Human problem solving is characterized by the ability to adapt the information
we have to deal with new situations.
 According to Gestalt theory, problems are solved in one of two ways : productive and reproductive.
 Reproductive problem solving draws on previous experience as the
behaviorists claimed, but productive problem solving involves insight and
restructuring of the problem.
 Indeed, reproductive problem solving could be a hindrance to finding a
solution, since a person may ‘fixate’ on the known aspects of the problem and
so be unable to see novel interpretations that might lead to a solution.

Gestalt Theory :
 Gestalt theory focused on the mind’s perceptual processes.
 Although Gestalt theory is attractive in terms of its description of human
problem solving, it does not provide sufficient evidence or structure to support
its theories. It does not explain when restructuring occurs or what insight is.
 The Gestalt Principles are a set of laws arising from 1920s’ psychology,
describing how humans typically see objects by grouping similar elements,
recognizing patterns and simplifying complex images.

 Gestalt theorists followed the basic principle that the whole is greater than the
sum of its parts. In other words, the whole (a picture, a car) carried a different
and altogether greater meaning than its individual components (paint, canvas,
brush; or tire, paint, metal, respectively).
 In viewing the “whole,” a cognitive process takes place –the mind makes a leap
from understanding the parts to realizing the whole.
 Perhaps the best known example of a gestalt is the vase/face profile, which is explained by the Gestalt principles, in particular the figure/ground principle.

 Gestalt views on learning and problem-solving were opposed to the then-dominant pre - behaviourist and behaviourist views. They emphasized the importance of seeing the whole structure of the problem.
 This perspective stresses insightful learning, which has the following properties :
1. Transition from pre-solution to solution is sudden and complete.
2. When problem solution is found, performance is smooth and without errors.
3. Insightful learning results in longer retention.
4. The principle learned by insight can easily be applied to other problems.

Problem Space Theory :


 Problem space is a framework for defining a problem and finding a solution.
Newell and Simon proposed this concept.
 The problem space consists of problem states and problem solving involves
generating these states using legal state transition operators.

 The problem has an initial state and a goal state. Problem space may be huge
and heuristics are employed to select appropriate operators to reach the goal.
 Means-end analysis : In this analysis, the initial state is compared with the goal
state and an operator chosen to reach the goal.
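
A rough, illustrative sketch of this idea in code is given below. The example problem, the operators and the heuristic are invented for illustration only; the sketch uses a simple greedy rule (pick the operator whose result is closest to the goal), which captures the spirit of comparing the current state with the goal but omits the sub-goaling that full means-end analysis uses.

```python
# Sketch of problem-space search: states, operators, and a simple
# means-end style heuristic. Greedy selection can get stuck on some
# problems; it is only meant to illustrate the idea.

def solve(start, goal, operators, max_steps=20):
    state, path = start, []
    for _ in range(max_steps):
        if state == goal:
            return path                           # goal state reached
        # compare each operator's result with the goal and choose the
        # one that reduces the difference the most
        name, result = min(
            ((name, op(state)) for name, op in operators.items()),
            key=lambda pair: abs(goal - pair[1]),
        )
        state = result
        path.append(name)
    return None                                   # no solution within limit

operators = {"add 1": lambda s: s + 1, "double": lambda s: s * 2}
print(solve(start=1, goal=10, operators=operators))
# ['add 1', 'double', 'double', 'add 1', 'add 1']   (1 -> 2 -> 4 -> 8 -> 9 -> 10)
```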

1.4.4 Error and Mental Model


 A mental model is based on belief, not facts : that is, it's a model of what users
know (or think they know) about a system such as your website.
 Hopefully, users' thinking is closely related to reality because they base their
predictions about the system on their mental models and thus plan their future
actions based on how that model predicts the appropriate course.
 It's a prime goal for designers to make the user interface communicate the
system's basic nature well enough that users form reasonably accurate mental
models.
 Individual users each have their own mental model. A mental model is internal
to each user's brain, and different users might construct different mental
models of the same user interface.
 Further, one of usability's big dilemmas is the common gap between designers'
and users' mental models. Because designers know too much, they form
wonderful mental models of their own creations, leading them to believe that
each feature is easy to understand.
 People build their own theories to understand the causal behavior of systems.
These have been termed mental models.
 What people perceive is completely subjective and depends on the way things
appear to them.
 For example, imagine that someone tells a kid a frightening story about
swimming. The child will hold that image in his mind for a long time and, thus,
think of swimming as a perilous thing, until external forces contradict that idea
and he learns to see things differently.
 Similarly, for some, investing in stocks is a risky affair. A person’s mental
model that investing in the stock market is risky guides that person’s decision
not to invest in stocks.

1.5 Computer
 A computer system comprises various elements, each of which affects the user of the system. It uses input devices and output display devices.
 A typical computer system consists of a keyboard, mouse and screen, and comes in different configurations such as desktop, laptop and PDA.

 Computer system is made up of various elements. Each of these elements
affects the interaction.
1. Input devices - Text entry and pointing
2. Output devices - Screen (small & large), digital paper
3. Virtual reality - Special interaction and display devices
4. Physical interaction - e.g. sound, haptic, bio-sensing
5. Paper - As output (print) and input (scan)
6. Memory - RAM & permanent media, capacity & access
7. Processing - Speed of processing, networks

Factors affecting the Computer


 Speed : It refers to the speed of your computer's CPU. Hardware makers
measure a CPU's clock speed using a unit called gigahertz. A CPU running at 3
GHz processes data faster than one running at 1 GHz. The speed at which data
flows from the CPU to applications also affects the computer's speed.
 Interface : A graphical interface may involve a set of actions that the user can invoke by use of the mouse, and the designer must decide whether to present each action as a ‘button’ on the screen, which is always visible, or to hide all of the actions in a menu which must be explicitly invoked before an action can be chosen.
 Widgets : Elements of WIMP interfaces are called widgets, and they comprise the toolkit for interaction between user and system. Widgets embody both input and output languages, so we consider them as interaction objects. The appropriate choice of widgets, and of the wording in menus and buttons, helps users know how to use them for a particular selection or action.
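
As a concrete illustration of a widget acting as an interaction object, the short sketch below uses Python's Tkinter toolkit (chosen only for illustration; the label text and callback name are invented). The button accepts input (a click) and the label presents the resulting output.

```python
# Minimal widget sketch: the button is an interaction object that accepts
# input (a click) and the label presents output (the updated text).
import tkinter as tk

root = tk.Tk()
status = tk.Label(root, text="No file saved yet")

def on_save():
    # output language: reflect the new system state back to the user
    status.config(text="File saved")

save_button = tk.Button(root, text="Save", command=on_save)  # input language
save_button.pack()
status.pack()
root.mainloop()
```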

Level of Interaction : Batch Processing


 Batch processing simply means running programmes in batches without user
intervention. It dates from the time when only large organisations could afford
computers, and then just one massive mainframe.
 From an HCI perspective, however, batch processing had two critical flaws.
 The first of these is the fact that a computer could only handle one program,
and one programmer, at a time. Considering the initial expense of the
computer, this limitation was prohibitive in allowing the full worth and
potential of the machine to be achieved.
 This problem was eventually solved with new interaction devices, such as a
keyboard, and time-sharing techniques allowing several operators parallel
access to a single mainframe computer.

 The second HCI issue with batch processing was the inability for programmers
to interact with the computer while programs were running. The significance
of this was a program had to be completely arranged on punched cards before
being processed by the system.
 There was no function or ability to interactively check each line, or chunk, of
code as it is written as programmers do today. This made errors very time
consuming and difficult to identify.
 In addition, batch processing allowed for very little of the programming
creativity we see today. It was not until time-sharing provided the first multi-
access interactive computers in the 1960s that programmers were able to treat
their craft as a creative art.

1.5.1 Device
 The devices dictate the styles of interaction that the system supports.
 If we use different devices, then the interface will support a different style of
interaction.

Qwerty keyboard
 QWERTY refers to the arrangement of keys on a standard English computer
keyboard or typewriter.
 The name derives from the first six characters on the top alphabetic line of the
keyboard.
 Traditional keyboards allow only one key press at a time. A higher rate of data entry is possible if multiple keys can be pressed simultaneously.
 The piano keyboard allows several finger presses at once and recognizes
different pressures and durations.
 Chording : Chords represent several characters or an entire word.
 Fig. 1.5.1 shows QWERTY keyboard.

Fig. 1.5.1 Layout of QWERTY keyboard

 As you are reading this text, take a little break and look at your keyboard.
Seriously. Look only at the letter keys and start reading from the top left. It
starts off like this: Q - W - E - R - T - Y. This is also referred to as QWERTY.

 Now look at another device with a keyboard - perhaps another computer, a smart phone or a tablet. Devices like tablets don't have a physical keyboard, but if you use an application in which typing is needed, a virtual keyboard comes up. Take a look at the layout of the letters. QWERTY again!
 The layout was originally developed for mechanical typewriters. These have keys, just like a modern-day keyboard, but when you press a key, a mechanical arm is moved to type a character on a piece of paper.
 When you start typing fast, these arms can jam easily. The designer, Christopher Sholes, examined common letter combinations in the English language and made sure to place these letters far apart from each other. This resulted in fewer jams and, in effect, more efficient typing; however, by today's standards, the QWERTY layout actually slows down typing, since jamming mechanical arms are no longer an issue.

Dvorak keyboard
 The Dvorak keyboard is an ergonomic alternative to the layout commonly
found on typewriters and computers known as “Qwerty”.
 The Qwerty keyboard was designed in the 1870s to accommodate the slow
mechanical movement of early typewriters.
 Fig. 1.5.2 shows Dvorak keyboard.

Fig. 1.5.2 Dvorak Keyboard

 When it was designed, touch typing literally hadn’t even been thought of yet!
It’s hardly an efficient design for today’s use. By contrast, the Dvorak keyboard
was designed with emphasis on typist comfort, high productivity and ease of
learning; it’s much easier to learn.
 It is biased towards the right hand.

Chord Keyboard
 Few keys (4 to 5 keys) are used. Fig. 1.5.3 shows chord keyboard.
 Letters are produced by pressing one or more of the keys at once.


Fig. 1.5.3 Chord keyboard

 Letters are produced as combinations of key presses. The chord keyboard is compact in size and ideal for portable applications.

Phone Pads and T9 entry


 T9 is the name given to the prediction algorithm that uses a dictionary to guess the word the user is trying to write. For example, the key '2' stands for 'A', 'B' or 'C', so a sequence such as '222' could correspond to any three-letter combination of those letters; the dictionary is used to pick the intended word. 'CODE' is typed as '2633'.
 Most phones have at least two modes for the numeric buttons : one where the keys mean the digits and one where they mean letters.
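
A minimal sketch of the dictionary-lookup idea behind this kind of predictive entry is shown below. The keypad mapping is the standard phone layout; the word list is a toy stand-in for the much larger dictionary a real phone ships with.

```python
# Sketch of T9-style predictive entry: map each letter to its key digit,
# then look up which dictionary words match the typed digit sequence.
KEYPAD = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
          '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}

# invert the keypad: letter -> digit
LETTER_TO_DIGIT = {letter: digit for digit, letters in KEYPAD.items()
                   for letter in letters}

def to_digits(word):
    """Encode a word as the digit sequence used to type it."""
    return ''.join(LETTER_TO_DIGIT[ch] for ch in word.lower())

def predict(digits, dictionary):
    """Return all dictionary words that the digit sequence could mean."""
    return [word for word in dictionary if to_digits(word) == digits]

words = ["code", "bode", "cab", "and"]          # toy dictionary
print(to_digits("code"))                        # '2633'
print(predict("2633", words))                   # ['code', 'bode']
```

When several words share the same digit sequence (as 'code' and 'bode' do here), the phone offers the most likely word first and lets the user cycle through the alternatives.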

Handwriting recognition
 Handwriting Recognition (HWR), also known as Handwritten Text Recognition
(HTR), is the ability of a computer to receive and interpret intelligible
handwritten input from sources such as paper documents, photographs, touch-
screens and other devices.
 A handwriting recognition system's sensors can include a touch-sensitive
writing surface or a pen that contains sensors that detect angle, pressure and
direction. The software translates the handwriting into a graph and
recognizes the small changes in a person's handwriting from day to day and
over time.

Speech recognition
 Speech recognition is a promising area of text entry, but it has been promising
for a number of years and is still only used in very limited situations.

 There is a natural enthusiasm for being able to talk to the machine and have
it respond to commands, since this form of interaction is one with which we
are very familiar.

1.5.2 Memory
 Human short-term memory is volatile and has a limited capacity. Computer
RAM has essentially the same characteristics.
 Your long-term memory is something like a computer’s hard drive. Both of
them take longer to respond, but can store a considerable quantity of data.
1. RAM and Short-term Memory
 RAM is small, both in physical size and in the amount of data it can hold. It's
much smaller than your hard disk.
 A typical computer may come with 256 million bytes of RAM and a hard disk
that can hold 40 billion bytes.
 Random Access Memory (RAM) is a computer's short-term memory. None of
your programs, files, or Netflix streams would work without RAM, which is
your computer's working space.
 RAM is called "random access" because any storage location can be accessed
directly.
 Random Access Memory (RAM) is a general-purpose memory that usually
stores the user data in a program. RAM is volatile in the sense that it cannot
retain data in the absence of power; i.e., data is lost after the removal of
power.
 The RAM in a system is either Static RAM (SRAM) or Dynamic RAM (DRAM).
The SRAMs are fast, with access time in the range of a few nanoseconds, which
makes them ideal memory chips in computer applications.
 DRAMs are slower and because they are capacitor based they require
refreshing every several milliseconds. DRAMs have the advantage that their
power consumption is less than that of SRAMs.
 Most microcontrollers have some amount of internal RAM, commonly 256
bytes, although some microcontrollers have more and some have less.
2. Disks and Long-term Memory
 For most computer users the long term memory consists of disks, possibly with
small tapes for backup. The existence of backups, and appropriate software to
generate and retrieve them, is an important area for user security.
 There are two main kinds of technology used in disks: magnetic disks and optical
disks.
 The most common storage media, floppy disks and hard (or fixed) disks, are
coated with

magnetic material, like that found on an audio tape, on which the information is
stored.
 With disks there are two access times to consider, the time taken to find the
right track on the disk, and the time to read the track.
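
As a rough, back-of-the-envelope illustration of how these components add up, the sketch below uses example figures that are typical orders of magnitude for a magnetic hard disk, not values taken from the text.

```python
# Rough, illustrative disk-access estimate (example figures only):
seek_time_ms        = 9.0    # time to move the head to the right track
rotational_delay_ms = 4.2    # average wait for the sector (7200 rpm ~ 8.3 ms/rev)
transfer_rate_mb_s  = 150.0  # sustained read rate once on the track

block_mb = 1.0               # size of the data being read
transfer_ms = block_mb / transfer_rate_mb_s * 1000

total_ms = seek_time_ms + rotational_delay_ms + transfer_ms
print(f"~{total_ms:.1f} ms to read {block_mb} MB")   # ~19.9 ms
```

Even in this rough estimate, the mechanical delays (seek and rotation) dominate small reads, which is why disk access is so much slower than RAM access.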
1.6 Processing and Network
 Speed of processing can seriously affect the user interface. These effects are
considered while designing an interactive system.
Myth of the infinitely fast machine :
 The adverse effects of slow processing are made worse because the designers
labor under the myth of the infinitely fast machine. That is, they design and
document their systems as if response will be immediate.
 Rather than blithely hoping that the eventual machine will be ‘fast enough’, the
designer ought to plan explicitly for slow responses where these are possible.
 A good example, where buffering is clear and audible (if not visible) to the
user, is telephones.
 Even if the user gets ahead of the telephone when entering a number, the
tones can be heard as they are sent over the line. Now this is probably an
accident of the design rather than deliberate policy, as there are so many other
problems with telephones as interfaces. However, this type of serendipitous
feedback should be emulated in other areas.
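
One practical way to plan explicitly for slow responses is to acknowledge the user's input immediately and keep giving feedback while the slow work happens in the background. The sketch below is illustrative only; the timings and function names are invented.

```python
# Sketch: give immediate feedback while a slow operation runs in the
# background, rather than assuming the machine is "infinitely fast".
import threading, time

def slow_operation():
    time.sleep(3)                      # stands in for a slow search, save, etc.
    print("\nOperation finished")

print("Working...", end="", flush=True)        # immediate acknowledgement
worker = threading.Thread(target=slow_operation)
worker.start()
while worker.is_alive():
    print(".", end="", flush=True)             # ongoing progress feedback
    time.sleep(0.5)
worker.join()
```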

Limitations on Interactive Performance


1. Computation bound : The computation itself takes too long, for example when using find/replace in a large document.
2. Storage channel bound : The speed of memory access can interfere with interactive performance.
3. Graphics bound : Most computers include a special purpose graphics card to handle many of the most common graphics operations, but drawing complex graphics can still limit interactive performance.
1.7 Interaction Models
 Interaction requires at least two participants; here we consider the user and the system as the two participants. The interface must therefore effectively translate between them to allow the interaction to be successful.
 The purpose of an interactive system is to help a user in accomplishing goals
from some application domain. A domain defines an area of expertise and
knowledge in some real- world activity.
 Examples of domains are graphic design, authoring and process control in a factory. A domain consists of concepts that highlight its important aspects.

o Tasks are operations to manipulate the concepts of a domain. A goal is
the desired output from a performed task.
o An interaction model is a design model that binds an application together
in a way that supports the conceptual models of its target users.
o It enables designers, developers, and stakeholders to understand and
explain how users move from objects to actions within a system.

1.7.1 The Execution–Evaluation Cycle


 Phases of an interactive cycle are : Execution and Evaluation.

 Norman’s model of interaction has the following stages :
1. User establishes the goal
2. Formulates intention
3. Specifies actions at interface
4. Executes action
5. Perceives system state
6. Interprets system state
7. Evaluates system state with respect to goal
Stage 1 is Forming a Goal. This is what you want. As an example, I might want a
place that I can relax outside that won’t get muddy and that I don’t have to move
my outdoor furniture around to mow.
Stage 2 is Forming the Intention. This is what would satisfy the goal. A deck
would satisfy my goal of place to relax outdoors that won’t get muddy or be in the
way of mowing.
Stage 3 is Specifying an Action. What do I have to do to achieve the intention ?
I would need to build a deck to meet the requirements set in my goals.
Stage 4 is Executing the Action. Here I would do the steps of the action. I would
build the deck.
Stage 5 is Perceiving the State of the World. Using the senses to gather
information. My finished deck would be off the ground and have my outdoor
furniture on it.
Stage 6 is Interpreting the State of the World. What has changed ? My furniture
is off the ground away from the mud and no longer has to be moved to mow the
lawn.
Stage 7 is Evaluating the Outcome. Did I achieve my goal ? I can relax outdoors
now without worrying about mud or moving furniture. I achieved my goal.

1.7.2 Mistake and Slips

1. Mistake
 Mistakes are errors in choosing an objective or specifying a method of achieving it.
 Examples of mistakes include : making a poor judgment when overtaking, leaving insufficient room to complete the manoeuvre in the face of oncoming traffic.
 Classification of mistakes is skill - based, rule - based, and knowledge-based.
1. Skill based behaviour : Skill - based behavior occurs when workers are
extremely expert at their jobs, so they can do the everyday, routine tasks with
little or no thought or conscious attention. The most common form of errors in
skill-based behavior is slips.
2. Rule based behaviour : Rule-based behavior occurs when the normal
routine is no longer applicable but the new situation is one that is known, so
there is already a well-prescribed course of action: a rule.
3. Knowledge based behaviour : Knowledge - based procedures occur when
unfamiliar events occur, where neither existing skills nor rules apply. In this
case, there must be considerable reasoning and problem-solving. Plans might
be developed, tested, and then used or modified.
 To reduce mistakes :
a) To avoid rule - based mistakes, increase worker situational awareness of high
- risk tasks on site and provide procedures for predictable non-routine, high -
risk tasks.
b) To avoid knowledge-based mistakes, ensure proper supervision for
inexperienced workers and provide job aids and diagrams to explain
procedures.

2. Slip
 Slips are errors in carrying out an intended method for reaching an objective.
 Examples of slips include : performing an action too soon in a procedure, or
leaving it too late, e.g. not putting your ear defenders on before starting the
drill.

Different kinds of slips


 Slips are of three types : Capture, Description, Mode

1. Capture Error :
 A type of slip where a more frequent and more practiced behavior takes place
when a similar, but less familiar, action was intended.
 Examples include telling someone your home phone number when you
intended to give your work number or typing your name when you intended to
type another word that begins with the same few letters.
2. Description Error :
 Performing the right action for the wrong object, e.g. pouring your juice on
your cereal in the morning instead of the milk.

3. Mode Error :
 A type of slip where a user performs an action appropriate to one situation in
another situation, common in software with multiple modes.
 Examples include drawing software, where a user tries to use one drawing tool
as if it were another (e.g. brushing with the Fill tool), or text editors with both
a command mode and an insert mode, where a user accidentally types
commands and ends up inserting text.
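
The command/insert situation can be sketched in a few lines. The toy modal editor below is not any real editor's behaviour or API; it simply shows that the same keystrokes mean different things in different modes, which is exactly where mode errors arise.

```python
# Toy modal editor: the same keystroke means different things depending
# on the current mode, which is the root cause of mode errors.
class ModalEditor:
    def __init__(self):
        self.mode = "command"
        self.text = ""

    def key(self, ch):
        if self.mode == "command":
            if ch == "i":
                self.mode = "insert"        # command: switch to insert mode
            elif ch == "x":
                self.text = self.text[:-1]  # command: delete last character
            # other keys are ignored as unrecognized commands
        else:
            self.text += ch                 # insert mode: keys become text

editor = ModalEditor()
for ch in "hello":            # user believes they are inserting text...
    editor.key(ch)
print(repr(editor.text))      # '' - still in command mode, nothing inserted

editor.key("i")               # actually switch to insert mode first
for ch in "hello":
    editor.key(ch)
print(repr(editor.text))      # 'hello'
```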
 To reduce slips :
1. Make all workers aware that slips and lapses do happen;
2. Use checklists to help confirm that all actions have been completed;
3. Include in your procedures the setting out of equipment, site layout and
methods of work to ensure there is a logical sequence;
4. Make sure checks are in place for complicated tasks; and
5. Try to ensure distractions and interruptions are minimized, e.g. mobile phone
policy

1.7.3 Difference between Slip and Mistake

Slip | Mistake
Slips are errors in carrying out an intended method for reaching an objective. | Mistakes are errors in choosing an objective.
Error in executing the action. | Error in formulating the intention and action.
A slip is a correct plan but an incorrect action. | A mistake is an incorrect plan.
Typically found in skilled behavior. | Typically found in rule-based behavior or problem-solving behavior.
Reflects skilled behavior. | Reflects an incorrect mental model.
Example : omitting a step or series of steps from a task. | Example : making a poor judgment when overtaking.
To reduce slips : use checklists to help confirm that all actions have been completed. | To reduce mistakes : increase situational awareness of high-risk tasks on site.
Slips occur when : 1. The task is very familiar and requires little thought; 2. People confuse two similar tasks. | Mistakes occur when : 1. Doing too many things at the same time; 2. Doing too many complex tasks at once; 3. Time pressures.

1.7.4 Interaction Framework


 Interaction : The communication between the user and the system
 Domain : Area of expertise and knowledge in real-world activity
 Tasks : Operations to manipulate the concepts of a domain
 Goal : Desired output from a performed task
 Intention : Specific action required to meet goal.
 Task Analysis : Identification of problem space for the user in terms of the
domain, goals intentions and tasks.
 Fig. 1.7.1 shows general interaction framework.
Interaction framework has 4 parts : User, Input, System and Output

Fig. 1.7.1(a) General interaction framework Fig. 1.7.1(b) Translation between components

 Each component has its own unique language. Interaction necessitates translation between these languages.
 Problems in interaction occur when translation between one language and the next is difficult, or impossible.
1. User intentions are translated into actions at the interface,
2. which are translated into alterations of the system state,
3. which in turn are reflected in the output display,
4. which is interpreted by the user.
 Some systems are harder to use than others.
 Gulf of Execution - User’s formulation of actions may be different to those
actions allowed by the system.
 Gulf of Evaluation - User’s expectation of the changed system state may be
different to the

actual presentation of this state.
 Fig. 1.7.2 shows Framework for human computer interaction adapted from
ACM SIGCHI curriculum development group.
 Dialog design and interface styles can be placed particularly along the input
branch of the framework, addressing both articulation and performance.
 Presentation and screen design relate to the output branch of the framework.
 The entire framework can be placed within a social and organizational context
that also affects the interaction. Each of these areas has important implications
for the design of interactive systems and the performance of the user.

Fig. 1.7.2 Framework for human computer interaction adapted from ACM SIGCHI curriculum
development group.

What influence does the social environment in which you work have on your interaction with
the computer ? What effect does the organization (commercial or academic) to which you
belong have on the interaction ?
Ans. : • The aim is to get the student to explore the social and environmental influences which affect interaction, often without the user being aware of them.
 The particular influences will vary from environment to environment but the
student should be encouraged to consider some or all of the following.
 Work context : Is the work shared ? Are the machines shared ?
 Peer pressure : Is there pressure to compete or impress ?
 Management pressure : Is there pressure to achieve ? Is the interaction carried out in the presence of management ?
 Motivation : What motivates the interaction ? Does this encourage or discourage experimentation ?

 Organizational goals : What is the objective of the organization ? (profit ?,
education ?, etc.) How does this affect the interaction ?
 Organizational decision making - Who determines the system that you use ?
 Do you have any choice or influence ? Does this influence the way you interact
with the system ?
 In each case, the student should discuss what influence this may have on the interaction. It may be helpful to consider other possible environments in order to identify how the interaction would differ under these different circumstances.
 For example, if the student currently shares the machine with colleagues,
would his/her interaction practice change if she/he was given a private
machine ?
There are four main translations involved in the interaction framework viz. articulation,
performance, presentation and observation.
i) The compact disk player has a button for power off. However its
remote control does not have a power off button.
ii) It is difficult in the command line interface to determine the result
of copying and moving files in a hierarchical file system.
iii) The user is unable to figure out which switches from the bank to turn on to light the front portion of a classroom.
iv) The user is unable to know whether the voice recorder is in playing or recording state.
Specify in each of the above four cases which of the interaction framework translations are in
effective.
Ans. :

i) Performance : Performance is the interface’s translation of the input language into stimuli to the system. This translation is determined by the designer or programmer (not the user).
ii) Presentation : Presentation is the translation of the system’s new state into
the output language of the interface. This translation is determined by the
designer or programmer.
iii) Articulation : Articulation is the user’s translation of their task into the input
language.
iv) Observation : Observation is the translation of the output language into
personal understanding. This translation is done by the user.
Negative affect can make it harder to do even easy tasks, positive affect can make it easier to do
difficult tasks. What are the implications of this for interaction design ?
Ans. :

 It suggests that in situations of stress, people will be less able to cope with complex problem
solving or managing difficult interfaces, whereas if people are relaxed, they
will be more forgiving of limitations in the design.
 This does not give us an excuse to design bad interfaces, but it does suggest that if we build interfaces that promote positive responses, for example by using aesthetics or reward, then they are likely to be more successful.
 Positive affect is associated with other characteristics of people who tend to be
happier, like optimism, extraversion, and success.
 Positive affect can be developed and cultivated. While affectivity is somewhat inborn, meaning that some people are simply born with a greater propensity for being in a good mood as part of their personality, there are many things you can do to get into the habit of experiencing positive affect more often in your life, and of making your good moods even better.
1.8 Ergonomics
 Ergonomics means literally the study or measurement of work. In this
context, the term work signifies purposeful human function.
 Ergonomics is the science of designing user interaction with equipment and
workplaces to fit the user.
 Proper ergonomic design is necessary to prevent repetitive strain injuries,
which can develop over time and can lead to long-term disability.
 Ergonomics is employed to fulfill the two goals of health and productivity. It is
relevant in the design of such things as safe furniture and easy-to-use
interfaces to machines.
 Ergonomics is concerned with the ‘fit’ between people and their
technological tools and environments.
 As well as addressing physical issues in the layout and arrangement of the
machine interface, ergonomics is concerned with the design of the work
environment itself.
 All users should be comfortably able to see critical displays. For long periods of
use, the user should be seated for comfort and stability.
 Seating should provide back support. If required to stand, the user should
have room to move around in order to reach all the controls.
 Human factors issues arise in simple systems and consumer products as well.
 Some examples include cellular telephones and other hand held devices that
continue to shrink yet grow more complex, millions of VCRs blinking "12:00"
across the world because very few people can figure out how to program them,
or alarm clocks that allow sleepy users to inadvertently turn off the alarm
when they mean to hit 'snooze'.
 A User Centered Design (UCD) approach, also known as a systems approach or the
usability engineering life cycle, aims to improve the user-system interaction.
 Ergonomics has a close relationship to human psychology in that it is also
concerned with the perceptual limitations of humans. For example, the use of
color in displays is an ergonomics issue.
 Ergonomics examines not only the passive ambient situation but also the
unique advantages of the human operator and the contributions that can be
made if a work situation is designed to permit and encourage the person to
make the best use of his or her abilities.
 Human abilities may be characterized not only with reference to the generic
human operator but also with respect to those more particular abilities that are
called upon in specific situations where high performance is essential.
 For example, an automobile manufacturer will consider the range of physical
size and strength of the population of drivers who are expected to use a
particular model to ensure that the seats are comfortable, that the controls are
readily identifiable and within reach, that there is clear visibility to the front
and the rear, and that the internal instruments are easy to read.
 Ease of entry and egress will also be taken into account.

1.8.1 Arrangement of Controls and Displays


 The arrangement of controls and displays depends upon their organization. Some of
the possible organizations are as follows :
1. Functional : Controls and displays are organized so that those that are
functionally related are placed together.
2. Sequential : Controls and displays are organized to reflect the order of their
use in a typical interaction.
3. Frequency : Controls and displays are organized according to how frequently
they are used, with the most commonly used controls being the most easily
accessible.

1.8.2 Health Issue


 A number of factors in the physical environment may affect the use of more general
computers, directly affecting the quality of the interaction and the user's health.
 The factors are as follows :
1. Physical position : Users should be able to reach all controls comfortably and see all
displays.
2. Lighting : It depends upon working environment. Adequate lighting should
be provided to allow users to see the computer screen.
3. Temperature : Extremes of hot or cold will affect performance.
4. Noise : Excessive noise can be harmful to health, causing the user pain, and
in acute cases, loss of hearing.
1.9 Interaction Styles
 Interaction is a dialog between the computer and the user. Types of
common interface styles are as follows :
1. Command line interface 2. Menus
3. Natural language 4. Question/answer and query dialog
5. Form-fills and spreadsheets 6. WIMP
7. Point and click 8. Three-dimensional interfaces

1. Command line interface :


 It provides a means of expressing instructions to the computer directly,
using function keys, single characters, abbreviations or whole - word
commands. In some systems it is the only way of communicating with the
system, e.g. remote access using telnet. Interaction between user and
computer where user input series of command lines into program.

2. Menus :
 The set of available options is displayed on the screen, and selected using the
mouse, or numeric or alphabetic keys. These visible options rely on recognition
rather than recall, but still need to be meaningful and logically grouped.
Menus may be nested hierarchically, with the grouping and naming of menu
options the only cue for finding the required option.

3. Natural language :
 Natural language is very difficult for a machine to understand. It is
ambiguous, syntactically and semantically. It is difficult to provide the machine
with context.

4. Question/answer, query dialogue :


 Question/answer dialogue is a simple mechanism for providing input to an
application in a specific domain. The user is asked a series of questions and is
led through the interaction step by step. Easy to learn and use, but limited in
functionality and power.
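 As a rough illustration (the questions and domain are assumed, not taken from a real system), a question/answer dialog can be as simple as the following Python sketch, which leads the user through a booking step by step.

# A minimal question/answer dialog : the system asks, the user answers,
# and the interaction proceeds one step at a time.
def booking_dialog():
    origin = input("Which city are you flying from ? ")
    destination = input("Which city are you flying to ? ")
    date = input("On what date (DD/MM/YYYY) ? ")
    seats = input("How many seats do you need ? ")
    print("Searching flights from", origin, "to", destination,
          "on", date, "for", seats, "seat(s)...")

booking_dialog()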
5. Form-fills and spreadsheets :
 Used primarily for data entry but also useful in data retrieval. The display
resembles a paper form, with slots to fill in. It may be based on an actual form
with which the user is familiar.
 Spreadsheets are a sophisticated variation of form filling. The spreadsheet
comprises a grid of cells, each of which can contain a value or a formula.
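 The following small Python sketch (cell names and values are illustrative) captures the idea of a grid of cells holding either a value or a formula defined over other cells.

# Each cell holds a plain value or a formula (a function of other cells).
cells = {
    "A1": 210,                                  # unit price (a value)
    "A2": 2,                                    # quantity (a value)
    "A3": lambda get: get("A1") * get("A2"),    # total cost (a formula)
}

def get(name):
    cell = cells[name]
    return cell(get) if callable(cell) else cell

print(get("A3"))   # 420, recomputed whenever A1 or A2 change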

6. WIMP :
 WIMP stands for Windows, Icons, Menus and Pointers and is the default
interface style for the majority of interactive computer systems in use today,
especially in the PC and desktop workstation arena. Examples of WIMP
interfaces include Microsoft Windows for IBM PC compatibles, MacOS for
Apple Macintosh compatibles and various X Windows-based systems for UNIX.
7. Point and click :
 The point-and-click style has been popularized by World Wide Web pages,
which incorporate all the above types of point-and-click navigation: highlighted
words, maps and iconic buttons.
8. Three-dimensional interfaces :
 There is an increasing use of three-dimensional effects in user interfaces. The
most obvious example is virtual reality, but VR is only part of a range of 3D
techniques available to the interface designer.

1.9.1 Interaction with Natural Language


 Natural Language Interaction (NLI) is the convergence of a diverse set of
natural language principles that enables people to interact with any connected
device or service in a humanlike way.
 Language is ambiguous at a number of levels. First, the syntax, or structure,
of a phrase may not be clear. If we are given the sentence :
The boy hit the dog with the stick

 We cannot be sure whether the boy is using the stick to hit the dog or
whether the dog is holding the stick when it is hit.
 Even if a sentence’s structure is clear, we may find ambiguity in the meaning
of the words used.
 Natural language interaction technology takes Natural Language Processing
(NLP) and Natural Language Understanding (NLU) to the next level.
 An analysis of "Carry Me" airlines conversational data (a fictitious name for an
airline, but based on real data) showed that questions about baggage are one
of the more frequent topics. However, when we drill down, it is possible to see
that customers use “baggage” and “luggage” differently.
 Luggage is much more likely to refer to carry-on bags. This type of information
is tremendously useful when building an NLI app that is sensitive to the
expectations of customers.
 This is where analysis on unstructured data using NLI comes into its own
because human intuitions about conversational data are often wrong.
Businesses need the facts that NLI provides to guide them, otherwise
enterprises risk misunderstanding the voice of the customer.
 It allows enterprises to create advanced dialogue systems that utilize memory,
personal preferences, and contextual understanding to deliver a proactive
natural language interface.
 Natural language interaction removes the need for your customers to know
and understand your terminology.
 The deep understanding that Natural language interaction delivers gives
enterprises the information they need to deliver a superior customer
experience and have a positive impact on their bottom line.
 In interface design, natural-language interfaces are sought after for their
speed and ease of use, but most suffer from the challenge of understanding a wide
variety of ambiguous input.
 Natural - language interfaces are an active area of study in the field of natural-
language processing and computational linguistics. An intuitive general
natural-language interface is one of the active goals of the Semantic Web.
 An important problem in Natural Language Generation (NLG) is obtaining and
representing in the system the knowledge required to produce texts. This
includes the knowledge from which texts are generated and the linguistic
knowledge required to produce the texts.
 In some cases, information produced by HCI researchers or practitioners can
be exploited in this way for NLG systems. For example, task models can be
exploited to generate documentation and on-line help. They can provide both
the information to be included in the texts, and guide the structure of the texts
to be generated.
1.9.2 Elements of WIMP
 WIMP stands for windows, icons, menus and pointers and is the default
interface style for the majority of interactive computer systems in use today,
especially in the PC and desktop workstation arena.
 Examples of WIMP interfaces include Microsoft Windows for IBM PC
compatibles, MacOS for Apple Macintosh compatibles and various X Windows-
based systems for UNIX.
 Elements of the WIMP interfaces are called widgets, and they comprise the
toolkit for interaction between user and system.
1. Windows
 Windows are areas of the screen that act like individual terminals for an
application. The behaviour of windows is determined by the system’s window
manager.
 Fig. 1.9.1 shows parts of windows.

Fig. 1.9.1 Parts of windows


 Windows can contain text, graphics, menus, toolbars, etc. It can be moved,
resized, closed, minimized, and maximized.
 Layout policy : Multiple windows may exist simultaneously. The physical
arrangement is determined by the window manager’s layout policy.
 Layout policy may be fixed or user-selectable. Possible layouts include :
1. Overlapping - One window partially obscures another
2. Tiled - Adjoin but don’t overlap
3. Cascading - A sequence with each window offset from the preceding according
to a rule
2. Icon
 A small picture is used to represent a closed window, and this representation
is known as an icon.
 Icons are signs and can carry a significant degree of cognitive complexity, so
good icon design is important.
 Fig. 1.9.2 shows various icon.

Fig. 1.9.2 Icons

 A well-designed icon improves the user experience. An icon that is difficult to
understand or vague results in a frustrating user experience.
 The act of reducing a window to an icon is called iconifying or minimizing. A
window may be restored by clicking on its icon.
 Advantages of icons :
a) Save screen space
b) Serve as a reminder of available dialogs, applications, or commands that may
be restored or invoked
3. Menu
 A menu presents a choice of operations or services that can be performed by
the system at a given time.
 Menus afford access to system functionality. Menu option lists can consist of
any type of data such as images or symbols.
 Options are generally indented in relation to the title. Frequently used items
should be placed at the top. These lists can be ordered or unordered.
4. Tool Bar
 Toolbar is similar to a menu bar, but as the icons are smaller than the
equivalent text more functions can be simultaneously displayed.
 Sometimes the content of the toolbar is fixed, but often users can customize it,
either changing which functions are made available, or choosing which of
several predefined toolbars is displayed.
5. Pointer
 A pointer is the input device used to interact with GUI components.
 The pointer (cursor) is the visual manifestation of the mouse or pointing
device and, as such, acts as the user’s proxy in the GUI environment.
 Allow us to do actions and also provide us with contextual information (e.g. wait).
 Examples are mouse, trackball, joystick, touchpad, finger, stylus, light pen.
 Two primary purposes are position control of the on-screen tracker and
selection via buttons.
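 The elements above can be seen working together in a small sketch using Python's standard tkinter toolkit (assumed available; all labels and callbacks are illustrative) : a window with a menu bar, a simple toolbar button and pointer interaction on a canvas.

import tkinter as tk

def on_open():
    status.config(text="Open selected from the File menu")

def on_click(event):
    status.config(text="Pointer clicked at (%d, %d)" % (event.x, event.y))

root = tk.Tk()                       # the window : a movable, resizable area
root.title("WIMP demo")

menubar = tk.Menu(root)              # the menu : a visible choice of operations
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Open", command=on_open)
file_menu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

toolbar = tk.Frame(root)             # a toolbar : small, always-visible controls
tk.Button(toolbar, text="Open", command=on_open).pack(side=tk.LEFT)
toolbar.pack(fill=tk.X)

canvas = tk.Canvas(root, width=300, height=150, bg="white")
canvas.bind("<Button-1>", on_click)  # the pointer : position plus selection
canvas.pack()

status = tk.Label(root, text="Ready")
status.pack(fill=tk.X)

root.mainloop()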

1.9.3 Difference between Menu and Tool Bar


 Menus provide information cues in the form of an ordered list of operations
that can be scanned.
 Menus are inefficient when they have too many items, and so cascading menus
are utilized, in which item selection opens up another menu adjacent to the
item, allowing refinement of the selection.
 The major problems with menus in general are deciding what items to include
and how to group those items. Including too many items makes menus too long
or creates too many of them, whereas grouping causes problems in that items
that relate to the same topic need to come under the same heading, yet many
items could be grouped under more than one heading.
 Toolbar is similar to a menu bar, but as the icons are smaller than the
equivalent text more functions can be simultaneously displayed.
 Sometimes the content of the toolbar is fixed, but often users can
customize it, either changing which functions are made available, or
choosing which of several predefined toolbars is displayed.
Users often face problems in understanding or learning toolbar
icons. How can this issue be resolved ?
Learning toolbars
 Although many applications now have toolbars, they are often underused because
users simply do not know what the icons represent.
 Once learned the meaning is often relatively easy to remember, but most users
do not want to spend time reading a manual, or even using online help to find
out what each button does, they simply reach for the menu.
 There is an obvious solution, put the icons on the menus in the same way that
accelerator keys are written there. So in the ‘Edit’ menu one might find the
option.

1.9.4 Advantages and Disadvantages of WIMP


Advantages
1. Easy to use
2. User friendly
3. Increased speed of learning
4. No need to learn complicated commands
5. Provides greater productivity.
Disadvantages
1. They use more processing power than other interface
2. Requires more space for storage.
1.10 Paradigms
 Paradigms are strategies for building interactive systems. Different
interaction styles are based upon different paradigms.
 New computing technologies arrive, creating a new perception of the
human computer relationship. We can trace some of these shifts in the history
of interactive technologies.
 A paradigm is a model of understanding consistently free of significant
contradictions
 It guides our expectations and helps us to sort, organize, and classify information.
 It affects the way information is processed by the brain and the types of
questions we ask when trying to understand the world around us incorporating
as it does, all of the knowledge and experiences we have acquired since birth.
We all build internal models of our world, which we rely upon to understand it
and to assure our survival in it.
 Our brain uses paradigms to classify, sort, and process information received by the
senses.
 It is consistently free of significant contradictions and even when it isn't, it still
works, because we can shift in and out of various paradigms, although not
always as well as we would like.
 It guides our expectations and helps us to sort, organize, and classify
information that we receive from our five senses.
 A paradigm may be personal or cultural, and we each have many different
paradigms for different contexts.

1. Time-sharing : In the 1940s and 1950s, the significant advances in computing
consisted of new hardware technologies.
 Mechanical relays were replaced by vacuum electron tubes. Tubes were
replaced by transistors and transistors by integrated chips, all of which meant
that the amount of sheer computing power was increasing by orders of
magnitude.

2. Video Display Units (VDU) : Display screens could provide a more suitable
medium than a paper printout for presenting vast quantities of information.

3. Programming toolkits : The power of programming toolkits is that small, well-
understood components can be composed in fixed ways in order to create larger
tools. Engelbart's idea was to use the computer to augment the human intellect.

4. Personal computing : LOGO was a graphical programming language for children,
used to control a turtle that dragged a pen along a surface to trace its path. As
technology progresses, it is now becoming more difficult to distinguish between
what constitutes a personal computer, or workstation, and what constitutes a
mainframe.

5. Window systems and the WIMP interface : When you use a program such as a
word processor that has a WIMP interface it is often the case that the document
you are creating looks exactly the same on the screen as it will when it is printed
out. If this is the case then the program is described as being WYSIWYG. This
stands for What You See Is What You Get.
Advantages
 Most operations are self-explanatory so that you do not have to remember
lots of commands. This makes GUIs particularly suitable for inexperienced
users.
 Some operations are much easier using a GUI with a pointer. e.g. selecting
text or drawing pictures.
 Often you can have more than one program running at the same time, each of
them using different windows.
 Often GUIs are WYSIWYG. What you see on the screen is what you get if
you do a printout.
 Often with a GUI many programs use a similar interface, so it is easier to learn
how to use a new program.
 Most GUIs provide good help facilities.
Disadvantages
 GUIs can take up a lot of memory and need to be run on a fast computer

6. Metaphor : It is used quite successfully to teach new concepts in terms of ones
which are already understood.
 Metaphors are a technique used to simplify interface design. A carefully
chosen metaphor can assist a user new to a particular interface. One of the
most common and successful metaphors is the desktop, files, and folders.
 Comparing a computer's system for organizing textual documents to a desk
and filing cabinet helps a user picture how their files are being stored.
 There are folders which hold other folders or files, and a trash can to delete
files. A text-driven command line interface, by contrast, offers no metaphor,
making it more difficult for a beginner user.
 The use of metaphors in the design of human-computer interaction has been
increasing as the Graphic User Interfaces (GUIs) have become popular in
recent years.
 The main advantage of using metaphors in HCI design is to utilize and extend
the concepts that already exist in computer users’ long-term memory, to
represent by analogy the functions and operations of computer systems and
reduce the users’ mental workload.
7. Direct Manipulation : Direct Manipulation (DM) is an interaction style in which
users act on displayed objects of interest using physical, incremental, reversible
actions whose effects are immediately visible on the screen.
 Features of direct manipulation :
a. Visibility of the objects of interest.
b. Incremental action at the interface with rapid feedback on all actions.
c. Reversibility of all actions, so that users are encouraged to explore without
severe penalties.
d. Syntactic correctness of all actions, so that every user action is a legal operation.
e. Replacement of complex command language with actions to manipulate
directly the visible objects.

Advantages :
 Visually presents task concepts
 Reduces syntax
 Allows easy learning
 Allows easy retention
 Allows errors to be avoided
 Encourages exploration
 Affords high subjective satisfaction
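 A minimal sketch of direct manipulation, again using Python's tkinter (names are illustrative) : the object of interest is visible, dragging it gives immediate incremental feedback, and the action is easily reversed by dragging the object back.

import tkinter as tk

root = tk.Tk()
root.title("Direct manipulation demo")
canvas = tk.Canvas(root, width=300, height=200, bg="white")
canvas.pack()
box = canvas.create_rectangle(20, 20, 80, 80, fill="skyblue")

def drag(event):
    # Incremental action with rapid feedback : the box follows the pointer.
    canvas.coords(box, event.x - 30, event.y - 30, event.x + 30, event.y + 30)

canvas.tag_bind(box, "<B1-Motion>", drag)
root.mainloop()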
8. Computer Supported Cooperative Work (CSCW) : The study of how people work
together using computer technology. Computer Supported Cooperative Work
(CSCW) consists of software tools and technology that supports a group of
individuals working on projects at different sites. It is based on the principle of
group coordination and collaborative activities supported through computer
systems.
1.11 Case Study

Example 1 : Discuss the ways in which a full-page word-processor is or is not a


direct manipulation interface for editing a document using Shneiderman’s
criteria.
Solution : Visibility of the objects of interest :
The most important objects of interest in a word-processor are the words
themselves. Indeed, the visibility of the text on a continual basis was one of the
major usability advances in moving from line-oriented to display-oriented editors.
 Depending on the user’s application, there may be other objects of interest in
word-processing that may or may not be visible.
 For example, are the margins for the text on screen similar to the ones which
would eventually be printed ? Is the spacing within a line and the line-breaks
similar ? Are the different fonts and formatting characteristics of the text
visible ?
 Incremental action at the interface with rapid feedback on all actions : We
expect from a modern word-processor that characters appear in the text as we
type them at the keyboard, with little delay.
 If we are inserting text within a paragraph, we might also expect that the
format of the paragraph adjusts immediately to accommodate the new changes.
 Various word processors do this reformatting automatically, whereas others do
it occasionally or only at the explicit request of the user.
 One of the other important actions which require incremental and rapid
feedback is movement of the insertion point, usually by means of arrow keys.
 If there is a significant delay between the input command to move the insertion
point down one line and the actual movement of the cursor on screen, it is
quite possible that the user will “overshoot” the target when repeatedly
pressing the down-arrow key to move down a few lines on the screen.
 Reversibility of all actions, so that users are encouraged to explore without
severe penalties: Single step undo commands in most word-processors allow
the user to recover from the last action performed.
 One problem with this is that the user must recognize the error before doing
any other action. More sophisticated undo facilities allow the user to retrace back more
than one command at a time.
 Syntactic correctness of all actions, so that every operation is a legal operation :
WYSIWYG word-processors usually provide menus and buttons which the
user uses to articulate many commands.
 These interaction mechanisms serve to constrain the input language to only
allow legal input from the user.
 Replacement of complex command languages with actions to manipulate
directly the visible objects : The case for word processors is similar to that
described above for syntactic correctness. In addition, operations on portions
of text are achieved many times by allowing the user to directly highlight the
text with a mouse (or arrow keys).
 Subsequent action on that text, such as moving it or copying it to somewhere
else, can then be achieved more directly by allowing the user to “drag” the
selected text via the mouse to its new location.
Two Marks Questions with Answers

What is HCI ?
Ans. : Human-computer interaction is a discipline concerned with the design,
implementation and evaluation of interactive computing systems for human
use and with the study of major phenomenon surrounding them. HCI is study
of people, computer technology and the ways these influence each other. We
study HCI to determine how we can make this computer technology more
usable by people.
Define mistakes and slips.
Ans. :  Mistakes are errors in choosing an objective or specifying a method of
achieving it.
 Slips are errors in carrying out an intended method for reaching an objective.
List the five senses.
Ans. : There are five senses : sight, sound, touch, taste and smell.
Define sensory memory.
Ans. : Sensory memory is the shortest-term element of memory. It is the ability
to retain impressions of sensory information after the original stimuli have
ended. It acts as a kind of buffer for stimuli received through the five senses of
sight, hearing, smell, taste and touch, which are retained accurately. Short-term
memory can be accessed rapidly, in the order of 70 ms.
Explain three basic levels of skill.
Ans.: Three basic levels of skill are as follows :
1. The learner uses general-purpose rules which interpret facts about a problem.
This is slow and demanding on memory access.
2. The learner develops rules specific to the task.
3. The rules are tuned to speed up performance.
What is reasoning ?
Ans. : Reasoning is the process by which we use the knowledge we have to
draw conclusions or infer something new about the domain of interest.
List the Donald Norman’s seven stage of interaction.
Ans. : They are as follows :
1. Establishing the goal 2. Forming the intention
3. Specifying the action sequence 4. Executing the action
5. Perceiving the system state 6. Interpreting the system state
7. Evaluating the system state with respect to the goals and intentions.
What is Direct Manipulation (DM) ?
Ans. :Direct manipulation is an interaction style in which the objects of interest
in the UI are visible and can be acted upon via physical, reversible, incremental
actions that receive immediate feedback.
What is Ergonomics ?
Ans. : Ergonomics means literally the study or measurement of work. In this
context, the term work signifies purposeful human function. Ergonomics is the
science of designing user interaction with equipment and workplaces to fit the
user.
What are the two types of long term memory ?
Ans. : The two types of long term memory are episodic memory and semantic memory.
List out all text entry devices.
Ans. :Text entry devices are alphanumeric keyboard, chord keyboards, phone
pad and T9 entry, handwriting recognition and speech recognition.
What is WIMP ?
Ans. : WIMP stands for windows, icons, menus and pointers and is the default
interface style for the majority of interactive computer systems in use today,
especially in the PC and desktop workstation arena. Examples of WIMP
interfaces include Microsoft Windows for IBM PC compatibles, MacOS for
Apple Macintosh compatibles and various X Windows- based systems for UNIX .
UNIT-II

2
Design and Software Process

Syllabus
Interactive Design : Basics, process,scenarios, navigation, screen design, iteration and
prototyping.
HCI in software process : Software life cycle, usability engineering, prototyping in practice,
design rationale.
Design rules : principles, standards, guidelines, rules. Evaluation techniques, universal design

Contents
Interactive Design
Scenarios
Navigation
Screen Design
Prototyping
HCI in software Process
Usability Engineering
Design Rationale
Design Rule
Evaluation Techniques
Universal Design
Two Marks Questions with Answers


2.1 Interactive Design


 Design means achieving goals within constraints.
 Interaction design is the practice of designing interactive digital products,
environments, systems, and services.
 The golden rules of design are as follows :
 Understand your materials : In the case of a physical design this is obvious. Consider
a chair with a steel frame and one with a wooden frame. They are very
different : often the steel frames are tubular or thin L- or H-section steel.
 In contrast, wooden chairs have thicker solid legs. If you made a wooden chair
using the design for a metal one it would break; if you made the metal one to
the design for the wooden one it would be too heavy to move.
 For Human – computer interaction the obvious materials are the human and
the computer. That is we must :
 Understand computers : limitations, capacities, tools, platforms
 Understand people : psychological, social aspects, human error.

2.1.1 Interaction Design Process


 Fig. 2.1.1 shows interaction design process.

Fig. 2.1.1 : Interaction design process

 Requirements : What is wanted ? The first stage is establishing what exactly
is needed. There are a number of techniques used for this in HCI : interviewing
people, videotaping them, looking at the documents and objects that they work
with, observing them directly.
 Analysis : The results of observation and interview need to be ordered in
some way to bring out key issues and communicate with later stages of design.
 Design : There are numerous rules, guidelines and design principles that can
be used to help move from what is wanted to how to achieve it.
 Iteration and prototyping : Humans are complex and we cannot expect to
get designs right first time. We therefore need to evaluate a design to see how
well it is working and where there can be improvements.
 Implementation and deployment : This will involve writing code, perhaps
making hardware, writing documentation and manuals – everything that goes
into a real system that can be given to others.

2.2 Scenarios
 Scenarios are the core of any usability test, as you have to correctly
communicate to your participants what you want them to do during the test.
A well-crafted task scenario helps you focus on designing around the real needs
of the user and removes artificiality from the test.
 As you can see, such a scenario makes the task much more realistic, as we provide
the user only with a situation, which here requires them to complete three
different tasks -
1. Signup/login on the app
2. Search for flight as per the schedule
3. Book the flight
 A good scenario -
1. Short but enough information to perform the task
2. Use user’s language, not the product’s
3. Simple and Clear
4. Should address your tasks and concerns
1. Short but enough information to perform the task :
 Time needed to read and understand the task has to be minimized as much
as possible. Having long written task scenarios may require users to spend
undue time in reading and understanding what they have to do during the test,
which may indirectly influence the overall time and effort to complete the task.
 You have to find a balance between keeping the scenario short and providing enough
information to perform the task. It is also suggested to communicate the task
to users the way you talk, and not to sound very scientific.
2. Use user’s language, not the product’s :
 The main aim of conducting a usability test is to understand how a user will use the
product in their real environment without getting any support from an external audience.
 So, providing users with the enough detail is important but it should be in the
language that user can relate to and not the one which is used in product.
 For example, you may have icons, menu options or labeled buttons in your UI.
The concern could be to see whether users choose the right icons, menu options
or labeled buttons to complete a task.
3. Simple and Clear :
 You will get the desired result only if your task scenario is clear to your
users, which means they have no ambiguity in understanding what you want them
to do.
4. Should address your tasks and concerns :
 Every scenario we create has to address one or more tasks and each task
should be intended to address one or more concerns we have with the app/
website.
 It is a must to have scenarios that are aligned with your business goals. For
example, if your concern is to improve sales in your e-commerce portal, you
would probably be interested in knowing whether it is easy for your customers
to find and purchase a given product or not.

2.2.1 User Focus


 User focus is a never ending process because there is so much to know and
because the users keep changing.
 An interactive system designer should consider the human factors that characterize
users.
 User characteristics vary with age, gender, physical and cognitive abilities,
personality, education, cultural or ethnic background and goals.
 An interactive system designer should recognize this diversity. Systems used
by several communities of users.
 Designer faces real challenge to cater to the need of each community.
Designers must characterize users and situations as precisely and completely
as possible.
 Over time many people are affected directly or indirectly by a system and these
people are called stakeholders.
 Tracing the tenuous links between people could go on forever, and you need to
draw boundaries as to whom you should consider. This depends very much on
the nature of the systems being designed.
 When designing a system it is easy to design it as if you were the main user:
you assume your own interests and abilities.
 People may also be able to tell you about how things really happen, not just
how the organization says they should happen. To encourage users to tell you
this, you will need to win their trust, since often the actual practices run
counter to corporate policy.
 A professional in any field is very practiced and can do things in the domain. An
academic in the same field may not be able to do things, but she knows about
the things in the domain. These are different kinds of knowledge and skill.
 Sometimes people know both, but not necessarily so. The best sports trainers
may not be the best athletes, the best painters may not be the best art critics.
 Because of this it is important to watch what people do as well as hear what
they say. This may involve sitting and taking notes of how they spend a day,
watching particular activities, using a video camera or tape recorder.
 It can be done in an informal manner or using developed methods such as
ethnography or contextual inquiry.
 Another way to find out what people are doing is to look at the artifacts they
are using and creating. Look at a typical desk in an office. There are papers,
letters, files, perhaps a stapler, a computer, sticky notes.
 One method that has been quite successful in helping design teams produce
user focused designs is the persona. A persona is a rich picture of an imaginary
person who represents your core user group.

2.3 Navigation

Navigation Design : Imagine yourself using a word processor. You interact at several
levels :
 Widgets help you know how to use them for a particular selection or action.
 Screens or windows – To understand the logical grouping of buttons.
 Navigation within the application – To understand where you are in the
interaction.
 Environment – You swap between applications, perhaps cut and paste.
 On the web we have less control of how people enter a site, and on a physical
device we have the same layout of buttons and displays no matter what the
internal state (although we may treat them differently). Just in case you haven’t
already got the idea, the place to start when considering the structure of an
application is to think about actual use :
1) Who is going to use the application ?
2) How do they think about it ?
3) What will they do with it ?
 This can then drive the second task – thinking about structure.
 We will consider two main kinds of issue :
1) Local structure : Looking from one screen or page out.
2) Global structure : Structure of site, movement between screens.

Local structure
 Much of interaction involves goal-seeking behaviour. In an ideal world if users
had perfect knowledge of what they wanted and how the system worked they
could simply take the shortest path to what they want.
 At each point in the interaction they can make some assessment of whether
they are getting closer to their (often partially formed) goal.
 To do this goal seeking, each state of the system or each screen needs to give
the user enough knowledge of what to do to get closer to their goal. To get you
started, here are four things to look for when looking at a single page, screen
or state of a device.
1. Knowing where you are
2. Knowing what you can do
3. Knowing where you are going – or what will happen
4. Knowing where you’ve been – or what you’ve done
 The screen, web page or device display should make clear where you are in
terms of the interaction or state of the system. Some websites show ‘bread
crumbs’ at the top of the screen : the path of titles showing where the page is in
the site. A trade-off between appearance and ease of use may mean that this is
the right thing to do, but you should take care before confusing the user
needlessly.
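 A ‘bread crumb’ trail of this kind can be generated directly from a page's position in the site hierarchy, as in the Python sketch below (the path and labels are illustrative).

# Turn a page path such as '/books/hci/chapter2' into a trail of links.
def breadcrumbs(path):
    parts = [p for p in path.strip("/").split("/") if p]
    trail, crumbs = "", []
    for part in parts:
        trail += "/" + part
        crumbs.append((part.capitalize(), trail))
    return " > ".join(label + " (" + link + ")" for label, link in crumbs)

print(breadcrumbs("/books/hci/chapter2"))
# Books (/books) > Hci (/books/hci) > Chapter2 (/books/hci/chapter2)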
 You need to know where you are going when you click a button, or what will
happen. It is better if users do not have to use this ‘try it and see’ interaction.
Icons are typically not self-explanatory and should always be accompanied by
labels or at the very least tooltips or some similar technique.
 Special care has to be taken if the same command or button press means
something different in different contexts. The system needs to give some feedback
to say what has happened.

2.3.1 Wider Still


 Each site or application sits amongst other devices and applications, and this in
turn has to be reflected within our design.
 This has several implications :
1. Style issues : We should normally conform to platform standards, such as
positions for menus on a PC application, to ensure consistency between
applications. For example, on our proposed personal movie player we
should make use of standard fast-forward, play and pause icons.
2. Functional issues : On a PC application we need to be able to interact with
files, read standard formats and be able to handle cut and paste.
3. Navigation issues : We may need to support linkages between
applications, for example allowing the embedding of data from one
application in another, or, in a mail system, being able to double click an
attachment icon and have the right application launched for the attachment.

2.4 Screen Design


 Screen design refers to the graphic design and layout of user interfaces on displays.
 The basic principles at the screen level reflect those in other areas of interaction
design :
1. Ask 2. Think 3. Design
 Screen design refers to the graphic design and layout of user interfaces on
displays. It is a sub-area of user interface design but is limited to monitors and
displays. In screen design, the focus is on maximizing usability and user
experience by making user interaction as simple and efficient as possible.

1. Layout :
 Layout is the arrangement of items on the screen. Like items are grouped into areas.
 Areas can be further subdivided and each area is self-contained. Areas
should have a natural intuitive flow.
 Users from western nations tend to read from left to right and top to bottom.
Users from other regions may have different flows.
 Number of visual tools are available to help us suggest to the user appropriate
ways to read and interact with a screen or device.
 Fig. 2.4.1 shows grouping related items in an order screen.

Fig. 2.4.1 : Grouping related items in an order screen, with areas for Billing details
(Name, Address), Delivery details (Name, Address, Delivery date & time), the Order No.
and Order details (Name of item, Quantity, Price, Cost - e.g. Book, 2, 210/-, 420/-)

2. Grouping and Structure :


 Logical things are physically grouped together. This may involve multiple
levels of structure. The figure above shows a potential design for an ordering
screen.
 Group data by the natural sequence of use.
 Flow of control : how users progress through a screen when doing their work.
 Flow of control means that the focus of activity moves across a screen or page
while the user performs a certain task.
 Fig. 2.4.2 shows flow of control.

Fig. 2.4.2 : Flow of control

 Flow of control is important for (1) efficiency in performing a task and (2)
transparency and understandability of a screen or page.
 For Western cultures the natural flow is from left to right and from top to bottom.

3. Layout Hierarchy :
 Use boxes to group logical items. Fig. 2.4.3 shows layout hierarchy.
 Use fonts to emphasize groupings and headings.

Fig. 2.4.3 : Layout hierarchy

4. Containers and non - containers :


 Screen or page elements can either be containers or non - containers.
 Containers can contain other elements; non - containers cannot.
 Too much nesting can visually overload a page.

5. White space :
 One way to make an element within an item stand out in a list UI design is,
quite literally, to leave it alone, surrounded by empty space.
 White space is one of the best ways to isolate the important elements in a list
item. Doing this draws the user’s attention more easily.
 White space plays an essential role in screen layout. Use margins to draw the eye around the design.
 White space is the area between design elements. It is also the space
within individual design elements, including the space between typography
glyphs (readable characters).
 Despite its name, white space does not need to be white. It can be any
color, texture, pattern, or even a background image.
 Fig. 2.4.4 shows use of white space.

Fig. 2.4.4 : Use of white space
 Try to integrate figure and ground. Objects should be scaled proportionally to
their background.
 Don’t crowd controls together because crowding creates spatial tension and
inhibits scanning.
The Advantages of Using Whitespace :
1. Increased Content Legibility
2. More interaction
3. Ability to Highlight Call to Actions
4. Act as a separator.

6. Presenting information :
 The way of presenting information on screen depends on the kind of
information : text, numbers, maps, tables.
 It also depends on the technology available to present it : character display, line
drawing, graphics, virtual reality; and, most important of all, on the purpose for
which it is being used.

7. Aesthetics and utility :


 Remember that a pretty interface is not necessarily a good interface. Ideally, as
with any well-designed item, an interface should be aesthetically pleasing.
 The conflict between aesthetics and utility can also be seen in many ‘well
designed’ posters and multimedia systems.
 In particular, the backdrop behind text must have low contrast in order to
leave the text readable; this is often not the case and graphic designers may
include excessively complex and strong backgrounds because they look good.
The results are impressive, perhaps even award winning, but completely
unusable.

8. Making a mess of it : color and 3D : The increasing use of 3D effects in


interfaces has posed a whole new set of problems for text and numerical
information.

9. Localization / internationalization : The process of making software suitable for


different languages and cultures is called localization or internationalization.

2.4.1 Different Stages in the Design


 Concept Design : All possible approaches are sketched out. The validity of
each sketch is verified against the usability requirements and the goals
agreed. At the end, the best approach is selected.
 Interaction Design : In this step, the structure of the UI must be set by naming
interaction flows. Method : e.g. affinity diagramming, where each action can be
written on a Post-It note and organised in clusters. The Post-It notes are then
rearranged to simplify user tasks.
 Screen Design : Creating rough designs of the screens’ structure. These
layouts are linked together and a usability test is performed with a user.
 Testing : A user is asked to follow a realistic scenario/tasks on the sketched out
prototype.

2.5 Prototyping
 A prototype is an example that serves as a basis for future models.
Prototyping gives designers an opportunity to research new alternatives and
test the existing design to confirm a product’s functionality prior to production.
 Prototyping is an example of what is known as a hill-climbing approach.
 Rapid Prototyping is an instructional design approach that combines the
design, developmental, and evaluation phases. It is a non-linear approach that
produces a sample working model that is a scaled-down representative version
of the whole course.
 Three main approaches to prototyping are throw-away, incremental and evolutionary.

2.5.1 Types of Rapid Prototyping


 Storyboard, paper prototype, wireframe and mock-up are the
types of rapid prototyping.
1. Storyboard : It is a graphical depiction of the outward appearance of the
intended system, without any accompanying system functionality.
Storyboards do not require much in terms of computing power to construct.
 Storyboarding allows you to check your design is on target with expectations
before investing time in developing.
 Modern graphical drawing packages now make it possible to create
storyboards with the aid of a computer instead of by hand.
 Storyboards are the simplest form of prototype. They are simply rough
sketches of the system and its user interface.
 Users are guided through the system by analysts who show the users how they
are expected to use the system, and show them what the system is expected to
do. The analysts record user responses to the system and feed these back to
the designers.
 Storyboards may be very simple drawings of the system done on paper, or
there are more sophisticated tools available which allow the analysts to create
storyboards on a computer using graphics packages and apply some limited
animation to the graphics. These allow a little more reality in the storyboards,
but in effect the analyst is still in control and steps the users through the system.
2. Paper prototype : Easy and fast to do. It helps you think of specifics.
Usually good as a first-round prototype. You can still do usability testing, even
with paper.
3. Wireframe : Wireframes are the first stage of the design process and help
us to understand the key user journeys, information structuring, modes of
interaction and functionality.
 Wireframes or ‘page schematic’ are a basic outline or skeleton of your key
website pages, drawn to show the elements of a page, their relationships,
position, and their relative importance.
 They indicate the information types present, navigation, signposting, branding
and content areas. They are black and white schematics presented either in
PowerPoint or as a clickable web prototype.
4. Mock-up : A mock-up is a scale or full-size model of a design or device, used for
design evaluation and promotion. A mock-up is a prototype if it provides at least
part of the functionality of a system and enables testing of a design. Mock-ups are
used by designers mainly to acquire feedback from users.

2.5.2 Hill-Climbing
 Hill climbing is a mathematical optimization heuristic method used for solving
computationally challenging problems that have multiple solutions.
 It is an iterative method belonging to the local search family which starts with
a random solution and then iteratively improves that solution one element at a
time until it arrives at a more or less optimized solution.
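 The idea can be made concrete with a small Python sketch (the objective function and step size are illustrative) : from a starting point, the search keeps moving to whichever neighbour scores better, and stops when no neighbour improves.

import math, random

def score(x):
    # Illustrative objective : a lower local peak near x = -2 and a
    # higher, global peak near x = 3.
    return math.exp(-(x + 2) ** 2) + 2 * math.exp(-(x - 3) ** 2)

def hill_climb(start, step=0.05, max_iters=10000):
    current = start
    for _ in range(max_iters):
        best = max((current - step, current + step), key=score)
        if score(best) <= score(current):   # no neighbour improves : stop
            break
        current = best
    return current

# Different random starting points may end on different peaks, which is
# why the choice of starting design matters in the discussion below.
for start in (random.uniform(-6, 0), random.uniform(0, 6)):
    x = hill_climb(start)
    print("start = %.2f  ->  x = %.2f, score = %.3f" % (start, x, score(x)))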
 Prototyping is an example of what is known as a hill-climbing approach.
 A prototype is an early sample, model, or release of a product built to test a
concept or process or to act as a thing to be replicated or learned from.
 It is a term used in a variety of contexts, including semantics, design,
electronics, and software programming.
 A prototype is designed to test and try a new design to enhance precision
by system analysts and users.
 When you’re creating a prototyping strategy, it’s important to think about
the cost of change over time.
 For physical products like a car, a toaster, the cost of making changes rises
dramatically over time throughout the design process, and even more
significantly upon release.
 With desktop software that gets distributed on, say, a CD-ROM, the cost rises
aren’t quite so dramatic, but they are still pretty significant : it is harder to make
changes as you go through the design process, and much more difficult once
you’ve shipped it out to consumers.
 Web sites, and other forms of software as a service, make it much easier to
make changes over time.
 But the costs and difficulty of making changes are still increasing for a number
of reasons. Fig. 2.5.1 shows the hill climbing approach.

Fig. 2.5.1 : Hill climbing approach

Problems :
1. May hit a local maximum, not global one
2. Need good starting point, but how to find ?
 Standard mechanism for finding best starting point : use experienced
designers, try different designs.
 According to Norman, current UCD processes achieve good design. For
inspirational, great design, new ways to find starting points may be needed.

2.5.3 Problems with Prototyping


 The main problem with prototyping is that no matter how cheaply designers try
to make their prototypes and how committed they are to throwing them away,
they still have to make design decisions about how to present their prototypes.
 These decisions can then become ingrained into the final product. There is a
factor to consider called ‘design inertia’ which describes how designers, once
having made a design decision, are rather reluctant to relinquish that decision,
admit it is wrong and change it.
 This is another reason why good designers put off big decisions as much as
possible, knowing that if they make a small mistake early on it is easier and
cheaper to rectify than a big early mistake.
 Rapid prototyping is about exposing bad design decisions as soon as possible
after they have been made.
 Guidelines and usability engineering are more about trying to get the designer
not to make mistakes in the first place.
 Design inertia is therefore much more prevalent in rapid prototyping
development than it is in other user-centred design processes. Because
prototypes do not get thrown away in iterative design processes, design
inertia is even more prevalent there.
 Furthermore in spotting usability problems by user testing, the designer knows
that there is a problem, but not necessarily what causes that problem, or how
to fix it. Rapid prototyping identifies symptoms, not illnesses and not cures.

2.6 HCI in Software Process


 HCI is used in the software process so that the product interacts with the user
through a simple and comfortable interface. Software developers check that the
design is simple and interactive for the intended environment, which is only
possible by taking human interaction into account.
 There are many models of software development, and many places at which
HCI thinking should be specifically applied.

The software lifecycle :


 Software engineering is the discipline for understanding the software design
process, or life cycle.
 Designing for usability occurs at all stages of the life cycle, not as a single isolated
activity.

Activities in the life cycle :


1. Requirements specification : Designer and customer try to capture what the
system is expected to provide; this can be expressed in natural language or in
more precise languages, such as a task analysis would provide.
2. Architectural design : A high-level description of how the system will provide
the services required; it factors the system into major components and describes
how they are interrelated; it needs to satisfy both functional and non-functional
requirements.
3. Detailed design : Refinement of architectural components and interrelations
to identify modules to be implemented separately; the refinement is
governed by the non-functional requirements.
 Software engineering is the study of the design, development and maintenance
of software. It comes into contact with HCI to make the interaction between people
and machines more vibrant and interactive.
 Fig. 2.6.1 shows waterfall model.

Fig. 2.6.1 : Waterfall model

2.6.1 Validation and Verification


 Verification is the process of checking that the software achieves its goal without
any bugs. It is the process of ensuring whether the product that is developed is
right or not. It verifies whether the developed product fulfills the requirements
that we have. Verification is static testing.
 Verification means Are we building the product right ?
 Validation is the process of checking whether the software product is up to the
mark or, in other words, whether the product meets the high-level requirements. It is
the process of checking the validity of the product, i.e. it checks whether what we are
developing is the right product. It is a validation of the actual against the expected
product. Validation is dynamic testing.
 Validation means Are we building the right product ?

2.7 Usability Engineering


 Usability engineering is a professional discipline that focuses on improving the
usability of interactive systems. It draws on theories from computer science
and psychology to define problems that occur during the use of such a system.
Usability engineering involves the testing of designs at various stages of the
development process, with users or with usability experts.
 Usability engineering is used to determine to what degree a product or
prototype will be user-friendly. It often pertains to the field of software
development.
 To be tested, usability is expressed in terms of :
1. Learnability : time and effort to reach a specified level of user
performance (ease of learning)
2. Throughput : for tasks accomplished by experienced users, the speed of
task execution and the errors made (ease of use)
3. Flexibility : multiplicity of ways the user and system exchange
information and the extent to which the system can accommodate changes
beyond those initially specified
4. Attitude : the positive attitude created in users by the system
 “Usability data” is any information that is useful in measuring (or
identifying potential issues affecting) the usability attributes of a system under
evaluation.
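 As an illustration, the short Python sketch below (purely hypothetical data and field names, not taken from any standard) shows how logged test sessions might be summarised into simple throughput-style measures such as completion rate, time on task and error rate.

    # Hypothetical sketch : summarising usability test logs into simple measures.
    sessions = [
        {"user": "P1", "task": "create invoice", "time_s": 95, "errors": 1, "completed": True},
        {"user": "P2", "task": "create invoice", "time_s": 140, "errors": 3, "completed": True},
        {"user": "P3", "task": "create invoice", "time_s": 210, "errors": 5, "completed": False},
    ]

    completed = [s for s in sessions if s["completed"]]
    completion_rate = len(completed) / len(sessions)
    mean_time = sum(s["time_s"] for s in completed) / len(completed)
    mean_errors = sum(s["errors"] for s in sessions) / len(sessions)

    print(f"completion rate : {completion_rate:.0%}")
    print(f"mean time on task (completed runs) : {mean_time:.0f} s")
    print(f"mean errors per attempt : {mean_errors:.1f}")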
 Usability can be improved by means of
1. User-centred Design
2. Heuristic Testing
3. Usability Evaluation
4. Prototyping
5. Task Analysis
6. Collaborative Walk-through

Problems with usability engineering :


 The problem with this sort of usability specification is that it may miss the point.
 Usability engineering must start from the very beginning of the design
process to ensure that account is taken of these issues.
 Also, it is not clear that what is specified in the usability specification
actually relates to genuine usability.

2.8 Design Rationale


 Design rationale is about explaining why a product has been designed the way it
has.
 For each decision made there must be a set of reasons why that particular
decision was made. Design rationale is about recording those decisions and the
reasons why they were made.
 A design rationale is a useful design tool because it explicitly lays out the
reasoning behind a design process and it forces designers to be explicit about
what they are doing and why they are doing it.

 In particular a design rationale can be used after a product has been
completed in order to analyse why a product was a success or failure.
 If a similar product is being designed subsequently then its designers can refer
to a design rationale to discover why earlier products were designed the way
they were, and with the benefit of hindsight judge whether the earlier design
decisions were successful and warrant repeating.
 Design rationales are particularly helpful in interactive system design because,
there is rarely one objectively correct solution to any problem, and some
solutions may contradict one another, or require trade-offs.
 Design rationales require the designer to be explicit about how contradictions
were resolved and trade-offs were made.
 Furthermore the design space may be very large and therefore it is not obvious
that a designer will even consider the best solution, never mind choose it.
 A design rationale makes it clear which options from the design space were
considered and why. If an apparently better solution were later discovered
then it is obvious whether that solution had been considered and discarded for
some reason, or not considered at all.
 Usability is very context dependent; what is good for one user may be dreadful
for another. If subsequent designs are made where the context of use does not
change then a design rationale can be reused without modification. If however
the context does change then new design decisions can be made for this new
context, but in the light of the decisions made for the older context.
 It is beneficial to have access to the design rationale for several reasons :
1. In an explicit form, a design rationale provides a communication mechanism
among the members of a design team.
2. Accumulated knowledge in the form of design rationales for a set of
products can be reused to transfer what has worked in one situation to
another situation which has similar needs.
3. The effort required to produce a design rationale forces the designer to
deliberate more carefully about design decisions.

2.8.1 Benefits of Design Rationale


 Communication throughout life cycle
 Reuse of design knowledge across products
 Enforces design discipline

 Presents arguments for design trade-offs
 Organizes potentially large design space
 Capturing contextual information

2.8.2 Process-Oriented Design Rationale


 Process-oriented design rationale is interested in recording an historically
accurate description of a design team making some decision on a particular
issue for the design.
 In this sense, process-oriented design rationale becomes an activity concurrent
with the rest of the design process.
 It preserves order of deliberation and decision making.
 Design rationale is based on Rittel's Issue-Based Information System (IBIS).
 IBIS is best known for its use in dialogue mapping, a collaborative approach
to tackling wicked problems but it has a range of other applications as well.
 IBIS consists of three main elements :
1. Issues (or questions) : these are issues that need to be addressed.
2. Positions (or ideas) : these are responses to questions. Typically the set of
ideas that respond to an issue represents the spectrum of perspectives on
the issue.
3. Arguments : these can be Pros (arguments supporting) or Cons (arguments
against) an idea. The complete set of arguments that respond to an idea
represents the multiplicity of viewpoints on it.
 Fig. 2.8.1 shows IBIS elements.

Fig. 2.8.1 : IBIS elements

 Issues can be raised anew or can arise from other issues, positions or
arguments. In other words, any IBIS element can be questioned. In
Compendium notation : a question node can connect to any other IBIS node.
 Ideas can only respond to questions, i.e. in Compendium “light bulb” nodes
can only link to question nodes. The arrow pointing from the idea to the
question depicts the “responds to” relationship.
 Arguments can only be associated with ideas – i.e. in Compendium, argument nodes
can only link to "light bulb" nodes (with arrows pointing to the latter).
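 A small Python sketch can make the IBIS structure concrete. The issue, ideas and arguments below are invented for illustration, and the node representation is an assumption made for the example, not part of IBIS or Compendium itself.

    # Illustrative sketch : an IBIS-style rationale as linked nodes, where ideas
    # respond to issues and arguments support or object to ideas.
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        kind: str                 # "issue", "idea", "pro" or "con"
        text: str
        responses: list = field(default_factory=list)

    issue = Node("issue", "How should the user select a destination ?")
    idea_map = Node("idea", "Pick the destination on a map")
    idea_list = Node("idea", "Choose from an alphabetical list")

    issue.responses += [idea_map, idea_list]
    idea_map.responses += [Node("pro", "Shows spatial context"),
                           Node("con", "Slow on small screens")]
    idea_list.responses += [Node("pro", "Fast for expert users")]

    def dump(node, depth=0):
        print("  " * depth + f"[{node.kind}] {node.text}")
        for child in node.responses:
            dump(child, depth + 1)

    dump(issue)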

2.8.3 Design Space Analysis


 Design Space Analysis is an approach to representing design rationale. It uses
a semiformal notation, called QOC (Questions, Options, and Criteria), to
represent the design space around an artifact.
 The main constituents of QOC are Questions identifying key design issues,
Options providing possible answers to the Questions, and Criteria for assessing
and comparing the Options.
 Fig. 2.8.2 shows design space analysis.

Fig. 2.8.2 : QOC design space analysis

 A Design Space Analysis does not produce a record of the design process but
is instead a coproduct of design and has to be constructed alongside the
artifact itself.
 The key to an effective design space analysis using the QOC notation is
deciding the right questions to use to structure the space and the correct
criteria to judge the options.

 The initial questions raised must be sufficiently general that they cover a large
enough portion of the possible design space, but specific enough that a range
of options can be clearly identified. It can be difficult to decide the right set of
criteria with which to assess the options.
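 The following minimal Python sketch illustrates the QOC idea of weighing options against criteria. The question, options, criteria and weights are hypothetical, and QOC itself is only a notation, so any scoring scheme like this is an informal aid rather than part of the method.

    # Hypothetical QOC sketch : options assessed against weighted criteria, where a
    # positive assessment supports the option and a negative one counts against it.
    criteria_weights = {"fast to learn": 2, "screen space": 1, "supports experts": 1}

    assessments = {
        "toolbar buttons":    {"fast to learn": +1, "screen space": -1, "supports experts": +1},
        "keyboard shortcuts": {"fast to learn": -1, "screen space": +1, "supports experts": +1},
    }

    def score(option):
        return sum(criteria_weights[c] * value for c, value in assessments[option].items())

    for option in assessments:
        print(option, score(option))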
 Another structure-oriented technique, called Decision Representation
Language (DRL), developed by Lee and Lai, structures the design space in a
similar fashion to QOC.
 The goal of DRL is foremost to provide a vocabulary for representing the
qualitative aspects of decision making such as the issues raised, pro and con
arguments advanced, and dependency relations among alternatives and
constraints, that typically appear in a decision making process.
 Once we have a language for representing these basic elements of decision
making, it becomes possible to provide services that use this language to
support decision making.

2.9 Design Rule


 Design rules (or usability rules) are rules that a designer can follow in order to
increase the usability of the system/product, e.g. principles, standards and
guidelines.
 Designing for maximum usability is the goal of interactive systems design.
 Abstract principles offer a way of understanding usability in a more general
sense, especially if we can express them within some coherent catalog.
 Design rules in the form of standards and guidelines provide direction for
design, in both general and more concrete terms, in order to enhance the
interactive properties of the system.
 The essential characteristics of good design are often summarised through
'golden rules' or heuristics.
 Design patterns provide a potentially generative approach to capturing and
reusing design knowledge.

2.9.1 Principle to Support Usability


 Principles are listed below :
1. Learnability : the ease with which new users can begin effective interaction
and achieve maximal performance.
2. Flexibility : the multiplicity of ways in which the user and system exchange
information.
3. Robustness : the level of support provided to the user in determining
successful achievement and assessment of goals.

 Learnability concerns the features of the interactive system that allow novice
users to understand how to use it initially and then how to attain a maximal
level of performance.
 Predictability is a user-centered concept; it is deterministic behavior from the
perspective of the user. It is not enough for the behavior of the computer
system to be determined completely from its state, as the user must be able to
take advantage of the determinism.
 The generalizability of an interactive system supports this activity, leading to a
more complete predictive model of the system for the user. We can apply
generalization to situations in which the user wants to apply knowledge that
helps achieve one particular goal to another situation where the goal is in some
way similar. Generalizability can be seen as a form of consistency.
 Consistency relates to the likeness in behavior arising from similar situations
or similar task objectives. Consistency is probably the most widely mentioned
principle in the literature on user interface design.
Task Migratability :
 It is transfer of control for execution of tasks between system and user. It
should be possible for the user or system to pass the control of a task over to
the other or promote the task from a completely internalized one to a shared
and cooperative venture.
 Hence, a task that is internal to one can become internal to the other or shared
between the two partners. An example of task migratability is spell checking.
Provided you are equipped with a dictionary, you are perfectly able to check your
spelling by reading through the entire paper and correcting mistakes as you spot them.
 This task is perfectly suited to automation, as the computer can check words
against its own list of acceptable spellings.
 It is not desirable, to leave this task completely to the discretion of the
computer, as most computerized dictionaries do not handle proper names
correctly, nor can they distinguish between correct and unintentional
duplications of words.
 In those cases, the task is handed over to the user. The spell - check is best
performed in such a cooperative way.
 In safety-critical applications, task migratability can decrease the likelihood of
an accident. For example, on the flight deck of an aircraft, there are so many
control tasks that must be performed that a pilot would be overwhelmed if he
had to perform them all.

2.9.2 Standards and Guidelines


 Usability : The effectiveness, efficiency and satisfaction with which specified
users achieve specified goals in particular environments.

 Effectiveness : The accuracy and completeness with which specified users can
achieve specified goals in particular environments.
 Efficiency : The resources expended in relation to the accuracy and
completeness of goals achieved.
 Satisfaction : The comfort and acceptability of the work system to its users and
other people affected by its use.
 The strength of a standard lies in its ability to force large communities to
abide, the so- called authority.
 The authority of a standard (or a guideline, for that matter) can only be
determined from its use in practice.
 Some software products become de facto standards long before any formal
standards document is published (for example, the X windowing system).
 Usability test of Microsoft Word :
Task 1 : Create new document
Task 2 : Save document under custom title
Task 3 : Choose custom font
Task 4 : Paragraph formatting
Task 5 : Creating table of contents
Task 6 : Add page numbers
Task 7 : Upload/format image

Guidelines :
 Basic categories of the Smith and Mosier guidelines are as follows :
1. Data Entry
2. Data Display
3. Sequence Control
4. User Guidance
5. Data Transmission
6. Data Protection
 Smith and Mosier (1986) guidelines for data entry :
1. Consistency of data-entry transactions : Similar sequences of actions
should be used under all conditions.
2. Minimal input actions by user : Greater productivity, fewer chances for error.

3. Minimal memory load on users : Should not be required to memorize
lengthy lists of commands.
4. Compatibility of data entry with data display.
5. Flexibility for user control of data entry.

2.9.3 Shneiderman’s Eight Golden Rules of Interface Design


 Shneiderman’s 8 Golden Rules are as follows :
1. Strive for consistency : Consistent sequences of actions should be required
in similar situations; identical terminology should be used in prompts,
menus, and help screens; and consistent commands should be employed
throughout.
2. Enable frequent users to use shortcuts : As the frequency of use increases,
so do the user's desires to reduce the number of interactions and to
increase the pace of interaction. Abbreviations, function keys, hidden
commands, and macro facilities are very helpful to an expert user.
3. Offer informative feedback : For every operator action, there should be
some system feedback. For frequent and minor actions, the response can be
modest, while for infrequent and major actions, the response should be
more substantial.
4. Design dialog to yield closure : Sequences of actions should be organized
into groups with a beginning, middle, and end. The informative feedback at
the completion of a group of actions gives the operators the satisfaction of
accomplishment, a sense of relief, the signal to drop contingency plans and
options from their minds, and an indication that the way is clear to prepare
for the next group of actions.
5. Offer simple error handling : As much as possible, design the system so the
user cannot make a serious error. If an error is made, the system should be
able to detect the error and offer simple, comprehensible mechanisms for
handling the error.
6. Permit easy reversal of actions : This feature relieves anxiety, since the user
knows that errors can be undone; it thus encourages exploration of
unfamiliar options. The units of reversibility may be a single action, a data
entry, or a complete group of actions.
7. Support internal locus of control : Experienced operators strongly desire the
sense that they are in charge of the system and that the system responds to
their actions. Design the system to make users the initiators of actions
rather than the responders.
8. Reduce short-term memory load : The limitation of human information
processing in short-term memory requires that displays be kept simple,
multiple page displays be consolidated, window-motion frequency be
reduced, and sufficient training time be allotted for codes, mnemonics, and
sequences of actions.

2.10 Evaluation Techniques
 The role of evaluation is to assess designs and test systems to ensure that they
actually behave as we expect and meet user requirements.
 Ideally, evaluation should occur throughout the design life cycle, with the
results of the evaluation feeding back into modifications to the design.
Goals of Evaluation :
1. To assess the extent and accessibility of the systems functionality
2. To assess user’s experience of the interaction
3. To identify any specific problems with the system.
 Formalized way of imagining people's thoughts and actions when they use an
interface for the first time.
 First select a task that the design is intended to support.
 Then try to tell a believable story about each action a user has to take to do the
task.
 To make the story believable, you have to motivate each of the user's actions,
relying on the user's general knowledge and on the prompts and feedback
provided by the interface. If you can't tell a believable story about an action,
then you've located a problem with the interface
 Question assumptions about what the users will be thinking
 Identify controls that may be missing or hard to find
 Note inadequate feedback
 Suggest difficulties with labels and prompts
 Vocabulary Problem : On a piece of paper write the name you would give to a
program that tells about interesting activities occurring in some major
metropolitan area
 Focus most clearly on problems that users will have when they first use
an interface, without training
 Not a technique for evaluating the system over time (e.g., how quickly a user
moves from beginner to intermediate)
 Most effective if designers can really create a mental picture of the actual
environment of use.
 Prior to doing a walkthrough, you need four things :
1. You need a description of a prototype of the interface. It doesn't have to be
complete, but it should be fairly detailed. Things like exactly what words
are in a menu can make a big difference.

2. You need a task description (for a representative task).
3. You need a complete, written list of the actions needed to complete the task.
4. You need an idea of who the users will be and what kind of experience
they'll bring to the job.
DECIDE Evaluation framework :
 DECIDE is a framework that is used to guide evaluation
1. Determine the goals the evaluation addresses.
2. Explore the specific questions to be answered.
3. Choose the evaluation paradigm and techniques to answer the questions.
4. Identify the practical issues.
5. Decide how to deal with the ethical issues.
6. Evaluate, interpret and present the data.

Styles of evaluation :

1. Laboratory studies : Performed under laboratory conditions.


 Advantages :
a. Specialist equipment available
b. Uninterrupted environment
 Disadvantages :
a. Lack of context
b. Difficult to observe several users cooperating
2. Field studies : Conducted in the work environment or in "the field".
 Advantages :
a. Natural environment
b. Context retained
c. Longitudinal studies possible
 Disadvantages :
a. Distractions, interruptions
b. Noise

2.10.1 Evaluation through Expert Analysis

2.10.1.1 Heuristic Evaluation


 Heuristic evaluation can be carried out on a design specification so it is useful
for evaluating early design. But it can also be used on prototypes, storyboards
and fully running systems. It is a flexible, relatively cheap approach. Hence it is
often considered a discount usability technique.
 The general idea behind heuristic evaluation is that several evaluators
independently review a system to come up with probable usability problems.
 Each evaluator assesses the system and notes violations of any of heuristics
that would indicate a probable usability problem.
 The evaluator also assesses the severity of each usability problem, based on
four factors : how common is the problem, how easy is it for the user to
overcome, will it be a one-off problem or a persistent one and how seriously
will the problem be perceived ? These can be combined into an overall severity
rating on a scale of 0-4 :
0 = I don’t agree that this is a usability problem at all
1 = Cosmetic problem only : need not be fixed unless extra time is
available on project 2 = Minor usability problem : fixing this should be
given low priority
3 = Minor usability problem : important to fix, so should be given priority
4 = Usability catastrophe : imperative to fix this before product can be released
(Nielsen)
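 The sketch below (hypothetical problems and ratings) shows how the independent severity ratings from several evaluators might be pooled and ranked. It only illustrates the bookkeeping, it is not a prescribed part of the method.

    # Illustrative sketch : combining evaluators' severity ratings (0 - 4 scale)
    # and ranking the reported problems by average severity.
    from collections import defaultdict
    from statistics import mean

    reports = {
        "evaluator 1": [("no undo after delete", 4), ("inconsistent labels", 2)],
        "evaluator 2": [("no undo after delete", 3), ("tiny click targets", 1)],
        "evaluator 3": [("inconsistent labels", 2), ("no undo after delete", 4)],
    }

    ratings = defaultdict(list)
    for findings in reports.values():
        for problem, severity in findings:
            ratings[problem].append(severity)

    for problem, scores in sorted(ratings.items(), key=lambda kv: -mean(kv[1])):
        print(f"{mean(scores):.1f}  {problem}  (reported by {len(scores)} evaluators)")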

Advantages :
 Permits early evaluation of designs at the prototyping stage or without a mockup.
 Helps the designer assess how the features of their design fit together to
support users’ work.
 Provides useful feedback about action sequences.
 Assists designer by providing reasons for trouble areas.
 Provides indications of the users' mental processes, which helps build a successful
interface that accommodates users.

Disadvantages :
 Relies on analysis rather on user testing.
 Provides a detailed examination of a particular task rather than an overview
of the interface.
 Provides no quantitative data.

2.10.1.2 Nielsen’s Ten Heuristics
1. Visibility of system status : The system should always keep users informed
about what is going on, through appropriate feedback within reasonable
time.
2. Match between system and the real world : The system should speak the
users' language, with words, phrases and concepts familiar to the user,
rather than system- oriented terms. Follow real-world conventions, making
information appear in a natural and logical order.
3. User control and freedom : Users often choose system functions by mistake
and will need a clearly marked "emergency exit" to leave the unwanted
state without having to go through an extended dialogue. Support undo and
redo.
4. Consistency and standards : Users should not have to wonder whether
different words, situations, or actions mean the same thing. Follow platform
conventions.
5. Error prevention : Even better than good error messages is a careful design
which prevents a problem from occurring in the first place. Either eliminate
error-prone conditions or check for them and present users with a
confirmation option before they commit to the action.
6. Recognition rather than recall : Minimize the user's memory load by making
objects, actions, and options visible. The user should not have to remember
information from one part of the dialogue to another.
7. Flexibility and efficiency of use : Accelerators unseen by the novice user,
may often speed up the interaction for the expert user such that the system
can cater to both inexperienced and experienced users. Allow users to tailor
frequent actions.
8. Aesthetic and minimalist design: Dialogues should not contain information
which is irrelevant or rarely needed.
9. Help users recognize, diagnose, and recover from errors : Error messages
should be expressed in plain language, precisely indicate the problem, and
constructively suggest a solution.
10. Help and documentation : Even though it is better if the system can be
used without documentation, it may be necessary to provide help and
documentation.

2.10.2 Model-Based Evaluation


 A third expert-based approach is the use of models. Certain cognitive and
design models provide a means of combining design specification and
evaluation into the same framework.

 For example, the GOMS (Goals, Operators, Methods and Selection) model
predicts user performance with a particular interface and can be used to filter
particular design options. Similarly, lower-level modelling techniques such as
the keystroke-level model provide predictions of the time users will take to
perform low-level physical tasks.
 Design methodologies, such as design rationale, also have a role to play in
assessment at the design stage. Design rationale provides a framework in
which design options can be evaluated.
 Dialog models can also be used to estimate dialog sequences for problems,
such as unreachable states, circular dialogs and complexity. Models such as
state transition networks are useful for evaluating dialog designs prior to
implementation.

2.11 Universal Design


 Universal design is the process of designing interactive systems that are
usable by anyone, with any range of abilities, using any technology platform.
 The design and composition of an environment so that it may be accessed,
understood and used.
a) To the greatest possible extent
b) In the most independent and natural manner possible
c) In the widest possible range of situations
d) Without the need for adaptation, modification
 Universal Design should incorporate a two level approach :
1. User-Aware Design : pushing the boundaries of 'mainstream' products,
services and environments to include as many people as possible.
2. Customisable Design : design to minimize the difficulties of adaptation to
particular users and contexts.

2.11.1 Universal Design Principle


 Universal design principles are as follows :
a. Equitable use : The design is useful and marketable to people with diverse
abilities
b. Flexibility in use : The design accommodates a wide range of individual
preferences and abilities.
c. Simple and intuitive to use : Use of the design is easy to understand,
regardless of the user's experience, knowledge, language skills, or current
concentration level.
d. Perceptible information : The design communicates necessary information
effectively to the user, regardless of ambient conditions or the user's
sensory abilities.
e. Tolerance for error : The design minimizes hazards and the adverse
consequences of accidental or unintended actions.
f. Low physical effort : The design can be used efficiently and comfortably and
with a minimum of fatigue
g. Size and space for approach and use : Appropriate size and space is
provided for approach, reach, manipulation, and use regardless of user's
body size, posture, or mobility.
Two Marks Questions with Answers

What are scenarios ?
Ans. : Scenarios are the core of any usability test as you have to correctly
communicate your participants, what you want them to do during the test. Well
crafted task scenario helps you focus on designing around the real needs of the
user and removes artificiality from the test.
What is usability ?
Ans. : Usability refers to the extent to which a product can be used by
specified users to achieve specified goals with effectiveness, efficiency and
satisfaction in a specified context of use.
Explain task migratability ?
Ans. : It is transfer of control for execution of tasks between system and user.
It should be possible for the user or system to pass the control of a task over to
the other or promote the task from a completely internalized one to a shared and
cooperative venture.
What is hill climbing ?
Ans. :  Hill climbing is a mathematical optimization heuristic method used for
solving computationally challenging problems that have multiple solutions.
 It is an iterative method belonging to the local search family which starts
with a random solution and then iteratively improves that solution one
element at a time until it arrives at a more or less optimized solution.
Write down the three categories of principles to support usability.
AU : April/May-18

Ans. :  Learnability : the ease with which new users can begin effective
interaction and achieve maximal performance.
 Flexibility : the multiplicity of ways in which the user and system exchange
information.
 Robustness : the level of support provided to the user in determining
successful achievement and assessment of goals.

What do you mean universal design ? AU : April/May-18

Ans. : Universal design is the process of designing interactive systems that are
usable by anyone, with any range of abilities, using any technology platform.
The design and composition of an environment so that it may be accessed,
understood and used.
What is design rationale ?
Ans. : Design rationale is about explaining why a product has been designed the
way it has. Design rationale is about recording those decisions and the reasons
why they were made. A design rationale is a useful design tool because it
explicitly lays out the reasoning behind a design process and it forces
designers to be explicit about what they are doing and why they are doing it.
What is cognitive walkthrough ?
Ans. : A cognitive walkthrough is a structured approach to evaluating usability of
a product. It involves the tester, who is not a user, asking four simple questions
about the way a specific user journey is conducted. They will record the
outcomes of these questions, in their opinion, and use these observations to
improve the product further.
What is Heuristic evaluation ?
Ans. : Heuristic evaluation is a usability engineering method for finding usability
problems in a user interface design, thereby making them addressable and
solvable as part of an iterative design process. It involves a small set of expert
evaluators who examine the interface and assess its compliance with
“heuristics,” or recognized usability principles. Such processes help prevent
product failure post-release.



UNIT-III

3
Models and Theories

Syllabus
HCI Models : Cognitive models : Socio-organizational issues and stakeholder requirements-
communication and collaboration models-Hypertext, Multimedia and WWW.

Contents
Cognitive Models
Cognitive Architectures
Socio-Organizational Issues and Stakeholder Requirements
Communication and Collaboration Models
World Wide Web
Two Marks Questions with Answers


3.1 Cognitive Models


 The field of human-computer interaction, whose goal is to make computers
support human activity in much more satisfying ways than they currently do,
has three main uses for cognitive modeling.
 A cognitive model can substitute for a human user to predict how users will
perform on a system before it is implemented or even prototyped.
 A system can generate a cognitive model of the user currently interacting
with the system in order to modify the interaction to better serve that user.
 Finally, cognitive models can substitute directly for people so that groups of
individuals can be simulated in situations that require many participants, e.g.
for training or entertainment. The following sections present some instances of
such models and their implications for interface design.
 A system can generate a cognitive model of an end user currently interacting
with the design and modify the interaction of the user with respect to the
design to understand different outcomes.
 Finally, cognitive models in design can replace a groups of individual users and
can act in those situations that require many participants.
 Consider designing a banking application that involves multiple personas and many
users. In such large applications, asking questions of many users to derive the best
possible user flow, and understanding how users will react to that flow, is crucial.
 Cognitive Models are as follows :
1. Goal and task hierarchies (GOMS, CCT)
2. Linguistic notations (BNF, TAG)
3. Physical and device models (KLM)
4. Automating Inspection Methods
 There are at least three different uses for cognitive models in service of this general
goal.
1. Predicting human behavior on proposed interactive systems
2. Modeling the current user as a guide for adaptive interaction and observing the outcomes
3. Substituting models for other participants in group interactions

3.1.1 GOMS
 The GOMS is an acronym for Goals, Operators, Methods and Selection. GOMS
is a task analysis technique.
 GOMS is family of user interface modeling techniques.

 The GOMS model has four components: goals, operators, methods and selection
rules.
1. Goals - Tasks are deconstructed as a set of goals and subgoals. In GOMS,
the goals are taken to represent a 'memory point' for the user.
2. Operators - Tasks can only be carried out by undertaking specific actions.
Example : to decide which search engine to use.
3. Methods - it represent ways of achieving a goal. Example : drag mouse over
field.
4. Selection Rules - The method that the user chooses is determined by selection
rules
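 A GOMS description can also be captured directly as a data structure. The Python sketch below is an assumed representation (not a standard GOMS tool) : it stores goals with their methods and operators and flattens one chosen method into the operator sequence the user would perform.

    # Illustrative sketch : a GOMS hierarchy with operators as string leaves,
    # methods as lists of steps and a selection rule picking a method.
    def flatten(node, context=None):
        """Expand a goal hierarchy into the flat sequence of operators."""
        if isinstance(node, str):                 # an operator (leaf action)
            return [node]
        select = node.get("selection_rule", lambda ctx: "default")
        ops = []
        for step in node["methods"][select(context)]:
            ops.extend(flatten(step, context))
        return ops

    photocopy = {
        "goal": "PHOTOCOPY-PAGE",
        "methods": {
            "default": [
                {"goal": "ORIENT-PAGE",
                 "methods": {"default": ["OPEN-COVER", "POSITION-PAGE", "CLOSE-COVER"]}},
                "PRESS-BUTTON",
            ],
        },
    }

    print(flatten(photocopy))
    # ['OPEN-COVER', 'POSITION-PAGE', 'CLOSE-COVER', 'PRESS-BUTTON']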
Example : GOMS for photocopying a paper from journal
 One possible GOMS description of the goal hierarchy for this task is given
below. Answers will vary depending on assumptions about the photocopier
used as the model for the exercise.
 In this example, we will assume that the article is to be copied one page at a
time and that a cover over the imaging surface of the copier has to be in place
before the actual copy can be made.
Goal: PHOTOCOPY-PAPER
  Goal: LOCATE-ARTICLE
  Goal: PHOTOCOPY-PAGE repeat until no more pages
    [Select Goal: SELECT-PAGE --> CHOOSE-PAGE-TO-COPY]
    Goal: ORIENT-PAGE
      OPEN-COVER
      POSITION-PAGE
      CLOSE-COVER
    PRESS-BUTTON
    Goal: VERIFY-COPY
      LOCATE-OUT-TRAY
      EXAMINE-COPY
  Goal: COLLECT-COPY
    LOCATE-OUT-TRAY
    REMOVE-COPY (outer goal satisfied!)
  Goal: RETRIEVE-JOURNAL
    OPEN-COVER
    REMOVE-JOURNAL
    CLOSE-COVER

Selection rules exist if a spoiled copy may be printed. Consider the following :
Rule 1 : SELECT-PAGE if the last page was copied successfully or at the start of the article.
Note : The goal SELECT-PAGE is only valid if we are at the start of the
article or the last copy was successful. If the last copy was spoiled then we
must recopy the current page, so only a re-orientation would be required.
Goal : PHOTOCOPY-PAPER
  Goal : LOCATE-ARTICLE
  Goal : PHOTOCOPY-PAGE repeat until no more pages
    [Select Goal : SELECT-PAGE --> CHOOSE-PAGE-TO-COPY]
    Goal : ORIENT-PAGE
      OPEN-COVER
      POSITION-PAGE
      CLOSE-COVER
    PRESS-BUTTON
    Goal : VERIFY-COPY
      LOCATE-OUT-TRAY
      EXAMINE-COPY
  Goal : RETRIEVE-JOURNAL
    OPEN-COVER
    REMOVE-JOURNAL
    CLOSE-COVER
  Goal : COLLECT-COPY
    LOCATE-OUT-TRAY
    REMOVE-COPY (outer goal satisfied!)
 Closure of the outer goal must force the user to collect the copy last.

3.1.1.1 Advantages and Disadvantages of GOMS

Advantages :
1. Easy to construct a simple GOMS model and saves time.
2. Helps discover usability problems.
3. Gives several qualitative and quantitative measures.
4. Less work than usability study.
Disadvantages :
1. Only work for goal directed tasks.
2. Not for the novice user.
3. Not ideal for leading edge technology systems.
4. Not as easy as heuristics analysis, guidelines.

3.1.2 Cognitive Complexity Theory


 Cognitive complexity is a psychological characteristic or psychological variable
that indicates how complex or simple is the frame and perceptual skill of a
person.
 Cognitive complexity theory was introduced by Kieras and Polson.
 CCT has two parallel descriptions : one of the user's goals and the other of the
computer system (called the device in CCT). For the system grammar, CCT
applies generalized transition networks, a form of state transition network.
 The description of the user’s goals is based on a GOMS - like goal hierarchy,
but is expressed primarily using production rules.
 The production rules are a sequence of rules :
If condition then action
 Where, condition is a statement about the contents of working memory. If the
condition is true then the production rule is said to fire. An action may consist
of one or more elementary actions, which may be either changes to the
working memory, or external actions such as keystrokes.
 As an example, consider an editing task using the UNIX vi text editor. The task
is to insert a space where one has been missed out in the text. The fragment of
the associated CCT production rules can be as below :
(SELECT-INSERT-SPACE
IF (AND (TEST-GOAL perform unit task)
        (TEST-TEXT task is insert space)
        (NOT (TEST-GOAL insert space))
        (NOT (TEST-NOTE executing insert space)))
THEN ((ADD-GOAL insert space)
      (ADD-NOTE executing insert space)
      (LOOK-TEXT task is at %LINE %COL)))

(INSERT-SPACE-DONE
IF (AND (TEST-GOAL perform unit task)
        (TEST-NOTE executing insert space)
        (NOT (TEST-GOAL insert space)))
THEN ((DELETE-NOTE executing insert space)
      (DELETE-GOAL perform unit task)
      (UNBIND %LINE %COL)))

(INSERT-SPACE-1
IF (AND (TEST-GOAL insert space)
        (NOT (TEST-GOAL move cursor))
        (NOT (TEST-CURSOR %LINE %COL)))
THEN ((ADD-GOAL move cursor to %LINE %COL)))

(INSERT-SPACE-2
IF (AND (TEST-GOAL insert space)
        (TEST-CURSOR %LINE %COL))
THEN ((DO-KEYSTROKE 'I')
      (DO-KEYSTROKE SPACE)
      (DO-KEYSTROKE ESC)
      (DELETE-GOAL insert space)))
 To see how these rules work, imagine that the user has just seen the typing
mistake and thus the contents of working memory (w.m.) are,
(GOAL perform unit task)
(TEXT task is insert space)
(TEXT task is at 5 23)
(CURSOR 8 7)
 TEXT refers to the text of the document that is being edited and CURSOR refers to
the position of the cursor on the screen. The position (5, 23) is the line and column
of the typing error where the space is required. However, the current cursor
location is at line 8, column 7.
 So the SELECT-INSERT-SPACE rule fires and its action is performed. This action has
no external effect in terms of keystrokes, but adds extra information to working memory.
 The rules in CCT need not represent error-free performance. They can be
used to explain error phenomena, even if they cannot predict them. The CCT
rules are closely related to a GOMS-like goal hierarchy; the rules may be
generated from such a hierarchy, or alternatively the production rules may be
analyzed to obtain the goal tree.
 In fact, the CCT rules can characterize more difficult plans than the simple
sequential hierarchies of GOMS. However, one should regard CCT as an
engineering tool giving one a rough measure of learnability and difficulty
combined with a detailed description of user behaviour.
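 The behaviour of such production rules can be simulated with a few lines of code. The Python sketch below is a deliberately simplified version of the rules above : working memory is a set of strings and each rule fires when its conditions hold, either updating memory or emitting keystrokes. The exact strings and the MOVE-CURSOR rule are assumptions made for brevity, not part of the CCT notation.

    # Simplified sketch of a CCT-style production system over a working memory.
    working_memory = {"GOAL perform unit task", "TEXT task is insert space", "CURSOR wrong position"}

    def select_insert_space(wm):
        if ("GOAL perform unit task" in wm and "TEXT task is insert space" in wm
                and "GOAL insert space" not in wm and "NOTE executing insert space" not in wm):
            wm.update({"GOAL insert space", "NOTE executing insert space"})
            print("SELECT-INSERT-SPACE fired : no keystrokes, memory updated")
            return True
        return False

    def move_cursor(wm):
        if "GOAL insert space" in wm and "CURSOR wrong position" in wm:
            wm.discard("CURSOR wrong position")
            print("MOVE-CURSOR fired : point cursor at the error position")
            return True
        return False

    def insert_space(wm):
        if "GOAL insert space" in wm and "CURSOR wrong position" not in wm:
            wm.discard("GOAL insert space")
            print("INSERT-SPACE fired : keystrokes I, SPACE, ESC")
            return True
        return False

    def insert_space_done(wm):
        if "NOTE executing insert space" in wm and "GOAL insert space" not in wm:
            wm.discard("NOTE executing insert space")
            wm.discard("GOAL perform unit task")
            print("INSERT-SPACE-DONE fired : unit task completed")
            return True
        return False

    rules = [select_insert_space, move_cursor, insert_space, insert_space_done]
    while any(rule(working_memory) for rule in rules):   # crude recognize-act cycle
        pass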

3.1.3 Linguistics Models

 Linguistic models represent the user-system grammar. Understanding the
user's behaviour and cognitive difficulty based on analysis of language between
user and system.

 Backus - Naur Form (BNF) and Task - Action Grammar (TAG) are used to
represent this model.
 Backus - Naur Form
 BNF can be used to define the syntax of a language.
 It is based on techniques developed for use with natural languages, but was
specifically designed for use with computing languages.
 BNF defines a language in terms of Terminal symbols, Syntactic constructs
and productions.
 Terminal symbols are the elementary symbols of the language, such as
words and punctuation marks.
 In the case of computing languages, these may be variable -names,
operators, reserved words, etc.
 Syntactic constructs (or non-terminal symbols) are phrases, sentences, etc.
In the case of computing languages, these may be conditions, statements, programs, etc.
TAG :
 Task–Action Grammar (TAG) attempts to deal with some of these problems by
including elements such as parameterized grammar rules to emphasize
consistency and encoding the user’s world knowledge.
 In BNF, three UNIX commands would be described as :
copy ::= cp + filename + filename | cp + filenames + directory
move ::= mv + filename + filename | mv + filenames + directory
link ::= ln + filename + filename | ln + filenames + directory
 No BNF measure could distinguish between this and a less consistent grammar in which
link ::= ln + filename + filename | ln + directory + filenames
 Consistency of argument order made explicit using a parameter, or semantic
feature for file operations.
 Feature and possible values : Op = copy; move; link
 Rules :
file-op[Op] ::= command[Op] + filename + filename
              | command[Op] + filenames + directory
command[Op = copy] ::= cp
command[Op = move] ::= mv
command[Op = link] ::= ln
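 To make the grammar concrete, the Python sketch below encodes the three command rules as regular expressions and checks candidate command lines against them. It is only an approximation : filename, filenames and directory are collapsed into a single token pattern, an assumption made to keep the example short.

    # Illustrative sketch : checking command lines against the BNF-style rules.
    import re

    FILENAME = r"\S+"                      # one whitespace-free argument token
    RULES = {
        "copy": rf"cp( {FILENAME}){{2,}}",  # cp + at least two arguments
        "move": rf"mv( {FILENAME}){{2,}}",
        "link": rf"ln( {FILENAME}){{2,}}",
    }

    def matches(rule_name, command_line):
        """Return True if the command line fits the named rule."""
        return re.fullmatch(RULES[rule_name], command_line) is not None

    print(matches("copy", "cp report.txt backup.txt"))    # True
    print(matches("move", "mv a.txt b.txt c.txt /tmp"))   # True
    print(matches("link", "ln"))                           # False - too few arguments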

3.1.4 Physical and Device Model : KLM model
 The Keystroke - Level Model (KLM) predicts how long it will take an expert
user to accomplish a routine task without errors using an interactive computer
system.
 The actions are termed keystroke level if they are at the level of actions like
pressing keys, moving the mouse, pressing buttons.
 There is a standard set of operators for use in the KLM, whose execution
times have been estimated from experimental data.
 The following is a step-by-step description of how to apply the KLM to estimate
the execution time required by a specified interface design :
1. Choose one or more representative task scenarios.
2. Have the design specified to the point that keystroke-level actions can be
listed for the specific task scenarios.
3. For each task scenario, figure out the best way to do the task, or the
way that you assume users will do it.
4. List the keystroke-level actions and the corresponding physical operators
involved in doing the task.
5. If necessary, include operators for when the user must wait for the system to
respond
6. Insert mental operators for when user has to stop and think.
7. Look up the standard execution time to each operator.
8. Add up the execution times for the operators.
9. The total of the operator times is the estimated time to complete the task.
 The model decomposes the execution phase into five different physical motor
operators, a mental operator and a system response operator :
K : Keystroking, actually striking keys, including shifts and other modifier keys.
B : Pressing a mouse button.
P : Pointing, moving the mouse (or similar device) at a target.
H : Homing, switching the hand between mouse and keyboard.
D : Drawing lines using the mouse.
M : Mentally preparing for a physical action.
R : System response, which may be ignored if the user does not have to wait for it, as in copy typing.

Example :

The KLM (Keystroke - Level Model) predicts expert error-free task completion
time (human performance) with interactive computing systems. The total predicted
time for a task is given by the equation tEXECUTE = tK + tP + tH + tD + tM + tR.
What does each of the above timings represent ? Develop a KLM model and predict
the time for completion of the task "Change font and style for the word "KLM" to
bold, Arial" using the mouse only.
Sol. : A task is broken into a series of subtasks. Total predicted time is the sum of
the subtask times :
tEXECUTE = tK + tP + tH + tD + tM + tR
 Operators :
K : keystroking    P : pointing    H : homing
D : drawing    M : mental preparation    R : system response
 Task : Change the font and style for word “KLM” to bold, Arial.
 Operations :

Mouse subtasks                                      KLM operators     tP (s)
Drag across text to select "KLM"                    M P[2.5, 0.5]     0.686
Move pointer to Bold button and click               M P[13, 1]        0.936
Move pointer to Font drop-down button and click     M P[3.3, 1]       0.588
Move pointer down list to Arial and click           M P[2.2, 1]       0.501
                                                    Σ tP = 2.71
 Prediction :
tEXECUTE = 4 × tM + Σ tP = 4 × 1.35 + 2.71 = 8.11 seconds
 Operations :
Keyboard subtasks :
Select text
Convert to boldface
Activate Format menu and enter Font sub-menu
Type a ("Arial" appears at top of list)
Select "Arial"
tEXECUTE = 4 × tM + 12 × tK = 4 × 1.35 + 12 × 0.75 = 14.40 seconds
Use "typing complex codes" (tK = 0.75 s)
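 The two predictions above can be reproduced with a few lines of Python. The operator times (tM = 1.35 s, tK = 0.75 s and the four pointing times) are the values from the worked example; the function itself is just an illustrative adding-up of operator times, not a full KLM tool.

    # Illustrative sketch : summing keystroke-level operator times.
    T_M = 1.35          # mental preparation (s)
    T_K = 0.75          # keystroke, "typing complex codes" (s)

    def klm_time(mental_ops=0, keystrokes=0, pointing_times=()):
        """t_execute = mental_ops * tM + keystrokes * tK + sum of pointing times."""
        return mental_ops * T_M + keystrokes * T_K + sum(pointing_times)

    # Mouse-only version : four M P subtasks with Fitts'-law pointing times.
    print(round(klm_time(mental_ops=4, pointing_times=[0.686, 0.936, 0.588, 0.501]), 2))  # 8.11

    # Keyboard version : four mental preparations and twelve keystrokes.
    print(round(klm_time(mental_ops=4, keystrokes=12), 2))  # 14.4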

Limitations of the keystroke-level model :


 It measures only one aspect of performance : time, which means execution
time and not the time to acquire or learn a task.
 It considers only expert users. Generally, users differ regarding their
knowledge and experience of different systems and tasks, motor skills and
technical ability.
 It considers only routine unit tasks.
 The method has to be specified step by step.
 The execution of the method has to be error-free.
 The mental operator aggregates different mental operations and therefore
cannot model a deeper representation of the user’s mental operations. If this is
crucial, a GOMS model has to be used.

3.1.5 Compare KLM and GOMS Model

KLM                                                 GOMS
Model creation is quick and easy                    Time consuming to create a model
KLM does not use any selection rules                GOMS uses selection rules
No methods                                          Methods are informal programs
KLM does not support multiple goals                 GOMS supports goals and subgoals
Only operators at the keystroke level               It is very flexible

3.2 Cognitive Architectures


Interacting Cognitive Subsystems (ICS) Model :
 The interacting cognitive subsystems model is based on detailed cognitive
experimentation which suggests that the human mind works by different
subsystems passing information from one to another and copying it in the
process.
 In this way, each subsystem has its own memory. Different systems operate
with different coding, for instance, verbal, visual, auditory.
 There are higher-order systems that translate these codings and integrate the
information.
 The architecture of ICS is built up by the coordinated activity of nine smaller
subsystems: five peripheral subsystems are in contact with the physical world
and four are central, dealing with mental processes.
 Each subsystem has the same generic structure. A subsystem is described in
terms of its typed inputs and outputs along with a memory store for holding
typed information.
 It has transformation functions for processing the input and producing the
output and permanently stored information.
 Each of the nine subsystems is specialized for handling some aspect of external
or internal processing.
 An example of a central subsystem is one for the processing of propositional
information, capturing the attributes and identities of entities and their
relationships with each other.
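 The Python sketch below gives a toy picture of this architecture : each subsystem copies an incoming representation into its own store and transforms it into another coding before passing it on. The subsystem names and transformations are invented for illustration and are not part of the ICS model itself.

    # Illustrative sketch : ICS-style subsystems with local stores and transformations.
    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Subsystem:
        name: str
        transform: Callable[[str], str]
        store: list = field(default_factory=list)

        def process(self, representation: str) -> str:
            self.store.append(representation)   # each subsystem keeps its own copy
            return self.transform(representation)

    visual = Subsystem("visual", lambda r: f"object-structure({r})")
    propositional = Subsystem("propositional", lambda r: f"proposition({r})")

    message = "red STOP button on the left"
    print(propositional.process(visual.process(message)))
    print("visual store :", visual.store)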

3.3 Socio-Organizational Issues and Stakeholder Requirements


 Social organization is a pattern of relationships between and among individuals
and social groups. Characteristics of social organization can include qualities
such as sexual composition, spatiotemporal cohesion, leadership, structure,
division of labor, communication systems, and so on.
 Organizational issues are as follows :

3.3.1 Computer-Supported Cooperative Work


 The term ‘Computer - Supported Cooperative Work’ (CSCW) seems to assume
that groups will be acting in a cooperative manner
 CSCW is the study of the tools and techniques of groupware as well as their
psychological, social, and organizational effects.
 CSCW was an effort by technologists to learn from social, economic,
organizational and anthropologic researchers or anyone who could shed light
on group activity.
 Fig. 3.3.1 shows the relationship between CSCW, groupware, workgroup
computing and workflow management.


Fig. 3.3.1 : CSCW

 CSCW studies the use of groupware.
 Examples :
1. Scientists collaborating on a technical issue
2. Authors editing a document together
3. Programmers debugging a system concurrently
4. Workers collaborating over a shared video conferencing application
5. Buyers and sellers meeting on eBay
 Main Challenge : Support Awareness
1. Presence of collaborators
2. Behaviours and actions of collaborators
3. Presence of resources and artefacts
4. Knowledge and expectations of counterparts.

3.3.2 Stake-Holders
 A stakeholder is anyone who is affected by the success or failure of the system.
 Primary stakeholders are people who actually use the system, the end-users.
 Secondary stakeholders are people who do not directly use the system, but
receive output from it or provide input to it.
 Tertiary stakeholders are people who do not fall into either of the first two
categories but who are directly affected by the success or failure of the system.
 Facilitating stakeholders are people who are involved with the design,
development and maintenance of the system.

 The airline booking system stakeholders can be classified as follows :
1. Primary stakeholders : travel agency staff, airline booking staff
2. Secondary stakeholders : customers, airline management
3. Tertiary stakeholders : competitors, civil aviation authorities, customers’
traveling companions, airline shareholders
4. Facilitating stakeholders : design team, IT department staff.

3.3.3 Socio - Technical Systems Theory


 Socio Technical Systems (STS) in organizational development is an approach to
complex organizational work design that recognizes the interaction between
people and technology in workplaces.
 Socio technical systems pertains to theory regarding the social aspects of
people and society and technical aspects of organizational structure and
processes. Here, technical does not necessarily imply material technology. The
focus is on procedures and related knowledge.
 Within a socio-technical systems perspective, any organisation, or part of it, is
made up of a set of interacting sub - systems, as shown in the diagram below.
 Thus, any organisation employs people with capabilities, who work towards
goals, follow processes, use technology, operate within a physical
infrastructure, and share certain cultural assumptions and norms.

 Socio - technical theory has at its core the idea that the design and
performance of any organisational system can only be understood and
improved if both ‘social’ and ‘technical’ aspects are brought together and
treated as interdependent parts of a complex system.
 Organisational change programmes often fail because they are too focused on
one aspect of the system, commonly technology, and fail to analyse and
understand the complex interdependencies that exist.

 This is directly analogous to the design of a complex engineering product such
as a gas turbine engine. Just as any change to this complex engineering system
has to address the knock-on effects through the rest of the engine, so too does
any change within an organisational system.
 Socio-technical models of system design are of three types : USTM/CUSTOM,
OSTA and ETHICS.
 Potential benefits are as follows :
1. Strong engagement
2. Reliable and valid data on which to build understanding
3. A better understanding and analysis of how the system works now (the ‘as is’)
4. A more comprehensive understanding of how the system may be improved (the
‘to be’)
5. Greater chance of successful improvements.

3.3.3.1 CUSTOM Methodology


 CUSTOM is a socio-technical methodology designed to be practical to use in
small organizations. It is based on the User Skills and Task Match (USTM)
approach, developed to allow design teams to understand and fully document
user requirements
 It focuses on establishing stakeholder requirements and all stakeholders are
considered, not just the end-users.
 It is applied at the initial stage of design when a product opportunity has been
identified, so the emphasis is on capturing requirements.
 It is a forms-based methodology, providing a set of questions to apply at each of its
stages.
 There are six key stages to carry out in a CUSTOM analysis :
1. Describe the organizational context, including its primary goals, physical
characteristics, political and economic background.
2. Identify and describe stakeholders. All stakeholders are named,
categorized and described with regard to personal issues, their role in the
organization and their job.
3. Identify and describe work-groups. A work-group is any group of people
who work together on a task, whether formally constituted or not.
4. Identify and describe task–object pairs. These are the tasks that must be
performed, coupled with the objects that are used to perform them or to
which they are applied.
5. Identify stakeholder needs.
6. Consolidate and check stakeholder requirements.

3.3.3.2 Open System Task Analysis (OSTA)
 OSTA is an alternative socio-technical approach, which attempts to describe
what happens when a technical system is introduced into an organizational
work environment.
 OSTA specifies both social and technical aspects of the system. It focuses on tasks.
 Fig. 3.3.2 shows OSTA.

Fig. 3.3.2 OSTA


 OSTA has eight main stages :
1. The primary task which the technology must support is identified in
terms of users’ goals.
2. Task inputs to the system are identified. These may have different
sources and forms that may constrain the design.
3. The external environment into which the system will be introduced is
described, including physical, economic and political aspects.
4. The transformation processes within the system are described in terms of
actions performed on or with objects.
5. The social system is analyzed, considering existing work-groups and
relationships within and external to the organization.
6. The technical system is described in terms of its configuration and
integration with other systems.

7. Performance satisfaction criteria are established, indicating the social and
technical requirements of the system.
8. The new technical system is specified.

3.3.3.3 Soft Systems Methodology


 Soft Systems Methodology (SSM) is a cyclic learning system which uses models
of human activity to explore with the actors in the real world problem situation,
their perceptions of that situation and their readiness to decide upon
purposeful action which accommodates different actor’s perceptions,
judgments and values.
 The idea of the Soft Systems Methodology is to understand the particular
situation in which the problem is believed to lie and not to just find a general
solution to a specified problem
 Fig. 3.3.3 shows SSM.

Fig. 3.3.3 SSM

 Therefore, this approach first tries to define the situation (stages 1 and 2). The
definition of the situation occurs in 2 stages. First, a problem situation is
perceived and in stage 2, a better overview is gained by having the problem
situation expressed through communication with all involved parties.
 After expressing the problem in the given situation, the approach tries to
deliver a precise definition of the system (stage 3), which is applied to produce
conceptual models of the system.

 These conceptual models in stage 4 include the consideration of
clients/customers, actors, transformations, world view, owners and the
environment. The precise root definition from stage 3 will have already
identified who all the involved people are.
 Stages 3 and 4 are typically abstract modeling processes that do not relate
directly to the specific situation at hand, i.e. many scenarios can be considered
and simulated. This is an abstract modeling process that does not take place in
the real world.
 Stages 5, 6 and 7 then move away from the abstract point of view and try to
embed the earlier processes into the real world.
 Stage 5 will expose gaps in the original root definition after incorporating the
expressed situational circumstances, until a well-formed root definition has
been found.
 In Stage 6, the conceptual model of the situation is compared with the original
expression of the situation.
 It is possible that Stage 6 exposes further changes that might be necessary.
 In Stage 7, action takes place to improve the situation at hand with the help of
the designed system, but this may not succeed at first, so many iterations
might be necessary.

3.3.3.4 Participatory Design


 In participatory design the users are involved in the development of the products;
in essence, they are co-designers.
 Participatory Design (PD) is a set of theories, practices, and studies related to
end users as full participants in activities leading to software and hardware
computer products and computer-based activities.
 Cultural differences can often arise between users and designers, and sometimes
the users are unable to understand the language of the designers. It is therefore
recommended that the team use prototypes, such as mockups or a paper-based
outline of the screen of a webpage or a product.
 Other types of prototyping techniques are PICTIVE (Plastic Interface for
Collaborative Technology Initiatives through Video Exploration) and CARD
(Collaborative Analysis of Requirements and Design).
 The PICTIVE prototyping method uses low-fidelity office products, such as
pens, papers, and sticky notes.
 The actions of the users are videotaped. CARD uses playing cards with pictures
of specific items on them.
 PICTIVE concentrates on the detailed aspects of the system while CARD looks
at the flow of the task, just as storyboarding.

3.4 Communication and Collaboration Models
1. Face-to-Face Communication
 It involves speech, hearing, body language and eye-gaze. A person has to be
familiar with existing norms, to learn a new norm.
 Another factor is personal space; this varies based on the context,
environment, diversity and culture.
 The above factor comes into the picture when there is a video conference between
two individuals from different backgrounds.
 The factor of eye gaze is important during a video conference as the cameras
are usually mounted away from the monitor and it is important to have eye
contact during a conversation.
 Back channels help giving the listener some clues or more information about
the conversation.
 The role of interruptions like 'um's and 'ah's is very important, as they can be
used by participants in a conversation to claim the turn.
2. Conversation
 Transcripts can be used as a heavily annotated conversation structure, but
they still lack the back channel information.
 Another structure is turn-taking; this can be interpreted as adjacency
pairs, e.g. : A-x, B-x, A-y, B-y.
 Context varies according to the conversation.
 The focus of the context can also vary; this means that it is difficult to keep
track of context using adjacency pairs.
 Breakdowns during conversations occur frequently and can be noticed by
analyzing the transcripts. Reaching a common ground, or grounding, is very
essential to understand the shared context.
 Speech act theory is based on the statements and its propositional meaning.
 A state diagram can be constructed considering these acts as illocutionary
points in the diagram. This is called conversation for action.
3. Group working
 The roles and relationships between the individuals in a group are different and may
change during the conversation.
 Physical layout is important to consider here to maintain the factors in face-to-
face communication.

3.4.1 Hypertext
 Hypertext is text which contains links to other texts. The term was coined by
Ted Nelson around 1965. HyperMedia is a term used for hypertext which is
not constrained to be text : it can include graphics, video and sound, for
example. Apparently Ted Nelson was the first to use this term too.
 Hypertext consists of nodes connected by links to form networks or webs.
Depending on the system, a node can be restricted to one medium (text,
graphics, sound, animation, or video) or can include multiple media.
 Links can be unidirectional or bidirectional, labeled or typed, and can store
other information, such as author and creation date.
 Anchors are points or regions in a node to which a link attaches, often
represented by a button or other marking that indicates a navigational
possibility.
 When a user navigates to a new node, a new window may open or the existing
window may expand to incorporate the new information.
 Like hypertext, multimedia began with experiments in the 1960s and 1970s
that matured into vigorous commercial activity in the 1980s and 1990s.
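The node-and-link structure described above can be made concrete with a small data model. The following Java sketch is purely illustrative (the class and field names are not from any particular hypertext system); it shows nodes holding content, links that may be labeled and carry extra information, and an anchor marking where a link attaches.

import java.util.ArrayList;
import java.util.List;

// Illustrative model of a hypertext network : nodes connected by links.
class Node {
    String title;
    String content;                        // a node may also hold graphics, sound or video
    List<Link> links = new ArrayList<>();  // outgoing links from this node
}

class Link {
    Node target;            // the node this link points to
    String label;           // links can be labeled or typed
    String author;          // links can store other information, e.g. author, creation date
    boolean bidirectional;  // links can be unidirectional or bidirectional
    int anchorOffset;       // position of the anchor (point or region) within the source node
}

Following the links from node to node then corresponds to the navigation the user performs in a hypertext or hypermedia system.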

3.5 World Wide Web


 The WWW is an evolving system for publishing and accessing resources and
services across the Internet. The Web is an open system : its operation is based
on freely published communication standards and document standards. The
Web is also open with respect to the types of 'resource' that can be published and
shared on it. Fig. 3.5.1 shows web servers and web browsers.

Fig. 3.5.1 : Web servers and browsers

 Key features :

1. Web provides a hypertext structure among the documents that it stores.
2. The documents contain links i.e. references to other documents or
resources. The structures of links can be arbitrarily complex and the set of
resources that can be added is unlimited.

 The main standard components of Web :
1. HyperText Markup Language (HTML)
2. Uniform Resource Locators (URLs)
3. HyperText Transfer Protocol (HTTP)
 HTML specifies the contents and layout of web pages. The content contains
text, table, form, image, links, information for search engine, etc. The layout is
in the form of text format, background and frame. HTML is also used to specify
links and which resources are associated with them.
 URL identifies a resource to let the browser find it. HTTP URLs are the most widely
used today. An HTTP URL has two main jobs to do :
1. To identify which web server maintains the resource.
2. To identify which of the resources at that server is required.
 HTTP defines a standard rule by which browsers and any other types of client
interact with web servers. Main features are :
1. Request - reply interaction
2. Content types may or may not be handled by browser - using plug-in or external
helper.
3. One resource per request so several requests can be made concurrently.
4. Simple access control : Any user with network connectivity to a web
server can access any of its published resources.
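As a rough illustration of the request-reply interaction between a browser (or any other HTTP client) and a web server, the Java sketch below fetches one resource identified by an HTTP URL. The URL shown is only a placeholder; the example relies on the standard java.net classes.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FetchPage {
    public static void main(String[] args) throws Exception {
        // The URL names the web server (host) and the resource held by that server.
        URL url = new URL("http://www.example.com/index.html");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");                                // one request ...
        System.out.println("Status : " + conn.getResponseCode());   // ... one reply
        System.out.println("Type   : " + conn.getContentType());    // content type decides how it is handled
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);                            // the HTML content of the page
            }
        }
    }
}

Each resource needs its own request, which is why a browser typically issues several such requests concurrently when a page contains images or other embedded content.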

3.5.1 Static Web Content


i. The message and the medium : On web documents, users want both to
see the information and to retrieve it, and this has an influence on design.
A paper copy of the information does not have the same inbuilt hypertext
and active capabilities as the web page.
ii. Text : The text information on web can be presented with different fonts,
styles, colors, sizes in order to make the text more attractive and readable
as per the importance of text in the context.
iii. Graphics : There are number of sites on web that contain archives of
graphical images, icons, backgrounds and so on. There are also paint and
image manipulation packages available on almost all computer systems, and
scanners and digital cameras, where available, enable the input of
photographs and diagrams.
iv. Movies and Sound : Movies and Sound are both available to users of the
web, and hence to the page designers. It is made available with the help of
different file formats.

3.5.2 Dynamic Web Content
i. The active web : This type involves complex forms of interactions on web.
The actual content may be fixed, but the user can change the form of
presentation. The web pages can be generated from database contents, and
the database information can be updated through the web.
ii. Fixed content – local interaction and changing view : Probably the
most overestimated aspect of the web in recent years has been Java. In fact,
Java can be used to write server-end software and platform-independent
standalone programs.
iii. Search : The user’s keywords are submitted to the server using an
HTML form; they are compared against pre-prepared indexes at the server.
iv. Dynamic content : The content of the web pages reacts to and is
updateable by the web user in a dynamic web context. The user interacts
through a web browser with a web server. JavaServer Pages (JSP) and
Enterprise JavaBeans (EJB) are used for this purpose. 'Business logic' is
applied in the data processing, for example in online banking. The
enterprise beans take data from a corporate database using JDBC
connections.
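A hedged sketch of how dynamic content can be generated on the server side is given below. The text mentions JSP and enterprise beans; this fragment uses a plain Java servlet instead, simply to show the idea of a page built in response to a user's request. The class name and the form parameter "q" are illustrative assumptions, not taken from any particular application.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Generates an HTML page whose content depends on the user's input.
public class SearchServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String keyword = req.getParameter("q");   // value submitted from an HTML form
        resp.setContentType("text/html");
        PrintWriter out = resp.getWriter();
        out.println("<html><body>");
        out.println("<p>Results for : " + keyword + "</p>");
        // In a real application the keyword would be matched against a server-side
        // index or a database (for example via a JDBC connection) and the matching
        // results would be written out here.
        out.println("</body></html>");
    }
}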

3.5.3 Browser Safe Palette and Browser - Safe Colors


Browser Safe Palette :
 A website color palette is the combination of colors you choose for your site's
design. You'll stick to using these colors throughout your whole site.
 Color is displayed on your monitor with a combination of Red, Green and Blue
values. The RGB values for each color within this palette are all formed from
variations of the RGB number values 00, 51, 102, 153, 204 and 255.
 The hexadecimal code values for each color are all formed with variations of
the hex code values 00, 33, 66, 99, CC and FF.
 These colors were chosen from a mathematical color cube formed by
combining six values each of red, green and blue.
 This is why the browser-safe palette is sometimes called the 6 x 6 x 6 palette.
 It’s also frequently called Netscape palette, Explorer palette, Color cube,
Web color, and Web safe.
 Whenever you see these terms, it usually refers to the colors browsers impose
on end-users viewing the Web on an 8-bit system.
 If you design your Web graphics and color schemes with the Browser - safe
color palette, your site will not be prone to unexpected color shifting or
dithering on 8-bit systems.
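The 6 x 6 x 6 cube can be checked with a few lines of code. The small Java sketch below simply enumerates every combination of the six permitted channel values and confirms that 216 colors result.

public class WebSafePalette {
    public static void main(String[] args) {
        // The six permitted values per channel : 00, 33, 66, 99, CC, FF in hex
        // (0, 51, 102, 153, 204, 255 in decimal), giving 6 x 6 x 6 = 216 colors.
        int[] steps = {0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF};
        int count = 0;
        for (int r : steps)
            for (int g : steps)
                for (int b : steps) {
                    System.out.printf("#%02X%02X%02X%n", r, g, b);
                    count++;
                }
        System.out.println("Total browser-safe colors : " + count);  // prints 216
    }
}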

Browser - safe colors :
 The color management system currently used by web browser software is
based on an 8-bit, 216-color palette.
 The browser-safe color palette is a solution devised by Netscape to solve the
problem of displaying color graphics in a similar way on many kinds of display
screens, with browsers running under different operating systems.
 Because a majority of the web audience years ago had 8-bit display screens,
256 colors was the upper limit for the color palette. But the various versions of
the windows operating system reserve 40 colors for displaying such graphic
interface elements as windows, menus, screen wallpaper, icons, and buttons,
which leaves just 216 colors to display everything else.
 The 216 colors chosen by Netscape are identical in both the Macintosh and
Windows system palettes. Although the browser-safe color scheme originated
at Netscape, at present both of the dominant Web browsers use the same color
management system.
 Most web users have computers and monitors set to "thousands" or "millions"
of colors, so the importance of the so - called web - safe palette has sharply
diminished in the past few years.
 When the user has a monitor set to thousands or millions of colors all colors
display properly, so there is no longer any need to restrict your color choices to
the 216 Web-safe colors.
Two Marks Questions with Answers

What is participatory design ?


Ans. : Participatory Design (PD) is a set of theories, practices, and studies related
to end users as full participants in activities leading to software and hardware
computer products and computer-based activities
Explain Linguistics Models.
Ans. :  Linguistic models represent the user-system grammar. Understanding
the user's behaviour and cognitive difficulty based on analysis of language
between user and system.
 Backus - Naur Form (BNF) and Task - Action Grammar (TAG) are used to
represent this model.
List components of GOMS model.
Ans. : The GOMS model has four components : goals, operators, methods and
selection rules.
What is CUSTOM ?

Ans. : CUSTOM is a socio-technical methodology designed to be practical to
use in small organizations. It is based on the User Skills and Task Match (USTM)
approach, developed to allow design teams to understand and fully document
user requirements.
List the names of cognitive models.
Ans. : Cognitive models are as follows :
 Goal and task hierarchies (GOMS, CCT)
 Linguistic notations (BNF, TAG)
 Physical and device models (KLM)
 Automating Inspection Methods.
What is soft systems methodology ?
Ans. : Soft Systems Methodology (SSM) is a cyclic learning system which uses
models of human activity to explore with the actors in the real world problem
situation, their perceptions of that situation and their readiness to decide
upon purposeful action which accommodates different actor’s perceptions,
judgments and values.
List the advantages of GOMS.
Ans. : Advantages :
1. Easy to construct a simple GOMS model and saves time
2. Helps discover usability problems
3. Gives several qualitative and quantitative measures.
4. Less work than usability study.
What is cognitive models ?
Ans. : A cognitive model is the designer’s intended mental model for the user of
the system and set of ideas about how it is organized and operates.
Give the characteristics of direct manipulation.
Ans. :  Acts as an extension of the real world.
 Provides continuous visibility.
 Actions are rapid and incremental with visible display of results.
 Actions are incremental and easily reversible.



UNIT-IV

4
Mobile HCI

Syllabus
Mobile Ecosystem : Platforms, Application framework – Types of Mobile Applications :
Widgets, Applications, Games – Mobile Information Architecture, Mobile 2.0, Mobile Design :
Elements of Mobile Design, Tools – Case Studies.

Contents
4.1 Mobile Ecosystem
4.2 Types of Mobile Applications ... April/May-17, Marks-16
4.3 Mobile Information Architecture
4.4 Mobile 2.0
4.5 Mobile Design ... April/May-17,18, Marks-16
Two Marks Questions with Answers


4.1 Mobile Ecosystem
 Mobile devices do not exist in a vacuum - a series of networks and
interconnected systems exist to support modern mobility. The utility of modern
mobile devices is greatly enhanced by software applications and their
supporting cloud services.
 Fig. 4.1.1 shows layers of mobile ecosystem. Each layer is reliant on the others
to create a seamless, end-to-end experience.
Fig. 4.1.1 Layers of mobile ecosystem (bottom to top) : Operators, Networks, Aggregators, Hardware Devices, Platforms, OS, Application Framework, Applications, Services

 Mobile ecosystem consists of operators, networks, aggregators, hardware
devices, platforms, OS, application framework, applications and services.
1. Operators :
 Operators are the bottom layer of the mobile ecosystem. Operators are also referred
to as mobile network operators, mobile service providers, mobile phone operators
or cellular companies. The role of operators in the ecosystem is to create and maintain
a specific set of wireless services over a reliable cellular network. Various
network operators are BSNL, MTNL, Airtel, Reliance Communications, Idea
and Vodafone etc.
2. Networks :
 The operator operates a wireless network. A cellular network is a radio
telecommunications network that provides wireless communications
infrastructure to support mobile radio transceivers (cell phones) located
throughout a geographical coverage area.
3. Device :
 Cell-phone handset is composed of two components : Radio Frequency (RF)
and Baseband. RF is the mode of communication for wireless technologies of
all kinds, including cordless phones, radar, ham radio, GPS, and radio and
television broadcasts. Fig. 4.1.2 shows cell phone block diagram.


Fig. 4.1.2 Cell phone block diagram



4. Platforms :
 A mobile platform's primary duty is to provide access to the device and to run
software and services on each of these devices. Platforms are of three types :
licensed, proprietary and open source.
 Open source is an approach to the design, development, and distribution of
software, offering practical accessibility to software’s source code.
 Open source licenses must permit non-exclusive commercial exploitation of the
licensed work, must make available the work’s source code, and must permit
the creation of derivative works from the work itself.
 Proprietary software is computer software which is the legal property of one
party. The terms of use for other parties are defined by contracts or licensing
agreements. These terms may include various privileges to share, alter,
disassemble, and use the software and its code.
5. Operating system :
 Mobile OSs provide dedicated application stores for end users offering a
convenient and customized means of adding functionality.
 Application stores pose an additional threat vector for attackers to distribute
malware or other harmful software to end users. This is especially true of third-
party application stores not directly supervised by mobile OS vendors.
 Android is a software environment built for mobile devices. It is not a hardware
platform. While components of the underlying OS are written in C or C++,
user applications are built for Android in Java.
 Android has been available as open source since October 2008. Google opened
the entire source code under an Apache License. With the Apache License,
vendors are free to add proprietary extensions without submitting those back
to the open source community.
 Other mobile OS are Symbian, windows CE, Palm OS, Linux and Mac OS X etc.
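Since the text notes that Android user applications are written in Java on top of the platform's application framework, a minimal and purely illustrative activity is sketched below; the class name and displayed text are placeholders, not part of any particular app.

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

// Smallest possible Android user application : one screen showing a line of text.
public class HelloActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);   // framework callback when the screen is created
        TextView view = new TextView(this);
        view.setText("Hello, Android");
        setContentView(view);                 // hand the view tree back to the framework
    }
}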
6. Application framework :
 The first layer the developer can access is the application framework or API
released by one of these companies. An application framework is a software
library that provides a fundamental structure to support the development of
applications for a specific environment. An application framework acts as the
skeletal support to build an application.
 Application frameworks often run on top of operating system sharing core
services such as communications, messaging, graphics, location, security,
authentication etc.
 The Binary Runtime Environment for Wireless (BREW) gives developers a
mechanism for creating portable applications that will work on Code Division
Multiple Access (CDMA) - based handsets.
 BREW manages all of the telephony functions on the device, thereby allowing
application developers to create advanced wireless applications without having
to program to the device's system-level interface.
 S60 : The S60 platform was formerly known as Series 60. It is an application
platform for devices that run the Symbian OS. S60 applications can be created in
Java, the Symbian C++ framework etc.
 Adobe Flash Lite is a lightweight version of Adobe Flash Player, a software
application published by Adobe Systems for viewing Flash content. Flash Lite
operates on devices that Flash Player cannot, such as mobile phones and other
portable electronic devices like Wii, Chumby and Iriver.
 Cocoa Touch is the application development environment for building software
programs
to run on iOS for the iPhone and iPod Touch, iPadOS for the iPad, watchOS for the
Apple Watch, and tvOS for the fourth-generation Apple TV, from Apple Inc.
7. Applications :
 For creating applications, application framework is used. Applications are
web browser, camera or media player.

4.2 Types of Mobile Applications AU : April/May-17

 The mobile medium type is the type of application framework or mobile
technology that presents content or information to the user.

4.2.1 Widgets
 Mobile Widgets is a mobile application which gives fast and simple access to
a set of mobile widgets.
 These mobile widgets are a very quick way to access news, fun and a multitude
of Internet content and services.
 The mobile website widgets create elements and containers for creating a
responsive mobile version of the website.
 The widget examples should be viewed on a suitable device, in the iOS
Simulator or in a desktop browser using responsive mode.
 The fig. 4.2.1 below shows the three different sizes of iPhone and the screen
width in portrait and landscape mode. It should be obvious that a fixed width
web page design is not going to perform well on the various sizes of phone.

Fig. 4.2.1
 For a design to work on different devices, the page width has to vary from 320
px to 414 px in portrait mode and up to 736 px in landscape. This is achieved
by giving the items 100% width with a maximum width setting.
 The maximum width setting is used to control the width of an item when
viewed in landscape mode. For example, an image with an aspect ratio of less
than 4 : 3 will be partially hidden when viewed in landscape mode so its
maximum width setting should be set to prevent the height from expanding
more than the device screen width which is actually the screen height in
landscape mode.
 Items such as videos are played in full screen mode anyway so it is not
necessary to set their maximum width to any more than about 480 px.

Fig. 4.2.2
 The Widget tab allows you to configure the functionality of the app. As widgets
are specifically designed to work with 2D or 3D data content, a set of widgets
for 2D apps is different from 3D apps.
 In addition, the initial set of widgets may vary from theme to theme, as each
theme has its own preconfigured set of widgets. Following are the widgets
from the Foldable theme when you build 2D apps.

Benefits of this service :


 Simple and direct access to contents and services (weather, news, finance,
games,...).
 Ergonomic client software with easy navigation through Grid or carousel.
 Already installed on device with default widgets featured.
 Additional widgets available for download.
 Data included in many tariff plans. Low data usage for people who don't have data
plans.
 Fast access to Internet services.

Limitations :
 They typically require a compatible widget platform to be installed on the device.
 They cannot run in any mobile web browser.
 They require learning additional proprietary, non-web-standard techniques.

4.2.2 Applications
 Mobile Web applications refer to applications for mobile devices that require
only a Web browser to be installed on the device.
 They typically use HTML and Ajax, although they may make use of augmented
Rich Internet Application (RIA) technologies, such as Flash, JavaFX and
Silverlight, but are not written specifically for the device.
 Rich, mobile Web applications have roughly equivalent usability to PC-rich web
applications (or RIAs), when designed specifically for smaller form factors.
 Simple mobile Web applications limit the use of RIA technologies and are
designed to present information in a readable, action-oriented format.
 Mobile Web applications differ from mobile native applications, in that they
use web technologies and are not limited to the underlying platform for
deployment.

Challenges are as follows :


 Screen shape - Most smart phone users hold their phones vertically, in
portrait mode. This means the screen is taller than it is wide, the opposite of a
desktop computer or laptop.
 Screen size - Smart phones have very small screens compared to desktop
computers, so designers need to make the pages simpler. Different models
have different screen sizes, but as a rule of thumb aim for 340px as a maximum
width for your mobile portrait design.
 User interactions - Mobile phones do not have a mouse, so effects that
appear "on hover" or "on blur" don't work.
 Navigation - The majority of websites tend to have a full-width top navigation
bar which doesn't work at all on a smart phone in portrait mode.
 Lower bandwidth - It depends whether you're in the middle of a city or the
countryside, but mobile users on a cellular connection can have slower internet
speeds. You may want to replace the full-screen background video on the
mobile version of your site.

Native Apps :
 Native apps live on the device and are accessed through icons on the device
home screen. Native apps are installed through an application store (such as
Google Play or Apple’s App Store).
 They are developed specifically for one platform, and can take full advantage of
all the device features, they can use the camera, the GPS, the accelerometer,
the compass, the list of contacts, and so on.
 They can also incorporate gestures (either standard operating-system gestures
or new, app- defined gestures). And native apps can use the device’s
notification system and can work offline.

Hybrid apps :
 Hybrid apps are part native apps, part web apps. Like native apps, they live in
an app store and can take advantage of the many device features available.
 Like web apps, they rely on HTML being rendered in a browser, with the
caveat that the browser is embedded within the app.
 Often, companies build hybrid apps as wrappers for an existing web page;
in that way, they hope to get a presence in the app store, without spending
significant effort for developing a different app.
 Hybrid apps are also popular because they allow cross platform development
and thus significantly reduce development costs : that is, the same HTML code
components can be reused on different mobile operating systems.
 Tools such as PhoneGap and Sencha Touch allow people to design and code
across platforms, using the power of HTML.
Advantages :
 They are easy to create, using basic HTML, CSS, and JavaScript knowledge.
 They are simple to deploy across multiple handsets.
 They offer a better user experience and a rich design, tapping into device
features and offline use.
 Content is accessible on any mobile web browser.
Disadvantages :
 The optimal experience might not be available on all handsets.
 They can be challenging (but not impossible) to support across multiple devices.
 They don’t always support native application features, like offline mode,
location lookup, filesystem access, camera, and so on.

4.2.3 Short Message Service


 Short Message Service (SMS) is a globally accepted wireless service that
enables the transmission of alphanumeric messages between mobile
subscribers and external systems such as electronic mail, paging, and voice
mail systems, without requiring the end-to-end establishment of a traffic path.
 The service makes use of a Short Message Service Center (SMSC) which acts
as a store and forward system for short messages. The wireless network
provides for the transport of short messages between the SMSCs and
wireless handsets.
 SMS also guarantees delivery of the short message by the network. Temporary
failures are identified, and the short message is stored in the network until the
destination becomes available.
 SMS messages are transported in the core network using SS7.
 The SMSC can send SMS messages to the end device using a maximum
payload of 140 octets. This defines the upper bound of an SMS message to be
160 characters using 7-bit encoding.
 It is possible to specify other schemes such as 8-bit or 16-bit encoding,
which decreases the maximum message length to 140 and 70 characters,
respectively.
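The character limits quoted above follow directly from the 140-octet payload. The short Java check below reproduces the arithmetic; labeling the 16-bit case as UCS-2 is an assumption based on the encoding commonly used for non-Latin alphabets.

public class SmsCapacity {
    // Maximum SMS payload is 140 octets = 1120 bits.
    static int maxChars(int bitsPerCharacter) {
        return (140 * 8) / bitsPerCharacter;
    }

    public static void main(String[] args) {
        System.out.println("7-bit           : " + maxChars(7));   // 160 characters
        System.out.println("8-bit           : " + maxChars(8));   // 140 characters
        System.out.println("16-bit (UCS-2)  : " + maxChars(16));  // 70 characters
    }
}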
 Send or receive during voice or data calls : SMS messaging makes use of a
separate channel, normally used for transfer of control messaging to transfer
its packets. Being out-of-band, this means voice and data calls will not be
interrupted by SMS transfer.
 Furthermore, the low-bandwidth requirements of transmitting short
alphanumeric strings allow messaging worldwide with very low latency. This of
course depends upon network operator agreements.
 The SMS service uses part of the bandwidth capacity of the existing 2G voice
network. It was originally intended for use by the mobile phone companies to
contact their operatives in the field undertaking maintenance work.
 The SMSC relays SMS messages from one Mobile Equipment (ME) to another
ME. The SMS is sent to the nearest cell mast and forwarded to the SMSC. The
SMSC stores the SMS and then attempts to deliver it to the destination ME.
 After a preset period of time, known as the Validity Period, the message is
deleted from the SMSC if it cannot be delivered. The SMS can also be
forwarded on to a computer or logical ME in the mobile network's LAN.
 Fig. 4.2.3 shows how the components of the digital mobile network relevant to
short messaging fit together.

Fig. 4.2.3 SMS


 Apparent on the left are the external short messaging Entities as inputs to the
network. These feed directly into the short messaging service centre.
 This example has the SMSC connected to two separate base stations. Two
signal transfer points link the SMSC to the mobile switching centre. The home
location register is accessed through the signal transfer point, and the visitor
location register feeds directly into the MSC.
 Finally, the MSC transmits to the base station. The base station then provides
the wireless link to all mobile stations, mobile phones.
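From an application developer's point of view, the SMSC store-and-forward machinery described above is hidden behind the platform API. As one illustrative sketch (not the only way to do it), Android's SmsManager can be used as follows; sending requires the SEND_SMS permission.

import android.telephony.SmsManager;

// Illustrative sender : the network (SMSC, MSC, HLR/VLR) is invisible to the app.
public class SmsSender {
    public void send(String destination, String text) {
        SmsManager sms = SmsManager.getDefault();
        if (text.length() <= 160) {
            // Fits in a single 7-bit encoded message.
            sms.sendTextMessage(destination, null, text, null, null);
        } else {
            // Longer texts are split into parts and sent as a concatenated SMS.
            sms.sendMultipartTextMessage(destination, null,
                    sms.divideMessage(text), null, null);
        }
    }
}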

Pros :
1. They work on any mobile device.
2. They can be simple to set up and manage.
3. They can be incorporated into any web or mobile application.

Cons :
1. They are limited to 160 characters.
2. They provide limited text based experience.

4.2.4 Games
 A mobile game is a game played on a mobile phone (feature phone or
smartphone), tablet, smartwatch, PDA, portable media player.
 The final mobile medium is games, the most popular of all media available to
mobile devices. Technically, games are really just native applications that use
similar platform capabilities.
 They are different from native applications for two reasons : they cannot be
easily duplicated with web technologies, and porting them to multiple mobile
platforms is a bit easier than typical platform - based applications.
 Adobe’s Flash and the SVG (Scalable Vector Graphics) standard are the only
way for the mobile games on the Web now, and will likely be how it is done on
mobile devices in the future, the primary obstacle being the performance of the
device in dealing with vector graphics.
 The game mechanics are the only thing that needs to adapt to the various
platforms. Like in console gaming, there are a great number of mobile game
porting shops that can quickly take a game written in one language and port it
to another.
Merits
• They provide a simple and easy way to create an immersive experience.
• They can be ported to multiple devices relatively easily.
Demerits
• They can be costly to develop as an original game title.
• They cannot easily be ported to the mobile web.
University Questions
Q.1 Appraise the types of mobile applications with example. AU : April/May-17, Marks-16
Q.2 Elaborately explain the mobile information architecture. AU : April/May-18

4.3 Mobile Information Architecture


 Information Architecture (IA) plays a vital role in creating intuitive and well-structured
navigation, content flow and structure for a software product, website,
intranet or mobile application.
 Mobile devices have their own set of information architecture patterns.
 While the structure of a responsive site may follow more “standard” patterns,
native apps, for example, often employ navigational structures that are tab-
based.
 Again, there’s no “right” way to architect a mobile site or application. Instead,
let’s take a look at some of the most popular patterns : Hierarchy, Hub &
spoke, Nested doll, Tabbed view, Bento box and Filtered view.
 Typically the activities undertaken in defining information architecture involve :
1. Content inventory : Examination of an existing application/website to
locate and identify existing app/site content
2. Content audit : Evaluation of content hierarchy, relevance, priorities, usefulness,
accuracy and overall effectiveness.
3. Information grouping : Definition of user-centered relationships
between content and its types.
4. Taxonomy development : Definition of a standardized naming convention
(controlled vocabulary) to apply to all site content.
5. Descriptive information creation : Definition of useful metadata that can
be utilized to generate “Related Link” lists or other navigation components
that aid discovery.
 Fig. 4.3.1 shows mobile information architecture hierarchy.
 Hierarchy : The hierarchy pattern is a standard site structure with an index page and a series of sub pages. If you are designing a responsive site you may be restricted to this, however introducing additional patterns could allow you to tailor the experience for mobile.

Fig. 4.3.1 Hierarchy

 Hub & spoke : A hub and spoke pattern gives you a central index from which users will navigate out. It’s the default pattern on Apple’s iPhone. Users can’t navigate between spokes but must return to the hub, instead.
 Fig. 4.3.2 shows hub and spokes.

Fig. 4.3.2 Hub and spokes

 Nested doll : The nested doll pattern leads users in a linear fashion to more detailed content. When users are in difficult conditions this is a quick and easy method of navigation. It also gives the user a strong sense of where they are in the structure of the content due to the perception of moving forward and then back.
 Tabbed view : This is a pattern that regular app users will be familiar with.
It’s a collection of sections tied together by a toolbar menu. This allows the
user to quickly scan and understand the complete functionality of the app
when it’s first opened.
 Bento Box/Dashboard : The bento box or dashboard pattern brings more
detailed content
content. This pattern is more suited to tablet than mobile due to its complexity.
 Fig. 4.3.3 shows process flow diagram.

Fig. 4.3.3 Process flow diagram


 A prototype is an example that serves as a basis for future models.
Prototyping gives designers an opportunity to research new alternatives and
test the existing design to confirm a product’s functionality prior to production.
 Paper prototype : Easy and fast to do. It helps you think of specifics. Usually
good as a first round prototype. It can still do usability testing, even with
paper.
 Paper Prototyping is a prototyping method in which paper models are used to
simulate computer or web applications. After initial design, a paper prototype
is drawn on plain or construction paper, sometimes with colored markers. This
is often a quick method, but can have some drawbacks over using prototyping
software, in which designs can be easily copied, adapted and simulated.
Wire - Frames :
 The interaction modeling and interface options can be put together concretely
using the so-called wire-framing process.
 Wire-framing originated from making rough specifications for website page
design and resembles scenarios or storyboards.
 Usually, wire-frames look like page schematics or screen blueprints, which
serve as a visual guide that represents the skeletal framework of a website or
interface. It depicts the page layout or arrangement of the UI objects and how
they respond to each other.
 Wireframes can be pencil drawings or sketches on a whiteboard, or they can
be produced by means of a broad array of free or commercial software
applications.

Fig. 4.3.4 An example of a wire framing tool

 Fig. 4.3.4 shows such a wire-framing tool. Wireframes produced by these
tools can be simulated to show interface behavior and, depending on the tool,
the interface logic can be exported for actual code implementation (but usually
not).
 Note that there are tools that allow the user to visually specify UI elements and
their configuration and then automatically generate code. Regardless of which
type of tool is used, it is important that the design and implementation stages
be separated.
 Through wire-framing, the developer can specify and flesh out the kinds of
information displayed, the range of functions available and their priorities,
alternatives and interaction flow.

4.4 Mobile 2.0
 The Web was a read-only medium for a majority of users. Through the 1990s, the
web was a combination of a telephone book and yellow pages, and few users
knew about hyperlinks.
 When the term web 2.0 was coined by Tim O’Reilly, attitudes towards the web changed.
 Web 2.0 tools allow libraries to enter into a genuine conversation with their
users. Libraries are able to seek out and receive patron feedback and respond
directly.
 In 2003, there was a noticeable shift in how people and businesses were using the
web and developing web-based applications.
 Tim O'Reilly said that “Web 2.0 is the business revolution in the computer
industry caused by the move to the Internet as a platform, and an attempt to
understand the rules for success on that new platform”
 Many web 2.0 companies are built almost entirely on user-generated content
and harnessing collective intelligence. Google, MySpace, Flickr, YouTube and
Wikipedia, users create the content, while the sites provide the platforms.
 The user is not only contributing content and developing open source software,
but directing how media is delivered, and deciding which news and information
outlets you trust.
 At that stage we thought the web 2.0 stack was fairly empty, but since those
days the extent to which people collaborate and communicate, and the range of
tools and technologies, have changed rapidly.
 Editing blogs and wikis did not require any knowledge of HTML any more.
Blogs and wikis allowed individuals and groups to claim their personal space
on the Web and fill it with content at relative ease.
 The first online social networks entered the field at the same time as blogging
and wikis started to take off. Web 2.0 is the network as platform, spanning all
connected devices.
 Web 2.0 applications are those that make the most of the intrinsic advantages
of that platform. It delivers software as a continually-updated service that gets
better the more people use it.
 Consuming and remixing data from multiple sources, including individual
users, while providing their own data and services in a form that allows
remixing by others.

Web 3.0 :
 Web 3.0, or the semantic web, is the web era we are currently in, or perhaps
the era we are currently creating.
 Web 3.0, with its use of semantics and artificial intelligence is meant to be a
“smarter web”, one that knows what content you want to see and how you
want to see it so that it saves you time and improves your life.
 The semantic web contrasts with the participatory web of Web 2.0, which today
includes “classics” such as YouTube, MySpace, eBay, Second Life, Blogger,
RapidShare, Facebook and so forth.
 A key feature of Web 2.0 is that users are willing to provide content as well as
metadata. This may take the form of articles and facts organized in tables and
categories in Wikipedia, photos organized in sets and according to tags in
Flickr, or structured information embedded into homepages and blog postings
using micro-formats.
 A major disadvantage associated with web 2.0 is that the websites become
vulnerable to abuse since, anyone can edit the content of a web 2.0 site. It is
possible for a person to purposely damage or destroy the content of a website.
 Web 2.0 also has to address the issues of privacy. Take the example of
YouTube. It allows any person to upload a video. But what if the video
recording was done without the knowledge of the person who is being shown
in the video ? Thus, many experts believe that web 2.0 might put the privacy
of a person at stake.
 The basic idea of web 3.0 is to define structured data and link it in order to
enable more effective discovery, automation, integration, and reuse across various
applications. It is able to improve data management, support accessibility of the
mobile internet, stimulate creativity and innovation, encourage the globalization
phenomenon, enhance customers’ satisfaction and help to organize
collaboration in the social web.
 Web 3.0 supports a world wide database and a web-oriented architecture, where
the web in an earlier stage was described as a web of documents.
 Mobile devices with web access extend the ubiquity of the Internet, offering
anywhere, anytime access to information. This makes it possible to share
information with colleagues and friends through photo and video media, as
well as by voice.
 Consumers can quickly capture photos and videos with cameras embedded in
their mobile devices and share them with relatives and friend via email or post
them to MySpace or YouTube.

4.5 Mobile Design AU : April/May-17,18

 Elements of mobile design are context, message, look and feel, layout, color,
typography and graphics.

1. Context
 Context is critical to mobile design; understanding the big picture of a user’s
interaction with a device enables designers to create better user experiences
on that device.
 Mobile is based very much on the traditional understanding of context including :
a) Culture - Economic, religious, manners, legal, social, etc.
b) Environmental - The noise, the light, the space in which something is used,
privacy, etc.
c) The activity - Are they walking, driving, working, etc ?
d) The goals that the user has - Their status, social interaction, entertainment, etc.
e) The attention span the user has available - Continuous (full/partial) or
intermittent (full/partial).
f) The tasks the user wants to carry out - Make calls, send messages, etc.
g) The device on which the user operates - OS, hardware, capabilities, etc.
h) The connection available to the user - Speed, reliability, etc.
 The contextual model is based around the user’s intentions rather than their
physical location, although there may be some shift between levels on each
device.
2. Message
 The message is the overall mental impression we create explicitly through
visual design. Branding is the impression the company name and logo give -
essentially the company's reputation.
 A “heavy” design with use of dark colors and lots of graphics will tell the
user to expect something more immersive.
3. Look and Feel
 Establishing a look and feel usually comes from design inspiration.
 Look refers to the visual design, and feel refers to the overall customer experience
of using a product, service, environment, machine or tool.
 On large mobile projects or in companies with multiple designers, a style guide
or pattern library is crucial, maintaining consistency in the look and feel and
reducing the need for each design decision to be justified.
4. Layout
 Layout is an important design element, because it is how the user will visually
process the page, but the structural and visual components of layout often get
merged together, creating confusion and making design more difficult to
produce.
 Layout plays a significant role in a graphic design. Layout refers to the
arrangement of elements on a page usually referring to specific placement of
image, text and style.
 Understanding the layout of design is very important. If the layout is not
correctly understood, there is a probability that the message you wanted to
convey will be lost and the cost of advertising would go to waste.
 There are two distinct types of navigation layouts for mobile devices: touch and
scroll.
 Navigation is the skeleton of the APP, supporting the overall content and
information, so that content in accordance with the information architecture
organically combined, visually and clearly displayed in front of users so that
fragmented content becomes full and orderly. Meanwhile, the structured
arrangement at the same time also enhances the sense of ecology.
5. Color
 Color is one of the most important elements of mobile design. When users open
your app, what is the first thing they observe ? That’s right, the main color.
 The Internet is filled with studies and articles about the psychology behind
colors in marketing, and all these principles also apply to mobile apps, so you
have to think about color while building your mobile interface and add it to
your mobile app design elements.
 A simple color could change everything about your mobile app. Take a look at
the scheme below and think about your users, their location, and their
characteristics. This way you will find the right complex of colors to match
your app’s style.
 Color palettes : Defining color palettes can be useful for maintaining a
consistent use of color in mobile design. Color palettes typically consist of a
predefined number of colors to use throughout the design.
 Selecting what colors to use varies from designer to designer, each having
different techniques and strategies for deciding on the colors.
 Three basic ways to define a color palette : Sequential, Adaptive and Inspired.
a) Sequential :
 In this case, there are primary, secondary, and tertiary colors. Often the
primary color is reserved as the “brand” color or the color that most closely
resembles the brand’s meaning. The secondary and tertiary colors are often
complementary colors that can be selected using a color wheel.
 Primary color is the color displayed most frequently across your app’s
screens and components. Your primary color can be used to make a color
theme for your app, including dark and light primary color variants.
 Secondary color provides more ways to accent and distinguish your
product. Having a secondary color is optional, and should be applied
sparingly to accent select parts of your user interface.
 Secondary colors are best for : Floating action buttons, Selection controls,
like sliders and switches, Highlighting selected text, Progress bars, Links
and headlines.
b) Adaptive :
 An adaptive palette is one in which we leverage the most common colors
present in a supporting graphic or image. When creating a design that is
meant to look native on the device, an adaptive palette can be used to
make sure that colors are consistent with the target mobile platform.
c) Inspired :
 This is a design that is created from the great pieces of design we might
see online or offline, in which a picture of the design might inspire us. This
could be anything from an old poster in an alley, a business card, or some
packaging. Like with the adaptive palette, we actually extract the colors
from the source image, though we should never ever use the source
material in a design.
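One lightweight way to keep a sequential palette consistent across an app is to define the colors once as named constants. The Java (Android) sketch below is only an illustration; the hex values are arbitrary placeholders, not recommended brand colors.

import android.graphics.Color;

// A sequential palette : one primary (brand) color plus secondary/tertiary accents.
public final class AppPalette {
    public static final int PRIMARY       = Color.parseColor("#3366CC"); // brand color
    public static final int PRIMARY_DARK  = Color.parseColor("#1A3366"); // dark variant
    public static final int PRIMARY_LIGHT = Color.parseColor("#99BBEE"); // light variant
    public static final int SECONDARY     = Color.parseColor("#CC6633"); // accents : FABs, sliders, links
    public static final int TERTIARY      = Color.parseColor("#66CC33"); // sparing highlights

    private AppPalette() { }   // constants only, no instances
}

Referring to these constants everywhere in the design, rather than repeating raw hex values, keeps the look consistent and makes a later change of brand color a one-line edit.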
6. Typography
 Typography is the art and technique of arranging type to make written
language legible, readable, and appealing when displayed. The arrangement of
type involves selecting typefaces, point sizes, line lengths, line-spacing
(leading), and letter-spacing (tracking), and adjusting the space between pairs
of letters.
 Fig. 4.5.1 shows example of typography.

Fig. 4.5.1 Example of typography


 The term typography is also applied to the style, arrangement, and appearance
of the letters, numbers, and symbols created by the process.
 Typography is a vital part of design responsiveness. Wrong size, width, and
placement of fonts have a big impact on a whole composition. Even the most
insignificant changes may break the balance between all design elements.
 Creating typography for a digital product, designers need to consider how it
will look on different devices. Planning such things ahead helps to avoid
unnecessary problems in the future.
7. Graphics
 The final design element is graphics, or the images that are used to establish
or aid a visual experience. Graphics can be used to supplement the look and
feel, or as content displayed in line with the text.
 Iconography : The most common form of graphics used in mobile design is
icons. Iconography is useful to communicate ideas and actions to users in a
constrained visual space. The challenge is making sure that the meaning of the
icon is clear to the user.

4.5.1 HCI Design Challenges for Mobile Device


 Challenges in HCI design for mobile devices relate to both hardware and software.
Hardware Challenges :
 Due to the limitations of size and weight for portability purpose, the interface
design for mobile devices comes with more hardware challenges.
 These challenges include limited input facilities, limited output facilities, and
designing for mobility.
 There are three main inputs on current mobile devices : the keyboard, the stylus
on touch screens and the scroll wheel.
 The keyboard allows a user to hit a key to perform a task or navigate through
the mobile menu functionalities; the stylus with the touch screen allows a user
to hit the screen to do the task; the scroll wheel can be scrolled and pushed by
a user to do a task and also navigate through the menus and submenus.
 The design of keyboards for mobile devices has been a challenge because the
space for key installation on a mobile device is limited.
 Touch input would be problematic if the screen of a mobile device is small and
that would lead a user’s fingers to occlude the graphical elements he wishes to
work with.
 Scroll wheel can be used to navigate a mobile device menu in one direction,
either horizontally or vertically. It can also be used as a push button to do a
specific task to support the use of one hand to interact with the mobile device.
 Limited output facilities : The small-sized screen is the main and most
commonly used output facility for mobile devices. Designing the screen for
output is a trade-off challenge that needs to be studied experimentally to
find out which screen size is the most efficient and effective for the different
types of mobile devices.
Software Challenges :
 System of menus : Taking a successful design from a desktop and applying it to a
mobile device without a clear understanding of how inputs and outputs
translate can lead to an ineffective interaction design.
 The most widely used alternative is the use of hierarchical menus. With a
hierarchical menu, a user can select a menu item that can then open another
submenu, and so on until the user reaches the desired function he or she is
aiming to reach.
 Navigating and browsing : To display information that is well suited for larger
screens, the information has to be segmented into many small presentation
units that can fit into the small screen of mobile devices. And this makes it
difficult to effectively organize information and help users navigate to and from
the information they want.
 Images and icons : Graphical content described by raster and vector graphics
must be displayed on mobile devices in a way that allows appropriate and
resource-saving implementations.
4.5.2 Challenges Related to Mobile Devices
a) Small Screen
 Screen size is a serious limitation for mobile devices. The content displayed
above the fold on a 30-inch monitor requires five screenfuls on a small 4-inch
screen.
 Thus, mobile users must incur a higher interaction cost in order to access the
same amount of information;
 They must rely on short-term memory to refer to information that is not visible
on the screen.
 Users come to a site to find information that they need or to accomplish a task,
not to contemplate the beauty of buttons, navigation, menus, and other design
elements.
 Content is always of interest, but whereas on desktop there is enough screen
space for both content and chrome to coexist, often, on mobile, designers must
downplay the chrome to make space for essential content.
b) Single Windows
 Although some phone manufactures are trying to accommodate multiple
windows on the screen at the same time, the limited size of the mobile screen
makes that goal quite unpractical, even with today’s larger-screen phones.
 The vast majority of users only see a single window at a time; they cannot split
the screen and work with two different apps simultaneously.
 The single-window constraint means that the design should be self-sufficient : any
mobile task should be easy to complete in a single app or on a single website.
 Users should not have to leave an app or website to find information that the
app requires, but that it does not provide.
 If users must move information from one app to another, it’s likely that they will
need to
copy-and-paste it; the interaction will become more complex and error prone.
 Apps and websites should be self-sufficient and should not necessitate any
external props, be them physical or virtual.
c) Touchscreen
 Touchscreens come with many blessings. Gestures represent a hidden,
alternate User Interface (UI), that, when built with the right affordances can
make the interaction fluid and efficient and can save screen real estate.
 But they also suffer from low memorability and discoverability. On the other
hand, it’s hard to type proficiently on a tiny virtual keyboard and it’s easy to
accidentally touch the wrong target.
 Perhaps the biggest problem is related to typing : On a soft keyboard, users
need to continuously divide attention between the content that they are typing
and the keypad area.
 Touch typing is impossible in the absence of haptic feedback; plus, keypads
themselves are small and keys are crowded.
 Because on a touch screen there can be many target areas, it is easy to make
accidental touches. Some can leave the user disoriented and unsure of what
happened. Undo is one of the original 10 usability heuristics, and it is even
more important on touch devices.
University Questions

Q.1 List and explain the elements of mobile design. AU : April/May-17, Marks-16
Q.2 Discuss the various elements of mobile design with a step by step method. Explain how to
design a registration page for movie ticket booking. AU : April/May-18, Marks-16
Two Marks Questions with Answers

What is information architecture ?


Ans. : The organization of data within an informational space. In other words,
how the user will get to information or perform tasks within a website or
application.
Define an Interaction design.
Ans. : The design of how the user can participate with the information present,
either in a direct or indirect way, meaning how the user will interact with the
website or application to create a more meaningful experience and accomplish
the goals.
What is mobile widget ?
Ans. : A mobile widget is a component of a user interface that operates in a
particular way, giving fast and simple access to content and services such as
news, weather and games.
Write about clickstream.
Ans. : Clickstream is a term used for showing the behavior on websites,
displaying the order in which users travel through a site’s information
architecture, usually based on data gathered from server logs. Clickstreams are
usually historical, used to see the flaws in information architecture, typically
using heat-mapping or simple percentages to show where users are going.
They are a useful tool for re-architecting large websites.
What is mobile 2.0 ?
Ans. : Mobile 2.0 refers to a perceived next generation of mobile internet
services that leverage the social web, or what some call Web 2.0. The social
web includes social networking sites and wikis that emphasize collaboration
and sharing amongst users.
What are the two fundamental problems of mobile software ?
Ans. : The first is forcing users through a single marketplace. The second
problem is the ability to update your application.
What is typography ?
Ans. :Typography is the art and technique of arranging type to make written
language legible, readable, and appealing when displayed.
Highlight the importance of mobile applications. AU : April/May-18
Ans. : Today the mobile phone has become an integral part of so many individuals'
lives due to these apps, such that it could be said that they help towards
maintaining an organized life. The availability of apps for contacts, relevant
projects and events, personal information and future events on mobile phones
attests to this. These sets of information are fully stored on our phones and can
help us plan life, thus facilitating proper time management.
Give examples of mobile design tools. AU : April/May-17
Ans. :Examples of mobile design tools are Google maps, Shutterfly, Typeform,
flashlight etc.
Q.10 Identify the types of mobile platforms. AU : April/May-17
Ans. : Types of mobile platforms are licensed, proprietary and open source.

Q.11 Define color palette. AU : Nov/Dec-18
Ans. : Defining color palettes can be useful for maintaining a consistent use of color in mobile design. Color palettes typically consist of a predefined number of colors to use throughout the design. Selecting what colors to use varies from designer to designer, each having different techniques and strategies for deciding on the colors. Three basic ways to define a color palette are sequential, adaptive and inspired.
UNIT-V

5
Web Interface Design

Syllabus
Designing Web Interfaces - Drag & Drop, Direct Selection, Contextual Tools, Overlays, Inlays
and Virtual Pages, Process Flow - Case Studies.

Contents
Designing Web Interface
Contextual Tools
Overlays
Inlays
Virtual Pages
Process Flow
Two Marks Questions with Answers


5.1 Designing Web Interface


 Web design refers to the design of websites that are displayed on the internet.
It usually refers to the user experience aspects of website development rather
than software development.
 Two of the most common methods for designing websites that work well both
on desktop and mobile are responsive and adaptive design.
 In the context of websites, a web interface is the page that users interact with once a site has loaded in a web browser. A website is built from code, but users do not interact with that code directly; they interact with the rendered interface.

5.1.1 Drag and Drop


 Drag and Drop is a common feature in modern web design where users can click and grab an object and drag it to a different location. It is quite frequently used to order or reorder items in a list, or to arrange things on a page.
 In 2000, a small startup, HalfBrain, launched a web-based presentation
application, BrainMatter. It was written entirely in DHTML and used drag and
drop as an integral part of its interface.
 Drag and drop is a common action performed within a graphical user interface.
It involves moving the cursor over an object, selecting it, and moving it to a
new location.

Interesting Moments :
 Available events for cueing the user during drag and drop interactions are as
follows :
1. Page load : before any interaction occurs, user can pre-signify the
availability of drag and drop.
2. Mouse Hover : mouse pointer hovers over an object that is draggable.
3. Mouse down : user holds down the mouse button on the draggable object.
4. Drag initiated : after the mouse drag starts.
5. Drag leaves original location.
6. Drop accepted : drop occurs over a valid target and drop has been accepted.
7. Drop Rejected : drop occurs over an invalid target and drop has been rejected.
 Actors : During each event you can visually manipulate a number of actors. The page elements available include the page, cursor, tooltip, drag object, the drag object's parent container and the drop target.
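 The sketch below (TypeScript, standard HTML5 drag-and-drop DOM API) shows one way these interesting moments can be cued visually. The element ids ('card', 'trash') and the CSS class names are illustrative assumptions, not part of any particular toolkit :

const draggable = document.getElementById('card')!;
const dropZone = document.getElementById('trash')!;

draggable.draggable = true;                 // page load : pre-signify availability

// Mouse hover : invite the user to drag.
draggable.addEventListener('mouseenter', () => draggable.classList.add('can-drag'));
draggable.addEventListener('mouseleave', () => draggable.classList.remove('can-drag'));

// Drag initiated : style the drag object.
draggable.addEventListener('dragstart', (e: DragEvent) => {
  e.dataTransfer?.setData('text/plain', draggable.id);
  draggable.classList.add('dragging');
});

// The target must cancel dragover to be treated as a valid drop target.
dropZone.addEventListener('dragover', (e: DragEvent) => {
  e.preventDefault();
  dropZone.classList.add('drop-invite');    // visual invitation over a valid target
});
dropZone.addEventListener('dragleave', () => dropZone.classList.remove('drop-invite'));

// Drop accepted : act on the dropped object.
dropZone.addEventListener('drop', (e: DragEvent) => {
  e.preventDefault();
  const id = e.dataTransfer?.getData('text/plain');
  dropZone.classList.remove('drop-invite');
  console.log(`dropped ${id}`);
});

// Drag ended (accepted or rejected) : clean up the cues.
draggable.addEventListener('dragend', () => draggable.classList.remove('dragging'));

 Calling preventDefault() in the dragover handler is what marks the element as a valid drop target; omitting it is how a drop is rejected.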

 Purpose of Drag and Drop are as follows :
a. Drag and drop module : used for rearranging the modules on a page.
b. Drag and Drop list : rearranging the lists.
c. Drag and drop object : used for changing relationship between objects.
d. Drag and drop action : invoking actions on a dropped object.
e. Drag and drop collection : maintaining collections through drag and drop.

Drag and Drop Module :


 A module can be dragged from its current location on a page and dropped into another place.
 The drag-and-drop module supports animations both while sorting an element
inside a list, as well as animating it from the position that the user dropped it
to its final place in the list.
 The interaction typically passes through three visual states : (1) normal display style, (2) invitation to drag (shown on hover), and (3) dragging.

Drag and Drop Action :


 Drag and drop is also useful for invoking an action or actions on a dropped
object. The Drag and Drop Action is a common pattern.
 Its most familiar example is dropping an item in the trash to perform the delete
action. Normally uploading files to a web application includes pressing the
upload button and browsing for a photo. This process is repeated for each
photo.
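 As a hedged sketch of this pattern, the TypeScript below lets the user drop several photos onto a drop zone in a single gesture instead of pressing an upload button once per file; the element id and the '/upload' endpoint are assumptions made only for illustration :

const photoDropZone = document.getElementById('photo-drop-zone')!;

// Cancelling dragover is required so the browser allows the drop.
photoDropZone.addEventListener('dragover', (e: DragEvent) => e.preventDefault());

photoDropZone.addEventListener('drop', async (e: DragEvent) => {
  e.preventDefault();
  const files = e.dataTransfer ? Array.from(e.dataTransfer.files) : [];
  for (const file of files) {
    const body = new FormData();
    body.append('photo', file);
    // One request per dropped photo; a real application would also
    // show progress and handle errors.
    await fetch('/upload', { method: 'POST', body });
  }
});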

Drag & Drop Examples :


a) Simple Drag : This example shows a simple drag interaction that doesn't
require a drop interaction.
b) Drag Node Plugin : This example shows using the drag node plugin.
c) Proxy Drag : This example shows a simple proxy drag interaction that
doesn't require a drop interaction.
d) Drag Constrained to a Region : This example shows how to constrain a
draggable node to another node's region.
e) Interaction Groups : Using interaction groups, this example demonstrates
how to tie into the drag & drop utility's interesting moments to provide
visual affordances for the current drag operation.
f) Using the Drag Shim : This example shows the use of the drag shim when
dragging nodes over other troublesome nodes.
g) Animated Drop Targets : This example will show you how to make an
animated node a drop target.
h) Drop Based Coding : This example shows how to use the drop target
events to code your application.
i) Window Scrolling : Using the window scroll plugin.
j) Drag Delegation : Using DD.Delegate for multiple items.

k) Drag Delegation with a Drop Target : Using DD.Delegate for dragging
multiple items and dropping on a drop target.
l) Using Drag plugins with Delegate : Using drag plugins with delegate.
m) List reorder w/Bubbling : This example shows how to make a sortable list
using custom event bubbling.
n) List reorder w/Scrolling : This example shows how to make a sortable list
using Custom event bubbling and node scrolling.
o) Portal Style Example : Portal style example using drag & drop event
bubbling and animation.
p) Photo Browser : Example photo browser application.

5.1.2 Direct Selection


 Following are different types of selections :
1. Toggle selection : control-based selection
2. Collected selection : selection that spans multiple pages.
3. Object selection : direct object selection
4. Hybrid selection : combination of toggle and object selection.

Toggle Selection :
 A checkbox control has three states : unselected, selected, and indeterminate.
 The last state represents a situation where a list of sub-options is grouped
under a parent option and sub-options are in both selected and unselected
states.
 A toggle switch represents a physical switch that allows users to turn things on
or off, like a light switch.
 Tapping a toggle switch performs both selection and execution in a single action, whereas a checkbox only selects an option; its execution usually requires another control.
 When deciding between a checkbox and toggle switch control, it is better to
focus on the usage context instead of their function.
a) Use Toggle switch for Instant response
 An instant response of applied settings is required without an explicit action.
 A setting requires an on/off or show/hide function to display the results.
 User needs to perform instantaneous actions that do not need a review or
confirmation.


Fig. 5.1.1 : The options that require instant response are best selected using a toggle switch.

b) Use Checkbox for Settings confirmation


 Applied settings need to be confirmed and reviewed by user before they are
submitted.
 Defined settings require an action like Submit, OK, Next, Apply before displaying
results.
 User has to perform additional steps for changes to become effective.

Fig. 5.1.2 : Checkboxes are preferred when an explicit action is required to apply settings

c) Use Checkbox for Multiple choices


 Multiple options are available and user has to select one or more options from
them.
 Clicking multiple toggle switches one by one and waiting to see results
after each click takes extra time.


Fig. 5.1.3 : Selecting multiple options in a list provides better experience using checkboxes

d) Use checkbox for Indeterminate state


 An indeterminate selection state is required when multiple sub-options are grouped under a parent option. The indeterminate state represents that some sub-options (but not all of them) are selected in the list.

Fig. 5.1.4 : Indeterminate state is best shown using a checkbox.
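 A minimal sketch (TypeScript, browser DOM) of wiring up the indeterminate state is shown below : the parent checkbox reflects whether none, some or all of its sub-options are checked. The element id and the name attribute are illustrative assumptions :

const parentBox = document.getElementById('select-all') as HTMLInputElement;
const childBoxes = Array.from(
  document.querySelectorAll<HTMLInputElement>('input[name="option"]')
);

// Keep the parent checkbox in sync with its sub-options.
function syncParent(): void {
  const checked = childBoxes.filter(c => c.checked).length;
  parentBox.checked = checked === childBoxes.length;
  parentBox.indeterminate = checked > 0 && checked < childBoxes.length;
}

childBoxes.forEach(c => c.addEventListener('change', syncParent));

// The parent also acts as a master toggle for all sub-options.
parentBox.addEventListener('change', () => {
  childBoxes.forEach(c => (c.checked = parentBox.checked));
});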

Collected Selection :
 Collected selection is a pattern for keeping track of selection as it spans multiple pages.
 In Gmail, you can select items as you move from page to page. The selections are remembered for each page.
 If you select two items on page one, then move to page two and select three items, five items are selected in total.
 In hybrid approaches, toggle selection is used for selecting bookmarks for editing, deleting, etc., while object selection is used for initiating a drag and drop.
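 One simple way to implement collected selection is to key the selection by item id in a data structure that outlives any single page, and to restore the checkboxes whenever a page is rendered. The TypeScript sketch below is one possible approach; renderPage() and the 'message-list' element are illustrative assumptions :

const selectedIds = new Set<string>();

function toggleSelection(itemId: string, isChecked: boolean): void {
  if (isChecked) selectedIds.add(itemId);
  else selectedIds.delete(itemId);
}

// Called whenever the user moves to another page of items.
function renderPage(items: { id: string; label: string }[]): void {
  const list = document.getElementById('message-list')!;
  list.replaceChildren();
  for (const item of items) {
    const box = document.createElement('input');
    box.type = 'checkbox';
    box.checked = selectedIds.has(item.id);   // selection remembered across pages
    box.addEventListener('change', () => toggleSelection(item.id, box.checked));
    const row = document.createElement('label');
    row.append(box, ' ' + item.label);
    list.append(row);
  }
}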

5.2 Contextual Tools


 Contextual Tools enable the possibility to work with an object selected on the
page, such as a table, picture, or drawing. When the object is clicked, the
pertinent set of contextual tabs appear in an accent color next to the standard
tabs.
 Contextual menus are displayed on demand and contain a small set of relevant
actions, related to a control, a piece of content, a view in an app, or an area of
the UI. When designed right, they deliver relevant tools for completing tasks
without adding clutter to the interface.

Fitts’s Law :
 Fitts's Law is an ergonomic principle that ties the size of a target and its
contextual proximity to ease of use.
 Fitts stated that the size of the target object along with its distance from the
starting location could be directly measured, allowing him to model the ease at
which a person could perform the same action with a different target object.
 Fitts's law, in its simplest form, is common sense : the bigger an object and the closer it is to us, the easier it is to move to.
 Fitts's law is a model that can help designers make educated decisions in user interfaces and web page layouts. It can be used in conjunction with design theories such as visual weight to give user interface items proper hierarchy and placement.
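 In its common (Shannon) formulation, Fitts's law predicts the movement time MT needed to acquire a target as MT = a + b log2(D/W + 1), where D is the distance from the starting point to the center of the target, W is the width of the target along the axis of motion, and a and b are constants measured empirically for the pointing device. The logarithmic term is called the index of difficulty : making a target twice as wide, or half as far away, lowers the index of difficulty by about the same amount.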
 Contextual Tools are as follows :
1. Always visible tools : place contextual tools directly in the content. Digg’s
“digg it” button is a simple Contextual Tool that is always visible.
2. Hover Reveal tools : Show contextual tools on mouse hover. Fig. 5.2.1 and
5.2.2 shows normal state and invitation.


Fig. 5.2.1 : Normal state Fig. 5.2.2 : Invitation

 Normal state : the edit and delete tools are hidden in the normal state.
 Invitation : on mouse hover, the tools are revealed. Tools are cut into the grey
bar, drawing the eye to the change.
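 A minimal sketch (TypeScript, browser DOM) of this behavior is shown below : the row's tools stay hidden in the normal state and are revealed when the pointer enters the row. The class names are illustrative assumptions, and in practice the same effect is often achieved purely with CSS :

document.querySelectorAll<HTMLElement>('.list-row').forEach(row => {
  const tools = row.querySelector<HTMLElement>('.row-tools');
  if (!tools) return;

  tools.style.visibility = 'hidden';                 // normal state : tools hidden

  row.addEventListener('mouseenter', () => {
    tools.style.visibility = 'visible';              // invitation on hover
  });
  row.addEventListener('mouseleave', () => {
    tools.style.visibility = 'hidden';
  });
});

 Because touchscreens have no hover state, hover-revealed tools need an alternative invitation (for example, always-visible tools) on touch devices.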

Contextual Tools in an Overlay :


 Instead of placing tools beside the object being acted on, the revealed tools can be placed in an overlay. However, there can be issues with showing contextual tools in an overlay :
1. Providing an overlay feels heavier. An overlay creates a slight contextual
switch for the user's attention.
2. The overlay will usually cover other information - information that often
provides context for the tools being offered.
3. Most implementations shift the content slightly between the normal view
and the overlay view, causing the users to take a moment to adjust to the
change.
4. The overlay may get in the way of navigation. Because an overlay hides at
least part of the next item, it becomes harder to move the mouse through
the content without stepping into a “landmine”.


The remaining Contextual Tool patterns are as follows :
3. Toggle-reveal tools : a master switch to toggle on/off contextual tools for the page. Toggle a tool mode for an area or page when the actions are not part of the main flow; the tools needed are revealed on activation of the toggle.
4. Multi-level tools : progressively reveal actions based on user interaction.
5. Secondary menus : show a secondary menu, typically revealed by right-clicking on the object, with additional actions.
 Discoverability is a primary reason to choose always visible tools.

5.3 Overlays
 Overlays are really just lightweight pop-ups. We use the term lightweight to make a clear distinction between them and the normal idea of a browser pop-up.
 Browser pop-ups are created as new browser windows, whereas lightweight overlays are shown within the browser page itself.
 Older-style browser pop-ups are undesirable because they display a new browser window, these windows often take time and a sizeable chunk of system resources to create, and they often display browser interface controls.
 Due to security concerns, in Internet Explorer 7 the URL bar is a permanent fixture on any browser pop-up window.
 Types of overlay are dialog overlays, detail overlays and input overlays.
1. Dialog Overlay : Dialog overlays replace the old-style browser pop-ups. Netflix provides a clear example of a very simple dialog overlay. (A sketch follows this list.)
2. Detail Overlay : The second type of overlay is somewhat new to web
applications. The detail overlay allows an overlay to present additional
information when the user clicks or hovers over a link or section of content.
Toolkits now make it easier to create overlays

across different browsers and to request additional information from the
server without refreshing the page.
3. Input Overlay : Input overlay is a lightweight overlay that brings additional input information for each field tabbed into. American Express uses this technique in its registration for premium cards such as its gold card.
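 As a hedged sketch of a lightweight dialog overlay, the TypeScript below uses the browser's native dialog element instead of a browser pop-up window; showModal() dims the rest of the page, giving a simple Lightbox-style effect. The element ids are illustrative assumptions :

const dialog = document.getElementById('confirm-dialog') as HTMLDialogElement;
const openButton = document.getElementById('open-overlay')!;
const cancelButton = document.getElementById('cancel')!;

openButton.addEventListener('click', () => {
  dialog.showModal();          // modal overlay with a dimmed backdrop
});

cancelButton.addEventListener('click', () => {
  dialog.close('cancelled');   // closes the overlay without leaving the page
});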

5.4 Inlays
 Not every bit of additional control, information, or dialog with the user needs to be an overlay. Another approach is to inlay the information directly within the page itself.
 Types of Inlays are as follows :
1. Dialog Inlay : A simple technique is to expand a part of the page, revealing a
dialog area within the page.
2. List Inlay : Lists are a great place to use inlays. Instead of requiring the user to navigate to a new page for an item's detail or popping up the information in an overlay, the information can be shown with a List Inlay in context. (A sketch follows this list.)
3. Detail Inlay : A common idiom is to provide additional detail about items
shown on a page.
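 A minimal sketch (TypeScript, browser DOM) of a List Inlay is shown below : clicking a row toggles an in-page detail panel instead of navigating away or opening an overlay. The class names are illustrative assumptions :

document.querySelectorAll<HTMLElement>('.inbox-row').forEach(row => {
  row.addEventListener('click', () => {
    const detail = row.querySelector<HTMLElement>('.row-detail');
    if (detail) {
      detail.hidden = !detail.hidden;   // expand or collapse the detail in place
    }
  });
});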

5.5 Virtual Pages


 Patterns that support virtual pages include Virtual Scrolling, Inline Paging,
Scrolled Paging, Panning and Zoomable User Interface.
1. Virtual scrolling : Virtual scrolling offers different ways to manage the virtual space (a simplified sketch follows this list) :
a. Yahoo! Mail creates the illusion that all data has been loaded up-front
by having the scrollbar reflect the total virtual space.
b. Microsoft Live Search creates the virtual illusion as the user moves
down through the search results.
2. Inline Paging : What if instead of scrolling through content we just wanted
to make pagination feel less like a page switch ? By only switching the
content in and leaving the rest of the page stable, we can create an Inline
Paging experience. This is what Amazon's Endless.com site does with its
search results.
3. Virtual Panning : One way to create a virtual canvas is to allow users the
freedom to roam in two-dimensional space. A great place for virtual panning
is on a map.
4. Scrolled paging : Users can combine both scrolling and paging into scrolled paging. Paging is performed as normal, but the new content is scrolled into view.

5. Zooming User Interface (ZUI) is a graphical environment where users can
change the scale of the viewed area in order to see more detail or less, and
browse through different documents. A ZUI is a type of Graphical User
Interface (GUI).
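 The TypeScript sketch below is a simplified illustration of virtual scrolling in the style of (b) above : further results are fetched as the user nears the bottom of the list, so pagination never interrupts the flow. The '/search' endpoint, the element ids and fetchResults() are assumptions for illustration; a fuller implementation would also recycle off-screen rows and size the scrollbar to the total virtual space, as Yahoo! Mail does :

const resultsList = document.getElementById('results')!;
const sentinel = document.getElementById('load-more-sentinel')!;  // sits below the list
let nextPage = 1;

async function fetchResults(page: number): Promise<string[]> {
  const res = await fetch(`/search?page=${page}`);   // assumed endpoint
  return res.json();
}

// Load the next batch whenever the sentinel scrolls into view.
const observer = new IntersectionObserver(async entries => {
  if (!entries[0].isIntersecting) return;
  const items = await fetchResults(nextPage++);
  for (const text of items) {
    const li = document.createElement('li');
    li.textContent = text;
    resultsList.append(li);
  }
});

observer.observe(sentinel);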

5.6 Process Flow


 For some process flows it makes sense to keep the user on the same page
throughout the process.
a. Google Blogger : Google Blogger is a free publishing platform run by
Google. It is designed to be easy to use so writers can upload content to
their blogs via email, Google Plus and various apps and programs.
 Google Blogger, originally called Blogger, was acquired by Google in 2003. The
publishing service facilitates the creation of blogs, or informal online
discussion sites.
 Users can sign up for a maximum of 100 blogs per account and customize
those blogs with pre-designed templates. Content is hosted on Google’s
servers, and the service is available in 60 languages.
 Although its popularity in the United States has waned somewhat, Google
Blogger has a large international user base and is popular in countries such as
Indonesia, India and Brazil.
b. The magic principle: Alan Cooper discusses a wonderful technique for
getting away from a technology-driven approach and discovering the
underlying mental model of the user.
c. Interactive single page processor : Consumer products come in a variety
of shapes, sizes, textures, colors, etc.
 Online shoppers will not only have to decide that they want shoes, but do they
want blue suede shoes ? And what size and width do they want them in ?
 In the end the selection is constrained by the available inventory. As the user
makes decisions, the set of choices gets more and more limited. This type of
product selection is typically handled with a multi-page workflow.
 On one page, the user selects a shirt and its color and size. After submitting
the choice, a new page is displayed. Only when the user arrives at this second
page does he find out that the true navy shirt is not available in the medium
size. (A sketch of constraining choices on a single page follows this list.)
d. Configuration process : Sometimes a process flow is meant to evoke delight. In these cases, it is the engagement factor that becomes most important. This is true of the various configurator process interfaces on the web.
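 A minimal sketch (TypeScript, browser DOM) of keeping such a constrained selection on one interactive page : when the shopper picks a color, sizes that are not in stock for that color are disabled immediately rather than being discovered on a later page. The inventory data and element ids are illustrative assumptions :

const inventory: Record<string, string[]> = {
  'true-navy': ['S', 'L', 'XL'],          // no medium available in true navy
  'white': ['S', 'M', 'L', 'XL'],
};

const colorSelect = document.getElementById('color') as HTMLSelectElement;
const sizeSelect = document.getElementById('size') as HTMLSelectElement;

// Constrain the size choices as soon as a color is chosen.
colorSelect.addEventListener('change', () => {
  const available = inventory[colorSelect.value] ?? [];
  Array.from(sizeSelect.options).forEach(opt => {
    opt.disabled = !available.includes(opt.value);
  });
});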

Two Marks Questions with Answers
Q.1 What is the difference between Hover-Reveal tools and Toggle-Reveal tools ?
Ans. : Hover-Reveal tools show contextual tools on mouse hover, whereas Toggle-Reveal tools use a master switch to toggle contextual tools on or off for the whole page.

Q.2 What is meant by multi-level tools ?
Ans. : Multi-level tools progressively reveal actions based on user interaction.

Q.3 What is meant by radial menus ?
Ans. : In a radial menu, the menu items are arranged in a circle around the point where the menu is invoked. Experienced users can rely on muscle memory rather than having to look directly at the menu items, and the proximity and targeting size make the menu easy to navigate, since the revealed items are all equally close at hand.

Q.4 What is meant by dialog overlay ?
Ans. : A dialog overlay contains important information that the user should not ignore. Both the Netflix purchase dialog and the Flickr rotate dialog are good candidates for the Lightbox effect.
Q.5 What is meant by the Lightbox effect ?
Ans. : Lightbox is a JavaScript library that displays images and videos by filling the screen and dimming out the rest of the web page; the Lightbox effect refers to this dimming of the page around an overlay to focus attention on it.

Q.6 What is meant by detail overlay ?
Ans. : The detail overlay allows an overlay to present additional information when the user clicks or hovers over a link or a section of content. Toolkits now make it easier to create overlays across different browsers and to request additional information from the server without refreshing the page.

Q.7 What is meant by input overlay ?
Ans. : Input overlay is a lightweight overlay that brings additional input information for each field tabbed into. American Express uses this technique in its registration for premium cards such as its gold card.

Q.8 What is meant by detail inlay ?
Ans. : The detail inlay pattern shows additional detail about items in place; for example, a carousel of photos shown when the user clicks on a "View photos" link. A detail overlay can then be used to blow up a thumbnail when it is clicked.

Q.9 What is meant by virtual scrolling ?
Ans. : The virtual scrolling feature of a data grid can be used to display large amounts of records without paging. When scrolling with the vertical scrollbar, the data grid executes Ajax requests to load and refresh the records. The overall behavior is smooth and without flicker.

Q.10 Define object selection. AU : April/May-18
Ans. : Object selection is initiating a selection by clicking directly on the object itself. It is based on the "make it direct" principle.

Q.11 What is the purpose of drag and drop ? AU : April/May-18
Ans. : The purposes of drag and drop are as follows :
 Drag and drop module : used for rearranging the modules on a page.
 Drag and drop list : rearranging the lists.
 Drag and drop object : used for changing the relationship between objects.
 Drag and drop action : invoking actions on a dropped object.
 Drag and drop collection : maintaining collections through drag and drop.

Q.12 What are the types of overlays ? AU : April/May-17
Ans. : The types of overlay are dialog overlays, detail overlays and input overlays.
Q.13 What do you mean by inlay ? AU : Nov/Dec-18
Ans. : Not every bit of additional control, information, or dialog with the user needs to be an overlay. Another approach is to inlay the information directly within the page itself. To distinguish them from pop-up overlays, we call these in-page panels inlays.

Q.14 Differentiate between modal and non-modal overlays. AU : Nov/Dec-17
Ans. : A modal dialog is a dialog that appears on top of the main content and moves the system into a special mode requiring user interaction. In contrast, non-modal (or modeless) dialogs and windows do not disable the main content : showing the dialog box does not change the functionality of the user interface.


