CS8079 - Hci QB Unit1

This document is a question bank for the course CS8079 – Human Computer Interaction (Unit 1). It contains 22 two-mark questions on topics including input/output channels, memory, reasoning, interaction models, interface design goals, and interface styles such as the command line and WIMP, together with longer-answer questions on concepts such as ergonomics and mental models.


CS8079 – HUMAN COMPUTER INTERACTION VII Semester CSE

DHANALAKSHMI COLLEGE OF ENGINEERING


Tambaram, Chennai

Dept. of Computer Science and Engineering

CS8079 – Human Computer Interaction


Year / Sem : IV / VII
2 Marks Q & A

Dept. of CSE Dhanalakshmi College of Engineering 1



UNIT – I
FOUNDATIONS OF HCI

The Human: I/O channels – Memory – Reasoning and problem solving; The computer: Devices –
Memory – processing and networks; Interaction: Models – frameworks – Ergonomics – styles –
elements – interactivity – Paradigms.

PART – A
1. What is meant by Human-computer interaction?
Human-computer interaction is the study, planning and design of how people and computers work together so
that a person’s needs are satisfied in the most effective way.

2. List the input and output channels?


Input and output channels
1) Visual channel
2) Auditory channel
3) Haptic channel – touch and movement

3. Where is information stored in memory?


Information is stored in the following types of memory
1) Sensory memory
2) Short-term (working) memory
3) Long-term memory

4. What are the capabilities and limitations of visual processing?


Visual processing compensates for movement and for changes in luminance, and its interpretative capabilities
allow complete images to be constructed from incomplete information; our expectations also affect the way an
image is perceived. However, these same capabilities can mislead us, as the many well-known optical illusions show.

5. What is long-term memory?


Long-term memory stores factual information, experiential knowledge and procedural rules of behaviour.
First, it has a huge, if not unlimited, capacity. Secondly, it has a relatively slow access time of
approximately a tenth of a second. Thirdly, forgetting occurs more slowly.

6. What is short term memory?


Short-term memory or working memory acts as a ‘scratch-pad’ for temporary recall of information. It is
used to store information which is only required fleetingly. Short-term memory can be accessed rapidly, in
the order of 70 ms. However, it also decays rapidly, meaning that information can only be held there
temporarily, in the order of 200 ms.


7. Define – Reasoning. (A/M−17)


Reasoning is the process by which we use the knowledge we have to draw conclusions or infer something
new about the domain of interest.

8. List the types of reasoning? (A/M−17)


Types of reasoning
1) Deductive reasoning
2) Inductive reasoning
3) Abductive reasoning

9. Define – Gestalt theory


Gestalt theory holds that problem solving involves both the reuse of knowledge and insight, in which the
problem is productively restructured. It is attractive in terms of its description of human problem solving,
but it does not provide sufficient evidence or structure to support its theories.

10. List the POSITIONING, POINTING and DRAWING devices?


Positioning, Pointing and Drawing Devices
1) Mouse
2) Touchpad
3) Trackball and thumbwheel
4) Joystick and keyboard nipple
5) Touch-sensitive screens

11. What are the display devices?


Display devices
1) Bitmap displays – resolution and color
2) Liquid crystal display
3) Special displays
4) Virtual reality helmets
5) Whole-body tracking

12. What are the stages of execution and evaluation cycle / Norman’s model of interaction?
Stages for Norman’s model of interaction
1) Establishing the goal.
2) Forming the intention.
3) Specifying the action sequence.
4) Executing the action.
5) Perceiving the system state.
6) Interpreting the system state.
7) Evaluating the system state with respect to the goals and intentions.

13. What are the goals of interface design?



The goals of interface design


1) Reduce visual work.
2) Reduce intellectual work.
3) Reduce memory work.
4) Reduce motor work.
5) Minimize or eliminate any burdens

14. What are the several factors that can limit the speed of an interactive system? (A/M−18)
Factors that can limit the speed of an interactive system
1) Computation bound
2) Storage channel bound
3) Graphics bound
4) Network capacity

15. What is ergonomics? (A/M−17)


Ergonomics (or human factors) is traditionally the study of the physical characteristics of the interaction:
how the controls are designed, the physical environment in which the interaction takes place, and the layout
and physical qualities of the screen.

16. What are the mental models and why they important in interface design? (A/M−17)
Mental models are one of the most important concepts in human–computer interaction (HCI). A prime goal for
designers is to make the user interface communicate the system's basic nature well enough that users form
reasonably accurate (and thus useful) mental models. Individual users each have their own mental model.

17. What are text entry devices? (A/M−17)


Text entry device is an interface that is used to enter text information into an electronic device. Most laptop
computers have an integrated mechanical keyboard, and desktop computers are usually operated primarily
using a keyboard and mouse. Devices such as smartphones and tablets mean that interfaces such as virtual
keyboards and voice recognition are becoming more popular as text entry systems.

18. What is Command line interface?


The command line interface was the first interactive dialog style to be commonly used. It provides a means
of expressing instructions to the computer directly, using function keys, single characters, abbreviations or
whole-word commands.


19. What are the Advantages of Command line interface?


Command line interfaces are powerful in that they offer direct access to system functionality and can be
combined to apply a number of tools to the same data. They are also flexible: the command often has a
number of options or parameters that will vary its behavior in some way.
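To illustrate how options and parameters vary a command's behaviour, here is a small Python sketch of a hypothetical 'wordcount' command built with the standard argparse module (the command name and its flags are invented for illustration, not a real tool):

```python
import argparse

def build_parser():
    # A hypothetical 'wordcount' command: the same tool behaves
    # differently depending on which options are supplied.
    parser = argparse.ArgumentParser(prog="wordcount")
    parser.add_argument("text", help="text to analyse")
    parser.add_argument("-l", "--lines", action="store_true",
                        help="count lines instead of words")
    parser.add_argument("-u", "--unique", action="store_true",
                        help="count distinct words only")
    return parser

def run(argv):
    args = build_parser().parse_args(argv)
    if args.lines:
        return len(args.text.splitlines())
    words = args.text.split()
    return len(set(words)) if args.unique else len(words)

print(run(["the cat sat on the mat"]))        # 6 (all words)
print(run(["-u", "the cat sat on the mat"]))  # 5 (distinct words)
```

The same data ("the cat sat on the mat") is processed differently depending on the option supplied, which is exactly the flexibility described above.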

20. What are the disadvantages of Command line interface?


Commands must be remembered, as no clue is provided in the command line to indicate which command is
needed. The commands used should be terms within the vocabulary of the user rather than the technician.

21. What is WIMP interface?


WIMP stands for windows, icons, menus and pointer. It is the default interface style for the majority of
interactive computer systems in use today, especially in the PC and desktop workstation arena.

22. List the elements of WIMP interface.


Elements of WIMP interface.
1) Windows
2) Icons
3) Pointers
4) Menus
5) Buttons
6) Toolbars
7) Palettes
8) Dialog boxes


Part - B
1. Explain in detail about I/O channels?
A person’s interaction with the outside world occurs through information being received and sent: input and
output. In an interaction with a computer the user receives information that is output by the computer, and
responds by providing input to the computer – the user’s output becomes the computer’s input and vice versa.
Consequently, the use of the terms input and output may lead to confusion, so we shall blur the distinction
somewhat and concentrate on the channels involved. This blurring is appropriate since, although a particular
channel may have a primary role as input or output in the interaction, it is more than likely that it is also
used in the other role.
For example, sight may be used primarily in receiving information from the computer, but it can also be used
to provide information to the computer.

Vision
 Human vision is a highly complex activity with a range of physical and perceptual limitations, yet it is
the primary source of information for the average person.
 We can roughly divide visual perception into two stages: the physical reception of the stimulus from
the outside world, and the processing and interpretation of that stimulus.
 On the one hand, the physical properties of the eye and the visual system limit what can be perceived;
on the other, the interpretative capabilities of visual processing allow images to be constructed from
incomplete information.

The human eye


 Vision begins with light. The eye is a mechanism for receiving light and transforming it into electrical
energy.
 Light is reflected from objects in the world and their image is focussed upside down on the back of the
eye. The receptors in the eye transform it into electrical signals which are passed to the brain.

 The eye has a number of important components (see Figure 1.1) which we will look at in more detail.
 The cornea and lens at the front of the eye focus the light into a sharp image on the back of the eye, the
retina. The retina is light sensitive and contains two types of photoreceptor: rods and cones.


 Rods are highly sensitive to light and therefore allow us to see under a low level of illumination.
However, they are unable to resolve fine detail and are subject to light saturation.
This is the reason for the temporary blindness we get when moving from a darkened room into sunlight: the
rods have been active and are saturated by the sudden light.

 The cones do not operate either as they are suppressed by the rods. We are therefore temporarily unable
to see at all.

Visual perception
 Understanding the basic construction of the eye goes some way to explaining the physical mechanisms
of vision but visual perception is more than this.
 The information received by the visual apparatus must be filtered and passed to processing elements
which allow us to recognize coherent scenes, disambiguate relative distances and differentiate color.

Perceiving size and depth


 Imagine you are standing on a hilltop. Beside you on the summit you can see rocks, sheep and a small
tree.
On the hillside is a farmhouse with outbuildings and farm vehicles. Someone is on the track, walking
toward the summit. Below in the valley is a small market town.

Perceiving brightness

 A second aspect of visual perception is the perception of brightness. Brightness is in fact a subjective
reaction to levels of light.
 It is affected by luminance which is the amount of light emitted by an object. The luminance of an
object is dependent on the amount of light falling on the object’s surface and its reflective properties.
 Luminance is a physical characteristic and can be measured using a photometer. Contrast is related to
luminance: it is a function of the luminance of an object and the luminance of its background.
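As an illustration, one common way to express this relationship is the ratio of the luminance difference to the background luminance (this is one possible definition of contrast, not the only one; Michelson contrast, for example, is defined differently):

```python
def contrast(object_luminance, background_luminance):
    """Luminance contrast: (L_object - L_background) / L_background.

    Positive for objects brighter than their background, negative for
    darker objects. One common definition among several.
    """
    return (object_luminance - background_luminance) / background_luminance

# A 150 cd/m^2 character on a 50 cd/m^2 background:
c = contrast(150.0, 50.0)  # -> 2.0
```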

Perceiving color
 A third factor that we need to consider is perception of color. Color is usually regarded as being made
up of three components: hue, intensity and saturation.
 Hue is determined by the spectral wavelength of the light. Blues have short wavelengths, greens
medium and reds long.
 Approximately 150 different hues can be discriminated by the average person. Intensity is the
brightness of the color, and saturation is the amount of whiteness in the color.
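These three components can be explored with Python's standard colorsys module, which converts between RGB and hue–saturation–value (HSV) coordinates; note that 'value' here corresponds roughly to the intensity described above, and lower saturation corresponds to more whiteness:

```python
import colorsys

# RGB components are in [0, 1]; rgb_to_hsv returns (hue, saturation, value).
pure_red = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)  # -> (0.0, 1.0, 1.0)
pink = colorsys.rgb_to_hsv(1.0, 0.5, 0.5)      # same hue, more whiteness -> (0.0, 0.5, 1.0)
dark_red = colorsys.rgb_to_hsv(0.5, 0.0, 0.0)  # same hue, lower intensity -> (0.0, 1.0, 0.5)
```

Pink and dark red share the red hue (0.0) but differ from pure red in saturation and value respectively.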

The capabilities and limitations of visual processing

In considering the way in which we perceive images we have already encountered some of the capabilities
and limitations of the human visual processing system.


 However, we have concentrated largely on low-level perception. Visual processing involves the
transformation and interpretation of a complete image, from the light that is thrown onto the retina. As
we have already noted, our expectations affect the way an image is perceived.

Reading
 So far, we have concentrated on the perception of images in general. However, the perception and
processing of text is a special case that is important to interface design, which invariably requires some
textual display. We will therefore end this section by looking at reading.
 Adults read approximately 250 words a minute. It is unlikely that words are scanned serially, character
by character, since experiments have shown that words can be recognized as quickly as single
characters. Instead, familiar words are recognized using word shape.
 This means that removing the word shape clues (for example, by capitalizing words) is detrimental to
reading speed and accuracy.

Hearing
 The sense of hearing is often considered secondary to sight, but we tend to underestimate the amount
of information that we receive through our ears. Close your eyes for a moment and listen.
What sounds can you hear? Where are they coming from? What is making them? As I sit at my desk I can
hear cars passing on the road outside, machinery working on a site nearby, the drone of a plane overhead and
bird song.
 But I can also tell where the sounds are coming from, and estimate how far away they are. So from the
sounds I hear I can tell that a car is passing on a particular road near my house, and which direction it is
traveling in.
 I know that building work is in progress in a particular location, and that a certain type of bird is
perched in the tree in my garden.

The human ear


 Just as vision begins with light, hearing begins with vibrations in the air or sound waves. The ear
receives these vibrations and transmits them, through various stages to the auditory nerves.
 The ear comprises three sections, commonly known as the outer ear, middle ear and inner ear. The outer
ear is the visible part of the ear. It has two parts: the pinna, which is the structure that is attached to the
sides of the head, and the auditory canal, along which sound waves are passed to the middle ear. The
outer ear serves two purposes.
 First, it protects the sensitive middle ear from damage. The auditory canal contains wax which prevents
dust, dirt and over-inquisitive insects reaching the middle ear. It also maintains the middle ear at a
constant temperature.
 Secondly, the pinna and auditory canal serve to amplify some sounds.

Processing sound


 As we have seen, sound is changes or vibrations in air pressure. It has a number of characteristics
which we can differentiate. Pitch is the frequency of the sound.
 A low frequency produces a low pitch, a high frequency, a high pitch. Loudness is proportional to the
amplitude of the sound; the frequency remains constant.
 Timbre relates to the type of the sound: sounds may have the same pitch and loudness but be made by
different instruments and so vary in timbre. We can also identify a sound’s location.
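These characteristics can be illustrated with a short Python sketch that samples a sine wave: the frequency sets the pitch and the amplitude sets the loudness (an idealized model for illustration only; real sounds have richer structure, which is where timbre comes from):

```python
import math

def tone(freq_hz, amplitude, sample_rate=8000, duration_s=0.01):
    """Sample a pure sine wave: frequency gives pitch, amplitude gives loudness."""
    n = int(sample_rate * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / sample_rate)
            for t in range(n)]

low_pitch = tone(220.0, 1.0)   # low frequency  -> low pitch
high_pitch = tone(880.0, 1.0)  # high frequency -> high pitch
quiet = tone(440.0, 0.2)       # small amplitude -> quiet
loud = tone(440.0, 0.8)        # large amplitude -> loud, same pitch
```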

Touch
 The third and last of the senses that we will consider is touch or haptic perception. Although this sense
is often viewed as less important than sight or hearing, imagine life without it.
 Touch provides us with vital information about our environment. It tells us when we touch something
hot or cold, and can therefore act as a warning.
 It also provides us with feedback when we attempt to lift an object, for example.
 Consider the act of picking up a glass of water. If we could only see the glass and not feel when our
hand made contact with it or feel its shape, the speed and accuracy of the action would be reduced.

Movement
 Before leaving this section on the human’s input–output channels, we need to consider motor control
and how the way we move affects our interaction with computers.
 A simple action such as hitting a button in response to a question involves a number of processing
stages.
 The stimulus (of the question) is received through the sensory receptors and transmitted to the brain.
The question is processed and a valid response generated.

2. Write in detail about ergonomics?


Ergonomics (or human factors) is traditionally the study of the physical characteristics of the interaction: how
the controls are designed, the physical environment in which the interaction takes place, and the layout and
physical qualities of the screen.
A primary focus is on user performance and how the interface enhances or detracts from this.

Arrangement of controls and displays


The exact organization that this will suggest will depend on the domain and the application, but possible
organizations include the following:

• Functional – controls and displays are organized so that those that are functionally related are placed
together;
• Sequential – controls and displays are organized to reflect the order of their use in a typical interaction
(this may be especially appropriate in domains where a particular task sequence is enforced, such as
aviation);


• Frequency – controls and displays are organized according to how frequently they are used, with the most
commonly used controls being the most easily accessible.
The physical environment of the interaction
As well as addressing physical issues in the layout and arrangement of the machine interface, ergonomics is
concerned with the design of the work environment itself. This will depend largely on the domain and will be more critical in
specific control and operational settings than in general computer use. The physical environment in which the
system is used may influence how well it is accepted and even the health and safety of its users. It should
therefore be considered in all design. The first consideration here is the size of the users. Obviously this is
going to vary considerably. All users should be comfortably able to see critical displays. For long periods of
use, the user should be seated for comfort and stability. Seating should provide back support. If required to
stand, the user should have room to move around in order to reach all the controls.

Health issues
There are a number of factors that may affect the use of more general computers. Again, these are factors in the
physical environment that directly affect the quality of the interaction and the user's performance:
Users should be able to reach all controls comfortably and see all displays. Users should not be expected to
stand for long periods and, if sitting, should be provided with back support. If a particular position for a part of
the body is to be adopted for long periods (for example, in typing) support should be provided to allow rest.

Temperature
Extremes of hot or cold will affect performance and, in excessive cases, health. Experimental studies show that
performance deteriorates at high or low temperatures, with users being unable to concentrate efficiently.

Lighting
The lighting level will depend on the work environment. Adequate lighting should be provided to allow users
to see the computer screen without discomfort or eyestrain. The light source should also be positioned to
avoid glare affecting the display.

Noise
Excessive noise can be harmful to health, causing the user pain and, in acute cases, loss of hearing. Noise
levels should be maintained at a comfortable level in the work environment. This does not necessarily mean
no noise at all: noise can be a stimulus to users and can provide needed confirmation of system activity.

Time
The time users spend using the system should also be controlled. It has been suggested that excessive use
of CRT displays can be harmful to users, particularly pregnant women.

The use of color


Colors used in the display should be as distinct as possible and the distinction should not be affected by
changes in contrast. Blue should not be used to display critical information. If color is used as an indicator it
should not be the only cue: additional coding information should be included.


The colors used should also correspond to common conventions and user expectations. Red, green and yellow
are colors frequently associated with stop, go and standby respectively. Therefore, red may be used to indicate
emergency and alarms; green, normal activity; and yellow, standby and auxiliary function. These conventions
should not be violated without very good cause.

3. Explain execution–evaluation cycle.


The interactive cycle can be divided into two major phases: execution and evaluation. These can then be
subdivided into further stages, seven in all. The stages in Norman's model of interaction are as follows:
1) Establishing the goal.

2) Forming the intention.

3) Specifying the action sequence.

4) Executing the action.

5) Perceiving the system state.

6) Interpreting the system state.

7) Evaluating the system state with respect to the goals and intentions.

The goal is liable to be imprecise and therefore needs to be translated into the more specific intention, and the
actual actions that will reach the goal, before it can be executed by the user. The user perceives the new state
of the system, after execution of the action sequence, and interprets it in terms of his expectations. If the
system state reflects the user's goal then the computer has done what he wanted and the interaction has been
successful; otherwise the user must formulate a new goal and repeat the cycle.
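The seven stages can be sketched in code. The following Python sketch models a toy 'switch on the light' interaction (the Lamp class and the function names are ours, chosen purely for illustration; they are not part of Norman's model):

```python
class Lamp:
    """A minimal 'System' whose state the user can change and observe."""
    def __init__(self):
        self.on = False

    def execute(self, action):
        if action == "press switch":
            self.on = not self.on

    def display(self):
        return "light on" if self.on else "light off"

def interaction_cycle(goal, system):
    """One pass through the seven stages, for the toy lamp system.

    Stage 1 (establishing the goal) happens when the caller chooses `goal`.
    """
    intention = "change the lamp's state"  # 2. forming the intention
    actions = ["press switch"]             # 3. specifying the action sequence
    for action in actions:                 # 4. executing the action
        system.execute(action)
    perceived = system.display()           # 5. perceiving the system state
    interpretation = perceived             # 6. interpreting the system state
    return interpretation == goal          # 7. evaluating against the goal

lamp = Lamp()
success = interaction_cycle("light on", lamp)
```

If the evaluation fails (the interpretation does not match the goal), the user would formulate a new goal and run the cycle again.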
Norman uses this model of interaction to demonstrate why some interfaces cause problems to their users. He
describes these in terms of the gulfs of execution and the gulfs of evaluation. As we noted earlier, the user and
the system do not use the same terms to describe the domain and goals – remember that we called the language
of the system the core language and the language of the user the task language. The gulf of execution is the
difference between the user's formulation of the actions to reach the goal and the actions allowed by the
system. If the actions allowed by the system correspond to those intended by the user, the interaction will be
effective. The interface should therefore aim to reduce this gulf. The gulf of evaluation is the distance between
the physical presentation of the system state and the expectation of the user. If the user can readily evaluate the
presentation in terms of his goal, the gulf of evaluation is small. The more effort that is required on the part of
the user to interpret the presentation, the less effective the interaction.

4. Explain Interaction framework in detail.


The interaction framework attempts a more realistic description of interaction by including the system
explicitly, and breaks it into four main components. The nodes represent the four major components in an
interactive system – the System, the User, the Input and the Output. Each component has its own language.
In addition to the User's task language and the System's core language, which we have already introduced,
there are languages for both the Input and Output components. Input and Output together form the Interface.

The general interaction framework

Translations between components

The System then transforms itself as described by the operations; the execution phase of the cycle is
complete and the evaluation phase now begins. The System is in a new state, which must now be
communicated to the User. The current values of system attributes are rendered as concepts or features of the
Output. It is then up to the User to observe the Output and assess the results of the interaction relative to the
original goal, ending the evaluation phase and, hence, the interactive cycle. There are four main translations
involved in the interaction:
articulation, performance, presentation and observation.
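These four translations can be sketched as a chain of mappings between the four languages. In the following Python sketch the dictionaries stand in for the task, input, core and output languages (all vocabulary entries are invented for illustration):

```python
# Toy vocabularies: each dictionary translates between two of the four languages.
articulation = {"save my work": "press Ctrl+S"}             # task -> input language
performance = {"press Ctrl+S": "write buffer to disk"}      # input -> core language
presentation = {"write buffer to disk": "status bar shows 'Saved'"}  # core -> output
observation = {"status bar shows 'Saved'": "my work is saved"}       # output -> task

def interact(goal):
    """Follow one goal around the interactive cycle via the four translations."""
    stimulus = articulation[goal]        # the User articulates the task
    operation = performance[stimulus]    # the Input is performed by the System
    rendering = presentation[operation]  # the System state is presented as Output
    return observation[rendering]        # the User observes the Output

result = interact("save my work")
```

The interaction succeeds when what the user observes at the end of the chain matches the original goal.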

Assessing overall interaction


The interaction framework is presented as a means to judge the overall usability of an entire interactive
system. In practice, however, any such analysis depends on the particular task being performed. This is not
surprising, since it is only in attempting to perform a particular task within some domain
that we are able to determine if the tools we use are adequate. For a particular editing task, one can choose
the text editor best suited for interaction relative to the task. The best editor, if we are forced to choose only
one, is the one that best suits the tasks most frequently performed. Therefore, it is not too disappointing that
we cannot extend the interaction analysis beyond the scope of a particular task.

5. Describe in details about WIMP interface.


WIMP stands for windows, icons, menus and pointers (sometimes windows, icons, mice and pull-down
menus), and is the default interface style for the majority of interactive computer systems in use today,

especially in the PC and desktop workstation arena. Examples of WIMP interfaces include Microsoft
Windows for IBM PC compatibles, MacOS for Apple Macintosh compatibles and various X Windows-based
systems for UNIX.

A typical UNIX windowing system – the OpenLook system (source: Sun Microsystems, Inc.).

Point-and-click interfaces
This point-and-click interface style is obviously closely related to the WIMP style. It clearly overlaps in the
use of buttons, but may also include other WIMP elements. The philosophy is simpler and more closely tied
to ideas of hypertext. In addition, the point-and-click style is not tied to mouse-based interfaces, and is also
extensively used in touch screen information systems. In this case, it is often combined with a menu-driven
interface. The point-and-click style has been popularized by World Wide Web pages, which incorporate all
the above types of point-and-click navigation: highlighted words, maps and iconic buttons.

Three-dimensional interfaces
There is an increasing use of three-dimensional effects in user interfaces. The most obvious example is virtual
reality, but VR is only part of a range of 3D techniques available to the interface designer. The simplest
technique is where ordinary WIMP elements, buttons, scroll bars, etc., are given a 3D appearance using
shading, giving the appearance of being sculpted out of stone. By unstated convention, such interfaces have a
light source at their top right. Where used judiciously, the raised areas are easily identifiable and can be used to


highlight active areas. Some interfaces make indiscriminate use of sculptural effects, on every text area, border
and menu, so all sense of differentiation is lost.

6. Write short notes on World Wide Web (WWW).


WWW or "Web" is a global information medium which users can read and write via computers connected to
the Internet. The term is often mistakenly used as a synonym for the Internet itself, but the Web is a service
that operates over the Internet, just as e-mail also does. The history of the Internet dates back significantly
further than that of the World Wide Web.
The internet is simply a collection of computers, each linked by any sort of data connection, whether it be a
slow telephone line and modem or a high-bandwidth optical connection. The computers of the internet all
communicate using common data transmission protocols (TCP/IP) and addressing systems (IP addresses and
domain names). This makes it possible for anyone to read anything from anywhere, in theory, if it conforms to
the protocol. The web builds on this with its own layer of network protocol (http), a standard markup notation
(such as HTML) for laying out pages of information and a global naming scheme (uniform resource locators or
URLs). Web pages can contain text, color images, movies, sound and, most important, hypertext links to other
web pages. Hypermedia documents can therefore be published by anyone who has access to a computer
connected to the internet.
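The structure of a URL can be inspected with Python's standard urllib.parse module, illustrating the global naming scheme (the address below is a made-up example):

```python
from urllib.parse import urlparse

url = "http://www.example.com/docs/page.html#section2"
parts = urlparse(url)

print(parts.scheme)    # 'http' - the protocol used to fetch the page
print(parts.netloc)    # 'www.example.com' - the host's domain name
print(parts.path)      # '/docs/page.html' - the page on that host
print(parts.fragment)  # 'section2' - a location within the page
```

Together these components let any page on any connected computer be named and retrieved from anywhere.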

Part – C
1. Describe in detail about memory?

Memory
Have you ever played the memory game? The idea is that each player has to recount a list of objects and add
one more to the end.
 There are many variations but the objects are all loosely related: ‘I went to the market and bought a
lemon, some oranges, bacon . . .’ or ‘I went to the zoo and saw monkeys, and lions, and tigers . . .’
 As the list grows objects are missed out or recalled in the wrong order and so people are eliminated from
the game.
 Memory allows us to repeat actions, to use language, and to use new information received via our senses. It
also gives us our sense of identity, by preserving information from our past experiences.
 In what follows we consider how memory is structured and the activities that take place within the memory
system. It is generally agreed that there are three types of memory or memory function: sensory buffers,
short-term (working) memory, and long-term memory.
 There is some disagreement as to whether these are three separate systems or different functions of the
same system. We will not concern ourselves here with the details of this debate, which is discussed in
detail by Baddeley, but will indicate the evidence used by both sides as we go along.


 For our purposes, it is sufficient to note three separate types of memory.

Sensory memory
 The sensory memories act as buffers for stimuli received through the senses. A sensory memory exists for
each sensory channel: iconic memory for visual stimuli, echoic memory for aural stimuli and haptic
memory for touch.
 These memories are constantly overwritten by new information coming in on these channels. We can
demonstrate the existence of iconic memory by moving a finger in front of the eye.

Short-term memory

 Short-term memory or working memory acts as a ‘scratch-pad’ for temporary recall of information.
 It is used to store information which is only required fleetingly. For example, calculate the multiplication
35 × 6 in your head. The chances are that you will have done this calculation in stages, perhaps 5 × 6 and
then 30 × 6 and added the results; or you may have used the fact that 6 = 2 × 3 and calculated 2 × 35 = 70
followed by 3 × 70.
 To perform calculations such as this we need to store the intermediate stages for use later. Or consider
reading: in order to comprehend this sentence you need to hold in your mind its beginning as you read the
rest. Both of these tasks use short-term memory.


Long-term memory
If short-term memory is our working memory or ‘scratch-pad’, long-term memory is our main resource. Here
we store factual information, experiential knowledge, procedural rules of behavior – in fact, everything that
we ‘know’. It differs from short-term memory in a number of significant ways.
 First, it has a huge, if not unlimited, capacity. Secondly, it has a relatively slow access time of
approximately a tenth of a second. Thirdly, forgetting occurs more slowly in long term memory, if at
all.

Long-term memory structure

 There are two types of long-term memory: episodic memory and semantic memory. Episodic memory
represents our memory of events and experiences in a serial form.
 It is from this memory that we can reconstruct the actual events that took place at a given point in our
lives. Semantic memory, on the other hand, is a structured record of facts, concepts and skills that we
have acquired. The information in semantic memory is derived from that in our episodic memory, such
that we can learn new facts or concepts from our experiences.
Long-term memory processes
 So much for the structure of memory, but what about the processes which it uses? There are three main
activities related to long-term memory: storage or remembering of information, forgetting and
information retrieval.
 We shall consider each of these in turn. First, how does information get into long-term memory and
how can we improve this process? Information from short-term memory is stored in long-term memory
by rehearsal. The repeated exposure to a stimulus or the rehearsal of a piece of information transfers it
into long-term memory.

2. Explain in detail about Reasoning and problem solving?


We have considered how information finds its way into and out of the human system and how it is stored.
Finally, we come to look at how it is processed and manipulated.
This is perhaps the area which is most complex and which separates humans from other information-
processing systems, both artificial and natural. Although it is clear that animals receive and store
information, there is little evidence to suggest that they can use it in quite the same way as humans.
Similarly, artificial intelligence has produced machines which can see (albeit in a limited way) and store
information, but their ability to use that information is limited to small domains.

Reasoning

 Reasoning is the process by which we use the knowledge we have to draw conclusions or infer
something new about the domain of interest.

 There are a number of different types of reasoning: deductive, inductive and abductive. We use each of
these types of reasoning in everyday life, but they differ in significant ways.

Deductive reasoning
 Deductive reasoning derives the logically necessary conclusion from the given premises. For example:
If it is Friday then she will go to work.
It is Friday.
Therefore she will go to work.
 It is important to note that this is the logical conclusion from the premises; it does not necessarily have
to correspond to our notion of truth. For example:
If it is raining then the ground is dry.
It is raining.
Therefore the ground is dry.
This is a perfectly valid deduction, even though the conclusion conflicts with our knowledge of the world.
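The deductive pattern above (modus ponens: from "if P then Q" and "P", conclude "Q") can be sketched mechanically. The representation below, rules as pairs and facts as a set, is an illustrative assumption, not something from the text:

```python
# Modus ponens: apply every rule (premise, conclusion) whose premise
# is a known fact, collecting the conclusions.

def modus_ponens(rules, facts):
    """Return the facts derivable by one pass over the rules."""
    derived = set(facts)
    for premise, conclusion in rules:
        if premise in derived:
            derived.add(conclusion)
    return derived

rules = [("it is Friday", "she will go to work")]
facts = {"it is Friday"}
assert "she will go to work" in modus_ponens(rules, facts)
```

Note that the machine, like logic itself, does not check the premises against the world: feed it "if it is raining then the ground is dry" and it will happily conclude the ground is dry.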

Inductive reasoning
 Induction is generalizing from cases we have seen to infer information about cases we have not seen.
 For example, if every elephant we have ever seen has a trunk, we infer that all elephants have trunks.
Of course, this inference is unreliable and cannot be proved to be true; it can only be proved to be false.
 We can disprove the inference simply by producing an elephant without a trunk. However, we can
never prove it true because, no matter how many elephants with trunks we have seen or are known to
exist, the next one we see may be trunkless.
 The best that we can do is gather evidence to support our inductive inference.
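The asymmetry described above, where one counterexample disproves an induction but no number of confirmations proves it, can be sketched as a check over observed cases (a hypothetical illustration):

```python
# Induction: "all elephants have trunks" can be falsified by a single
# counterexample, but never proved by any number of confirmations.

def inference_holds(observations, predicate):
    """True while no observed case contradicts the generalization."""
    return all(predicate(obs) for obs in observations)

elephants = [{"trunk": True}, {"trunk": True}, {"trunk": True}]
assert inference_holds(elephants, lambda e: e["trunk"])      # supported, not proved
elephants.append({"trunk": False})                           # one trunkless elephant
assert not inference_holds(elephants, lambda e: e["trunk"])  # now disproved
```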

Abductive reasoning
 The third type of reasoning is abduction. Abduction reasons from a fact to the action or state that
caused it.
 This is the method we use to derive explanations for the events we observe. For example, suppose we
know that Sam always drives too fast when she has been drinking. If we see Sam driving too fast, we may
infer that she has been drinking. This explanation is plausible but unreliable: she may simply be late.
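Abduction can be sketched as running a causal rule backwards, from an observed effect to a candidate cause. The rule base below is a hypothetical illustration:

```python
# Abduction: look up a plausible cause for an observed effect.
# Mapping is effect -> candidate cause (a toy, hand-written rule base).
causal_rules = {"drives too fast": "has been drinking"}

def abduce(observation):
    """Return a candidate explanation for the observed fact, if any."""
    return causal_rules.get(observation)

assert abduce("drives too fast") == "has been drinking"
```

Unlike deduction, the conclusion here is only one plausible explanation among several, which is exactly why abduction is unreliable.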

Problem solving

 If reasoning is a means of inferring new information from what is already known, problem solving is
the process of finding a solution to an unfamiliar task, using the knowledge we have.
 Human problem solving is characterized by the ability to adapt the information we have to deal with
new situations. However, often solutions seem to be original and creative.

Problem space theory


 Newell and Simon proposed that problem solving centers on the problem space.
 The problem space comprises problem states, and problem solving involves generating these states
using legal state transition operators. The problem has an initial state and a goal state, and people use the
operators to move from the former to the latter.
 Such problem spaces may be huge, so heuristics are employed to select appropriate operators to
reach the goal. One such heuristic is means–ends analysis.
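Problem space theory can be sketched as search: states, operators that transform states, and a path from the initial state to the goal state. The breadth-first sweep below is a minimal stand-in; real problem solvers use heuristics such as means–ends analysis to prune the space (the toy state space is my own):

```python
from collections import deque

# Problem space search: apply operators to generate states until the
# goal state is reached, remembering the operator sequence used.

def solve(initial, goal, operators):
    """Return a list of operator names leading from initial to goal."""
    frontier = deque([(initial, [])])
    seen = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for name, apply_op in operators:
            nxt = apply_op(state)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))
    return None

# Toy space: states are integers, operators add 3 or 5, goal is 10.
ops = [("add3", lambda s: s + 3), ("add5", lambda s: s + 5)]
assert solve(0, 10, ops) == ["add5", "add5"]
```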

Errors and mental models

 Human capability for interpreting and manipulating information is quite impressive. However, we do
make mistakes. Some are trivial, resulting in no more than temporary inconvenience or annoyance.
 Others may be more serious, requiring substantial effort to correct.
 Occasionally an error may have catastrophic effects, as we see when ‘human error’ results in a plane
crash or nuclear plant leak. Why do we make mistakes, and can we avoid them? In order to answer the
latter part of the question we must first look at what is going on when we make an error.

3. Explain in detail about devices for virtual reality and 3d interaction?

Positioning in 3D space
• Virtual reality systems present a 3D virtual world. Users need to navigate through these spaces and
manipulate the virtual objects they find there. Navigation is not simply a matter of moving to a
particular location, but also of choosing a particular orientation.
• In addition, when you grab an object in real space, you don’t simply move it around, but also twist and
turn it, for example when opening a door.
• Thus the move from mice to 3D devices usually involves a change from two degrees of freedom to six
degrees of freedom, not just three.
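The six degrees of freedom can be captured in a simple structure: three positional values and three rotational ones. This is an illustrative sketch, not an API from the text; the field names follow the usual aviation terms:

```python
from dataclasses import dataclass

# A 6-degree-of-freedom pose: three values for position, three for
# orientation (pitch, yaw, roll).

@dataclass
class Pose6DOF:
    x: float = 0.0      # left/right
    y: float = 0.0      # up/down
    z: float = 0.0      # forward/backward
    pitch: float = 0.0  # nose up/down
    yaw: float = 0.0    # turn left/right
    roll: float = 0.0   # twist about own axis

# A conventional 2D mouse only ever changes two of these six values.
mouse_like = Pose6DOF(x=3.0, y=2.0)
assert (mouse_like.z, mouse_like.pitch, mouse_like.yaw, mouse_like.roll) == (0.0, 0.0, 0.0, 0.0)
```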

Cockpit and virtual controls


• Helicopter and aircraft pilots already have to navigate in real space. Many arcade games and also more
serious applications use controls modeled on an aircraft cockpit to ‘fly’ through virtual space.
• However, helicopter pilots are very skilled and it takes a lot of practice for users to be able to work
easily in such environments.
• In many PC games and desktop virtual reality (where the output is shown on an ordinary computer
screen), the controls are themselves virtual.
The 3D mouse

• There are a variety of devices that act as 3D versions of a mouse. Rather than just moving the mouse on
a table top, you can pick it up and move it in three dimensions. Such a device typically senses six degrees
of freedom: position in three dimensions, plus pitch (nose up/down), yaw (left/right orientation) and roll
(the amount it is twisted about its own axis).

Dataglove
• One of the mainstays of high-end VR systems (see Chapter 20), the dataglove is a 3D input device.
• Consisting of a lycra glove with optical fibers laid along the fingers, it detects the joint angles of the
fingers and thumb.
• As the fingers are bent, the fiber optic cable bends too; increasing bend causes more light to leak from
the fiber, and the reduction in intensity is detected by the glove and related to the degree of bend in the
joint. Attached to the top of the glove are two sensors that use ultrasound to determine 3D positional
information as well as the angle of roll, that is the degree of wrist rotation.
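The sensing principle, more bend causes more light to leak and so a lower detected intensity, can be sketched as a linear calibration from intensity to joint angle. This is a hypothetical simplification; real gloves need per-joint, per-user calibration:

```python
# Dataglove principle: increasing bend leaks more light from the fiber,
# reducing the intensity at the detector.  We invert that relation with
# a linear map between two calibration readings.

def joint_angle(intensity, straight_intensity, fully_bent_intensity,
                max_angle=90.0):
    """Map a measured light intensity to a joint bend angle in degrees."""
    span = straight_intensity - fully_bent_intensity
    bend_fraction = (straight_intensity - intensity) / span
    return max(0.0, min(max_angle, bend_fraction * max_angle))

assert joint_angle(1.0, 1.0, 0.2) == 0.0               # full intensity: straight
assert joint_angle(0.2, 1.0, 0.2) == 90.0              # minimum intensity: fully bent
assert abs(joint_angle(0.6, 1.0, 0.2) - 45.0) < 1e-9   # halfway
```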

Virtual reality helmets


• The helmets or goggles worn in some VR systems have two purposes: they display the 3D world to each
eye, and they allow the user’s head position to be tracked. We will discuss the former later when we
consider output devices.
• The head tracking is used primarily to feed into the output side. As the user’s head moves around, the
user ought to see different parts of the scene. However, some systems also use the user’s head direction
to determine the direction of movement within the space and even which objects to manipulate (rather
like the eye gaze systems).

Whole-body tracking
• Some VR systems aim to be immersive, that is to make the users feel as if they are really in the virtual
world. In the real world it is possible (although not usually wise) to walk without looking in the
direction you are going.
• If you are driving down the road and glance at something on the roadside you do not want the car to do
a sudden 90-degree turn! Some VR systems therefore attempt to track different kinds of body
movement.

3D displays
• Just as the 3D images used in VR have led to new forms of input device, they also require more
sophisticated outputs.
• Desktop VR is delivered using a standard computer screen, and a 3D impression is produced by using
effects such as shadows, occlusion (where one object covers another) and perspective. This can be very
effective, and you can even view 3D images over the world wide web using a VRML (Virtual Reality
Modeling Language) enabled browser.
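The perspective effect mentioned above can be sketched with a pinhole projection: a point's screen coordinates are scaled by focal length over depth, so distant objects appear smaller (a minimal illustration, not from the source text):

```python
# Perspective projection: the depth cue behind desktop VR.  A point is
# scaled by focal_length / z, so far-away objects shrink on screen.

def project(x, y, z, focal_length=1.0):
    """Project a 3D point onto the 2D screen plane (requires z > 0)."""
    return (focal_length * x / z, focal_length * y / z)

near = project(1.0, 1.0, 2.0)    # the same point, close to the viewer
far = project(1.0, 1.0, 10.0)    # ... and further away
assert near[0] > far[0]          # further objects project smaller
```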

VR motion sickness
• We all get annoyed when computers take a long time to change the screen, pop up a window, or play a
digital movie.
• However, with VR the effects of poor display performance can be more serious. In real life when we
move our head the image our eyes see changes accordingly. VR systems produce the same effect by
using sensors in the goggles or helmet and then using the position of the head to determine the right
image to show.

Simulators and VR caves


• Because of the problems of delivering a full 3D environment via head-mounted displays, some virtual
reality systems instead place the user within a physical environment onto which the virtual world is
projected.
• The most obvious examples of this are large flight simulators – you go inside a mockup of an aircraft
cockpit and the scenes you would see through the windows are projected onto the virtual windows.
