
Lesson 2

INF1520 UNIT2 NOTES


2.2 Cognitive Psychology in HCI

Many cognitive processes underlie the performance of a task or action by humans. Human information processing consists of three interacting systems: the perceptual system, the cognitive system, and the motor system. We can therefore characterise human (user) resources into three categories:

· Perception: the way that people detect information in their environment.

· Cognition: the way that they process that information.

· Physiology: the way in which they move and interact with physical objects
in their environment.

A vital foundation for designers of interactive systems is an understanding of the cognitive and perceptual abilities of the user. Some regard perception as part of cognition (Preece et al 2007; Preece et al 2019), but here we will discuss it as a separate aspect of human information processing.
2.2.1 Perception

Perception involves the use of our senses to detect information. The human ability
to interpret sensory input rapidly and initiate complex actions makes the use of
modern computer systems possible. In computerised systems, this mainly involves
using the senses to detect audio instructions and output, visual displays and
output, and tactile (touchable) feedback.

Information from the external world is initially registered by modality-specific sensory stores or memories for visual, audio and tactile information respectively. These stores can be regarded as input buffers holding a direct representation of sensory information, but the information persists there for only a few tenths of a second. So, if a person does not act on sensory input immediately, it will not have any effect. Shneiderman et al (2014:71) identified several implications for designing information so that it is perceptible and recognisable across different media:

· Icons and other graphical representations should enable users to readily distinguish their meaning.

· Borders and spacing are effective visual ways of grouping information, making it easier to perceive and locate items.

· If sound is used, it should be audible and distinguishable so that users understand what the sounds represent.

· Speech output should enable users to distinguish between sets of spoken words and to understand what they mean.

· Text should also be legible and distinguishable from the background.

· When tactile feedback is used in a virtual environment, it should allow users to recognise the meaning of the touch sensations being emulated; for example, the sensation of squeezing is represented in a tactile form that is different from the sensation of pushing.

Many factors affect perception, for example:

· A change in output, such as a change in the loudness of audio feedback or in the size of elements of the display.

· Maximum and minimum detectable levels of, for example, sound. People differ in the range of frequencies they can hear. They also differ in the number of signals they can process at a time.

· The field of perception. Depending on the environment, not all stimuli may
be detectable. Not all parts of the display may be visible if a user, for example,
faces it at the wrong angle.

· Fatigue and circadian (biological) rhythms. When people are tired, their
reactions to stimuli may be slower.

· Background noise.

Designers have to make sure that people can see or hear displays if they are to use them. In some environments, this is particularly important. For instance, most aircraft produce over 15 audible warnings. It is relatively easy to confuse them under stress and with high levels of background noise. Such observations may be worrying for the air traveller, but they also have significance for more general HCI design. We must ensure that signals are redundant (i.e., conveyed through more channels than strictly necessary). If we display critical information through small changes to the screen, many people will not detect the change. If you rely upon audio signals to inform users about critical events, you exclude people with hearing problems or people who work in a noisy environment. On the other hand, audio signals may irritate users in shared offices.

Partial sight, ageing and congenital colour defects produce changes in perception
that reduce the visual effectiveness of certain colour combinations. Two colours
that contrast sharply when perceived by someone with normal vision may be far less
distinguishable to someone with a visual defect. People with colour perception
defects generally see less contrast between colours than someone with normal
vision. Lightening light colours and darkening dark colours will increase the
visual accessibility of a design.

Three attributes of colour influence how it is perceived:

· Colour hue describes the perceptual attributes associated with elementary colour names. Hue enables us to identify basic colours such as blue, green, yellow, red and purple. People with normal colour vision report that hues follow a natural sequence based on their similarity to one another.

· Colour lightness corresponds to how much light is reflected from a surface in relation to nearby surfaces. Lightness, like hue, is a perceptual attribute that cannot be computed from physical measurements alone. It is the most important attribute in making contrast more effective.

· Colour saturation indicates a colour’s perceptual difference from a white, black or grey of equal lightness. Slate blue is an example of a desaturated colour because it is similar to grey.

Congenital and acquired colour defects make it difficult to discriminate between colours on the basis of hue, lightness or saturation. Designers can compensate for these defects by using colours that differ more noticeably with respect to all three attributes.
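The advice above (lighten the light colours, darken the dark ones) is essentially advice about lightness contrast. One way to make this concrete is the relative-luminance and contrast-ratio formulas from the W3C's WCAG 2.0 guidelines; the sketch below is a minimal Python version (the function names are our own, not part of any standard library):

```python
def _linearise(c: float) -> float:
    # Convert one sRGB channel (scaled to 0..1) to linear light (WCAG 2.0).
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb: tuple) -> float:
    # Weighted sum of linearised R, G and B channels.
    r, g, b = (_linearise(v / 255) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple, bg: tuple) -> float:
    # Ratio of the lighter to the darker luminance, offset by 0.05.
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white yields the maximum possible ratio of 21:1.
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
```

A designer can use such a computation to check that two colours still differ sufficiently in lightness, which is the attribute that survives most colour-vision defects.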
2.2.2 Cognition

Cognition refers to a variety of processes that take place in our heads. These
include:

· short-term memory and information processing

· long-term memory and learning


· problem-solving

· decision-making

· attention

· search and scanning

· time perception

Knowledge of these will help designers to create usable interfaces. Our discussion
of cognition will be limited to attention and memory.

2.2.2.1 Attention

Attention is the process of concentrating on something (e.g., an object, a task or a conversation) at a specific point in time. It can involve our senses such as
looking at the road while driving or listening to a news story on the radio, or it
can involve thinking processes such as concentrating on solving a mathematical
problem in your head. People differ in terms of their attention span. Some people
are distracted easily whereas others can concentrate on a task in spite of external
disturbances. In the past 10 years it has become even more common for people to
switch between multiple tasks (Shneiderman et al 2014). Attention allows us to
focus on information which is relevant to what we are doing.

Attention is influenced by the way information is presented as well as by people’s goals (Preece et al 2007, 2019). This has implications for designers of computer
systems. If information on an interface is poorly structured, users will have
difficulty in finding specific information. How information is displayed determines
how well people will be able to perform a searching task. When using a system with
a particular goal in mind, the user’s attention will remain focused more easily
than when he or she aimlessly browses through an application. Designers of browsing
and searching software should therefore find ways to lead users to the information
they want. In a computer game, it is important that users always know what their
next goal in the game is, otherwise they will lose interest.

The following activity is a good example of focusing your attention. Find the price of a family room in a guest house that has five rooms in Table 2.1 (a). Then find the telephone number in Table 2.1 (b). In which table, (a) or (b), did it take longer to find the information?

Table 2.1: Finding information relating to accommodation

Gauteng

City          Guest House    Area code  Phone      Rates
                                                   Single  Double
Johannesburg  All-in-one     011        670 9232   R450    R900
Johannesburg  Rest retreat   011        670 9232   R670    R1300
Johannesburg  Break a way    011        678 9834   R300    R600
Pretoria      All-in-one     012        690 1232   R550    R1000
Pretoria      Rest retreat   012        690 1453   R770    R1400
Pretoria      Break a way    012        678 9566   R330    R700

(a)

Cape Province

Holiday Inn Hotel: Cape Town 021 689 0987 S: R550 D: R1 200
Holiday Inn Hotel: Durbanville 021 689 1988 S: R450 D: R1 100
Holiday Inn Hotel: Muisenberg 021 6693458 S: R666 D: R1 499
Holiday Inn Hotel: Simon’s Town 021) 679 2388 S: R455 D: R1 222
Protea Hotel: Cape Town 021 689 8743 S: R350 D: R780
Protea Hotel: Stellenbosch 021 787 8932 S: R456 D: R876

(b)
In early studies conducted by Tullis, it was found that the two screens produce different results: it takes on average 3,2 seconds to find the information in Table 2.1 (a) and 5,5 seconds to find the same kind of information in Table 2.1 (b). Why is this so? The primary reason is the way in which the characters are grouped in the display. In Table 2.1 (a) the characters are grouped into vertical categories of information with columns of space between them. Because the information in Table 2.1 (b) is bunched together, it is much harder to scan.
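Tullis's finding can be reproduced in miniature: the same records read very differently when bunched together than when aligned into labelled columns. A small Python sketch (the records are taken from Table 2.1; the field widths are arbitrary choices):

```python
rows = [
    ("Johannesburg", "All-in-one", "011 670 9232", "R450", "R900"),
    ("Pretoria", "Break a way", "012 678 9566", "R330", "R700"),
]

# Bunched presentation, like Table 2.1 (b): everything runs together.
for row in rows:
    print(" ".join(row))

# Columnar presentation, like Table 2.1 (a): fixed-width fields create
# vertical groupings separated by whitespace, which the eye can scan.
for city, house, phone, single, double in rows:
    print(f"{city:<14}{house:<14}{phone:<14}{single:>6}{double:>7}")
```

The second loop does nothing more than pad each field to a fixed width, yet it is this padding that produces the vertical "columns of space" the text credits for the faster search times.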

Shneiderman et al (2014) identified a few guidelines which designers can use to get the user’s attention, with the caveat that these techniques should be used sparingly to avoid clutter:

· Intensity. The designer should make use of two levels only with a limited
use of high intensity to draw attention.

· Marking. Underline an item, enclose it in a box, point at it with an arrow, or make use of an indicator such as an asterisk, bullet, plus sign or an X.

· Size. Use at most four sizes, with the larger sizes attracting attention.

· Choice of fonts. Never use more than three fonts.

· Inverse video. Use inverse colouring.

· Blinking. Make use of blinking display (2-4 Hz) or blinking colour changes
but use with caution and only in limited areas.

· Colour. Use only four standard colours and reserve additional colours for
occasional use.

· Audio. Use soft tones for regular positive feedback and harsh sounds for
emergency conditions.

Audio tones, such as the click of a keyboard or the ring tone of a telephone, can provide informative feedback about progress. Alarms that go off in an emergency are a good example of getting a user’s attention, but there should also be a mechanism for the user to suppress alarms. An alternative to alarms is voice messages.

2.2.2.2 Memory

Memory consists of a number of systems that can be distinguished in terms of their cognitive structure as well as their respective roles in the cognitive process (Gathercole 2002). Authors have different views on how memory is structured, but most distinguish between long-term and short-term memories. Short-term memory (STM) stores information or events from the immediate past and retrieval is measured in seconds or sometimes minutes (Gathercole 2002). Long-term memory (LTM) holds information about events that happened hours, days, months or years ago and the information is usually incomplete.

STM has a relatively short retention period and is limited in the amount of
information that it can keep. It is easy to retrieve information from STM. Some
people refer to STM as “working memory” since it acts as a temporary memory that is
necessary to perform our everyday activities. The effectiveness of STM is
influenced by attention – any distraction can cause information to vanish from STM.
Generally, people can keep up to seven items (e.g., a seven-digit telephone number)
in their STM unless there is some distraction.
LTM, on the other hand, has a high capacity. As its name suggests, it can store
information over much longer periods of time, but access is much slower. It also
takes time to record memories there. If we have to extract the information from
LTM, it may involve several moments of thought: for example, naming the seven
dwarfs or the current members of the national soccer team. The information stored
in LTM is affected by people’s interpretation of the events or contexts.
Information retrieved from LTM is also influenced by the retriever’s current
context or state of mind.

We should design interfaces that make efficient use of users’ short-term memory.
Users should be required to keep only a few items of information in their STM at
any point during interaction. They should not be compelled to search back through
dim and distant memories of training programmes in order to operate the system.
User interfaces can support short-term memory by including cues on the display.
This is effectively what a menu does: it provides fast access to a list of commands
that do not have to be remembered. On the other hand, help facilities are more like
long-term memory. We have to retrieve them and search through them to find the
information that we need.

In line with the general STM capacity, seven is often regarded as the magic number
in HCI. Important information is kept within the seven-item boundary. Additional
information can be held, but only if users employ techniques such as chunking. This
involves the grouping of information into meaningful sections. National telephone
numbers are usually divided in this way: 012 429 6122. Chunking can be applied to
menus through separator lines or cascading menus.
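The grouping of the national telephone number above can be sketched as a tiny function; this is an illustrative sketch (the function name and default group sizes are our own choices, mirroring the 3-3-4 split in the text):

```python
def chunk_number(digits: str, sizes=(3, 3, 4)) -> str:
    # Split a digit string into meaningful groups, e.g. "0124296122"
    # becomes "012 429 6122" (area code, exchange, subscriber number).
    parts, pos = [], 0
    for size in sizes:
        parts.append(digits[pos:pos + size])
        pos += size
    return " ".join(parts)

print(chunk_number("0124296122"))  # 012 429 6122
```

The same principle applies in interfaces: a separator line in a menu is to a list of commands what the spaces are to this digit string, turning one long sequence into a few memorable chunks.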

As we have mentioned, it takes effort to hold things in STM. We all experience a sense of relief when it is freed up. As a result of the strain of maintaining STM,
users often hurry to finish some tasks. They want to experience the sense of relief
when they achieve their objective. This haste can lead to error. Some ATMs issue
money before returning the user’s card. Users experience a sense of closure when
they have satisfied their objective of withdrawing money. They then walk away and
leave their cards in the machine. To avoid this, most ATMs dispense cash only after
the user has removed the card. The computerised system is designed so as to prevent
errors caused by the limitations of STM.

An important aim for user interface design is to reduce the load on STM. We can do
this by placing information “in the world” instead of expecting users to have it
“in the head” (Norman 1999). In computer use, knowledge in the world is provided
through the use of prompts on the display and the provision of paper documentation.
Shneiderman et al (2014) indicated that users increasingly save their digital
content on the Cloud, iCloud, Vimeo, Pinterest and Flickr so that they can access
it from multiple platforms. The challenge these companies face is to provide
interfaces that will enable users to store their content so that they can readily
access specific items, for example, a particular image, video or document. In order
to help users to remember what they saved, where they saved it or how they named
the file, different recall methods are used. Initially, the user tries recall-directed memory and, when it fails, recognition-based scanning, which takes longer. Designers should consider both kinds of memory processes so that users can use whatever memory they have to limit the area being searched, and then present the information in this area of the interface.

Complete activity 2.2

ACTIVITY 2.2

1. Draw up a table with two columns – one for STM and one for LTM – and list
the differences between the two types of memory.
2. Give your own example of how the load on the user’s STM can be relieved
through thoughtful design of the interface.
Short-Term Memory (STM)                          Long-Term Memory (LTM)
Limited capacity                                 Virtually unlimited capacity
Brief duration (seconds to minutes)              Duration can last a lifetime
Susceptible to interference                      Less susceptible to interference
Primarily acoustic encoding                      Primarily semantic encoding
Information easily forgotten if not rehearsed    Information retained for long periods with or without rehearsal
Conscious awareness                              Information can be retrieved unconsciously
Storage for immediate tasks                      Storage for relatively permanent knowledge
Involved in active processing                    Involves passive storage

2.2.2.3 Knowledge in the World vs Knowledge in the Head

Norman (1999) refers to information kept in someone’s memory as “knowledge in the head” and to external information as “knowledge in the world”. Both these kinds of information are necessary for our functioning in the world (and also for our interaction with computers), but how they are used depends on the individual. Some people rely more on knowledge in the world (e.g., notes, lists and birthday calendars) whereas others depend more on the knowledge in their heads (their memory). There are advantages and disadvantages to both approaches. These are summarised in Table 2.2.

Table 2.2: Comparison of knowledge in the head and in the world (from Norman
(1999))

Retrievability
  Knowledge in the world: Easily retrievable whenever visible or audible (depends on availability in the environment).
  Knowledge in the head: More difficult to retrieve; requires memory search or reminding.

Learning
  Knowledge in the world: Learning is not required, only interpretation.
  Knowledge in the head: Getting information there requires learning, which can be considerable.

Efficiency of use
  Knowledge in the world: Tends to be slowed down by the need to find and interpret the external sources.
  Knowledge in the head: Can be very efficient.

Ease of use at first encounter
  Knowledge in the world: High.
  Knowledge in the head: Low.

Aesthetics
  Knowledge in the world: Can be unaesthetic and inelegant, especially if a great amount of information must be maintained; can lead to clutter. Requires a skilled designer.
  Knowledge in the head: Nothing needs to be visible, which gives the designer more freedom.

When designing interfaces, the trade-off between knowledge in the world and
knowledge in the head must be kept in mind. Do not rely too much on the user’s
memory, but don’t clutter the interface with memory cues or information that is not
really necessary. Meaningful icons and menus can be used to relieve the strain on
memory, but the Help menu should provide additional information “in the world” that
is difficult to display properly on the interface.

Complete activity 2.3

ACTIVITY 2.3

Explain how we use cellular phones as knowledge in the world. Your answer should
make it clear what is meant by the term “knowledge in the world”.

2.2.2.4 Examples to Illustrate the Role of Memory in HCI

We end this section on cognition with two examples of how interface design can relate to human cognition (the user’s memory in particular). The first example is a message from Microsoft’s Word 97. The message appears after you spell-check a document containing text that you have marked to be excluded from spell-checking (the no-proofing option). The message is certainly informative but requires that the user either has an exceptional short-term memory or has pen and paper handy to write down the steps that it refers to.

Proofing in Microsoft Word (Isys 2000). From http://hallofshame.gp.co.at/mdesign.htm

Given all the passwords each of us must keep track of, it’s all too easy to forget
the password for a particular account or program. The figure shows how many
applications nowadays help the user to remember a password. When creating a new
account, you are asked to specify the new password and, in addition, provide a
question and answer in the event that you forget your password at some later time.
The log-in window includes a “Forgot my password” button that will prompt you with
the question you provided at registration and await your response.

Mechanism to retrieve a forgotten password (Isys 2000). From http://hallofshame.gp.co.at/mdesign.htm

This is a good solution to a problem that has plagued system operators everywhere.
It is an interface feature that should be considered for every application that
requires a password.
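The register-question-then-prompt flow described above can be sketched in a few lines. This is a hedged illustration only: the function names and the in-memory store are hypothetical, and a production system would use salted password hashing and rate limiting rather than this bare minimum.

```python
import hashlib

accounts = {}  # hypothetical in-memory account store

def _digest(secret: str) -> str:
    # Store only hashes, never the raw password or answer.
    return hashlib.sha256(secret.encode()).hexdigest()

def register(user: str, password: str, question: str, answer: str) -> None:
    # At registration the user supplies a password plus a question/answer pair.
    accounts[user] = {
        "password": _digest(password),
        "question": question,
        "answer": _digest(answer.strip().lower()),  # normalise before hashing
    }

def forgot_password_prompt(user: str) -> str:
    # The "Forgot my password" button shows the user's own question:
    # knowledge in the world that cues knowledge in the head.
    return accounts[user]["question"]

def verify_answer(user: str, answer: str) -> bool:
    return accounts[user]["answer"] == _digest(answer.strip().lower())

register("alice", "s3cret", "Name of first pet?", "Rex")
print(forgot_password_prompt("alice"))  # Name of first pet?
print(verify_answer("alice", "rex"))    # True
```

The point for HCI is the memory trade-off: the question is a recognition cue the user chose, so recalling the answer is far easier than recalling an arbitrary password.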
2.3 Physiology

Physiology is the study of the functioning of the human body. It might seem strange to include this in a course on user interface design, but knowledge of physiology can make a noticeable contribution to the design of a successful system.

2.3.1 Physical Interaction and the Environment

When using a computer system, users must at least be able to view the interface and
reach the input devices. Designers often have relatively little influence on the
working environments of their users. If they do have some power, here are a few
guidelines they can follow:

· Visual displays should always be positioned at the correct visual angle to the user. Even relatively short periods of neck rotation can lead to long-lasting pain in the shoulders and lower back.

· Keyboard and mouse use: Prolonged periods of data entry place heavy stress upon the wrist and upper arm. A range of low-cost wrist supports is now available; they are far cheaper than the cost of employing and retraining new members of staff. Problems in this regard include repetitive strain injury and carpal tunnel syndrome (both cause pain and numbness in the arms). Frequent breaks can help to reduce the likelihood of these conditions.

· Chairs and office furniture: It’s no good providing a really good user
interface if your employees spend most of their time at a chiropractor. It is worth
investing in well-designed chairs that provide proper lower back support and
promote a good posture in front of a computer.

· Placement of work materials: Finally, it is important that users are able to operate their system in conjunction with other sources of information and documentation. Repeated gaze transfers lead to neck and back problems. Paper and book stands can reduce this.

· Other people: You cannot rely on system operators to prevent bad things
from happening. Unexpected events in the environment can create the potential for
disaster. For example, a patient monitoring system should not rely on a touch
screen if doctors or nursing staff who move around the patient can accidentally
brush against it.

It also pays to consider the possible sources of distraction in the working environment:

· Noise: Distraction can be caused by the sounds made by other workers (their
phone calls or the buzz of their computers) and by office equipment (fans or
printers). There are a number of low-cost solutions. For example, you may introduce
screens around desks or covers for devices such as printers. High-cost solutions
involve the use of white noise to mask intermittent beeps.

· Light: Bright lighting can distract users in their interaction with computers. Its impact can be reduced by using blinds and adjusting artificial lighting to reduce glare in the room. A side-effect of dimmer lighting is that, over time, users may suffer from fatigue and drowsiness; many Japanese firms have invested in high-intensity lighting systems to avoid this problem. Low-cost solutions involve moving furniture or using polarising filters.

There are also a number of urban myths (untruths) about the impact of computer
systems on human physiology:

· Eyesight: Computer use does not damage your eyesight. It may, however, make
you aware of existing defects.

· Epilepsy: Computer use does not appear to induce epileptic attacks. Television may trigger photosensitive epilepsy, but the visual display units of computers do not seem to have the same effect. The effect of multimedia video systems upon this illness is still unclear.

· Radiation: The National Radiological Protection Board in the UK stated that VDUs do not significantly increase the risk of radiation-related illnesses.

Interfaces often reflect the assumptions that their designers make about the
physiological characteristics of their users. Buttons are designed so that an
average user can easily select them with a mouse, touchpad or tracker-ball.
Unfortunately, there is no such thing as an average user. Some users have the
physiological capacity to make fine-grained selections, but others do not. Although
users may have the physical ability to use these interfaces, workplace pressures
may reduce their physiological ability.

A rule of thumb is: Do not make interface objects so small that they cannot be
selected by a user in a hurry; also, do not make disastrous options so easy to
select that they can be started by accident.

Complete activity 2.4

ACTIVITY 2.4

Choose any computer-based activity you sometimes perform such as selecting and
playing a song, writing and sending an e-mail, or submitting an assignment through
myUnisa.

Name the activity. Now mention three broad categories of human resources we use
in processing an action. Relate each category to how you would, in practice, use
that resource in your chosen activity.

2.3.2 Users with Disabilities

Preece et al (2007) define accessibility as “the degree to which an interactive product is usable by people with disabilities” (p 483). There is a wide range of disabilities, including severe physical conditions such as blindness, deafness and paralysis, and less severe ones such as dyslexia and colour blindness. Then there are mental disabilities such as Down syndrome, autism and dementia. In the United States, more than forty-eight million people were disabled in 2006 (Kraus, Stoddard & Gilmartin 2006). The 2001 South African census revealed that more than two million people had some form of disability (Lehohla 2005). It was estimated that in 2006 more than five hundred million people around the world were disabled (United Nations 2006).

The statistics above provide ample reason to compel designers to take accessibility into consideration. It will have a profound impact on the development of the user interface if people with disabilities form part of the target market. Henry (2002) lists more reasons for designing systems that are accessible to people with disabilities. These include:

· Compliance with regulatory and legal requirements: In many European countries and Australia, there is a statutory obligation to provide access for blind users when designing computer systems. In 1999 an Australian blind user successfully sued the Sydney Organising Committee for the Olympic Games under the Australian Disability Discrimination Act (DDA) due to his inability to order game tickets using Braille technology (Waddell 2002). Section 508 of the American Rehabilitation Act stipulates that all federal electronic information should be accessible to people with auditory, visual and mobility impairments.

· Exposure to more people: Disabled people and the elderly have good reason to use new technologies. People who are unable to drive or walk and those with mobility impairments can benefit from accessible online shopping. Communication technologies such as e-mail and mobile technology can provide them with the social interaction they would otherwise not have.

· Better design and implementation: Incorporating accessibility into design results in an overall better system. Making systems accessible to the disabled will also enhance usability for users without disabilities.

· Cost savings: The initial cost of incorporating accessibility features into a design is high, but an accessible e-commerce site will result in more sales because more people will be able to access the site. Addressing accessibility issues will also reduce the legal expenses that could result from lawsuits by users who might want to enforce their right to equal treatment.

Guidelines to promote accessibility for users with disabilities were developed under the US Rehabilitation Act (http://www.access-board.gov/508.html) by the Access Board, an independent U.S. government agency devoted to accessibility for users with disabilities. The World Wide Web Consortium (W3C) adapted these guidelines (http://www.w3.org/TR/WCAG20/). According to Shneiderman et al (2014), the following accessibility guidelines were identified:

· Text alternatives. The idea behind a text alternative is to provide a text equivalent for any non-text content so that it can be changed into other forms which users need, for example, large print, Braille, speech, symbols or simpler language.

· Time-based media. If non-text content is time-based media (e.g., movies or animations), provide text alternatives that at least give a descriptive identification of the content, and synchronise equivalent alternatives, such as captions or auditory descriptions of the visual track, with the presentation.

· Distinguishable. This guideline makes it easier for users to see and hear content by separating the foreground from the background. Colour is not used as the only visual means of conveying information, indicating an action, prompting a response or distinguishing a visual element.

· Predictable. The designer should make web pages appear and operate in predictable ways.

Advances in computer technology and the flexibility of computer software make it possible for designers to provide special services to users with disabilities. The flexibility of desktop, web and mobile devices makes it possible to design for people with special needs and disabilities. Below we consider two user groups with physical disabilities – visual and motor impairments – highlighting the limitations of normal input and output devices for them.

2.3.2.1 Users with Vision Impairments

Visually impaired people experience difficulties with output display besides the
problems that the mouse and other input devices pose. Text-to-speech conversion can
help blind users to receive electronic mail or read text files, and speech-
recognition devices allow voice-controlled operation of some applications.
Enlarging portions of a display or converting displays to Braille or voice output
can be done with hardware and software that is easily obtainable. Speech generation
and auditory interfaces are also used by sighted users under difficult conditions,
for example, when driving an automobile, riding a bicycle or working in bright
sunshine.

Reading and navigating text or objects on a computer screen is a very different experience for a user who cannot see properly. The introduction of graphical user interfaces (GUIs) was a setback for vision-impaired users, but technology innovations such as screen readers facilitate the conversion of graphical information into non-visual modes. Screen readers are software applications that extract textual information from the computer’s video memory and send it to a speech synthesiser that describes the elements of the display to the user (including icons, menus, punctuation and controls). Not being able to skim an entire page, the user has to navigate without any visual cues such as colour contrast, font or position (Phipps, Sutherland & Seale 2002). Pages that are split into columns, frames or boxes cannot be translated accurately by screen readers.

Using the mouse requires constant hand-eye coordination and reaction to visual
feedback. This complicates matters for the visually impaired. They need to execute
clicking and selecting functions by means of dedicated keys on a keyboard or
through a special mouse that provides tactile feedback. Users with partial sight
should be allowed to change the size, shape and colour of the onscreen mouse
cursor, and auditory or tactile feedback of actions will be helpful.

With regard to keyboard use, visually impaired users require keys with large
lettering, a high contrast between text and background, and even audible feedback
when keys are pressed. Blind users usually access all commands and options from the
keyboard; therefore, function and control keys need to be marked with Braille or
tactile identification.

2.3.2.2 People with Motor Impairments

A significant proportion of the population has motor disabilities acquired at
birth or through an accident or illness. Users with severe motor impairments are
often excluded from using standard devices, yet low-cost modifications can
increase access considerably. For those confined to bed, computers and the
internet in particular provide a satisfying and stimulating means of interaction
and give them access to resources, people and places to which they would not
otherwise have access.

Users with physical impairments may have difficulties with grasping and moving a
standard mouse. They also find fine motor coordination and selecting small on-
screen targets demanding, if not impossible. Clicking, double clicking and drag-
and-drop operations pose problems for these users. Designers must find ways to make
this easier, for example, by letting the mouse vibrate if the cursor is over the
target or implementing “gravity fields” around objects so that when the cursor
comes into that field, it is drawn towards the target. Another solution is provided
through trackballs that allow users to move the cursor using only the thumb.
Severely physically impaired users may be able to move only their heads; for them,
head-operated devices, eye-tracking devices or head-mounted optical mice are
required to control on-screen cursor movement. Speech input is another
alternative, but error rates are still high (especially if the user’s speech is
also affected by the impairment) and it can only be used in a quiet environment.
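The “gravity field” idea mentioned above can be sketched in a few lines: once the cursor enters a radius around a target, each update nudges it a fraction of the way towards the target centre. This is a minimal illustrative sketch, not a real toolkit API; the radius and pull values are assumptions chosen for the example.

```python
import math

def apply_gravity(cursor, targets, radius=40.0, pull=0.5):
    """Nudge the cursor toward the first target whose gravity field
    contains it (illustrative values for radius and pull)."""
    cx, cy = cursor
    for tx, ty in targets:
        dist = math.hypot(tx - cx, ty - cy)
        if 0 < dist <= radius:
            # Move a fraction of the remaining distance toward the target.
            cx += (tx - cx) * pull
            cy += (ty - cy) * pull
            break
    return (cx, cy)

# Cursor at (100, 100); a button centred at (120, 100) lies within the field,
# so the cursor is drawn halfway towards it.
print(apply_gravity((100.0, 100.0), [(120.0, 100.0)]))
```

In a real interface this function would run on every mouse-movement event, which is why the pull is a fraction rather than an instant jump: users with tremors still retain control, but small targets become easier to acquire.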

Keyboards need to be detachable so that they can be positioned according to the
user’s needs, and there must be adequate grip between the keyboard and desktop so
that the user cannot accidentally move the keyboard around. Individual keys should
be separated by sufficient space and should not require much force to press.
Oversized keyboards, key guards to guide fingers onto keys, and software-enabled
sticky keys are possible solutions for users who experience uncertain touch. Some
users prefer mouth sticks or hand splints to hit buttons. Designers can adapt the
interface so that everything is controlled with a single button.

Users with hearing impairments use computers to convert tones to visual signals
and communicate by e-mail in an office environment. There are also
telecommunication devices for the deaf (TDD or TTY) that enable telephone access
to information, such as train or airplane schedules, and to services (Shneiderman
et al 2014). Improving designs for users with disabilities is an international
concern.

Complete activity 2.5


ACTIVITY 2.5

Stephen Hawking was a well-known physicist who wrote influential books such as
A Brief History of Time. Find information on him on the internet and then
describe:

· the nature of his disability

· how it affected his life

· how technology has helped him

· the mechanisms he used to interact with technology


2.4 Culture

Another perspective on individual differences has to do with cultural, ethnic,
racial or linguistic backgrounds. It seems obvious that users who were raised
learning to read Japanese or Chinese will scan a screen differently from users who
were raised to read English or Afrikaans. Users from cultures that have a more
reflective style or a great respect for ancestral traditions may prefer other
interfaces than those chosen by users from cultures that are more action-oriented
or novelty-based. Mobile device preferences may also vary across cultures and
rapidly changing styles.

The term “culture” is often wrongly associated with national boundaries. Culture
should rather be defined as the behaviour typical of a certain group or class of
people. Culture is conceptualised as a system of meaning that underlies routine and
behaviour in everyday working life. It includes race and ethnicity as well as other
variables and is manifested in customary behaviours, assumptions and values,
patterns of thinking and communicative style. According to Shneiderman et al
(2014), designers are still struggling to establish guidelines for designing for
multiple languages and cultures.

Nisbett (2003) compared the thought patterns of East Asians and Westerners and
classified them as holistic and analytic respectively. Holistically-minded people
tend to perceive a situation globally whereas analytically-minded people tend to
perceive an object separately from the context and to assign objects to categories.
Based on this distinction, Yong and Lee (2008) compared how these two groups view a
web page. They found distinct differences. For example, holistically-minded people
scan the whole page in a non-linear fashion, whereas analytically-minded people
tend to employ a sequential reading pattern.

As software producers expand their markets by introducing their products in other
countries, they face a host of new interface considerations. The influence of
culture on computer use is constantly being researched, but there are two well-
known approaches that designers follow when called on to create designs that span
language or culture groups:

· Internationalisation refers to a single design that is appropriate for use
worldwide among groups of nations. This is an important concept for designers of
web-based applications that can be accessed from anywhere in the world by anybody.
Localisation, on the other hand, involves the design of versions of a product for
a specific group or community with one language and culture. The simplest problem
here is the accurate translation of products into the target language. For
example, all text (instructions, help, error messages, labels) might be stored in
files so that versions in other languages could be generated with little or no
programming. Hardware concerns include character sets, keyboards and special input
devices. Other problems include sensitivity to cultural issues such as the use of
images and colour.
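The suggestion above, that all text be stored in files so that other-language versions can be generated without reprogramming, is essentially a message catalogue. A minimal Python sketch follows; the catalogue contents, language codes and function name are illustrative assumptions (a real application would typically load catalogues from resource files, for example via the standard gettext machinery):

```python
# Message catalogues keyed by language code. In practice these would be
# loaded from separate resource files, not hard-coded in the program.
MESSAGES = {
    "en": {"save": "Save", "delete_confirm": "Delete this file?"},
    "af": {"save": "Stoor", "delete_confirm": "Skrap hierdie lêer?"},
}

def t(key, lang="en"):
    """Look up a UI string for the given language, falling back to
    English if the translation is missing."""
    return MESSAGES.get(lang, {}).get(key) or MESSAGES["en"][key]

print(t("save", "af"))            # Afrikaans label
print(t("delete_confirm", "xx"))  # unknown language: English fallback
```

Because no user-visible string is embedded in program logic, adding a new language means supplying one more catalogue, which is exactly the property the paragraph describes.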

User interface design concerns for internationalisation are numerous, and
internationalisation is full of pitfalls. Early designs were often forgiven for
their cultural and linguistic slips, but the current highly competitive atmosphere
means that more effective localisation will often produce a strong advantage.
Simonite (2010) reports on the online translation services that now make it
possible to have web content immediately translated into other languages. These
services use a technique called statistical machine translation, which derives
rules for future translations from a statistical comparison of previously
translated documents. In 2010, Google’s translation service could translate
between 52 languages, although the translations contained errors and needed some
human intervention (Simonite 2010).

There are many factors that need to be addressed before a software package can be
internationalised or localised. These can be categorised as overt and covert
factors:

· Overt factors are tangible, straightforward and publicly observable. They
include dates, calendars, weekends, day turnovers, time, telephone number and
address formats, character sets, collating sequence, reading and writing
direction, punctuation, translation, units of measure and currency.

· Covert factors deal with the elements that are intangible and depend on
culture or special knowledge. Symbols, colours, functionality, sound, metaphors and
mental models are covert factors. Much of the literature on internationalising
software has advised caution in addressing covert factors such as metaphors and
graphics. This advice should be heeded to avoid misinterpretation of the meaning
intended by the developers or inadvertent offence to the users of the target
culture.

An example of misinterpretation is the use of the trash can icon in the Apple
Macintosh user interface. People from Thailand do not recognise the American trash
can because in Thailand trash cans are actually wicker baskets. Some visuals are
recognisable in certain cultures, but they convey a totally different meaning. In
the United States, the owl is a symbol of knowledge but in Central America, the owl
is a symbol of witchcraft and black magic. A black cat is considered bad luck in
the US but good luck in the UK. Similarly, certain colours hold different
connotations in different cultures.

One culture may find certain covert elements inoffensive, but another may find the
same elements offensive. In most English-speaking countries, the ring or OK hand
gesture is understood as intended, but in France it means “zero”, “nothing” or
“worthless”, and in some Mediterranean countries it implies that a man is
homosexual. Covert factors will only work if the message intended in those covert
factors is understood in the target culture. Before any software with covert
factors is used, the software developers need to ensure that the correct
information is communicated by validating these factors with the users in the
target culture.
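Several of the overt factors listed earlier, such as date order and currency presentation, can be handled with simple per-locale formatting rules. The sketch below is illustrative only: the locale table and function are assumptions for this example, and details such as decimal separators are simplified (production code would use a locale library such as Python's locale module or Babel):

```python
import datetime

# Illustrative per-locale conventions; note the US month-first date order
# versus the UK day-first order. Decimal/grouping separators are simplified.
CONVENTIONS = {
    "en_US": {"date": "%m/%d/%Y", "currency": "${:,.2f}"},
    "en_GB": {"date": "%d/%m/%Y", "currency": "£{:,.2f}"},
}

def format_for(locale_code, date, amount):
    """Format a date and a currency amount using the conventions of the
    given locale (sketch; not a substitute for a real locale library)."""
    conv = CONVENTIONS[locale_code]
    return date.strftime(conv["date"]), conv["currency"].format(amount)

d = datetime.date(2024, 3, 1)
print(format_for("en_US", d, 1234.5))  # ('03/01/2024', '$1,234.50')
print(format_for("en_GB", d, 1234.5))  # ('01/03/2024', '£1,234.50')
```

The example makes the internationalisation pitfall concrete: the same date value reads as 1 March in one locale and 3 January in another, so a single hard-coded format cannot serve both.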
2.5 Personality and Gender

Some people dislike computers or get anxious when they have to use them; others are
attracted to or eager to use any new kind of technology. Often, members of these
divergent groups disapprove of or are suspicious of members of the other
community.
Even people who enjoy using computers may have different preferences regarding
interaction styles, the pace of interaction, graphics versus tabular presentations,
dense versus sparse data presentation, step-by-step work versus all-at-once work,
and so on. These differences are important. A clear understanding of personality
and cognitive styles can be helpful in designing systems for a specific community
of users.

Despite fundamental differences between men and women, clear patterns of
preferences in interaction have been documented. Social network sites, such as
Facebook and Twitter, tend to have more female subscribers. Huff and Cooper
(1987), in their study on sex bias in educational software, found a bias when they
asked teachers to design educational games for boys or girls. The designers
created game-like challenges when they expected boys as their users, and more
conversational dialogues when they expected girls as users. When told to design
for students, the designers produced “boy-style” games.

It is often pointed out that the majority of video arcade game players and
designers are young males. There are female players for any game, but popular
choices among women for early video games were “Pacman” and its variants, plus a
few other games such as “Donkey Kong” or “Tetris”. We can only speculate as to why
women prefer these games. One female reviewer labelled Pacman as “oral aggressive”
and could appreciate the female style of play. Other women have identified the
compulsive cleaning up of every dot as an attraction. These games are distinguished
by their less violent action and soundtrack. Also, the board is fully visible,
characters have personality, softer colour patterns are used, and there is a sense
of closure and completion. Can these informal conjectures be converted to
measurable criteria and then validated? Can designers become more aware of the
needs and desires of women, and create video games that will be more attractive to
women than to men?

Turning from games to office automation, the predominant male designers may not
realise the effect on female users when the command names require the users to KILL
a file or ABORT a program. These and other potentially unfortunate mistakes and
mismatches between the user interface and the user might be avoided by paying more
attention to individual differences among users.
2.6 Age

Historically, computers and computer applications have been designed for use by
adults for assisting them in their work. Consequently, in many accepted definitions
of human-computer interaction and interaction design, there is a hidden assumption
that users are adults. In definitions of HCI there are, for example, references to
users’ “everyday working lives” or the organisations they belong to. Nowadays,
however, computer users span all ages. Applications are developed for toddlers aged
two or three and special applications and mobile devices are designed for the
elderly. User groups of different ages can have vastly different preferences with
regard to interaction with computers.

The average age of the user population affects interface design. It is an
indication of the level of expertise that may be assumed. In many instances, it
affects the flexibility and tolerance of the user group. This does not always mean
that younger users will be more flexible. They are likely to have used a wider
range of systems and may have higher expectations. Age also determines the level of
perceptual and cognitive resources to be expected from potential users. By this we
mean that our ability to sense (perception) and process (cognition) information
declines over time. Many user interfaces fail to take these factors into account.

Below we look at two special user groups – young children and the elderly – in
detail.

2.6.1 Young Children

Child-computer interaction has emerged in recent years as a special research field
in human-computer interaction. Children make up a substantial part of the larger
user population. Whereas products for adult users usually aim to improve
productivity and enhance performance, children’s products are more likely to
provide entertainment or engaging educational experiences. Applications designed
for use by children in learning environments have completely different goals and
contexts of use than applications for adults in a work environment (Inkpen 1997).
While adults’ main reasons for using technology are to improve productivity and to
communicate, children do it for enjoyment. Another reason for distinguishing
between adult and child products is young children’s slower information processing
skills that affect their motor skills and consequently their use of the mouse and
other input devices (Hutchinson, Druin & Bederson, 2007).

Computer technology makes it possible for children to easily apply concepts in a
variety of contexts (Roschelle et al 2000). It exposes them to activities and
knowledge that would not be possible without computers. For example, a young child
who cannot yet play a musical instrument can use software to compose music. People
opposed to the use of computers by young children have warned against some
potential dangers. These include keeping children from other essential activities,
causing social isolation and reduced social skills and reducing creativity. There
is general agreement that young children should not spend long hours in front of a
computer, but computers do stimulate interaction rather than stifle it. Current
advances in technology make it possible to create applications that offer highly
stimulating environments and opportunities for physical interaction. New tangible
and robotic interfaces are changing the way children play with computers (Plowman &
Stephen 2003). The term “computer” in child-computer interaction refers not only to
the ordinary desktop or notebook computer, but also to programmable toys, cellular
phones, remote controls, programmable musical keyboards, robots and more. Tanaka,
Cicourel and Movellan (2007, https://www.pnas.org/content/104/46/17954/tab-article-
info) determined in their study that children treated the robot differently from
the way they treat each other (videos and clips are available on their site to
view).

Figure: Children interacting with the robot QRIO
(https://www.pnas.org/content/104/46/17954; copyright 2007, National Academy of
Sciences)

One way to address the concerns about the physical harm of spending too much time
inactively in front of a computer screen is to develop technology that requires
children to move around. Dance mats that use sensory devices to detect movement
are widely available. Computer vision and hearing technology can also be used to
create games that use movement as input. A widely used commercial application that
uses movement input is Sony’s EyeToy™. The EyeToy is a motion-recognition USB
camera used with Sony’s PlayStation 2. It can detect movement of any part of the
body,
but most EyeToy games involve arm movements. An image of the player is projected on
the screen to form part of the game space. Depending on the game context, certain
areas of the screen are active during the game. Players must move so that their
hands on the projected image interact with screen objects that are active in the
game. For example, they have to hit or catch a moving ball. In other words, the
user manipulates screen elements through his or her projected image.

Figure: Projected images of children playing Sony EyeToy games (Game Vortex,
2008). From
http://www.psillustrated.com/psillustrated/soft_rev.php/2686/eyetoy-play-2-ps2.html

Clearly, technology has become an important element of the context in which
today’s children grow up, and it is important to understand its impact on children
and their development. According to Druin (1996), we should use this understanding
to improve technology so that it supports children optimally. The development of
any technology can only be successful if the designers truly understand the target
user group. Knowledge of children’s physical development and familiarity with the
theories of children’s cognitive development are thus essential when designing for
them. The way children learn and play, the movies and television programmes they
watch, and the way they make friends and communicate with others, are influenced by
the presence of computer technology in their everyday lives. For this reason, Druin
(1996) believes it is critical that designers of future technology observe and
involve children in their work. When designing for children, the important thing is
to accommodate them so that they can perform activities on the computer that are at
their level of development.

Children’s use of computers focuses on entertainment and education. Educational
technology companies such as LeapFrog (http://www.leapfrog.com) have designed
educational packages for pre-readers using computer-controlled toys, music
generators and art tools (Shneiderman et al 2014). As children’s reading skills
mature and they gain more keyboard skills, a
wider range of desktop applications, web services and mobile devices can be
incorporated. When children develop into teenagers, they can even assist parents
and elderly users. This growth path identified by Shneiderman et al (2014) is
followed by children who have access to technology and supportive parents as well
as peers. But there are children who are not that privileged and lack the financial
resources, supportive learning environment or access to technology. These
constraints often frustrate them in their use of technology.

When designing for children, it is important to incorporate educational
acceleration, facilitate socialisation with peers, and foster self-confidence that
is normally associated with skill mastery. Shneiderman et al (2019) recommend that
educational games should promote intrinsic motivation and constructive activities
as goals. When designing for children, designers need to consider not only
children’s desire for challenge but also parents’ requirements relating to safety.
Children can deal with some level of frustration, but they also need to know that
they can clear the screen, start over, and try again with no or only limited
penalties. They do not tolerate inappropriate humour and prefer familiar
characters, exploratory environments, and the capacity to repeat. For example,
children replay a game far more than adults do.

It is also important for designers to take note of children’s limitations such as:

· evolving dexterity – meaning that mouse dragging, double-clicking, and small
targets cannot always be used

· emerging literacy – meaning that written instructions and error messages are not
effective

· low level of abstraction – meaning that complex sequences must be avoided unless
the child uses the application under adult supervision

· short attention span and limited capacity to work with multiple concepts
simultaneously (Shneiderman et al 2014).

According to Shneiderman et al (2014), playful creativity in art, music and
writing, combined with educational activities in science and maths, should inspire
the development of children’s
software. Educational materials can be made available to children at libraries,
museums, government agencies, schools and commercial sources to enrich learning
experiences. Educational material can also provide a basis for children to
construct web resources, participate in collaborative efforts and contribute to
community projects.
2.6.2 The Elderly

Owing to advances in health care technologies and living standards, the human life
span is constantly increasing. This means that the population of older people is
steadily growing, and that older people are more active than before. Although
people now live longer, many of them will still develop some degenerative
disabilities due to their advanced age (Darzentas & Miesenberger 2005).

The elderly have often been ignored as users of computers since they are assumed
to be both dismissive of and unable to keep up with advancing technology. According
to Shneiderman et al (2014), if designers understand human factors involved in
aging, they can create user interfaces that facilitate access by older adult users.
The stereotype that senior citizens are averse to the use of new technologies is
not necessarily true (Dix, Finlay, Abowd & Beale 2004). They do, however,
experience impairments related to their vision, movement and memory capacity
(Kaemba 2008) that affect the way they interact with devices. They have problems
with mouse use because they complete movements slowly and have difficulty in
performing fine motor actions such as cursor positioning. Moving the mouse cursor
over small targets may be difficult for senior users, and double-clicking actions
may be problematic, especially for users with hand tremors.

Shneiderman et al (2014) identified some benefits relating to senior citizens and
their use of technology, for example, improved chances of productive employment
and
opportunities to use writing, e-mail and other computer tools. The benefits to
society include seniors who share their valuable experience and offer emotional
support to others. Senior citizens can also communicate with their children and
grandchildren by e-mail or on social media. Many designers adapt their designs to
cater for older adults because the world’s population is ageing and living longer
than in the past. According to Shneiderman et al (2014), desktop, web and mobile
devices
can be improved for all users by providing better control over font sizes, display
contrast and audio levels. Hart et al (2008) recommend the following improvements
of interfaces used by senior citizens: easier-to-use pointing devices, clean
navigation paths as well as consistent layouts and a simpler command language.

The dexterity of our fingers decreases as we age, so elderly users may experience
difficulty typing long sequences of text on a keyboard. Keyboards that can easily
be reached, have sufficient space between keys, provide audible or tactile
feedback when keys are pressed, and offer a high contrast between text and
background may be required. Networking projects such as the San Francisco-based
SeniorNet provide elderly users over the age of 50 with access to, and education
about, computing and the internet. The key focus of SeniorNet is to enhance
elderly users’ lives and to enable them to share their knowledge and wisdom
(Shneiderman et al 2014). Nintendo also discovered that its Wii computer games are
popular with elderly users because they stimulate social interaction, exercise
sensory and motor skills such as hand-eye coordination, enhance dexterity and
improve reaction time. Shneiderman et al (2014) also report some fear of computers
among elderly users, who believed that they were incapable of using computers. But
after a few positive experiences with computers, for example, sharing photos,
exploring e-mail and using educational games, the fear gave way, and they were
satisfied and eager to learn. Most of the mechanisms for supporting users with
motor impairments described in section 2.3.2.2 are applicable to elderly users.

Many senior users find the text size on typical monitors too small and require
more contrast between text and background; this is even more of a problem on the
small displays of mobile phones. Touch screens solve some of the interaction
problems, but older users’ habit of running a finger along a line of text while
reading can result in unintended selections (Kaemba 2008). Clearly, the physical,
social and mental contexts of the elderly differ from those of younger adults. The
needs and preferences of adult technology users can therefore not simply be
transferred to the elderly.
2.7 Expertise

The way in which a system is designed, built and sold depends on the intended
users, on whether they are experts or novices. In the former case, designers must
build upon existing skills. Issues such as consistency with previous interfaces,
are absolutely critical. In the case of novice users, designers must provide a
higher level of support. They must also anticipate some of the learning errors that
can arise during interaction. It is difficult to begin the development process if
designers are unaware of such general characteristics of their user population.
Some people may only have partial information about how to complete a task. This is
the typical situation of novice users of a computer application. They will need
procedural information about what to do next. Experts, on the other hand, will have
well-formed task models and do not need guidance. It follows, therefore, that
designers of novel tasks may have greater flexibility in the way that they
implement their interface. In more established applications, expert users will
have well-developed task structures and may not notice or adapt so quickly to any
changes introduced in a system.

A number of models of skill levels have been developed to explain how users
operate at the different levels. Such a model shows the differences between users
with different degrees of information about an interactive system. At the lowest
level, the knowledge-based level, they may only be able to use general knowledge
to help them understand the system. Designers can exploit this to support novice
users. For example, in the Windows desktop, inexperienced users can apply their
general knowledge in several ways, but sometimes with an unwanted effect. To
recover a deleted file, a user might think he has to empty the recycle bin (waste
bin). This is a dangerous approach. If they lack knowledge, then users are forced
to guess.

The second level of interaction introduces the idea that users apply rules to guide
their use of a system. This approach is slightly more informed than the use of
general knowledge. For example, users will make inferences based on previous
experience. This implies that designers should develop systems that are consistent.
Similar operations should be performed in a similar manner. If this approach is
adopted, then users can apply the rules learned with one system to help them
operate another, for instance: “To print this page, I go to the File menu and
select the option labelled Print”. There are two forms of consistency:

· Internal consistency refers to similar operations being performed in a similar
manner within an application. This is easy to achieve if designers have control
over the finished product.

· External consistency refers to similar operations being performed in a similar
manner between several applications. This is hard to achieve, as it involves the
design of systems in which the designer may not be involved. This is the reason
why companies such as Apple and IBM publish user interface guidelines.

Operating a user interface by referring to rules learned in other systems can be
hard work. Users have to work out when they can apply their expertise. It also
demands a high level of experience with computer applications. Over time, users
will acquire the expertise that is required to operate a system. They will no
longer need to think about previous experience with other systems and will become
skilled in the use of the system. This typifies expert use of an application (the
skill-based level in figure 2.5).

What designers should always keep in mind is that the more users have to think
about using the interface, the less cognitive and perceptual resources they will
have available for the main task.
2.8.1 Types of Error

People make errors routinely. It is part of human nature. There are several forms
of human error or mistakes. Norman (1999) distinguishes the following main
categories:

· Mistakes (also called “incorrect plans”): This category includes incorrect
plans such as forming the wrong goal or performing the wrong action in relation to
a specific goal. Situations in which operators adopt unsafe working practices are
examples of this. These can arise through a lack of training, poor management or
deliberate negligence. Mistakes are thus the result of a conscious but erroneous
consideration of options.

· Slips: Slips are observable errors and result from automatic behaviour.
They include confusions such as the confusion between left and right.

So, with a slip the person had the correct goal but performed the incorrect action;
with a mistake the goal was incorrect.

The practical difference between mistakes and slips: because humans will
inevitably make slips, designers should design in such a way that the consequences
of slips are easy to reverse. That is one of the reasons why emergency buttons are
big and red. Mistakes occur when users don’t know what to do because they haven’t
learned or haven’t been taught to use something properly, for example, if someone
uses an old Xbox game controller like a motion-sensitive Wiimote and waves it
through the air instead of pressing the buttons. Slips occur mostly in skilled
behaviour, when the user does not pay proper attention; users who are still
learning tend to make mistakes rather than slips (Norman 1999).

Norman (1999) distinguishes between the following kinds of slips:

· Capture errors: These occur when an activity that you perform frequently is
executed instead of the intended activity. For example, on a day that I have
leave, I drop my child at the pre-school and, without thinking, drive to work
instead of driving home.

· Description errors: These occur when, instead of the intended activity, you do
something that has a lot in common with what you wanted to do. For example,
instead of putting the ice-cream in the freezer, you put it in the fridge.

· Data-driven errors: These errors are triggered by some kind of sensory input. I
once asked the babysitter to write her telephone number in my telephone directory.
Instead of her own number, she copied the number of the entry just above her own,
because she was looking at that entry to see whether that person’s name or surname
was written first.

· Mode errors: These occur when a device has different modes of operation,
and the same action has a different purpose in the different modes. For example, a
watch can have a time-reading mode and a stopwatch mode. If the button that
switches on a light in time-reading mode is also the button that resets the
stopwatch, one may try to read the stopwatch in the dark by pressing the light
button and thereby accidentally clearing the stopwatch.

· Associative activation errors: These are similar to description errors, but
they are triggered by internal thoughts or associations instead of external data.
For example, our secretary’s name is Lynette, but she reminds me of someone else I
know called Irene. I often call her Irene.

· Loss-of-activation errors: These are errors due to forgetfulness. For
example, you find yourself sitting with the phone in your hand, but you have
forgotten who you wanted to call.
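The watch example given under mode errors above can be sketched in code. The
Watch class below is an illustrative toy (not from the source): it shows how the
same physical button, whose meaning depends on the active mode, becomes
destructive when the user is in the wrong mode.

```python
class Watch:
    """Toy model of a watch whose single side button means different
    things in different modes -- the root cause of mode errors."""

    def __init__(self):
        self.mode = "time"            # "time" or "stopwatch"
        self.stopwatch_seconds = 42   # some elapsed time the user cares about
        self.light_on = False

    def press_side_button(self):
        # Same physical action, but a mode-dependent effect:
        if self.mode == "time":
            self.light_on = True          # harmless: switch on backlight
        else:
            self.stopwatch_seconds = 0    # destructive: reset the stopwatch!

watch = Watch()
watch.mode = "stopwatch"
watch.press_side_button()        # user only wanted the light...
print(watch.stopwatch_seconds)   # -> 0 (the reading is gone)
```

A design that avoided overloading the button across modes, or that made the
current mode clearly visible, would prevent this slip.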
2.8.2 The Cause of Human Error

What is the true cause of human error? In the aftermath of many major accidents, it
is typical to hear reports of an “operator error” as the primary cause of the
failure. This term has little meaning unless it is supported by a careful analysis
of the accident. For example, if an operator is forced to manage as best he/she can
with a bug-ridden, unreliable system, is an accident then his/her fault or that of
the person who implemented the program? If bugs are the result of poorly defined
requirements or cost cutting during testing, are these failures then the fault of
the programmer or the designer?

What appears to be an operator error is often the result of management failures.

Even if systems are well designed and implemented, accidents can be caused because
operators are poorly trained to use them. This raises practical problems because
operators are frequently ill-equipped to respond to low-frequency but high-cost
errors. How then can companies predict these events that, although they rarely
occur, are sufficiently critical that users and operators should be trained in the
procedures to solve them?

Further sources of error come from poor working environments. Again, a system may
work well in a development environment, but the noise, heat, vibration or altitude
of a user’s daily life may make the system unfit for its actual purpose.

2.8.3 How to Prevent Human Error

There is no simple way to improve the operational safety of computer systems.
Short-term improvements in operator training will not address the fundamental
problems created by mistakes and slips. Reason (1990) argues that errors are latent
within each one of us and that, therefore, we should never hope to engineer out
human error. This pessimistic analysis has been confirmed by experience. Even
organisations with exemplary training and recruitment systems, such as NASA, have
suffered from the effects of human error.

There are, however, some obvious steps that can be taken to reduce both the
frequency and the cost of human error. In terms of cost, it is possible to engineer
decision support systems that provide users with guidance and help during the
performance of critical operations. These systems may even implement cool-off
periods during which users’ commands will not be effective until they have reviewed
the criteria for a decision. These systems engineering solutions impose interlocks
on control and limit the scope of human intervention. The consequences are obvious
when such locks are placed in inappropriate areas of a system.

It is also possible to improve working practices. Most organisations see this as
part of an ongoing training programme. In safety-critical applications there may be
continuous and on-the-job competence monitoring, as well as formal examinations.

When designing systems, one should keep in mind the kinds of errors people make.
For example, minimising the number of modes, or making the current mode clearly
visible, will help avoid mode errors. Users may click on a delete button when they
meant to click on the save button (perhaps the delete button is located where, in a
different application, the save button was placed). To prevent the user from
deleting something important by accident, the interface should request confirmation
before carrying out the delete action.
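A minimal sketch of such a confirmation guard follows; the function name and
messages are illustrative assumptions, not from the source. Passing the
confirmation step in as a callable means it can be a real dialog in a GUI or a
stub in a test:

```python
def delete_file(name, confirm):
    """Destructive action guarded by an explicit confirmation step.

    `confirm` is a callable taking a prompt string and returning True
    only if the user approved the action.
    """
    if not confirm(f"Really delete '{name}'? This cannot be undone."):
        return f"'{name}' kept"
    return f"'{name}' deleted"


# Simulate a user who clicked Delete by accident (a slip) and then
# declines the confirmation dialog:
print(delete_file("thesis.docx", confirm=lambda msg: False))
# -> 'thesis.docx' kept
```

The guard does not stop the slip itself; it makes its consequence reversible,
which is exactly the design goal discussed above.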
