
HUMAN-COMPUTER INTERACTION

HCI DESIGN
Chapter 4

PREPARED BY: RYAN JISON DE LA GENTE


SUMMARY OF
TOPICS
MAIN POINTS COVERED

The Overall Design Process


Interface Selection Options
Hardware Platforms
Software Interface Components
Wire-Framing
“Naïve” Design Example: No Sheets 1.0
Requirements Analysis
User Analysis
Making a Scenario and Task Modeling
Interface Selection and Consolidation
THE OVERALL
DESIGN
PROCESS
HCI design is an integral part of a larger software design (and its
architectural development). It is the process of establishing the
basic framework for the user interface (UI), which includes the
iterative steps and activities described below. HCI design includes
all of the preparatory activities required to develop an interactive
software product that will provide a high level of usability and a
good user experience when it is actually implemented.
THE OVERALL DESIGN PROCESS
REQUIREMENTS ANALYSIS
Any software design starts with a careful analysis of the functional
requirements.

For interactive software with a focus on the user experience, we take a
particular look at functions that are to be activated directly by the user
through interaction (functional-task requirements) and functions that
are important in realizing certain aspects of the user experience
(functional-UI requirements), even though these may not be directly
activated by the user.
REQUIREMENTS ANALYSIS
One example of a functional-UI requirement is an automatic feature that
adjusts the display resolution of a streamed video based on network
traffic. It is not always possible to computationally separate major
functions from those for the user interface; that is, certain functions
actually have direct UI objectives.

Finally, we identify nonfunctional UI requirements, which are UI features
(rather than computational functions) that are not directly related to
accomplishing the main application task. For instance, requiring a
certain font size or type according to a corporate guideline may not be
a critical functional requirement, but it is still a purely HCI
requirement.
REQUIREMENTS ANALYSIS
FUNCTIONAL REQUIREMENTS
define the basic system behaviour. Essentially, they are
what the system does or must not do, and can be
thought of in terms of how the system responds to
inputs. Functional requirements usually define if/then
behaviours and include calculations, data input, and
business processes.

NONFUNCTIONAL REQUIREMENTS (NFRS)
define system attributes such as security, reliability,
performance, maintainability, scalability, and usability.
They serve as constraints or restrictions on the design of
the system across the different backlogs.
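The three requirement categories above can be kept explicit from the start of a project. As a minimal illustrative sketch (the category names and example requirements below are assumptions for illustration, not from any real tool or from the No Sheets project):

```python
from dataclasses import dataclass

# Illustrative sketch: tagging requirements with the categories
# described in the text (functional-task, functional-UI,
# nonfunctional-UI). All names and examples here are hypothetical.
@dataclass
class Requirement:
    description: str
    category: str  # "functional-task", "functional-UI", or "nonfunctional-UI"

requirements = [
    Requirement("User can pause video playback", "functional-task"),
    Requirement("Auto-adjust stream resolution to network traffic", "functional-UI"),
    Requirement("Use corporate font at 12 pt or larger", "nonfunctional-UI"),
]

def by_category(reqs, category):
    """Return the descriptions of all requirements in one category."""
    return [r.description for r in reqs if r.category == category]

print(by_category(requirements, "nonfunctional-UI"))
# → ['Use corporate font at 12 pt or larger']
```

Keeping the category as explicit data makes it easy to review, during the later interface-selection step, which choices were driven by UI concerns rather than core functionality.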
USER ANALYSIS
The results of the user analysis will be reflected
back to the requirements, and this could
identify additional UI requirements (functional
or nonfunctional). It is simply a process to
reinforce the original requirements' analysis to
further accommodate the potential users in a
more complete way.

For instance, a particular age group might
necessitate certain interaction features such as
a large font size and high contrast, or there
might be a need for a functional UI feature to
adjust the scrolling speed.
SCENARIO AND TASK
MODELING
This is the crux of interaction modeling: identifying the
application task structure and the sequential
relationships between the different elements.

With a crude task model, we can also start to draw a
more detailed scenario or storyboard to envision how
the system would be used and to assess both the
appropriateness of the task model and the feasibility of
the given requirements.

Again, one can regard this simply as an iterative
process to refine the original rough requirements.
INTERFACE SELECTION AND
CONSOLIDATION
For each of the subtasks and scenes in the storyboard, choices will be
made, particularly among software interface components (e.g., widgets),
interaction techniques (e.g., voice recognition), and hardware
(sensors, actuators, buttons, display, etc.).

The chosen individual interface components need to be
consolidated into a practical package, because not all of
these interface components may be available on a working
platform (e.g., Android™-based smartphone, desktop PC,
mp3 player).

Certain choices will have to be retracted in the interest of
employing a particular interaction platform. For instance, for
a particular subtask and application context, the designer
might have chosen voice recognition as the most fitting
interaction technique. However, if the required platform does
not support a voice sensor or network access to a remote
recognition server, an alternative will have to be devised.
Such concessions can be made for many reasons besides
platform requirements, such as constraints in budget, time,
personnel, etc.
INTERFACE
SELECTION
OPTIONS
HARDWARE PLATFORMS
Different interactions and subtasks may require
various individual devices (sensors and displays).
We take a look at the hardware options in terms of
the larger computing platforms, which are
composed of the usual devices. The choice of a
design configuration for the hardware interaction
platform is largely determined by the
characteristics of the task/application, which
necessitates a certain operating environment.
Therefore, the different platforms listed here are
suited for, and reflect, various operating
environments.
DESKTOP
(STATIONARY)
Monitor (typical size: 17–42 in.; resolution: 1280 × 1024 or higher); keyboard, mouse, speakers/
headphones (microphone)
SMARTPHONES/HANDHELDS
(MOBILE):
LCD screen (typical size: 3.5–5 in., resolution:
720 × 1280 or higher, weight ≈ 120 g), buttons, touch
screen, speaker/headphones, microphone, camera,
sensors (acceleration, tilt, light, gyro, proximity,
compass, barometer), vibrators, mini “qwerty”
keyboard.

Suited for: Simple and short tasks, special-purpose
tasks
TABLET/PADS
(MOBILE):
LCD screen (typical size: 7–10 in., resolution: 720 × 1280
or higher, weight ≈ 700 g), buttons, touch screen,
speaker/headphones, microphone, camera, vibrators,
sensors (acceleration, tilt, light, gyro, proximity,
compass, barometer)

Suited for: Simple, mobile, and short tasks, but those that
require a relatively large screen (e.g., a sales pitch)
EMBEDDED (STATIONARY/MOBILE):

LCD/LED screen (typical size: less than 3–5 in., resolution: low), buttons, special
sensors, and output devices (touch screen, speaker, microphone, special sensors);
embedded devices may be mobile or stationary and offer only very limited
interaction for a few simple functionalities.

Suited for: Special tasks and situations where interaction and computations are
needed on the spot (e.g., printer, rice cooker, mp3 player, personal media player)
TV/CONSOLES (STATIONARY)

LCD/LED screen (typical size: >42 in., resolution:
HD), button-based remote control, speaker,
microphone, game controller, special sensors,
peripherals (camera, wireless keyboard, Wii
mote–like device [1], depth sensor such as
Kinect)

Suited for: Public users and installations, limited
interaction, short series of selection tasks,
monitoring tasks
VIRTUAL REALITY (STATIONARY)

Large-surround and high-resolution projection screen/head-mounted display/stereoscopic display,
3-D tracking sensors, 3-D sound system, haptic/tactile display, special sensors, peripherals
(microphone, camera, depth sensors, glove)

Suited for: Spatial training, tele-experience and tele-presence, immersive entertainment
SOFTWARE INTERFACE
COMPONENTS
Most of these software components are quite well
known and familiar to most readers, so we only
highlight important issues to consider in the
interface selection.
WINDOWS/LAYERS:
Modern desktop computer interfaces
are designed around windows, which
are visual output channels and
abstractions for individual
computational processes.

For a single application, a number of
subtasks may be needed concurrently
and thus must be interfaced through
multiple windows.

For relatively large displays,
overlapping windows may be used.
However, as the display size decreases
(e.g., mobile devices), nonoverlapping
layers (full-screen windows) may be
used, in which individual layers are
activated in turn by “flipping” through
them (e.g., flicking movements on
touch screens).
ICONS
Interactable objects may be visually
represented as compact, small
pictograms such as icons (and similarly as
“earcons” for the aural modality). Clickable
icons are simple and intuitive.

As a compact representation designed for
facilitated interaction, icons must be designed
to be as informative and distinctive as possible
despite their small size and compactness.
MENUS:
Menus allow activation
of commands and tasks
through selection
(recognition) rather
than recall.
DIRECT INTERACTION
The mouse/touch-based interaction is strongly tied to
the concept of direct and visual interaction.

Before the mouse era, HCI was mostly in the form
of keyboard input of text commands. The mouse
made it possible for users to apply a direct
metaphoric “touch” to the target objects (which are
visually and metaphorically represented as concrete
objects by the icons) rather than “commanding” the
operating system (via keyboard input) to indirectly
invoke the job. In addition to this virtual “touch” for
simple enactment, direct and visual interaction
has further extended to direct manipulation, e.g.,
moving and gesturing with the cursor against the
target interaction objects.

“Dragging and dropping,” “cutting and pasting,” and
“rubber banding” are typical examples of these
extensions.
GUI COMPONENTS
Software interaction objects are mostly visual.
We have already discussed the windows, icons, menus, and
mouse/pointer-based interactions, which are the essential
elements of the graphical user interface (GUI), also
sometimes referred to as WIMP (window, icon, menu, and
pointer).

The term WIMP is sometimes deliberately chosen for its
negative connotation, to emphasize the contrast with a newer
upcoming generation of user interfaces (such as those based
on voice/language and gesture). However, WIMP interfaces
have greatly contributed to the mass proliferation of computer
technologies.

GUI interface components:

(a) Text box: Used for making short/medium alphanumeric
input

(b) Toolbar: A small group of frequently used icons/functions
organized horizontally or vertically for quick, direct access

(c) Forms: A mixture of menus, buttons, and text boxes for long
thematic input

(d) Dialog/combo boxes: A mixture of menus, buttons, and text
boxes for short mixed-mode input
[Figure examples: text box, toolbar, forms, modal dialog]

3-D INTERFACE (IN 2-D INPUT SPACE)

Standard GUI elements are operated
and presented in the 2-D space, i.e.,
they are controlled by a mouse or
touch screen and laid out on a 2-D
screen.

However, 2-D control in a 3-D
application is often not sufficient (e.g.,
3-D games).

The mismatch in the degrees of
freedom brings about fatigue and
inconvenience. For this reason, non-
WIMP–based interfaces such as 3-D
motion gestures are gaining popularity.
OTHER (NON-WIMP) INTERFACES

The WIMP interface is synonymous with the GUI. It
has been a huge success since its introduction in
the early 1980s, when it revolutionized computer
operations.

Thanks to continuing advances in interface technologies
(e.g., voice recognition, language understanding, gesture
recognition, 3-D tracking) and changes in the computing
environment (e.g., personal to ubiquitous, sensors
everywhere), new interfaces are starting to make their
way into our everyday lives. In addition, the cloud-
computing environment has made it possible to run the
computationally expensive interface algorithms that
non-WIMP interfaces often require on less powerful
(e.g., mobile) devices for large service populations.
WIRE-FRAMING
The interaction modeling and interface options
can be put together concretely using the so-called
wire-framing process. Wire-framing
originated from making rough specifications for
website page design and resembles scenarios or
storyboards. Usually, wire-frames look like
page schematics or screen blueprints, which
serve as a visual guide that represents the
skeletal framework of a website or interface.
WIRE-FRAMING
Wireframes can be pencil drawings or sketches
on a whiteboard, or they can be produced by
means of a broad array of free or commercial
software applications.

Through wire-framing, the developer can
specify and flesh out the kinds of
information displayed, the range of
functions available, and their priorities,
alternatives, and interaction flow.
“NAÏVE” DESIGN
EXAMPLE: NO
SHEETS 1.0
REQUIREMENTS ANALYSIS
To illustrate the HCI design process more concretely,
we will go through the design of a simple interactive
smartphone (Android) application called No Sheets.
The main purpose of this application is to use the
smartphone to present sheet music, thereby
eliminating the need to handle paper sheet music.

INITIAL REQUIREMENTS FOR NO SHEETS

1. Use the smartphone to present transcribed music like “sheet music.” Transcription
includes only those for basic accompaniment, like the chord information (key and
type, such as C#dom7) and beat information (e.g., second beat in the measure).
2. Eliminate the need to carry and manage physical sheet music. Store music
transcription files using a simple file format.
3. Help the user effectively accompany the music by timed and effective presentation
of musical information (e.g., paced according to a preset tempo).
4. Help the user effectively practice the accompaniment and sing along through
flexible control (e.g., forward, review, home buttons).
5. Help users sing along by showing the lyrics and beats in a timed fashion.
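Requirement 3 calls for pacing the display according to a preset tempo. A minimal sketch of the timing arithmetic behind such a feature (illustrative only; function names are hypothetical and not from the actual No Sheets code):

```python
# Illustrative sketch: converting a preset tempo into the update
# interval for a timed chord/beat/lyric display. Not taken from the
# actual No Sheets implementation.
def beat_interval_seconds(tempo_bpm: float) -> float:
    """Seconds between beat updates for a tempo given in beats per minute."""
    return 60.0 / tempo_bpm

def measure_duration_seconds(tempo_bpm: float, beats_per_measure: int = 4) -> float:
    """How long one measure stays on screen before paging forward."""
    return beats_per_measure * beat_interval_seconds(tempo_bpm)

# At 120 BPM, each beat lasts 0.5 s, so a 4/4 measure is shown for 2.0 s.
print(beat_interval_seconds(120))     # → 0.5
print(measure_duration_seconds(120))  # → 2.0
```

In a real implementation, these intervals would drive a UI timer (e.g., a scheduled callback on the Android platform) that advances the chord/beat/lyric display and triggers the automatic paging.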
USER ANALYSIS
The typical user for No Sheets is a smartphone owner and
novice/intermediate piano player (perhaps someone who
wants to show off their musical skill at a piano bar). Since a
smartphone is used, we have to expect a reasonable
power of sight for typical usage (e.g., a viewing distance of
about 50 cm subtending a letter of ±1 cm). There does not
seem to be a particular consideration for any age group
or gender. However, there may be a consensus on how the
chord/music information should be displayed (e.g., portrait vs.
landscape, information layout and locations of the control
buttons, color-coding method, up-down scrolling vs. left-right
paging, etc.). A very minimal user analysis (that of the
developer himself) resulted in (naïve first trial) interface
requirements. Note that, for now, most of the requirements or
choices are rather arbitrary, without clear justifications.
MAKING A SCENARIO AND TASK MODELING
Based on the short requirements, we derive a simple
hierarchical task model, as shown in the following list:
Select song: Select the song to view
Select tempo: Set the tempo of the paging
Show timed music information: Show the current/next
chord/beat/lyric
Play/Pause: Activate/deactivate the paging
Fast-forward: Manually move forward to a particular point in
the song
Review: Manually move backward to a particular point in the
song
Show instruction: Show the instruction as to how to use the
system
Set preferences: Set preferences for information display and
others
Show software information: Show version number and developer
information
The subtasks, as actions to be taken by the
user, can be viewed computationally as action
events or, conversely, as states that are activated
according to the action events. Figure 4.20
shows a possible state-transition diagram for No
Sheets. Through such a perspective, one can
identify the precedence relationships among the
subtasks. From the top main menu (middle of
the figure), the user is allowed to
set/select/change/view the preferences, tempo,
song, and software information.

The user is also able to play and view the timed
display of the musical information, but only
after a song has been chosen (indicated by the
dashed arrow). While the timed music
information is displayed, the user can
concurrently play/stop, move forward, and move
backward (the four states, or equivalently
actions, in the transparent box on the right are
concurrent). Such a model can serve as a rough
starting point for defining the overall software
architecture for No Sheets.
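The state-transition perspective described above can be sketched as a small table-driven state machine. The state and event names below are assumptions made for illustration (Figure 4.20 is not reproduced here), but the sketch captures the key precedence rule that playing is allowed only after a song has been chosen:

```python
# Hypothetical sketch of a state-transition model in the spirit of the
# one described for No Sheets. State/event names are assumptions.
TRANSITIONS = {
    ("main_menu", "select_song"): "song_selected",
    ("main_menu", "set_preferences"): "main_menu",
    ("main_menu", "show_instructions"): "main_menu",
    ("song_selected", "play"): "playing",   # only reachable after song selection
    ("playing", "pause"): "song_selected",
    ("playing", "fast_forward"): "playing", # concurrent controls while playing
    ("playing", "review"): "playing",
}

def step(state: str, event: str) -> str:
    """Apply an action event; disallowed events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "main_menu"
state = step(state, "play")         # ignored: no song chosen yet
assert state == "main_menu"
state = step(state, "select_song")  # -> "song_selected"
state = step(state, "play")         # -> "playing"
print(state)                        # → playing
```

Encoding the diagram as data like this makes the precedence relationships executable and testable, which is one way such a model can seed the overall software architecture.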
MAKING A SCENARIO AND TASK MODELING
We end this exercise by finalizing the choice of particular interfaces for the individual subtasks.

INITIAL FINALIZATION OF THE INTERFACE DESIGN CHOICE FOR NO SHEETS
WIRE-FRAME

Initial design wire-frame for No Sheets 1.0 using a wire-framing tool. Left: Icons
and GUI elements in the menu on the left can be dragged onto the right to design the interface layer.
Right: Navigation among the design layers can be defined as well (indicated by the arrows).
SUMMARY
In this chapter, we have described the design process for interactive
applications, focusing on modeling of the interaction and selection of the
interface. The discussion started with a requirements analysis and its
continued refinement through user research and application-task modeling.
Then, we drew up a storyboard and carefully considered different options for
particular interfaces by applying any relevant HCI principles, guidelines, and
theories. The overall process was illustrated with a specific example design
process for a simple application. It roughly followed the aforementioned
process, but it did so (purposefully) in a hurried and simplistic fashion,
leaving much potential for later improvement. Nevertheless, this exercise
emphasizes that the design process is going to be unavoidably iterative,
because it is not usually possible to have provisions for all usage
possibilities. This is why an evaluation is another necessary step in a sound
HCI design cycle, even if a significant effort is thought to have gone into the
initial design and prototyping. In the next chapters, we first look at issues
involved with taking the design into actual implementation. The implemented
prototype (or final version) must then be evaluated in real situations for
future continued iterative improvement, extension, and refinement.
