
Unit 3

Models
Introduction to Cognitive Models
• The techniques and models in this chapter all claim to have some
representation of users as they interact with an interface; that is, they
model some aspect of the user’s understanding, knowledge,
intentions or processing. The level of representation differs from
technique to technique – from models of high-level goals and the
results of problem-solving activities, to descriptions of motor-level
activity, such as keystrokes and mouse clicks.
• One way to classify the models is with respect to how well they describe
the competence and the performance of the user.
Continue
• Competence models tend to be ones that can predict legal behaviour
sequences but generally do this without reference to whether they
could actually be executed by users.
• In contrast, performance models not only describe what the
necessary behaviour sequences are but usually describe both what
the user needs to know and how this is employed in actual task
execution.
The cognitive models in this chapter follow this
classification scheme
• hierarchical representation of the user’s task and goal structure
• linguistic and grammatical models
• physical and device-level models
Hierarchical models represent a user’s task and goal structure.
Linguistic models represent the user–system grammar.
Physical and device models represent human motor skills.
GOAL AND TASK HIERARCHIES
• Many models make use of a model of mental processing in which the
user achieves goals by solving subgoals in a divide-and-conquer
fashion. We will consider two models, GOMS and CCT, where this is a
central feature.
Continue (example of Goal and Task Hierarchies)
• Imagine we want to produce a report on sales of introductory HCI
textbooks. To achieve this goal we divide it into several subgoals, say
gathering the data together, producing the tables and histograms, and
writing the descriptive material. Concentrating on the data gathering,
we decide to split this into further subgoals: find the names of all
introductory HCI textbooks and then search the book sales database
for these books. Similarly, each of the other subgoals is divided up
into further subgoals, until some level of detail is found at which we
decide to stop. We thus end up with a hierarchy of goals and
subgoals. The example can be laid out to expose this structure:
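Laid out as the text suggests, the hierarchy might look like this (a sketch reconstructed from the description above; the goal names are illustrative):

```
GOAL: PRODUCE-REPORT
    GOAL: GATHER-DATA
        GOAL: FIND-BOOK-NAMES
        GOAL: SEARCH-SALES-DATABASE
    GOAL: PRODUCE-TABLES-AND-HISTOGRAMS
    GOAL: WRITE-DESCRIPTION
```

Each goal is achieved by achieving its subgoals in turn, and decomposition stops at whatever level of detail the analyst chooses.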
GOMS
• GOMS stands for Goals, Operators, Methods and Selection. A
GOMS description consists of these four elements:
Goals These are the user’s goals, describing what the user wants to
achieve. Further, in GOMS the goals are taken to represent a ‘memory
point’ for the user, from which he can evaluate what should be done
and to which he may return should any errors occur.
Continue
• Operators These are the lowest level of analysis. They are the basic
actions that the user must perform in order to use the system. They
may affect the system (for example, press the ‘X’ key) or only the
user’s mental state (for example, read the dialog box). There is still a
degree of flexibility about the granularity of operators; we may take
the command level ‘issue the SELECT command’ or be more primitive:
‘move mouse to menu bar, press center mouse button . . .’.
• Methods As we have already noted, there are typically several ways in
which a goal can be split into subgoals. For instance, in a certain
window manager a currently selected window can be closed to an
icon either by selecting the ‘CLOSE’ option from a pop-up menu, or by
hitting the ‘L7’ function key. In GOMS these two goal decompositions
are referred to as methods, so we have the CLOSE-METHOD and the
L7-METHOD:
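A plausible GOMS rendering of these two methods (the goal and operator names here are illustrative, following the usual GOMS notation) might be:

```
GOAL: ICONIZE-WINDOW
. [select GOAL: USE-CLOSE-METHOD
  .         MOVE-MOUSE-TO-WINDOW-HEADER
  .         POP-UP-MENU
  .         CLICK-OVER-CLOSE-OPTION
  GOAL: USE-L7-METHOD
  .     PRESS-L7-KEY]
```

The `select` keyword marks the point at which one of several methods must be chosen; GOMS resolves the choice with selection rules.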
Continue
Selection
• Selection From the above snippet we see the use of the word select
where the choice of methods arises. GOMS does not leave this as a
random choice, but attempts to predict which methods will be used.
This typically depends both on the particular user and on the state of
the system and details about the goals. For instance, a user, Sam,
never uses the L7-METHOD, except for one game, ‘blocks’, where the
mouse needs to be used in the game until the very moment the key is
pressed. GOMS captures this in a selection rule for Sam:
User Sam:
Rule 1: Use the CLOSE-METHOD unless another rule applies.
Rule 2: If the application is ‘blocks’, use the L7-METHOD.
LINGUISTIC MODELS
• The user’s interaction with a computer is often viewed in terms of a
language, so it is not surprising that several modeling formalisms have
developed centered around this concept. Indeed, BNF grammars are
frequently used to specify dialogs. The models here, although similar
in form to dialog design notations, have been proposed with the
intention of understanding the user’s behavior and analyzing the
cognitive difficulty of the interface.
BNF (LINGUISTIC MODELS)
• BNF has been used widely to specify the syntax of computer
programming languages, and many system dialogs can be described
easily using BNF rules. For example, imagine a graphics system that
has a line-drawing function. To select the function the user must
select the ‘line’ menu option. The line-drawing function allows the
user to draw a polyline, that is a sequence of line arcs between
points. The user selects the points by clicking the mouse button in the
drawing area. The user double clicks to indicate the last point of the
polyline
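A plausible BNF description of this line-drawing dialog (the rule names here are illustrative) is:

```
draw-line      ::= select-line + choose-points + last-point
select-line    ::= position-mouse + CLICK-MOUSE
choose-points  ::= choose-one | choose-one + choose-points
choose-one     ::= position-mouse + CLICK-MOUSE
last-point     ::= position-mouse + DOUBLE-CLICK-MOUSE
position-mouse ::= empty | MOVE-MOUSE + position-mouse
```

Reading bottom-up: positioning the mouse is zero or more mouse movements, each point is chosen by positioning and clicking, and the polyline is terminated with a double click.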
Continue

The names in the description are of two types: non-terminals,
shown in lower case, and terminals, shown in upper case.
Terminals represent the lowest level of user behavior, such as
pressing a key, clicking a mouse button or moving the mouse.
Non-terminals are higher-level abstractions, defined in terms of
other non-terminals and terminals by a definition of the form:

name ::= expression

The ‘::=’ symbol is read as ‘is defined as’.
Task–action grammar
• Task–action grammar (TAG) attempts to deal with some of the shortcomings
of BNF by including elements such as parameterized grammar rules to
emphasize consistency, and by encoding the user’s world knowledge.
• To illustrate consistency, we consider the three UNIX commands: cp
(for copying files), mv (for moving files) and ln (for linking files). Each
of these has two possible forms. They either have two arguments, a
source and destination filename, or have any number of source
filenames followed by a destination directory
copy ::= ‘cp’ + filename + filename | ‘cp’ + filenames + directory
move ::= ‘mv’ + filename + filename | ‘mv’ + filenames + directory
link ::= ‘ln’ + filename + filename | ‘ln’ + filenames + directory
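In TAG the shared structure of the three commands can be captured with a single parameterized rule, roughly as follows (this rendering is a sketch of TAG notation, not a verbatim rule):

```
file-op[Op] := command[Op] + filename + filename
             | command[Op] + filenames + directory
command[Op = copy] := ‘cp’
command[Op = move] := ‘mv’
command[Op = link] := ‘ln’
```

The parameter Op makes the consistency explicit: one rule covers all three commands, whereas plain BNF needs a separate pair of rules for each.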
3. PHYSICAL AND DEVICE MODELS
• KLM (the Keystroke-Level Model) uses an understanding of human motor
skills as a basis for detailed predictions about user performance. It is aimed
at unit tasks within interaction – the execution of simple command
sequences, typically taking no more than 20 seconds.
• Examples of this would be using a search and replace feature, or changing the
font of a word. It does not extend to complex actions such as producing a
diagram. The assumption is that these more complex tasks would be split into
subtasks (as in GOMS) before the user attempts to map them into physical
actions.
• The task is split into two phases:
acquisition of the task, when the user builds a mental representation of the
task;
execution of the task using the system’s facilities.
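The execution phase is predicted by summing the times of primitive physical and mental operators. A minimal Python sketch, using the classic operator values quoted in the KLM literature (the task sequence below is a hypothetical example, not one from the text):

```python
# Keystroke-Level Model (KLM) execution-time sketch.
# Operator times are the commonly quoted textbook values; real
# analyses may calibrate them per user and per device.
OPERATOR_TIMES = {
    "K": 0.2,   # keystroke (average skilled typist)
    "P": 1.1,   # point with the mouse at a target on screen
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(sequence):
    """Predict execution time (seconds) for a string of KLM operators."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Hypothetical unit task: move hand to mouse, think, point at a menu
# item and click it -- operators H M P B B.
print(round(klm_time("HMPBB"), 2))
```

Summing the operators gives the prediction for the execution phase only; acquisition of the task is modeled separately.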
Three-state model
• We saw earlier that a range of pointing devices exists in addition to the mouse.
Often these devices are considered logically equivalent, if the same inputs
are available to the application. That is, so long as you can select a point
on the screen, they are all the same. However, these different devices –
mouse, trackball, light pen – feel very different. Although the devices are
similar from the application’s viewpoint, they have very different sensory–
motor characteristics
• Buxton has developed a simple model of input devices, the three-state
model, which captures some of these crucial distinctions. He begins by
looking at a mouse. If you move it with no buttons pushed, it normally
moves the mouse cursor about. This tracking behavior is termed state 1.
Depressing a button over an icon and then moving the mouse will often
result in an object being dragged about. This he calls state 2
Continue
• If instead we consider a light pen with a button, it behaves just like a
mouse when it is touching the screen. When its button is not
depressed, it is in state 1, and when its button is down, state 2.
However, the light pen has a third state, when the light pen is not
touching the screen. In this state the system cannot track the light
pen’s position. This is called state 0
Continue
• A touchscreen is like the light pen with no button. While the user is
not touching the screen, the system cannot track the finger – that is,
state 0 again. When the user touches the screen, the system can
begin to track – state 1. So a touchscreen is a state 0–1 device
whereas a mouse is a state 1–2 device
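These distinctions can be sketched as transition tables. A minimal Python sketch, assuming the state names (0: out of range, 1: tracking, 2: dragging) and the transitions described above; each device is simply the set of transitions it supports:

```python
# Buxton's three-state model of input devices as transition tables.
MOUSE = {            # a state 1-2 device: always tracked
    (1, "press button"): 2,
    (2, "release button"): 1,
}
TOUCHSCREEN = {      # a state 0-1 device: no button, no drag state
    (0, "touch"): 1,
    (1, "lift off"): 0,
}
LIGHT_PEN = {        # light pen with a button: all three states
    (0, "touch screen"): 1,
    (1, "lift off"): 0,
    (1, "press button"): 2,
    (2, "release button"): 1,
}

def states(device):
    """All states a device can be in, from its transition table."""
    return {s for (s, _) in device} | set(device.values())

print(sorted(states(MOUSE)))        # [1, 2]
print(sorted(states(TOUCHSCREEN)))  # [0, 1]
print(sorted(states(LIGHT_PEN)))    # [0, 1, 2]
```

Comparing the state sets makes the point of the model concrete: devices that look equivalent to the application differ in which states they can occupy.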
SOCIO-ORGANIZATIONAL ISSUES AND
STAKEHOLDER REQUIREMENTS
• In this section we discuss some of the organizational issues that arise when
new technological solutions are introduced. We then outline a number of
models and methods that can be used to capture this broader view of
stakeholder requirements, including socio-technical models, soft
systems methodology and participatory design.
1. ORGANIZATIONAL ISSUES
• Cooperation or conflict?
The term ‘computer-supported cooperative work’ (CSCW) seems to
assume that groups will be acting in a cooperative manner

• Imagine that an organization is already highly computerized: the different
departments all have their own systems, and the board decides that an
integrated information system is needed. The production manager can
now look directly at stocks when planning the week’s work, and the
marketing department can consult the sales department’s contact list to
send out marketing questionnaires. All is rosy and the company will clearly
run more efficiently – or will it?
Continue
• The storekeeper always used to understate stock levels slightly in
order to keep an emergency supply, or sometimes inflate the quoted
levels when a delivery was due from a reliable supplier. Also, requests
for stock information allowed the storekeeper to keep track of future
demands and hence plan future orders. The storekeeper has now lost
a sense of control and important sources of information. Members of
the sales department are also unhappy: their contacts are their
livelihood. The last thing they want is someone from marketing
blundering in and spoiling a relationship with a customer built up over
many years. Some of these people may resort to subverting the
system, keeping ‘sanitized’ information online, but the real
information in personal files. The system gradually loses respect as
the data it holds is incorrect, and morale in the organization suffers.
2. Changing power structures
• Indeed, all organizations have these informal networks that support both
social and functional contacts. However, the official lines of authority and
information tend to flow up and down through line management. New
communications media may challenge and disrupt these formal managerial
structures.
• The physical layout of an organization often reflects the formal hierarchy: each
department is on a different floor, with sections working in the same area of
an office. If someone from sales wants to talk to someone from marketing
then one of them must walk to the other’s office. Their respective supervisors
can monitor the contact. Furthermore, the physical proximity of colleagues
can foster a sense of departmental loyalty. An email system has no such
barriers; it is as easy to ‘chat’ to someone in another department as in your
own. This challenges the mediating and controlling role of the line managers
Continue..
• Furthermore, in face-to-face conversation, the manager can easily
exert influence over a subordinate: both know their relative positions
and this is reflected in the patterns of conversation and in other non-
verbal cues. Email messages lose much of this sense of presence and
it is more difficult for a manager to exercise authority
Who benefits?
• One frequent reason for the failure of information systems is that the
people who get the benefits from the system are not the same as those
who do the work. In these systems the sender has to do work in putting
information into fields appropriately, but it is the recipient who benefits.
Another example is shared calendars. The beneficiary of the system is a
manager who uses the system to arrange meeting times, but whose
personal secretary does the work of keeping the calendar up to date.
• Subordinates are less likely to have secretarial support, yet must keep up
the calendar with little perceived benefit. Of course, chaos results when a
meeting is automatically arranged and the subordinates may have to
rearrange commitments that have not been recorded on the system. The
manager may force use by edict or the system may simply fall into disuse.
Many such groupware systems are introduced on a ‘see if it works’ basis,
and so the latter option is more likely.
Free Rider Problem
• Even where there is no bias toward any particular people, a system
may still not function symmetrically, which may be a problem,
particularly with shared communication systems. One issue is the free
rider problem. Take an electronic conferencing system. If there is
plenty of discussion of relevant topics then there are obvious
advantages to subscribing and reading the contributions. However,
when considering writing a contribution, the effort of doing so may
outweigh any benefits. The total benefit of the system for each user
outweighs the costs, but for any particular decision the balance is
overturned.
Capturing Requirements
a. Who are the stakeholders?
• Imagine that a new billing system is to be introduced by a local gas supplier.
Who will be affected by this decision? Obviously, the people who are
responsible for producing and sending out bills – they will be the ones using
the system directly. But where do they get the information from to produce
the bills? To whom do they send the bills? Who determines the level of
charging and on what grounds? Who stands to make a profit from increased
revenue? Who will suffer if customers choose to switch supplier due to the
improved service? Meter readers, customers, managers, regulators,
shareholders and competitors are all stakeholders in the system. We need
approaches that will capture the complexity of their concerns, which may be
in conflict with each other.
• A stakeholder, therefore, can be defined as anyone who is affected by the
success or failure of the system.
Continue…
• Primary stakeholders are people who actually use the system – the end-
users.
• Secondary stakeholders are people who do not directly use the system,
but receive output from it or provide input to it (for example, someone
who receives a report produced by the system).
• Tertiary stakeholders are people who do not fall into either of the first
two categories but who are directly affected by the success or failure of
the system (for example, a director whose profits increase or decrease
depending on the success of the system).
• Facilitating stakeholders are people who are involved with the design,
development and maintenance of the system.
Task Analysis
• Task analysis is the process of analyzing the way people perform their
jobs: the things they do, the things they act on and the things they need
to know.
• Task decomposition which looks at the way a task is split into subtasks,
and the order in which these are performed.
• Knowledge-based techniques which look at what users need to know
about the objects and actions involved in a task, and how that
knowledge is organized.
• Entity–relation-based analysis which is an object-based approach where
the emphasis is on identifying the actors and objects, the relationships
between them and the actions they perform
TASK DECOMPOSITION
• The classic example of vacuum cleaning shows how a task, ‘clean the
house’, can be decomposed into several subtasks: ‘get the vacuum
cleaner out’ and so on.
0. in order to clean the house
   1. get the vacuum cleaner out
   2. fix the appropriate attachment
   3. clean the rooms
      3.1. clean the hall
      3.2. clean the living rooms
      3.3. clean the bedrooms
   4. empty the dust bag
   5. put the vacuum cleaner and attachments away

Plan 0: do 1 – 2 – 3 – 5 in that order; when the dust bag gets full, do 4.
Plan 3: do any of 3.1, 3.2 or 3.3 in any order, depending on which rooms need cleaning.
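The decomposition above can be captured as a nested data structure, keeping each plan alongside its subtasks. A minimal Python sketch (the field names are illustrative):

```python
# Hierarchical task analysis (HTA) of 'clean the house' as nested dicts.
hta = {
    "task": "clean the house",
    "plan": "do 1-2-3-5 in that order; when the dust bag gets full do 4",
    "subtasks": [
        {"task": "get the vacuum cleaner out"},
        {"task": "fix the appropriate attachment"},
        {"task": "clean the rooms",
         "plan": "do any of 3.1-3.3 in any order, as needed",
         "subtasks": [
             {"task": "clean the hall"},
             {"task": "clean the living rooms"},
             {"task": "clean the bedrooms"},
         ]},
        {"task": "empty the dust bag"},
        {"task": "put the vacuum cleaner and attachments away"},
    ],
}

def leaves(node):
    """Flatten the hierarchy to its lowest-level subtasks."""
    subs = node.get("subtasks")
    if not subs:
        return [node["task"]]
    return [leaf for s in subs for leaf in leaves(s)]

print(len(leaves(hta)))  # 7 lowest-level subtasks
```

Walking the structure recovers exactly the lowest-level actions, which is where the analyst decided to stop decomposing.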
KNOWLEDGE-BASED ANALYSIS
• Knowledge-based task analysis begins by listing all the objects and
actions involved in the task, and then building taxonomies of them.
ENTITY–RELATIONSHIP-BASED
TECHNIQUES
• Entity–relationship modeling is an analysis technique usually
associated with database design and more recently object-oriented
programming.
DIALOG NOTATIONS AND DESIGN
• Dialog, as opposed to a monolog, is a conversation between two or more parties. It
has also come to imply a level of cooperation or at least intent to resolve conflict.
In the design of user interfaces, the dialog has a more specific meaning, namely the
structure of the conversation between the user and the computer system.
• We can look at computer language at three levels: Lexical The lowest level: the
shape of icons on the screen and the actual keys pressed. In human language, the
sounds and spellings of words.
• Syntactic The order and structure of inputs and outputs. In human language, the
grammar of sentence construction.
• Semantic The meaning of the conversation in terms of its effect on the computer’s
internal data structures and/or the external world. In human language, the
meaning ascribed by the different participants to the conversation.
DIALOG DESIGN NOTATION
• In this section we shall look at some of the notations which have been
used for describing human–computer dialogs. Some may be familiar
to the computer scientist as they have their roots in other branches of
computing and have been ‘appropriated’ by the user interface
developer.
• But why bother to use a special notation? We already have
programming languages, why not use them? Let us look at a simple
financial advice program to calculate mortgage repayments. (This is
not supposed to be an example of good dialog design.)
Continue..
1. DIAGRAMMATIC NOTATION
• Diagrammatic notations are heavily used in dialog design. At their
best they allow the designer to see at a glance the structure of the
dialog. However, they often have trouble coping with more extensive
or complex dialog structures.
1. State transition networks
2. Concurrent dialogs and combinatorial explosion of states
Flow charts
• Flow charts in various forms are perhaps the most widely used
diagrammatic notation for programming.
DIALOG ANALYSIS AND DESIGN
• We will discuss these dialog properties under three headings. The first
focusses on user actions and whether they are adequately specified
and consistent. The second concerns the dialog states, including those
you want to get to and those you do not. Finally, we will look at
presentation and lexical issues, what things look like and what keys do
what.
Action properties
• There are four types of user action in these diagrams: selecting from
the main menu (graphics, text or paint), selecting a pop-up menu
choice (circle or line), clicking on a point on the drawing surface and
double clicking a point on the drawing surface.
• For example, the pop-up menu choices can only happen while a pop-
up menu is displayed. So, we do not need to worry about the user
doing ‘select “line”’ from the Main menu or while in state Line 1. But
what happens if we click on the drawing surface whilst at the Main
menu, or try to select something from the main menu whilst in the
middle of drawing a circle, say in state Circle 2? The dialog
description is not complete.
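This kind of completeness check can be mechanized: enumerate every (state, action) pair and flag those with no defined transition. A minimal Python sketch, assuming the states and transitions implied by the text (the exact transition table is illustrative):

```python
# A state transition network for the drawing tool as a dict, plus a
# completeness check over all (state, action) pairs.
TRANSITIONS = {
    ("Main", "select 'circle'"): "Circle 1",
    ("Main", "select 'line'"): "Line 1",
    ("Circle 1", "click point"): "Circle 2",
    ("Circle 2", "click point"): "Main",
    ("Line 1", "click point"): "Line 2",
    ("Line 2", "click point"): "Line 2",
    ("Line 2", "double click"): "Main",
}
ACTIONS = {"select 'circle'", "select 'line'", "click point", "double click"}
STATES = {s for (s, _) in TRANSITIONS} | set(TRANSITIONS.values())

# Every pair listed here is a hole in the dialog description: the
# designer has not said what happens.
undefined = sorted((s, a) for s in STATES for a in ACTIONS
                   if (s, a) not in TRANSITIONS)
print(len(undefined))
```

For each undefined pair the designer must decide: is the action impossible in that state, ignored, or an error the dialog should handle?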
Modelling rich interaction
• The majority of more detailed models and theories in HCI are
focussed on the ‘normal’ situation of a single user interacting with
traditional applications using a keyboard and screen. In fact, this
‘normal’ situation is increasingly looking like the exception.
• Even traditional computer systems are used not in isolation, but in
office and other work settings that involve different people and
physical objects. Normal human–computer interaction usually
includes looking at pieces of paper, walking around rooms, talking to
people. Finally, the more ubiquitous environments deeply challenge
the idea of intention behind human–computer interaction:
increasingly things simply happen to us.
STATUS–EVENT ANALYSIS
• Status–event analysis looks at different layers of the system, such as
user, screen (presentation), dialog and application. It looks for the
events perceived at each level and the status changes at each level.
• This, combined with the naïve psychological analysis of the
presentation/user boundary, allows the designer to predict failures
and more importantly suggest improvements. This approach is
demonstrated in two examples: the ‘mail has arrived’ interface to an
email system and the behavior of an on-screen button.
Properties of events
Status – Brian’s watch is a status – it always tells the time – and so is
Alison’s calendar. Moreover, assuming Brian’s watch is analog, this
demonstrates that status phenomena may be either discrete (the
calendar) or continuous (the watch face).
Events – The passing of the time 7:35, when Brian wanted to stop work,
was an event. A different, but related, event was when Brian got up to
go. The alarm on Brian’s watch (if he could use it) would have caused
an event, showing that Brian’s watch is capable of both status and
event outputs. Alison also experienced an event when she noticed it
was Brian’s birthday the next day, and of course, his birthday will also
be an event.
RICH CONTEXTS (Collaboration – doing it
together)
Information – what you need to know and
when you need to know it
• It is a simple matter to add an information analysis stage to any task
analysis method or notation. Note that some tasks have no
information requirements – other than the fact that they are to
happen. For example, the ‘make pot of tea’ subtask requires no
information other than the fact that the kettle has boiled. However,
information is required whenever:
a) a subtask involves inputting (or outputting) information
b) there is some kind of choice
c) a subtask is repeated a number of times that is not prespecified.
Continue
• Having discovered that information is required it may come from several
sources:
(i) It is part of the task (e.g., in the case of a phone call, whom one is going to
phone).
(ii) The user remembers it (e.g. remembering the number after ringing
directory enquiries).
(iii) It is on a computer/device display (e.g. using a PDA address book and
then dialing the number).
(iv) It is in the environment: either pre-existing (e.g. number in phone
directory), or created as part of the task (e.g. number written on piece of
paper)
Triggers – why things happen when they
happen
• Workflows and process diagrams decompose processes into smaller
activities and then give the order between them
• Figure 18.9 shows a simple example, perhaps the normal pattern of
activity for an office worker dealing with daily post. Notice the simple
dependency that the post must be collected from the pigeonhole
before it can be brought to the desk and before it can be opened.
However, look again at the activity ‘open post’ – when does it actually
happen? The work process says it doesn’t happen before the ‘bring
post to desk’ activity is complete, but does it happen straightaway
after this or some time later?
Placeholders – knowing what happens next
• We have already discussed two aspects of this memory: information
required to perform tasks, and triggers that remind us that something
needs to happen. However, there is one last piece of this puzzle that
we have hinted at several times already. As well as knowing that we
need to do something, we need to know what to do next. In the
complex web of tasks and subtasks that comprise our job – where are
we?
