
4.3 TYPES OF EYE MOVEMENT


The information given here is deliberately non-technical and is intended for a general
audience. With apologies to eye movement research colleagues for liberties taken in
simplification.
In our everyday lives we regularly make various types of eye movements, usually
without being aware of them. Such movements occur as a result of the activity of the three
pairs of antagonistic muscles that support each eye. There are several ways to record eye
movements; more detailed information is available elsewhere.
In looking at paintings, or other two-dimensional scenes, only saccadic eye
movements and their associated fixations, in the main, are of research interest. Miniature eye
movements also occur, and these overlay the fixations causing many small movements
which, to a large extent, can be regarded as system noise (unless the researcher is
specifically interested in these movements).
Main types of eye movements:

Saccade
Miniature
Pursuit
Smooth
Compensatory
Vergence
Nystagmus

Saccade
A saccade is a rapid eye movement (a jump) which is usually conjugate (i.e. both eyes
move together in the same direction) and under voluntary control. Broadly speaking, the
purpose of these movements is to bring images of particular areas of the visual world to fall
onto the fovea. Saccades are therefore a major instrument of selective visual attention. It is
often convenient (but somewhat inaccurate) to consider both that a saccadic eye movement
always occurs in a straight line and also that we do not see during these movements. We can
therefore simply consider that we often see the world by means of a series of saccadic jumps
from one area to another, interspersed with fixations. Note, however, that these are
oversimplifications.
A fixation is when the eye is stationary (but see the miniature eye movements
section) between saccades and it is convenient to consider that the area imaged on to the

fovea (or very near to the fovea) during a fixation is being visually attended to by the
observer.
Fixations differ in their length but tend to be about 200-300ms, although much longer
fixations can occur. The length of fixations is an important research topic in itself as it relates
to the visual information to which the observer is attending as well as to his/her cognitive
state. Precisely when a fixation starts and ends is also a matter of research interest, as the
recorded fixation length depends in part on the temporal sampling rate of the eye
movement recording technique being used.
Most saccades are less than about 15° in size. When we make a saccadic movement
towards a specific object then the saccade can either accurately land on the object or,
commonly, either overshoot or undershoot it, so giving rise to a subsequent small corrective
saccade to the object. Note that saccadic movements can be curved as well as straight.
Saccades are very fast, with a peak velocity of about 700° per second. During a saccadic movement
our vision is not completely eliminated but it is considerably reduced (this is known as
saccadic suppression). There is a small refractory period between saccades of about 150ms
which limits the number of saccadic movements we can make in a given period of time.
Examples: A good example of a sequence of saccades and fixations is reading a book or
looking at pictures. In driving we make saccades as we look at cars ahead of us, road traffic
signs, the vehicle instrumentation and the rear-view mirror.
Miniature eye movements:
When we stare at (i.e. fixate) an object we may think that our eyes are not moving. In
fact they are constantly moving, making very small movements which are all generally less
than 1° in size. There are various types of miniature eye movements, including flicks, drifts,
irregular movements and high-frequency tremors. When we fixate on an object its image
falls on the fovea. The effect of such small movements is to constantly shift this image
minutely over the fovea so that the fovea is constantly being stimulated; if this did not
happen, the image of the object would fade.
Drifts are slow movements away from a fixation point. Flicks, or microsaccades,
reposition the eye on the target. Predominantly these are corrective movements, correcting for
the off-centre foveal position produced by a drift eye movement. Irregular slow movements
of the eye also occur. High-frequency tremor causes the image of an object to constantly
stimulate cells in the fovea.

Pursuit eye movements:


These are conjugate eye movements which smoothly track slowly moving objects in
the visual field. They typically require a moving object to elicit them and are not usually
under voluntary control. Their purpose, partly, is to stabilize moving objects on the retina
thereby enabling us to perceive the object in detail.
Examples: When we watch a rugby football match and follow the ball as it is passed from
player to player then we are making pursuit eye movements following the ball. When
driving we can use pursuit movements (and other types of eye movements) to monitor the
movement of other vehicles on the road.
Smooth eye movements:
These are similar to pursuit movements but can be made in the absence of a moving
stimulus.
Compensatory eye movements:
These are smooth compensatory movements which are related to pursuit movements.
They act to compensate for movement of the head or body so as partially to stabilize an
object on the retina.
Example: Consider a rugby player who is running and at the same time is visually following
the flight of a ball so that he can catch it. He will therefore be making eye movements
which compensate both for his body and his head movements as he runs. In driving we
make compensatory movements to allow for the head movements which we constantly
make.

Vergence eye movements:


These are movements where both eyes move in opposite horizontal directions to
permit the acquisition of a near or far object. With an object coming towards us, our two
eyes converge slightly to maintain binocular vision of it; as the object recedes away
from us, the two eyes diverge again.
Example: When a rugby player is trying to catch a ball coming towards him then he will
be tracking the ball visually using a vergence eye movement. In driving we regularly

monitor different vehicles around us and use vergence movements to help us do this as
some vehicles move away and others approach us.
Nystagmus eye movements:
This is a regular form of eye movement, comprising two alternating components: a
slow phase and a fast phase. The best example is in a train where, looking out of the window,
you look at a tree and follow it as it moves past (slow phase), then make a rapid movement
back (fast phase) and fixate on the next object, which again moves past you.

There are several different types of nystagmus:


Optokinetic
This is produced by moving repetitive patterns in the visual field.
Vestibular
This is caused by stimulating the semicircular canals as the head is rotated. A good
example is when on a spinning fairground ride where you regularly fixate on an object as it
passes (slow phase) and then rapidly make a movement back to fixate on another object. This
action is repeated.
Pendular
This is a pendulum-like movement exhibited by some people where both phases of the
nystagmus have a similar velocity, so it is unlike optokinetic or vestibular nystagmus.
Latent
A rare clinical condition where no nystagmus is present but, when one eye is covered
up, nystagmus occurs.

4.4 EXPERIENCES WITH EYE MOVEMENTS


Configuration
With an Applied Science Laboratories eye tracker set up in the laboratory, the user sits at a
conventional (government-issue) desk, with a 16" Sun computer display, mouse and
keyboard, in a standard chair and office. The eye tracker camera/illuminator sits on the desk
next to the monitor. Other than the illuminator box with its dim red glow, the overall setting is
thus far just like that for an ordinary office computer user. In addition, the room lights are

dimmed to keep the user's pupil from becoming too small. The eye tracker transmits the x
and y coordinates for the user's visual line of gaze every 1/60 second, over a serial port, to a
Sun 4/260 computer. The Sun performs all further processing, filtering, fixation recognition,
and some additional calibration. Software on the Sun parses the raw eye tracker data stream
into tokens that represent events meaningful to the user-computer dialogue. The user interface
management system, closely modeled after that described in, multiplexes these tokens with
other inputs (such as mouse and keyboard) and processes them to implement the user
interfaces under study.
The eye tracker is, strictly speaking, non-intrusive and does not touch the user in any
way. The setting is almost identical to that for a user of a conventional office computer.
Nevertheless, users find it difficult to ignore the eye tracker. It is noisy; the dimmed
room lighting is unusual; the dull red light, while not annoying, is a constant reminder of the
equipment; and, most significantly, the action of the servo-controlled mirror, which results in
the red light following the slightest motions of the user's head gives one the eerie feeling of
being watched. One further wrinkle is that the eye tracker is designed for use in experiments,
where there is a "subject" whose eye is tracked and an "experimenter" who monitors and
adjusts the equipment. Operation by a single user playing both roles simultaneously is
somewhat awkward because, as soon as you look at the eye tracker control panel to make an
adjustment, your eye is no longer pointed where it should be for tracking.

Accuracy and Range


A user generally need not position his or her eye more accurately than the width of the
fovea (about one degree) to see an object sharply. Finer accuracy from an eye tracker might
be needed for studying the operation of the eye muscles but adds little for our purposes. The
eye's normal jittering further limits the practical accuracy of eye tracking. It is possible to
improve accuracy by averaging over a fixation, but not in a real-time interface.
Despite the servo-controlled mirror mechanism for following the user's head, we find
that the steadier the user holds his or her head, the better the eye tracker works. We find that

we can generally get two degrees accuracy quite easily, and sometimes can achieve one
degree (or approximately 0.4" or 40 pixels on the screen at a 24" viewing distance). The eye
tracker should thus be viewed as having a resolution much coarser than that of a mouse or
most other pointing devices, perhaps more like a traditional touch screen. An additional
problem is that the range over which the eye can be tracked with this equipment is fairly
limited. In our configuration, it cannot quite cover the surface of a 19" monitor at a 24"
viewing distance.
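As a rough check on the figures quoted above, the screen distance subtended by a visual angle can be computed directly from the viewing distance. The short Python sketch below does this; the figure of roughly 96 pixels per inch is an assumption not stated in the text, but it reproduces the quoted values of about 0.4" and 40 pixels for one degree at a 24" viewing distance.

    import math

    def visual_angle_to_screen(angle_deg, viewing_distance_in, pixels_per_inch):
        # On-screen size subtended by a visual angle at the given viewing distance.
        size_in = 2 * viewing_distance_in * math.tan(math.radians(angle_deg) / 2)
        return size_in, size_in * pixels_per_inch

    # One degree of visual angle at a 24" viewing distance, assuming ~96 pixels per inch.
    inches, pixels = visual_angle_to_screen(1.0, 24.0, 96.0)
    print(round(inches, 2), round(pixels))   # about 0.42 inches, about 40 pixels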
Local Calibration
Our first step in processing eye tracker data was to introduce an additional layer of
calibration into the chain. The eye tracker calibration procedure produces a mapping that is
applied uniformly to the whole screen. Ideally, no further calibration or adjustment is
necessary.
In practice, we found small calibration errors appear in portions of the screen, rather
than systematically across it. We introduced an additional layer of calibration into the chain,
outside of the eye tracker computer, which allows the user to make local modifications to the
calibration, based on arbitrary points he or she inputs whenever he or she feels it would be helpful.
The procedure is that, if the user feels the eye tracker is not responding accurately in some
area of the screen, he or she moves the mouse cursor to that area, looks at the cursor, and
clicks a button.
Surprisingly, this had the effect of increasing the apparent response speed for object
selection and other interaction techniques. The reason is that, if the calibration is slightly
wrong in a local region and the user stares at a single target in that region, the eye tracker will
report the eye position somewhere slightly outside the target. If the user continues to stare at
it, though, his or her eyes will in fact jitter around to a spot that the eye tracker will report as
being on the target. The effect feels as though the system is responding too slowly, but it is a
problem of local calibration. The local calibration procedure results in a marked improvement
in the apparent responsiveness of the interface as well as an increase in the user's control over
the system (since the user can re-calibrate when and where desired).
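The text does not give the exact form of the local correction, so the following Python sketch is only one plausible interpretation: each click stores the offset between the reported gaze position and the cursor the user was looking at, and raw gaze samples are then shifted by a distance-weighted blend of nearby offsets. The class name, the influence radius and the weighting scheme are illustrative assumptions, not the actual implementation.

    import math

    class LocalCalibration:
        # Stores user-supplied correction points and applies a distance-weighted
        # offset to raw gaze positions (illustrative sketch only).

        def __init__(self, radius_px=150.0):
            self.radius = radius_px      # only corrections this close have influence
            self.points = []             # (true_x, true_y, offset_x, offset_y)

        def add_correction(self, reported, actual):
            # Called when the user looks at the mouse cursor and clicks a button.
            (rx, ry), (ax, ay) = reported, actual
            self.points.append((ax, ay, ax - rx, ay - ry))

        def apply(self, x, y):
            # Shift a raw gaze sample by the weighted average of nearby offsets.
            wsum = dx = dy = 0.0
            for px, py, ox, oy in self.points:
                d = math.hypot(x - px, y - py)
                if d < self.radius:
                    w = 1.0 / (d + 1.0)  # closer corrections count more
                    wsum += w
                    dx += w * ox
                    dy += w * oy
            if wsum == 0.0:
                return x, y              # no nearby correction: leave unchanged
            return x + dx / wsum, y + dy / wsum

In use, a click handler would call add_correction with the currently reported gaze point and the cursor position, and apply would then be run on every raw sample before fixation recognition.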
Fixation Recognition

After improving the calibration, we still observed what seemed like erratic behavior in
the user interface, even when the user thought he or she was staring perfectly still. This was
caused by both natural and artificial sources: the normal jittery motions of the eye during
fixations as well as artifacts introduced when the eye tracker momentarily fails to obtain an
adequate video image of the eye.
Plotting the x coordinate of the eye position output against time over a relatively jumpy
three-second period illustrates this. (A plot of the y coordinate for the same period would show generally the
same areas of smooth vs. jumpy behavior, but different absolute positions.) Zero values on
the ordinate represent periods when the eye tracker could not locate the line of gaze, due
either to eye tracker artifacts, such as glare in the video camera, lag in compensating for head
motion, or failure of the processing algorithm, or to actual user actions, such as blinks or
movements outside the range of the eye tracker. Unfortunately, the two cases are
indistinguishable in the eye tracker output.
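The recognition algorithm itself is not reproduced in the text. As an illustration of the kind of filtering involved, the Python sketch below detects fixations with a simple dispersion threshold and treats (0, 0) samples as tracking dropouts; the threshold values and the dispersion-based approach are assumptions made for illustration, not the algorithm actually used by the system.

    def detect_fixations(samples, hz=60, max_dispersion_px=20, min_duration_ms=100):
        # samples: list of (x, y) gaze positions at `hz` Hz; (0, 0) means 'no track'.
        # Returns (start_index, end_index, mean_x, mean_y) for each detected fixation.
        min_len = int(min_duration_ms * hz / 1000)
        fixations, window, start = [], [], 0

        def flush(end_index):
            # Emit the current window as a fixation if it lasted long enough.
            if len(window) >= min_len:
                xs = [p[0] for p in window]
                ys = [p[1] for p in window]
                fixations.append((start, end_index, sum(xs) / len(xs), sum(ys) / len(ys)))

        for i, (x, y) in enumerate(samples):
            if (x, y) == (0, 0):            # tracker could not find the eye
                flush(i - 1)
                window, start = [], i + 1
                continue
            trial = window + [(x, y)]
            xs = [p[0] for p in trial]
            ys = [p[1] for p in trial]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion_px:
                flush(i - 1)                # the new sample broke the dispersion limit
                window, start = [(x, y)], i
            else:
                window = trial
        flush(len(samples) - 1)             # flush any trailing fixation
        return fixations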
Re-assignment of Off-target Fixations
The processing steps described thus far are open-loop in the sense that eye tracker
data are translated into recognized fixations at specific screen locations without reference to
what is displayed on the screen. The next processing step is applied to fixations that lie
outside the boundaries of the objects displayed on the screen. This step uses knowledge of
what is actually on the screen, and serves further to compensate for small inaccuracies in the
eye tracker data. It allows a fixation that is near, but not directly on, an eye-selectable screen
object to be accepted. Given a list of currently displayed objects and their screen extents, the
algorithm will reposition a fixation that lies outside any object, provided it is "reasonably"
close to one object and "reasonably" further from all other such objects. It is important that
this procedure is applied only to fixations detected by the recognition algorithm, not to
individual raw eye tracker position reports.
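The "reasonably close / reasonably far" test can be made concrete in many ways. The Python sketch below is one hedged interpretation, using two pixel thresholds and rectangular object extents; the threshold values and the distance measure are illustrative assumptions, not the system's actual rule.

    def reassign_fixation(fix_x, fix_y, objects, accept_px=30, margin_px=60):
        # objects: list of (name, left, top, right, bottom) screen extents.
        # Returns the chosen object name, or None if the fixation stays unassigned.
        def distance(obj):
            _, left, top, right, bottom = obj
            dx = max(left - fix_x, 0, fix_x - right)   # 0 if inside horizontally
            dy = max(top - fix_y, 0, fix_y - bottom)   # 0 if inside vertically
            return (dx * dx + dy * dy) ** 0.5

        ranked = sorted(objects, key=distance)
        if not ranked:
            return None
        best = ranked[0]
        best_d = distance(best)
        if best_d == 0:                                # fixation already on an object
            return best[0]
        # Accept only if close to one object and clearly farther from all others.
        if best_d <= accept_px and (len(ranked) == 1 or distance(ranked[1]) >= margin_px):
            return best[0]
        return None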
4.5 USER INTERFACE MANAGEMENT SYSTEM
In order to make the eye tracker data more tractable for use as input to an interactive
user interface, we turn the output of the recognition algorithm into a stream of tokens. We
report tokens for eye events considered meaningful to the user-computer dialogue, analogous
to the way that raw input from a keyboard (shift key went down, letter a key went down, etc.)

is turned into meaningful events (one ASCII upper case A was typed). We report tokens for
the start, continuation (every 50 ms in case the dialogue is waiting to respond to a fixation of
a certain duration), and end of each detected fixation.
Each such token is tagged with the actual fixation duration to date, so an interaction
technique that expects a fixation of a particular length will not be skewed by delays in
processing by the UIMS (user interface management system) or by the delay inherent in the
fixation recognition algorithm. Between fixations, we periodically report a non-fixation token
indicating where the eye is, although our current interaction techniques ignore this token in
preference to the fixation tokens, which are more filtered.
A token is also reported whenever the eye tracker fails to determine eye position for
200 ms, and again when it resumes tracking. In addition, tokens are generated whenever a
new fixation enters or exits a monitored region, just as is done for the mouse. Note that jitter
during a single fixation will never cause such an enter or exit token, though, since the
nominal position of a fixation is determined at the start of a fixation and never changes during
the fixation. These tokens, having been processed by the algorithms described above, are
suitable for use in a user-computer dialogue in the same way as tokens generated by mouse or
keyboard events.
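As an illustration of the token stream described above, the Python sketch below turns recognized fixation events and tracking dropouts into start, continuation, end, failure and resume tokens. The 50 ms continuation interval and the 200 ms failure timeout follow the text, but the token and event names, the data types and the omission of region enter/exit tokens are assumptions made for brevity, not the actual UIMS.

    from dataclasses import dataclass

    @dataclass
    class Token:
        kind: str              # FIX_START, FIX_CONTINUE, FIX_END, TRACK_FAIL, TRACK_RESUME
        x: float = 0.0
        y: float = 0.0
        duration_ms: int = 0   # fixation duration to date, carried on each fixation token

    def tokenize(events, continue_every_ms=50, fail_after_ms=200):
        # events: chronological (time_ms, kind, x, y) tuples from the fixation
        # recognizer, where kind is 'fix_start', 'fix_sample', 'fix_end' or 'no_track'.
        fix_t0 = None          # start time of the current fixation, if any
        last_emit = None       # time of the last continuation token
        lost_since = None      # time at which tracking was lost, if it is lost
        fail_reported = False
        for t, kind, x, y in events:
            if kind == 'no_track':
                if lost_since is None:
                    lost_since = t
                elif not fail_reported and t - lost_since >= fail_after_ms:
                    yield Token('TRACK_FAIL')
                    fail_reported = True
                continue
            if fail_reported:                  # tracking has just come back
                yield Token('TRACK_RESUME')
            lost_since, fail_reported = None, False
            if kind == 'fix_start':
                fix_t0 = last_emit = t
                yield Token('FIX_START', x, y, 0)
            elif kind == 'fix_sample' and fix_t0 is not None:
                if t - last_emit >= continue_every_ms:
                    yield Token('FIX_CONTINUE', x, y, t - fix_t0)
                    last_emit = t
            elif kind == 'fix_end' and fix_t0 is not None:
                yield Token('FIX_END', x, y, t - fix_t0)
                fix_t0 = None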
4.6 INTERACTION TECHNIQUES
Interaction techniques provide a useful focus for this type of research because they are
specific, yet not bound to a single application. An interaction technique represents an
abstraction of some common class of interactive task, for example, choosing one of several
objects shown on a display screen. Research in this area studies the primitive elements of
human computer dialogues, which apply across a wide variety of individual applications. The
goal is to add new, high-bandwidth methods to the available store of input/output devices,
interaction techniques, and generic dialogue components. Mockups of such techniques are
then studied by measuring their properties, and attempts are made to determine their
composition rules. This section describes the first few eye movement-based interaction
techniques that we have implemented and our initial observations from using them.
Interaction between User and Computer

Figure 4.5 Basic Interaction Techniques
