
PUPIL DETECTION ALGORITHM BASED ON FEATURE EXTRACTION FOR EYE GAZE

INTRODUCTION:
In a virtual learning environment, learners can easily lose motivation and concentration,
especially on a platform that is not tailored to their needs. Our research studies learner
behavior on an online learning platform in order to build a system able to cluster learners
by behavior and adapt educational content to their needs.
Eye tracking is a set of techniques for recording eye movements. The technology is used to
measure eye position and movement for research in psychology, psycholinguistics,
ergonomics, e-learning and the pre-testing of advertising. This paper introduces the use of
eye tracking technology to track and analyze learner behavior and emotion on an e-learning
platform, such as level of attention, stress, relaxation, problem solving and tiredness.
In e-learning, it is necessary to create a more effective interaction between the
educational content and learners. In particular, increasing motivation by stimulating
learners' interest is very important. Users' eyes can be a significant source of information
for analyzing learner behavior. What we look at, and how we look at it, can be exploited to
improve the learning process. Eye movements indicate learner interest and focus of
attention. They provide useful feedback for personalizing learning interactions, which can
bring back some of the human responsiveness of a teacher. By studying eye movement,
learners may be more motivated and may find learning more enjoyable.

OBJECTIVE:
Our main goal is to create a system for analyzing the behavior of learners that can be used
in online learning platforms to improve the learning situation by placing the learner at the
center of the learning process.

ABSTRACT:
• Exact real-time pupil tracking is an important step in live eye-gaze estimation. Since
the pupil centre serves as the reference base point, accurate eye-centre localization is
essential for many applications.
• In this project, we extract pupil features accurately across different intensity levels
of eye images, localizing the objects of interest and determining where the person is
looking. As learning and work become increasingly digital and virtual, this concept has
wide scope in e-learning, classroom training and the analysis of human behaviour.
• The project covers two main processes: eyeball recognition and recognition of a person's
mental state and mood.
• A feature extraction method named Gabor ordinal measures (GOM) is used for the face
recognition process.
• Eyeball recognition is done with block matching and low-rank techniques.
• A Haar cascade classifier is used first to locate the eye region; the pupil is then
detected within it.
• We also estimate the emotional state from facial landmarks on salient patches of the
face image, using an automated, learning-free facial landmark detection technique.

EXISTING SYSTEM:
• In the existing system, iris recognition is used only for user authentication.
• Established online training platforms such as Udemy and Udacity, as well as private
tutors, simply deliver course materials and videos; they do not analyze the end user's
attention.

LITERATURE SURVEY:

WHEELCHAIR MOVEMENT USING EYEBALL DETECTION


Parth Pancholi, Jaimeet Patel, Saumil Ajmera
• Designing a wheelchair involves blending emotions with technology of the highest
standards. The essence of scientific advancement is a rigorous thought process that
considers and explores all available possibilities. The manually operated wheelchair,
while helping the physically challenged in their daily life, makes their routine
laborious. It is therefore an engineer's task of elephantine proportions to put forward
technological developments that offer flexibility and robustness for those affected.
Hence the idea arose to make an "AUTOMATED WHEELCHAIR" that can be controlled by
eyeball movement, so that individuals can steer their own wheelchair and reduce their
dependency. Navigation and control are the major limitations on the overall performance,
accuracy and robustness of such an intelligent vehicle. This concept addresses that
problem and provides a unique navigation and control scheme for an automatic wheelchair
using the concepts of image processing and other guiding technologies.
Detection of Eyes by Circular Hough Transform and Histogram of Gradient
Yasutaka Ito, Wataru Ohyama, Tetsushi Wakabayashi, Fumitaka Kimura
• In order to achieve high accuracy in face recognition, detection of facial parts such
as the eyes, nose and mouth is essential. In this paper, we propose a method to detect
eyes in frontal face images. The proposed method consists of two major steps. The first
is a two-dimensional Hough transform for detecting circles of unknown radius: the
circular Hough transform generates a two-dimensional parameter space (xc, yc) using the
grayscale gradient, and the circle radius r is then determined for each local maximum in
the (xc, yc) space. The second step evaluates the likelihood that a candidate is an eye
using a histogram of gradients and a Support Vector Machine (SVM). The eye detection
step first detects possible eye centres by the circular Hough transform, then extracts a
histogram of gradients from a rectangular window centred at each candidate. The
likelihood of the extracted feature vector being an eye is evaluated by the SVM, and
pairs of eyes satisfying predefined conditions are generated and ranked by the sum of
the likelihoods of both eyes. An evaluation experiment was conducted using 1,409 frontal
face images from the FERET database. The experimental result shows that the proposed
method achieves a 98.65% detection rate for both eyes.
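The voting scheme behind the circular Hough transform described above can be sketched in a few lines of NumPy. The function name, the fixed known radius and the synthetic edge map below are my own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def hough_circle_center(edges, r):
    """Circular Hough transform for a known radius r: every edge pixel
    votes for all centres lying at distance r from it; the accumulator
    peak is the circle centre."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    for y, x in zip(*np.nonzero(edges)):
        cx = np.rint(x - r * np.cos(thetas)).astype(int)
        cy = np.rint(y - r * np.sin(thetas)).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)      # cast the votes
    yc, xc = np.unravel_index(acc.argmax(), acc.shape)
    return int(xc), int(yc)

# synthetic edge map: a circle of radius 10 centred at (30, 25)
t = np.linspace(0.0, 2.0 * np.pi, 200)
edges = np.zeros((60, 60), dtype=bool)
edges[np.rint(25 + 10 * np.sin(t)).astype(int),
      np.rint(30 + 10 * np.cos(t)).astype(int)] = True
centre = hough_circle_center(edges, 10)   # recovers (30, 25) up to rounding
```

In the paper's formulation the radius is unknown, so the accumulator gains a third dimension (or, as the authors do, r is recovered per local maximum in (xc, yc)); the voting principle is the same.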

A SURVEY OF EYE TRACKING METHODS AND APPLICATIONS


ROBERT GABRIEL LUPU and FLORINA UNGUREANU
• In the last decade, the development of eye tracking (ET) systems has represented a
challenge for researchers and for companies in IT, medical equipment and multimedia
commercial devices. An eye tracking system is based on a device that tracks the
movement of the eyes to know exactly where the person is looking and for how long. It
also involves software algorithms for pupil detection, image processing, data filtering
and the recording of eye movement by means of fixation point, fixation duration and
saccades. A large variety of hardware and software approaches have been implemented by
research groups and companies according to technological progress. Suitable devices for
acquiring eye movement, and the accompanying software algorithms, are chosen in
accordance with the application requirements. Some vendors (e.g. SensoMotoric
Instruments, Tobii or MyGaze) have invested in eye tracking technology, but their
solutions focus on commercial remote camera-based eye-tracker systems in which the
light source and camera are permanently affixed to a monitor. Because these commercial
systems, including software and support, are expensive, some mobile, low-cost eye
tracking devices have been developed by research groups. Eye tracking applications
cover human-computer interaction, brain-computer interaction, assistive technology,
e-learning, psychology investigation, pilot training assistance, virtual and augmented
reality, and so on.

Automatic Eye Detection and Its Validation


Peng Wang, Matthew B. Green, Qiang Ji and James Wayman
• The accuracy of face alignment affects the performance of a face recognition system.
Since face alignment is usually conducted using eye positions, an accurate eye
localization algorithm is therefore essential for accurate face recognition. In this
paper, we first study the impact of eye locations on face recognition accuracy, and
then introduce an automatic technique for eye detection. The performance of our
automatic eye detection technique is subsequently validated on the FRGC 1.0
database. The validation shows that our eye detector has an overall 94.5% eye
detection rate, with the detected eyes very close to the manually provided eye
positions. In addition, the face recognition performance based on the automatic eye
detection is shown to be comparable to that of using manually given eye positions.

Eye Pupil Location Using Webcam


Michal Ciesla
Three different algorithms for eye pupil location are described and tested. The efficiency
comparison is based on human face images taken from the BioID database. Moreover, all the
eye localization methods were implemented in a dedicated application supporting
eye-movement-based computer control. In this case, human face images were acquired by a
webcam and processed in real time.

Fast and Accurate Algorithm for Eye Localization for Gaze Tracking in Low
Resolution Images
Anjith George
Iris centre localization in low-resolution visible images is a challenging problem in the
computer vision community due to noise, shadows, occlusions, pose variations, eye blinks,
etc. This paper proposes an efficient method for determining the iris centre in
low-resolution images in the visible spectrum. Even low-cost consumer-grade webcams can be
used for gaze tracking without any additional hardware. A two-stage algorithm is proposed
for iris centre localization. The proposed method uses the geometrical characteristics of
the eye. In the first stage, a fast convolution-based approach is used to obtain the coarse
location of the iris centre (IC). The IC location is further refined in the second stage
using boundary tracing and ellipse fitting. The algorithm has been evaluated on public
databases such as BioID and Gi4E, and is found to outperform state-of-the-art methods.

PROPOSED SYSTEM:
• Exact real-time pupil tracking is an important step in live eye-gaze estimation. Since
the pupil centre serves as the reference base point, accurate eye-centre localization is
essential for many applications.
• In this project, we extract pupil features accurately across different intensity levels
of eye images, localizing the objects of interest and determining where the person is
looking. As learning and work become increasingly digital and virtual, this concept has
wide scope in e-learning, classroom training and the analysis of human behaviour.
• The project covers two main processes: eyeball recognition and recognition of a person's
mental state and mood.
• A feature extraction method named Gabor ordinal measures (GOM) is used for the face
recognition process.
• Eyeball recognition is done with block matching and low-rank techniques.
• A Haar cascade classifier is used first to locate the eye region; the pupil is then
detected within it.
• We also estimate the emotional state from facial landmarks on salient patches of the
face image, using an automated, learning-free facial landmark detection technique.
• Template matching is used to track the eye for faster processing; it gives high
accuracy with alarm notifications for people in real road-driving conditions.
• It offers high classification accuracy with acceptably low errors and false alarms for
people of various ethnicities and genders in real road-driving conditions.
• Emotion recognition uses the face as well as the facial landmark points in an image,
extracting salient patches that are estimated during the training stage.
• This is a current research area with huge scope, and its applications can be extended.
• We plan to implement the proposed system in MATLAB or Python.
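The template-matching step above would typically use OpenCV's cv2.matchTemplate. As a dependency-free illustration of the same idea, here is a minimal sum-of-squared-differences matcher; the function name and the synthetic frame are my own assumptions, not the project's code:

```python
import numpy as np

def match_template(frame, template):
    """Exhaustive template match: slide the template over the frame and
    return the top-left corner (x, y) with the smallest sum of squared
    differences (a minimal stand-in for cv2.matchTemplate)."""
    fh, fw = frame.shape
    th, tw = template.shape
    best, best_pos = None, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            ssd = np.sum((frame[y:y + th, x:x + tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (x, y)
    return best_pos

# track a known 8x8 eye patch inside a 40x40 frame
rng = np.random.default_rng(0)
frame = rng.random((40, 40))
template = frame[12:20, 18:26].copy()
position = match_template(frame, template)   # (18, 12): the patch origin, SSD = 0
```

In a tracker, the eye patch from the previous frame serves as the template, and the search can be restricted to a window around the last known position for speed.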
SYSTEM REQUIREMENTS:

SOFTWARE REQUIREMENTS

Operating system : Windows 7 Professional
Coding language  : Python (PyCharm)

HARDWARE REQUIREMENTS

System    : Pentium IV, 2.4 GHz
Hard disk : 40 GB
Monitor   : 15" VGA colour
Mouse     : Logitech
RAM       : 512 MB


ARCHITECTURE:

Web cam → Pre-processing (eyes) → Segmentation → Eyeball detection / colour blurring →
Feature extraction → Classification (bored / interactive) → Result
MODULES:
Face detection and eye region localization
Knowledge of the position and pose of the face is an essential factor in determining the
point of gaze. Detection and tracking of the face help in obtaining candidate regions for eye
detection. This reduces the false positive rate as well as the computation time. A
Haar-like feature-based method is used for face detection because of its high accuracy and
fast execution.
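In practice this step would use OpenCV's pretrained cascades (cv2.CascadeClassifier with detectMultiScale). The sketch below illustrates only the machinery that makes Haar-like features fast, the integral image, on assumed toy inputs:

```python
import numpy as np

def integral_image(img):
    """Summed-area table: any rectangular pixel sum can then be read in
    O(1), which is what makes Haar-like features cheap to evaluate."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h rectangle with top-left corner (x, y)."""
    s = ii[y + h - 1, x + w - 1]
    if x > 0:
        s -= ii[y + h - 1, x - 1]
    if y > 0:
        s -= ii[y - 1, x + w - 1]
    if x > 0 and y > 0:
        s += ii[y - 1, x - 1]
    return s

def haar_two_rect(ii, x, y, w, h):
    """Two-rectangle Haar-like feature: upper-half sum minus lower-half
    sum, responding to dark-above-bright patterns such as an eye region
    above a cheek."""
    return rect_sum(ii, x, y, w, h // 2) - rect_sum(ii, x, y + h // 2, w, h // 2)
```

A cascade classifier evaluates thousands of such features per window, but rejects most non-face windows after only a few of them, which is where the speed comes from.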

Iris centre localization


The proposed method uses a coarse-to-fine approach for detecting the accurate centre of
the iris. The two-stage approach reduces both the computational complexity of the
algorithm and false detections.
1. Coarse iris centre detection: Iris detection is formulated as circular disc detection in
this stage. An average ratio between the width of face and iris radius was obtained
empirically. For a particular image, the radius range is computed using this ratio and
the width of the detected face. The image gradients at the iris boundary points always
point outwards. The gradient directions and intensity information are used for the
detection of eyes. The image gradients are invariant to uniform illumination
changes.

2. Sub-pixel edge refining and ellipse fitting: In this stage, the rough centre points
obtained in the previous stage are used to refine the IC location. The objective is to
fit the iris boundary with an ellipse. The constraints on the major and minor axes
(Rmin and Rmax) can be obtained empirically.
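One way to realize the gradient criterion of stage 1 is to score each candidate centre by how well its displacement vectors to strong edges align with the image gradients, in the spirit of well-known means-of-gradients eye-centre methods. The brute-force sketch below, with its synthetic dark disc, is illustrative only, not the paper's algorithm:

```python
import numpy as np

def gradient_center(img):
    """Coarse iris-centre estimate: pick the point whose displacement
    vectors towards strong edges best align with the image gradients
    (gradients on a dark iris boundary point outwards)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ys, xs = np.nonzero(mag > 0.5 * mag.max())   # keep strong boundary pixels
    ux, uy = gx[ys, xs] / mag[ys, xs], gy[ys, xs] / mag[ys, xs]
    best, centre = -1.0, (0, 0)
    for cy in range(img.shape[0]):               # brute force for clarity
        for cx in range(img.shape[1]):
            dx, dy = xs - cx, ys - cy
            norm = np.hypot(dx, dy)
            norm[norm == 0] = 1.0
            score = np.mean((dx / norm * ux + dy / norm * uy) ** 2)
            if score > best:
                best, centre = score, (cx, cy)
    return centre

# synthetic dark disc (the "iris") of radius 6 centred at (14, 15)
yy, xx = np.mgrid[0:30, 0:30]
img = np.where((xx - 14) ** 2 + (yy - 15) ** 2 <= 36, 0.0, 1.0)
centre = gradient_center(img)   # lands at or next to (14, 15)
```

Because only gradient directions enter the score, the estimate is robust to uniform illumination changes, matching the property noted in stage 1 above.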

Iris tracking
A Kalman filter (KF) is used to track the IC in a video sequence. The tracking approach
limits the search region for iris detection. Once the IC is detected with sufficient
confidence, the point can be tracked easily in subsequent frames.
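The tracking step can be sketched as a small constant-velocity Kalman filter; the state model, noise values and class name below are illustrative assumptions rather than the paper's exact filter:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter for tracking the iris
    centre. State: (x, y, vx, vy); measurement: (x, y)."""

    def __init__(self, q=1e-2, r=1.0):
        self.x = np.zeros(4)                        # state estimate
        self.P = np.eye(4) * 100.0                  # state covariance
        self.F = np.eye(4)                          # constant-velocity motion model
        self.F[0, 2] = self.F[1, 3] = 1.0
        self.H = np.eye(2, 4)                       # we observe position only
        self.Q = np.eye(4) * q                      # process noise
        self.R = np.eye(2) * r                      # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the detected iris centre z = (x, y)
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                           # filtered (x, y)

# feed measurements of an iris centre drifting at constant velocity
kf = ConstantVelocityKF()
for t in range(50):
    estimate = kf.step((2.0 * t, 1.0 * t))
```

The filter's predicted position is what bounds the search region for the next frame's detection, and it smooths over single-frame detection jitter.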

Eye closure detection


The IC localization algorithm may return false positives when the eyes are closed.
Thresholds on the peak magnitude were used to reject false positives. However, the quality
of the peak may degrade in conditions such as low contrast, image noise and motion blur.
The accuracy of the algorithm may fall in these conditions, and hence a machine learning
based approach is used to classify the eye state as open or closed.
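The paper's open/closed classifier is machine-learning based. As a simpler illustrative alternative that also works from facial landmarks, the widely used eye aspect ratio (EAR) heuristic separates the two states; the landmark layout and threshold below are my assumptions:

```python
import numpy as np

def eye_aspect_ratio(pts):
    """Eye aspect ratio over six eye landmarks p1..p6 (p1/p4 are the
    corners, p2/p3 the upper lid, p5/p6 the lower lid): the ratio
    collapses towards zero as the eye closes."""
    p1, p2, p3, p4, p5, p6 = (np.asarray(p, dtype=float) for p in pts)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = 2.0 * np.linalg.norm(p1 - p4)
    return vertical / horizontal

def eye_state(pts, threshold=0.2):
    """Classify the eye as open or closed with a fixed EAR threshold."""
    return "open" if eye_aspect_ratio(pts) > threshold else "closed"

# illustrative landmark sets (corner-to-corner width of 6 pixels)
open_eye = [(0, 0), (2, -3), (4, -3), (6, 0), (4, 3), (2, 3)]
closed_eye = [(0, 0), (2, -0.2), (4, -0.2), (6, 0), (4, 0.2), (2, 0.2)]
```

A learned classifier, as used in the paper, is more robust than a fixed threshold under low contrast, noise and motion blur, which is precisely why it is preferred here.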
Eye corner detection and tracking
The appearance of the inner eye corner varies insignificantly with eye movements and
blinks. Therefore, this paper proposes to use the inner eye corners as reference points
for gaze tracking. The eye corners can be located easily in the eye ROI. The vectors
connecting the eye corners and iris centres can be used to calculate the gaze position.
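A minimal sketch of that last step, assuming a pre-computed affine calibration that maps the corner-to-iris vector to screen coordinates (the matrix values below are purely illustrative):

```python
import numpy as np

def gaze_point(corner, iris, calib):
    """Map the eye-corner -> iris-centre vector to a screen coordinate
    using a 2x3 affine calibration matrix."""
    v = np.asarray(iris, dtype=float) - np.asarray(corner, dtype=float)
    return calib @ np.append(v, 1.0)   # affine map of the gaze vector

# illustrative calibration: 100 px of screen per px of iris travel,
# centred on a 1920x1080 display
calib = np.array([[100.0, 0.0, 960.0],
                  [0.0, 100.0, 540.0]])
screen_xy = gaze_point((10.0, 10.0), (12.0, 11.0), calib)   # -> (1160, 640)
```

In a real system the calibration matrix would be fitted per user by having them fixate a few known screen targets; using the stable inner corner as the vector origin cancels small head movements.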

Support Vector Machines


Support vector machines (SVMs) seek to define a linear boundary between classes such
that the margin of separation between samples from different classes that lie next to each
other is maximized. Classification by SVMs is concerned only with data from each class near
the decision boundary, called support vectors. Support vectors lie on the margin and carry
all the relevant information about the classification problem. We used LIBSVM, a library
for support vector machines that is available online, with interfaces for MATLAB, Perl
and Python.
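In Python, scikit-learn's sklearn.svm.SVC is itself built on LIBSVM and would be the natural choice. To keep this sketch dependency-free, here is a tiny linear SVM trained by sub-gradient descent on the hinge loss; the data and hyper-parameters are illustrative assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Tiny linear SVM trained by sub-gradient descent on the hinge
    loss; labels y must be -1/+1. Only margin-violating samples (the
    support-vector candidates) trigger a weight update."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1.0:          # inside the margin
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                                # outside: only regularize
                w -= lr * lam * w
    return w, b

def predict(w, b, X):
    return np.sign(X @ w + b)

# two linearly separable clusters
X = np.array([[2.0, 2.0], [3.0, 3.0], [2.0, 3.0],
              [-2.0, -2.0], [-3.0, -3.0], [-2.0, -3.0]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = train_linear_svm(X, y)
```

The update rule makes the margin property concrete: points classified with margin at least 1 contribute nothing but weight decay, so the final boundary depends only on the samples near it, exactly as the paragraph above describes.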

CONCLUSION:
Eye-movement analysis does appear to be a promising new tool for evaluating learners'
behavior. This technology can provide many benefits to e-learning, such as facilitating
adaptive and personalized learning. Even though the cost of an advanced eye tracking
system is still high, rapid technical progress may deliver accurate, low-cost solutions
within a few years.

