iAOI
1 Introduction
Parkinson's Disease (PD) is usually attributed to degeneration of the nervous system. PD impairs a person's movement. The first sign may be a barely perceptible tremor in one hand. Although the disorder frequently results in tremors, it also frequently slows or stiffens movement. A significant change in visual patterns can also be observed in PD. In PD, saccades are delayed, and hence reading can be challenging if the eyes
cannot find the right spot on the following line. The blink rate, which is normally between 16 and 24 blinks per minute, drops in PD patients, with rates of 12 to 14 blinks per minute documented. Eye-tracking is a research technique for assessing expert visual behavior; it also aids in medical diagnosis by tracking illness progression over time. In this work, the participant's gaze statistics are used to determine the Area of Interest (AOI). This helps in analyzing the difference in visual patterns between healthy controls and PD participants. It also aids in determining a pattern for PD participants based on the severity of the disease.
iAOI performs binary classification to determine whether the participant is viewing inside or outside the AOI. For multi-class classification, an Artificial Neural Network model, the Multi-Layer Perceptron, is utilized. The number of layers and the activation functions used are varied to compare the performance metrics and to find the optimal model. Using visualization tools, a plot is designed with different features of the participants to observe the AOI they are viewing at a given instance of time. Related work is explained in section 2. The proposed iAOI is explained in section 3. The results of the iAOI model are presented in section 4. The conclusion and future scope are provided in section 5.
2 Related Work
Earlier work reported that PD patients did not show any statistical differences from control participants in saccade dynamics, but a prominent difference was observed in visual search tasks. The authors of [3] use head-mounted eye-tracking to study visual attention in natural settings for toddlers. They presented methods for collecting data that, when the technique is applied successfully, can answer questions not only about visual attention but also about a wide variety of other perceptual, cognitive, and social abilities and their development. The authors of [11] discuss how components such as the instrument, technique, environment, and participant impact the quality of the recorded eye-tracking data and the derived eye-movement and gaze metrics; a minimal and flexible reporting guideline is derived in the end. The work in [15] uses binocular vergence accuracy (i.e., fixation disparity) to explore the pupillary artefact, which can be corrected using a regression between records of pupil size and fixation disparity. The results offer a quantitative estimate of the pupillary artefact on observed eye position as a function of viewing distance and brightness, for both monocular and binocular eye position measurements.
Work by [7] examined the most commonly used eye-tracking metrics and demonstrated how they were employed in two studies. The first experiment used medical images to examine perception during the diagnostic process. The second experiment looked at how participants' visual attention changed during psychomotor assessments. In summary, the authors provided a generic approach for visual attention analysis utilizing eye-tracking data and areas of interest. The research in [16] provides an overview of the current state of the art in how video games and visual attention interact, along with a holistic glimpse into the future of visual attention and eye tracking in video games. According to a recent study [10], the pupil signal from video-based eye trackers incorporates post-saccadic oscillations (PSOs). Using two high-quality video eye trackers, the authors evaluated PSOs in horizontal and vertical saccades of various sizes. PSOs were fairly comparable within observers, but not between observers. Based on this data, the incidence of PSOs is linked to deceleration at the conclusion of a saccade. Further, [17] explain the steps
in developing an eye movement-based application pertaining to real-world prob-
lems. Work by [6] explains how eye movements can be used for glaucoma detection; they conduct an investigation using deep learning. Insights on PD can be obtained from [9], which explains the effects of PD. The idea of using a simulation for the study of PD was given by [14]. The idea of a questionnaire task [13] motivated us to use a questionnaire-based task for the proposed iAOI.
3 iAOI
iAOI is a system that identifies the region of interest. The stages are explained in figure 1. The participant is asked to sit in front of an eye tracker, and iAOI projects the stimulus for a visual search task to the participant. The AOI is manually marked before projecting the stimulus to the participant. Once the user starts the eye movement experiment, the movements are recorded and the raw eye movements are obtained for all stimuli. This data is then processed to derive the
higher-level features with respect to the AOI. The derived dataset contains AOI-related statistics with 23 features. On this derived dataset, Exploratory Data Analysis (EDA) is performed to understand the different features of the dataset. A hypothesis is then created based on the understanding from EDA. For this research, given the eye gaze statistics of a participant, iAOI predicts the AOI name for that participant. This helps in understanding how near or far the actual fixation is from the AOI. A Multi-Layer Perceptron is used to determine the AOI name. After this step, PowerBI is used to build a dashboard that explains the various insights from the dataset.
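As an illustration, a minimal EDA sketch for this stage might look as follows (Python with pandas is assumed; the file name and column names are hypothetical placeholders, not the actual iAOI schema):

```python
import pandas as pd

# Load the derived AOI-statistics dataset (hypothetical file name).
df = pd.read_csv("aoi_statistics.csv")

# Basic structure: the 23 derived features plus the AOI label.
print(df.shape)
print(df.dtypes)

# Summary statistics for numeric features such as fixation
# counts, durations, and pupil diameter.
print(df.describe())

# Class balance across AOI names, to see how skewed the labels
# are before training a classifier (hypothetical column name).
print(df["aoi_name"].value_counts())

# Missing values per feature.
print(df.isna().sum())
```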
The dataset used in iAOI is obtained using a 1000 Hz eye-tracker. In this experiment, participants include both healthy controls and patients suffering from PD. They are shown 12 different stimuli and their gazing patterns are observed. These readings are used to derive statistical features of importance, which further aid in analyzing the gaze patterns of the participants. For experimentation, participants were asked to look at a certain set of stimuli. The first stimulus (figure 2a) gives instructions to the participant, who is asked to sit comfortably in front of the eye tracker; it also marks the beginning of the experiment. The next image stimulus (figure 2b) instructs the user about the calibration process and the stimulus that is displayed for the purpose of calibration.
The calibration process prompts the user to look at the yellow dot on the left (figure 3a), right (figure 3b), center (figure 3c), top (figure 3d), and bottom (figure 3e) of the screen in order to calibrate the eye tracker with the participant's eye movements. Once the participant's eye movements are calibrated, the image with the first set of instructions in figure 2a is shown. It says "Sit comfortably, When you are seated the experimenter will present before you the apparatus and clearly explain to you the method of the experiment". Once the participant reads it, the second set of instructions, as in figure 2b, says "You will be seeing a fixation cross on your screen, you just have to look at it, Minimal movements will help the experimenter better analyze the data. We appreciate your time and effort to participate for the study, thanks" and makes the participant understand the importance of the experiment. Then the experimenter asks the participant to find the number among the alphabets, and the image stimulus in figure 2c is displayed. After the instructions, the next 3 stimuli with the actual tasks are projected by the eye tracker. The image stimuli that include the visual search task are depicted in figure 4. All three image stimuli consist of a digit amongst alphabets that needs to be viewed by the participants. As the participant views the stimulus, eye movements are tracked by the sensor, and the raw data is collected. All the categorical features are then converted to a numerical form in order to pass them through an artificial neural network.
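As a minimal sketch of this conversion step (assuming scikit-learn's LabelEncoder; the frame and column names below are illustrative, not the actual feature set):

```python
import pandas as pd
from sklearn.preprocessing import LabelEncoder

# Toy frame with one categorical and one numeric feature
# (hypothetical names and values).
df = pd.DataFrame({
    "aoi_name": ["digit_3", "whitespace", "letter_K", "whitespace"],
    "fixation_duration_ms": [210.0, 95.0, 180.0, 120.0],
})

# Encode the categorical feature as integer codes so the row can
# be fed to an artificial neural network.
encoder = LabelEncoder()
df["aoi_name_encoded"] = encoder.fit_transform(df["aoi_name"])

print(df)
# Mapping from category to integer code.
print(dict(zip(encoder.classes_, encoder.transform(encoder.classes_))))
```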
Fig. 1: iAOI
The following activation functions and optimizers are used in the models (a short NumPy sketch of the activation functions follows this list):
1. ReLU - ReLU, which stands for rectified linear unit, is a non-linear activation function. Its output is max(0, x). Not activating all the available neurons at the same time is the major benefit of using ReLU as an activation function in any model. It also avoids the vanishing gradient problem.
2. Leaky ReLU - The leaky ReLU function is an upgraded version of the ReLU activation function. It fixes the dying ReLU problem: for negative inputs x, the function is specified as a very small linear component of x rather than as 0.
3. Softmax - For multi-class classification problems, softmax is used as the activation function in the output layer. It is based on the multinomial probability distribution.
4. Optimizers - Optimizers are functions used to modify a neural network's weights and learning rate. They contribute to improving accuracy and reducing overall loss.
5. AdaGrad - The adaptive gradient descent algorithm employs a different learning rate for every iteration. The change in learning rate is determined by how much the parameters change during training: the more a parameter is altered, the less noticeably its learning rate varies. Real-world datasets include both dense and sparse features, so this adaptation is quite advantageous.
6. RMSProp - Root Mean Squared Propagation, or RMSProp, is a variation on gradient descent that adapts the step size for each parameter using a decaying average of partial gradients. The drawback of AdaGrad is overcome by using a decaying moving average, which enables the algorithm to ignore
early gradients and focus on the most recently recorded partial gradients as the search progresses.
7. Adam - Adaptive moment estimate is the source of the name Adam. Adam
optimizer modifies the learning rate for each network weight independently
as opposed to keeping a single learning rate during SGD training. Both
Adagrad and RMS prop features are inherited by the Adam optimizer.
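For concreteness, the three activation functions above can be written in a few lines of NumPy. This is a sketch of the mathematical definitions, not the implementation used by any particular framework:

```python
import numpy as np

def relu(x):
    # Rectified linear unit: max(0, x), applied element-wise.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # For negative inputs, return a small linear component of x
    # instead of 0, which avoids "dead" neurons.
    return np.where(x > 0, x, alpha * x)

def softmax(z):
    # Subtract the max for numerical stability, then normalize the
    # exponentials into a multinomial probability distribution.
    e = np.exp(z - np.max(z))
    return e / e.sum()

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))
print(leaky_relu(x))
print(softmax(x))
```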
iAOI uses 4 models to predict the AOI at which the participant is looking, based on the inputs given by the eye tracker. The models are trained on the eye movement metrics with respect to each AOI, learning the eye movement behavior in each of the 56 manually marked AOIs. The data contains 23 features based on fixations, saccades, pupil diameter, and AOI. Once a model is trained, it predicts the AOI based on the eye movement features mentioned above.
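A minimal sketch of such a multi-class AOI classifier is shown below, assuming a Keras Multi-Layer Perceptron with 23 input features and 56 output classes. The hidden-layer sizes and the synthetic data are illustrative assumptions, not the exact iAOI architecture:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

N_FEATURES, N_AOIS = 23, 56  # 23 eye-movement features, 56 marked AOIs

# Synthetic stand-in data; the real input is the derived AOI statistics.
X = np.random.rand(1000, N_FEATURES).astype("float32")
y = np.random.randint(0, N_AOIS, size=1000)

# Multi-Layer Perceptron: ReLU hidden layers, softmax output layer.
model = keras.Sequential([
    layers.Input(shape=(N_FEATURES,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(N_AOIS, activation="softmax"),
])

# "rmsprop" or "adagrad" can be swapped in here to compare the
# optimizers discussed above.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(X, y, epochs=10, batch_size=32, validation_split=0.2)

# Predict the AOI index for new eye-movement feature vectors.
predicted_aoi = model.predict(X[:5]).argmax(axis=1)
print(predicted_aoi)
```

Varying the number of Dense layers, their activation functions, and the optimizer in this sketch reproduces the kind of performance comparison described above.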
3.3 Visualization
In eye movement research, generally eye movements like fixations and saccades are visualized. The standard visualization techniques include fixation density maps, scan paths, and heat maps. Apart from the regular visualizations, additional visualizations are proposed that reveal the eye movement behavior with respect to the AOI. The histogram in figure 5 shows the number of observations inside the AOI and the number of observations outside the AOI. It can be observed from the histogram that the number of fixations outside the AOI in each image stimulus is always high compared to the number of fixations inside the AOI. Another histogram, in figure 6, shows the number of fixations in each AOI across all the image stimuli. The visualization suggests that fixations in white space and other unwanted parts of the image are high compared to the fixations in other AOIs such as numbers and alphabets. Figure 7 shows the visualization of the number of fixations inside and outside each AOI with respect to the estimated reaction time. As the reaction time increases, the fixations inside the AOI are fewer
compared to the fixations outside the AOI. This implies that the PD patients face difficulty in fixating within the AOI. The word cloud visualization in figure 8 depicts the names of the AOIs viewed by the participants. It is clear that the whitespaces in the image take most of the fixations, while the numbers that are the actual regions of interest are observed to be less fixated. The size of the word containing the name of an AOI represents how often it is fixated: the bigger the word in the word cloud, the more it is viewed by the participant. Even modern visualizations like the word cloud depict that PD patients view whitespaces more, as they find it difficult to fixate on the actual region of interest. General heatmaps are presented to understand the eye movements with respect to different AOIs by superimposing heat signatures that relate directly to fixations. More fixations are depicted as high-intensity color patches on the superimposed image.
Fig. 9: Heatmaps
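A sketch of the inside-versus-outside histogram in figure 5 could be produced as follows (matplotlib and pandas assumed; the per-fixation records are hypothetical placeholders for the derived dataset):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical per-fixation records: stimulus id and whether the
# fixation landed inside a marked AOI.
df = pd.DataFrame({
    "stimulus": ["s1", "s1", "s1", "s2", "s2", "s2", "s3", "s3"],
    "inside_aoi": [True, False, False, False, True, False, False, False],
})

# Count fixations inside vs. outside the AOI per image stimulus.
counts = (df.groupby(["stimulus", "inside_aoi"])
            .size()
            .unstack(fill_value=0))

counts.plot(kind="bar")
plt.xlabel("Image stimulus")
plt.ylabel("Number of fixations")
plt.title("Fixations inside vs. outside the AOI")
plt.legend(["Outside AOI", "Inside AOI"])
plt.tight_layout()
plt.show()
```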
6 Acknowledgement
References
1. Doğan, M., Metin, Ö., Tek, E., Yumuşak, S., Öztoprak, K.: Speculator and influ-
encer evaluation in stock market by using social media. In: 2020 IEEE International
Conference on Big Data (Big Data). pp. 4559–4566. IEEE (2020)
2. Guan, C., Liu, W., Cheng, J.Y.C.: Using social media to predict the stock market
crash and rebound amid the pandemic: the digital ‘haves’ and ‘have-mores’. Annals
of Data Science 9(1), 5–31 (2022)
3. Hiransha, M., Gopalakrishnan, E.A., Menon, V.K., Soman, K.: NSE stock market prediction using deep-learning models. Procedia Computer Science 132, 1351–1362 (2018)
4. Jiao, P., Veiga, A., Walther, A.: Social media, news media and the stock market.
Journal of Economic Behavior & Organization 176, 63–90 (2020)
5. Khan, W., Ghazanfar, M.A., Azam, M.A., Karami, A., Alyoubi, K.H., Alfakeeh,
A.S.: Stock market prediction using machine learning classifiers and social media,
news. Journal of Ambient Intelligence and Humanized Computing pp. 1–24 (2020)
6. Krishnan, S., Amudha, J., Tejwani, S.: Gaze exploration index (GE i): explainable detection model for glaucoma. IEEE Access 10, 74334–74350 (2022)
7. Kuttichira, D.P., Gopalakrishnan, E., Menon, V.K., Soman, K.: Stock price pre-
diction using dynamic mode decomposition. In: 2017 International Conference on
Advances in Computing, Communications and Informatics (ICACCI). pp. 55–60.
IEEE (2017)
8. Li, D., Wang, Y., Madden, A., Ding, Y., Tang, J., Sun, G.G., Zhang, N., Zhou, E.:
Analyzing stock market trends using social media user moods and social influence.
Journal of the Association for Information Science and Technology 70(9), 1000–
1013 (2019)
9. Menon, B., Nayar, R., Kumar, S., Cherkil, S., Venkatachalam, A., Surendran, K., Deepak, K.S.: Parkinson's disease, depression, and quality-of-life. Indian Journal of Psychological Medicine 37(2), 144–148 (2015), PMID: 25969597
10. Nair, B.B., Kumar, P., Prasad, S., Singh, L., Vijayalakshmi, K., Sai Ganesh, R.,
Reshma, J.: Forecasting short-term stock prices using sentiment analysis and ar-
tificial neural networks. Journal of Chemical and Pharmaceutical Sciences 9(1),
533–536 (2016)
11. Nair, B.B., Minuvarthini, M., Sujithra, B., Mohandas, V.: Stock market prediction
using a hybrid neuro-fuzzy system. In: 2010 International Conference on Advances
in Recent Technologies in Communication and Computing. pp. 243–247. IEEE
(2010)
12. Piñeiro-Chousa, J., Vizcaíno-González, M., Pérez-Pico, A.M.: Influence of social media over the stock market. Psychology & Marketing 34(1), 101–108 (2017)
13. Radhakrishnan, S., Menon, U.K., Sundaram, K., et al.: Usefulness of a modified questionnaire as a screening tool for swallowing disorders in Parkinson disease: A pilot study. Neurology India 67(1), 118 (2019)
14. Sasidharakurup, H., Melethadathil, N., Nair, B., Diwakar, S.: A systems model of Parkinson's disease using biochemical systems theory. OMICS: A Journal of Integrative Biology 21(8), 454–464 (2017)
15. Selvin, S., Vinayakumar, R., Gopalakrishnan, E., Menon, V.K., Soman, K.: Stock price prediction using LSTM, RNN and CNN-sliding window model. In: 2017 International Conference on Advances in Computing, Communications and Informatics (ICACCI). pp. 1643–1647. IEEE (2017)
16. Unnithan, N.A., Gopalakrishnan, E., Menon, V.K., Soman, K.: A data-driven
model approach for daywise stock prediction. In: Emerging Research in Electronics,
Computer Science and Technology, pp. 149–158. Springer (2019)
17. Venugopal, D., Amudha, J., Jyotsna, C.: Developing an application using eye
tracker. In: 2016 IEEE International Conference on Recent Trends in Electronics,
Information and Communication Technology (RTEICT). pp. 1518–1522 (2016)