
Proceedings of the Third International Conference on Inventive Research in Computing Applications (ICIRCA-2021)

IEEE Xplore Part Number: CFP21N67-ART; ISBN: 978-0-7381-4627-0

Multi Label Classification for Emotion Analysis of Autism Spectrum Disorder Children using Deep Neural Networks

T. Lakshmi Praveena, Research Scholar, Sri Padmavathi Viswa Vidyalayam, AP, India, [email protected]
N. V. Muthu Lakshmi, Assistant Professor, Sri Padmavathi Viswa Vidyalayam, AP, India, [email protected]

DOI: 10.1109/ICIRCA51532.2021.9545073

ABSTRACT

Emotion recognition and analysis is the process of identifying the emotions and feelings of a person. For typically developed people, emotion analysis can usually identify an expression accurately in a single attempt. It is much harder for children with Autism Spectrum Disorder (ASD), who often suffer from communication and speech problems. This paper proposes an optimized deep learning model with multi-label classification that predicts ASD/NoASD and performs emotion analysis for children aged 1 to 10 years. The Kaggle dataset [1] of 1857 ASD children and 1850 Typically Developed (TD) children is used in this paper. The performance of the proposed model is also tested on the Yale Expression Dataset [2], the CAFE children's dataset [3], and a social media dataset from an autism parents' group. The model extracts face landmarks, predicts ASD/NoASD as the first classification label, and detects emotion from the landmarks by computing internal and external distances feature-wise. A Convolutional Neural Network (CNN) operates on the extracted face landmarks using optimization methods, dropout, batch normalization and parameter updating. The proposed model predicts 6 emotions, rather than only the 4 general emotions, with better accuracy.

KEYWORDS

Facial emotion recognition, Face Landmarks, Multi Label Classification, Autism Spectrum Disorder, Convolutional Neural Networks, Deep Neural Networks.

I. INTRODUCTION

Facial emotions help us recognise the feelings and present state of a person and help us adjust our behaviour accordingly. The ability to produce facial expressions is therefore regarded as part of social communication and interaction ability [4].

Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder. Individuals with ASD show repetitive behaviours and a lack of social communication, and they face difficulty in identifying and understanding facial emotions [5]. Recent studies report that recognizing the facial emotions expressed by ASD individuals is also difficult [6]. Research on face processing in ASD has shown that ASD people are less expressive in the upper part of the face, i.e. the eye region stays relatively neutral while emotions are expressed [7]. The lower part of the face, namely the mouth, chin, jaw and cheeks, is therefore very important for recognizing emotion in ASD people. Kris Evers et al. [8] studied emotion recognition from the lower part of the face in ASD people by generating hybrid faces.

In this paper, a face emotion recognition model is proposed that analyses deep facial features to recognise the facial emotion and to predict ASD in the general population of children.

The remaining paper is organized into four sections. Section 2 provides a literature survey related to ASD and emotion recognition. Section 3 explains the proposed model and its architecture. Section 4 presents the implementation and results of the proposed model.

II. LITERATURE SURVEY


ASD is a neurodevelopmental disorder which affects the behavioural properties and social communication features of children [9]. Diagnosing ASD is comparatively more difficult than diagnosing other diseases because there is no clinical test for ASD. Two factors complicate the diagnosis: 1) the wide range of symptoms and subtypes, and 2) the dependence of behavioural properties on non-ASD characteristics such as cognition and activity [10]. Facial attributes such as emotions, arousal and action units are also used as biomarkers in predicting ASD [11].

The EmotioNet and AffectNet facial emotion image datasets were released in 2017. These large-scale datasets make it possible to train CNNs to predict emotion [12,13]. ASD has no comparable large-scale dataset, which makes it difficult to apply CNNs, especially for training on ASD data. Consequently, the data used in most ASD research is collected from autism centers and doctors, which is usually possible only for funded studies. The dataset used in the present research was obtained from kaggle.com; its author, Gerry [1], worked for one year to collect 1857 ASD and 1850 TD images from different web resources.

Face detection is the process of locating facial regions in an image and drawing a rectangle around each face region; it is commonly performed with a Haar cascade classifier and the Viola-Jones face detection algorithm [14]. Face recognition is the process of recognizing a face against a database of face images and typically relies on facial landmarks obtained with a frontal face predictor. The Dlib library, used together with OpenCV, provides a 68-landmark predictor that helps to recognize faces, extract facial features, and support emotion recognition [15]. Multi-label classification labels a data object with multiple classification results, where each label belongs to a different dependent attribute and each label has multiple classes [18]. Machine Learning (ML) algorithms generally provide single-label classification models, whereas deep learning models are efficient for multi-label classification.
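As an illustration of the detection and landmark step described above, the following sketch detects a face with OpenCV's Haar cascade classifier and extracts the 68 Dlib landmarks. The cascade and landmark model file names are the standard distributed ones and are assumptions here, since the paper does not name the exact files it used.

import cv2
import dlib

# Standard OpenCV Haar cascade (Viola-Jones style detector); file name is assumed.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
# Dlib's 68-point landmark model; downloaded separately (assumed file name).
landmark_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_landmarks(image_path):
    """Return a list of 68 (x, y) landmark tuples for each face found in the image."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    all_landmarks = []
    for (x, y, w, h) in faces:
        rect = dlib.rectangle(int(x), int(y), int(x + w), int(y + h))
        shape = landmark_predictor(gray, rect)
        all_landmarks.append([(shape.part(i).x, shape.part(i).y) for i in range(68)])
    return all_landmarks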
Recent studies state that facial emotions, attributes and features play an important role in predicting ASD and in understanding the behaviour of ASD children [4,5]. Advancements in technologies such as Artificial Intelligence, machine learning and deep learning are improving research on ASD. Egger et al. performed research on head pose and expression analysis to analyse ASD in children [16]. Rudovic et al. studied facial landmarks and body position combined with audio and bio-signals to study the state and behaviour of ASD children [17].

III. PROPOSED METHOD

A. Dataset Description

The dataset used for this research is collected from Kaggle.com. It was uploaded by Gerry Piosenka [1] in April 2020 and contains facial images of autistic (n=1857) and non-autistic (n=1850) children with different expressions. The images are RGB images of dimension 224x224x3 and cover boys and girls in the age group of 1 to 10 years. The dataset is divided into an 80% training set and a 20% test set, with random selection used to choose the training and test data.
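A minimal sketch of such a random 80/20 split is shown below; the folder names and the fixed random seed are assumptions, since the paper only states the split ratio and that the selection is random.

import os
import random

def split_dataset(root_dir, train_ratio=0.8, seed=42):
    """Randomly split the image paths of each class into training and test sets."""
    random.seed(seed)  # fixed seed is an assumption, used only for repeatability
    train, test = [], []
    for label in ("autistic", "non_autistic"):  # hypothetical folder names
        folder = os.path.join(root_dir, label)
        paths = [os.path.join(folder, f) for f in os.listdir(folder)]
        random.shuffle(paths)
        cut = int(train_ratio * len(paths))
        train += [(p, label) for p in paths[:cut]]
        test += [(p, label) for p in paths[cut:]]
    return train, test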


B. Proposed model architecture

The proposed model has four stages; the overall process is given by the following algorithm (illustrative sketches of the feature extraction and network stages are given after the list):

1. Initial stage:
   a. Pre-processing stage which accepts images from the dataset
   b. Resizes the images and applies Sobel filtering
   c. Divides the dataset into training and test sets.
2. Face Detection stage:
   a. Converts the images to gray-scale images
   b. Detects the face region using the Haar cascade classifier.
3. Feature Extraction stage:
   a. Extracts the landmarks of the required features
   b. Calculates the external distance between the face central point and the landmarks of the facial features
   c. Calculates the internal distance between each feature's central point and that feature's landmarks
   d. Applies linear normalization to the calculated distances
   e. Calculates the angles between landmarks
   f. Finally vectorises the calculated values.
4. Deep Neural Network model for multi-label classification, predicting ASD/Non-ASD and detecting emotions.
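The sketch below illustrates the feature extraction stage (steps 3b-3f) on the 68 landmarks returned by the earlier detection sketch. The grouping of landmark indices into facial features, the use of min-max scaling as the linear normalization, and the choice of angles measured around the face centre are assumptions; the paper does not give the exact formulas.

import math
import numpy as np

# Conventional grouping of the 68 Dlib landmark indices into facial features (assumed).
FEATURES = {
    "jaw": range(0, 17), "right_brow": range(17, 22), "left_brow": range(22, 27),
    "nose": range(27, 36), "right_eye": range(36, 42), "left_eye": range(42, 48),
    "mouth": range(48, 68),
}

def linear_normalize(values):
    """Min-max (linear) normalization of a list of distances to the range [0, 1]."""
    v = np.asarray(values, dtype=np.float32)
    return (v - v.min()) / (v.max() - v.min() + 1e-8)

def feature_vector(landmarks):
    """Build one input vector from the 68 (x, y) landmarks of a single face."""
    pts = np.asarray(landmarks, dtype=np.float32)
    face_center = pts.mean(axis=0)
    external, internal, angles = [], [], []
    for indices in FEATURES.values():
        feat_pts = pts[list(indices)]
        feat_center = feat_pts.mean(axis=0)
        # External distances: face central point to each landmark of the feature.
        external.extend(np.linalg.norm(feat_pts - face_center, axis=1))
        # Internal distances: feature central point to its own landmarks.
        internal.extend(np.linalg.norm(feat_pts - feat_center, axis=1))
        # Angle of each landmark around the face central point.
        angles.extend(math.atan2(p[1] - face_center[1], p[0] - face_center[0])
                      for p in feat_pts)
    return np.concatenate([linear_normalize(external),
                           linear_normalize(internal),
                           np.asarray(angles, dtype=np.float32)])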

Table 1: Deep Neural Network model architecture

Layer      Repeat   Size
Dense1     1        1024*1024
Dense2     1        1024*1024
Dropout1   1        0.5
Dropout2   1        0.5
Dense1     1        256*256
Dense2     1        256*256
Dropout1   1        0.5
Dropout2   1        0.5
Dense1     1        128*128
Dense2     1        128*128
Dropout1   1        0.5
Dropout2   1        0.5
Dense1     1        16*16
Dense2     1        16*16
Linear1    1        1*1
Linear2    1        1*6
Output1             CASD/NOASD
Output2             CHappy, CSad, CFear, CNeutral, CSurprise, CAngry

Figure 1. Deep Neural Network model implementation diagram (processed data passes through 1024x1024 and 256x256 dense blocks into two heads: a 1x1 output for CASD/NOASD and a 1x6 output for CHappy, CSad, CFear, CNeutral, CSurprise and CAngry).
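A minimal Keras sketch of a network following Table 1 is given below. The activation functions, loss functions and optimizer are assumptions, since Table 1 only lists layer sizes, dropout rates and the two output heads, and the sigmoid/softmax heads stand in for the "Linear" output layers of the table so that the example is a working classifier.

from tensorflow.keras import layers, Model

def build_multilabel_model(input_dim):
    """Two-headed dense network loosely following Table 1."""
    inputs = layers.Input(shape=(input_dim,), name="landmark_features")
    x = inputs
    for units in (1024, 256, 128):
        # Each block in Table 1 lists two dense layers followed by two dropout layers.
        x = layers.Dense(units, activation="relu")(x)
        x = layers.Dense(units, activation="relu")(x)
        x = layers.BatchNormalization()(x)  # batch normalization as mentioned in the abstract
        x = layers.Dropout(0.5)(x)
        x = layers.Dropout(0.5)(x)
    x = layers.Dense(16, activation="relu")(x)
    x = layers.Dense(16, activation="relu")(x)
    # Two output heads: 1 unit for CASD/NOASD and 6 units for the emotion classes.
    asd_output = layers.Dense(1, activation="sigmoid", name="asd")(x)
    emotion_output = layers.Dense(6, activation="softmax", name="emotion")(x)
    model = Model(inputs=inputs, outputs=[asd_output, emotion_output])
    model.compile(optimizer="adam",
                  loss={"asd": "binary_crossentropy",
                        "emotion": "categorical_crossentropy"},
                  metrics={"asd": "accuracy", "emotion": "accuracy"})
    return model

With the feature vectors stacked into an array X and the two label arrays prepared, training would follow the usual pattern, e.g. model = build_multilabel_model(X.shape[1]) followed by history = model.fit(X, {"asd": y_asd, "emotion": y_emotion}, validation_split=0.2, epochs=50); the variable names here are hypothetical.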

IV. RESULTS

The input vectorised data is created with the help of the Dlib and OpenCV libraries. The deep neural network model is created using the TensorFlow and Keras APIs in Google Colab. Fig. 2 shows the lines drawn for the internal and external distances to the landmarks.

Figure 2. Internal and external distances between central points and landmarks.

Figure 3. Accuracy of the training set and loss of the validation set.
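Curves like those in Figure 3 can be produced from the history object returned by model.fit; the sketch below assumes the training call from the previous section and standard Keras history key names, which depend on how the outputs and metrics are named.

import matplotlib.pyplot as plt

# `history` is assumed to come from model.fit(..., validation_split=0.2).
# Inspect history.history.keys() to confirm the exact key names for a multi-output model.
plt.plot(history.history["emotion_accuracy"], label="training accuracy (emotion head)")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.legend()
plt.show()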
Sample result:

Training has completed. Now loading test set to see how accurate the model is
Model accuracy on Test Set is 40.00 %
[5.765913486480713, 0.4000000059604645]
Instructions for updating:
Please use Model.predict, which supports generators.
Emotion values
[[0.68672776 0.07596081 0.06341558 0.12819432 0.04004677 0.00565484]
 [0.71876657 0.04922354 0.00457868 0.16637465 0.06024956 0.00080704]
 [0.18121372 0.01501627 0.69325703 0.08381665 0.02432744 0.00236885]
 [0.40790105 0.02882759 0.00675793 0.33749518 0.21814582 0.00087247]


 [0.6686394  0.19261755 0.02365685 0.10405558 0.00775144 0.00327908]]
ASD status is True
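Console output of this form corresponds to the usual Keras evaluation and prediction calls; the following sketch shows how such values could be printed, assuming the model and test arrays from the sketches above (all variable names are hypothetical, and the printed numbers will not reproduce the paper's values).

# `model`, `x_test`, `y_asd_test` and `y_emotion_test` are assumed to exist
# from the earlier sketches; they are not provided by the paper.
scores = model.evaluate(x_test, {"asd": y_asd_test, "emotion": y_emotion_test}, verbose=0)
print("Model accuracy on Test Set is {:.2f} %".format(scores[-1] * 100))

asd_prob, emotion_prob = model.predict(x_test)
print("Emotion values")
print(emotion_prob[:5])  # one row of six class probabilities per test image
print("ASD status is", bool(asd_prob[0, 0] > 0.5))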
Table 2 lists some images from the dataset and the results predicted by the model. ASD is a binary classification label, and the emotions are classified by another CNN model with six emotion classes.

Table 2: Predicted results of multi label classification

Image     CASD/NOASD   CHappy, CSad, CFear, CNeutral, CSurprise, CAngry
(image)   ASD          Predicted emotion: Sad; Actual emotion: Sad
(image)   ASD          Predicted emotion: Angry; Actual emotion: Angry
(image)   ASD          Predicted emotion: Fear; Actual emotion: Fear
(image)   ASD          Predicted emotion: Surprise; Actual emotion: Surprise
(image)   ASD          Predicted emotion: Happy; Actual emotion: Happy
(image)   ASD          Predicted emotion: Neutral; Actual emotion: Neutral

V. CONCLUSION AND FUTURE WORK

The present paper proposed a Deep Neural Network model with multi-label classification to predict ASD/NoASD and facial emotion in ASD and NoASD children. The proposed model is effective and reliable. The model can also be used to extract different features and attributes from facial images of autistic children, such as action units, arousal and valence. Facial attributes play an important role in predicting ASD and in understanding the behaviour of children.

Acknowledgement: The dataset used in this paper is collected from web sources [1]. I am thankful to the author of the dataset and recommend it to other autism researchers.

REFERENCES

[1] Gerry (2020, May). Autism Children Data Set, Version 9. Retrieved May 20, 2020 from https://www.kaggle.com/gpiosenka/autistic-children-data-set-traintestvalidate.
[2] P. N. Belhumeur, J. Hespanha, and D. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Transactions on Pattern Analysis and Machine Intelligence, Special Issue on Face Recognition, vol. 17, no. 7, pp. 711-720, 1997.
[3] LoBue, V. & Thrasher, C. (2015). The Child Affective Facial Expression (CAFE) Set: Validity and reliability from untrained adults. Frontiers in Emotion Science, 5.
[4] I. Nachson, "On the modularity of face recognition: the riddle of domain specificity," Journal of Clinical and Experimental Neuropsychology, vol. 17, no. 2, pp. 256-275, 1995.
[5] American Psychiatric Association, Diagnostic and Statistical Manual of Mental Disorders, Washington, DC, USA, 4th edition, 2000.
[6] S. Kuusikko, H. Haapsamo, E. Jansson-Verkasalo et al., "Emotion recognition in children and adolescents with autism spectrum disorders," Journal of Autism and Developmental Disorders, vol. 39, no. 6, pp. 938-945, 2009.
[7] S. Weigelt, K. Koldewyn, and N. Kanwisher, "Face identity recognition in autism spectrum disorders: a review of behavioural studies," Neuroscience and Biobehavioral Reviews, vol. 36, no. 3, pp. 1060-1084, 2012.
[8] Kris Evers et al., "No differences in face emotion recognition strategies in children with ASD: Evidence from hybrid faces," Autism Research and Treatment, Hindawi, vol. 2014, Article ID 345878, 8 pages, 2014. http://dx.doi.org/10.1155/2014/345878.
[9] A. Ting Wang, Mirella Dapretto, Ahmad R. Hariri, Marian Sigman, and Susan Y. Bookheimer, "Neural correlates of facial affect processing in children and adolescents with autism spectrum disorder," Journal of the American Academy of Child & Adolescent Psychiatry, vol. 43, no. 4, pp. 481-490, 2004.
[10] Tony Charman, "Variability in neurodevelopmental disorders: evidence from autism spectrum disorders," in Neurodevelopmental Disorders, 2014.
[11] E. Loth, L. Garrido, J. Ahmad, E. Watson, A. Duff, and B. Duchaine, "Facial expression recognition as a candidate marker for autism spectrum disorder: how frequent and severe are deficits?," Molecular Autism, 2018.
[12] A. Mollahosseini, B. Hasani, and M. H. Mahoor, "AffectNet: A database for facial expression, valence, and arousal computing in the wild," IEEE Transactions on Affective Computing, 2018.
[13] C. Fabian Benitez-Quiroz, Ramprakash Srinivasan, and Aleix M. Martinez, "EmotioNet: An accurate, real-time algorithm for the automatic annotation of a million facial expressions in the wild," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 5562-5570.
[14] Zhanpeng Zhang, Ping Luo, Chen Change Loy, and Xiaoou Tang, "Facial landmark detection by deep multi-task learning," in European Conference on Computer Vision, Springer, 2014, pp. 94-108.
[15] Momin H., Tapamo J. R. (2011) Automatic Detection of Face and Facial Landmarks for Face Recognition. In: Kim T., Adeli H., Ramos C., Kang B. H. (eds) Signal Processing, Image Processing and Pattern Recognition. SIP 2011. Communications in Computer and Information Science, vol 260. Springer, Berlin, Heidelberg.
[16] Helen L. Egger, Geraldine Dawson, Jordan Hashemi, Kimberly L. H. Carpenter, Steven Espinosa, Kathleen Campbell, Samuel Brotkin, Jana Schaich-Borg, Qiang Qiu, Mariano Tepper, et al., "Automatic emotion and attention analysis of young children at home: a ResearchKit autism feasibility study," npj Digital Medicine, vol. 1, no. 1, p. 20, 2018.
[17] Ognjen Rudovic, Jaeryoung Lee, Miles Dai, Bjorn Schuller, and Rosalind Picard, "Personalized machine learning for robot perception of affect and engagement in autism therapy," Science Robotics, vol. 3, doi: 10.1126/scirobotics.aao6760, 2018.
[18] de Carvalho A.C.P.L.F., Freitas A.A. (2009) A Tutorial on Multi-label Classification Techniques. In: Abraham A., Hassanien A. E., Snášel V. (eds) Foundations of Computational Intelligence Volume 5. Studies in Computational Intelligence, vol 205. Springer, Berlin, Heidelberg.
[19] Maxwell, A., Li, R., Yang, B. et al. Deep learning architectures for multi-label classification of intelligent health risk prediction. BMC Bioinformatics 18, 523 (2017). https://doi.org/10.1186/s12859-017-1898-z
