
Hindawi

Computational and Mathematical Methods in Medicine


Volume 2023, Article ID 9896070, 1 page
https://doi.org/10.1155/2023/9896070

Retraction
Retracted: Facial Features Detection System to Identify Children with Autism Spectrum Disorder: Deep Learning Models

Computational and Mathematical Methods in Medicine


Received 31 October 2023; Accepted 31 October 2023; Published 1 November 2023

Copyright © 2023 Computational and Mathematical Methods in Medicine. This is an open access article distributed under the
Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.

This article has been retracted by Hindawi following an investigation undertaken by the publisher [1]. This investigation has uncovered evidence of one or more of the following indicators of systematic manipulation of the publication process:

(1) Discrepancies in scope

(2) Discrepancies in the description of the research reported

(3) Discrepancies between the availability of data and the research described

(4) Inappropriate citations

(5) Incoherent, meaningless and/or irrelevant content included in the article

(6) Peer-review manipulation

The presence of these indicators undermines our confidence in the integrity of the article's content, and we cannot, therefore, vouch for its reliability. Please note that this notice is intended solely to alert readers that the content of this article is unreliable. We have not investigated whether the authors were aware of or involved in the systematic manipulation of the publication process.

Wiley and Hindawi regret that the usual quality checks did not identify these issues before publication and have since put additional measures in place to safeguard research integrity.

We wish to credit our own Research Integrity and Research Publishing teams and anonymous and named external researchers and research integrity experts for contributing to this investigation.

The corresponding author, as the representative of all authors, has been given the opportunity to register their agreement or disagreement to this retraction. We have kept a record of any response received.

References

[1] Z. A. T. Ahmed, T. H. H. Aldhyani, M. E. Jadhav et al., "Facial Features Detection System to Identify Children with Autism Spectrum Disorder: Deep Learning Models," Computational and Mathematical Methods in Medicine, vol. 2022, Article ID 3941049, 9 pages, 2022.
Hindawi
Computational and Mathematical Methods in Medicine
Volume 2022, Article ID 3941049, 9 pages
https://doi.org/10.1155/2022/3941049

Research Article
Facial Features Detection System to Identify Children with Autism Spectrum Disorder: Deep Learning Models

Zeyad A. T. Ahmed,1 Theyazn H. H. Aldhyani,2 Mukti E. Jadhav,3 Mohammed Y. Alzahrani,4 Mohammad Eid Alzahrani,5 Maha M. Althobaiti,6 Fawaz Alassery,7 Ahmed Alshaflut,8 Nouf Matar Alzahrani,8 and Ali Mansour Al-madani1

1 Department of Computer Science, Dr. Babasaheb Ambedkar Marathwada University, India
2 Applied College in Abqaiq, King Faisal University, P.O. Box 400, Al-Ahsa 31982, Saudi Arabia
3 Shri Shivaji Science & Arts College, Chikhli Dist. Buldana, India
4 Department of Computer Sciences and Information Technology, Albaha University, Albaha, P.O. Box 1988, Saudi Arabia
5 Department of Engineering and Computer Science, Al Baha University, Albaha, P.O. Box 1988, Saudi Arabia
6 Department of Computer Science, College of Computing and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
7 Department of Computer Engineering, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
8 College of Computer Science and Information Technology, Albaha University, Albaha, P.O. Box 1988, Saudi Arabia

Correspondence should be addressed to Theyazn H. H. Aldhyani; [email protected]

Received 30 November 2021; Revised 27 December 2021; Accepted 29 December 2021; Published 4 April 2022

Academic Editor: Deepika Koundal

Copyright © 2022 Zeyad A. T. Ahmed et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Autism spectrum disorder (ASD) is a neurodevelopmental disorder associated with brain development that subsequently affects the physical appearance of the face. Autistic children have different patterns of facial features, which set them distinctively apart from typically developed (TD) children. This study aims to help families and psychiatrists diagnose autism using an easy technique: a deep learning-based web application for detecting autism based on experimentally tested facial features, using a convolutional neural network with transfer learning and the Flask framework. MobileNet, Xception, and InceptionV3 were the pretrained models used for classification. The facial images were taken from a publicly available dataset on Kaggle, which consists of 3,014 facial images of a heterogeneous group of children, i.e., 1,507 autistic children and 1,507 nonautistic children. On the validation data, MobileNet reached 95% classification accuracy, Xception achieved 94%, and InceptionV3 attained 89%.

1. Introduction

Developmental disorders are chronic disabilities that have a critical impact on many people's daily performance. The number of people with autism is increasing around the world, which leads families to have increased anxiety about their children and requires an examination to ensure the health of their children if they are autistic. An autistic patient needs special care in order to develop perceptual skills to communicate with their family and society. When an autistic patient is diagnosed at an early stage, the results of behavioral therapy will be more effective. Scientists at the University of Missouri, who studied the diagnosis of children using facial features [1], found that autistic children share common distinctive facial features
compared to children without autism (nonautistic children). Those features are an unusually broad upper face, including wide-set eyes, and a shorter middle region of the face, including the cheeks and nose. Figure 1 shows the differences in facial features between children with autism in the first row and nonautistic children in the second row. Those images were taken from the Kaggle database, which was used in this study.

Recognizing developmental disorders from the image of a face is a challenge in the field of computer vision. Autism spectrum disorder (ASD) is a neurological problem that affects the child's brain from an early age. Autistic children suffer from difficulties in speech and social interaction, including difficulty establishing the correct visual focus of attention [1]. Those problems can be detected after 36 months. The challenge is how to provide an accurate diagnosis. Artificial intelligence is characterized by speed of decision-making and a reduction in the time required to make a diagnosis compared to traditional diagnostic methods for detecting autism in the early stage of life. The process of making an autism diagnosis is manual, and it is based on psychologists observing the behavior of a child for a long time. Sometimes this task takes more than two sessions. The development of technology has enabled the development of screening and diagnostic mechanisms to diagnose autism. The development of artificial intelligence has led to its increased use in the field of health and medical care, and researchers are very engaged in developing methods to diagnose ASD and detect autism at an early age using different technologies [2], such as brain MRI, eye tracking, electroencephalogram (EEG) signals, and eye contact [3]. A few researchers have used facial features to detect autism. Examples of differences in facial features are presented in Figure 1.

Currently, artificial intelligence plays a significant role in developing computer-aided diagnosis of autism [4–11] and, moreover, in developing interactive systems to assist in the reintegration and treatment of autistic patients [12–14]. A number of researchers have applied machine learning algorithms to classify disorders such as schizophrenia [15–18], dementia, depression [19], autism [20–24], ADHD [25], and Alzheimer's [26] using MRI.

The main objective of this study was to develop a deep learning-based web application using a convolutional neural network with transfer learning to identify autistic children using facial features. This application was designed to help families and psychiatrists to diagnose autism easily and efficiently. The current study applied one of the most popular deep learning algorithms, called CNN, to the Kaggle dataset. The dataset consists of images of faces divided into two classes: autistic and nonautistic children. The face of a human being has many features that can be used to recognize identity, behavior, emotion, age, and gender. The researchers of this study used facial features to develop a deep learning model to classify autistic and nonautistic children based on their faces.

The diagnosis of autism can be done using different techniques, such as brain MRI, eye-tracking, EEG, and eye contact [3]. Figure 2 shows autism detection techniques. This section reviews some previous studies that used facial features to detect autism. The methodologies are based on three perspectives. First, studies based on the facial features from the image of a face: Praveena and Lakshmi [2], Yang [27], and Beary et al. [28] developed deep learning models using online images of faces of autistic children and nonautistic children. Grossard et al. [29] found a different pattern distributed in the eyes and mouth in the image data of two groups: ASD and TD children. Yang [27] used a pretraining model, which achieved 94% accuracy on validation data. Grossard et al. [29] achieved 90% accuracy with their classifier model. Beary et al. [28] used the VGG19 pretraining model, which achieved 84% accuracy on validation data. Haque and Valles [30] implemented a ResNet50 pretraining model on a small dataset, which consisted of 19 TD and 20 ASD children, and achieved 89.2% accuracy. Some different techniques that have been used to detect autism are presented in Figure 2.

2. Materials and Methods

Children with autism have different patterns of facial features, and the distance between facial landmarks and the width of landmarks is different compared to TD children. Figure 3 displays the framework of the proposed system to identify autism from different human faces. Deep learning techniques are used to extract these subtle facial features and classify children into autistic and nonautistic groups. This study is based on CNN, which can learn the essential shapes in the first layers and improve the image's learning features in the deeper layers, to access accurate image classification. In other words, the feature extraction is done automatically through the CNN layers [31]. The advantages of the pretraining model are reducing the time needed to build and train the model from scratch. It is also good for achieving high accuracy with a small dataset, such as the one used in this study.

2.1. Dataset. This paper used a publicly available dataset from Kaggle [32]. This dataset consists of 3,014 images of faces, 1,507 of autistic and 1,507 of nonautistic children, as described in Figure 4. The images of the faces of autistic children were collected from online sources related to autism disorder, and the images of the faces of nonautistic children were randomly collected from the Internet. Figure 5 shows numbers of examples used to evaluate the proposed system.

2.2. Preprocessing. The preprocessing of the dataset was done by the dataset creator to remove duplicated images and crop the images to show only the face, as shown in Figure 2. The dataset was then divided into three categories: 2,654 images for training, 280 for validation, and 80 for testing. After obtaining the data, the Keras preprocessing ImageDataGenerator normalized the dataset by a rescaling parameter, which scaled all the images in the dataset from pixel values in [0, 255] to [0, 1]. The reason for normalizing was to make the dataset ready for training using the CNN model.

2.3. Convolutional Neural Network Models. This section gives an overview of the convolutional neural network architecture and the CNN pretraining models.
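The normalization described in Section 2.2 can be sketched with Keras. Only the rescaling from [0, 255] to [0, 1], the 224×224 input size, and the batch size of 80 come from the paper; the directory layout and function name below are assumptions for illustration:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values from [0, 255] to [0, 1], as described in Section 2.2.
datagen = ImageDataGenerator(rescale=1.0 / 255)

def make_batches(directory, batch_size=80):
    """Yield normalized 224x224 image batches from a directory containing
    one subfolder per class (e.g. autistic/ and nonautistic/)."""
    return datagen.flow_from_directory(
        directory,
        target_size=(224, 224),    # input size of the pretrained models
        class_mode="categorical",  # two classes: ASD / TD
        batch_size=batch_size,     # batch size reported in Section 3
    )
```

A generator configured this way applies the same rescaling to every image it serves, so no separate normalization pass over the dataset is needed.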
Figure 1: The differences in facial features between children with autism in the first row and children without autism in the second row.

Figure 2: Autism detection techniques (facial features: facial landmarks and facial expressions; robot-assisted assessment; EEG signals; eye tracking; eye contact; questionnaires; autism apps: smartphone apps, monitoring the behavior, and action recognition; brain MRI).

The convolutional layer is an important part of the CNN model that is used for feature extraction. The first convolutional layer learns basic features: edges and lines. The next layer extracts the features of squares and circles. More complex features are extracted in the following layers, including the face, eyes, and nose. The basic components of the CNN model are the input layer, convolutional layer, activation function, pooling layer, fully connected layer, and output prediction.

2.3.1. Pooling Layer. When the CNN has a number of hidden layers, the depth of the output increases, so the parameters of the CNN become large, requiring optimization of the space complexity of the mathematical operations and the time of the learning process. The pooling layer reduces the number of parameters in the output from the convolution layer using a summary statistic. There are two types of pooling layers: Max-pooling and Average-pooling. Max-pooling takes the maximum value in each window in the stride,
Figure 3: The architecture of the proposed application (a camera captures the face, which is passed to the deep learning model behind the web application).

Figure 4: Bar chart showing the number of pictures used to detect autism from the facial image dataset (ASD: 1,507; TD: 1,507).
as shown in Figure 6. Average-pooling calculates the mean value of each window in the stride. In this case, the input to the next layer reduces the feature map, as shown in Figure 5.

2.3.2. Fully Connected Layer. The fully connected layer receives the output from the convolutional layer and pooling layer, which holds all the features, and puts them in a long tube. Figure 7 shows the fully connected layer.

2.3.3. Activation Function. The activation function classifies the output. The SoftMax activation function classifies the output when the number of classes is two or more. In this study, there are two classes: ASD and TD. The sigmoid activation function can be used with binary classification problems. The activation function is calculated using formula (1). This formula is the slope and intercept of a line. Figure 8 shows a diagram of the activation function.

y = x1 · w1 + x2 · w2 + b, (1)

where x is the training data and w is the weights of the neural network. For output prediction, the results of the CNN model, after training, predict the input image (ASD or TD) in our study.

2.4. Pretraining Models. This study implemented different CNN pretraining models for autism detection using images of faces. Then, the results were compared to select the best performing model.

Transfer learning uses the architecture and the weights of a model that was trained on a large database and achieved high accuracy; then, it is used to solve a similar problem. A pretrained model sometimes takes a long time to train; it can take weeks using a large dataset and high-configuration computers with GPUs. For example, the MobileNet, InceptionResNetV2, and InceptionV3 models [31] are trained on the ImageNet database for image recognition. The ImageNet database [32] consists of 14 million images of different objects. There are several pretraining models and databases for image recognition. To use transfer learning, fine-tuning should be applied according to the following steps. First, one should select a model similar to one's problem. Second, one should decide how many classes one intends to have as output. In this dataset, the problem needed only two classes (ASD and TD), as shown in Figure 3. Third, the network architecture of the model is used as a fixed feature extractor for the dataset. After that, one should initialize all the added weights randomly and train the model according to the dataset (in this case ASD and TD). The input image size is 224×224×3. The model customization uses the model's architecture and weights, adding two dense layers on top. The first layer has 1,024 neurons and the ReLU activation function. A kernel regularizer, L2 (0.015), is used to reduce the weights (use_bias is false). The dense layer is followed by a dropout layer with rate 0.4. The second layer, for the output prediction, uses the Softmax function.

In this study, the output of this model was autism/nonautism. The RMSprop optimizer was used to reduce the model output error during training by adjusting the custom parameters that were added to the top of the model. Various strategies were used to avoid overfitting of the deep learning models, such as batch-normalization and dropout (0.4), but
Figure 5: The architecture of the proposed model (this image is from Zeyad A. T. Ahmed). Input images pass through preprocessing (face detection to crop, normalization, image size 224×224×3) and then training: the pretraining model, GlobalAveragePooling2D, batch normalization, a fully connected layer (Dense 1024, ReLU), and a Softmax output giving ASD or TD.
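The customization described in Section 2.4 and sketched in Figure 5 (a frozen MobileNet base, global average pooling, batch normalization, a Dense(1024) layer with L2(0.015) regularization and no bias, dropout of 0.4, and a Softmax output trained with RMSprop) might look like the following; the function name and the `weights` switch are illustrative, not from the paper:

```python
from tensorflow.keras import Model, layers, optimizers, regularizers
from tensorflow.keras.applications import MobileNet

def build_classifier(weights="imagenet"):
    """Sketch of the paper's fine-tuned model: a MobileNet base used as a
    fixed feature extractor, plus the two dense layers of Section 2.4."""
    base = MobileNet(include_top=False, weights=weights,
                     input_shape=(224, 224, 3))
    base.trainable = False  # use the pretrained network as a fixed extractor
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.BatchNormalization()(x)
    x = layers.Dense(1024, activation="relu", use_bias=False,
                     kernel_regularizer=regularizers.l2(0.015))(x)
    x = layers.Dropout(0.4)(x)  # overfitting control, as in the paper
    out = layers.Dense(2, activation="softmax")(x)  # ASD vs. TD
    model = Model(base.input, out)
    model.compile(optimizer=optimizers.RMSprop(),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```

Swapping `MobileNet` for `Xception` or `InceptionV3` reproduces the other two configurations, since the paper applies the same head and optimizer to all three bases.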
Figure 6: Convolutional layer with Max-pooling. A 4×4 input
[[1, 1, 2, 4], [5, 9, 7, 6], [3, 5, 1, 0], [1, 2, 8, 4]]
max-pooled with a 2×2 filter and 2 strides gives [[9, 7], [5, 8]].
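The 4×4 example recoverable from Figure 6 can be checked with a few lines of NumPy; the reshape trick below assumes the window size equals the stride, as in the figure:

```python
import numpy as np

# The 4x4 input from Figure 6.
x = np.array([[1, 1, 2, 4],
              [5, 9, 7, 6],
              [3, 5, 1, 0],
              [1, 2, 8, 4]])

# 2x2 windows with stride 2: split the matrix into non-overlapping
# blocks, then take the max (Max-pooling) or the mean (Average-pooling).
blocks = x.reshape(2, 2, 2, 2)
max_pooled = blocks.max(axis=(1, 3))   # [[9, 7], [5, 8]], as in Figure 6
avg_pooled = blocks.mean(axis=(1, 3))  # [[4.0, 4.75], [2.75, 3.25]]
```

Either way, the 4×4 feature map shrinks to 2×2, which is exactly the parameter reduction the pooling layer is introduced for in Section 2.3.1.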
Figure 7: Fully connected layer.

the models' performance did not improve. To conclude, the best option was to use the early stopping strategy to monitor the validation loss with a patience of ten, stop, and save the best performance.

3. Results

The models were trained in the cloud using the Google Colab environment with the Python language, which supports the most popular libraries for deep learning, e.g., TensorFlow and Keras. The declared number of epochs was 100, with a batch size of 80. The results of the MobileNet, Xception, and InceptionV3 models are presented in Table 1. However, with a patience of ten, the training stopped at 33 epochs under the early stopping strategy, and the best performance was saved after training with the MobileNet model. The model achieved 95% accuracy on the validation data and 100% on the training data. Figures 9 and 10 plot the model's training accuracy, validation accuracy, training loss, and validation loss. The other two pretraining models (Xception and InceptionV3) were applied to the same dataset, using the same fine-tuning of the layers and optimizer. The accuracy results on the validation data were 94% for Xception and 89% for InceptionV3.

3.1. Evaluation Metrics. In this study, several metrics were used to evaluate the classifiers of the models, including
Figure 8: Node with activation function (inputs x1 and x2, weights w1 and w2, bias b, summation, activation function, output y).
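Formula (1) and the node of Figure 8 amount to a weighted sum plus a bias; a tiny worked instance, with illustrative numbers and the sigmoid mentioned in Section 2.3.3 for binary classification:

```python
import math

def node(x1, x2, w1, w2, b):
    """Formula (1): weighted sum of the two inputs plus a bias."""
    return x1 * w1 + x2 * w2 + b

def sigmoid(y):
    """Squashes the node output into (0, 1) for binary classification."""
    return 1.0 / (1.0 + math.exp(-y))

y = node(x1=0.5, x2=0.25, w1=2.0, w2=4.0, b=-2.0)  # 0.5*2 + 0.25*4 - 2 = 0.0
p = sigmoid(y)                                      # sigmoid(0) = 0.5
```

With these illustrative weights the weighted sum lands exactly at zero, the decision boundary, where the sigmoid outputs 0.5 for either class.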

Table 1: The results of the tested models.

Model         Accuracy   Sensitivity   Specificity
MobileNet     0.95       0.97          0.93
Xception      0.94       0.92          0.95
InceptionV3   0.89       0.95          0.83
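The training configuration reported in Section 3 (100 declared epochs, batch size 80, early stopping on the validation loss with a patience of ten, keeping the best weights) can be sketched with a Keras callback; the commented `fit` call assumes a compiled model and data generators defined elsewhere:

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop once the validation loss has not improved for ten epochs and
# keep the best weights seen so far, as described in Section 3.
early_stop = EarlyStopping(monitor="val_loss", patience=10,
                           restore_best_weights=True)

# Illustrative fit call (model and batch generators as defined elsewhere):
# model.fit(train_batches, validation_data=valid_batches,
#           epochs=100, batch_size=80, callbacks=[early_stop])
```

Under this configuration a run that stops at epoch 33, as reported for MobileNet, means the validation loss last improved at epoch 23.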
Figure 9: Model accuracy (training and validation accuracy over the training epochs).

Figure 10: Model loss (training and validation loss over the training epochs).

Figure 11: Confusion matrix (true label ASD: 131 predicted ASD, 9 predicted TD; true label TD: 4 predicted ASD, 136 predicted TD).

Figure 12: The home page of the autism web app ("Detection of autism spectrum disorder based on facial recognition").

Figure 13: The autism detection web page (example result: autism 0.97, nonautism 0.03).
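The deployment described in Section 3.3 and shown in Figures 12 and 13 could be sketched with Flask as below. The route names, the inline HTML, and the injected `predict_fn` are assumptions for illustration; in the paper's app the prediction function would wrap the trained MobileNet model:

```python
from flask import Flask, request

def create_app(predict_fn):
    """predict_fn takes raw image bytes and returns (p_autism, p_nonautism);
    a stand-in for the trained model loaded in the real back end."""
    app = Flask(__name__)

    @app.route("/")
    def home():
        # Home page with the "choose file" form (Figure 12).
        return ('<form action="/predict" method="post" '
                'enctype="multipart/form-data">'
                '<input type="file" name="image">'
                '<input type="submit" value="Submit"></form>')

    @app.route("/predict", methods=["POST"])
    def predict():
        # Run the uploaded face image through the model (Figure 13).
        img_bytes = request.files["image"].read()
        p_asd, p_td = predict_fn(img_bytes)
        return {"autism": round(p_asd, 2), "non-autism": round(p_td, 2)}

    return app
```

Injecting the prediction function keeps the web layer separate from the model, so the same app skeleton works for any of the three trained networks.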


Table 2: Comparison of the results of the proposed system against existing models.

No.  Author              Year  Dataset                                          Method         Accuracy  Application deployment
1    Musser [35]         2020  Detect autism from a facial image [32]           VGGFace model  85%       No
2    Beary [28]          2020  Detect autism from a facial image [32]           MobileNet      94%       No
3    Tamilarasi [36]     2020  The dataset included 19 TD and 20 ASD children   ResNet50       89.2%     No
4    Jahanara [37]       2021  Detect autism from a facial image [32]           VGG19          84%       No
5    Our proposed model  2021  Detect autism from a facial image [32]           MobileNet      95%       Yes
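The MobileNet row of Table 1 can be checked against the Figure 11 confusion matrix using the definitions printed as formulas (2)–(4) in Section 3.1. The assignment of the counts (TP = 131, FP = 4, FN = 9, TN = 136) is an assumption chosen to match the reported values:

```python
def evaluate(tp, fp, fn, tn):
    """Formulas (2)-(4) as printed in Section 3.1 of the paper."""
    sensitivity = tp / (tp + fp)                 # formula (2)
    specificity = tn / (tn + fn)                 # formula (3)
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # formula (4)
    return sensitivity, specificity, accuracy

# Counts taken from the Figure 11 confusion matrix (assumed mapping).
sens, spec, acc = evaluate(tp=131, fp=4, fn=9, tn=136)
# sens = 0.970..., spec = 0.937..., acc = 0.953... -- close to the
# MobileNet row of Table 1.
```

Note that 131 + 9 + 4 + 136 = 280, matching the size of the validation split described in Section 2.2.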

T
accuracy, sensitivity, and specificity. Sensitivity assesses whether the model correctly identifies images of faces with autism ("True Positives"). The model accurately identified them as "Autism Positive" 97% of the time. Sensitivity was used to measure how often a positive prediction of ASD was correct. Sensitivity was calculated using formula (2):

Sensitivity = True Positives / (True Positives + False Positives) × 100%. (2)

Specificity refers to "True Negatives," which are images of the faces of nonautistic children. It measures how often a negative prediction is correct. The model accurately identified them as "Autism Negative" 93% of the time. Specificity was calculated using formula (3), and the overall accuracy using formula (4):

Specificity = True Negatives / (True Negatives + False Negatives) × 100%, (3)

Accuracy = (TP + TN) / (FP + FN + TP + TN) × 100%. (4)

3.2. Confusion Matrix. For True Positives (TP), the model could correctly predict 131 children as having ASD. For True Negatives (TN), the model could correctly predict 136 children as TD. For False Positives (FP), the model falsely predicted 9 children as TD, but they had ASD. For False Negatives (FN), the model falsely predicted 4 children as ASD, though they did not have ASD. The confusion matrix details are shown in Figure 11.

3.3. Web Application for Autism Detection. Python provides a micro web framework, called Flask, to develop a web application to deploy a deep learning model. In this study, the researchers developed a simple web app for autism facial recognition using HTML, CSS, and JS as the front end and Flask with the trained model's architecture as the back end. The application supports an easy interactive user interface that enables specialists to test the child's behavioral state through facial features.

Figure 12 shows the home page of the app, where the user can click on the "choose file" button, select the folder in which an image of the face is stored, and select the facial image of the child for testing. Then, the image will be uploaded, and the user can click the submit button. The app will test the image using the trained models, and the prediction will appear, as shown in Figure 13.

4. Discussion

Diagnosing autism is a complex task because it involves the behavior of humans. Some cases of autism are misdiagnosed in real life because of the similarities between ASD and other disorders [32]. To develop an application that recognizes autism using facial feature patterns, one needs to collect data from two groups of children, ASD and TD, label the data as the autism group and the nonautism group, and apply deep learning algorithms. To train a deep learning model, the dataset has to be large enough, and the data collection must follow certain standards, because only good data will give good results. This study tried various pretraining models and fine-tuned them, including changing the parameters and learning rate, increasing the hidden layers, and adding batch normalization and data augmentation. All these techniques were applied to improve the models' performance. Table 2 summarizes the results of the proposed system compared to the existing systems.

A literature review, which was performed to find the best accuracy for this dataset [32], found three studies that used it. The first study, conducted by Mikian Musser [35], tried to classify ASD and TD using the VGGFace model and achieved 85% accuracy on the validation data. The second study, done by Beary et al. [28] using the MobileNet model, achieved 94% accuracy on the validation data. The third study, done by Jahanara [37], used the VGG19 model to detect autism in facial images. They all used the same dataset [32] and achieved at least 84% validation accuracy. A fourth study, done by Tamilarasi [36], implemented the ResNet50 pretraining model on a small dataset consisting of faces of 19 TD and 20 ASD children and achieved 89.2% accuracy. The previous studies relied on the classification method without developing an application that facilitates the diagnostic process in a real-world environment through the use of a user interface.

In this study, the researchers experimentally tested three pretraining models (MobileNet, Xception, and InceptionV3) on a dataset [34] and applied the same fine-tuning of the layers and optimizer. Table 1 shows the results of the
models. MobileNet achieved the highest accuracy (95%) on the validation data, with 97% sensitivity. Xception achieved the highest specificity (95%), but its accuracy was 94%. InceptionV3 achieved 89% accuracy, 95% sensitivity, and 83% specificity.

However, the classification results of the MobileNet model are promising, and we are in the process of developing an application of it for autism detection using facial features. The web application for autism detection requires the image to be cropped to show only the face for testing. If the facial image has noise or low resolution or includes a background, this can lead to misclassification. The limitation of using an online dataset can lead to relative deficiencies in which some faces look nonautistic, the quality of some images is low, and the age range is not appropriate. Moreover, there was no other public dataset available on which to validate this methodology and application.

5. Conclusions

This study developed a deep learning-based web application for detecting autism using a convolutional neural network with transfer learning. The CNN architecture has appropriate models to extract the features of facial images by creating patterns of facial features and assessing the distance between facial landmarks, which can classify faces into autistic and nonautistic. The researchers applied three pretraining models (MobileNet, Xception, and InceptionV3) to the dataset and used the same fine-tuning of the layers and optimizer to achieve highly accurate results. MobileNet achieved the highest accuracy, with 95% on the validation data and 97% sensitivity. Xception achieved the highest specificity (95%), but its accuracy was 94%. InceptionV3 achieved 89% accuracy, 95% sensitivity, and 83% specificity. Therefore, a MobileNet model was used to develop a web app that can detect autism using facial images. Future work will improve this model by increasing the size of the samples and collecting a dataset from psychologists' diagnoses of autistic children who vary in age. This type of application will help parents and psychologists diagnose ASD in children. Getting the right diagnosis of autism could help improve the skills of autistic children by choosing a good plan of treatment.

Data Availability

This paper uses a publicly available dataset from Kaggle at https://www.kaggle.com/gpiosenka/autistic-children-data-set-traintestvalidate. This dataset consists of 3,014 face images: 1,507 of autistic and 1,507 of nonautistic children. The autistic children's face images were collected from online sources related to autism disorder, and the nonautistic children's face images were randomly collected from the Internet.

Consent

No consent was necessary.

Conflicts of Interest

The authors declare no conflict of interest.

Acknowledgments

We deeply acknowledge Taif University for supporting this research through Taif University Researchers Supporting Project Number (TURSP-2020/328), Taif University, Taif, Saudi Arabia. This work was supported by the Deanship of Scientific Research, Vice Presidency for Graduate Studies and Scientific Research, King Faisal University, Saudi Arabia (Project No. GRANT388).

References

[1] F. W. Alsaade, T. H. H. Aldhyani, and M. H. Al-Adhaileh, "Developing a recognition system for classifying COVID-19 using a convolutional neural network algorithm," Computers, Materials & Continua, vol. 68, no. 1, pp. 805–819, 2021.
[2] T. L. Praveena and N. M. Lakshmi, "A methodology for detecting ASD from facial images efficiently using artificial neural networks," in International Conference on Computational and Bio Engineering, pp. 365–373, Springer, 2019.
[3] Z. A. T. Ahmed and M. E. Jadhav, "A review of early detection of autism based on eye-tracking and sensing technology," in 2020 International Conference on Inventive Computation Technologies (ICICT), pp. 160–166, Coimbatore, India, 2020.
[4] O. Rudovic, Y. Utsumi, J. Lee et al., "CultureNet: a deep learning approach for engagement intensity estimation from face images of children with autism," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 339–346, Madrid, Spain, 2018.
[5] A. Di Nuovo, D. Conti, G. Trubia, S. Buono, and S. Di Nuovo, "Deep learning systems for estimating visual attention in robot-assisted therapy of children with autism and intellectual disability," Robotics, vol. 7, no. 2, p. 25, 2018.
[6] M. Jogin, M. S. Madhulika, G. D. Divya, R. K. Meghana, and S. Apoorva, "Feature extraction using convolution neural networks (CNN) and deep learning," in 2018 3rd IEEE International Conference on Recent Trends in Electronics, Information & Communication Technology (RTEICT), pp. 2319–2323, Bangalore, India, 2018.
[7] https://towardsdatascience.com/detecting-autism-spectrum-disorder-in-children-with-computer-vision-8abd7fc9b40a.
[8] M. Leo, P. Carcagnì, C. Distante et al., "Computational analysis of deep visual data for quantifying facial expression production," Applied Sciences, vol. 9, p. 4542, 2019.
[9] X. Liu, Q. Wu, W. Zhao, and X. Luo, "Technology-facilitated diagnosis and treatment of individuals with autism spectrum disorder: an engineering perspective," Applied Sciences, vol. 7, no. 10, 2017.
[10] D. Johnston, H. Egermann, and G. Kearney, "SoundFields: a virtual reality game designed to address auditory hypersensitivity in individuals with autism spectrum disorder," Applied Sciences, vol. 10, 2020.
[11] D. Johnston, H. Egermann, and G. Kearney, "Measuring the behavioral response to spatial audio within a multi-modal virtual reality environment in children with autism spectrum disorder," Applied Sciences, vol. 9, p. 3152, 2019.
[12] M. Magrini, O. Curzio, A. Carboni, D. Moroni, O. Salvetti, and A. Melani, "Augmented interaction systems for supporting autistic children. Evolution of a multichannel expressive tool: the SEMI project feasibility study," Applied Sciences, vol. 9, p. 3081, 2019.

[13] A. Garrity, G. D. Pearlson, K. McKiernan, D. Lloyd, K. A. Kiehl, and V. D. Calhoun, "Aberrant 'default mode' functional connectivity in schizophrenia," The American Journal of Psychiatry, vol. 164, no. 3, pp. 450–457, 2007.

[14] Y. Zhou, M. Liang, L. Tian et al., "Functional disintegration in paranoid schizophrenia using resting-state fMRI," Schizophrenia Research, vol. 97, pp. 194–205, 2007.

[15] M. J. Jafri, G. D. Pearlson, M. Stevens, and V. D. Calhoun, "A method for functional network connectivity among spatially independent resting-state components in schizophrenia," NeuroImage, vol. 39, pp. 1666–1681, 2008.

[16] V. Calhoun, J. Sui, K. Kiehl, J. Turner, E. Allen, and G. Pearlson, "Exploring the psychosis functional connectome: aberrant intrinsic networks in schizophrenia and bipolar disorder," Frontiers in Psychiatry, vol. 2, p. 75, 2012.

[17] R. Craddock, P. Holtzheimer, X. Hu, and H. Mayberg, "Disease state prediction from resting state functional connectivity," Magnetic Resonance in Medicine, vol. 62, no. 6, pp. 1619–1628, 2009.

[18] M. Plitt, K. Barnes, and A. Martin, "Functional connectivity classification of autism identifies highly predictive brain features but falls short of biomarker standards," NeuroImage: Clinical, vol. 7, pp. 359–366, 2015.

[19] J. Anderson, J. Nielsen, A. Froehlich et al., "Functional connectivity magnetic resonance imaging classification of autism," Brain, vol. 134, no. 12, pp. 3742–3754, 2011.

[20] C. Shi, J. Zhang, and X. Wu, "An fMRI feature selection method based on a minimum spanning tree for identifying patients with autism," Symmetry, vol. 12, no. 12, p. 1995, 2020.

[21] Z. Rakhimberdina, X. Liu, and T. Murata, "Population graph-based multi-model ensemble method for diagnosing autism spectrum disorder," Sensors, vol. 20, p. 6001, 2020.

[22] T. Zhang, C. Li, P. Li et al., "Separated channel attention convolutional neural network (SC-CNN-attention) to identify ADHD in multi-site rs-fMRI dataset," Entropy, vol. 22, p. 893, 2020.

[23] K. K. Mujeeb Rahman and M. M. Subashini, "Identification of autism in children using static facial features and deep neural networks," Brain Sciences, vol. 12, no. 1, p. 94, 2022.

[24] F. W. Alsaade, T. H. H. Aldhyani, and M. H. Al-Adhaileh, "Developing a recognition system for diagnosing melanoma skin lesions using artificial intelligence algorithms," Computational and Mathematical Methods in Medicine, vol. 2021, Article ID 9998379, pp. 1–20, 2021.

[25] M. Greicius, G. Srivastava, A. Reiss, and V. Menon, "Default-mode network activity distinguishes Alzheimer's disease from healthy aging: evidence from functional MRI," Proceedings of the National Academy of Sciences of the United States of America, vol. 101, no. 13, pp. 4637–4642, 2004.

[26] G. Battineni, M. A. Hossain, N. Chintalapudi et al., "Improved Alzheimer's disease detection by MRI using multimodal machine learning algorithms," Diagnostics, vol. 11, no. 11, p. 2013, 2021.

[27] Y. Yang, "A preliminary evaluation of still face images by deep learning: a potential screening test for childhood developmental disabilities," Medical Hypotheses, vol. 144, 2020.

[28] M. Beary, A. Hadsell, R. Messersmith, and M. P. Hosseini, "Diagnosis of autism in children using facial analysis and deep learning," http://arxiv.org/abs/2008.02890.

[29] C. Grossard, A. Dapogny, D. Cohen et al., "Children with autism spectrum disorder produce more ambiguous and less socially meaningful facial expressions: an experimental study using random forest classifiers," Molecular Autism, vol. 11, no. 1, pp. 1–14, 2020.

[30] M. I. U. Haque and D. Valles, "A facial expression recognition approach using DCNN for autistic children to identify emotions," in 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), pp. 546–551, Vancouver, BC, Canada, 2018.

[31] S. Kornblith, J. Shlens, and Q. V. Le, "Do better ImageNet models transfer better?," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2661–2671, Long Beach, CA, USA, 2019.

[32] K. Ramasubramanian and A. Singh, "Deep learning using Keras and TensorFlow," in Machine Learning Using R, pp. 667–688, Apress, 2019.

[33] J. Han, Y. Li, J. Kang et al., "Global synchronization of multichannel EEG based on Rényi entropy in children with autism spectrum disorder," Applied Sciences, vol. 7, p. 257, 2017.

[34] https://www.kaggle.com/gpiosenka/autistic-children-data-set-traintestvalidate.

[35] https://towardsdatascience.com/detecting-autism-spectrum-disorder-in-children-with-computer-vision-8abd7fc9b40a.

[36] F. C. Tamilarasi and J. Shanmugam, "Convolutional neural network based autism classification," in 2020 5th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India, June 2020.

[37] S. Jahanara and S. Padmanabhan, "Detecting autism from facial image," 2021.