
Journal Pre-proof

ASD Classification for Children using Deep Neural Network

Ashima Sindhu Mohanty, Priyadarsan Parida, Krishna Chandra Patra

PII: S2666-285X(21)00070-4
DOI: https://doi.org/10.1016/j.gltp.2021.08.042
Reference: GLTP 66

To appear in: Global Transitions Proceedings

Received date: 16 June 2021


Accepted date: 3 July 2021

Please cite this article as: Ashima Sindhu Mohanty, Priyadarsan Parida, Krishna Chandra Patra, ASD Classification for Children using Deep Neural Network, Global Transitions Proceedings (2021), doi: https://doi.org/10.1016/j.gltp.2021.08.042

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition
of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of
record. This version will undergo additional copyediting, typesetting and review before it is published
in its final form, but we are providing this version to give early visibility of the article. Please note that,
during the production process, errors may be discovered which could affect the content, and all legal
disclaimers that apply to the journal pertain.

© 2021 The Authors. Publishing Services by Elsevier B.V. on behalf of KeAi Communications Co.
Ltd.
This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
ScienceDirect / Global Transitions Proceedings
http://www.keaipublishing.com/en/journals/global-transitions-proceedings/

Global Transitions 2019, 14 October 2019 - 16 October 2019, Tsinghua University, Beijing, China

ASD Classification for Children using Deep Neural Network

Ashima Sindhu Mohanty a, Priyadarsan Parida b,*, Krishna Chandra Patra c

a Department of Electronics, Sambalpur University, Sambalpur, 768019, India, [email protected]
b Department of Electronics & Communication Engineering, GIET University, Gunupur, 765022, India, [email protected]
c Department of Electronics, Sambalpur University, Sambalpur, 768019, India, [email protected]

* Corresponding author. Tel.: +91-98614 12884; E-mail address: [email protected]

Abstract

The recognition of a person in society is based on behaviour and socio-communicative skills, but neurodevelopmental illnesses such as Autism Spectrum Disorder (ASD) strongly influence the behaviour and communication skills of an individual. Individuals with such an illness need early detection to minimize its effect. ASD screening is carried out with a mobile-based app that extracts information from individuals irrespective of age. The information is stored in publicly accessible, authenticated research repositories: the UCI Machine Learning (ML) repository and Kaggle. The approach proposed in this paper is investigated on the child dataset gathered from the UCI repository. The analysis is carried out for two distinct cases, complete and missing data, via standardization by the mean-standard-deviation approach, followed by dimension reduction using Diffusion Mapping and, finally, classification of the ASD class using a Deep Neural Network Prediction and Classification (DNNPC) model. The performance of the DNNPC classifier model is validated with distinct performance parameters.

© 2019 The Authors. Published by Elsevier B.V.


This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/)
Peer-review under responsibility of the scientific committee of the 8th International Conference on Through-Life Engineering Service –
TESConf 2019.

Keywords: ASD; Classification; Diffusion Mapping; DNNPC; Standardization.

1. Introduction

ASD is one of the severe neurodevelopmental conditions and is accompanied by traits of repetitive behaviour as well as impairments in socio-communicative skills [1]. The inceptive symptoms of ASD are observed during the first 6 to 18 months of an individual's life span. Following this, the individual further experiences abnormal motor development between 18 and 36 months of life, resulting in loss of social and communication ability [2]. It is simpler to detect ASD in children than in adolescents and adults because, with ageing, the signs of ASD overlap with the signs of other neural disorders. However, the general procedure for diagnosing ASD in individuals is lengthy and costly. This research therefore emphasizes ASD classification using DL in children within the age group of 4 to 11 years, in which the "Autism Spectrum Quotient (AQ-10)" [3], based on 10 screening questions, forms the basis of the research. The screening questions are also present in the child ASD dataset on which the investigation is carried out [4].

Fadi Fayez Thabtah developed the dataset used in this research. The dataset has 21 attributes and one output class. The classifier model is built using Deep Learning (DL) to classify the ASD class; the training data train the classifier model and the test data evaluate it. The present study on ASD classification focuses on reducing the number of features in the dataset, developing new ML and DL approaches for classifying the ASD class, improving the performance parameters and shortening the ASD diagnosis time. The approach presented in this paper can help medical professionals direct attention, in the best possible way, towards children with ASD symptoms for further evaluation.
In this paper, section 2 highlights the related research work with various classifier models, section 3 sketches the dataset collected for the research, section 4 outlines the proposed methodology, section 5 discusses the results obtained and, finally, section 6 presents the conclusion.

2. Literature survey

The investigator developed a mobile-based ASDTest app [5] that incorporated the Q-CHAT in addition to the AQ-10 screening questionnaire. Through the app, the researcher collected 1452 instances covering all categories of individuals: adult, adolescent, child and toddler. The toddler dataset was excluded from the investigation because of its unbalanced nature, reducing the number of cases to 1100 with 21 features. Following a wrapper filtering method for feature extraction, the investigation classified the ASD classes using Naïve Bayes (NB) [6] and Logistic Regression (LR) [7], evaluating distinct performance parameters.
The author in [8] investigated the ASD class for the child category using fuzzy data mining models. The dataset, sourced from the UCI ML repository, consists of 509 instances with 21 features. The fuzzy data mining classification algorithms FURIA [9], JRIP [10], RIDOR [11] and PRISM [12] were used to analyse the overall performance, and the FURIA classification model outperformed the rest of the classification models.
The researcher in [13] carried out the research on the child dataset only. The dataset was assembled from the UCI ML repository and consists of 292 cases with 21 features. Following a swarm-intelligence-based binary firefly feature selection approach, the investigation classified the ASD class using NB, the J48 Decision Tree [14], the Support Vector Machine (SVM) [15], K-Nearest Neighbour (KNN) [16] and the Multilayer Perceptron (MLP) [17]. The performance of the classifiers was validated, with the resulting accuracy exceeding 90 percent.
The authors in [18] investigated the child category for detecting ASD. The dataset was collected from the UCI ML repository and consists of 292 instances and 21 attributes; cases with missing values were dropped. The authors implemented and tested two predictive models, a Fuzzy Rule model (FR) [19] and a combination of LR and FR (LR-FR), and validated the classifier models by evaluating the performance parameters.
The authors proposed Rule-based Machine Learning (RML) [20] and investigated the 1100 cases gathered by the ASDTest app after dropping the toddler cases. The performance of nine ML classifier models, RIDOR, RIPPER, Nnge, Bagging, Boosting, CART, C4.5 and PRISM, in addition to RML, was compared, and RML proved to be the best, showing the maximum efficacy among all.
The authors in [21] focused on early detection of ASD in children, adolescents and adults based on supervised learning. The data, gathered from the UCI ML repository and consisting of 21 features with one output class and 1100 cases, were pre-processed and then classified using KNN, SVM and Random Forest (RF) classifiers [22]. The pre-processed data were partitioned into training data, expressed as α, i.e. 80 percent, and test data, expressed as (100 - α), i.e. 20 percent. The performance parameters validated the performance of the classifier models.
The authors in [23] analysed all categories of individuals, whose datasets were collected from the UCI ML repository and Kaggle and comprise 2154 instances and 21 features. The research applied Principal Component Analysis (PCA) for dimension reduction, followed by a Deep Neural Network (DNN) classifier model for classifying the ASD class. The DNN showed acceptable performance for all categories of individuals.
Table 1 outlines the approaches for ASD classification as mentioned in survey.

Table 1. ASD classification techniques

| Paper | Source of data collection | Category of individuals | Classifiers used | Number of extracted features | Performance parameters |
| [8] 2018 | UCI ML Repository | Child | FURIA | 16 | Accuracy = 0.913, Sensitivity = 0.914, Specificity = 0.880 |
| [13] 2018 | UCI ML Repository | Child | KNN | 10 | Accuracy = 0.938, RMSE = 0.230 |
| [18] 2018 | UCI ML Repository | Child | FR | 6 | Accuracy = 0.920, Sensitivity = 0.852 |
| [5] 2019 | ASD screening Test App | Child | NB | 4 | Accuracy = 0.928, Sensitivity = 0.928, Specificity = 0.913 |
| [20] 2020 | ASD screening Test App | Child | RML | 19 | Accuracy = 0.905, Sensitivity = 0.910, Specificity = 0.915 |
| [21] 2020 | UCI ML Repository | Child | SVM | 8 | Accuracy = 0.935, Sensitivity = 0.926 |
| [23] 2021 | UCI ML Repository, Kaggle | Child | DNN | 6 | Accuracy = 0.857, Sensitivity = 1, Specificity = 0.681, RMSE = 0.176 |
3. Data Collection

The ASD child dataset is assembled from the UCI ML repository, an authenticated and publicly accessible research site [24]. The data were collected with the ASDTest app developed by the author in [6]. The dataset contains 292 cases and 21 attributes of categorical, continuous and binary type. Missing values are found in the attributes "ethnicity", "age" and "Who_completed_the_test". Out of the 292 individuals, 141 are associated with the ASD class and the remaining 151 belong to the no-ASD class. After dropping the missing values, the number of cases reduces to 249, with 126 cases associated with the ASD class and the remaining 123 belonging to the no-ASD class.

4. Methodology

The existing approaches discussed for the detection of ASD using different ML techniques achieve a classification accuracy of around 92%, which can be further improved to reach a more clinically acceptable result. A DL framework has therefore been developed to achieve better classification accuracy for ASD classification in children. In the proposed approach, the input child ASD data collected from the UCI repository are pre-processed prior to classification of the ASD class. The investigation is carried out for two distinct cases: complete and missing data. In pre-processing, the raw input is first standardized, which fits the numeric data fed to the DNNPC model into a particular range. The second stage of pre-processing applies the standardized inputs to the Diffusion Maps dimension-reduction model to decrease the number of attributes in the dataset. The training data then train the DNNPC model and the test data test the respective model for classification. The investigation sets the training parameter α to 0.8 and the test parameter to (1 - α) [21]. The performance of the investigation is validated by evaluating performance parameters such as accuracy (Acc), sensitivity (Sen), specificity (Spe), Mean Square Error (MSE), Root Mean Square Error (RMSE) and Coefficient of Determination (R-Squared). The workflow of the proposed approach is shown in Fig. 1, and a sketch of the α-based data split is given below.
Fig. 1. Flow diagram of the proposed method: the input child ASD dataset is standardized and reduced with Diffusion Mapping; the training data (α = 0.8) train the DNNPC model, the test data (1 - α = 0.2) test the trained model, and the performance is evaluated.
4.1. Preprocessing

Prior to feeding the child data to the DNNPC model for classifying ASD classes, the dataset gathered from the UCI repository is pre-processed. During pre-processing, both the complete and the missing data undergo standardization followed by dimension reduction using Diffusion Mapping (DM).

Standardization

The raw child dataset gathered from the UCI repository is not properly scaled: the attributes in the dataset do not share a common scale. The data are therefore standardized by the mean-standard-deviation approach [25]. Mathematically, the standardized data are given in equation (1):

$$\mathrm{Sta\_X} = \frac{x - x_{\mathrm{mean}}}{x_{\mathrm{std}}} \qquad (1)$$

where $x$ represents the current value of the input, $x_{\mathrm{mean}}$ represents the mean value of $x$ and $x_{\mathrm{std}}$ represents the standard deviation of $x$.
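A minimal NumPy sketch of the mean-standard-deviation standardization of equation (1), applied attribute-wise; the column-wise convention and the zero-variance guard are assumptions.

```python
import numpy as np

def standardize(X):
    """Equation (1): subtract the per-attribute mean and divide by the
    per-attribute standard deviation so every attribute shares a common scale."""
    x_mean = X.mean(axis=0)
    x_std = X.std(axis=0)
    x_std[x_std == 0] = 1.0          # guard against constant attributes
    return (X - x_mean) / x_std

X = np.array([[4.0, 1.0], [7.0, 0.0], [10.0, 1.0]])
print(standardize(X))                # each column now has mean 0 and std 1
```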

Dimension reduction using diffusion map

DM [26] is a time-dependent random-walk process on a dataset. In order to evaluate the distance of any geometric structure, the diffusion process runs through distinct paths over a time t and accumulates the probabilities of those distinct paths; mathematically, this is stated as the steady-state probability of a Markov chain. Between two data points A and B, the connectivity is the probability of jumping from A to B in a single step of the random walk, as given in equation (2):

$$\mathrm{connectivity}(A,B) = P(A,B) \qquad (2)$$

The connectivity can also be expressed through a diffusion kernel, written as the row-normalized likelihood function $K$ given in equation (3):

$$K(A,B) = \exp\!\left(-\frac{|A-B|^{2}}{\varepsilon}\right) \qquad (3)$$


where $\varepsilon$ represents the diffusion rate and $P$ represents the probability of a single hop from data point A to B. $P^{2}$ represents the probability of two hops from A to B and, in general, $P^{t}$ is the probability of $t$ hops from A to B; in this way the diffusion process runs forward.
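The sketch below illustrates the diffusion-map reduction implied by equations (2) and (3): build the Gaussian kernel K, row-normalize it into the one-step transition matrix P, and keep the leading non-trivial eigenvectors, scaled by their eigenvalues raised to the diffusion time t, as the reduced coordinates. The kernel scale eps, the diffusion time and the number of retained components are illustrative values, not the paper's settings.

```python
import numpy as np

def diffusion_map(X, n_components=3, eps=1.0, t=1):
    """Reduce X (n_samples x n_features) to n_components diffusion coordinates."""
    # Pairwise squared distances |A - B|^2 between all data points.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Equation (3): Gaussian diffusion kernel.
    K = np.exp(-sq_dists / eps)
    # Row-normalize to obtain the one-step transition matrix P of the random walk.
    P = K / K.sum(axis=1, keepdims=True)
    # Eigendecomposition of P, sorted by decreasing eigenvalue.
    eigvals, eigvecs = np.linalg.eig(P)
    order = np.argsort(-eigvals.real)
    eigvals, eigvecs = eigvals.real[order], eigvecs.real[:, order]
    # Skip the trivial constant eigenvector (eigenvalue 1); scale by lambda^t
    # so that Euclidean distances approximate t-step diffusion distances.
    coords = eigvecs[:, 1:n_components + 1] * (eigvals[1:n_components + 1] ** t)
    return coords

X = np.random.default_rng(0).normal(size=(20, 6))
print(diffusion_map(X, n_components=3).shape)   # (20, 3)
```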

4.2. Deep learning classifier model

A schematic of the DNNPC model is shown in Fig. 2 below.

Fig. 2. DNNPC model for ASD classification: input child ASD dataset → sequence input layer → LSTM layer → fully connected layer → regression layer → predicted ASD classes (ASD / No ASD)

The proposed DNNPC network consists of a sequence input layer, a Long Short-Term Memory (LSTM) layer, which is a Recurrent Neural Network (RNN), followed by a Fully Connected (FC) layer and a regression layer.
The pre-processed data obtained after the standardization and dimension-reduction steps are fed to the first layer of the DNNPC model, the sequence input layer, which passes the processed data sequentially to the next layer of the classifier model, the LSTM layer.
The LSTM architecture [27] follows the back-propagation-through-time algorithm. Unlike the LSTM architecture, other back-propagation networks are unable to handle the back-propagated error signals properly: the signals either vanish, resulting in oscillating weights, or the result is delayed.

The fundamental unit of an LSTM network is a memory cell; the $j$-th memory cell of the network is denoted $c_j$. Each memory cell is associated with a central linear unit with a specific self-connecting weight, in addition to $\mathrm{net}_{c_j}$, where $c_j$ receives input from a multiplicative unit $\mathrm{out}_j$ and from another unit $\mathrm{in}_j$. The activation of $\mathrm{in}_j$ at time $t$ is represented mathematically in equation (4):

$$y^{\mathrm{in}_j}(t) = f_{\mathrm{in}_j}\big(\mathrm{net}_{\mathrm{in}_j}(t)\big) \qquad (4)$$

where

$$\mathrm{net}_{\mathrm{in}_j}(t) = \sum_{u} w_{\mathrm{in}_j u}\, y^{u}(t-1) \qquad (5)$$

and $w_{\mathrm{in}_j u}$ is the weight connected to the $j$-th input node.

Similarly, the activation of $\mathrm{out}_j$ at time $t$ is given in equation (6):

$$y^{\mathrm{out}_j}(t) = f_{\mathrm{out}_j}\big(\mathrm{net}_{\mathrm{out}_j}(t)\big) \qquad (6)$$

where

$$\mathrm{net}_{\mathrm{out}_j}(t) = \sum_{u} w_{\mathrm{out}_j u}\, y^{u}(t-1) \qquad (7)$$

and $w_{\mathrm{out}_j u}$ is the weight connected to the $j$-th output node.

The weighted sum of inputs to the $j$-th node is represented in equation (8):

$$\mathrm{net}_{c_j}(t) = \sum_{u} w_{c_j u}\, y^{u}(t-1) \qquad (8)$$

where $w_{c_j u}$ is the weight connected to the $j$-th hidden node.

The summation over $u$ indicates that $u$ may be an input unit, a gate unit, a memory cell or a conventional hidden unit.
Specifically, the LSTM layer in the proposed network is used for the classification task as well as for compensating the time delay in the sequential ASD data.
A FC layer [28-30] is placed next to the LSTM layer; every neuron of the FC layer is connected to every neuron of the adjacent layer, as shown in Fig. 2, so the neurons can interact for proper prediction of the output. This layer also controls the shape and size of the output.
Following the FC layer, a regression layer [31-33] is connected, which predicts a continuous label. The goal is to produce a model that represents the 'best fit' to the observed data. The basic purpose of this layer is to compute the half-mean-squared-error loss for the regression task.
The mean squared error from this layer is given in equation (9):

$$\mathrm{MSE} = \frac{\sum_{i=1}^{R} (t_i - y_i)^{2}}{R} \qquad (9)$$

where $R$ is the total number of response variables, $t_i$ is the target for the $i$-th response and $y_i$ is the predicted response from the network for the $i$-th response.
In this layer the loss function is calculated as in equation (10):

$$\mathrm{Loss} = \frac{1}{2S} \sum_{i=1}^{S} \sum_{j=1}^{R} (t_{ij} - y_{ij})^{2} \qquad (10)$$

where $S$ is the length of the sequence and $t_{ij}$ and $y_{ij}$ are the target and the actual output of the $j$-th node, respectively.

The LSTM layer together with the regression layer lets the network simultaneously predict and classify the output class, which a plain LSTM layer cannot do alone [34-36].
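Since the paper describes the layer stack (sequence input, LSTM, fully connected and a regression output trained with a mean-squared-error loss) but reports a MATLAB implementation without code, the following Keras sketch shows one possible equivalent; the LSTM width, the sigmoid read-out, the length-1 sequences and the 0.5 decision threshold are assumptions for illustration only.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative sizes only (assumed): 3 attributes after diffusion-map reduction,
# each child record treated as a length-1 sequence.
n_features, timesteps = 3, 1

model = tf.keras.Sequential([
    layers.LSTM(16, input_shape=(timesteps, n_features)),  # sequence input + LSTM layer
    layers.Dense(1, activation="sigmoid"),                  # fully connected layer
])
# Regression-style output: train on the 0/1 ASD label with a mean-squared-error
# loss, in the spirit of equations (9)-(10); learning rate from the reported setting.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.005), loss="mse")

# Dummy data with the assumed shapes; y = 1 for ASD, 0 for no ASD.
X_train = np.random.rand(100, timesteps, n_features).astype("float32")
y_train = np.random.randint(0, 2, size=(100, 1)).astype("float32")
model.fit(X_train, y_train, epochs=5, verbose=0)

# A continuous prediction above 0.5 is read as the ASD class.
print((model.predict(X_train[:5], verbose=0) > 0.5).astype(int).ravel())
```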

4.3. Performance parameters

The performance parameters [23] measure the efficacy of the proposed approach in classifying the ASD class. Parameters such as Acc, Sen, Spe, MSE, RMSE and R-Squared are used in this approach; they are evaluated from the True Positive, True Negative, False Positive and False Negative values of the confusion matrix [23].
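For reference, the sketch below computes these parameters from true 0/1 labels, predicted 0/1 labels and the continuous network outputs, using the standard confusion-matrix definitions rather than any code published with the paper.

```python
import numpy as np

def performance_parameters(y_true, y_pred_label, y_pred_score):
    """Acc, Sen, Spe from the confusion matrix; MSE, RMSE, R^2 from the scores."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred_label = np.asarray(y_pred_label, dtype=float)
    y_pred_score = np.asarray(y_pred_score, dtype=float)

    tp = np.sum((y_true == 1) & (y_pred_label == 1))
    tn = np.sum((y_true == 0) & (y_pred_label == 0))
    fp = np.sum((y_true == 0) & (y_pred_label == 1))
    fn = np.sum((y_true == 1) & (y_pred_label == 0))

    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)              # sensitivity: recall on the ASD class
    spe = tn / (tn + fp)              # specificity: recall on the no-ASD class

    mse = np.mean((y_true - y_pred_score) ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum((y_true - y_pred_score) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot        # coefficient of determination

    return acc, sen, spe, mse, rmse, r2
```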

5. Result and Discussion

The proposed method is implemented in a MATLAB 2016 environment on a PC with a 1.99 GHz processor and 12 GB RAM. After pre-processing, the missing-data child dataset retains 3 attributes, and the DNNPC model is trained on the training data with a learning rate of 0.005, 250 epochs, an epoch duration of 82 sec, a learn-rate drop period of 125 and a learn-rate drop factor of 0.2. Fig. 3 shows the training process of the DNNPC model for the missing data, where the RMSE of the trained model is 0.01. The same procedure is repeated for the complete dataset, for which the DNNPC model is trained with a learning rate of 0.005, 250 epochs, an epoch duration of 83 sec, a learn-rate drop period of 125 and a learn-rate drop factor of 0.2. Fig. 4 shows the training process of the DNNPC model for the complete data, where the RMSE of the trained model is almost 0. In both figures, the number of iterations and the RMSE are plotted on the X and Y axes, respectively. In both cases, the trained models are then tested on the test data to evaluate the classification performance. Table 2 presents a comparative analysis of the proposed trained model against other state-of-the-art methods, and Fig. 5 presents the statistical performance of the proposed model in comparison with other state-of-the-art methods, with the year of research and the value of the performance parameters on the X and Y axes, respectively.
In the case of missing data, 50 test instances are drawn from the 249 instances according to the test parameter (1 - α). Of these 50, 20 and 27 instances are correctly classified as ASD and no-ASD, respectively, and not a single instance is falsely classified as no-ASD. This results in a good accuracy and sensitivity; because 3 instances are falsely classified as ASD, the specificity drops slightly but remains acceptable.
In the case of complete data, 19 and 27 of the 50 test instances are correctly classified as ASD and no-ASD, respectively, and a single instance is falsely classified as no-ASD. This again yields a good accuracy and sensitivity; due to the 3 instances falsely classified as ASD, the specificity drops slightly but remains acceptable.
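Interpreting the missing-data test counts above as TP = 20, TN = 27, FP = 3 and FN = 0 (our reading of the text, with ASD as the positive class), the reported values follow directly:

```python
tp, tn, fp, fn = 20, 27, 3, 0                   # missing-data test set of 50 instances

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # 47 / 50 = 0.94
sensitivity = tp / (tp + fn)                    # 20 / 20 = 1.00
specificity = tn / (tn + fp)                    # 27 / 30 = 0.90

print(accuracy, sensitivity, specificity)       # matches the proposed-method (MD) row in Table 2
```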

Fig. 3. Training DNNPC model with missing data
Fig. 4. Training DNNPC model with complete data

Table 2. Performance of the proposed approach along with other state-of-the-art approaches

| Paper | Accuracy | Sensitivity | Specificity | RMSE | MSE | R-Squared |
| [8] 2018 | 0.913 | 0.914 | 0.880 | - | - | - |
| [13] 2018 | 0.938 | - | - | 0.230 | - | - |
| [18] 2018 | 0.920 | 0.852 | - | - | - | - |
| [5] 2019 | 0.928 | 0.928 | 0.913 | - | - | - |
| [20] 2020 | 0.905 | 0.910 | 0.915 | - | - | - |
| [21] 2020 | 0.935 | 0.926 | - | - | - | - |
| [23] 2021 | 0.857 | 1 | 0.681 | 0.176 | - | - |
| Proposed method (MD) | 0.940 | 1 | 0.900 | 0.240 | 0.060 | 0.750 |
| Proposed method (CD) | 0.920 | 0.950 | 0.900 | 0.282 | 0.080 | 0.666 |

*MD represents Missing Data; CD represents Complete Data

Fig. 5. Statistical Performance of the proposed approach along with other state of art approaches

6. Conclusion

The proposed approach emphasizes early detection of ASD in children to improve their quality of life. In this research, DM is used to reduce the number of attributes, after which the DNNPC model is trained on 80 percent of the data and tested on the remaining 20 percent. The evaluated performance parameters establish the efficacy of the trained model in classifying the ASD class and are clinically acceptable. The trained model performs better on the missing data than on the complete data because of the slightly larger number of misclassifications into the ASD class for the complete data; although the performance on the complete data dips slightly, it remains clinically acceptable. Overall, DNN models can be applied successfully to the detection of ASD. The performance of DNN models can be further improved by adding more layers to the neural network, and the research, currently limited to the child dataset, can be extended to the remaining categories of individuals with further improvements in efficacy.

References

[1] Mohanty AS, Patra KC, Parida P. Toddler ASD Classification Using Machine Learning Techniques. Int J Online Biomed Eng 2021;17:156.
[2] Parr JR. Does developmental regression in autism spectrum disorder have biological origins? Dev Med Child Neurol 2017;59:889–889.
[3] Auyeung B, Baron-Cohen S, Wheelwright S, Allison C. The Autism Spectrum Quotient: Children's Version (AQ-Child). J Autism Dev Disord 2008;38:1230–40.
[4] Thabtah F, Kamalov F, Rajab K. A new computational intelligence approach to detect autistic features for autism screening. Int J Med Inform
2018;117:112–24.

[5] Thabtah F. An accessible and efficient autism screening method for behavioural data and predictive analyses. Health Informatics J 2019;25:1739–55.
[6] Deepa B, Jeen Marseline K. Exploration of Autism Spectrum Disorder using Classification Algorithms. Procedia Comput Sci 2019;165:143–50.
[7] Thabtah F, Abdelhamid N, Peebles D. A machine learning autism classification based on logistic regression analysis. Heal Inf Sci Syst 2019;7:1–11.
[8] Al-diabat M. Fuzzy Data Mining for Autism Classification of Children. Int J Adv Comput Sci Appl 2018;9.
[9] Virendra Dahe S, Sai Manikandan G, Jegadeeshwaran R, Sakthivel G, Lakshmipathi J. Tool condition monitoring using Random forest and FURIA
through statistical learning. Mater Today Proc 2021;46:1161–6.
[10] Catania LJ. AI applications in prevalent diseases and disorders. Found. Artif. Intell. Healthc. Biosci., Elsevier; 2021, p. 293–444.
[11] Negin F, Ozyer B, Agahian S, Kacdioglu S, Ozyer GT. Vision-assisted recognition of stereotype behaviors for early diagnosis of Autism Spectrum
Disorders. Neurocomputing 2021;446:145–55.
[12] Hadi W, Issa G, Ishtaiwi A. ACPRISM: Associative classification based on PRISM algorithm. Inf Sci (Ny) 2017;417:287–300.
[13] Vaishali R SR. A machine learning based approach to classify autism with optimum behavior sets. Int J Eng Technol 2018;7:4216–9.
[14] Mienye ID, Sun Y, Wang Z. Prediction performance of improved decision tree-based algorithms: a review. Procedia Manuf 2019;35:698–703.
[15] Bi X, Wang Y, Shu Q, Sun Q, Xu Q. Classification of Autism Spectrum Disorder Using Random Support Vector Machine Cluster. Front Genet 2018;9.
[16] Taunk K, De S, Verma S, Swetapadma A. A Brief Review of Nearest Neighbor Algorithm for Learning and Classification. 2019 Int. Conf. Intell.
Comput. Control Syst., IEEE; 2019, p. 1255–60.
[17] Ranjeeth S, Latchoumi TP. Predicting Kids Malnutrition Using Multilayer Perceptron with Stochastic Gradient Descent. Rev d'Intelligence Artif 2020;34:631–6.
[18] Kemal Akyol YG and AK. A Study on Autistic Spectrum Disorder for Children Based on Feature Selection and Fuzzy Rule. Int. Congr. Eng. Life Sci.,
2018.
[19] Qazi S, Raza K. Fuzzy logic-based hybrid knowledge systems for the detection and diagnosis of childhood autism. Handb. Decis. Support Syst. Neurol.
Disord., Elsevier; 2021, p. 55–69.
[20] Thabtah F, Peebles D. A new machine learning model based on induction of rules for autism detection. Health Informatics J 2020;26:264–86.
[21] Erkan U, Thanh DNH. Autism Spectrum Disorder Detection with Machine Learning Methods. Curr Psychiatry Res Rev 2020;15:297–308.
[22] Cordova M, Shada K, Demeter D V, Doyle O, Miranda-Dominguez O, Perrone A, et al. Heterogeneity of executive function revealed by a functional
random forest approach across ADHD and ASD. NeuroImage Clin 2020;26:102245.
[23] Mohanty AS, Parida P, Patra KC. Identification of Autism Spectrum Disorder using Deep Neural Network. J Phys Conf Ser 2021;1921:012006.
[24] Thabtah FF. machine-learning-databases/00419, 2017. https://archive.ics.uci.edu/ml.
[25] Andrade C. Understanding the Difference Between Standard Deviation and Standard Error of the Mean, and Knowing When to Use Which. Indian J
Psychol Med 2020;42:409–10.
[26] Lindenbaum O, Yeredor A, Salhov M, Averbuch A. Multi-view diffusion maps. Inf Fusion 2020;55:127–49.
[27] Li J, Zhong Y, Han J, Ouyang G, Li X, Liu H. Classifying ASD children with LSTM based on raw videos. Neurocomputing 2020;390:226–38.
[28] Pham VT, Nguyen TN, Liu BH, Lin T. Minimizing latency for multiple-type data aggregation in wireless sensor networks. 2021 IEEE Wireless Communications and Networking Conference (WCNC), IEEE; 2021, p. 1–6.
[29] Tran DN, Nguyen TN, Khanh PCP, Trana DT. An IoT-based design using accelerometers in animal behavior recognition systems. IEEE Sensors Journal 2021.
[30] Basha SHS, Dubey SR, Pulabaigari V, Mukherjee S. Impact of fully connected layers on performance of convolutional neural networks for image classification. Neurocomputing 2020;378:112–9.
[31] Do DT, Van Nguyen MS, Nguyen TN, Li X, Choi K. Enabling multiple power beacons for uplink of NOMA-enabled mobile edge computing in wirelessly powered IoT. IEEE Access 2020;8:148892–905.
[32] Jagannathan P, Rajkumar S, Frnda J, Divakarachari PB, Subramani P. Moving vehicle detection and classification using Gaussian mixture model and ensemble deep learning technique. Wireless Communications and Mobile Computing 2021.
[33] Baygin M, Dogan S, Tuncer T, Datta Barua P, Faust O, Arunkumar N, et al. Automated ASD detection using hybrid deep lightweight features extracted from EEG signals. Comput Biol Med 2021;134:104548.
[34] Parameshachari BD, Panduranga HT. Secure transfer of images using pixel-level and bit-level permutation based on knight tour path scan pattern and Henon map. Cognitive Informatics and Soft Computing, Springer, Singapore; 2021, p. 271–83.
[35] Subramani P, Srinivas K, Sujatha R, Parameshachari BD. Prediction of muscular paralysis disease based on hybrid feature extraction with machine learning technique for COVID-19 and post-COVID-19 patients. Personal and Ubiquitous Computing 2021:1–14.
[36] Guo Z, Yu K, Li Y, Srivastava G, Lin JC-W. Deep learning-embedded social Internet of Things for ambiguity-aware social recommendations. IEEE Trans Netw Sci Eng 2021. doi: 10.1109/TNSE.2021.3049262.
