Status of Deep Learning for EEG-based Brain–Computer Interface Applications

Khondoker Murad Hossain 1, Md. Ariful Islam 2, Shahera Hossain 3, Anton Nijholt 4 and Md Atiqur Rahman Ahad 5*

1 Department of Computer Science and Electrical Engineering, University of Maryland Baltimore County, Baltimore, MD, United States; 2 Department of Robotics and Mechatronics Engineering, University of Dhaka, Dhaka, Bangladesh; 3 Kyushu Institute of Technology, Kitakyushu, Japan; 4 Human Media Interaction, University of Twente, Enschede, Netherlands; 5 Department of Computer Science and Digital Technology, University of East London, London, United Kingdom

REVIEWED BY Shinji Kawakura, Osaka City University, Japan; Amir Rastegarnia, Malayer University, Iran
*CORRESPONDENCE Md Atiqur Rahman Ahad, [email protected]
RECEIVED 29 July 2022; ACCEPTED 23 December 2022; PUBLISHED 16 January 2023
CITATION Hossain KM, Islam MA, Hossain S, Nijholt A and Ahad MAR (2023) Status of deep learning for EEG-based brain–computer interface applications. Front. Comput. Neurosci. 16:1006763. doi: 10.3389/fncom.2022.1006763
COPYRIGHT © 2023 Hossain, Islam, Hossain, Nijholt and Ahad. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

In the previous decade, breakthroughs in central nervous system bioinformatics and computational innovation have prompted significant developments in brain–computer interface (BCI), elevating it to the forefront of applied science and research. BCI revitalization enables neurorehabilitation strategies for physically disabled patients (e.g., patients with paralysis and hemiplegia) and patients with brain injury (e.g., patients with stroke). Different methods have been developed for electroencephalogram (EEG)-based BCI applications. Due to the lack of large sets of EEG data, methods using matrix factorization and machine learning were the most popular. However, things have changed recently because a number of large, high-quality EEG datasets are now being made public and used in deep learning-based BCI applications. On the other hand, deep learning is demonstrating great prospects for solving complex relevant tasks, such as motor imagery classification, epileptic seizure detection, and driver attention recognition, using EEG data. Researchers are currently doing a great deal of work on deep learning-based approaches in the BCI field. Moreover, there is a great demand for a study that emphasizes only deep learning models for EEG-based BCI applications. Therefore, in this study we introduce the recently proposed deep learning-based approaches in BCI using EEG data (from 2017 to 2022). Their main characteristics, such as merits, drawbacks, and applications, are introduced. Furthermore, we point out current challenges and directions for future studies. We argue that this review will help the EEG research community in their future research.

KEYWORDS
deep learning, EEG, BCI, future challenge, convolutional neural network (CNN)
1. Introduction

BCI is a method in which psychology, electronics, computer science, neuroscience, signal processing, and pattern recognition work together. It is used to generate control signals or commands from recorded brain signals of neural responses in order to determine the intention of a medically challenged subject to perform a motor action and thereby restore quality of life. In a nutshell, a BCI turns the neural responses of the human brain into control signals or commands that can be used to control things such as prosthetic limbs, walking, neurorehabilitation, and movement. It is also used to assist medically challenged people with severe motor disorders, as well as healthy people, in their daily activities.
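The core idea above, that decoded neural responses become device commands, can be sketched as a minimal, hypothetical mapping; all class labels and command names here are invented for illustration and are not taken from any study in this review:

```python
from collections import Counter

# Hypothetical mapping from decoded brain-signal classes to device commands.
COMMANDS = {
    "left_hand": "MOVE_LEFT",
    "right_hand": "MOVE_RIGHT",
    "both_feet": "MOVE_FORWARD",
    "rest": "STOP",
}

def decode_command(predicted_labels):
    """Majority-vote the per-epoch classifier outputs over a short window,
    then translate the winning class into an output-device command."""
    winner, _ = Counter(predicted_labels).most_common(1)[0]
    return COMMANDS[winner]

# A noisy window of per-epoch predictions still yields a stable command.
window = ["left_hand", "left_hand", "rest", "left_hand"]
print(decode_command(window))  # MOVE_LEFT
```

The majority vote stands in for the "translation algorithm" stage; real systems use more elaborate smoothing and confidence thresholds.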
A generic BCI system (Schalk et al., 2004; Hassanien and Azar, 2015) comprises: (i) electrodes to obtain electrophysiological scheme patterns from a human subject; (ii) signal acquisition devices to record the neural responses of the subject's brain scheme; (iii) feature extraction to generate the discriminative nature of brain signals to decrease the size of data needed to classify the neural scheme; (iv) a translation algorithm to generate operative control signals; (v) a control interface to convert these into output device commands; and (vi) a feedback system to guide the subject to refine specific neural activity to ensure a better control mechanism.

On the other hand, there are two types of signal acquisition methods to trace neural activity, namely invasive and non-invasive methods (Schalk et al., 2004). A generic EEG-based BCI architecture is shown in Figure 1. In an invasive method, microelectrodes are neurosurgically implanted over the surface of the cerebral cortex or over the entire surface of the cerebrum under the scalp (Abdulkader et al., 2015). Even though this method gives high-resolution neural signals, it is not the best way to record neural activity from a human brain because it can cause scar tissue and infections.

In that case, the non-invasive method is preferred due to its flexibility and reduced risk. There are many techniques (Lotte et al., 2018) by which the neural activity is recorded, such as magnetoencephalography (MEG), functional magnetic resonance imaging (fMRI) (Acar et al., 2022; Hossain et al., 2022), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIRS). The EEG method is preferred due to its robustness and user-friendly approach (Bi et al., 2013).

Artificial intelligence (AI) refers to systems or computers that imitate human intelligence to carry out tasks and can (iteratively) improve themselves depending on the information that they acquire. AI can take several forms, including machine learning and deep learning. Machine learning refers to the form of AI that can automatically adapt with only minimal intervention from humans. On the other hand, deep learning is a subset of machine learning that learns from large data by exploiting more neural network layers than classical machine learning schemes. There are several reviews on EEG-based BCI using signal processing and machine learning (Craik et al., 2019; Al-Saegh et al., 2021; Alzahab et al., 2021; Rahman et al., 2021; Wang and Wang, 2021). Nevertheless, machine learning reviews devote only a small part to deep learning modalities, so no review has focused exclusively on deep learning. One of the best things about deep learning is that it can do feature engineering on its own: the data are combed through by an algorithm that searches for features that correlate with one another, and then combines those features to facilitate faster learning without any explicit instructions. A comprehensive review is much anticipated, as deep learning is the state-of-the-art classification pipeline. In this review, we report the most recent deep learning-based BCI research studies for the last 6 years. Figure 2 shows the PRISMA flow diagram of our literature review process.

We used PubMed, ERIC, JSTOR, IEEE Xplore, and Google Scholar as the electronic databases to retrieve the articles. As our goal is to include studies that relate to the three keywords EEG data, BCI applications, and deep learning, we looked for studies that included all three keywords. From the 245 studies, we removed 31 as they were either fully duplicated or subversions of other articles. After screening the remaining 214 papers, we excluded 57 because they used deep learning only for related works or as only a part of the full pipeline, resulting in 157 studies. We could not fully retrieve 34 studies out of these 157, and this filtering gives us 123 articles, of which five do not have a clear dataset description and the tasks of eight studies are irrelevant to our review. Finally, we explored 110 articles for this review.

To show how important this review is, we compare it to reviews that have been done recently in Table 1. As the comparison criteria, we have selected the coverage of the studies, the number of studies included in the review, the presence of dataset-specific studies in the review, whether the review is BCI application-specific, whether it gives future recommendations for researchers, and whether the review is based only on deep learning. This study is the most recent, covering articles until late 2022, and comprises the highest number of studies for the past 6 years. No other review study has done dataset-specific filtration of EEG-based BCI research, whereas we show the number of studies and results for
FIGURE 1
A generic brain–computer interface system.
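The acquisition stage of the generic system in Figure 1 is typically followed by band-limiting and standardization to improve the signal-to-noise ratio, as described later in Section 2. Below is a framework-free sketch of those two steps using a crude moving-average low-pass and z-scoring; the window length and the toy trace are illustrative only:

```python
import statistics

def moving_average(signal, window=5):
    """Crude low-pass filter: replace each sample with the mean of a
    short centered window, attenuating high-frequency noise."""
    half = window // 2
    return [
        statistics.fmean(signal[max(0, i - half):i + half + 1])
        for i in range(len(signal))
    ]

def standardize(signal):
    """Z-score the signal so downstream models see zero mean and unit
    variance (the 'standardized to improve SNR' step)."""
    mu = statistics.fmean(signal)
    sigma = statistics.pstdev(signal)
    return [(x - mu) / sigma for x in signal]

# Toy trace standing in for one EEG channel: a slow drift plus
# alternating high-frequency 'noise'.
raw = [2 * (i % 2) + 0.05 * i for i in range(50)]
clean = standardize(moving_average(raw))
```

Real pipelines use proper band-pass filters (e.g., FIR/IIR designs) rather than a moving average; the sketch only shows where the two operations sit in the chain.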
FIGURE 2
PRISMA flow diagram of the literature review process for studies on deep learning-based EEG-based BCI.
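The screening counts reported in the Introduction can be replayed arithmetically; this sketch simply mirrors the PRISMA steps summarized in Figure 2:

```python
# Replay of the PRISMA-style screening described in the text.
identified = 245
after_dedup = identified - 31        # duplicates / subversions removed
after_screening = after_dedup - 57   # deep learning only incidental to the work
retrieved = after_screening - 34     # full text could not be retrieved
final = retrieved - 5 - 8            # unclear dataset / irrelevant task

print(after_dedup, after_screening, retrieved, final)  # 214 157 123 110
```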
TABLE 1 Comparison between previous review works and our proposed review study.

References | Coverage | No. of studies | Dataset-specific studies | Only BCI application? | Deep learning specific | Future recommendation
Cao (2020) | 2017–2020 | Unspecified | No | Yes | No | No
each dataset separately. Furthermore, with a rich tabular comparison between the two works, we only consider BCI application-specific EEG data classification. Finally, we concentrate only on deep learning algorithms for EEG classification, in contrast to most of the reviews.

The study is organized as follows: After the introduction in Section 1, we introduce the core elements of EEG-based BCI in Section 2. Section 3 includes the classical methods that have been exploited for EEG-based BCI tasks. Then, we analyze the implementation of deep learning and related parts of this domain in Section 4. Sections 5, 6 are the discussion and conclusion of this article, respectively.

FIGURE 3
An architecture of BCI based on EEG data.

2. EEG-based BCI preliminaries

To translate human objectives or aspirations into real-time equipment control signals, the cognitive responses of humans are related to the physical world. In Figure 3, we depict the usage of EEG data in BCI applications. Electrophysiological activity patterns of human subjects are recorded by the acquisition device. Scalp electrodes are mounted over the headset to capture the neural responses of human subjects (Sakkalis, 2011). Furthermore, a pre-amplifier is used to strengthen the brain signals, and the amplified signal is then passed through a filter to remove unwanted components, noise, and interference. After that, an analog-to-digital converter (ADC) converts the filtered analog signal to a digital signal. The recorded electrical activities are then standardized to improve the signal-to-noise ratio (SNR) of the digital signal.

It is important to note that feature extraction yields the discriminative characteristics of the neural activity, which means that less data are needed to categorize the neural strategy. The data are then assigned to a specific group or category of brain patterns. After this stage, the retrieved feature set is transformed into operational control signals. The control signals made in the previous step are used to control the external interface device. Thus, the BCI applications can be controlled by these command signals.

3. Classical methods for EEG-based BCI applications

EEG is by far the most prevalent strategy due to its high efficiency and usability (Schalk et al., 2004). Be that as it may, pattern-based control utilizing EEG signals is troublesome because the signals are exceedingly noisy and contain numerous outliers. The human neural impulses acquired from a BCI based on EEG include noise and other attributes in addition to the signal of neural activity. The challenges are getting rid of the noise, pulling out relevant characteristics, and accurately classifying the signal. By translating the extracted feature set to give it a proper class label, operational commands can be made. There are two categories of classification algorithms: linear and non-linear classifiers (Guger et al., 2021).

The goal of quantitative classification is to figure out an object's system of classification based on how it looks. To recognize distinct types of brain activity, linear classifiers try to establish a linear relationship/function between the dependent and independent variables of a classification method (Schalk et al., 2004). This set of classifiers includes linear discriminant analysis (LDA) and support vector machines (Wang et al., 2009). Such a classifier sets up a hyperplane, a linear numerical operation that separates the different functions of the brain in the disentangled collection of characteristics.

Because of its simple, strong, non-overfitting operation and modest computing needs, LDA, which presumes a Gaussian distribution of the data, has been implemented in several BCI platforms (Wang et al., 2009). The support vector machine (SVM) is a type of machine learning model that can be used for both regression and classification (Wang et al., 2009). Even though we mention regression, it is best suited for classification. The primary goal of the SVM algorithm is to find a hyperplane in an N-dimensional space that evidently summarizes the data points. When no linear solution can be found between the dependent and independent variables of the classification method, non-linear classifiers are used instead. Artificial neural networks (ANNs), k-nearest neighbor (KNN), and SVMs are some of these machine learning approaches (Lotte et al., 2018; Akhter et al., 2020; Islam et al., 2020).

The ANNs are broadly utilized in an assortment of classification and pattern recognition assignments as they can learn from training samples and, in this way, classify the input samples in a like manner. These are the most broadly utilized ANNs for efficaciously
characterizing multiclass neurological actions. They operate on the basis of conducting a preparatory calculation to adjust the weights pertaining to specific input and hidden layer neurons to minimize the mean square error (Wang et al., 2009).

Herman et al. (2008) conducted a classification of EEG-based BCI by investigating the type-2 fuzzy logic approach. They claimed that their model exhibited better classification accuracy than the type-1 model of fuzzy logic. They also compared this method with a well-known classifier based on LDA. On the other hand, Aznan and Yang (2013) applied the Kalman filter to an EEG-based BCI for recognizing motor visuals in an attempt to optimize the system's accuracy and consistency. The common spatial pattern (CSP) was used to collect the necessary information, and the radial basis function (RBF) was used to categorize the signal. They also compared their results with the LDA method and claimed that their RBF method showed a better result.

Zhang H. et al. (2013) linked Bayes classification error to spatial filtering, which is an important tool to extract and classify the EEG signal. They claimed that by validating the positive relationship between the Bayes error and the Rayleigh quotient, a spatial filter with a lower Rayleigh quotient measuring the ratio of power features could reduce the Bayes error. Zhang R. et al. (2013) proposed z-score LDA, an updated version of LDA that introduces a new decision boundary capable of effectively handling heteroscedastic class distribution-related classification.

Agrawal and Bajaj (2020) proposed a brain state signal measuring method based on non-muscular channel EEG to record the brain activity acting as a source to facilitate communication between a patient and the outside environment. They used fast and short-term Fourier transforms to decompose the signals obtained from neural activity into smaller segments. They implemented the classification tasks using a support vector machine. Depending on the values of the evaluation grades, the overall accuracy of the system was found to be approximately 92%. Pan et al. (2016) suggested a framework for a sentiment state detection system based on EEG-based BCI technology. They categorized two emotional responses, happiness and sadness, using SVM. According to their observations, roughly 74.17% precision was noticed for these two classes.

Bousseta et al. (2018) proposed a BCI system based on EEG to control a robot arm by decoding the disabled person's thoughts obtained from the brain. They combined principal component analysis with the fast Fourier transform to perform the feature extraction and then fed it to the radial basis function-based support vector machine as a classifier. The outputs of this classifier were turned into commands that the robot arm followed.

Amarasinghe et al. (2014) proposed a method consisting of three steps based on self-organizing maps to recognize neural activities for unsupervised clustering. They identified two thought patterns, moving forward and resting. They also implemented the classification process based on feed-forward ANNs. They claimed that their mapping methods showed approximately 8% improvement over ANN-based classification.

Korovesis et al. (2019) established an electroencephalography BCI system that controls the movement of a mobile robot in response to the eye blinking of a human operator. They used the EEG signals of brain activity to find the right features and then fed those features into a well-trained neural network to guide the mobile robot. They achieved an accuracy of 92.1%. Sulaiman et al. (2011) extracted distinguishing features for human stress from EEG-based BCI neural activity. They combined the power spectrum ratio of EEG and spectral centroid techniques to enhance the accuracy (88.89%) of the k-nearest neighbor (kNN) classifier, detecting and classifying human stress in two states, closed-eye and open-eye.

Wang et al. (2009) conducted a review of various classification approaches for motor imagery (BCI competition III) and finger movement (BCI competition IV) on EEG signals. They compared the results in terms of classification accuracy. Gaussian SVM (GSVM) and k-NN show the desired performance because these classifiers are more vigorous than the other non-linear classifiers, as shown in Figure 4. However, learning vector quantization neural networks (LVQNN) and quadratic discriminant analysis (QDA) demonstrate the lowest accuracy. In addition, the performances of linear discriminant analysis (LDA) and linear SVM are almost identical. These results demonstrate that the classical machine learning methods are not yet optimal for this domain. Therefore, we need to try out deep learning methods on large datasets in EEG-based BCI applications.

4. Utilizing deep learning in EEG-based BCI

Table 2 lists all the EEG-based BCI studies using deep learning for the last 6 years. We have listed the five most important parts of the studies: datasets, number of subjects, deep learning model, BCI application, and classification result. This table will assist future researchers in determining the state of the art in this domain.

4.1. Data preprocessing

Due to the presence of artifacts and contamination, EEG data are still not being used for large-scale BCI studies (Pedroni et al., 2019). Even though some deep learning studies for EEG-based BCI say they did not use any preprocessing steps, most of the time preprocessing steps are very important. Some research works combine the preprocessing steps in their deep learning pipeline and call it an end-to-end framework (Antoniades et al., 2018; Aznan et al., 2018; Zhang et al., 2021). Moreover, an additional CNN layer has been used for the preprocessing in some cases (Amin et al., 2019a; Tang et al., 2019).

Most of the time, frequency domain filters were used in research to limit the bandwidth of the EEG signal. This is useful when there is a specific frequency range of interest, so that the rest can be safely ignored (Islam et al., 2016; Kilicarslan et al., 2016). In 30% of the studies, the signal was low-pass filtered below 45 Hz, or below a typical low gamma band. The filtered frequency ranges were grouped by task type and artifact reduction methods, which shows that most research used a technique to get rid of artifacts along with lowering the frequency ranges that were studied.

From our observation, 20% of the studies manually eliminated artifacts (Rammy et al., 2020; Atilla and Alimardani, 2021; Sundaresan et al., 2021). It is easy to see unexpected outliers visually, such as when data are missing or significant EEG artifacts are evident. But it might be hard to tell the difference between noisy
FIGURE 4
Classification algorithms and the corresponding accuracies of different classical classification methods based on a study.
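Figure 4 compares classical classifiers such as LDA, SVM, and k-NN. As a toy illustration of the linear family discussed in Section 3, the sketch below implements a nearest-class-mean rule, which is what LDA reduces to under an identity covariance and equal priors; the feature vectors and labels are invented for the sketch, not taken from any reviewed study:

```python
# Toy linear classifier of the kind compared in Figure 4:
# nearest class mean, i.e., LDA with identity covariance and equal priors.

def class_means(samples, labels):
    """Mean feature vector per class."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(x, means):
    """Assign x to the class whose mean is closest (squared Euclidean)."""
    def dist(m):
        return sum((a - b) ** 2 for a, b in zip(x, m))
    return min(means, key=lambda y: dist(means[y]))

# Hypothetical two-class motor-imagery features: [mu power, beta power].
X = [[1.0, 0.2], [0.9, 0.3], [0.2, 1.1], [0.3, 0.9]]
y = ["left", "left", "right", "right"]
means = class_means(X, y)
print(predict([0.95, 0.25], means))  # left
```

Full LDA additionally estimates a shared covariance matrix; the reviewed studies of course use real EEG features rather than these toy values.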
channels that are always noisy and channels that are only noisy sometimes. Furthermore, since the manual procedure is largely arbitrary, it is hard for other researchers to reproduce the methods. In addition to this, 10% of the studies did not routinely eliminate EEG artifacts. Independent component analysis (ICA) (Delorme et al., 2007) and discrete wavelet transformation (DWT) were the most common artifact removal methods utilized in the remaining two-thirds of the analyzed research (Kline et al., 2015).

The EEG electrodes also pick up undesired electrophysiological signals from eye blinks and neck muscles (Crespo-Garcia et al., 2008; Amin et al., 2019b). Additionally, there are issues with motion artifacts brought on by cable motion and electrode displacement while the individual is moving (Arnau-González et al., 2017; Chen et al., 2019a; Gao et al., 2019). There have been many studies on how to find and remove EEG artifacts (Nathan and Contreras-Vidal, 2016), but this is not the primary focus of our review. In summary, one of three approaches (i.e., manual removal, automated removal, or no removal of artifacts) is taken in each study for the artifact removal procedure.

4.2. Datasets

One of the main limitations of classical EEG-based BCI is the number of subjects who participated in the studies. Within the course of this review, EEG-based datasets were covered. This scope was taken into account as keywords to find the right research articles on the Google Scholar and ResearchGate websites. For this literature review, more than 100 research studies were found on these two websites by using the above criteria. Among these, around 47% of the research has been conducted based on the BCI competition dataset. Moreover, 9%, 16%, and 7% of the studies have been conducted on the DRYAD, SEED-VIG, and EEG MI datasets, respectively (Figure 5).

Deep learning has enabled larger datasets and more rigorous experiments in BCI. "How much data is enough data?" remains a significant question when using DL with EEG data. We looked at numerous descriptive dimensions to investigate this question: the number of participants, the amount of EEG data collected, and the task of the datasets. There are few studies that make use of their own collected datasets (Tang et al., 2017; Vilamala et al., 2017; Antoniades et al., 2018; Aznan et al., 2018; Behncke et al., 2018; El-Fiqi et al., 2018; Nguyen and Chung, 2018; Alazrai et al., 2019; Chen et al., 2019b; Fahimi et al., 2019; Hussein et al., 2019; Zgallai et al., 2019; Gao et al., 2020b; León et al., 2020; Maiorana, 2020; Penchina et al., 2020; Tortora et al., 2020; Atilla and Alimardani, 2021; Cai et al., 2021; Cho et al., 2021; Mai et al., 2021; Mammone et al., 2021; Petoku and Capi, 2021; Reddy et al., 2021; Shoeibi et al., 2021; Sundaresan et al., 2021; Ak et al., 2022). However, most of the deep learning studies have been conducted based on publicly available EEG datasets, such as:

• The dataset used to validate the classification method and signal processing for brain–computer interfaces was obtained from the BCI competition (Tabar and Halici, 2016; Amin et al., 2019b; Dai et al., 2019; Olivas-Padilla and Chacon-Murguia, 2019; Qiao and Bi, 2019; Roy et al., 2019; Song et al., 2019; Tang et al., 2019; Tayeb et al., 2019; Zhao et al., 2019; Li Y. et al., 2020; Miao et al., 2020; Polat and Özerdem, 2020; Rammy et al., 2020; Yang et al., 2020; Deng et al., 2021; Huang et al., 2021, 2022; Tiwari et al., 2021; Zhang et al., 2021). This dataset comprises EEG data obtained from participants. In the cue-based BCI structure, Class 1 was the left hand, Class 2 was the dominant hand, Class 3 was both feet, and Class 4 was the tongue. For each subject, two sessions were recorded at different times. Each session consisted of six runs separated by relatively short pauses, giving 288 trials per session, with 48 trials in each run.
• DRYAD dataset contains five studies that investigate natural speech understanding using a diversity of activities along with acoustic, cinematic, and envisioned verbal sensations (Amber et al., 2019).
• CHB-MIT dataset contains EEG recordings from children with intractable seizures (Dang et al., 2021). After people stopped taking their seizure medicine, they were monitored for up
TABLE 2 EEG-based BCI studies using deep learning for the last 6 years.

References | Dataset (no. of subjects) | Model | BCI application | Result
Aznan et al. (2018) | 4 subjects (4) | CNN | Classifying SSVEP frequencies | 96.00%
Dose et al. (2018) | Physionet EEG MI Dataset (109) | CNN | Stroke rehabilitation | 80.38%
El-Fiqi et al. (2018) | 2 datasets (5 and 12) | CNN | Person identification | 96.80%
Nguyen and Chung (2018) | 8 healthy subjects (8) | CNN | Developing a speller system | 99.20%
Shoeibi et al. (2021) | 21 patients with focal epilepsy (21) | CNN, LSTM | Diagnosing epileptic seizures | 99.10% (CNN), 100% (LSTM)
Antoniades et al. (2018) | 17 subjects (17) | CNN | Detecting epileptic discharges | 68.00%
Völker et al. (2018) | Flanker task dataset (31) | CNN | Decoding error | 81.70%
Behncke et al. (2018) | 5 males and 6 females (11) | CNN | Decoding robot errors | 75.00%
Oh et al. (2020) | 20 Parkinson patients (20) | CNN | Identifying Parkinson Disease | 88.25%
Zeng et al. (2018) | 10 healthy subjects (10) | LSTM | Predicting mental states of drivers | 91.79%
Hussein et al. (2019) | BCI (7) | LSTM | Detecting epileptic seizures | 100%
Vilamala et al. (2017) | 10 males and 10 females (20) | CNN | Scoring sleep stage | 89–97%
Tabar and Halici (2016) | BCI competition IV dataset 2b (9) | CNN+SAE | Classifying right and left hand movement | 72.40%
Olivas-Padilla and Chacon-Murguia (2019) | BCI competition IV dataset 2a (9) | CNN | Classifying MI | 67.50%–82.09%
Tayeb et al. (2019) | BCI competition IV dataset 2b (9) | CNN | Decoding MI movements | 77.72%
Sundaresan et al. (2021) | 8 with autism and 5 healthy subjects (13) | CNN+RNN | Classifying mental stress with autism | 93.27%
Cai et al. (2021) | 26 healthy subjects (26) | CNN | Classifying attentive state | 72.73%
Ieracitano et al. (2021) | 15 subjects (15) | CNN | Discriminating hand motion planning | 76.21%
Petoku and Capi (2021) | 462 trials of a single subject (1) | CNN | Detecting object movement | 60.00%
Zhang et al. (2021) | BCI Competition IV dataset 2a and 2b (18) | CNN | Classifying MI | 88.40%
Mai et al. (2021) | 4 males and 2 females (6) | CNN | Detecting emotional states | 93.34%
Deng et al. (2021) | BCI Competition IV 2a, III (12) | CNN | Classifying MI tasks | 85.30%
Atilla and Alimardani (2021) | 14 subjects while driving (14) | CNN | Classifying drivers attention | 89.00%
Mammone et al. (2021) | 15 participants (15) | CNN | Decoding motion planning | 90.77%
Huang et al. (2021) | BCI competition IV dataset 2a (9) | CNN | Classifying MI | 90.00%
León et al. (2020) | 10 subjects (10) | CNN, RNN | Classifying SSMVEP signals | 96.83%
Miao et al. (2020) | BCI competition IVa (5), right index finger MI dataset (10) | CNN | Classifying MI | 90.00%
Ko et al. (2020) | SEED-VIG dataset (15) | CNN | Estimating driver vigilance | 96.00%
Penchina et al. (2020) | 11 subjects (11) | RNN, LSTM | Classifying anxiety in adolescents with autism | 93.27%
Tortora et al. (2020) | 11 healthy subjects walking on a treadmill (8) | LSTM | Decoding gait | AUC = 90%
Liu J. et al. (2020) | DEAP (32), SEED (15) | CNN+SAE | Classifying emotion | 92.86% (DEAP), 96.77% (SEED)
Gao et al. (2020b) | DEAP (32), SEED (15) | CNN | Recognizing emotion | 90.63%
Hwang et al. (2020) | SEED dataset (15) | CNN | Recognizing emotion | 90.41%
Gao et al. (2020a) | 15 right-handed healthy students (15) | CNN | Recognizing emotion | 92.44%
Yang et al. (2020) | BCI competition IV dataset 1 (7) | CNN | Decoding MI EEG | 86.40%
Fahimi et al. (2019) | 120 healthy subjects performed the Stroop color test (120) | CNN | Detecting attention | 79.26%
Tang et al. (2019) | BCI competition data IV 2a (9) | CNN+SAE | Classifying MI task | 79.90%
Roy et al. (2019) | BCI competition IV 2b dataset (9) | CNN | Classifying brain states | 80.32%
Wilaiprasitporn et al. (2019) | DEAP dataset (32) | CNN, RNN | Identifying person | 99.90%
Qiao and Bi (2019) | BCI competition IV 2a dataset (9) | CNN+Bi-GRU | Classifying MI | 76.62%
Zgallai et al. (2019) | 10 volunteers (10) | CNN | EEG-driven BCI smart wheelchair | 70.00% (raw EEG), 96.00% (Fourier)
Gao et al. (2019) | 8 subjects in fatigue states (8) | CNN | Evaluating driver fatigue | 97.37%
Song et al. (2019) | BCI Competition IV dataset 2a (9) | CNN | Classifying MI | 73.40%
Zhao et al. (2019) | BCI Competition IV dataset 2a (9) | CNN | Classifying MI | Mean kappa: 0.64
Chen et al. (2019b) | DEAP dataset (32) | CNN | Recognizing emotion | AUC: 99.88%
Chen et al. (2019a) | 157 subjects (157) | CNN | Identifying biometric | 96.00%
Dai et al. (2019) | BCI Competition IV dataset 2b (9) | CNN+VAE | Classifying MI | Kappa = 0.60
Amin et al. (2019b) | BCI Competition IV dataset 2a (9) | CNN | Classifying MI | 75.7%
Ozdemir et al. (2019) | DEAP dataset (32) | CNN | Estimating emotional state | 95.96%
Tiwari et al. (2021) | BCI competition V dataset (3), Emotiv dataset (16) | CNN | Classifying left hand and right hand task | 72.51% (BCI V), 72.00% (Emotiv)
Dang et al. (2021) | CHB-MIT datasets (24) | CNN | Detecting epilepsy | 99.56%
Polat and Özerdem (2020) | BCI competition 2003 (1) | CNN | Detecting cursor movements | 90.38%
Chakladar et al. (2020) | STEW dataset (48) | Bi-LSTM | Estimating mental workload | 82.57%
Alazrai et al. (2019) | 22 subjects (22) | CNN | Decoding MI tasks of the same hand | 73.70%
Liu Y. et al. (2020) | DEAP dataset (32) | CNN | Recognizing emotion | 95.27%
Arnau-González et al. (2017) | DREAMER dataset (23) | CNN | Identifying subject | 94.01%
Zhu et al. (2022) | MBT-42 (42), Med-62 (62) | ConvNet, 3D-CNN | Classifying MI | 73.12% (MBT-42), 72.66% (Med-62)
Mattioli et al. (2022) | EEG Motor Movement Dataset V 1.0.0 (109) | 1D-CNN | Classifying MI | 99.38%
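Since Table 2 is effectively a small structured dataset, readers can query it programmatically, for example to find the best-reported result per application. A sketch using a few motor-imagery rows transcribed from the table (accuracy entries simplified to single numbers):

```python
# A few motor-imagery (MI) rows transcribed from Table 2.
studies = [
    {"ref": "Song et al. (2019)",     "model": "CNN",    "task": "MI", "acc": 73.40},
    {"ref": "Zhang et al. (2021)",    "model": "CNN",    "task": "MI", "acc": 88.40},
    {"ref": "Huang et al. (2021)",    "model": "CNN",    "task": "MI", "acc": 90.00},
    {"ref": "Mattioli et al. (2022)", "model": "1D-CNN", "task": "MI", "acc": 99.38},
]

# Best-reported MI accuracy among the transcribed rows.
best = max((s for s in studies if s["task"] == "MI"), key=lambda s: s["acc"])
print(best["ref"], best["acc"])  # Mattioli et al. (2022) 99.38
```

Note that such comparisons are only indicative: the rows come from different datasets, subject counts, and evaluation protocols.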
FIGURE 5
Distributions of datasets that are explored for EEG-based BCI applications.
to a few days to find out more about their seizures and see if they were good candidates for surgery. There are 23 patients in the dataset, separated into 24 cases (one patient has 2 recordings, 1.5 years apart). There are 969 h of scalp EEG recordings in this dataset, comprising 173 seizures. Seizures of various sorts can be found in the dataset (clonic, atonic, and tonic).

• DEAP dataset (Koelstra et al., 2011) includes 32 individuals who saw 1-min long music video snippets and judged arousal/valence/like–dislike/dominance/familiarity, as well as the frontal facial recording of 22 out of 32 subjects (Chen et al., 2019b; Ozdemir et al., 2019; Wilaiprasitporn et al., 2019; Aldayel et al., 2020; Gao et al., 2020a; Liu J. et al., 2020).
• The SEED-VIG dataset integrates EEG data with vigilance indicators throughout a driving virtual environment. In addition, 18 electrode channels and eye-tracking data are included (Ko et al., 2020).
• SEED dataset, wherein EEG was documented over 62 channels from 15 participants as they watched short videos eliciting positive, negative, or neutral feelings (Gao et al., 2020a; Hwang et al., 2020; Liu J. et al., 2020).
• The STEW dataset includes the raw EEG data of 48 participants who took part in a multitasking workload test using the SIMKAP experiment (Chakladar et al., 2020).
• One participant observes an arbitrary picture (chosen from 14k pictures in the ImageNet ILSVRC2013 training dataset) for 3 s, while their EEG signals are documented. Over 70,000 specimens are also included (Fares et al., 2019).

4.3. Deep learning modality

Deep Neural Networks (DNNs) are highly structured and therefore can learn features from raw or modestly processed data, minimizing the need for domain-specific processing and feature extraction. Furthermore, DNN-learned attributes may be even more proficient or evocative than human-designed attributes. Second, as in many realms where DL has surpassed the previous state of the art, it has the potential to improve the effectiveness of other analyses and classifications. Third, DL makes it easier to approach tasks such as conceptual sculpting and domain acclimation, which are attempted less often and fail less often when using EEG data. Deep learning techniques have made it feasible to synthesize high-dimensional structured data, such as images and audio.

Deep learning-based methods have been used to sum up high-dimensional, well-organized content such as images and speech. Computational methods can be used to learn intermediate representations or to complement data. Deep neural networks combined with techniques such as linkage synchronization make it easier to learn representations that do not depend on the domain, while keeping information about the task for domain adaptation. Similar methods can be implemented with EEG data to obtain more accurate representations and, as a result, improve the performance of EEG-based models across a wide range of subjects and tasks.

Various deep learning algorithms have been employed in EEG-based BCI applications, whereas CNN is clearly the most frequent one. For example, Arnau-González et al. (2017), Tang et al. (2017), Vilamala et al. (2017), Antoniades et al. (2018), Aznan et al. (2018), Behncke et al. (2018), Dose et al. (2018), El-Fiqi et al. (2018), Nguyen and Chung (2018), Völker et al. (2018), Alazrai et al. (2019), Amber et al. (2019), Amin et al. (2019b), Chen et al. (2019a,b), Fahimi et al. (2019), Gao et al. (2019), Olivas-Padilla and Chacon-Murguia (2019), Ozdemir et al. (2019), Roy et al. (2019), Song et al. (2019), Tayeb et al. (2019), Zgallai et al. (2019), Zhao et al. (2019), Aldayel et al. (2020), Gao et al. (2020a,b), Hwang et al. (2020), Ko et al. (2020), Li Y. et al. (2020), Liu J. et al. (2020), Miao et al. (2020), Oh et al. (2020), Polat and Özerdem (2020), Atilla and Alimardani (2021), Cai et al. (2021), Dang et al. (2021), Deng et al. (2021), Huang et al. (2021), Ieracitano et al. (2021), Mai et al. (2021), Mammone et al. (2021), Petoku and Capi (2021), Reddy et al. (2021), Tiwari et al. (2021), Zhang et al. (2021), Ak et al. (2022), and Huang et al. (2022) have explored deep learning-based algorithms. However, more recent BCI studies have implemented other deep learning modalities, including:

• Long short-term memory (LSTM) (Zeng et al., 2018; Fares et al., 2019; Hussein et al., 2019; Puengdang et al., 2019; Saha et al.,
FIGURE 6
A comparative schematic of accuracies by various deep learning approaches [i.e., Convolutional neural network (CNN) (Islam et al., 2021), long
short-term memory (LSTM), stacked autoencoder (SAE), and variational autoencoder (VAE)] on the BCI competition dataset.
FIGURE 7
A graph of accuracies by various deep learning approaches on the DEAP dataset.
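Most of the models whose accuracies are compared in Figures 6, 7 are CNNs, whose core operation is a learned kernel slid along the EEG time axis. A self-contained toy version of that primitive follows; the segment and kernel values are illustrative, not taken from any cited model:

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation): the operation a CNN
    slides along each EEG channel to pick up local temporal patterns."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

def relu(xs):
    """Elementwise rectification, the usual nonlinearity after a conv layer."""
    return [max(0.0, x) for x in xs]

# Toy single-channel "EEG" segment and a 3-tap rising-edge kernel.
segment = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
kernel = [-1.0, 0.0, 1.0]
feature_map = relu(conv1d(segment, kernel))
print(feature_map)  # [1.0, 1.0, 0.0, 0.0]
```

Real EEG CNNs stack many such kernels across channels and learn the weights by backpropagation; the sliding-window structure is what lets them respond to transient waveforms such as P300 deflections regardless of where they occur in the trial.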
5.3. BCI applications and deep learning

Deep learning-based BCI systems are mostly used in the healthcare industry to identify and diagnose mental illnesses, including epilepsy, Alzheimer's disease, and other disorders (Dose et al., 2018). First, research focusing on sleep-stage recognition based on spontaneous sleep EEG is utilized to identify sleeping disorders (Vallabhaneni et al., 2021). As a result, researchers do not need to seek out patients with sleeping issues, since it is simple to gather sleep EEG signals from healthy people. The diagnosis of epileptic seizures has also garnered a great deal of interest. The majority of seizure detection is dependent on spontaneous EEG and mental illness signs (Antoniades et al., 2018; Hussein et al., 2019; Dang et al., 2021; Shoeibi et al., 2021). CNN and RNN are common deep learning models in this context, as are hybrid models that combine RNN and CNN. Several methods (Turner et al., 2014) combined deep learning models for feature extraction with classical classifiers for detection. To diagnose seizures, researchers used a VAE for feature engineering followed by an SVM.

Smart environments are a possible future application scenario for BCI. With the rise of the Internet of Things (IoT), BCI can be linked to a growing number of smart settings. For instance, an aiding robot may be used in a smart house (Zhang et al., 2018c) in which the robot is controlled by brain impulses. In addition, Behncke et al. (2018) examined how to operate a robot using visually stimulated spontaneous EEG and fNIRS data. BCI-controlled
FIGURE 8
The accuracies of the SEED dataset based on CNN or CNN+SAE.
exoskeletons might assist individuals with compromised lower limb motor control in walking and everyday activities (Kwak et al., 2017). In the future, research on brain-controlled equipment may be useful for developing smart homes and smart hospitals for elderly and physically disabled people.

In comparison to other human–machine interface approaches, the greatest benefit of BCI is that it allows patients who have lost most motor functions, such as speaking, to interact with the outside world (Nguyen and Chung, 2018). Deep learning technology has considerably enhanced the efficiency of the brain's signal-based communications. The P300 speller is a common paradigm that allows individuals to type without a motor system, turning the user's intent into text (Cecotti and Graser, 2010). In addition, Zhang et al. (2018b) suggested a hybrid model that combines RNN, CNN, and AE to extract relevant characteristics from MI EEG to detect the letter the user intends to write. The suggested interface consists of 27 characters (the 26 English letters and the space bar) split into three character blocks (each block containing nine characters) in the first interface. There are three possible choices, and each one leads to a separate sub-interface with nine characters.

A prominent topic of interest for BCI researchers is the security industry. A security issue may be broken down into authentication (also known as "verification") and identification (also known as "recognition") components (Arnau-González et al., 2017; El-Fiqi et al., 2018; Chen et al., 2019b; Puengdang et al., 2019; Maiorana, 2020). The goal of the latter, which is often a multi-class classification task, is to identify the test subject (Zhang et al., 2017); the former is usually a simple yes-or-no question that only asks whether the test subject is allowed or not. Existing biometric identification/authentication systems rely primarily on the unique inherent physiological characteristics of people (e.g., face, iris, retina, voice, and fingerprint). These are all vulnerable: anti-surveillance prosthetic masks may defy face recognition, contact lenses can fool iris detection, vocoders can compromise speaker identification, and fingerprint films can fool fingerprint sensors. Due to their great attack resilience, EEG-based biometric person identification systems are emerging as attractive alternatives. Individual EEG waves are almost impossible for an impostor to replicate, making this method extremely resistant to the spoofing assaults faced by other identification methods. Deep neural networks were used by Mao et al. (2017) to identify the user's ID based on EEG signals, and CNN was used for personal identification. Zhang et al. (2017) presented and analyzed an attention-based LSTM model on both public and local datasets. The researchers (Zhang et al., 2018a) subsequently merged EEG signals with gait data to develop a dual-authentication system using a hybrid deep learning model.

Several articles simply aim to categorize the user's emotional state as a binary (positive/negative) or three-category (positive, neutral, and negative) issue using deep learning algorithms (Chen et al., 2019b; Ozdemir et al., 2019; Gao et al., 2020a,b; Hwang et al., 2020; Liu J. et al., 2020; Liu Y. et al., 2020; Sundaresan et al., 2021). Diverse articles employed CNN and its modifications to identify emotional EEG data (Li et al., 2016) and lie detection (Amber et al., 2019). Most of the time, the CNN-RNN deep learning model is used to find hidden traits in spontaneous emotional EEG. Using EEG data, Xu and Plataniotis (2016) employed a deep belief network (DBN) as a particular feature extractor for the emotional state categorization task. Moreover, on a more basic level, some studies seek to identify a positive/negative condition for each emotional dimension. For identifying emotions, Yin et al. (2017) suggested a multiple-fusion-layer-based ensemble classifier of AEs. Each AE is made up of three hidden layers that remove unwanted noise from the physiological data and give accurate representations of the features.

For traffic safety to be assured, a driver must be able to keep up their best performance and pay close attention. It has been shown that EEG signals may be beneficial in assessing people's cognitive status while doing certain activities (Almogbel et al., 2018). A motorist is often considered alert if their response time is less than or equal to 0.7 s and weary if their reaction time is greater than or equal to 2.1 s. By extracting the distinctive elements from the EEG data, Hajinoroozi et al. (2015) investigated the prediction of a driver's weariness. They investigated a DBN-based dimensionality reduction strategy. It is
important to be able to tell when a driver is tired, since tiredness makes accidents more likely. Furthermore, it is practical to identify driver weariness in daily life: the technology used to record EEG data is easy to find and small enough to use in a car, and the cost of an EEG headset is reasonable for the majority of individuals. Deep learning algorithms have greatly improved the accuracy of tiredness detection. In conclusion, driving sleepiness based on EEG may be identified with excellent accuracy (83–98%) (Fahimi et al., 2019; Ko et al., 2020; Atilla and Alimardani, 2021; Cai et al., 2021). The self-driving situation is where driver fatigue monitoring will likely be used in the future. Since the human driver is often expected to react correctly to a request to intervene in most self-driving scenarios, the driver must always be aware. As a result, we think that BCI-based driver fatigue detection can help the development of autonomous vehicles.

Human operators play an important role in automation systems for decision-making and strategy formulation. Human functional states, unlike those of machines or computers, cannot always meet the needs of a task because working memory is limited and psycho-physiological experience changes over time. A lot of researchers have concentrated on this subject. Mental effort may be estimated using spontaneous EEG. Bashivan et al. (2015) introduced a DBN model, a statistical technique for predicting cognitive load from single-trial EEG.

5.4. Recommendation for future research

However, there are still plenty of deep learning premises and domains to be used in EEG-based BCI, which will not only improve the performance but also make them more generalizable. Here are a few suggestions for future researchers regarding where they can uncover novelty utilizing deep learning.

• Graph Convolutional Networks (GCNs): One of the fundamental functions of the BCI is controlling machines using only MI and no physical motions. For the development of these BCI devices, it is very important to be able to classify MI brain activity in a reliable way. Even though previous research has shown promising results, there is still a need to improve classification accuracy to make BCI applications that are useful and cost-effective. One problem with making an EEG MI-based wheelchair is that it is still hard to make it flexible and robust to differences between people. Traditional techniques to decipher EEG data do not include the topological link between electrodes. So, it is possible that the Euclidean structure of EEG electrodes does not give a good picture of how signals interact with each other. To solve the problem, graph convolutional neural networks (GCNs) are presented to decode EEG data. GCN is a semi-supervised model that is often used to get topological properties from data in non-Euclidean space. GCNs have been used successfully in a number of graph-based applications. Graphs can show complicated relationships between entities. GCN not only successfully extracts topological information from data, but it also has interpretability and operability. Recently, researchers have been shifting from CNNs to GCNs for various applications, as GCNs can capture relational data better than CNNs. Though some studies have recently reported GCN in EEG-based BCI (Hou et al., 2020; Jia et al., 2020), it is mostly undiscovered. Any research in this domain using GCN might be the breakthrough needed to trigger deep learning-based BCI studies.

• Transfer Learning: The study of deep neural network-based methods for successfully transferring information from relevant disciplines is known as "deep transfer learning". Transfer learning focuses on dealing with data that violate the usual assumption that training and test data share the same distribution, by utilizing knowledge acquired while completing one assignment for a different but related job. Transfer learning uses data that have already been used to increase the size of the dataset. This means that there is no need to calibrate from scratch, transferred information is less noisy, and TL can loosen BCI constraints. Session-to-session transfer learning in BCIs is based on the idea that features extracted by the training module and algorithms can be used to help a subject do the same task in a different session. To find the best way to divide decisions among the different training sections, it is important to look at what they all have in common. As TL has a lot more opportunities in BCI applications, we have a few recommendations for future researchers.

The majority of TL research has focused on inter-subject and inter-session transfer. Cross-device transfers are beginning to gain interest, although cross-task transfers are mostly unexplored. Since 2016, there has, to the best of our knowledge, been only one similar study (He and Wu, 2020). Transfers between devices and tasks would make EEG-based BCIs far more realistic.

Utilizing the transferability of adversarial examples, adversarial attacks, one of the most recent developments in EEG-based BCIs, may be carried out across several machine learning models. However, specifically considering TL across domains may boost the attack's performance further. In black-box attacks, for example, TL can use publicly available datasets to reduce the number of queries to the victim model or better approximate the victim model with the same number of queries.

Regression issues and emotional BCI are two fresh uses of EEG-based BCIs that have been piquing curiosity among researchers. It is interesting that they are both passive BCIs. Although affective BCI may be used to create both classification and regression problems, the majority of past research has been on classification issues.

• Generative Deep Learning: The primary purpose of generative deep learning models is to produce training samples or supplement data. In other words, generative deep learning models help the BCI industry by making the training data better and giving it more of it. After augmenting the data, discriminative models will be used for classification. This method is meant to make trained deep learning networks more reliable and effective, especially when there is not a lot of training data. In short, the generative models use the input data to make a set of output data that is similar to the input data. This section will present two common generative deep learning models: variational autoencoders (VAE) and generative adversarial networks (GANs).

VAE is an important type of AE and one of the best generative algorithms. The standard AE and its variations can be used for representation, but they cannot be used for generation, since the learned code (or representation)
might not be continuous. Therefore, it is impossible to make a random sample that is the same as the sample that was put in. In other words, interpolation is not supported by the standard AE: we can duplicate the input sample but cannot construct one that is similar. This trait is what makes VAE so valuable for generative modeling: the latent spaces are meant to be continuous, which can make a huge contribution to capturing EEG data features for BCI applications (Lee et al., 2022).

Machine learning and deep learning modules must be trained on a significant amount of real-world data to perform classification tasks; however, there may be restrictions on obtaining enough real data, or the time and resources required may simply be too great. GANs have seen an increase in activity in recent years and are primarily used for data augmentation, addressing the issue of how to produce synthetic yet realistic-looking samples that mimic real-world data using generative models, so that the number of training samples can be increased. In comparison to CNNs, GANs have, to the best of our knowledge, been studied much less in BCIs. This is primarily due to the incomplete evaluation of the viability of using a GAN to generate time-sequence data. The spatial, spectral, and temporal properties of the EEG data produced by a GAN are comparable to those of actual EEG signals (Fahimi et al., 2020). This opens up new avenues for future research on GANs in EEG-based BCIs.

rate of EEG data. Using a predictive model, one can train a neural network on a sample of subjects before fine-tuning it on a single individual, which is likely to produce favorable results with less data from the individual. DNNs are typically regarded as "black boxes" when likened to more conventional means; therefore, it is crucial to scrutinize trained DL models. Indeed, simple model inspection techniques such as showing the weights of a linear classifier do not apply to deep neural networks, making their decisions far more difficult to comprehend.

This study presents an overview of EEG-based BCIs incorporating deep learning, with a concentration on the epistemological advantages and pitfalls, as well as the invaluable efforts in this area of study. This study shows that more research needs to be conducted on how much data are needed to use deep learning in EEG processing to its fullest potential. This type of research could look at the relationship between performance and data volume, the effectiveness of data augmentation, and the interplay of performance, data volume, and network depth. For each BCI application, researchers have examined measurement techniques, control signals, EEG feature extraction, classification techniques, and performance evaluation metrics. Tuning hyper-parameters could be the key to increasing the efficiency of deeper frameworks in deep learning. As mentioned earlier, hyper-parameter search is largely absent in this domain; this issue should be addressed in future studies.
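The subject-to-individual strategy discussed above, pretraining on a pool of subjects and then fine-tuning on the target individual, reduces at the data-handling level to a leave-one-subject-out split. A minimal sketch of that split logic (the subject IDs and trial contents below are made up for illustration):

```python
def leave_one_subject_out(trials):
    """Yield (held_out, pretrain, finetune) splits: pretrain on all subjects
    except one, then adapt on the held-out individual's own trials.
    `trials` maps subject id -> list of EEG trials (any representation)."""
    for held_out in trials:
        pretrain = [t for s, ts in trials.items() if s != held_out for t in ts]
        yield held_out, pretrain, trials[held_out]

# Hypothetical three-subject toy dataset.
trials = {"S1": ["a", "b"], "S2": ["c"], "S3": ["d", "e"]}
for subject, pretrain, finetune in leave_one_subject_out(trials):
    print(subject, len(pretrain), len(finetune))
```

The same split doubles as an honest evaluation protocol: reporting accuracy only on held-out subjects avoids the inflated numbers that within-subject shuffling can produce.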
References
Abdulkader, S. N., Atia, A., and Mostafa, M.-S. M. (2015). Brain computer interfacing: applications and challenges. Egyptian Inf. J. 16, 213–230. doi: 10.1016/j.eij.2015.06.002

Abiri, R., Borhani, S., Sellers, E. W., Jiang, Y., and Zhao, X. (2019). A comprehensive review of eeg-based brain-computer interface paradigms. J. Neural Eng. 16, 011001. doi: 10.1088/1741-2552/aaf12e

Acar, E., Roald, M., Hossain, K. M., Calhoun, V. D., and Adali, T. (2022). Tracing evolving networks using tensor factorizations vs. ica-based approaches. Front. Neurosci. 16, 861402. doi: 10.3389/fnins.2022.861402

Agrawal, R., and Bajaj, P. (2020). "EEG based brain state classification technique using support vector machine-a design approach," in 2020 3rd International Conference on Intelligent Sustainable Systems (ICISS) (Thoothukudi: IEEE), 895–900.

Ak, A., Topuz, V., and Midi, I. (2022). Motor imagery eeg signal classification using image processing technique over googlenet deep learning algorithm for controlling the robot manipulator. Biomed. Signal Process. Control. 72, 103295. doi: 10.1016/j.bspc.2021.103295

Akhter, T., Islam, M. A., and Islam, S. (2020). Artificial neural network based COVID-19 suspected area identification. J. Eng. Adv. 1, 188–194. doi: 10.38032/jea.2020.04.010

Alazrai, R., Abuhijleh, M., Alwanni, H., and Daoud, M. I. (2019). A deep learning framework for decoding motor imagery tasks of the same hand using eeg signals. IEEE Access 7, 109612–109627. doi: 10.1109/ACCESS.2019.2934018

Aldayel, M., Ykhlef, M., and Al-Nafjan, A. (2020). Deep learning for eeg-based preference classification in neuromarketing. Appl. Sci. 10, 1525. doi: 10.3390/app10041525

Almogbel, M. A., Dang, A. H., and Kameyama, W. (2018). "EEG-signals based cognitive workload detection of vehicle driver using deep learning," in 2018 20th International Conference on Advanced Communication Technology (ICACT) (Chuncheon: IEEE), 256–259.

Al-Saegh, A., Dawwd, S. A., and Abdul-Jabbar, J. M. (2021). Deep learning for motor imagery eeg-based classification: a review. Biomed. Signal Process. Control 63, 102172. doi: 10.1016/j.bspc.2020.102172

Alzahab, N. A., Apollonio, L., Di Iorio, A., Alshalak, M., Iarlori, S., Ferracuti, F., et al. (2021). Hybrid deep learning (hdl)-based brain-computer interface (bci) systems: a systematic review. Brain Sci. 11, 75. doi: 10.3390/brainsci11010075

Amarasinghe, K., Wijayasekara, D., and Manic, M. (2014). "EEG based brain activity monitoring using artificial neural networks," in 2014 7th International Conference on Human System Interactions (HSI) (Costa da Caparica: IEEE), 61–66.

Amber, F., Yousaf, A., Imran, M., and Khurshid, K. (2019). "P300 based deception detection using convolutional neural network," in 2019 2nd International Conference on Communication, Computing and Digital Systems (C-CODE) (Islamabad: IEEE), 201–204.

Amin, S. U., Alsulaiman, M., Muhammad, G., Bencherif, M. A., and Hossain, M. S. (2019a). Multilevel weighted feature fusion using convolutional neural networks for EEG motor imagery classification. IEEE Access 7, 18940–18950. doi: 10.1109/ACCESS.2019.2895688

Amin, S. U., Alsulaiman, M., Muhammad, G., Mekhtiche, M. A., and Hossain, M. S. (2019b). Deep learning for EEG motor imagery classification based on multi-layer cnns feature fusion. Future Generat. Comput. Syst. 101, 542–554. doi: 10.1016/j.future.2019.06.027

Antoniades, A., Spyrou, L., Martin-Lopez, D., Valentin, A., Alarcon, G., Sanei, S., et al. (2018). Deep neural architectures for mapping scalp to intracranial EEG. Int. J. Neural Syst. 28, 1850009. doi: 10.1142/S0129065718500090

Arnau-González, P., Katsigiannis, S., Ramzan, N., Tolson, D., and Arevalillo-Herrez, M. (2017). "Es1d: a deep network for eeg-based subject identification," in 2017 IEEE 17th International Conference on Bioinformatics and Bioengineering (BIBE) (Washington, DC: IEEE), 81–85.

Atilla, F., and Alimardani, M. (2021). "EEG-based classification of drivers attention using convolutional neural network," in 2021 IEEE 2nd International Conference on Human-Machine Systems (ICHMS) (Magdeburg: IEEE), 1–4.

Aznan, N. K. N., Bonner, S., Connolly, J., Al Moubayed, N., and Breckon, T. (2018). "On the classification of ssvep-based dry-EEG signals via convolutional neural networks," in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (Miyazaki: IEEE), 3726–3731.

Aznan, N. K. N., and Yang, Y.-M. (2013). "Applying kalman filter in EEG-based brain computer interface for motor imagery classification," in 2013 International Conference on ICT Convergence (ICTC) (Jeju: IEEE), 688–690.

Bashivan, P., Yeasin, M., and Bidelman, G. M. (2015). "Single trial prediction of normal and excessive cognitive load through eeg feature fusion," in 2015 IEEE Signal Processing in Medicine and Biology Symposium (SPMB) (Philadelphia, PA: IEEE), 1–5.

Behncke, J., Schirrmeister, R. T., Burgard, W., and Ball, T. (2018). "The signature of robot action success in eeg signals of a human observer: decoding and visualization using deep convolutional neural networks," in 2018 6th International Conference on Brain-Computer Interface (BCI) (Gangwon: IEEE), 1–6.

Bi, L., Fan, X.-A., and Liu, Y. (2013). EEG-based brain-controlled mobile robots: a survey. IEEE Trans. Hum. Mach. Syst. 43, 161–176. doi: 10.1109/TSMCC.2012.2219046

Bousseta, R., El Ouakouak, I., Gharbi, M., and Regragui, F. (2018). Eeg based brain computer interface for controlling a robot arm movement through thought. Irbm 39, 129–135. doi: 10.1016/j.irbm.2018.02.001

Cai, H., Xia, M., Nie, L., Wu, Y., and Zhang, Y. (2021). "Deep learning models with time delay embedding for eeg-based attentive state classification," in International Conference on Neural Information Processing (Springer), 307–314.

Cao, Z. (2020). A review of artificial intelligence for eeg-based brain-computer interfaces and applications. Brain Sci. Adv. 6, 162–170. doi: 10.26599/BSA.2020.9050017

Cecotti, H., and Graser, A. (2010). Convolutional neural networks for p300 detection with application to brain-computer interfaces. IEEE Trans. Pattern Anal. Mach. Intell. 33, 433–445. doi: 10.1109/TPAMI.2010.125

Chakladar, D. D., Dey, S., Roy, P. P., and Dogra, D. P. (2020). Eeg-based mental workload estimation using deep blstm-lstm network and evolutionary algorithm. Biomed. Signal Process. Control. 60, 101989. doi: 10.1016/j.bspc.2020.101989

Chen, J., Mao, Z., Yao, W., and Huang, Y. (2019a). "EEG-based biometric identification with convolutional neural network," in Multimedia Tools and Applications (Dordrecht), 1–21.

Chen, J., Zhang, P., Mao, Z., Huang, Y., Jiang, D., and Zhang, Y. (2019b). Accurate eeg-based emotion recognition on combined features using deep convolutional neural networks. IEEE Access 7, 44317–44328. doi: 10.1109/ACCESS.2019.2908285

Cho, J.-H., Jeong, J.-H., and Lee, S.-W. (2021). Neurograsp: real-time eeg classification of high-level motor imagery tasks using a dual-stage deep learning framework. IEEE Trans. Cybern. 52, 13279–13292. doi: 10.1109/TCYB.2021.3122969

Craik, A., He, Y., and Contreras-Vidal, J. L. (2019). Deep learning for electroencephalogram (eeg) classification tasks: a review. J. Neural Eng. 16, 031001. doi: 10.1088/1741-2552/ab0ab5

Crespo-Garcia, M., Atienza, M., and Cantero, J. L. (2008). Muscle artifact removal from human sleep eeg by using independent component analysis. Ann. Biomed. Eng. 36, 467–475. doi: 10.1007/s10439-008-9442-y

Dai, M., Zheng, D., Na, R., Wang, S., and Zhang, S. (2019). EEG classification of motor imagery using a novel deep learning framework. Sensors 19, 551. doi: 10.3390/s19030551

Dang, W., Lv, D., Rui, L., Liu, Z., Chen, G., and Gao, Z. (2021). Studying multi-frequency multilayer brain network via deep learning for eeg-based epilepsy detection. IEEE Sens. J. 21, 27651–27658. doi: 10.1109/JSEN.2021.3119411

Delorme, A., Sejnowski, T., and Makeig, S. (2007). Enhanced detection of artifacts in EEG data using higher-order statistics and independent component analysis. Neuroimage 34, 1443–1449. doi: 10.1016/j.neuroimage.2006.11.004

Deng, X., Zhang, B., Yu, N., Liu, K., and Sun, K. (2021). Advanced tsgl-eegnet for motor imagery EEG-based brain-computer interfaces. IEEE Access 9, 25118–25130. doi: 10.1109/ACCESS.2021.3056088

Dose, H., Møller, J. S., Iversen, H. K., and Puthusserypady, S. (2018). An end-to-end deep learning approach to mi-eeg signal classification for bcis. Expert. Syst. Appl. 114, 532–542. doi: 10.1016/j.eswa.2018.08.031

Du, Y., and Liu, J. (2022). IENet: a robust convolutional neural network for eeg based brain-computer interfaces. J. Neural Eng. 19, 036031. doi: 10.1088/1741-2552/ac7257

El-Fiqi, H., Wang, M., Salimi, N., Kasmarik, K., Barlow, M., and Abbass, H. (2018). "Convolution neural networks for person identification and verification using steady state visual evoked potential," in 2018 IEEE International Conference on Systems, Man, and Cybernetics (SMC) (Miyazaki: IEEE), 1062–1069.

Fahimi, F., Dosen, S., Ang, K. K., Mrachacz-Kersting, N., and Guan, C. (2020). Generative adversarial networks-based data augmentation for brain-computer interface. IEEE Trans. Neural Netw. Learn. Syst. 32, 4039–4051. doi: 10.1109/TNNLS.2020.3016666

Fahimi, F., Zhang, Z., Goh, W. B., Lee, T.-S., Ang, K. K., and Guan, C. (2019). Inter-subject transfer learning with an end-to-end deep convolutional neural network for eeg-based bci. J. Neural Eng. 16, 026007. doi: 10.1088/1741-2552/aaf3f6

Fares, A., Zhong, S.-H., and Jiang, J. (2019). EEG-based image classification via a region-level stacked bi-directional deep learning framework. BMC Med. Inform. Decis. Mak. 19, 1–11. doi: 10.1186/s12911-019-0967-9

Gao, Z., Li, Y., Yang, Y., Wang, X., Dong, N., and Chiang, H.-D. (2020a). A gpso-optimized convolutional neural networks for eeg-based emotion recognition. Neurocomputing 380, 225–235. doi: 10.1016/j.neucom.2019.10.096

Gao, Z., Wang, X., Yang, Y., Li, Y., Ma, K., and Chen, G. (2020b). A channel-fused dense convolutional network for EEG-based emotion recognition. IEEE Trans. Cognit. Dev. Syst. 13, 945–954. doi: 10.1109/TCDS.2020.2976112

Gao, Z., Wang, X., Yang, Y., Mu, C., Cai, Q., Dang, W., et al. (2019). EEG-based spatio-temporal convolutional neural network for driver fatigue evaluation. IEEE Trans. Neural Netw. Learn. Syst. 30, 2755–2763. doi: 10.1109/TNNLS.2018.2886414

Guger, C., Allison, B. Z., and Gunduz, A. (2021). "Brain-computer interface research: a state-of-the-art summary 10," in Brain-Computer Interface Research (Springer), 1–11.

Hajinoroozi, M., Jung, T.-P., Lin, C.-T., and Huang, Y. (2015). "Feature extraction with deep belief networks for driver's cognitive states prediction from EEG data," in 2015 IEEE China Summit and International Conference on Signal and Information Processing (ChinaSIP) (Chengdu: IEEE), 812–815.

Hassanien, A. E., and Azar, A. (2015). Brain-Computer Interfaces. Switzerland: Springer, 74.

He, H., and Wu, D. (2020). Different set domain adaptation for brain-computer interfaces: a label alignment approach. IEEE Trans. Neural Syst. Rehabil. Eng. 28, 1091–1108. doi: 10.1109/TNSRE.2020.2980299

Herman, P., Prasad, G., McGinnity, T. M., and Coyle, D. (2008). Comparative analysis of spectral approaches to feature extraction for eeg-based motor imagery classification. IEEE Trans. Neural Syst. Rehabil. Eng. 16, 317–326. doi: 10.1109/TNSRE.2008.926694

Hossain, K. M., Bhinge, S., Long, Q., Calhoun, V. D., and Adali, T. (2022). "Data-driven spatio-temporal dynamic brain connectivity analysis using falff: application to sensorimotor task data," in 2022 56th Annual Conference on Information Sciences and Systems (CISS) (Princeton, NJ: IEEE), 200–205.

Hou, Y., Jia, S., Lun, X., Shi, Y., and Li, Y. (2020). Deep feature mining via attention-based bilstm-gcn for human motor imagery recognition. arXiv preprint arXiv:2005.00777. doi: 10.48550/arXiv.2005.00777

Huang, E., Zheng, X., Fang, Y., and Zhang, Z. (2021). Classification of motor imagery EEG based on time-domain and frequency-domain dual-stream convolutional neural network. IRBM 43, 107–113. doi: 10.1016/j.irbm.2021.04.004

Huang, W., Chang, W., Yan, G., Yang, Z., Luo, H., and Pei, H. (2022). EEG-based motor imagery classification using convolutional neural networks with local reparameterization trick. Expert. Syst. Appl. 187, 115968. doi: 10.1016/j.eswa.2021.115968

Hussein, R., Palangi, H., Ward, R. K., and Wang, Z. J. (2019). Optimized deep neural network architecture for robust detection of epileptic seizures using eeg signals. Clini. Neurophysiol. 130, 25–37. doi: 10.1016/j.clinph.2018.10.010

Hwang, S., Hong, K., Son, G., and Byun, H. (2020). Learning cnn features from de features for EEG-based emotion recognition. Pattern Anal. Appl. 23, 1323–1335. doi: 10.1007/s10044-019-00860-w

Ieracitano, C., Morabito, F. C., Hussain, A., and Mammone, N. (2021). A hybrid-domain deep learning-based bci for discriminating hand motion planning from EEG sources. Int. J. Neural Syst. 31, 2150038. doi: 10.1142/S0129065721500386

Islam, M., Shampa, M., and Alim, T. (2021). Convolutional neural network based marine cetaceans detection around the swatch of no ground in the bay of bengal. Int. J. Comput. Digit. Syst. 12, 173. doi: 10.12785/ijcds/120173

Maiorana, E. (2020). Deep learning for eeg-based biometric recognition. Neurocomputing 410, 374–386. doi: 10.1016/j.neucom.2020.06.009

Mammone, N., Ieracitano, C., and Morabito, F. C. (2021). "Mpnnet: a motion planning decoding convolutional neural network for EEG-based brain computer interfaces," in 2021 International Joint Conference on Neural Networks (IJCNN) (Shenzhen: IEEE), 1–8.

Mao, Z., Yao, W. X., and Huang, Y. (2017). "EEG-based biometric identification with deep learning," in 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER) (Shanghai: IEEE), 609–612.

Mattioli, F., Porcaro, C., and Baldassarre, G. (2022). A 1D CNN for high accuracy classification and transfer learning in motor imagery eeg-based brain-computer interface. J. Neural Eng. 18, 066053. doi: 10.1088/1741-2552/ac4430

Miao, M., Hu, W., Yin, H., and Zhang, K. (2020). Spatial-frequency feature learning and classification of motor imagery eeg based on deep convolution neural network. Comput. Math. Methods Med. 2020, 1981728. doi: 10.1155/2020/1981728

Nathan, K., and Contreras-Vidal, J. L. (2016). Negligible motion artifacts in scalp electroencephalography (eeg) during treadmill walking. Front. Hum. Neurosci. 9, 708. doi: 10.3389/fnhum.2015.00708
Islam, M. A., Hasan, M. R., and Begum, A. (2020). Improvement of the handover Nguyen, T.-H., and Chung, W.-Y. (2018). A single-channel ssvep-based bci speller
performance and channel allocation scheme using fuzzy logic, artificial neural network using deep learning. IEEE Access 7, 1752–1763. doi: 10.1109/ACCESS.2018.2886759
and neuro-fuzzy system to reduce call drop in cellular network. J. Eng. Adv. 1, 130–138.
Oh, S. L., Hagiwara, Y., Raghavendra, U., Yuvaraj, R., Arunkumar, N., Murugappan,
doi: 10.38032/jea.2020.04.004
M., et al. (2020). A deep learning approach for parkinson’s disease diagnosis from EEG
Islam, S., Reza, R., Hasan, M. M., Mishu, N. D., Hossain, K. M., and Mahmood, Z. H. signals. Neural Comput. Appl. 32, 10927–10933. doi: 10.1007/s00521-018-3689-5
(2016). Effects of various filter parameters on the myocardial perfusion with polar plot
Olivas-Padilla, B. E., and Chacon-Murguia, M. I. (2019). Classification of multiple
image. Int. J. Eng. Res. 4, 1–10.
motor imagery using deep convolutional neural networks and spatial filters. Appl. Soft
Jia, S., Hou, Y., Shi, Y., and Li, Y. (2020). Attention-based graph resnet Comput. 75, 461–472. doi: 10.1016/j.asoc.2018.11.031
for motor intent detection from raw eeg signals. arXiv preprint arXiv:2007.13484.
Ozdemir, M. A., Degirmenci, M., Guren, O., and Akan, A. (2019). “EEG based
doi: 10.48550/arXiv.2007.13484
emotional state estimation using 2-d deep learning technique,” in 2019 Medical
Kilicarslan, A., Grossman, R. G., and Contreras-Vidal, J. L. (2016). A robust adaptive Technologies Congress (TIPTEKNO) (Izmir: IEEE), 1–4.
denoising framework for real-time artifact removal in scalp eeg measurements. J. Neural
Pan, J., Li, Y., and Wang, J. (2016). “An EEG-based brain-computer interface for
Eng. 13, 026013. doi: 10.1088/1741-2560/13/2/026013
emotion recognition,” in 2016 International Joint Conference on Neural Networks (IJCNN)
Kline, J. E., Huang, H. J., Snyder, K. L., and Ferris, D. P. (2015). Isolating gait-related (Vancouver, BC: IEEE), 2063–2067.
movement artifacts in electroencephalography during human walking. J. Neural Eng. 12,
Pedroni, A., Bahreini, A., and Langer, N. (2019). Automagic:
046022. doi: 10.1088/1741-2560/12/4/046022
standardized preprocessing of big eeg data. Neuroimage 200, 460–473.
Ko, W., Oh, K., Jeon, E., and Suk, H.-I. (2020). “Vignet: a deep convolutional neural doi: 10.1016/j.neuroimage.2019.06.046
network for EEG-based driver vigilance estimation,” in 2020 8th International Winter
Penchina, B., Sundaresan, A., Cheong, S., Grace, V., Valero-Cabré, A., and
Conference on Brain-Computer Interface (BCI) (Gangwon: IEEE), 1–3.
Martel, A. (2020). Evaluating deep learning EEG-based anxiety classification in
Koelstra, S., Muhl, C., Soleymani, M., Lee, J.-S., Yazdani, A., Ebrahimi, T., et al. (2011). adolescents with autism for breathing entrainment BCI. Brain Inform. 8, 13.
Deap: A database for emotion analysis; using physiological signals. IEEE Trans. Affect. doi: 10.21203/rs.3.rs-112880/v1
Comput. 3, 18–31. doi: 10.1109/T-AFFC.2011.15
Petoku, E., and Capi, G. (2021). “Object movement motor imagery for EEG based BCI
Korovesis, N., Kandris, D., Koulouras, G., and Alexandridis, A. (2019). Robot motion system using convolutional neural networks,” in 2021 9th International Winter Conference
control via an eeg-based brain-computer interface by using neural networks and alpha on Brain-Computer Interface (BCI) (Gangwon: IEEE), 1–5.
brainwaves. Electronics 8, 1387. doi: 10.3390/electronics8121387
Polat, H., and Özerdem, M. S. (2020). “Automatic detection of cursor movements
Kwak, N.-S., Müller, K.-R., and Lee, S.-W. (2017). A convolutional neural network for from the eeg signals via deep learning approach,” in 2020 5th International Conference
steady state visual evoked potential classification under ambulatory environment. PLoS on Computer Science and Engineering (UBMK) (Diyarbakir: IEEE), 327–332.
ONE 12, e0172578. doi: 10.1371/journal.pone.0172578
Puengdang, S., Tuarob, S., Sattabongkot, T., and Sakboonyarat, B. (2019). “EEG-based
Lee, D.-Y., Jeong, J.-H., Lee, B.-H., and Lee, S.-W. (2022). Motor imagery classification person authentication method using deep learning with visual stimulation,” in 2019 11th
using inter-task transfer learning via a channel-wise variational autoencoder-based International Conference on Knowledge and Smart Technology (KST) (Phuket: IEEE),
convolutional neural network. IEEE Trans. Neural Syst. Rehabil. Eng. 30, 226–237. 6–10.
doi: 10.1109/TNSRE.2022.3143836
Qiao, W., and Bi, X. (2019). “Deep spatial-temporal neural network for classification
León, J., Escobar, J. J., Ortiz, A., Ortega, J., González, J., Martín-Smith, P., et al. (2020). of eeg-based motor imagery,” in Proceedings of the 2019 International Conference on
Deep learning for eeg-based motor imagery classification: accuracy-cost trade-off. PLoS Artificial Intelligence and Computer Science, 265–272.
ONE 15, e0234178. doi: 10.1371/journal.pone.0234178
Rahman, M. M., Sarkar, A. K., Hossain, M. A., Hossain, M. S., Islam, M. R., Hossain,
Li, F., He, F., Wang, F., Zhang, D., Xia, Y., and Li, X. (2020). A novel simplified M. B., et al. (2021). Recognition of human emotions using eeg signals: a review. Comput.
convolutional neural network classification algorithm of motor imagery eeg signals based Biol. Med. 136, 104696. doi: 10.1016/j.compbiomed.2021.104696
on deep learning. Appl. Sci. 10, 1605. doi: 10.3390/app10051605
Rammy, S. A., Abrar, M., Anwar, S. J., and Zhang, W. (2020). “Recurrent deep learning
Li, J., Zhang, Z., and He, H. (2016). “Implementation of EEG emotion recognition for eeg-based motor imagination recognition,” in 2020 3rd International Conference on
system based on hierarchical convolutional neural networks,” in International Conference Advancements in Computational Sciences (ICACS) (Lahore: IEEE), 1–6.
on Brain Inspired Cognitive Systems (Springer), 22–33.
Reddy, T. K., Arora, V., Gupta, V., Biswas, R., and Behera, L. (2021). EEG-based
Li, Y., Yang, H., Li, J., Chen, D., and Du, M. (2020). EEG-based intention drowsiness detection with fuzzy independent phase-locking value representations using
recognition with deep recurrent-convolution neural network: performance and channel lagrangian-based deep neural networks. IEEE Trans. Syst. Man Cybern. Syst. 52, 101–111.
selection by grad-cam. Neurocomputing 415, 225–233. doi: 10.1016/j.neucom.2020. doi: 10.1109/TSMC.2021.3113823
07.072
Roy, S., McCreadie, K., and Prasad, G. (2019). “Can a single model deep learning
Liu, J., Wu, G., Luo, Y., Qiu, S., Yang, S., Li, W., et al. (2020). EEG-based emotion approach enhance classification accuracy of an EEG-based brain-computer interface?” in
classification using a deep neural network and sparse autoencoder. Front. Syst. Neurosci. 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC) (Bari: IEEE),
14, 43. doi: 10.3389/fnsys.2020.00043 1317–1321.
Liu, Y., Ding, Y., Li, C., Cheng, J., Song, R., Wan, F., et al. (2020). Multi-channel eeg- Saha, P., Fels, S., and Abdul-Mageed, M. (2019). “Deep learning the eeg manifold
based emotion recognition via a multi-level features guided capsule network. Comput. for phonological categorization from active thoughts,” in ICASSP 2019-2019 IEEE
Biol. Med. 123, 103927. doi: 10.1016/j.compbiomed.2020.103927 International Conference on Acoustics, Speech and Signal Processing (ICASSP) (Brighton,
UK: IEEE), 2762–2766.
Lotte, F., Bougrain, L., Cichocki, A., Clerc, M., Congedo, M., Rakotomamonjy, A., et al.
(2018). A review of classification algorithms for eeg-based brain-computer interfaces: a Sakkalis, V. (2011). Review of advanced techniques for the estimation of
10 year update. J. Neural Eng. 15, 031005. doi: 10.1088/1741-2552/aab2f2 brain connectivity measured with eeg/meg. Comput. Biol. Med. 41, 1110–1117.
doi: 10.1016/j.compbiomed.2011.06.020
Mai, N.-D., Long, N. M. H., and Chung, W.-Y. (2021). “1D-CNN-based bci system
for detecting emotional states using a wireless and wearable 8-channel custom-designed Schalk, G., McFarland, D. J., Hinterberger, T., Birbaumer, N., and Wolpaw, J. R. (2004).
EEG headset,” in 2021 IEEE International Conference on Flexible and Printable Sensors Bci2000: a general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed.
and Systems (FLEPS) (Manchester: IEEE), 1–4. Eng. 51, 1034–1043. doi: 10.1109/TBME.2004.827072
Shoeibi, A., Khodatars, M., Ghassemi, N., Jafari, M., Moridian, P., Alizadehsani, R., Wilaiprasitporn, T., Ditthapron, A., Matchaparn, K., Tongbuasirilai, T.,
et al. (2021). Epileptic seizures detection using deep learning techniques: a review. Int. J. Banluesombatkul, N., and Chuangsuwanich, E. (2019). Affective eeg-based person
Environ. Res. Public Health 18, 5780. doi: 10.3390/ijerph18115780 identification using the deep learning approach. IEEE Trans. Cognit. Dev. Syst. 12,
486–496. doi: 10.1109/TCDS.2019.2924648
Song, Y., Wang, D., Yue, K., Zheng, N., and Shen, Z.-J. M. (2019). “EEG-based
motor imagery classification with deep multi-task learning,” in 2019 International Joint Xu, H., and Plataniotis, K. N. (2016). “Affective states classification using EEG and
Conference on Neural Networks (IJCNN) (Budapest: IEEE), 1–8. semi-supervised deep learning approaches,” in 2016 IEEE 18th International Workshop
Sulaiman, N., Taib, M. N., Lias, S., Murat, Z. H., Aris, S. A. M., and Hamid, N. H. on Multimedia Signal Processing (MMSP) (Montreal, QC: IEEE), 1–6.
A. (2011). “EEG-based stress features using spectral centroids technique and k-nearest Yang, J., Ma, Z., Wang, J., and Fu, Y. (2020). A novel deep learning scheme for
neighbor classifier,” in 2011 UkSim 13th International Conference on Computer Modelling motor imagery EEG decoding based on spatial representation fusion. IEEE Access 8,
and Simulation (Cambridge, UK: IEEE), 69–74. 202100–202110. doi: 10.1109/ACCESS.2020.3035347
Sundaresan, A., Penchina, B., Cheong, S., Grace, V., Valero-Cabré, A., and Yin, Z., Zhao, M., Wang, Y., Yang, J., and Zhang, J. (2017). Recognition of emotions
Martel, A. (2021). Evaluating deep learning eeg-based mental stress classification using multimodal physiological signals and an ensemble deep learning model. Comput.
in adolescents with autism for breathing entrainment bci. Brain Inform. 8, 1–12. Methods Programs Biomed. 140, 93–110. doi: 10.1016/j.cmpb.2016.12.005
doi: 10.1186/s40708-021-00133-5
Zeng, H., Yang, C., Dai, G., Qin, F., Zhang, J., and Kong, W. (2018). Eeg
Tabar, Y. R., and Halici, U. (2016). A novel deep learning approach for classification of
classification of driver mental states by deep learning. Cogn. Neurodyn. 12, 597–606.
eeg motor imagery signals. J. Neural Eng. 14, 016003. doi: 10.1088/1741-2560/14/1/016003
doi: 10.1007/s11571-018-9496-y
Tang, X., Zhao, J., Fu, W., Pan, J., and Zhou, H. (2019). “A novel classification algorithm
Zgallai, W., Brown, J. T., Ibrahim, A., Mahmood, F., Mohammad, K., Khalfan, M.,
for MI-EEG based on deep learning,” in 2019 IEEE 8th Joint International Information
et al. (2019). “Deep learning ai application to an EEG driven BCI smart wheelchair,” in
Technology and Artificial Intelligence Conference (ITAIC) (Chongqing: IEEE), 606–611.
2019 Advances in Science and Engineering Technology International Conferences (ASET)
Tang, Z., Li, C., and Sun, S. (2017). Single-trial eeg classification of motor imagery using (Dubai: IEEE), 1–5.
deep convolutional neural networks. Optik 130, 11–18. doi: 10.1016/j.ijleo.2016.10.117
Zhang, C., Kim, Y.-K., and Eskandarian, A. (2021). Eeg-inception: an accurate and
Tayeb, Z., Fedjaev, J., Ghaboosi, N., Richter, C., Everding, L., Qu, X., et al. (2019). robust end-to-end neural network for EEG-based motor imagery classification. J. Neural
Validating deep neural networks for online decoding of motor imagery movements from Eng. 18, 046014. doi: 10.1088/1741-2552/abed81
eeg signals. Sensors 19, 210. doi: 10.3390/s19010210
Zhang, H., Yang, H., and Guan, C. (2013). Bayesian learning for spatial filtering
Tiwari, S., Goel, S., and Bhardwaj, A. (2021). Midnn-a classification approach for the in an EEG-based brain-computer interface. IEEE Trans. Neural Netw. Learni. Syst. 24,
eeg based motor imagery tasks using deep neural network. Appl. Intell. 52, 4824–4843. 1049–1060. doi: 10.1109/TNNLS.2013.2249087
doi: 10.1007/s10489-021-02622-w
Zhang, R., Xu, P., Guo, L., Zhang, Y., Li, P., and Yao, D. (2013). Z-score linear
Tortora, S., Ghidoni, S., Chisari, C., Micera, S., and Artoni, F. (2020). Deep learning- discriminant analysis for eeg based brain-computer interfaces. PLoS ONE 8, e74433.
based bci for gait decoding from eeg with lstm recurrent neural network. J. Neural Eng. doi: 10.1371/journal.pone.0074433
17, 046011. doi: 10.1088/1741-2552/ab9842
Zhang, X., Yao, L., Huang, C., Gu, T., Yang, Z., and Liu, Y. (2017). Deepkey:
Turner, J., Page, A., Mohsenin, T., and Oates, T. (2014). “Deep belief networks used on an EEG and gait based dual-authentication system. arXiv preprint arXiv:1706.01606.
high resolution multichannel electroencephalography data for seizure detection,” in 2014 doi: 10.48550/arXiv.1706.01606
AAAI Spring Symposium Series.
Zhang, X., Yao, L., Kanhere, S. S., Liu, Y., Gu, T., and Chen, K. (2018a). Mindid: Person
Vallabhaneni, R. B., Sharma, P., Kumar, V., Kulshreshtha, V., Reddy, K. J., Kumar, S. S., identification from brain waves through attention-based recurrent neural network. Proc.
et al. (2021). Deep learning algorithms in eeg signal decoding application: a review. IEEE ACM Interact. Mobile Wearable Ubiquitous Technol. 2, 1–23. doi: 10.1145/3264959
Access 9, 125778–125786. doi: 10.1109/ACCESS.2021.3105917
Zhang, X., Yao, L., Sheng, Q. Z., Kanhere, S. S., Gu, T., and Zhang, D. (2018b).
Vilamala, A., Madsen, K. H., and Hansen, L. K. (2017). “Deep convolutional neural “Converting your thoughts to texts: Enabling brain typing via deep feature learning
networks for interpretable analysis of EEG sleep stage scoring,” in 2017 IEEE 27th of EEG signals,” in 2018 IEEE International Conference on Pervasive Computing and
International Workshop on Machine Learning For Signal Processing (MLSP) (Tokyo: Communications (PerCom) (Athens: IEEE), 1–10.
IEEE), 1–6.
Zhang, X., Yao, L., Zhang, S., Kanhere, S., Sheng, M., and Liu, Y. (2018c). Internet
Völker, M., Schirrmeister, R. T., Fiederer, L. D., Burgard, W., and Ball, T. (2018). “Deep of things meets brain-computer interface: A unified deep learning framework for
transfer learning for error decoding from non-invasive EEG,” in 2018 6th International enabling human-thing cognitive interactivity. IEEE Internet Things J. 6, 2084–2092.
Conference on Brain-Computer Interface (BCI) (Gangwon: IEEE), 1–6. doi: 10.1109/JIOT.2018.2877786
Wang, B., Wong, C. M., Wan, F., Mak, P. U., Mak, P. I., and Vai, M. I. (2009). Zhao, X., Zhang, H., Zhu, G., You, F., Kuang, S., and Sun, L. (2019). A multi-branch 3d
“Comparison of different classification methods for eeg-based brain computer interfaces: convolutional neural network for EEG-based motor imagery classification. IEEE Trans.
a case study,” in 2009 International Conference on Information and Automation (Zhuhai; Neural Syst. Rehabil. Eng. 27, 2164–2177. doi: 10.1109/TNSRE.2019.2938295
Macau: IEEE), 1416–1421.
Zhu, H., Forenzo, D., and He, B. (2022). On the deep learning models for EEG-based
Wang, J., and Wang, M. (2021). Review of the emotional feature extraction and brain-computer interface using motor imagery. IEEE Trans. Neural Syst. Rehabil. Eng. 30,
classification using eeg signals. Cognit. Rob. 1, 29–40. doi: 10.1016/j.cogr.2021.04.001 2283–2291. doi: 10.1109/TNSRE.2022.3198041