Cognitive Workload Detection From Raw EEG-Signals of Vehicle Driver Using Deep Learning
Abstract—Electroencephalography (EEG) signals have been proven to be effective in evaluating a human's cognitive state under specific tasks. Conventional classification models utilized for EEG classification heavily rely on signal pre-processing and hand-designed features. In this paper, we propose an end-to-end deep neural network which is capable of classifying multiple types of cognitive workload of a vehicle driver and the context of driving, using only raw EEG signals as its input without any pre-processing nor the need for conventional hand-designed features. Data used in this study are collected throughout multiple driving sessions conducted on a high-fidelity driving simulator. Experimental results on 4 channels of raw EEG data show that the proposed model is capable of accurately detecting the cognitive workload of a driver and the context of driving.

Keywords— Deep Learning, EEG, Neural Networks, Cognitive Workload, Driving, Stress

Manuscript received on Dec 31, 2017.
Mohammad A. Almogbel is with the Graduate School of Fundamental Science and Engineering, Waseda University, Tokyo, Japan (Corresponding author, email: [email protected]).
Anh H. Dang is with GITS, Waseda University, Tokyo, Japan (email: [email protected]).
Wataru Kameyama is with the Faculty of Science and Engineering, Waseda University, Tokyo, Japan (email: [email protected]).

I. INTRODUCTION

Maintaining safe levels of cognitive workload is extremely crucial to ensure optimal performance and attention whilst driving automobiles. Different methods have been widely investigated to monitor a driver's cognitive workload, including heart rate monitoring [1][2], galvanic skin response [3][4], facial expression [5][6] and so on [7][8][9][10]. One of the most popular measures for assessing the mental state of humans is the electroencephalography (EEG) signal. EEG has reportedly shown success in evaluating mental states such as drowsiness [11], mind wandering [12] and alertness [13]. Despite being an active research area, understanding and applying EEG signals are still limited due to the lack of a true understanding of the brain's activities, which prevents good EEG features from being engineered. Besides, EEG signals are usually strongly affected by noise and interference, and accurate reading requires delicate equipment and sealing.

Conventional analysis approaches make extensive use of the Fourier transform to decompose an EEG signal into multiple frequencies. Only well-known frequency bands are then used for the feature design process. For example, the alpha band (8 to 15 [Hz]) is correlated with relaxation, and the beta band (16 to 31 [Hz]) is associated with mental stress. Pre-processing methods such as the Butterworth bandpass filter and stationary wavelet transform filters [14] are used to remove high- and low-frequency noise [13]. Several efforts in applying deep learning to EEG have been carried out [11][19][20][21]. However, all of these previous studies still rely on signal pre-processing pipelines, and none offers a system that can handle raw data directly. Thus, valuable information might be discarded during the pre-processing.

The purpose of this paper is to introduce an end-to-end deep neural network model that can directly infer cognitive workload and context of driving from raw EEG, where signal pre-processing as well as conventional hand-designed features are not required.

To evaluate the proposed model, EEG signal recordings have been carried out on a subject driving a vehicle in a relatively realistic simulated environment under several different types of cognitive workloads and contexts of driving. Our experiment shows that the proposed model can accurately classify different labels of cognitive workload and driving context from raw EEG signals of a vehicle driver in a simulated environment without the use of any conventional hand-designed features.

This paper is organized as follows: Section II previews the related research. Section III explains the proposed model used to classify the data. Section IV details the data collection methodology and the experiments conducted to evaluate the proposed model. Section V summarizes the acquired results, and finally the conclusion is given in Section VI.

II. RELATED WORKS

Monitoring electrical brain activity by non-invasive means is referred to as EEG in the clinical context. Brain activities produce electrical charges caused by the neurons inside the brain, and voltage fluctuations are measured using conductive materials called electrodes and a reference electrode attached to the head and scalp [15]. These voltages pass through an amplifier for analysis. Thanks to recent advancements, compact lightweight devices such as the MindWave Mobile Headset [16], Emotiv Insight 5 [17] and Muse [18], which is used in this study, have been introduced
Fig. 1 Network diagram. The blue cuboid denotes the four raw EEG channels, while the gray cuboid inside denotes the slicing window. The white cuboid denotes the kernel, while the red cubes denote the filters applied within each convolutional layer. All configurations are detailed in Table II.
into the market to allow a more consumer-friendly means to monitor EEG signals.

Conventional approaches to classifying EEG signals consist of decomposing the signals to extract features to be used for classification. Recent studies of EEG analysis utilizing deep learning with different pre-processing methodologies are shown in Table I.

In [11], FFT is used before classifying the signals to detect driver drowsiness. [19] evaluates drivers' cognitive performance in a simulated environment using frequency-filtered EEG. Similarly, [20] proposes a model that can predict right- and left-hand movements from frequency-filtered EEG signals. [21] applies a spatial filter before classifying pathological versus normal EEG recordings.

TABLE I
RECENT STUDIES ON EEG ANALYSIS WITH DEEP LEARNING

Ref.  Pre-processing       Feature extraction  Classifier   Accuracy
[19]  Frequency filtering  CNN                 1-layer ANN  86.06%
[11]  FFT                  N/A                 1-layer ANN  86.50%
[21]  Spatial filter       CNN                 1-layer ANN  84.80%
[20]  Frequency filtering  CNN                 1-layer ANN  86.41%

The FC block outputs a two-, three- or six-dimensional vector depending on the type of classification. The softmax function is used to interpret this vector as the probability of each classification class. Except for the final layer of the FC block, all layers in this deep network utilize the rectified linear unit (ReLU) [24] activation function. Batch normalization [25] is also used after every ReLU activation.

Our CNN filters are applied to all EEG channels at the same time. This method ensures that all possible combinations of features are captured. The small filter size and stride, together with the high number of CNN layers, also ensure that features from multiple hierarchical levels are extracted.

Different from images, EEG comprises multiple time series, one per channel. For that reason, it is essential to design the CNN filters to stride along the time direction only; thus, each filter must fully cover all other dimensions of the input data.

Table II describes the network configuration of the proposed model. The output shape of each layer is omitted from the table since the model is tested on different input sizes. Input sizes are explained in Table III in Section IV-A.

channel configuration has been utilized; two dry electrodes on the forehead, AF7 (left) and AF8 (right), two behind the ears, TP9 (left) and TP10 (right), and a reference electrode (Fpz) in the middle of the forehead above the nasion. Muse is capable of recording EEG signals at a sampling rate of 256 [Hz]. Moreover, it is very portable, lightweight and easy to use, and it utilizes low-energy Bluetooth technology to transmit data, making it feasible for real-life usage.

three levels; zero traffic (light), moderately dense traffic (medium) and highly dense traffic (high). Table IV shows the traffic flow and density for each level, where traffic flow denotes the average number of cars passing a single point per minute and traffic density denotes the average number of cars within 1 kilometer of a single point.

TABLE IV
EXPERIMENT CONDITIONS AND COMBINATIONS

Traffic  Flow         Density        City  Highway
Light    0 cars/min   0 cars/km      LC    LH
Medium   30 cars/min  22.5 cars/km   MC    MH
High     36 cars/min  33.23 cars/km  HC    HH
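The time-only striding and all-channel filter coverage described in Section III can be illustrated with a toy NumPy sketch. This is a minimal illustration under assumptions, not the configuration of Table II: the filter count, width and stride below are made up, and a real implementation would use a deep-learning framework.

```python
import numpy as np

def conv_time(x, kernels, stride):
    """Convolve along the time axis only: each kernel spans ALL input
    channels (the non-time dimension is fully covered), so striding
    happens exclusively in the time direction."""
    n_ch, T = x.shape
    n_k, k_ch, k_len = kernels.shape
    assert k_ch == n_ch, "each kernel must cover every channel"
    T_out = (T - k_len) // stride + 1
    out = np.empty((n_k, T_out))
    for i in range(n_k):
        for j in range(T_out):
            s = j * stride
            out[i, j] = np.sum(kernels[i] * x[:, s:s + k_len])
    return np.maximum(out, 0.0)  # ReLU activation, as after every layer

def softmax(v):
    """Interpret the final FC output as class probabilities."""
    e = np.exp(v - v.max())
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 1024))        # 4 raw EEG channels, 1024 samples
k = rng.standard_normal((8, 4, 4)) * 0.1  # 8 filters of width 4 over all channels
h = conv_time(x, k, stride=4)
print(h.shape)                            # (8, 256): (1024 - 4) // 4 + 1 steps

p = softmax(rng.standard_normal(6))       # e.g. the six-way workload/context head
print(round(p.sum(), 6))                  # 1.0
```

Because each kernel consumes all four channels at once, the feature maps after the first layer are indexed by filter and time only, which is the property the text relies on for capturing cross-channel feature combinations.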
Fig. 4 Learning curve for training and evaluation of the model with the best performing slicing window size (150 [sec]) for workload (left), context
(middle) and workload/context (right)
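The slicing windows referred to in the caption can be sketched as follows. The 150 [sec] window length and the 256 [Hz] Muse sampling rate come from the text; the non-overlapping scheme and the 10-minute recording length are assumptions for illustration, as the paper does not state its exact overlap strategy.

```python
import numpy as np

FS = 256  # Muse sampling rate [Hz], as stated in the text

def slice_windows(eeg, window_sec, fs=FS):
    """Cut a (channels, samples) recording into non-overlapping
    windows of window_sec seconds each (assumed scheme)."""
    w = window_sec * fs
    n = eeg.shape[1] // w          # number of full windows that fit
    return np.stack([eeg[:, i * w:(i + 1) * w] for i in range(n)])

recording = np.zeros((4, 10 * 60 * FS))   # 4 channels, 10 minutes
windows = slice_windows(recording, window_sec=150)
print(windows.shape)                      # (4, 4, 38400): four 150-second windows
```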
context of driving regardless of the cognitive workload of the driver. Inferring the context of driving directly from EEG signals can be helpful in analyzing brain-signal activity during different contexts of driving. Therefore, we group our EEG data into 2 classes: city driving and highway driving;
- City: LC, MC, HC
- Highway: LH, MH, HH
Each of the two classes yields 36 sessions (8,294,400 × 4 data points). The last session of each merged class is used for evaluation. In this type of classification, the network configuration is unchanged from the default parameters in Table II.

3) Context/Workload Classification:
Finally, we are interested in classifying the cognitive workload of the driver as well as the driving context at the same time. We assign a unique label to every combination of cognitive workload and context mentioned above to perform a six-way classification, detecting both the cognitive workload of the driver and the context of driving.
Each of the six types yields 2,764,800 data points for every raw EEG channel. The last session of each type is used for evaluation. Slight modifications to the model are as follows: for the slicing window size of 180 [sec], the number of units in FC1 is increased to 128; for the slicing window sizes of 60/30 [sec], the number of units in FC1 is decreased to 32 instead of 64, and the stride of the 7th convolutional layer is lowered from 4 to 2. The rest remains the same as the default configuration shown in Table II.

C. Training and Evaluation Parameters
The model is trained for 10 epochs using the RMSProp optimizer with a learning rate of 0.0001. An epoch represents training the model over all of the data. Input data are divided into batches of 64. During the training process, 50% dropout is applied to the first and second FC layers.
To evaluate the model, a random batch of 64 samples and its corresponding labels is drawn from the evaluation set to be used for cross-validation, and for every batch, recall, precision and accuracy scores are calculated.

V. RESULTS & DISCUSSION

Fig. 4 shows the learning curves for training and evaluation of the model for the best performing slicing window size of 150 [sec] for all three types of classification.

1) Workload Classification
We train our model with data prepared using different sizes of slicing windows. Table V shows the evaluation results of our proposed model for each window size after the training process. As shown in the table, the model achieves the highest performance with larger slicing window sizes; on average, 120 [sec] is the optimal window size for this experiment.
Furthermore, the network is capable of accurately classifying low-workload sessions, whereas high-workload data are sometimes misclassified as medium workload. Session types are grouped together according to traffic flow/density; however, the actual cognitive workload may not correlate in such a manner in real-life situations.

2) Context Classification
Similar to the workload classification, Table VI shows the evaluation results of our model on context classification. This type of classification achieves the best overall results, and on average, 150 [sec] is the optimal window size for this
TABLE V
WORKLOAD CLASSIFICATION METRICS FOR DIFFERENT WINDOW SIZES

                    180    150    120    90     60     30     Avg.
Light    Recall     0.997  0.979  0.960  0.928  0.936  0.791  0.932
         Precision  1.000  0.996  1.000  0.982  0.987  0.982  0.991
         Accuracy   0.999  0.991  0.984  0.970  0.976  0.926  0.974
         F1 Score   0.999  0.987  0.980  0.954  0.961  0.876  0.959
Medium   Recall     0.833  0.921  0.905  0.915  0.890  0.974  0.906
         Precision  0.997  0.878  0.950  0.673  0.867  0.673  0.840
         Accuracy   0.950  0.923  0.953  0.834  0.912  0.838  0.902
         F1 Score   0.908  0.899  0.927  0.776  0.878  0.796  0.864
High     Recall     1.000  0.871  1.000  0.627  0.912  0.749  0.860
         Precision  0.884  0.926  0.900  0.912  0.900  0.986  0.918
         Accuracy   0.951  0.932  0.969  0.855  0.933  0.908  0.925
         F1 Score   0.938  0.898  0.947  0.743  0.906  0.851  0.881
Average  Recall     0.943  0.923  0.955  0.823  0.912  0.838  0.899
         Precision  0.960  0.933  0.950  0.856  0.918  0.880  0.916
         Accuracy   0.967  0.949  0.969  0.886  0.940  0.891  0.934
         F1 Score   0.948  0.928  0.951  0.824  0.915  0.841  0.901
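The per-class recall, precision, accuracy and F1 scores reported in Tables V–VII follow the standard one-vs-rest definitions, which can be sketched as follows; the label vectors here are illustrative, not data from the experiments.

```python
import numpy as np

def class_scores(y_true, y_pred, cls):
    """One-vs-rest recall, precision, accuracy and F1 for class `cls`."""
    tp = np.sum((y_pred == cls) & (y_true == cls))  # correctly found
    fp = np.sum((y_pred == cls) & (y_true != cls))  # falsely claimed
    fn = np.sum((y_pred != cls) & (y_true == cls))  # missed
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    accuracy = np.mean(y_true == y_pred)            # overall, all classes
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, accuracy, f1

y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2])
y_pred = np.array([0, 1, 1, 1, 1, 2, 2, 0])
r, p, a, f1 = class_scores(y_true, y_pred, cls=1)
print(r, p)   # 1.0 0.75 : every class-1 sample found, one false positive
```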
TABLE VI
CONTEXT CLASSIFICATION METRICS FOR DIFFERENT WINDOW SIZES

                    180    150    120    90     60     30     Avg.
City     Recall     0.885  0.954  0.862  0.918  0.886  0.903  0.901
         Precision  0.926  0.939  0.862  0.887  0.935  0.901  0.908
         Accuracy   0.996  0.998  0.985  1.000  0.995  0.981  0.976
         F1 Score   0.905  0.947  0.862  0.903  0.910  0.902  0.905
Highway  Recall     0.927  0.937  0.886  0.886  0.946  0.890  0.912
         Precision  0.890  0.957  0.886  0.921  0.901  0.896  0.908
         Accuracy   0.996  0.998  0.985  1.000  0.995  0.981  0.976
         F1 Score   0.908  0.947  0.886  0.903  0.923  0.893  0.910
Average  Recall     0.906  0.946  0.874  0.902  0.916  0.896  0.907
         Precision  0.908  0.948  0.874  0.904  0.918  0.898  0.908
         Accuracy   0.996  0.998  0.985  1.000  0.995  0.981  0.976
         F1 Score   0.906  0.947  0.874  0.903  0.916  0.897  0.907
experiment. The network can accurately distinguish between city and highway driving, which shows a high correlation between the context of driving and the EEG signals.

3) Workload/Context Classification
Table VII shows the workload/context classification results. Similar to context classification, the window size of 150 [sec] achieves the best average performance. However, in general the network performance drops slightly in this type of classification, since training the network on each class individually leaves a smaller number of samples per class, and there are potential similarities between specific types of sessions, such as MC and HH.
Low recall scores for LC and MH are observed at smaller slicing window sizes, due to the network misclassifying them as MC or HC.

Discussion
Table I in Section II shows recent research that utilizes deep learning on EEG signals. This study does not attempt a direct comparison with these previous works, because the data used, the experimental conditions and the classification targets differ in each; rather, it explores and introduces the potential of using a deep CNN architecture to eliminate the need for pre-processing and feature extraction. Nevertheless, our proposed model achieves high accuracy, recall and precision scores on raw EEG signals without applying any conventional pre-processing methodologies.

VI. CONCLUSION

Conventional EEG analysis and classification studies are heavily reliant on signal pre-processing and hand-designed features. Such methodologies can be time consuming and can potentially cause a loss of viable information during the process. In this paper, we propose an end-to-end deep neural network which eliminates the necessity for such pre-processing pipelines, whilst achieving high classification performance.

Using only raw EEG signals from 4 channels as its input, the proposed model performs highly robust and accurate classification. The model achieves an accuracy of 0.960 on average for the vehicle driver's cognitive workload and the context of driving.

Future works include testing the proposed model with publicly available data sets. Further investigation will be
TABLE VII
CONTEXT/WORKLOAD CLASSIFICATION METRICS FOR DIFFERENT WINDOW SIZES

                 180    150    120    90     60     30     Avg.
LC    Recall     0.745  0.773  0.744  0.796  0.685  0.704  0.748
      Precision  1.000  1.000  0.983  0.958  0.935  0.789  0.944
      Accuracy   0.963  0.970  0.951  0.958  0.942  0.911  0.949
      F1 Score   0.854  0.872  0.847  0.870  0.790  0.744  0.830
LH    Recall     1.000  0.994  0.972  0.889  0.995  0.711  0.928
      Precision  1.000  1.000  0.938  0.955  0.954  0.806  0.942
      Accuracy   1.000  1.000  0.984  0.977  0.992  0.926  0.980
      F1 Score   1.000  0.997  0.955  0.921  0.974  0.756  0.934
MC    Recall     0.722  0.682  0.693  0.892  0.679  0.918  0.757
      Precision  0.739  0.758  0.771  0.701  0.696  0.693  0.727
      Accuracy   0.898  0.905  0.905  0.913  0.895  0.909  0.904
      F1 Score   0.731  0.718  0.730  0.786  0.688  0.789  0.740
MH    Recall     0.725  0.910  0.809  0.898  0.517  0.807  0.777
      Precision  1.000  0.965  0.983  0.887  0.886  0.826  0.927
      Accuracy   0.955  0.980  0.973  0.960  0.914  0.945  0.954
      F1 Score   0.840  0.937  0.888  0.892  0.653  0.816  0.838
HC    Recall     0.932  0.918  1.000  0.635  0.867  0.784  0.857
      Precision  0.770  0.765  0.738  0.811  0.741  0.914  0.786
      Accuracy   0.939  0.927  0.928  0.913  0.919  0.948  0.929
      F1 Score   0.844  0.835  0.849  0.712  0.799  0.844  0.814
HH    Recall     0.932  0.918  1.000  0.635  0.867  0.784  0.951
      Precision  0.804  0.904  0.921  0.841  0.682  0.915  0.843
      Accuracy   0.954  0.977  0.983  0.953  0.905  0.964  0.956
      F1 Score   0.863  0.911  0.959  0.724  0.763  0.845  0.844
Avg.  Recall     0.843  0.866  0.870  0.791  0.768  0.785  0.836
      Precision  0.886  0.899  0.889  0.859  0.816  0.824  0.862
      Accuracy   0.952  0.960  0.954  0.946  0.928  0.934  0.945
      F1 Score   0.855  0.878  0.871  0.817  0.778  0.799  0.833
carried out by collecting more data from more subjects with different driving experiences.

REFERENCES

[1] N. Munla, M. Khalil, A. Shahin, and A. Mourad, "Driver stress level detection using HRV analysis," 2015 Int. Conf. Adv. Biomed. Eng. (ICABME 2015), pp. 61–64, 2015.
[2] B. Eilebrecht et al., "The relevance of HRV parameters for driver workload detection in real world driving," Comput. Cardiol., vol. 39, pp. 409–412, 2012.
[3] L. Boon-Leng, L. Dae-Seok, and L. Boon-Giin, "Mobile-based wearable-type of driver fatigue detection by GSR and EMG," IEEE Reg. 10 Annu. Int. Conf. (TENCON), vol. 2016-Janua, pp. 1–4, 2016.
[4] V. Rajendra and O. Dehzangi, "Detection of distraction under naturalistic driving using Galvanic Skin Responses," 2017 IEEE 14th Int. Conf. Wearable Implant. Body Sens. Networks (BSN 2017), no. 2, pp. 157–160, 2017.
[5] M. Venturelli, G. Borghi, R. Vezzani, and R. Cucchiara, "Deep head pose estimation from depth data for in-car automotive applications," Lect. Notes Comput. Sci., vol. 10188 LNCS, pp. 74–85, 2018.
[6] H. Gao, A. Yuce, and J. P. Thiran, "Detecting emotional stress from facial expressions for driving safety," 2014 IEEE Int. Conf. Image Process. (ICIP 2014), pp. 5961–5965, 2014.
[7] A. Sahayadhas, K. Sundaraj, and M. Murugappan, "Detecting driver drowsiness based on sensors: A review," Sensors (Switzerland), vol. 12, no. 12, pp. 16937–16953, 2012.
[8] S. F. Liang, C. T. Lin, R. C. Wu, Y. C. Chen, T. Y. Huang, and T. P. Jung, "Monitoring driver's alertness based on the driving performance estimation and the EEG power spectrum analysis," Conf. Proc. IEEE Eng. Med. Biol. Soc., vol. 6, pp. 5738–5741, 2005.
[9] A. Palazzi, D. Abati, S. Calderara, F. Solera, and R. Cucchiara, "Predicting the Driver's Focus of Attention: the DR(eye)VE Project," 2017.
[10] X. Li, H. Huang, and Y. Sun, "Dri ri: An in-vehicle wireless sensor network platform for daily health monitoring," Proc. IEEE Sensors, pp. 3–5, 2017.
[11] I. Belakhdar, W. Kaaniche, R. Djmel, and B. Ouni, "Detecting driver drowsiness based on single electroencephalography channel," 13th Int. Multi-Conference Syst. Signals Devices (SSD 2016), pp. 16–21, 2016.
[12] C. L. Baldwin, D. M. Roberts, D. Barragan, J. D. Lee, N. Lerner, and J. S. Higgins, "Detecting and Quantifying Mind Wandering during Simulated Driving," Front. Hum. Neurosci., vol. 11, pp. 1–15, 2017.
[13] L. Bi, R. Zhang, and Z. Chen, "Study on Real-time Detection of Alertness Based on EEG," 2007 IEEE/ICME Int. Conf. Complex Med. Eng., pp. 1490–1493, 2007.
[14] S. S. Daud and R. Sudirman, "Butterworth Bandpass and Stationary Wavelet Transform Filter Comparison for Electroencephalography Signal," Proc. Int. Conf. Intell. Syst. Model. Simulation (ISMS), vol. 2015-Octob, pp. 123–126, 2015.
[15] D. L. Schomer and F. H. L. da Silva, Electroencephalography: Basic Principles, Clinical Applications, and Related Fields. Lippincott Williams & Wilkins, 2011.
[16] NeuroSky, "MindWave Mobile Headset." [Online]. Available: https://store.neurosky.com/pages/mindwave.
[17] Emotiv, "EMOTIV Insight 5 Channel Mobile EEG," 2018. [Online]. Available: https://www.emotiv.com/product/emotiv-insight-5-channel-mobile-eeg/.
[18] Interaxon, "Muse: the brain sensing headband," Tech. Specif. Valid. Res. use, pp. 4–9, 2017.
[19] M. Hajinoroozi, Z. Mao, T. P. Jung, C. T. Lin, and Y. Huang, "EEG-based prediction of driver's cognitive performance by deep convolutional neural network," Signal Process. Image Commun., vol. 47, pp. 549–555, 2016.
[20] Z. Tang, C. Li, and S. Sun, "Single-trial EEG classification of motor imagery using deep convolutional neural networks," Optik (Stuttg.), vol. 130, pp. 11–18, 2017.
[21] R. T. Schirrmeister, L. Gemein, K. Eggensperger, F. Hutter, and T. Ball, "Deep learning with convolutional neural networks for decoding and visualization of EEG pathology," 2017.
[22] Y. LeCun, K. Kavukcuoglu, and C. Farabet, "Convolutional networks and applications in vision," IEEE Int. Symp. Circuits Syst. (ISCAS), pp. 253–256, 2010.
[23] K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biol. Cybern., vol. 36, no. 4, pp. 193–202, 1980.
[24] V. Nair and G. E. Hinton, "Rectified Linear Units Improve Restricted Boltzmann Machines," Proc. 27th Int. Conf. Mach. Learn., pp. 807–814, 2010.
[25] S. Ioffe and C. Szegedy, "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift," Feb. 2015.
[26] "GTA V." [Online]. Available: https://www.rockstargames.com/V/.
[27] Y. Chen, W. Li, and L. Van Gool, "ROAD: Reality Oriented Adaptation for Semantic Segmentation of Urban Scenes," 2017.
[28] P. Bazilinskyy et al., "Eye movements while cycling in GTA V," pp. 7–11, 2018.
[29] M. Johnson-Roberson, C. Barto, R. Mehta, S. N. Sridhar, K. Rosaen, and R. Vasudevan, "Driving in the Matrix: Can virtual worlds replace human-generated annotations for real world tasks?," Proc. IEEE Int. Conf. Robot. Autom., pp. 746–753, 2017.
[30] M. Angus et al., "Unlimited Road-scene Synthetic Annotation (URSA) Dataset."
[31] D. A. Hennessy, D. L. Wiesenthal, and P. M. Kohn, "The Influence of Traffic Congestion, Daily Hassles, and Trait Stress Susceptibility on State Driver Stress: An Interactive Perspective."

Mohammad A. Almogbel (S' ) received his bachelor's degree in Information Systems from King Saud University, Riyadh, Saudi Arabia, in 2009. He joined King Abdul-Aziz City for Science and Technology in Saudi Arabia as a researcher in the same year and received a scholarship to complete his graduate studies. He then received a master's degree in computer science from Waseda University in 2014, and he has continued to pursue his Ph.D. since then. He is a member of IEEE, ITS and JSAE.

Anh H. Dang (S' 9) received his bachelor's degree in business administration, information & communication technology from Ritsumeikan Asia Pacific University (Beppu, Oita, Japan) in 2010. He then received the master's degree in computer science from Waseda University (Shinjuku, Tokyo, Japan) in 2012. Since 2012, he has been a Ph.D. candidate at Waseda University. He is a member of IEEE, ACM, and IEICE. His research interests are machine learning, artificial intelligence, and computer vision.

Wataru Kameyama (M' ) received the bachelor's, master's, and D.Eng. degrees from the School of Science and Engineering, Waseda University, in 1985, 1987, and 1990, respectively. He joined ASCII Corporation in 1992 and was transferred to France Telecom CCETT from 1994 to 1996 on secondment. After joining Waseda University as an Associate Professor in 1999, he has been a Professor with the Department of Communications and Computer Engineering, School of Fundamental Science and Engineering, Waseda University, since 2014. He has been involved in MPEG, MHEG, DAVIC, and the TV-Anytime Forum activities. He was a Chairman of ISO/IEC JTC1/SC29/WG12, and a Secretariat and Vice Chairman of the TV-Anytime Forum. He is a member of IEICE, IPSJ, ITE, IIEEJ, and ACM. He received the Best Paper Award of Niwa-Takayanagi in 2006 and the Best Author Award of Niwa-Takayanagi in 2009 from the Institute of Image Information and Television Engineers, and the International Cooperation Award from the ITU Association of Japan in 2012.