Article
1 Product Design, Mechatronics and Environment Department, Transilvania University of Brasov, Brasov, Romania; [email protected]
* Correspondence: [email protected]; Tel.: +40 785 149 820
Abstract: The Brain-Computer Interface (BCI) is a scientific field aimed at helping people with neuromotor disabilities. Among the current drawbacks of BCI research are the need for a cost-effective software instrument that integrates simply with portable EEG headsets, the lack of a comparative assessment approach for the various techniques underlying the recognition of the most precise BCI control signal, the voluntary eye-blink, and the need for EEG datasets allowing the classification of multiple voluntary eye-blinks. The proposed BCI research-related virtual instrument accomplishes the data acquisition, processing, feature extraction, and ANN-based classification of the EEG signal detected by the NeuroSky embedded biosensor. The developed software application automatically generated fifty mixtures between selected EEG rhythms and statistical features. The EEG rhythms are related to the time and frequency domains of the raw, delta, theta, alpha, beta, and gamma signals. The extracted statistical features comprise the mean, median, standard deviation, root mean square, Kurtosis coefficient, mode, sum, skewness, maximum, and range (maximum minus minimum). The results include 100 EEG datasets for classifying multiple voluntary eye-blinks: 50 datasets with 4000 recordings and 50 with 800 recordings. The LabVIEW application determined the optimal ANN models for classifying the EEG temporal sequences corresponding to detecting zero, one, two, or three voluntary eye-blinks.
Keywords: brain-computer interface, EEG signal, artificial neural networks, LabVIEW application, feature extraction, eye-blink detection, EEG portable headset
1. Introduction
Brain-Computer Interface is a multidisciplinary research field, which comprises
achievements in related scientific and technical areas: artificial intelligence, computer sci-
ence, mechatronics [1-3], signal processing, neuroscience, and psychology [4]. Beyond its
various applications in non-clinical fields (digital games, sleep avoidance, mental states
monitoring, advertisement, and business), the fundamental aim of a brain-computer in-
terface system is related to helping people with neuromotor disabilities who cannot com-
municate with the outside environment by using natural paths, such as muscles and pe-
ripheral nerves. These patients have suffered a cerebral vascular accident or severe inju-
ries to the spinal cord, so that they have lost the ability to move their upper and lower
limbs. Other impairments stem from severe diagnoses such as amyotrophic lateral sclerosis or locked-in syndrome. An innovative solution able to provide an alternative way of regaining their independence and confidence is the Brain-Computer Interface (BCI). BCI is a thought-provoking field with a rapid evolution because of
its applications based on brain-controlled mechatronics devices (wheelchairs [5-8], robot
arm [9-10], robot hand [11], mobile robots [12], household items [13] and intelligent home
optimal intellectual, financial, human, and timing resources to conduct high-quality re-
search in the brain-computer interface scientific field. Even the professors from the University Centre who are not part of those 14.35% privileged states encounter several difficulties that negatively impact the design, implementation, and experimentation of a novel brain-computer interface system providing a real-life application for people with neuromotor disabilities. These external difficulties include overly expensive EEG equipment, nonexistent laboratory conditions for conducting invasive and non-invasive experiments, the unavailability of partnerships between multidisciplinary groups to enable BCI research, and the absence of knowledge-sharing sessions with BCI experts. Unfortunately, all these issues leave the professors lacking the motivation to overcome the significant technical challenges addressed by the BCI research field.
An unfortunate consequence is the discouragement, uncertainty, frustration, and confusion that young researchers may feel, including passionate, enthusiastic, eager, creative, and ambitious undergraduates or doctoral students. Professors from countries that do not benefit from financial support for BCI research tend not to appreciate their students' endeavors and regard them as mere playful experiments, because they realize that the students' dream scientific project seems unachievable.
Hence the utmost importance of the primary objective accomplished by this paper: the design, development, and implementation of a flexible, robust, simple to use, user-friendly, and cost-effective virtual instrument aimed at BCI research. Thus, young scientists benefit from a quick experimental platform enabling the fundamental processes: the acquisition, monitoring, processing, feature extraction, and artificial neural network-based classification of the biosignal detected with the help of an affordable portable EEG headset, such as the NeuroSky Mindwave Mobile. Intelligent processing consists of automated solutions that young researchers can customize according to the particular scientific purpose they are pursuing.
The proposed research virtual instrument reveals a novel approach to designing a versatile BCI framework by allowing multiple selections between the EEG rhythms (both in time and frequency domains) and statistical features, by delivering complete training and testing datasets, and by producing neural network-based models aimed at EEG data classification. Using the BCI research-related virtual instrument presented in this paper does not require programming skills, signal processing abilities, or neuroscience knowledge. This advantage results in its universal usefulness and general purpose, attracting researchers from non-technical fields, such as psychology, social sciences, music, arts, business, and advertising media.
Regarding the secondary objective of the current research, the most straightforward
application tested with the proposed virtual instrument is the neural networks-based clas-
sification of the multiple voluntary eye-blinks used as precise control signals in a brain-
computer interface application. The voluntary eye-blinking is an artifact across the raw
EEG signal. It is easy to detect by its specific pattern showing an increase and a decrease
of the EEG signal amplitude following the two states of closing and opening the eye. By
capturing the EEG-based eye-blinking pattern and counting its occurrences, different commands can be generated that are easy for people with neuromotor disabilities to execute in order to control specific assistive devices: an electrical wheelchair, a mobile robot [33-34], a robot hand [35-36], a robot arm, home appliances, experimental prototypes [37], and communication systems [38-39].
The BCI scientific literature reports numerous papers focused on employing volun-
tary eye-blinking in developing a brain-computer interface system. The novice researchers
preferred to call off-the-shelf functions to measure the strength of the eye-blinking neces-
sary to set a threshold value [40 - 41], to apply statistical calculus to implement an algo-
rithm for counting the voluntary eye-blinks [42 - 44] for the development of experimental
BCI prototypes [45 – 46]. As for drawbacks, the thresholding-based method for voluntary
eye-blink detection involves calibration sessions and user-customized amplitude thresh-
old that could determine variable accuracy. Otherwise, the experienced researchers ex-
plored the advanced classification techniques based on neural networks [47 – 50], wavelet
design [51], and support vector machines [52] to achieve high accuracy in discriminating between various types of eye-blinks (voluntary, involuntary, short, long, simple, and double), aiming at highly performant brain-computer interfaces. Table 1 summarizes the acquisition, processing, extracted features, and classifier of some of the papers aimed at voluntary eye-blinking classification that did not involve amplitude thresholding.
Table 1. A summary (acquisition, processing, extracted features, classifier) of the papers aimed at voluntary eye-blinking classification that did not involve amplitude thresholding
Both the novice and experienced researchers focused on a particular BCI application
so that they established fixed, rigid, and restrictive thresholding, statistical and artificial
intelligence-based research methods for the detection and counting of multiple voluntary
eye-blinks. Currently, the BCI-related scientific literature does not offer any recent evidence or proposal of a flexible, versatile, customizable EEG research software solution aimed at further investigation, comparative analysis, and performance evaluation of the results obtained in classifying multiple voluntary eye-blinks.
Table 2 shows a summary, including the software, hardware, task, application, and
the availability of a dataset related to the papers aimed for voluntary eye-blinking classi-
fication that did not involve amplitude thresholding.
Table 2. A summary (software, hardware, task, application, dataset) of the papers aimed for voluntary eye-blinking classification
that did not involve amplitude thresholding
Thus, 50 EEG training datasets (.csv format) each contain 4000 recordings: 1000 – No Eye-Blink; 1000 – One Eye-Blink; 1000 – Two Eye-Blinks; and 1000 – Three Eye-Blinks. The 50 EEG datasets contain 50 different mixtures representing 50 possible combinations between the selection of ten EEG rhythms and ten statistical features.
The ten EEG rhythms are the time and frequency domains of the raw, delta, theta, alpha, beta, and gamma signals. The ten statistical features are: mean, median, root mean square, standard deviation, Kurtosis coefficient, mode, sum, skewness, maximum value, and range (maximum minus minimum).
The 50 EEG datasets provided the training of the 50 classification models (.json format) based on neural networks delivered by the current research. In addition, 50 EEG testing datasets (.csv format) resulted, each containing 800 recordings: 200 – No Eye-Blink; 200 – One Eye-Blink; 200 – Two Eye-Blinks; and 200 – Three Eye-Blinks. The EEG training and testing datasets have a similar content related to the values resulting from combining different EEG rhythms and statistical features.
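To make the dataset layout concrete, the following minimal sketch (not part of the original LabVIEW application) shows how one of these .csv files could be loaded in Python; the file name training_dataset_1.csv and the label column name 'Label' are assumptions made only for illustration.

import pandas as pd

# Minimal sketch, assuming a file named 'training_dataset_1.csv' with a 'Label' column.
df = pd.read_csv("training_dataset_1.csv")     # 4000 rows expected: 1000 per class
X = df.drop(columns=["Label"]).to_numpy()      # statistical features of the selected EEG rhythms
y = df["Label"].to_numpy()                     # 0, 1, 2, or 3 voluntary eye-blinks
print(X.shape, y.shape)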
Regarding the structure of the current paper, Section 2 provides some detailed in-
sights about the materials and methods necessary to develop the LabVIEW application,
Section 3 shows the obtained results, and Section 4 analyzes the outcomes by comparing
them with previous similar achievements. Finally, Section 5 comprises some conclusions
about the overall project work and highlights the future research directions.
2.3. An overview of the proposed LabVIEW application based on a State Machine paradigm involving the acquisition, processing, and classification of the EEG signal detected from the embedded sensor of the NeuroSky headset
The main original contribution of this paper is the proposal of a novel research ap-
proach on the development of a portable brain-computer interface system. An original
LabVIEW application addresses this challenge by implementing several custom virtual
instruments aimed to integrate the following three stages: the acquisition, the processing,
and the classification of the EEG signal detected from the embedded sensor of the Neuro-
Sky Mindwave Mobile headset.
The proposed LabVIEW application is built on a State Machine paradigm accomplishing the following functionalities (Figure 1; a simplified sketch of this control flow is given after the list):
• Manual Mode of data acquisition for displaying the EEG signal (raw, delta, theta, al-
pha, beta, and gamma) both in time and frequency domain;
• Automatic Mode of data acquisition for recording the EEG temporal sequences asso-
ciated with particular cognitive tasks necessary for the preparation of the EEG da-
tasets;
• Processing the obtained EEG temporal sequences by the extraction of statistical fea-
tures and the assignment of proper labels corresponding to each of the four classes: 0
– No Eye-Blink; 1 – One Eye-Blink; 2 – Two Eye-Blinks and 3 – Three Eye-Blinks;
• The automatic generation of a series of EEG datasets based on the proposed mixtures
between the EEG signals (raw, delta, theta, alpha, beta, and gamma) in time and fre-
quency domains and the extracted statistical features (arithmetic mean, median, mode,
skewness and others);
• The training of a neural networks model either by setting specific hyperparameters or by searching for the optimal hyperparameters, applied to each EEG dataset delivered from the previous stage;
• The evaluation of each trained neural networks model by running it to classify another
EEG dataset that can be delivered by using a similar procedure as previously described
regarding the proposed mixtures between EEG signals and statistical features.
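As announced above, the listing below is a simplified sketch (written in Python, not in LabVIEW) of how such a state machine cycles between the Configuration state and the functional states; the state names and handler functions are illustrative assumptions rather than the actual block diagram.

# Simplified sketch of the state-machine control flow; state names are assumptions.
def configuration_state():
    # In the real application the next state is chosen with virtual buttons;
    # here it is read from the console.
    return input("Next state (manual/automatic/processing/train/evaluate/exit): ")

HANDLERS = {
    "manual":     lambda: print("Manual acquisition: display the EEG rhythms"),
    "automatic":  lambda: print("Automatic acquisition: record EEG temporal sequences"),
    "processing": lambda: print("Feature extraction and dataset generation"),
    "train":      lambda: print("Train the neural networks model"),
    "evaluate":   lambda: print("Evaluate the trained model on a testing dataset"),
}

state = "configuration"
while state != "exit":
    if state == "configuration":
        state = configuration_state()
    elif state in HANDLERS:
        HANDLERS[state]()          # perform the selected functionality
        state = "configuration"    # return to the Configuration window
    else:
        state = "configuration"    # unknown selection: stay in Configuration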
Figure 1. An overview of the proposed LabVIEW application aimed for the acquisition, pro-
cessing, and classification of the electroencephalographic signal used in a brain-computer interface
Each of the above functionalities is accessed by selecting the corresponding virtual button, which opens a tab or graphical window with customized settings and options. Moreover, each of these tabs comprises a button for returning to the main Configuration graphical window shown in Figure 2.
Figure 2. A sequence of the Front Panel showing the Configuration graphical window displaying
the virtual buttons corresponding to all the functionalities based on a State Machine paradigm
2.4. The manual mode of data acquisition and the EEG signal processing
A significant function, included by the LabVIEW NeuroSky toolkit and used to de-
velop the application presented in this paper, is the ‘ThinkGear Read – MultiSample Raw
(EEG).’ This function enables the raw EEG signal acquisition and returns an array con-
taining a specific number of numerical values. The input parameter ‘Samples to Read’
should specify this number by assigning a numerical value 512, 256, 128, 64, or other. The
‘Samples to Read’ parameter does not have the same meaning as the ‘Sampling Fre-
quency.’ According to the technical specifications, the sampling frequency of the NeuroSky chipset is fixed at 512 Hz, meaning that 512 samples are acquired in one second. Therefore, setting ‘Samples to Read = 512’ returns a single buffer or 1D array containing 512 numerical values. Otherwise, setting ‘Samples to Read = 256’ returns two buffers (2 x 1D arrays), each containing 256 numerical values.
In LabVIEW, a 1D array is a matrix with one dimension, meaning either one row and
many columns or one column and many rows. Other functions that allow the communi-
cation between LabVIEW and NeuroSky headset were linked (Figure 3): ‘Clear Connec-
tions,’ ‘Create Task,’ ‘Start Task,’ ‘Signal Quality,’ ‘Read MultiSample Raw,’ and ‘Clear
Task.’ ‘Clear Connections’ is used to reset all previous connections. ‘Create Task’ is used
for the initial settings of serial data transfer: port name, baud rate, and data format. ‘Start Task’ is used to start the connection or to open the communication port. ‘Signal Quality’ is used
to display the value characterizing the percentage of signal quality. ‘Read MultiSample
Raw’ is used to acquire an array of samples of EEG raw signal. ‘Clear Task’ is used to close
the connection or the communication port.
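As an illustration of the relationship between the fixed 512 Hz sampling frequency and the ‘Samples to Read’ buffer size, the following short sketch uses a simulated signal instead of the ThinkGear driver; it is not the toolkit's code, and the buffer size is an assumption chosen for the example.

import numpy as np

FS = 512               # fixed NeuroSky sampling frequency [Hz]
SAMPLES_TO_READ = 256  # buffer size requested from the driver (assumed for this example)

one_second = np.random.randn(FS)                   # stand-in for one second of raw EEG
buffers = one_second.reshape(-1, SAMPLES_TO_READ)  # 512 / 256 = 2 buffers per second
print(buffers.shape)                               # (2, 256)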
Figure 3. A sequence of the Block Diagram showing the set of the functions allowing the commu-
nication between LabVIEW application and NeuroSky headset.
Further, according to Figure 4, the output array of numerical values returned by the ‘Read MultiSample Raw’ function is used as input to the ‘Build Waveform’ function. In addition, two parameters (t0 = current time in hours, minutes, seconds, and milliseconds, and dt = time interval in seconds between data points) are necessary to obtain the appropriate data format for applying a filter that extracts a particular range of frequencies, which can be graphically displayed. The ‘Get Date/Time’ function provides the ‘t0 = current time’ parameter. The ‘dt = time interval’ parameter is obtained by dividing 1 by 512, given that ‘Samples to Read = 512’.
Figure 4. A sequence of the Block Diagram showing the set of the functions allowing the acquisi-
tion, processing, and graphical displaying of the raw EEG signal acquired from the NeuroSky
The output of the ‘Build Waveform’ function is passed through the ‘Filter’ function, resulting in a particular sequence of signal frequencies extracted from the usable bandwidth of 0 – 256 Hz (half of the 512 Hz sampling frequency). According to Figure 5, the configuration of the ‘Filter’ is the following: filter type = Bandpass; lower cut-off = 14 Hz (for the beta EEG rhythm); upper cut-off = 30 Hz (for the beta EEG rhythm); option = infinite impulse response (IIR); topology = Butterworth; order = 6. Two of the previously mentioned parameters – lower cut-off and upper cut-off – should be customized depending on the frequency range of the EEG rhythms: delta (0.1 – 3.5 Hz); theta (4 – 7.5 Hz); alpha (8 – 13 Hz); and beta (14 – 30 Hz). Another filter type, Highpass with a 30 Hz cut-off, extracts the gamma EEG rhythm. The output of the ‘Filter’ function is an array of samples or numerical values represented on a Waveform Chart.
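For readers without LabVIEW, the following sketch reproduces an equivalent digital filtering step in Python with SciPy, assuming the same settings as the ‘Filter Express VI’ (6th-order Butterworth band-pass, 14 – 30 Hz, IIR); it illustrates the idea only and is not the Express VI's implementation.

import numpy as np
from scipy import signal

FS = 512                                  # sampling frequency [Hz]
raw = np.random.randn(FS)                 # stand-in for one buffer of raw EEG

# Beta rhythm: 6th-order Butterworth band-pass between 14 and 30 Hz (IIR).
sos_beta = signal.butter(6, [14, 30], btype="bandpass", fs=FS, output="sos")
beta = signal.sosfilt(sos_beta, raw)

# Gamma rhythm: high-pass design with a 30 Hz cut-off.
sos_gamma = signal.butter(6, 30, btype="highpass", fs=FS, output="sos")
gamma = signal.sosfilt(sos_gamma, raw)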
Figure 5. The settings and description corresponding to ‘Filter Express VI’ included by the ‘Signal
Analysis Express VIs’ LabVIEW palette
Figure 6. The settings and description corresponding to ‘Tone Measurements Express VI’ included
by the ‘Signal Analysis Express VIs’ LabVIEW palette
Figure 7. The settings and description corresponding to ‘Spectral Measurements Express VI’ in-
cluded by the ‘Signal Analysis Express VIs’ LabVIEW palette
A case structure (Figure 8) encompasses all the previously mentioned functions: Fil-
ter – Tone Measurements – Spectral Measurements. The selector of the case structure is a
button consisting of a Boolean control with two states: true and false. Those three func-
tions are linked to each other to get output signals that are graphically displayed, depend-
ing on the state of the button corresponding to a certain EEG rhythm: delta, theta, alpha,
beta, gamma, or raw signal.
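The frequency-domain display produced by the ‘Spectral Measurements’ Express VI can be approximated, for illustration only, by a single-sided FFT amplitude spectrum; the NumPy sketch below conveys the general idea and is not the Express VI's exact algorithm.

import numpy as np

FS = 512
x = np.random.randn(FS)                      # one second of a filtered EEG rhythm (stand-in)

spectrum = np.abs(np.fft.rfft(x)) / len(x)   # single-sided amplitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1.0 / FS)  # frequency axis from 0 to FS/2

peak = freqs[np.argmax(spectrum[1:]) + 1]    # dominant non-DC component
print(f"peak component near {peak:.1f} Hz")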
Overall, five case structures represent the five EEG rhythms, which can be activated
or deactivated by pressing those buttons. Therefore, the user can select specific EEG
rhythms displayed on either the Waveform Chart corresponding to the time domain (that
is, the output of the ‘Filter’ function) or the Waveform Graph, associated with the fre-
quency domain (that is the output of the ‘Spectral Measurements’ function). A while loop
includes all the five case structures. The while loop also contains the same network of
functions previously described for displaying the raw EEG signal. Using the while loop, the LabVIEW application runs in manual mode until an exit condition occurs.
Figure 8. A sequence of the Block Diagram showing the case structure that encompasses the ‘Filter’, ‘Tone Measurements’, and ‘Spectral Measurements’ functions for a selected EEG rhythm
Considering that the State Machine design pattern underlies the Block Diagram, the transition between states or different sequences should be quick and straightforward.
Therefore, by pressing the ‘Config’ button, the exit condition is fulfilled so that the
running of the manual mode of data acquisition (Figure 9) stops. The application
continues to run in the ‘Configuration’ state, where the user can select another option:
automatic mode of EEG data acquisition or features extraction or training neural networks
model.
Figure 9. Monitoring the EEG rhythms by graphically displaying their time variation (Waveform Charts – first and third columns) and their frequency variation (Waveform Graphs – second and fourth columns)
Figure 10. A sequence of the Front Panel showing the virtual chronometer aimed at calculating both the elapsed and remaining time for the automatic acquisition of the EEG signal
or a set of 512 samples for every one of the 12 EEG signals (both time and frequency
domain).
Figure 11. A diagram representing the block instructions underlying the implementation of Data
Acquisition in the Automatic Mode – first view showing all the steps leading to obtaining the raw
EEG signal (time and frequency domain) and extracting the EEG rhythms (gamma, beta, alpha,
theta, and delta)
Accordingly, when the chronometer stops, indicating the finish of the EEG signal
acquisition in automatic mode, 12 x 2D arrays will be returned. They consist of six types
of EEG signals in the Time Domain (Figure 12) plus six types of EEG signals in Frequency
Domain (FFT – Peak – Figure 13): raw, delta, theta, alpha, beta, and gamma. A 2D array is
a matrix containing 80 rows (temporal sequences) and 512 columns (512 samples).
Figure 12. A diagram representing the block instructions underlying the implementation of EEG
Data Acquisition in the Automatic Mode – the second view showing the five EEG rhythms ob-
tained in the time domain.
Figure 13. A diagram representing the block instructions underlying the implementation of EEG
Data Acquisition in the Automatic Mode – the third view showing the five EEG rhythms obtained
in the frequency domain.
2.6. The preparation of the EEG temporal sequences
Before applying the EEG acquired data preparation algorithm, every one of 12 x 2D
arrays contains: N rows and ‘Samples to Read’ Columns = 80 rows and 512 columns → 80
temporal sequences of 512 elements. Table 3 shows every one of the 12 x 2D arrays (both
time and frequency domain of raw, gamma, beta, alpha, theta, delta) before the prepara-
tion of the EEG data.
Table 3. Structure of the 2D arrays (Time and Frequency Domain of all EEG Signals) before the
preparation of the EEG acquired data
After applying the algorithm of preparation of the EEG acquired data, every one of
12 x 2D arrays contains: ‘N divided by Time Interval’ rows and ‘Time Interval multiplied
by Samples to Read’ Columns = 40 rows and 2 x 512 columns = 40 rows and 1024 columns
→ 40 sequences of 1024 elements. Table 4 shows every one of the 12 x 2D arrays (both time
and frequency domain of raw, gamma, beta, alpha, theta, delta) after the preparation of
the EEG data. Further, it results in the extraction or calculation of features (for example:
mean, median, standard deviation) from every one of the 40 sequences, each of them con-
taining 1024 elements.
The algorithm of preparation of the acquired EEG data includes three stages. The first stage uses the predefined ‘Read Delimited Spreadsheet VI’ to read each of the 12 x 2D arrays containing 40960 samples corresponding to the EEG rhythms previously saved as .csv files. The second stage consists of implementing a customized VI that converts each of the 12 x 2D arrays into a 3D array, the third dimension resulting from the separate extraction of pairs of rows (two sequences totaling 1024 samples). The third stage implements another customized VI that converts each of the 12 x 3D arrays back into a 2D array by removing the third dimension, because each previously extracted pair of rows should form a single row or a single sequence, and all the resulting rows/sequences determine a 2D array.
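In array terms, the three stages amount to merging consecutive pairs of 1-second rows into 2-second sequences; the following NumPy sketch illustrates the equivalent reshaping on synthetic data (in the application, the data come from the saved .csv files).

import numpy as np

N, SAMPLES_TO_READ, TIME_INTERVAL = 80, 512, 2
data_2d = np.random.randn(N, SAMPLES_TO_READ)   # 80 x 512 = 40960 samples (stand-in)

# Stage 2: introduce a third dimension grouping pairs of consecutive rows.
data_3d = data_2d.reshape(N // TIME_INTERVAL, TIME_INTERVAL, SAMPLES_TO_READ)

# Stage 3: remove the third dimension so that each pair forms one 1024-sample row.
data_prepared = data_3d.reshape(N // TIME_INTERVAL, TIME_INTERVAL * SAMPLES_TO_READ)
print(data_prepared.shape)                      # (40, 1024)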
Table 4. Structure of the 2D arrays (Time and Frequency Domain of all EEG Signals) after apply-
ing the preparation of the EEG acquired data
2.7. The label assignment for each EEG temporal sequence by visually checking the graphical
display in time and frequency domains
After the preparation of EEG data is finished, according to Figure 14, the user can
manually set the label for each EEG temporal sequence by visually checking the graphical
display in time and frequency domains. Figure 14 shows the options and settings related
to checking the raw EEG signal. Other tabs / graphical windows with similar content as-
sess each EEG rhythm (delta, theta, alpha, beta, and gamma).
Figure 14. A sequence of the Front Panel – EEG Raw Signal - showing various options allowing
the label assignment for each EEG temporal sequence by visually checking the graphical display in
time and frequency domains
Figure 15. The settings corresponding to ‘Statistics Express VI’ included by the ‘Signal Analysis
Express VIs’ LabVIEW palette
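As an illustration of the ten statistical features that the application extracts with the ‘Statistics Express VI’, the sketch below computes them with NumPy/SciPy for one 1024-sample sequence; the exact conventions of the Express VI (for example, the bias of skewness and kurtosis, or how the mode of a continuous signal is obtained) may differ, so this is only an approximation.

import numpy as np
from scipy import stats

seq = np.random.randn(1024)                # one prepared EEG temporal sequence (stand-in)

features = {
    "mean": np.mean(seq),
    "median": np.median(seq),
    "rms": np.sqrt(np.mean(seq ** 2)),     # root mean square
    "std": np.std(seq),
    "kurtosis": stats.kurtosis(seq),
    "mode": float(np.atleast_1d(stats.mode(np.round(seq, 2)).mode)[0]),  # mode of rounded values
    "sum": np.sum(seq),
    "skewness": stats.skew(seq),
    "maximum": np.max(seq),
    "range": np.max(seq) - np.min(seq),    # maximum minus minimum
}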
Figure 16. An example for the LabVIEW display of the EEG temporal sequences associated with
each label: 0 – No Eye-Blink; 1 – One Eye-Blink; 2 – Two Eye-Blinks and 3 – Three Eye-Blinks.
Figure 17. First view of the diagram representing the block instructions underlying the generation
of training or testing dataset based on multiple mixtures between the selected EEG signals and the
extracted features
Figure 18. The second view of the diagram representing the block instructions underlying the gen-
eration of training or testing dataset based on multiple mixtures between the selected EEG signals
and the extracted features
Figure 19. The third view of the diagram representing the block instructions underlying the gener-
ation of training or testing dataset based on multiple mixtures between the selected EEG signals
and the extracted features
Figure 20. Fourth view of the diagram representing the block instructions underlying the generation of training or testing dataset based on multiple mixtures between the selected EEG signals and the extracted features
Figure 21. Fifth view of the diagram representing the block instructions underlying the generation
of training or testing dataset based on multiple mixtures between the selected EEG signals and the
extracted features
2.9. Training a NN model for the EEG Signals classification by setting certain hyper-parameters
This phase involves applying the generated dataset to the classification process based on artificial neural networks (NN). The classification process relies on the default subVIs included in the ‘Analytics and Machine Learning’ (AML) toolkit [60].
‘Aml_Read CSV File.vi’ is used to open the CSV file and read the training dataset. ‘Load Training Data (2D Array).vi’ is used to load the dataset for training the model. ‘Normalize.vi’ is used to normalize the training data with the Z-Score or Min-Max method. Normalization is related to scaling each value of the training dataset into the specified range. The ‘Normalize.vi’ has two parameters: one shot and batch.
‘Initialize Classification Model (NN).vi’ initializes the parameters of the classification algorithm: neural networks (NN). The user should set a specific value for every hyperparameter: the number of hidden neurons, the hidden layer type (Sigmoid, Tanh, or Rectified Linear Unit functions), the output layer type (Sigmoid or Softmax function), the cost function type (Quadratic or Cross-Entropy function), tolerance, and max iteration.
According to the AML LabVIEW toolkit [60], the ‘hidden layer type’ is related to the activation function applied to the neurons of the hidden layer. Table 5 defines the available activation functions. According to Table 6, Sigmoid and Softmax are the two activation functions available for the neurons of the ‘output layer type.’ Table 7 shows the mathematical formulas for the supported cost function types.
The tolerance or max iteration parameter value constitutes the criterion that stops the training (fitting) of the neural networks model. The tolerance specifies the training error, and the max iteration specifies the maximum number of optimization iterations. The default value for tolerance is 0.0001. The default value for max iteration is 1000.
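For readers who want to reproduce this step outside LabVIEW, the sketch below uses scikit-learn's MLPClassifier as a rough analogue of initializing and training the classifier with fixed hyper-parameters; it is not the AML toolkit's implementation, and it exposes only part of the hyper-parameters listed above (its output layer and cost function are fixed to softmax with cross-entropy). The file and column names are assumptions.

import pandas as pd
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import MinMaxScaler

df = pd.read_csv("training_dataset_1.csv")          # assumed file and 'Label' column names
X, y = df.drop(columns=["Label"]), df["Label"]

X_norm = MinMaxScaler().fit_transform(X)             # Min-Max normalization of the features

model = MLPClassifier(hidden_layer_sizes=(20,),      # number of hidden neurons
                      activation="logistic",         # hidden layer type ~ Sigmoid
                      tol=0.0001,                    # tolerance (default value above)
                      max_iter=1000)                 # max iteration (default value above)
model.fit(X_norm, y)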
Table 5. Information about the available activation functions determining the hidden layer type
set as a hyper-parameter included by the ‘Analytics and Machine Learning’ LabVIEW toolkit
Table 6. Information about the available activation functions determining the output layer type set
as a hyper-parameter included by the ‘Analytics and Machine Learning’ LabVIEW toolkit
Table 7. Information about the available cost functions type set as a hyper-parameter included by
the ‘Analytics and Machine Learning’ LabVIEW toolkit
Likewise, the user can set the ‘Cross-Validation Configuration,’ which is an input
cluster containing the following elements: a Boolean control called ‘enable’ (used to enable
or disable cross-validation in training model), number of folds (defining the number of
sections that this VI divides the training data into) and metric configuration (average
method: micro, macro, weighted or binary).
‘Train Classification Model.vi’ is used to train a classification model. ‘Aml_save Model to JSON.vi’ is used to save the model as a JSON file: it converts the trained model to a JSON string and writes it to a file.
According to the documentation of the AML LabVIEW toolkit, enabling the ‘Cross-Validation Configuration’ causes the confusion matrix and the metrics to be returned as output values of the ‘Train Classification Model.vi’. The default number of folds is 3, meaning that the test data consists of one section and the training data comprises the remaining sections. The metric configuration parameter determines the evaluation metric in cross-validation. The neural networks models trained by the proposed LabVIEW application used the ‘weighted’ metric configuration.
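The sketch below illustrates, with scikit-learn as a stand-in for the toolkit's VIs, what 3-fold cross-validation with weighted-average metrics means in practice; the synthetic data and the model settings are assumptions made only for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import StratifiedKFold
from sklearn.neural_network import MLPClassifier

# Synthetic 4-class data standing in for an EEG feature dataset.
X, y = make_classification(n_samples=400, n_classes=4, n_informative=6, random_state=0)

scores = []
for train_idx, test_idx in StratifiedKFold(n_splits=3).split(X, y):
    model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    y_pred = model.predict(X[test_idx])
    p, r, f1, _ = precision_recall_fscore_support(y[test_idx], y_pred, average="weighted")
    scores.append((accuracy_score(y[test_idx], y_pred), p, r, f1))

print(np.mean(scores, axis=0))   # averaged accuracy, precision, recall, F1 across folds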
Figure 22 shows the entire structure of the previously described AML functions used
to enable the setting of hyperparameters for training the neural networks model. Figure
23 shows the graphical user interface of this LabVIEW programming sequence.
Figure 22. A sequence of the Block Diagram displaying the corresponding LabVIEW programming
functions used to enable the setting of hyperparameters for training the neural networks model
Figure 23. A sequence of the Front Panel showing the graphical window corresponding to training
the neural networks model for the EEG signal classification by setting certain hyper-parameters
2.10. Training the NN model for the EEG Signals classification by searching the optimal hyper-
parameters
All the information presented in the above section – classification by setting the hyper-parameters – is also applicable to the current section – classification by searching for the optimal hyper-parameters. Nevertheless, there is a single exception related to the ‘Initialize Classification Model (NN).vi.’ According to Figure 24, the user should specify multiple values for each hyper-parameter so that the ‘Train Classification Model.vi’ can use a grid search to find the optimal set of parameters. This technique underlies the training of the neural networks models from the current research paper because it is more reliable, efficient, and straightforward: enabling the ‘Exhaustive Search’ option determines those metrics (accuracy, precision, recall, and F1 score) for all the possible mixtures between hyper-parameters. It results in the mixture including the optimal hyper-parameters necessary to get the highest value for the metric specified in the ‘Evaluation Metric’ parameter. If the ‘Random Search’ option is enabled instead, the ‘number of searchings’ parameter indicates that only some of the possible mixtures between hyper-parameters are tested.
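A scikit-learn analogue of the exhaustive and random search strategies is sketched below for illustration; the parameter grid mirrors the kind of values a user might enter and is not the toolkit's exact option set.

from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, n_classes=4, n_informative=6, random_state=0)

param_grid = {
    "hidden_layer_sizes": [(5,), (10,), (20,), (50,)],   # candidate numbers of hidden neurons
    "activation": ["logistic", "tanh", "relu"],          # candidate hidden layer types
}

# 'Exhaustive Search': every combination is trained and scored by cross-validation.
grid = GridSearchCV(MLPClassifier(max_iter=1000), param_grid, scoring="accuracy", cv=3)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)

# 'Random Search': only n_iter combinations (the 'number of searchings') are tried.
rand = RandomizedSearchCV(MLPClassifier(max_iter=1000), param_grid, n_iter=4, cv=3)
rand.fit(X, y)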
Moreover, the graphical user interface from Figure 24 displays the number of cor-
rectly/incorrectly detected samples/temporal sequences, calculated by taking into account
the mathematical formulas for the metrics described in Table 8.
The current research analyzed 50 generated artificial neural network-based models,
and each of them needed a training time interval between 1 and 3 hours.
Figure 24. A sequence of the Front Panel showing the graphical window corresponding to training
the NN model for the EEG signal classification by searching the optimized hyper-parameters
Table 8. The mathematical formulas for the evaluation metrics (accuracy, precision, f1 score, re-
call) described in the documentation of the AML LabVIEW toolkit
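Table 8 is not reproduced here; for reference, the standard definitions of these metrics, which the toolkit documentation presumably follows (computed per class and then weighted-averaged), are: Accuracy = (TP + TN) / (TP + TN + FP + FN); Precision = TP / (TP + FP); Recall = TP / (TP + FN); F1 score = 2 x Precision x Recall / (Precision + Recall), where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively.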
2.11. The flexibility of enabling/disabling the randomization of the normalized EEG data
Initially, the following behavior applies: whenever the previous phases of the classification process run, different values result for the evaluation parameters of the NN model: accuracy, precision, recall, and F1 score. These parameters vary because a ‘Random Number’ function is used in a subVI contained by the ‘Train Classification Model.vi’. The training dataset is normalized and randomly distributed into subsets by using the ‘Random Number’ function. At the initialization of the model, the total number of subsets is configured. One of these subsets is aimed at the pre-training phase, while the others are aimed at the pre-testing phase. These phases precede obtaining the trained classification model. Removing the ‘Random Number’ function results in the same evaluation parameters after each training session; keeping the exact configuration of the model is necessary to get identical results across training runs.
The LabVIEW application presented in this paper provides flexibility by implement-
ing a novel method that allows a button with two logical states, called ‘Riffle Data,’ to
activate or deactivate the ‘Random Number’ function. Thus, the LabVIEW application can
interactively enable or disable the random generation of the normalized EEG data. There-
fore, it is necessary to implement some modifications in the subVIs provided by the ‘An-
alytics and Machine Learning’ toolkit. These changes are necessary to make the ‘Riffle
Data’ button available in the main LabVIEW application, outside the subVI where it orig-
inally belonged. The updates are related to certain object-oriented programming concepts in LabVIEW, explained in the paragraphs below.
The ‘Analytics and Machine Learning.lvlib’, the library of functions contained by the LabVIEW toolkit, should be accessed first. Then, the developer should open the following structure of folders: Classification/Data/2D array/Classes/AML 2D Array.lvclass, to find the ‘AML 2D Array.ctl’. This element needs modification by adding a Boolean control corresponding to the ‘Riffle Data’ button. This modification influences the ‘aml_Read Data.vi’ and ‘aml_Write Data.vi’, which contain a typedef control called ‘data.’ The developer should add the ‘Riffle Data’ button to the ‘data’ typedef control. Further, the developer should open the following structure of folders (Figure 25): Classification/Data/Common/subVIs, to find the ‘Load Training Data (2D array).vi’. A ‘Bundle by Name’ function is necessary for this virtual instrument to add the ‘Riffle Data’ Boolean control (button) as a new input element to the ‘data’ cluster. This button should be available in the main application, as shown below, outside of the subVI implementing its functionality.
Figure 25. The Block Diagram of ‘Load Training Data (2D array).vi’ – modified by adding the ‘Riffle Data’ button
The following phases are necessary to implement the possibility of removing the
‘Random Number’ Function applied to the training data.
• Open the Block Diagram (BL) of the ‘Train Classification Model.vi’.
• Open the Block Diagram (BL) of the ‘aml_Auto Tune.vi’.
• Choose ‘AML Neural Network.lvclass: aml_Auto Tune.vi’.
• Open the Block Diagram of the ‘AML Neural Network.lvclass: aml_Auto Tune.vi’.
• Open the Block Diagram of the ‘aml_Cross Validation.vi’.
• Open the Block Diagram of the ‘aml_Stratified K Folds.vi’.
• Open the Block Diagram of the ‘aml_Riffle Training Data.vi’ and modify it by adding a ‘Select’ Function.
The implementation of the ‘aml_Riffle Training Data.vi’ (Figure 26) focuses on re-organizing the training data in random order using a ‘Random Number’ function. The ‘Select’ function activates or deactivates the ‘Random Number’ function, based on the state of the ‘Riffle Data’ button, which is a Boolean control wired as an input terminal to the connector pane of the ‘aml_Riffle Training Data.vi.’ The ‘Select’ function corresponds to the ‘if…else’ statement from procedural programming.
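The behavior of this modified subVI can be summarized by the following minimal Python sketch, in which a Boolean flag plays the role of the ‘Select’ function choosing between shuffled and unmodified training data; it is an illustration, not the LabVIEW code.

import numpy as np

def riffle_training_data(data: np.ndarray, riffle_data: bool) -> np.ndarray:
    """Return the rows in random order when riffle_data is True; unchanged otherwise."""
    if riffle_data:
        return data[np.random.permutation(len(data))]
    return data

dataset = np.arange(20).reshape(10, 2)    # stand-in for the normalized EEG training data
print(riffle_training_data(dataset, riffle_data=False))   # deterministic order
print(riffle_training_data(dataset, riffle_data=True))    # shuffled order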
Figure 26. The Block Diagram of ‘aml_Riffle Training Data. vi’ – modified by adding the ‘Select’
Function to enable the possibility to activate or deactivate the ‘Random Number’ Function
• Modify the following virtual instruments: ‘aml_Stratified K Folds. vi’, ‘aml_Cross Val-
idation. vi’, ‘aml_Auto Tune. vi’ and ‘Train Classification Model. vi’. Then a Boolean
3. Results
The proposed research virtual instrument aims to acquire, process, and classify the
EEG signals corresponding to neuronal patterns elicited by different cognitive tasks. The
eye-blink is considered an artifact across the EEG signal, but it can also be considered a
precise control signal in a brain-computer interface application. A simple spike pattern, showing an increase and then a decrease of the biopotential, characterizes a voluntary eye-blink resulting from an ordinary effort. Therefore, if it does not require a higher amplitude or a strong effort, the voluntary eye-blink is associated with a general pattern that is easy to detect even by visually checking the EEG signal.
Thus, the classification of multiple voluntary eye-blinks is a testing method for the working principle underlying the proposed BCI research-related virtual instrument, based on processing the EEG temporal sequences consisting of multiple mixtures between several EEG rhythms (raw, delta, theta, alpha, beta, gamma) in the time and frequency domains and certain statistical features (mean, median, RMS, standard deviation, mode, sum of values, skewness, Kurtosis coefficient, maximum, and range (maximum minus minimum)).
The experiments conducted in the current research work produced 4000 temporal EEG sequences from a single subject (female, 29 years old). The duration of each session of EEG data acquisition was 1 minute and 20 seconds. During every 80-second period, at each time interval of 2 seconds, the subject had to accomplish one of the following four tasks: avoid any voluntary eye-blink, execute one voluntary eye-blink, perform two voluntary eye-blinks, or achieve three voluntary eye-blinks.
The results of the current research paper consist of 25 sessions of EEG data acquisi-
tion, each of them including 40 EEG temporal sequences. A session of EEG data acquisi-
tion set to 80 seconds corresponds to recording a series of 40 EEG temporal sequences.
Therefore, 25 x 40 = 1000 EEG temporal sequences resulted for each of the four classes: 0 – No Eye-Blink Detected; 1 – One Eye-Blink Detected; 2 – Two Eye-Blinks Detected; and 3 – Three Eye-Blinks Detected.
In fact, for the training dataset generation, the subject has been involved in 4 x 25
sessions of EEG data acquisition, enabling the recording of the 4 x 25 x 40 = 4000 EEG
temporal sequences corresponding to the previously mentioned four classes.
Similarly, for the generation of the testing dataset, the subject was involved in 4 x 5 sessions of EEG data acquisition, enabling the recording of the 4 x 5 x 40 = 800 EEG temporal sequences corresponding to the previously mentioned four classes. The duration of each session of EEG data acquisition was 80 seconds. Figure 27 shows the previously described general structure. Both the training and testing datasets include a column containing the assigned labels.
Figure 27. An overview of the proposed paradigm for performing the experimental sessions to get
the datasets used for both training and testing of the neural networks model of voluntary multiple
eye-blinks classification
Note. The output consists of 12 x 2D arrays → 6 x 2D arrays are related to EEG Signals
(raw, delta, theta, alpha, beta, gamma) in Time Domain and 6 x 2D arrays are related to
EEG Signals in Frequency Domain - FFT Peak.
3. Start the EEG Data Acquisition in Automatic Mode.
4. According to visual and auditory indicators, every 2 seconds the user should execute one eye-blink.
Note: Thus, at the end of the acquisition, 40 temporal sequences will be returned,
each of them including the EEG signal pattern of an eye-blink.
5. Wait until Duration of Acquisition = 1 minute and 20 seconds, and the EEG Data
Acquisition in Automatic Mode is over.
6. Select Config Button to return to the main window of the LabVIEW application, then
select the Button that enables the extraction of features and the generation of the EEG
dataset.
7. Select the Tab corresponding to the graphical displaying of each temporal sequence
of the EEG Data acquired in the Automatic Mode. Visually analyze every one of the
40 EEG patterns and associate to it the appropriate Label - 1 for Eye-Blink Detected.
8. Select the Tab corresponding to the configuration of multiple mixtures between se-
lected signals and extracted features to generate the EEG Training Dataset and save
it to a .csv file.
9. This results in 50 multiple mixtures; for every one of them, the EEG signals (Table 9) and the corresponding statistical features can be selected either manually or automatically.
10. Set ‘First Index = 0’ so that, in the resulting .csv file, the rows are counted starting from 0 (zero).
11. Deselect the Label Button so that the first row of the resulting .csv file contains the corresponding column names or descriptions.
12. Set a correct path for saving the .csv file representing the Training Dataset.
Note. The file name is automatically incremented as follows: training_dataset_1; training_dataset_2; …; training_dataset_50.
13. Set a correct path for saving the .csv file containing the configuration (For example:
Samples to Read = 512; Selected Signals = Alpha, Beta, Gamma; Extracted Features:
Median, RMS, Standard Deviation);
Note. The file name is automatically incremented as follows: config_1; config_2; … config_50.
14. Select the ‘Processing’ Button to generate the Training Dataset and save it with the
configuration to .csv files. In the end, it should result in 50 .csv files containing the
Training Datasets and 50 .csv files, including the corresponding configuration of the
50 multiple mixtures of selected EEG signals and extracted features (Table 10).
Note. Currently, these 50 .csv files contain only 40 temporal sequences corresponding
to Label - 1 for Eye-Blink Detected. Further, it is still necessary to acquire EEG Data
in Automatic Mode (a record for 1 minute and 20 seconds) and obtain another 40
temporal sequences corresponding to Label - 0 for No Eye-Blink Detected. Thus, it is
possible to generate the Training Dataset containing two classes of signals that can
be classified.
Table 9. Information regarding the selected EEG signals in the 50 training datasets

Training Dataset | Selected EEG Signals | Extracted Statistical Features
1 | All 12 signals in the time and frequency domain | All the ten statistical features
2 | Only the six signals in the time domain | All the ten statistical features
3 | Only the six signals in the frequency domain | All the ten statistical features
4 | Raw | All the ten statistical features
5 | Delta | All the ten statistical features
6 | Theta | All the ten statistical features
In the end, as shown in Table 11, the evaluation metrics (accuracy, precision, recall, F1 score) result from the deployment of every one of the 50 neural networks-based models.
Table 10. Information regarding the generation of hyper-parameters and evaluation metrics after initialization, configuration, and
training of the 50 neural networks-based models
A | B | C | D | E | F | G | H | I | J | K | L | M
1 | 20 | Sigmoid | Sigmoid | Cross-entropy | 120 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3916 | 84
2 | 5 | Sigmoid | Sigmoid | Cross-entropy | 60 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3924 | 76
3 | 50 | ReLU | Sigmoid | Cross-entropy | 60 | Weighted | 0.95 | 0.95 | 0.95 | 0.95 | 3803 | 197
4 | 500 | Tanh | Softmax | Cross-entropy | 10 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3937 | 63
5 | 1000 | ReLU | Sigmoid | Cross-entropy | 10 | Weighted | 0.91 | 0.91 | 0.91 | 0.91 | 3659 | 341
6 | 200 | ReLU | Sigmoid | Cross-entropy | 10 | Weighted | 0.77 | 0.77 | 0.77 | 0.77 | 3089 | 911
7 | 100 | ReLU | Softmax | Cross-entropy | 10 | Weighted | 0.83 | 0.82 | 0.83 | 0.82 | 3305 | 695
8 | 50 | ReLU | Sigmoid | Cross-entropy | 10 | Weighted | 0.81 | 0.81 | 0.81 | 0.81 | 3226 | 774
9 | 10 | ReLU | Softmax | Cross-entropy | 10 | Weighted | 0.62 | 0.61 | 0.62 | 0.61 | 2462 | 1538
10 | 300 | ReLU | Softmax | Cross-entropy | 10 | Weighted | 0.92 | 0.92 | 0.92 | 0.92 | 3665 | 335
11 | 500 | ReLU | Sigmoid | Cross-entropy | 10 | Weighted | 0.83 | 0.83 | 0.83 | 0.83 | 3329 | 671
12 | 500 | ReLU | Sigmoid | Quadratic | 10 | Weighted | 0.78 | 0.77 | 0.78 | 0.78 | 3116 | 884
13 | 500 | ReLU | Softmax | Cross-entropy | 10 | Weighted | 0.80 | 0.80 | 0.80 | 0.80 | 3206 | 794
14 | 200 | ReLU | Sigmoid | Cross-entropy | 10 | Weighted | 0.74 | 0.73 | 0.74 | 0.74 | 2953 | 1047
15 | 300 | ReLU | Softmax | Cross-entropy | 10 | Weighted | 0.57 | 0.56 | 0.57 | 0.56 | 2268 | 1732
16 | 20 | Sigmoid | Sigmoid | Cross-entropy | 30 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3925 | 75
17 | 50 | Tanh | Sigmoid | Cross-entropy | 40 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3933 | 67
18 | 10 | Sigmoid | Sigmoid | Cross-entropy | 40 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3925 | 75
19 | 20 | Tanh | Softmax | Cross-entropy | 30 | Weighted | 0.99 | 0.99 | 0.99 | 0.99 | 3942 | 58
20 | 20 | Tanh | Sigmoid | Cross-entropy | 20 | Weighted | 0.82 | 0.82 | 0.82 | 0.82 | 3275 | 725
21 | 100 | Tanh | Sigmoid | Cross-entropy | 30 | Weighted | 0.91 | 0.91 | 0.91 | 0.91 | 3644 | 356
22 | 50 | Tanh | Sigmoid | Cross-entropy | 30 | Weighted | 0.92 | 0.92 | 0.92 | 0.91 | 3661 | 339
23 | 400 | ReLU | Sigmoid | Cross-entropy | 20 | Weighted | 0.91 | 0.91 | 0.91 | 0.91 | 3651 | 349
24 | 200 | Tanh | Softmax | Cross-entropy | 15 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3930 | 70
25 | 1000 | ReLU | Sigmoid | Cross-entropy | 10 | Weighted | 0.90 | 0.90 | 0.90 | 0.90 | 3619 | 381
26 | 300 | Tanh | Softmax | Cross-entropy | 10 | Weighted | 0.92 | 0.92 | 0.92 | 0.92 | 3664 | 336
27 | 50 | ReLU | Sigmoid | Cross-entropy | 15 | Weighted | 0.92 | 0.92 | 0.92 | 0.92 | 3665 | 335
28 | 10 | Sigmoid | Sigmoid | Cross-entropy | 20 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3926 | 74
29 | 20 | ReLU | Sigmoid | Cross-entropy | 25 | Weighted | 0.94 | 0.94 | 0.94 | 0.94 | 3757 | 243
30 | 20 | Tanh | Sigmoid | Cross-entropy | 24 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3925 | 75
31 | 50 | Tanh | Sigmoid | Cross-entropy | 20 | Weighted | 0.93 | 0.93 | 0.93 | 0.93 | 3739 | 261
32 | 500 | ReLU | Softmax | Cross-entropy | 8 | Weighted | 0.88 | 0.88 | 0.88 | 0.88 | 3526 | 474
33 | 20 | Tanh | Sigmoid | Cross-entropy | 12 | Weighted | 0.91 | 0.91 | 0.91 | 0.91 | 3655 | 345
34 | 50 | Tanh | Sigmoid | Cross-entropy | 8 | Weighted | 0.82 | 0.82 | 0.82 | 0.82 | 3262 | 738
35 | 10 | Tanh | Sigmoid | Cross-entropy | 12 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3912 | 88
36 | 200 | ReLU | Sigmoid | Cross-entropy | 16 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3929 | 71
37 | 20 | ReLU | Sigmoid | Cross-entropy | 42 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3926 | 74
38 | 10 | Tanh | Sigmoid | Cross-entropy | 35 | Weighted | 0.94 | 0.94 | 0.94 | 0.94 | 3763 | 237
39 | 50 | ReLU | Sigmoid | Cross-entropy | 14 | Weighted | 0.91 | 0.91 | 0.91 | 0.91 | 3658 | 342
40 | 50 | ReLU | Sigmoid | Cross-entropy | 21 | Weighted | 0.92 | 0.92 | 0.92 | 0.92 | 3668 | 332
41 | 20 | Tanh | Softmax | Cross-entropy | 21 | Weighted | 0.91 | 0.91 | 0.91 | 0.91 | 3639 | 361
42 | 20 | ReLU | Sigmoid | Cross-entropy | 14 | Weighted | 0.81 | 0.81 | 0.81 | 0.81 | 3254 | 746
43 | 5 | Tanh | Sigmoid | Cross-entropy | 21 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3937 | 63
44 | 10 | Sigmoid | Sigmoid | Cross-entropy | 28 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3929 | 71
45 | 50 | Tanh | Sigmoid | Cross-entropy | 21 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3927 | 73
46 | 20 | Tanh | Sigmoid | Cross-entropy | 28 | Weighted | 0.98 | 0.98 | 0.98 | 0.98 | 3935 | 65
47 | 100 | ReLU | Sigmoid | Cross-entropy | 42 | Weighted | 0.95 | 0.95 | 0.95 | 0.95 | 3799 | 201
48 | 50 | Tanh | Sigmoid | Cross-entropy | 14 | Weighted | 0.87 | 0.86 | 0.87 | 0.86 | 3461 | 539
49 | 100 | Tanh | Sigmoid | Cross-entropy | 21 | Weighted | 0.87 | 0.87 | 0.87 | 0.87 | 3492 | 508
50 | 50 | Tanh | Sigmoid | Cross-entropy | 21 | Weighted | 0.95 | 0.95 | 0.95 | 0.95 | 3790 | 210
Table 11. Information regarding evaluation metrics after the deployment of the 50 neural networks-based models on the testing
datasets
4. Discussion
The targets of analyzing the previously described results are the following:
- To identify which of the available ten EEG rhythms are associated with getting NN
models with the highest accuracy in the training or testing phases;
- To identify which of the available ten statistical features are associated with getting
NN models with the highest accuracy in the training or testing phases;
- To identify which hyper-parameters are associated with getting NN models with
the highest accuracy in the training or testing phases;
- To identify how many correctly/incorrectly classified samples are detected by the NN models with the highest accuracy in the training or testing phases.
According to Table 10, the first highest accuracy of a neural networks-based model is 0.99, obtained for the 19th training dataset. According to Table 9, this training dataset
comprises the three EEG rhythms (raw, beta, and gamma) and all the ten available statis-
tical features. The hyper-parameters characterizing the NN model with the first highest
accuracy of 0.99 are the following: number of hidden neurons = 20; hidden layer type =
Tanh; output layer type = Softmax; cost function = Cross-entropy; number of input neu-
rons = 30; average method = Weighted. Additionally, from the Total = 4000 samples, 3942
samples were correctly detected, and 58 were incorrectly detected.
According to Table 9, these training datasets are composed of the following groups
of extracted statistical features:
- 1st, 2nd, 4th, 16th, 17th, 18th = all the ten statistical features;
- 24th, 28th = mean, median, RMS, standard deviation, Kurtosis Coefficient;
- 30th, 35th, and 36th = median, RMS, standard deviation, Kurtosis Coefficient;
- 37th, 43rd, and 44th = median, RMS, standard deviation, Kurtosis Coefficient, mode,
sum, skewness;
- 4th = 500;
- Hidden layer type:
- 36th and 37th = ReLU;
- 1st, 2nd, 16th, 18th, 28th, and 44th = Sigmoid;
- 4th, 17th, 24th, 30th, 35th, 43rd, 45th, and 46th = Tanh;
- Output layer type:
- 1st, 2nd, 16th, 17th, 18th, 28th, 30th, 35th, 36th, 37th, 43rd, 44th, 45th, and 46th = Sigmoid;
- 4th and 24th = Softmax;
- Cost function = Cross-entropy for all the NN models/training datasets: 1st, 2nd, 4th, 16th, 17th, 18th, 24th, 28th, 30th, 35th, 36th, 37th, 43rd, 44th, 45th, and 46th;
- Number of input neurons is between 10 and 120, as follows for the next NN models/training datasets: 4th = 10; 35th = 12; 24th = 15; 36th = 16; 28th = 20; 43rd and 45th = 21; 30th = 24; 44th and 46th = 28; 17th and 18th = 40; 37th = 42; 2nd = 60; 1st = 120;
- Average method = Weighted for all the NN models/training datasets: 1st, 2nd, 4th, 16th, 17th, 18th, 24th, 28th, 30th, 35th, 36th, 37th, 43rd, 44th, 45th, and 46th.
In addition, regarding the NN models that reported a training accuracy of 0.98, from
Total = 4000 samples, the number of correctly/incorrectly detected samples is as follows
for the below datasets:
- 35th = 3912 correctly detected samples and 88 incorrectly detected samples;
- 1st = 3916 correctly detected samples and 84 incorrectly detected samples;
- 2nd = 3924 correctly detected samples and 76 incorrectly detected samples;
- 16th, 18th, and 30th = 3925 correctly detected samples and 75 incorrectly detected
samples;
- 28th and 37th = 3926 correctly detected samples and 74 incorrectly detected sam-
ples;
- 45th = 3927 correctly detected samples and 73 incorrectly detected samples;
- 36th and 44th = 3929 correctly detected samples and 71 incorrectly detected sam-
ples;
- 24th = 3930 correctly detected samples and 70 incorrectly detected samples;
- 17th = 3933 correctly detected samples and 67 incorrectly detected samples;
- 46th = 3935 correctly detected samples and 65 incorrectly detected samples;
- 4th and 43rd = 3937 correctly detected samples and 63 incorrectly detected samples;
According to Table 10, the third-highest accuracy of the neural networks-based models is 0.95, obtained for the three training datasets numbered as follows: the 3rd, 47th, and
50th. According to Table 9, these training datasets are composed of the following groups of
selected EEG rhythms:
- 3rd and 47th = only the six signals in the frequency domain;
- 50th = Raw, Delta, Theta in the frequency domain.
According to Table 9, these training datasets are composed of the following groups of
extracted statistical features:
- 3rd = all the ten statistical features;
- 47th and 50th = median, RMS, standard deviation, Kurtosis, Mode, Sum, Skewness;
The hyper-parameters characterizing the NN model with the third-highest accuracy
of 0.95 are the following:
- The number of hidden neurons is either 50 or 100, as follows for the below NN models/training datasets:
- 3rd and 50th = 50;
- 47th = 100;
- Hidden layer type:
- 3rd and 47th = ReLU;
- 50th = Tanh;
- Output layer type = Sigmoid for all the NN models/training datasets: 3rd, 47th, and
50th;
- Cost function = Cross-entropy for all the NN models/training datasets: 3rd, 47th, and 50th;
- Number of input neurons is between 21 and 60, as follows for the next NN models/training datasets: 3rd = 60; 47th = 42; 50th = 21;
- Average method = Weighted for all the NN models/training datasets: 3rd, 47th, and 50th.
In addition, regarding the NN models that reported a training accuracy of 0.95, from
a Total = 4000 samples, the number of correctly/incorrectly detected samples is as follows
for the below datasets:
- 3rd = 3803 correctly detected samples and 197 incorrectly detected samples;
- 47th = 3799 correctly detected samples and 201 incorrectly detected samples;
- 50th = 3790 correctly detected samples and 210 incorrectly detected samples;
According to Table 11, the first highest accuracy of a neural networks-based model is 0.97, obtained for the four testing datasets numbered as follows: the 1st, 2nd, 30th, and 36th. The groups of selected EEG rhythms and extracted statistical features composing these testing datasets are the same as those described above for the 1st, 2nd, 30th, and 36th training datasets. The above paragraphs described the hyper-parameters characterizing the corresponding NN models.
In addition, regarding the NN models that reported a testing accuracy of 0.97, from
a Total = 800 samples, the number of correctly/incorrectly detected samples is as follows
for the below datasets:
- 1st = 778 correctly detected samples and 22 incorrectly detected samples;
- 2nd = 775 correctly detected samples and 25 incorrectly detected samples;
- 30th = 776 correctly detected samples and 24 incorrectly detected samples;
- 36th = 773 correctly detected samples and 27 incorrectly detected samples;
According to Table 11, the second-highest accuracy of a neural networks-based model is 0.96, obtained for the five testing datasets numbered as follows: the 17th, 19th, 35th, 37th, and 45th. The groups of selected EEG rhythms and extracted statistical features composing these testing datasets are the same as those described above for the 17th, 19th, 35th, 37th, and 45th training datasets. The above paragraphs described the hyper-parameters characterizing the corresponding NN models.
In addition, for the NN models that reported a testing accuracy of 0.96, out of a total of 800 samples, the numbers of correctly and incorrectly detected samples are as follows:
- 17th = 770 correctly detected samples and 30 incorrectly detected samples;
- 19th = 768 correctly detected samples and 32 incorrectly detected samples;
- 35th = 768 correctly detected samples and 32 incorrectly detected samples;
- 37th = 766 correctly detected samples and 34 incorrectly detected samples;
- 45th = 765 correctly detected samples and 35 incorrectly detected samples;
According to Table 11, the third-highest accuracy of a neural networks-based model is 0.95, obtained by uploading the six testing datasets numbered as follows: the 16th, 18th, 24th, 28th, 44th, and 46th. The groups of selected EEG rhythms and the groups of extracted statistical features composing these testing datasets are described above for the 16th, 18th, 24th, 28th, 44th, and 46th training datasets, and the hyper-parameters characterizing the corresponding NN models are described in the paragraphs above.
In addition, for the NN models that reported a testing accuracy of 0.95, out of a total of 800 samples, the numbers of correctly and incorrectly detected samples are as follows:
- 16th = 763 correctly detected samples and 37 incorrectly detected samples;
Taking the previous analysis into account, the same 20 neural networks-based models reported similar highest accuracy values in both the training phase (0.99; 0.97; 0.95), by uploading datasets consisting of 4000 recordings each, and the testing phase (0.97; 0.96; 0.95), by uploading datasets consisting of 800 recordings each.
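To make the reported hyper-parameters easier to picture outside LabVIEW, the following minimal sketch builds an equivalent fully connected network using the settings listed for the 17th model (40 input neurons, 50 Tanh hidden neurons, Sigmoid output layer, cross-entropy cost, four output classes). Keras is used only for illustration; the optimizer and the commented data shapes are assumptions, since the actual models were configured in the LabVIEW 'Analytics and Machine Learning' toolkit.

```python
# Illustrative only: a Keras approximation of the 17th model's reported settings.
# The actual models were built in the LabVIEW 'Analytics and Machine Learning'
# toolkit, so the optimizer choice and data shapes below are assumptions.
import tensorflow as tf

n_features = 40   # e.g. 4 EEG rhythms (raw, alpha, beta, gamma) x 10 statistical features
n_classes = 4     # zero, one, two, or three voluntary eye-blinks

model = tf.keras.Sequential([
    tf.keras.layers.Dense(50, activation="tanh", input_shape=(n_features,)),  # hidden layer type = Tanh
    tf.keras.layers.Dense(n_classes, activation="sigmoid"),                   # output layer type = Sigmoid
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])

# Hypothetical arrays mirroring the dataset sizes used in the paper:
# X_train: (4000, 40), y_train: one-hot (4000, 4); X_test: (800, 40), y_test: (800, 4)
# model.fit(X_train, y_train, epochs=50, verbose=0)
# print(model.evaluate(X_test, y_test, verbose=0))
```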
Table 11 summarizes the above discussion, providing complete information about the top 20 neural networks-based models. The abbreviations corresponding to the column headings have the following meaning: A – NN model, B – number of hidden neurons, C – hidden layer type, D – output layer type, E – cost function, F – number of input neurons, G – average method, H – training accuracy, I – testing accuracy, J – selected EEG signals, K – extracted statistical features.
Table 11. Summary content about the top 20 artificial neural networks-based models that reported the highest accuracy values both in the training and testing phase

A | B | C | D | E | F | G | H | I | J | K
1 | 20 | Sigmoid | Sigmoid | Cross-entropy | 120 | Weighted | 0.98 | 0.97 | All 12 signals in the time and frequency domain | All the ten statistical features
2 | 5 | Sigmoid | Sigmoid | Cross-entropy | 60 | Weighted | 0.98 | 0.97 | Only the six signals in the time domain | All the ten statistical features
3 | 50 | ReLU | Sigmoid | Cross-entropy | 60 | Weighted | 0.95 | 0.83 | Only the six signals in the frequency domain | All the ten statistical features
4 | 500 | Tanh | Softmax | Cross-entropy | 10 | Weighted | 0.98 | 0.94 | Raw | All the ten statistical features
16 | 20 | Sigmoid | Sigmoid | Cross-entropy | 30 | Weighted | 0.98 | 0.95 | Raw, delta, theta | All the ten statistical features
17 | 50 | Tanh | Sigmoid | Cross-entropy | 40 | Weighted | 0.98 | 0.96 | Raw, alpha, beta, gamma | All the ten statistical features
18 | 10 | Sigmoid | Sigmoid | Cross-entropy | 40 | Weighted | 0.98 | 0.95 | Raw, delta, theta, gamma | All the ten statistical features
19 | 20 | Tanh | Softmax | Cross-entropy | 30 | Weighted | 0.99 | 0.96 | Raw, beta, gamma | All the ten statistical features
24 | 200 | Tanh | Softmax | Cross-entropy | 15 | Weighted | 0.98 | 0.95 | Raw, delta, theta | Mean, median, RMS, Standard Dev, Kurtosis
28 | 10 | Sigmoid | Sigmoid | Cross-entropy | 20 | Weighted | 0.98 | 0.95 | Raw, alpha, beta, gamma | Mean, median, RMS, Standard Dev, Kurtosis
30 | 20 | Tanh | Sigmoid | Cross-entropy | 24 | Weighted | 0.98 | 0.97 | Raw, delta, theta, alpha, beta, gamma | Median, RMS, Standard Dev, Kurtosis
35 | 10 | Tanh | Sigmoid | Cross-entropy | 12 | Weighted | 0.98 | 0.96 | Raw, beta, gamma | Median, RMS, Standard Dev, Kurtosis
36 | 200 | ReLU | Sigmoid | Cross-entropy | 16 | Weighted | 0.98 | 0.97 | Raw, delta, theta, alpha | Median, RMS, Standard Dev, Kurtosis
37 | 20 | ReLU | Sigmoid | Cross-entropy | 42 | Weighted | 0.98 | 0.96 | Raw, delta, theta, alpha, beta, gamma | Median, RMS, Standard Dev, Kurtosis, Mode, Sum, Skewness
43 | 5 | Tanh | Sigmoid | Cross-entropy | 21 | Weighted | 0.98 | 0.94 | Raw, beta, gamma | Median, RMS, Standard Dev, Kurtosis, Mode, Sum, Skewness
44 | 10 | Sigmoid | Sigmoid | Cross-entropy | 28 | Weighted | 0.98 | 0.95 | Raw, alpha, beta, gamma | Median, RMS, Standard Dev, Kurtosis, Mode, Sum, Skewness
45 | 50 | Tanh | Sigmoid | Cross-entropy | 21 | Weighted | 0.98 | 0.96 | Raw, delta, theta | Median, RMS, Standard Dev, Kurtosis, Mode, Sum, Skewness
46 | 20 | Tanh | Sigmoid | Cross-entropy | 28 | Weighted | 0.98 | 0.95 | Raw, delta, theta, alpha | Median, RMS, Standard Dev, Kurtosis, Mode, Sum, Skewness
47 | 100 | ReLU | Sigmoid | Cross-entropy | 42 | Weighted | 0.95 | 0.83 | Only the six signals in the frequency domain | Median, RMS, Standard Dev, Kurtosis, Mode, Sum, Skewness
50 | 50 | Tanh | Sigmoid | Cross-entropy | 21 | Weighted | 0.95 | 0.81 | FFT Peak Raw, FFT Peak Delta, FFT Peak Theta | Median, RMS, Standard Dev, Kurtosis, Mode, Sum, Skewness
Further, to identify the highest-performance NN models among the top 20, the sum of the correctly detected samples obtained in the training phase and of the correctly detected samples obtained in the testing phase was calculated and displayed in Table 12. Since the maximum possible sum of correctly detected samples is 4800 (4000 samples for the training phase and 800 samples for the testing phase), the overall accuracy of each of the top 20 NN models follows directly from this sum.
Thus, the NN model that reported the highest overall accuracy of 98.13% was trained and tested by uploading the dataset comprising the mixture between the raw, beta, and gamma EEG rhythms and all the ten statistical features. Accordingly, the maximum number of correctly detected samples was 4710 out of 4800, as computed in the short sketch below.
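As a minimal illustration of this calculation, the following snippet reproduces the overall accuracy of the 19th model from the per-phase counts reported in Table 12; the variable names are only illustrative.

```python
# Minimal sketch of the overall-accuracy calculation behind Table 12,
# using the counts of correctly detected samples reported for the 19th dataset.
train_correct = 3942            # correctly detected samples in the training phase
test_correct = 768              # correctly detected samples in the testing phase
total_samples = 4000 + 800      # maximum possible sum of correctly detected samples

overall_accuracy = 100 * (train_correct + test_correct) / total_samples
print(f"{train_correct + test_correct} / {total_samples} = {overall_accuracy:.3f} %")
# -> 4710 / 4800 = 98.125 %, reported as 98.13 in Table 12
```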
The abbreviations corresponding to the column headings of Table 12 have the following meaning: A – NN model, B – selected EEG signals, C – extracted statistical features, D – correctly detected samples when training the NN model, E – correctly detected samples when testing the NN model, F – total number of correctly detected samples, G – accuracy (%).
Table 12. Summary content about the top 20 artificial neural networks-based models with the highest sums of correctly detected samples from the training and testing phases
A | B | C | D | E | F | G
19 | Raw, beta, gamma | All the 10 statistical features | 3942 | 768 | 4710 | 98.13
17 | Raw, alpha, beta, gamma | All the 10 statistical features | 3933 | 770 | 4703 | 97.98
36 | Raw, delta, theta, alpha | Median, RMS, Standard Dev, Kurtosis | 3929 | 773 | 4702 | 97.96
30 | Raw, delta, theta, alpha, beta, gamma | Median, RMS, Standard Dev, Kurtosis | 3925 | 776 | 4701 | 97.94
2 | Only the 6 signals in the time domain | All the 10 statistical features | 3924 | 775 | 4699 | 97.90
18 | Raw, delta, theta, gamma | All the 10 statistical features | 3925 | 759 | 4684 | 97.58
35 | Raw, beta, gamma | Median, RMS, Standard Dev, Kurtosis | 3912 | 768 | 4680 | 97.50
3 | Only the 6 signals in the frequency domain | All the 10 statistical features | 3803 | 661 | 4464 | 93.00
The results obtained by the current research confirm, complement, and further explore the stages underlying the classification of voluntary eye-blinking also addressed in other scientific articles: [47], [48], and [50]. In the paper [47], the extracted features were the maximum amplitude, the minimum amplitude, and the Kurtosis coefficient. In the paper [48], the extracted features were the Kurtosis coefficient, the maximum amplitude, and the minimum amplitude. In the paper [50], the extracted features were the minimum value, the maximum value, the median, the mode, and the standard deviation.
Future versions of the presented LabVIEW-based virtual instrument for BCI research should also include additional features, such as the average power spectral density, the average spectral centroid, and the average log energy entropy, addressed by [61]. These features were extracted from the alpha and beta EEG rhythms, previously analyzed in the time and frequency domains, to classify three mental activities: quick math solving, doing nothing (relaxing), and playing a game [61]. Moreover, an extensive comparative analysis could complete the current discussion by examining the values of the statistical features and EEG rhythms associated with each of the four detected states: no eye-blink, one eye-blink, two eye-blinks, and three eye-blinks. Regarding the content of the 19th dataset, a comparison is possible between the values of Mean_Raw, Mean_Beta, and Mean_Gamma, with a separate set of values corresponding to each of the four previously described states. In the same way, the values of Median_Raw, Median_Beta, and Median_Gamma, as well as all the other statistical features, can be determined and compared across the four states of voluntary eye-blink classification; a sketch of such a per-state comparison is given below.
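The following sketch outlines how this per-state comparison could be carried out once a dataset is exported to a tabular file. The file name and the column names (Mean_Raw, Median_Beta, the Blinks label, and so on) are hypothetical, since the actual export format of the LabVIEW application is not prescribed here.

```python
# Hedged sketch of the proposed per-state feature comparison.
# 'dataset_19.csv' and all column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("dataset_19.csv")   # features + a 'Blinks' label with values 0, 1, 2, or 3

feature_cols = ["Mean_Raw", "Mean_Beta", "Mean_Gamma",
                "Median_Raw", "Median_Beta", "Median_Gamma"]

# Average value of every feature for each detected state (0-3 voluntary eye-blinks)
per_state = df.groupby("Blinks")[feature_cols].mean()
print(per_state)
```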
In contrast with the current research, which analyzes various mixtures between the selected EEG signals and the extracted statistical features, as well as different architectures of NN models, the papers [47], [48], and [50] focus on a single feature extraction method and a specific classification technique considered optimal.
The paper [47] proposes a binary classifier based on a Probabilistic Neural Network (with a Radial Basis Function, RBF). The EEG signal was acquired at a 480 Hz sampling frequency, filtered with a Butterworth band-pass filter, detrended, normalized, and segmented into windows of 480 samples each; an illustrative version of this preprocessing chain is sketched below.
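The sketch below follows the preprocessing steps described for [47]. Only the filter type, the 480 Hz sampling rate, and the 480-sample windows come from that description; the band-pass corner frequencies and the filter order are assumptions chosen for illustration.

```python
# Illustrative preprocessing chain following the description of [47].
# The 0.5-30 Hz pass band and the 4th-order filter are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, detrend

fs = 480        # sampling frequency [Hz]
win = 480       # window length in samples (1 s)

def preprocess(raw_eeg):
    b, a = butter(4, [0.5, 30], btype="bandpass", fs=fs)   # Butterworth band-pass
    x = filtfilt(b, a, raw_eeg)                            # zero-phase filtering
    x = detrend(x)                                         # remove linear trend
    x = (x - x.mean()) / x.std()                           # normalize
    n_win = len(x) // win
    return x[: n_win * win].reshape(n_win, win)            # segment into 480-sample windows

windows = preprocess(np.random.randn(10 * fs))             # 10 s of dummy data
print(windows.shape)                                       # (10, 480)
```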
The paper [48] reports the neural networks with the highest performance: R = 0.8499 for the feed-forward backpropagation (FFBP) network and R = 0.90856 for the cascade-forward backpropagation (CFBP) network.
In the paper [50], the authors designed a multi-layer artificial neural network trained with backpropagation and having the following structure: an input layer of 48 neurons based on the extraction of 6 features, three hidden layers, and an output layer containing a single neuron. The activation function used is the binary sigmoid.
Moreover, the current paper reported higher accuracy values than the previous scientific articles [47], [48], and [50]. Nevertheless, because the experiments were executed under pandemic restrictions, both the training and the testing datasets in this paper were obtained from raw EEG data acquired from a single subject, which is a limitation that may influence the overall results. Therefore, the plan is to implement and assess an updated version of the presented BCI research-related virtual instrument by involving several healthy or disabled subjects of different categories, including age, profession, psychological traits, and intellectual background.
On the other hand, an advantage of the current paper is the convenience of using the most affordable portable EEG headset, with only one embedded biosensor, for quick set-up and efficient monitoring of neuronal biopotentials. The other scientific articles [47], [48], and [50] present experimental activities that involved the following more expensive EEG devices: a wireless biomedical monitor called BioRadio, with four channels [47]; the RMS EEG 32 Super System, with two Ag-AgCl electrodes [48]; and the OpenBCI Cyton, with eight channels [50].
Nevertheless, a future version of the presented virtual instrument related to BCI research should be updated to handle more EEG channels in order to classify complex mental activities efficiently. Moreover, the aim is to use, as much as possible, the most inexpensive portable EEG headsets, for example, Muse and Emotiv Insight, to ensure a simple-to-use working principle and availability for researchers with minimal experience in the Brain-Computer Interface field. Although using a commercial portable headset seems to be the most straightforward solution, it is still necessary to implement a customized communication protocol enabling EEG data acquisition in a user-friendly software environment, such as LabVIEW.
Overall, the proposed solution targets BCI research in its early stages by providing a standalone, simple-to-use virtual instrument with a user-friendly graphical user interface that accomplishes all the necessary fundamental functions: EEG data acquisition, processing, feature extraction, and classification.
5. Conclusion
This paper proposed a BCI research-related LabVIEW virtual instrument to acquire,
process, and classify the EEG signal detected by the embedded sensor of NeuroSky Mind-
wave Mobile headset, second edition. The artificial neural network-based techniques facilitated the classification process by using the versatile functions included in the 'Analytics and Machine Learning' toolkit. Its functionality was customized to remove the randomization of the EEG data.
The new approach described in this paper consists of original programming sequences implemented in LabVIEW. The LabVIEW application aims to efficiently recognize EEG signal patterns corresponding to different cognitive tasks. This paper presented the classification of multiple voluntary eye-blinks.
The application developed in the current research consists of different states. The first one allows the manual and the automatic acquisition modes. The second one enables the processing of the raw EEG signal. The third one is related to preparing the 50 EEG training datasets with 4000 recordings each and the 50 EEG testing datasets with 800 recordings each, based on the generation of 50 mixtures, that is, 50 selections between the EEG rhythms and the ten statistical features. The selected EEG rhythms include the time domain (raw, delta, theta, alpha, beta, gamma) and the frequency domain (the Fast Fourier Transform with the Peak parameter applied to the same signals). The extracted statistical features are the following: mean, median, root mean square, standard deviation, Kurtosis coefficient, mode, sum, skewness, maximum, and range (maximum minus minimum).
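As an illustration of how these ten features can be obtained from a single EEG window, a minimal NumPy/SciPy sketch is given below; it is not the original LabVIEW implementation, and the window length used in the example is arbitrary.

```python
# Minimal sketch of the ten statistical features for one EEG window;
# an illustrative NumPy/SciPy equivalent, not the LabVIEW implementation.
import numpy as np
from scipy import stats

def statistical_features(window):
    """Return the ten features for a 1-D array of EEG samples."""
    return {
        "mean": np.mean(window),
        "median": np.median(window),
        "rms": np.sqrt(np.mean(window ** 2)),            # root mean square
        "std": np.std(window),
        "kurtosis": stats.kurtosis(window),
        "mode": stats.mode(window, keepdims=False).mode,  # requires SciPy >= 1.9
        "sum": np.sum(window),
        "skewness": stats.skew(window),
        "maximum": np.max(window),
        "range": np.max(window) - np.min(window),         # maximum - minimum
    }

print(statistical_features(np.random.randn(512)))          # dummy 512-sample window
```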
The use of the LabVIEW application developed in the presented work resulted in automatically identifying the most relevant mixtures. Further, the training EEG datasets facilitated the initialization, configuration, and training of the 50 artificial neural network (ANN) based classification models. After that, the trained ANN models were deployed on the testing EEG datasets, yielding evaluation metrics such as accuracy and precision. The final phase was a comparative assessment to determine the highest accuracy values reported by the training of the top 20 ANN models (0.99; 0.97; 0.95) and by the testing of the same 20 ANN models (0.97; 0.96; 0.95). Accordingly, the corresponding mixtures between the selected EEG rhythms and the extracted statistical features underlying the datasets used for training and testing the 20 ANN models were analyzed. The hyper-parameters used to generate the top 20 ANN models reporting the highest accuracy were also listed. Determining the maximum sum of the correctly detected samples in the training and testing phases identified the highest-performance ANN model, with an accuracy of 98.13%, which correctly recognized 4710 out of 4800 samples.
Future research directions are related to further improving the assessment of the LabVIEW-based system by enabling the classification of different EEG signal patterns. Furthermore, the intention is to add more types of signals and more significant features. Likewise, other versatile Brain-Computer Interface applications will need more flexibility, achieved by executing training processes based on Support Vector Machines or Logistic Regression models; an illustrative sketch of these two classifiers is given below. Therefore, a future BCI-related research project should also consider the 'Analytics and Machine Learning' LabVIEW toolkit, which includes these two methods.
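For orientation only, the following sketch shows the two classifier families mentioned above on hypothetical feature arrays whose shapes mirror the dataset sizes used in this paper; scikit-learn is used purely as an illustration, since the stated plan relies on the LabVIEW 'Analytics and Machine Learning' toolkit.

```python
# Hedged sketch of the two classifiers mentioned as future directions.
# The feature arrays are random placeholders; only the shapes follow the paper.
import numpy as np
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X_train = np.random.randn(4000, 30)       # hypothetical features (e.g. 3 rhythms x 10 features)
y_train = np.random.randint(0, 4, 4000)   # labels: 0, 1, 2, or 3 voluntary eye-blinks
X_test = np.random.randn(800, 30)
y_test = np.random.randint(0, 4, 800)

for clf in (SVC(kernel="rbf"), LogisticRegression(max_iter=1000)):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, clf.score(X_test, y_test))
```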
Supplementary Materials: A quick video demonstration of the working principle of the proposed LabVIEW application is available at the following unlisted YouTube link: https://youtu.be/bmr04-QKJOg
Author Contributions: Conceptualization, O.A.R.; methodology, O.A.R.; software, O.A.R.; valida-
tion, O.A.R.; formal analysis, O.A.R.; investigation, O.A.R.; resources, O.A.R.; data curation, O.A.R.;
writing—original draft preparation, O.A.R.; writing—review and editing, O.A.R.; visualization,
O.A.R.; supervision, O.A.R.; project administration, O.A.R. The author has read and agreed to the
published version of the manuscript.
Funding: This research received no external funding.
Conflicts of Interest: The author declares no conflict of interest.
References
1. Nguyen, T.-H.; Chung, W.-Y. Detection of Driver Braking Intention Using EEG Signals During Simulated Driving. Sen-
sors 2019, 19, 2863. DOI:10.3390/s19132863.
2. Al-Hudhud, G.; Alqahtani, L.; Albaity, H.; Alsaeed, D.; Al-Turaiki, I. Analyzing Passive BCI Signals to Control Adaptive Auto-
mation Devices. Sensors 2019, 19, 3042. DOI:10.3390/s19143042.
3. Mamani, M.A.; Yanyachi, P.R. Design of computer-brain interface for flight control of unmanned air vehicle using cerebral
signals through headset electroencephalograph. Proceedings of the IEEE International Conference on Aerospace and Signals
(INCAS), Peru, 8 - 10 Nov. 2017. DOI: 10.1109/INCAS.2017.8123499.
4. López-Hernández, J.L.; González-Carrasco, I.; López-Cuadrado, J.L.; Ruiz-Mezcua, B. Towards the Recognition of the Emotions
of People with Visual Disabilities through Brain–Computer Interfaces. Sensors 2019, 19, 2620. DOI: 10.3390/s19112620.
5. Choudhari, A.M.; Porwal, P.; Jonnalagedda, V.; Mériaudeau, F. An Electrooculography based Human Machine Interface for
wheelchair control. Biocybernetics and Biomedical Engineering 2019, 39, 673-685. DOI: https://doi.org/10.1016/j.bbe.2019.04.002.
6. Choudhari, A.M.; Jonnalagedda, V. Bio-potentials for smart control applications. Health and Technology 2019, 9, 1-25, DOI:
10.1007/s12553-019-00314-7.
7. Cheng, C.; Li, S.; Kadry, S. Mind-Wave Controlled Robot: An Arduino Robot Simulating the Wheelchair for Paralyzed Patients.
International Journal of Robotics and Control 2018, 1, 6-14, DOI:10.5430/ijrc.v1n1p6.
8. Dev, A.; Rahman, M.; Mamun, N. Design of an EEG-Based Brain Controlled Wheelchair for Quadriplegic Patients. Proceedings
of the IEEE 3rd International Conference for Convergence in Technology (I2CT), India, 6 - 8 April 2018, DOI:
10.1109/I2CT.2018.8529751.
9. Đumić D.; Đug M.; Kevrić J. Brainiac’s Arm—Robotic Arm Controlled by Human Brain. In: Hadžikadić M., Avdaković S. (eds)
Advanced Technologies, Systems, and Applications II. IAT 2017. Proceedings of the International Symposium on Innovative
and Interdisciplinary Applications of Advanced Technologies (IAT), Bosnia and Herzegovina, 25 – 28 May 2017, DOI:
https://doi.org/10.1007/978-3-319-71321-2_73.
10. Bright, D.; Nair, A.; Salvekar, D.; Bhisikar, S. EEG-based brain controlled prosthetic arm. Proceedings of the Conference on
Advances in Signal Processing (CASP), India, 9 – 11 June 2016. DOI:10.1109/CASP.2016.7746219.
11. Reust, A.; Desai, J.; Gomez, L. Extracting Motor Imagery Features to Control Two Robotic Hands. Proceedings of the 2018 IEEE
International Symposium on Signal Processing and Information Technology (ISSPIT), Louisville, KY, USA, 6 – 8 Dec. 2018, DOI:
10.1109/ISSPIT.2018.8642627.
12. Katona, J.; Ujbanyi, T.; Sziladi, G.; Kovari, A. Speed control of Festo Robotino mobile robot using NeuroSky MindWave EEG
headset based brain-computer interface. Proceedings of the 2016 7th IEEE International Conference on Cognitive Infocommu-
nications (CogInfoCom), Poland, 16 – 18 Oct. 2016, DOI: 10.1109/CogInfoCom.2016.7804557.
13. Xiao, Y.; Jia, Y.; Cheng, X.; Yu, J.; Liang, Z.; Tian, Z. I Can See Your Brain: Investigating Home-Use Electroencephalography
System Security. IEEE Internet of Things Journal 2019, 6, 6681-6691. DOI:10.1109/JIOT.2019.2910115.
14. Jafri, S.; Hamid, T.; Mahmood, R.; Alam, M.; Rafi, T.; Ul Haque, M.; Munir, M. Wireless Brain Computer Interface for Smart
Home and Medical System. Wireless Personal Communications 2019, 106, 2163 – 2177. DOI: https://doi.org/10.1007/s11277-018-
5932-x.
15. López, A.; Ferrero, F.; Yangüela, D.; Álvarez, C.; Postolache, O. Development of a Computer Writing System Based on EOG. Sen-
sors 2017, 17, 1505. DOI: 10.3390/s17071505.
16. Wadekar, R.S.; Kasambe, P.V.; Rathod, S.S. Development of LabVIEW platform for EEG signal analysis, Proceedings of the 2017
International Conference on Intelligent Computing and Control (I2C2), India, 23 – 24 June 2017, DOI: 10.1109/I2C2.2017.8321942.
17. Kumar, S.; Kumar, V.; Gupta, B. Feature extraction from EEG signal through one electrode device for medical application, Pro-
ceedings of the 2015 1st International Conference on Next Generation Computing Technologies (NGCT), India, 4 – 5 September
2015, DOI: 10.1109/NGCT.2015.7375181.
18. Mutasim, A.K.; Tipu, R.S.; Bashar, M.R.; Islam, M.K.; Amin, M.A. Computational Intelligence for Pattern Recognition in EEG
Signals. In Computational Intelligence for Pattern Recognition; Pedrycz W., Chen S.M., Eds.; Studies in Computational Intelligence,
vol. 777, 291 – 320, Springer, Cham.
19. Zhang J.; Huang W.; Zhao S.; Li Y.; Hu S. Recognition of Voluntary Blink and Bite Base on Single Forehead EMG. In: Liu D.; Xie
S.; Li Y.; Zhao D.; El-Alfy, E.S (eds) Neural Information Processing. ICONIP 2017. Proceedings of the International Conference
on Neural Information Processing, China, 14 – 18 November 2017, DOI: 10.1007/978-3-319-70096-0_77.
20. Harsono, M.; Liang, L.; Zheng, X.; Jesse, F.F., Cen, Y.; Jin, W. Classification of Imagined Digits via Brain-Computer Interface
Based on Electroencephalogram. In: Wang Y., Huang Q., Peng Y. (eds) Image and Graphics Technologies and Applications.
IGTA 2019. Proceedings of the Chinese Conference on Image and Graphics Technologies, China, 19 – 20 April 2019, DOI:
10.1007/978-981-13-9917-6_44.
21. Ko, L.-W.; Chang, Y.; Wu, P.-L.; Tzou, H.-A.; Chen, S.-F.; Tang, S.-C.; Yeh, C.-L.; Chen, Y.-J. Development of a Smart Helmet for
Strategical BCI Applications. Sensors 2019, 19, 1867. DOI:10.3390/s19081867.
22. Padfield, N.; Zabalza, J.; Zhao, H.; Masero, V.; Ren, J. EEG-Based Brain-Computer Interfaces Using Motor-Imagery: Techniques
and Challenges. Sensors 2019, 19, 1423. DOI:10.3390/s19061423
23. Majidov, I.; Whangbo, T. Efficient Classification of Motor Imagery Electroencephalography Signals Using Deep Learning Meth-
ods. Sensors 2019, 19, 1736. DOI:10.3390/s19071736.
24. Rashid, M.; Sulaiman, N.; Mustafa, M.; Khatun, S.; Bari, B.S. The Classification of EEG Signal Using Different Machine Learning
Techniques for BCI Application. In: Kim JH., Myung H., Lee SM. (eds) Robot Intelligence Technology and Applications. RiTA
2018, Proceedings of the International Conference on Robot Intelligence Technology and Applications, Malaysia, 16 – 18 De-
cember 2018, DOI: https://doi.org/10.1007/978-981-13-7780-8_17.
25. Garcia A.; Gonzalez, J.M.; Palomino, A. Data Acquisition System for the Monitoring of Attention in People and Development
of Interfaces for Commercial Devices. In: Agredo-Delgado V., Ruiz P. (eds) Human-Computer Interaction, Proceedings of the
Iberoamerican Workshop on Human-Computer Interaction (HCI-COLLAB), Colombia, 23 – 27 April 2018, DOI:
https://doi.org/10.1007/978-3-030-05270-6_7.
26. Wannajam S.; Thamviset, W. Brain Wave Pattern Recognition of Two-Task Imagination by Using Single-Electrode EEG. In:
Unger H., Sodsee S., Meesad P. (eds) Recent Advances in Information and Communication Technology 2018, Proceedings of
the International Conference on Computing and Information Technology, Thailand, 5 – 6 July 2018, DOI: 10.1007/978-3-319-
93692-5_19.
27. Li, M.; Liang, Z.; He, B.; Zhao, C.-G.; Yao, W.; Xu, G.; Xie, J.; Cui, L. Attention-Controlled Assistive Wrist Rehabilitation Using
a Low-Cost EEG Sensor. IEEE Sensors Journal 2019, 19, 6497-6507. DOI: 10.1109/JSEN.2019.2910318.
28. Zavala, S.; Bayas, L.J.L.; Ulloa A.; Sulca, J.; López, M.J.L.; Yoo, S.G. Brain Computer Interface Application for People with Move-
ment Disabilities, Proceedings of the HCC: International Conference on Human Centered Computing, Mexico, 5 – 7 December
2018, DOI: 10.1007/978-3-030-15127-0_4.
29. Venuto, D.; Annese, V.; Mezzina, G. Towards P300-Based Mind-Control: A Non-invasive Quickly Trained BCI for Remote Car
Driving, Proceedings of the International Conference on Sensor Systems and Software, France, 1 -2 December 2016, DOI:
10.1007/978-3-319-61563-9_2.
30. Kim, K.; Suk, H.; Lee, S. Commanding a Brain-Controlled Wheelchair Using Steady-State Somatosensory Evoked Potentials.
IEEE Transactions on Neural Systems and Rehabilitation Engineering 2018, 26, 654-665, DOI: 10.1109/TNSRE.2016.2597854.
31. Varela, M. Raw EEG signal processing for BCI control based on voluntary eye blinks. Proceedings of the 2015 IEEE Thirty Fifth
Central American and Panama Convention (CONCAPAN XXXV), Honduras, 11 – 13 Nov. 2015, DOI: 10.1109/CONCA-
PAN.2015.7428477.
32. BNCI HORIZON 2020, http://bnci-horizon-2020.eu/community/research-groups, accessed on 4th August 2021.
33. O A Ruşanu et al 2020 IOP Conf. Ser.: Mater. Sci. Eng. 997 012059, DOI: https://doi.org/10.1088/1757-899X/997/1/012059.
34. O A Ruşanu et al 2018 IOP Conf. Ser.: Mater. Sci. Eng. 444 042014, DOI: https://doi.org/10.1088/1757-899X/444/4/042014.
35. O. A. Ruşanu, L. Cristea, M. C. Luculescu, and S. C. Zamfira, "Experimental Model of a Robotic Hand Controlled by Using
NeuroSky Mindwave Mobile Headset," 2019 E-Health and Bioengineering Conference (EHB), 2019, pp. 1-4, DOI:
10.1109/EHB47216.2019.8970050.
36. O. A. Ruşanu, L. Cristea and M. C. Luculescu, "Simulation of a BCI System Based on the Control of a Robotic Hand by Using
Eye-blinks Strength," 2019 E-Health and Bioengineering Conference (EHB), 2019, pp. 1-4, DOI: 10.1109/EHB47216.2019.8969941.
37. Oana A. Rusanu, Luciana Cristea and Marius C. Luculescu., "The development of a BCI prototype based on the integration
between NeuroSky Mindwave Mobile EEG headset, Matlab software environment and Arduino Nano 33 IoT board for control-
ling the movement of an experimental motorcycle". 11th International Conference on Information Science and Information Lit-
eracy, Sciendo, 2021, pp. 290-297, https://doi.org/10.2478/9788395815065-033
38. O. A. Rușanu, L. Cristea and M. C. Luculescu, "LabVIEW and Android BCI Chat App Controlled By Voluntary Eye-Blinks
Using NeuroSky Mindwave Mobile EEG Headset," 2020 International Conference on e-Health and Bioengineering (EHB), 2020,
pp. 1-4, DOI: 10.1109/EHB50910.2020.9280193.
39. O A Rusanu, et al 2019 IOP Conf. Ser.: Mater. Sci. Eng. 514 012020, DOI: http://dx.doi.org/10.1088/1757-899X/514/1/012020
40. Izabela Rejer, Łukasz Cieszyński, RVEB—An algorithm for recognizing voluntary eye blinks based on the signal recorded from
prefrontal EEG channels, Biomedical Signal Processing and Control, Volume 59, 2020, 101876, ISSN 1746-8094, DOI:
https://doi.org/10.1016/j.bspc.2020.101876.
41. Kamal Sharma, Neeraj Jain, Prabir K. Pal, Detection of eye closing/opening from EOG and its application in robotic arm control,
Biocybernetics and Biomedical Engineering, Volume 40, Issue 1, 2020, Pages 173-186, ISSN 0208-5216, DOI:
https://doi.org/10.1016/j.bbe.2019.10.004.
42. Sebastián Poveda Zavala, Sang Guun Yoo, David Edmigio Valdivieso Tituana, “Controlling a Wheelchair using a Brain Com-
puter Interface based on User Controlled Eye Blinks”, International Journal of Advanced Computer Science and Applications
(IJACSA), Vol. 12, No. 6, 2021, DOI: https://dx.doi.org/10.14569/IJACSA.2021.0120607.
43. Yadav P., Sehgal M., Sharma P., Kashish K. (2019) Design of Low-Power EEG-Based Brain–Computer Interface. In: Singh S.,
Wen F., Jain M. (eds) Advances in System Optimization and Control. Lecture Notes in Electrical Engineering, vol 509. Springer,
Singapore. DOI: https://doi.org/10.1007/978-981-13-0665-5_19.
44. Prem S., Wilson J., Varghese S.M., Pradeep M. (2021) BCI Integrated Wheelchair Controlled via Eye Blinks and Brain Waves. In:
Pawar P.M., Balasubramaniam R., Ronge B.P., Salunkhe S.B., Vibhute A.S., Melinamath B. (eds) Techno-Societal 2020. Springer,
Cham. DOI: https://doi.org/10.1007/978-3-030-69921-5_32.
45. William C Francis, C. Umayal, G. Kanimozh, “Brain-Computer Interfacing for Wheelchair Control by Detecting Voluntary Eye
Blinks”, Indonesian Journal of Electrical Engineering and Informatics (IJEEI), Vol. 9, No. 2, June 2021, pp. 521~537, ISSN: 2089-
3272, DOI: http://dx.doi.org/10.52549/ijeei.v9i2.2749.
46. P. K. Tiwari, A. Choudhary, S. Gupta, J. Dhar, and P. Chanak, "Sensitive Brain-Computer Interface to help manoeuvre a Minia-
ture Wheelchair using Electroencephalography," 2020 IEEE International Students' Conference on Electrical, Electronics and
Computer Science (SCEECS), 2020, pp. 1-6, DOI: https://doi.org/10.1109/SCEECS48394.2020.73.
47. Rihana S., Damien P., Moujaess T. (2013) EEG-Eye Blink Detection System for Brain Computer Interface. In: Pons J., Torricelli
D., Pajaro M. (eds) Converging Clinical and Engineering Research on Neurorehabilitation. Biosystems & Biorobotics, vol 1.
Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34546-3_98.
48. Chambayil, B., Singla, R., Jha, R.: EEG Eye Blink Classification using neural network. In: Proceedings of the World Congress on
Engineering (2010).
49. M. Lo Giudice et al., "1D Convolutional Neural Network approach to classify voluntary eye blinks in EEG signals for BCI
applications," 2020 International Joint Conference on Neural Networks (IJCNN), 2020, pp. 1-7, DOI:
10.1109/IJCNN48605.2020.9207195.
50. Saragih, A. S., Pamungkas, A., Zain, B. Y., and Ahmed, W., “Electroencephalogram (EEG) Signal Classification Using Artificial
Neural Network to Control Electric Artificial Hand Movement”, in Materials Science and Engineering Conference Series, 2020, vol.
938, no. 1, p. 012005. DOI: 10.1088/1757-899X/938/1/012005.
51. Miranda, M., Salinas, R., Raff, U., & Magna, O. (2019). Wavelet Design for Automatic Real-Time Eye Blink Detection and Recog-
nition in EEG Signals. Int. J. Comput. Commun. Control, 14, 375-387, DOI: https://doi.org/10.15837/ijccc.2019.3.3516.
52. M. Agarwal and R. Sivakumar, "Blink: A Fully Automated Unsupervised Algorithm for Eye-Blink Detection in EEG Signals,"
2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2019, pp. 1113-1121, DOI:
10.1109/ALLERTON.2019.8919795.
53. G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw, "BCI2000: a general-purpose brain-computer
interface (BCI) system," in IEEE Transactions on Biomedical Engineering, vol. 51, no. 6, pp. 1034-1043, June 2004, DOI:
10.1109/TBME.2004.827072.
54. Yann Renard, Fabien Lotte, Guillaume Gibert, Marco Congedo, Emmanuel Maby, Vincent Delannoy, Olivier Bertrand, Anatole
Lécuyer; OpenViBE: An Open-Source Software Platform to Design, Test, and Use Brain–Computer Interfaces in Real and Virtual
Environments. Presence: Teleoperators and Virtual Environments 2010; 19 (1): 35–53. DOI: https://doi.org/10.1162/pres.19.1.35.
55. Arnaud Delorme, Scott Makeig, EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including inde-
pendent component analysis, Journal of Neuroscience Methods, Volume 134, Issue 1, 2004, Pages 9-21, ISSN 0165-0270, DOI:
https://doi.org/10.1016/j.jneumeth.2003.10.009.
56. National Instruments. What is LabVIEW? Available Online: https://www.ni.com/ro-ro/shop/labview.html (accessed on 15 Sep-
tember 2019).
57. NeuroSky Mindwave Mobile 2. Available Online: https://store.neurosky.com/pages/mindwave (accessed on 15 September
2019).
58. Rieiro, H.; Diaz-Piedra, C.; Morales, J.M.; Catena, A.; Romero, S.; Roca-Gonzalez, J.; Fuentes, L.J.; Di Stasi, L.L. Validation of
Electroencephalographic Recordings Obtained with a Consumer-Grade, Single Dry Electrode, Low-Cost Device: A Compara-
tive Study. Sensors 2019, 19, 2808. DOI:10.3390/s19122808.
59. Bednář R.; Brozek J. Neural Interface: The Potential of Using Cheap EEG Devices for Scientific Purposes. In: Ntalianis K., Croi-
toru A. (eds) Applied Physics, System Science and Computers II. APSAC 2017. Proceedings of the International Conference on
Applied Physics, System Science and Computers, Croatia, 27 – 29 September 2017, DOI: https://doi.org/10.1007/978-3-319-75605-
9_18.
60. National Instruments. Analytics and Machine Learning Toolkit. Available Online: http://zone.ni.com/reference/en-
XX/help/377059B-01/lvamlconcepts/aml_overview/ (accessed on 15 September 2019).
61. Rashid M. et al. (2020) Analysis of EEG Features for Brain Computer Interface Application. In: Kasruddin Nasir A.N. et al. (eds)
InECCE2019. Lecture Notes in Electrical Engineering, vol 632. Springer, Singapore, DOI: https://doi.org/10.1007/978-981-15-
2317-5_45