
9th International IEEE EMBS Conference on Neural Engineering

San Francisco, CA, USA, March 20-23, 2019

Brain-Computer Interface in Virtual Reality


Reza Abbasi-Asl1, Mohammad Keshavarzi2, and Dorian Yao Chan3

1 Reza Abbasi-Asl is with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA. [email protected]
2 Mohammad Keshavarzi is with the Department of Architecture, University of California, Berkeley, CA, USA. [email protected]
3 Dorian Yao Chan is with the Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA, USA. [email protected]

978-1-5386-7921-0/19/$31.00 ©2019 IEEE
Authorized licensed use limited to: UNIVERSIDADE FEDERAL DE GOIAS. Downloaded on October 22,2024 at 12:17:48 UTC from IEEE Xplore. Restrictions apply.

Abstract— We study the performance of brain-computer interface (BCI) systems in a virtual reality (VR) environment and compare it to regular 2D displays. First, we design a headset that consists of three components: a wearable electroencephalography (EEG) device, a VR headset, and a virtual interface. We collect recordings of brain activity and behavior from human subjects performing a wide variety of tasks with our device. The tasks consist of object rotation or scaling in VR using either mental commands or facial expressions (smile and eyebrow movement). Subjects are asked to repeat similar tasks on regular 2D monitor screens. Performance in the 3D virtual reality environment is considerably higher than on the 2D screen. In particular, the median number of successes across trials in the VR setting is double that of the 2D setting (8 successful commands in VR compared to 4 in 2D over 1-minute trials). Our results suggest that the design of future BCI systems can benefit remarkably from virtual reality settings.

I. INTRODUCTION

The development of brain-computer interface (BCI) systems has valuable applications in areas such as medicine [1], [2], robotics [3], [4], and the entertainment industry [5], [6]. Helping people with movement disabilities regain their motor functionality through mental commands and communicate with the digital world is one of the most important promises of such technology. A BCI system often consists of three main components: first, a module to record the activity of neurons in the brain (either single neurons [7] or the average of thousands of neurons [9]); second, a digital or mechanical environment (such as a computer or robotic arm) that the user intends to control; and third, an interface that processes the brain signal and translates it into an actionable command in the target environment.

While there has been major progress in designing and employing each of the three BCI components over the last decades, some limitations in each category are yet to be addressed. The available devices that record from the brain often have a very low signal-to-noise ratio [10]. Researchers have shown remarkable performance for invasive BCI systems [7] and visual reconstruction systems [8] that receive recordings from single neurons. However, developing systems using non-invasive devices such as electroencephalography (EEG) [10] is a much more challenging task. EEG devices are less expensive and more user-friendly than invasive recording systems, but the signal measured by each EEG electrode is the average activity of thousands of neurons. Additionally, because EEG electrodes are placed along the scalp, EEG signals are often noisy and distorted [10]. Therefore, it is difficult to recover the underlying activity of the brain from EEG recordings. Another limitation of EEG devices is the number of channels: a higher signal-to-noise ratio can be achieved with a large number of electrodes, but such high-bandwidth EEG devices are often less user-friendly and more costly. In this paper, we are interested in applications of BCI that are adaptable to the daily life of users interacting with digital devices. Therefore, we accept the challenge of brain recordings with a low signal-to-noise ratio and limit our study to user-friendly wearable EEG headsets with a small number of electrodes.

EEG devices often record not only the average activity of brain areas, but also facial expressions such as eyebrow and lip movement [11]. To decode the target command, EEG-enabled BCI devices primarily benefit from state-of-the-art machine learning algorithms. Methods such as deep neural networks [12], [13], generative models [14], and Bayesian models [16] have shown satisfactory performance in these systems.

Being able to interact efficiently with a digital device is a necessity in many of today's real-life applications. These applications range from simple tasks, such as using a cellphone or moving a cursor on a screen, to much more complicated tasks, such as controlling robotic arm movements. In all of these applications, high communication bandwidth is considered one of the most important aspects of human-machine interaction [17]. In this study, we are interested in the limitations of user interaction with a virtual reality (VR) environment and seek BCI solutions to increase the bandwidth.

VR is a particularly important tool in fields such as design [18], education [19], and communication [20]. To increase the communication bandwidth between the user and the VR environment, we study the application of EEG-based BCI systems in VR. With the recent progress in the development of virtual and augmented reality, it has become necessary to analyze, study, and evaluate the performance of BCI devices in controlling these virtual environments. Being able to use communication channels that do not need hand movement or gestures is essential in this setting. There have been a limited number of studies trying to build such a connection. Additionally, several industrial start-ups such as Neurable [21] are aiming to improve the reliability of BCI-driven VR.

In this paper, we design and implement protocols to quantitatively study the possibility of directly controlling a virtual reality environment using commands translated from brain activity. Additionally, we compare the performance of BCI in a 3D VR environment to a regular 2D screen. The rest of the paper is organized as follows. In section 2, we introduce our setup to control a virtual reality application using an EEG device. The experimental protocols, followed by our main results, are presented in section 3. We conclude and discuss our future directions in section 4.

II. DEVICE SETUP

Figure 1 shows the flowchart of our BCI-enabled VR pipeline, which consists of an EEG module, a VR module, and an interface. Our specific choice for each module is summarized in this section.

Fig. 1: Flowchart of the pipeline

A. EEG

To record brain activity, we use the Emotiv EPOC+ EEG headset. This EEG device has 14 electrodes with saline-based wet sensors. These electrodes do not require any gel and are therefore a better match for our application than regular wet electrodes. Fourteen channels are sufficient for our application and provide enough bandwidth to control the VR/AR device. The user is able to wear this EEG headset together with the AR/VR headset; therefore, the design is user-friendly. We did not choose the Electro-cap EEG because it gradually squeezes the user's head and becomes uncomfortable after a short time.

B. Virtual reality module

To train and execute mental and facial-expression commands, we use the Unity3D game engine to visualize the outcome of the user's commands on the 2D screen and in virtual reality. In this application, 7 boxes shaping a 3D cross, each assigned a different color, are placed in the middle of the user's view (Figure 2.A). The background and lighting of the scene are designed to be as simple as possible to avoid user distraction. In the event of a specific command, the 3D cross can move forward and backward, rotate, scale, and change color (Figures 2.B and 2.C). In the training process, such transforms are initiated from the start of the training session and last 10 seconds, while in the execution process the transforms occur when the mental command is triggered and turn off when neutral conditions are detected. In such events, the transformation gradually dissolves to minimize distraction from any inconsistency between the mental and facial commands. The virtual reality device used in this experiment is the HTC Vive, as its larger size compared to the Oculus Rift allows easier placement of the EEG sensors.

C. Interface

We use the Emotiv mental-command software package to train the model. Based on notes from the developers [21], this software includes the following modules to process the EEG signals. First, basic filtering and real-time classification knock out spikes and other non-biological signals and noise. In this step, a certain amount of muscle noise (below a threshold) is allowed, to give novice users a good experience and to assist the learning process. The training procedure is built upon EEG features such as frequency content and the spatial distribution of components. A correlation analysis technique is used to reduce the input dimensionality to a level manageable for the amount of training data available. Final classification is done by calculating the relative likelihood that a given observation belongs to each of the trained classes, assigning the point to the command with the highest posterior probability.

III. RESULTS

A. Data collection

To quantitatively compare the performance of the BCI system between a 2D display and a 3D virtual reality environment, we designed the following protocol to collect data from subjects. First, the interface's predictive model is trained by having the user repeatedly attempt a command for a total duration of 5 minutes. We then performed 1-minute trials in which the participants were asked to attempt a particular command every 6 seconds (10 commands per trial). This procedure is repeated 10 times (100 commands for each condition). We recorded the success rate as well as the number of false positives. Figure 3 shows a picture of our experiment setup. Participants attempted to use commands to rotate or push objects. In some trials, we asked participants to alternate between these conditions. Data were collected from three subjects, all male and between the ages of 22 and 29.
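The final classification step described in the Interface module above assigns an observation to the trained class with the highest posterior probability. The Emotiv pipeline itself is closed source; the following is only an illustrative stand-in using diagonal-Gaussian class likelihoods, with the feature values and class statistics invented for the example.

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of feature vector x under a diagonal Gaussian."""
    return sum(-0.5 * (math.log(2 * math.pi * v) + (xi - m) ** 2 / v)
               for xi, m, v in zip(x, mean, var))

def classify(x, classes, priors):
    """Assign x to the trained class with the highest posterior.
    `classes` maps label -> (mean, var) fitted during training;
    `priors` maps label -> prior probability."""
    log_post = {label: math.log(priors[label]) + gaussian_loglik(x, m, v)
                for label, (m, v) in classes.items()}
    return max(log_post, key=log_post.get)

# Toy 2-D feature space: "push" vs. "neutral" with equal priors
# (values are illustrative, not fitted to real EEG data).
classes = {"push":    ([1.0, 0.0], [0.5, 0.5]),
           "neutral": ([0.0, 0.0], [0.5, 0.5])}
priors = {"push": 0.5, "neutral": 0.5}
print(classify([0.9, 0.1], classes, priors))  # push
```

With equal priors this reduces to maximum-likelihood classification; unequal priors would bias the decision toward more frequent commands.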
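The scoring in this protocol reduces to counting successes and false positives per 1-minute trial and taking medians across the 10 trials per condition. A minimal sketch of that bookkeeping follows; the per-slot (intended, detected) event-log format is our own assumption, not taken from the paper.

```python
from statistics import median

def score_trial(events):
    """Score one 1-minute trial: `events` is a list of
    (intended, detected) pairs, one per 6-second command slot.
    A success is a slot whose detected command matches the
    intended one; a false positive is a detection that does
    not match the intended command."""
    successes = sum(1 for intended, detected in events
                    if detected is not None and detected == intended)
    false_pos = sum(1 for intended, detected in events
                    if detected is not None and detected != intended)
    return successes, false_pos

def summarize(trials):
    """Median successes and false positives across trials
    (10 trials of 10 commands per condition in this protocol)."""
    scores = [score_trial(t) for t in trials]
    return (median(s for s, _ in scores),
            median(f for _, f in scores))

# Toy trial: 8 of 10 slots decoded correctly, one missed,
# one misclassified as a different command.
trial = [("rotate", "rotate")] * 8 + [("rotate", None), ("rotate", "push")]
print(score_trial(trial))  # (8, 1)
```

The same counts, aggregated per condition, are what the boxplots in Figures 4 and 5 summarize.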
Fig. 2: A. 3D cross neutral. B. 3D cross scale. C. 3D cross scale

Fig. 3: Setup of the experiment

B. Analysis

Figures 4.A and 4.B illustrate the boxplots of the number of successful commands in each trial, across 10 trials, for VR and the 2D screen. Figures 4.C and 4.D show the boxplots of the number of false positives in each trial across the 10 trials. The boxplots are shown separately for three conditions. The first condition is performing pull, push, or rotate using only mental commands. The second and third conditions are performing the same task using eyebrow movement and smile, respectively. In these figures, the accuracy for a binary experiment is reported; that is, the command is either correctly identified or not. In general, we found that mental commands performed better under VR conditions: the median number of successes is 8 for the VR setting, while for the 2D setting the median is 4. However, mental commands were still far inferior to facial expressions for practical application usage. In VR, 10 of our 10 commands were identified correctly using eyebrow movement in all trials; the median for the 2D screen setting is 9 in this case.

Fig. 4: Number of successes for the binary experiment in the 2D display (A) and VR setting (B). Number of false positives for the binary experiment in the 2D display (C) and VR setting (D).

Figure 5 shows similar boxplots for the trinary experiments. In this setting, the accuracy is defined as the number of successful identifications of a task from three states: object push, object rotation, and the neutral state. The accuracies are lower compared to the binary experiments. Our trials demonstrate that in the 3D environment, mental commands are suitable for binary cases. However, when more than one command is desired, accuracy plummets. In contrast, facial expressions maintain accuracy over all cases.

Fig. 5: Number of successes for the trinary experiment in the 2D display (A) and VR setting (B). Number of false positives for the trinary experiment in the 2D display (C) and VR setting (D).

We found that mental commands were often inconsistent: different trials would produce drastically different results, perhaps due to movement of the EEG headset. Mental commands were also persistent; they tended to stay active far past when the user wanted the command to stop. In contrast, facial expressions tended to be both consistent and bursty. In terms of comfort, facial expressions were a bit difficult for our subjects while wearing the VR headset, due to its positioning on the face. However, most users still found facial expressions quite usable. Users also reported facial expressions to be more repeatable than mental commands: thinking a command multiple times in the same way was found to be rather difficult, whereas users quickly picked up facial expressions to accurately and consistently achieve tasks.

We also postulate that facial expressions are much more desirable than mental commands for the wider VR environment, as facial expressions are easily transferable between applications. Meanwhile, mental commands differ greatly between applications, as different applications have different use cases, and due to the length of training, this process can take significant time and effort. Thus, we conclude that for accuracy, a major requirement for most practical VR applications, facial expressions should be used. However, if a simpler "state of mind" control is more desirable, mental commands are suitable.

C. Carving Brush - VR Application

Considering the performance results for each type of command discussed above, we design a VR application to enhance a simple creative process by taking advantage of mental and facial controls. Using the Unity3D game engine, we generate an array of boxes based on the user's preference of dimension and scale. These boxes surround the user and are rendered with distinguishable colors to form a gradient box. Figure 6 illustrates three snapshots of this application.

Fig. 6: VR Carving Brush application

Using the VR controllers as a destructor brush, the user can carve negative volumes out of their surroundings to make stylized forms and objects with a negative-carving technique. This is achieved by applying a sphere collider at the top point of the VR controller and deleting each pixel (box) when a collision is detected. In order to optimize the computation and avoid rendering all the pixel arrays in the initial gradient box, we only render the outer layer of the object. With each collision between the brush and a specific pixel in space, its surrounding pixels are instantiated to form the updated carved form of the object. The size of the destructor brush changes with mental and facial commands: when the user thinks of a bigger brush or raises their eyebrows, the diameter of the sphere collider increases, resulting in a larger brush for the carving process. The brush size also decreases when the user thinks of a smaller brush or furrows their brows. We also design the strength of the brush as a function of speed: depending on how fast the stroke is made in space, the collider threshold for pixel destruction changes.

In addition to the size of the brush, changing the color of the pixels that surround the brush is also done by mental and facial commands. By thinking of a lighter pixel color or smiling, the surrounding pixels increment their RGB values based on their proximity and position relative to the VR controller. The closer the controller is to a specific pixel, the more its RGB values increase in each frame, until they reach the RGB limit (255, 255, 255), which is white.

As this is a 6-DOF experience, the user can walk in the space to perform the carving process and the mental/facial commands simultaneously. For larger models, a walking locomotion function is implemented in which the user navigates the virtual space using the touchpad buttons of the VR controller. This method is not recommended, as many users experience VR sickness (nausea) with such locomotion functions.

IV. DISCUSSION, ASSESSMENT AND FUTURE WORK

In this paper, we developed a pipeline to assess the application of BCI in VR environments. Our results showed that BCI is more accurate in VR than on a 2D screen. However, ultimately, mental commands as a valid input mechanism for VR applications may need to wait for better technology. After experimenting with and analyzing mental and facial commands using EEG sensors in both environments, 2D screens and virtual environments, we believe that due to the inaccurate and noisy inputs, training and implementing commands with current learning methods was not efficient. We believe advanced machine learning methods such as interpretable deep learning tools [22], [23] and nonlinear model estimation algorithms [24] can be helpful in improving the classification.

Also, applying such commands to time-critical functions or precise actions in games and applications is not practical at this point. This concern amplifies as we see the number of false negatives increase when a combination of commands is extracted for various tasks. Actions such as eye movement, unwanted facial expressions, and walking also create noise in the EEG recordings. These distortions are in some cases unavoidable and difficult to control during the user experience.

Our future work includes improving the quality of the interface module. Designing an algorithm that is robust to noise in processing the EEG signals is the most important principle in this direction. New applications in virtual reality, such as artistic painting, city design, and educational programs, are a few examples that require such precision. Robustness to this brain noise also extends beyond EEG devices, so work in this area will help future BCI interfaces that deal with the same issues as EEG devices. We also plan to increase the number of human subjects and tasks in future experiments, which will allow for a more reliable evaluation of our pipeline.

ACKNOWLEDGMENT

The authors would like to thank Allen Yang and James F. O'Brien for their support and constructive feedback. This work was conducted as part of the CS-294 class project at UC Berkeley.

REFERENCES

[1] Pinheiro, O. R., Alves, L. R., Romero, M. F. M., and de Souza, J. R. (2016, December). Wheelchair simulator game for training people with severe disabilities. In Technology and Innovation in Sports, Health and Wellbeing (TISHW), International Conference on (pp. 1-8). IEEE.
[2] Pfurtscheller, G., Flotzinger, D., and Kalcher, J. (1993). Brain-computer interface: a new communication device for handicapped persons. Journal of Microcomputer Applications, 16(3), 293-299.
[3] Berger, T. W., Chapin, J. K., Gerhardt, G. A., McFarland, D. J., Principe, J. C., Soussou, W. V., ... and Tresco, P. A. (2008). Brain-Computer Interfaces: An international assessment of research and development trends. Springer Science and Business Media.
[4] Bell, C. J., Shenoy, P., Chalodhorn, R., and Rao, R. P. (2008). Control of a humanoid robot by a noninvasive brain-computer interface in humans. Journal of Neural Engineering, 5(2), 214.
[5] Alomari, M. H., Abubaker, A., Turani, A., Baniyounes, A. M., and Manasreh, A. (2014). EEG mouse: A machine learning-based brain computer interface. Int. J. Adv. Comput. Sci. Appl., 5, 193-198.
[6] Krepki, R., Blankertz, B., Curio, G., and Müller, K. R. (2007). The Berlin Brain-Computer Interface (BBCI): towards a new communication channel for online control in gaming applications. Multimedia Tools and Applications, 33(1), 73-90.
[7] Maynard, E. M., Nordhausen, C. T., and Normann, R. A. (1997). The Utah intracortical electrode array: a recording structure for potential brain-computer interfaces. Electroencephalography and Clinical Neurophysiology, 102(3), 228-239.
[8] Abbasi-Asl, R., Chen, Y., Bloniarz, A., Oliver, M., Willmore, B. D., Gallant, J. L., and Yu, B. (2018). The DeepTune framework for modeling and characterizing neurons in visual cortex area V4. bioRxiv, 465534.
[9] Pfurtscheller, G., Scherer, R., and Neuper, C. (2007). EEG-based brain-computer interface. Oxford Series in Human-Technology Interaction, 315.
[10] Teplan, M. (2002). Fundamentals of EEG measurement. Measurement Science Review, 2(2), 1-11.
[11] Badcock, N. A., Mousikou, P., Mahajan, Y., de Lissa, P., Thie, J., and McArthur, G. (2013). Validation of the Emotiv EPOC EEG gaming system for measuring research quality auditory ERPs. PeerJ, 1, e38.
[12] Zhang, X., Yao, L., Sheng, Q. Z., Kanhere, S. S., Gu, T., and Zhang, D. (2017). Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals. arXiv preprint arXiv:1709.08820.
[13] Carvalho, S. R., Cordeiro Filho, I., De Resende, D. O., Siravenha, A. C., De Souza, C., Debarba, H. G., ... and Boulic, R. (2017, October). A Deep Learning Approach for Classification of Reaching Targets from EEG Images. In Graphics, Patterns and Images (SIBGRAPI), 2017 30th SIBGRAPI Conference on (pp. 178-184). IEEE.
[14] Palazzo, S., Spampinato, C., Kavasidis, I., Giordano, D., and Shah, M. (2017). Generative Adversarial Networks Conditioned by Brain Signals. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3410-3418).
[15] Alcaide, R. (2017). Evaluating a Novel Brain-Computer Interface and EEG Biomarkers For Cognitive Assessment in Children With Cerebral Palsy.
[16] Phillips, C., Mattout, J., Rugg, M. D., Maquet, P., and Friston, K. J. (2005). An empirical Bayesian solution to the source reconstruction problem in EEG. NeuroImage, 24(4), 997-1011.
[17] Cheng, M., Gao, X., Gao, S., and Xu, D. (2002). Design and implementation of a brain-computer interface with high transfer rates. IEEE Transactions on Biomedical Engineering, 49(10), 1181-1186.
[18] Sherman, W. R., and Craig, A. B. (2002). Understanding virtual reality: Interface, application, and design. Elsevier.
[19] Sánchez, Á., Barreiro, J. M., and Maojo, V. (2000). Design of virtual reality systems for education: a cognitive approach. Education and Information Technologies, 5(4), 345-362.
[20] Biocca, F., and Levy, M. R. (Eds.). (2013). Communication in the age of virtual reality. Routledge.
[21] http://www.neurable.com
[22] Abbasi-Asl, R., and Yu, B. (2017). Structural Compression of Convolutional Neural Networks Based on Greedy Filter Pruning. arXiv preprint arXiv:1705.07356.
[23] Abbasi-Asl, R., and Yu, B. (2017). Interpreting Convolutional Neural Networks Through Compression. arXiv preprint arXiv:1711.02329.
[24] Abbasi-Asl, R., Khorsandi, R., Farzampour, S., and Zahedi, E. (2011). Estimation of muscle force with EMG signals using Hammerstein-Wiener model. In 5th Kuala Lumpur International Conference on Biomedical Engineering 2011 (pp. 157-160). Springer, Berlin, Heidelberg.
