



A Review on Virtual Reality and Augmented Reality Use-Cases of Brain Computer Interface Based Applications for Smart Cities

Varun Kohli1, Utkarsh Tripathi1, Vinay Chamola1, Senior Member, IEEE, Bijay Kumar Rout2, and Salil S. Kanhere3, Senior Member, IEEE

1 Department of Electrical and Electronics Engineering & APPCAIR, BITS Pilani, Rajasthan, India 333031
2 Department of Mechanical Engineering, BITS Pilani, Rajasthan, India 333031
3 School of Computer Science and Engineering, UNSW Sydney, Australia

Corresponding author: Vinay Chamola, E-mail: [email protected]

Abstract Brain Computer Interfaces (BCIs) and Extended Reality (XR) have seen significant advances as independent disciplines over the past 50 years. XR has developed as an umbrella domain covering Virtual Reality (VR), Augmented Reality (AR) and Mixed Reality (MR), along with the human-machine interactions they enable. The intersection of BCI and XR sees diverse applications ranging from rehabilitation and navigation to entertainment, robotics and home control for smart cities. This review takes an in-depth look at BCI and XR technologies and gives examples of how their combination produces promising results for the above-stated applications. It presents a detailed discussion of the background of BCI, VR and AR technologies and of their individual applications. The review then discusses works that use these technologies in conjunction for various real-life applications in smart cities. In addition, we present the future scope of applications that combine BCI and XR technologies.

Keywords Brain Computer Interfaces · Extended Reality · Virtual Reality · Augmented Reality ·
Smart Cities

1 INTRODUCTION

The domain of healthcare applications has seen an unprecedented growth in the last decade in
terms of incorporation of technologies like image processing [1], blockchain [2], Internet of Things
(IoT) [3], Deep Learning [4], VR [5] and BCI [6]. A BCI acquires and analyses bio-signals obtained
from sensors, and translates them into commands that are forwarded to devices that give feedback
to the users or complete desired actions. Over the years of development, the term BCI has come to apply not only to computers but to other machines as well, and can thus be used interchangeably with Brain Machine Interface (BMI). Though the initial intent of BCI was to restore or replace function for people with neuromuscular disorders, its applications have not been limited to the medical field. The Third International BCI Meeting in 2005 established the value of BCI technology by discussing its useful applications [7]. BCI has the potential to improve quality of life and finds applications in areas such as rehabilitation [8, 9], entertainment [10], navigation [11] and home control [12], to name a few. Since the initial development of Electroencephalography (EEG) [13] based spellers and simple BCIs for device control, researchers have developed BCIs for neuro-rehabilitation [14], cursor control [15], robotics [16, 17] and prostheses [18], wheelchairs [11], gaming [19] and many other complex devices [20]. Based on the application, the bio-signals processed may be the electrical activity of the brain, the corneo-retinal standing potentials existing between the front and back of the eye, or the electrical signals from muscle tissues. Brain signals are collected through non-invasive sensor monitoring with EEG [21] or through invasive methods such as intracranial EEG (iEEG), also known as Electrocorticography (ECoG) [22]. Signals from the eyes and muscles can be recorded using Electrooculography and Electromyography respectively [23]. Such non-brain signals can often be a
source of valuable information. Functional Magnetic Resonance Imaging (fMRI) and Near-Infrared Spectroscopy (NIRS) are gaining popularity as well [24].

Table 1: Technologies in Conjunction with BCI

Technology | Feedback | Details | Main Uses | References
Haptics | Haptic | Tactile sensations may be created in mid-air, without the need to touch a physical object, or through vibrations of a device in contact with the user. | Entertainment, electronic devices, exoskeletons, navigation | [25, 26, 27]
FES | Electrical | Impulses are sent to paralysed muscles to restore function. The technology is used to assist with walking, standing, breathing and grasping, and to improve bowel function. | Neurorehabilitation | [28, 29]
VR | Visual | Subjects can visually see whether their actions bring the desired results and correct themselves accordingly. | Entertainment, navigation, neurorehabilitation | [25, 28]
BCI poses multiple challenges [30]. The signals gathered from EEG sensors generally have low
signal to noise ratio (SNR) [31] because of environmental disturbances such as channel noise, motion
artefacts and electrode-contact noise. SNR is not consistent across sessions, making it more difficult
for researchers to perform multiple human experiments with the same profile without frequent cal-
ibrations. The useful signal is only a small portion of the raw EEG output, and multiple filters and processing algorithms are employed to obtain the required refined signal. An electrode measures a superposition of several brain activities, making isolation of the required signal difficult [32]. Algorithms developed for a specific person might not work for another because of differences in cortical folding and functional mappings; cortical folding differs even between identical (monozygotic) twins [33]. EEG signals, which result from a summation of multiple neural firing processes, can be leveraged to perform desired actions. Only certain kinds of neural activity can be captured by surface electrodes, primarily the firing of pyramidal neurons: since these neurons lie perpendicular to the surface of the cortex, their potentials superimpose to produce a signal detectable by external surface electrodes [34]. Events such as a flash of light or a sound [35] generally produce a cascaded neural response; such oscillatory processes create EEG-observable changes. This technology has been successfully deployed in multiple use-cases; for example, it may be used by a differently-abled person to trigger certain actions, and the authors in [36] used it for navigating in a virtual environment (VE). Neuro-mapping is another critical prerequisite for brainwave analysis. Mapping all of the roughly 100 billion neurons in the brain is not feasible, so certain parts of the brain are instead mapped to the functions they are expected to perform, an approach known as 'localization of function'. Neurosurgeons use neuro-mapping to plan strategies for intricate surgeries; as an example, fMRI is extensively used to pinpoint the location of an epileptic centre, acting as a guide for doctors in further treatment [37]. The user security and privacy challenges of BCI [38] are beyond the scope of this paper.
VR technology [39] utilizes computer graphics systems to create VEs, placing users into a simulated 3D environment with which they can interact. This is achieved with head-mounted displays (HMDs) [40, 41], differentiating VR systems from traditional user interfaces. Commercially available VR headsets include the Oculus (Facebook) [42], the HTC Vive series [43] and the Valve Index (Steam) [44]. Senses such as vision, hearing, touch and smell can also be simulated, providing an immersive experience. VR is used in military training [45], education [46], rehabilitation [6], business [47] and several other fields. AR, on the other hand, typically runs on smartphones and is hence closer to the physical world. It provides users with a real-time interactive experience wherein real-world objects are enhanced and superimposed with computer-generated information across various sensory modalities, such as the auditory, visual, haptic, olfactory and somatosensory. AR finds applications in navigation, architecture, archaeology, STEM education, manufacturing, entertainment, commerce and the visual arts. Merging AR and VR technologies gives rise to MR environments and visualizations,
in which virtual objects interact with real objects in real time. MR does not take place exclusively in either the real or the virtual world, but rather in a hybrid of the two. Its applications range from interactive product content management and simulation-based learning to military training, remote working, medicine and the creative arts. XR is the umbrella term that covers the different reality technologies and human-machine interactions.

VR is currently one of the best available technologies for providing testing grounds for applications involving physical limitations, such as training, education and rehabilitation, and even for improving user experience and quality of life [48]. As discussed later in the paper, BCI research involves the use of tools for visual, haptic, electrical and auditory feedback. Table-1 shows a few examples of haptics, functional electrical stimulation (FES) and VR used in conjunction with BCIs. VR provides the richest visual experience, and its applications with BCI include rehabilitation, entertainment, navigation and robotics. Applications of AR and MR with BCI include X-ray vision, auditing, and home and device control, among several others. These technologies pose various challenges.
There are challenges posed by VR devices as well [49]. First, high-grade VR headsets require high
graphics processing power to show high-resolution imagery. Second, tracking head rotations in VR is
one of the key metrics to increase immersion. Current technology uses gyroscopes and accelerometers
to track direction, but BCI may provide a potential solution to this [50]. By predicting the required field of view, occlusion-culling algorithms can restrict rendering to what the user is about to see, allowing highly optimised VR headsets to meet high graphics requirements so that simpler GPUs can handle intensive imagery tasks. This approach can further be combined with holography to decrease the load on the CPU.
The main contributions of this review are as follows:
i. We present an overview of BCI and XR technologies, their classifications and their use cases.
ii. We review existing works that use a fusion of BCI and VR/AR/MR technologies for applications ranging from navigation and entertainment to rehabilitation, robotics and home control.
iii. We present the future scope of XR- and BCI-based applications.
Having discussed the individual nuances of BCI and XR, their combination comes with several challenges of its own. In the remainder of this paper, the background concepts and tools of BCI and XR technologies are explained in Section-2, followed by an in-depth discussion of previous research and its feasibility for applications of their combination in navigation, entertainment, rehabilitation, robotics and home control in Sections-3 and 4. In addition, insights on future use cases are presented in Section-5.

2 BACKGROUND INFORMATION

This section gives an overview of the systems and frameworks employed to create BCI and XR. Specifically, it discusses BCI classification, the EEG electrode-positioning system, extended reality, motor imagery, event-related potentials and other details that will assist the reader in understanding the core concepts. BCIs can be divided into various categories on different rationales, as shown in Figure-1.

2.1 BCI AND USE-CASES

Methods used to capture the signals can vary depending on the application. On the basis of the medical approach used, BCIs can be classified as non-invasive and invasive [51]. Non-invasive BCIs make use of either dry or wet electrodes placed directly on the scalp or on the skin on other parts of the body, and measure surface potentials. Invasive systems are implanted directly into the brain during neurosurgery. These can be single-unit BCIs [52], used for acquiring signals from a specific area of the brain, or multi-unit BCIs [53], which capture signals from multiple areas.
There are three common sub-types of BCIs based on the flow of information: passive, active and reactive. Passive BCIs are predominantly used to gauge the mental and emotional states of the subject [54] by monitoring brain signals, Galvanic Skin Response (GSR) and eye tracking. In contrast, only brain activity is monitored in an active BCI; active BCIs are commonly used for stroke rehabilitation [6]. In the case of a reactive BCI, the brain's response to an external stimulus or event is analyzed, making it a good fit for gaming and entertainment applications [55].
BCIs can also be classified as endogenous or exogenous [56]. An endogenous BCI [57] relies solely on brain patterns that are spontaneously generated, an example of which is motor imagery (MI).
Fig. 1: BCI Classification (invasive via neurosurgery, partially-invasive via Electrocorticography (ECoG), and non-invasive via EEG, MEG or fMRI; paradigms may be imagery-based or stimulus-based)

(a) Brain regions for electrode placement (b) 10-20 Rule

Fig. 2: Electrode Placement and Brain Regions

Exogenous BCI, on the other hand, is based on the brain's response to an external stimulus, for example P300 and Steady State Visually Evoked Potentials (SSVEP) [58].
BCIs can also be classified as independent and dependent [59]. Independent BCIs rely solely on brain activity, whereas dependent BCIs access signals from peripheral nerves and muscles as well. Endogenous BCIs, as discussed above, are part of the independent BCI paradigm. There are several techniques used for brain imaging, including functional Near-Infrared Spectroscopy (fNIRS), fMRI, Magnetoencephalography (MEG) and Positron Emission Tomography (PET). fNIRS measures blood oxygen levels by shining infrared light into the head and analyzing the response. Most of these are not practical enough for use by the general public; EEG-based BCIs, by contrast, are relatively inexpensive depending on the application.
Electrode location for EEG systems needs to be consistent for all head shapes and sizes. A standard
electrode positioning system used for this is the international 10-20 system [60]. The electrodes
are marked as per their locations to achieve positive mapping results. The basis of this system is
the relationship between the electrodes and the underlying cerebral cortex area. Figure-2a shows
the different important regions of the brain and electrodes pertaining to them. The electrodes are
identified by a letter followed by a number. F, T, C, P and O stand for the Frontal, Temporal, Central,
Parietal and Occipital regions of the cortex. Among the numbering from 1-8, all odd numbered
electrodes (1,3,5,7) lie on the left hemisphere while the even numbered electrodes (2,4,6,8) lie on the
right hemisphere. The numbers in the 10-20 rule refer to the 10% and 20% inter-electrode distances, as shown in Figure-2b.
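As a small illustration of this naming convention, the hypothetical helper below decodes a single-letter 10-20 label into its cortical region and hemisphere (midline 'z' electrodes such as Cz are included for completeness, though they are not part of the 1-8 numbering described above):

```python
REGIONS = {"F": "Frontal", "T": "Temporal", "C": "Central",
           "P": "Parietal", "O": "Occipital"}

def decode_electrode(label: str) -> str:
    """Decode a 10-20 label such as 'C3' into region and hemisphere."""
    region = REGIONS[label[0].upper()]
    suffix = label[1:]
    if suffix.lower() == "z":          # 'z' marks the midline (e.g. Cz)
        return f"{region}, midline"
    # Odd numbers lie on the left hemisphere, even numbers on the right.
    side = "left" if int(suffix) % 2 == 1 else "right"
    return f"{region}, {side} hemisphere"

print(decode_electrode("C3"))  # Central, left hemisphere
print(decode_electrode("O2"))  # Occipital, right hemisphere
```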
BCI systems can employ big-data-based cloud analytics, as done in past works for healthcare systems [41, 61]. A major target use-case of BCIs is to restore or replace useful neuro-muscular function for people with neuro-muscular disorders such as amyotrophic lateral sclerosis (ALS), tetraplegia, cerebral palsy, stroke and spinal cord injury. ALS is a condition wherein the patient progressively loses motor capability, which can culminate in locked-in syndrome. BCI systems have the potential to provide people with motor disabilities a means to control artificial limbs, i.e. neuro-prosthetics [62]. Speller programs that utilize reactive BCI are used to assist people with tetraplegia. EEG, ECoG, intracortical and other brain signals have also been used for complex control of cursors [15], robotics and prostheses [18], and wheelchairs [20], and also to secure the IoT [63]. Passive BCI is steadily entering the general public domain as companies like EMOTIV [64], NeuroSky [65], Natus Medical [66] and MindMaze [67] introduce BCI-powered applications. BCI has also been used along with FES for the treatment of spinal cord injuries [68]. Neuro-rehabilitation is an emerging research field for this technology and is discussed in a dedicated section later in this paper. In the area of forensics, BCI is used for brain fingerprinting [69] and lie detection [70].
Brain-wave sensors are used for reading EEG signals from the brain. The sensor data can be analyzed using various programming languages, including Python and MATLAB, with the aim of classifying the signals into certain useful tasks. Hardware headsets available in the consumer market include the MindWave Mobile [71], Epoc Neuroheadset [72], Muse [73] and Enobio [74]. These are affordable and can help developers get started with BCI. Software such as BCI2000 [75], OpenViBE [76] and AsTeRICS [77] are open-source platforms for acquiring and analyzing data, and may also support synchronization with other interfaces such as joysticks and keyboards. These platforms are developed for users who wish to address various applications of BCI. Advances in machine learning techniques [78] are paving the way to extract information from more entangled signals with greater ease. Pattern recognition in EEG data can be performed using linear classifiers such as Linear Discriminant Analysis (LDA), non-linear classifiers such as Support Vector Machines (SVM) and k-Nearest Neighbors (kNN), and neural-network-based methods such as Multi-Layer Perceptrons (MLPs). An overview of these machine-learning-based classification techniques in BCI-VR systems is presented in [79].
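As an illustrative sketch of such a comparison, and not a prescription, the snippet below cross-validates the three classifier families named above on a synthetic stand-in for band-power feature vectors:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for EEG band-power features: 200 trials x 8 features,
# two imagined classes (e.g. left vs right hand). A real pipeline would
# extract these features from filtered EEG epochs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 2, size=200)
X[y == 1] += 0.8  # separate the two classes slightly

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf")),
                  ("kNN", KNeighborsClassifier(n_neighbors=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```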
Among the sub-classes of BCIs, Event Related Potential (ERP) and Motor Imagery (MI) are the
two most commonly used ones. The subsections below give a background on these sub-classes:
Event Related Potentials: Event Related Potentials (ERPs) occur when sensory or cognitive processes trigger brain responses. The most commonly used Sensory Evoked Potentials (SEPs) are Visual Evoked Potentials (VEPs) [80] and Auditory Evoked Potentials (AEPs) [81]. ERP waveforms are usually described by their amplitudes and latencies. P300 is a commonly observed ERP, known predominantly through the P300 speller [82] shown in Figure-3. SSVEPs are another type of VEP, evoked by a repetitive optical stimulus.

Fig. 3: P300 Typing
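To show how an ERP such as the P300 becomes visible, the sketch below averages stimulus-locked epochs; the arrays are synthetic stand-ins for target and non-target trials, since single-trial ERPs are buried in noise and averaging cancels activity that is not phase-locked to the stimulus:

```python
import numpy as np

fs = 250                       # sampling rate (Hz), hypothetical
n_trials, n_samples = 40, 200  # 200 samples = 800 ms epochs

rng = np.random.default_rng(1)
noise = lambda: rng.normal(0, 5, size=(n_trials, n_samples))

# Simulated P300: a positive bump ~300 ms after target stimuli only.
t = np.arange(n_samples) / fs
p300 = 4 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
target_epochs = noise() + p300
nontarget_epochs = noise()

# Averaging across trials reveals the ERP hidden in single-trial noise.
erp_target = target_epochs.mean(axis=0)
erp_nontarget = nontarget_epochs.mean(axis=0)
peak_ms = 1000 * t[np.argmax(erp_target)]
print(f"Target-average peak near {peak_ms:.0f} ms")  # expected ~300 ms
```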
Motor Imagery: The act of thinking about or observing an action activates the motor cortex. Even if a person is unable to perform a motor movement, they can still produce the same signals by imagining the action. Motor imagery is the mental practice of movements without actual physical movement. It can be implicit (imagining an activity that indirectly requires motor use) or explicit (for example, imagining the movement of the left hand).

2.2 XR AND USE CASES

Extended Reality (XR) is an advanced form of immersive technology which covers virtual, augmented and mixed reality along with human-machine interactions. XR has innumerable applications, which include (but are not limited to) entertainment, education, and professional or military training. VR is useful when a high degree of immersion is necessary and the real environment cannot meet the requirements. The Reality-Virtuality continuum [83] incorporates all XR variations, as shown in Figure-4.
For VR, specialized head-mounted displays (HMDs) from Oculus (Facebook), HTC Vive, Valve Index etc. are available in the market. These use high-fidelity optics and well-developed motion sensing, and have dedicated controllers for interaction with the projected reality. VR is used in multiple areas, and the democratization of this technology has placed it in the hands of ordinary users: using Google's Cardboard viewer, even a mobile phone with a gyroscope and accelerometer can serve as a VR device. These devices are being used as part of exposure therapy to create immersive environments. VR is extensively used in industry for recruitment [84], training [45], treatment of PTSD [5], work collaboration and many other applications [85]. 3D modeling software [86] like 3ds Max, Maya, Rhino and Fusion 360 can be used to create 3D models and environments, alongside tools like Substance Painter for textures. These 3D models are then used inside development engines like Unity3D [87] and Unreal Engine [88], in which programming languages, along with visual scripting systems such as Blueprints, are used to control object behavior.
In the case of AR [89, 90], most current smartphones make use of ARKit (iOS) or ARCore (Android) for augmented-reality-based applications. Specialized AR headsets like the Tesseract are also available in the Asian market for a smartphone-independent experience. AR finds several applications in smart home control, auditing, healthcare and surgery [91]. For Mixed Reality (MR), one widely known hardware platform, the HoloLens, was launched by Microsoft in 2016; other companies active in this space include Google, Samsung, Nintendo and Huawei. The Virtual Fixtures platform, developed by Armstrong Laboratories in 1992, was the first MR system providing enveloping sound, touch and sight, and it demonstrated improved human performance by overlaying virtual objects on the real physical environment.
The following sections focus on past research on the applications of BCI and XR in navigation, entertainment, rehabilitation, home control and robotics.

3 BCI-VR APPLICATIONS

BCIs increase the communication bandwidth of interactions between humans and VR systems. Illusions of an artificially perceived reality can be induced to offer various immersive scenarios [92]. The multiple areas of application for BCI-XR can be seen in Figure-5. This section discusses existing BCI-VR research in depth, covering previous work in the fields of navigation, entertainment, rehabilitation and robotics.

3.1 NAVIGATION

This sub-section discusses motor-imagery-based VR navigation. Using brain signals to navigate in a virtual world has been explored extensively by multiple research groups. BCI-VR navigation systems are used in areas including gaming, rehabilitation and neuro-feedback training. The ease of navigation is an important enabler of VR environments because of physical space and action constraints, but environment designers must always ensure that motion sickness, which can hamper the process, is avoided.

Fig. 4: Reality–Virtuality Continuum (Real Environment, Augmented Reality, Augmented Virtuality, Virtual Environment)

Fig. 5: BCI-XR Applications (virtual navigation, entertainment, home control, robotics, rehabilitation)

Figure-6 shows a typical BCI-VR navigation system. In the late 1990s, researchers from Lancaster University developed one of the first systems with which subjects could navigate through and interact with a virtual world [93]. Subject training was based on a reward system that depended on maintaining an EEG signal component above a threshold. The environment for
training was constructed from sets of Virtual Reality Modelling Language (VRML) components. VR in general has proved to be more engaging for similar use-cases. Although both motor imagery and evoked potentials have been used for VR-based navigation, our focus is mainly on MI-based navigation, as discussed in the following subsection.

3.1.1 Navigation using Motor Imagery

Visual feedback is useful for reinforcing behaviours, and neuro-plasticity plays an important role in this process. Rigorous training may be required to reinforce the intended thoughts and imagined situations. A typical protocol for training a subject in an MI-BCI is the execution of an MI task with a bar of variable length or a moving object as feedback; the bar or object gives visual feedback on how the person is performing. This induces a sense of embodiment, which significantly increases the rate of learning in MI tasks [94]. The training process can hinder progress if the feedback is not positive, but current machine learning techniques can decipher the signal even when the SNR is poor. MI sessions are generally preceded by capturing signals in offline mode, because the range of signals captured can vary.
Motor imagery provides a step away from cue-based and synchronised BCIs, making the use of
BCI closer to the real world. A study on MI-based navigation in a VE showed that the graphical capabilities of VR open up possibilities to create new BCI paradigms with improved feedback [95, 96]. Another study discussed the feasibility of using an MI-BCI for navigating through a virtual city [97]. The tasks included moving forward, moving backward or staying stationary to navigate from one place in the city to another. Motion control in the VE was based on two-class MI classification: left hand for turning left and right hand for turning right. In [98], a 3-channel EEG and motor imagery were employed, with naive subjects trained over three sessions to comfortably navigate a VE for viewing apartments. Goal-oriented tasks, varied decision periods and high mental workload were kept in mind while designing the virtual apartment, and subjects could maintain stable MI over a minimum of two seconds.

Fig. 6: BCI-VR Navigation

In addition to the use case itself, it was observed that dedicated and motivated subjects performed better than their unmotivated counterparts. Leeb et al. in 2006 [99] proposed a Graz-BCI system wherein subjects navigated a CAVE using an HMD. In [100], an experimenter-cued asynchronous BCI was designed for navigating through the Australian National Library.
Navigation is also closely linked to controlling the virtual object being moved through the VE. Luu et al. explored the possibility of closed-loop non-invasive BCI-based avatar walking, using the delta band (0.1-3 Hz) as a feature [101]. Subjects' r values increased significantly over the course of 8 training days for hip, knee and ankle control. The study also revealed that cortical involvement differs with and without closed-loop BCI control. While such a system is useful for navigating human-like avatars in VEs, it may also prove useful for post-stroke rehabilitation patients, as reviewed in a later section. Scherer et al. presented a 3-class Graz-BCI based on the detection of sensorimotor rhythms (SMRs, 13-15 Hz) [102]. EOG and EMG artifacts were removed, and 3 bipolar EEG channels were used. The paper presents two applications, freeSpace VE and Brainloop: freeSpace is a computer game in which subjects navigate and collect coins as rewards [103], while Brainloop provides an interface between Google Earth and the Graz-BCI. A multi-projection-based stereo VE called DAVE was used by Lotte et al. in 2012 [48].
An MI-based navigation system designed for people with special needs was presented in [36]. The system comprises an MI-BCI, a communication module, an EEG analyser and a VE linked to a real-world apartment. The study showed reasonable classification of MI for use in a VR setting. Fujisawa et al. demonstrated training-less BCI-VE navigation employing common spatial patterns (CSP) for two-class MI (left and right hands) [104]. Friedman et al. used a CAVE system in which subjects navigated a virtual bar using two-class (left- and right-hand) MI [105]; the subjects were also made to navigate along a street in 1D by imagining either hand or feet motion. In [106], a 3-class BCI was presented for navigating a single-path maze, the three classes being left hand, right hand and feet; the study showed an increase in user engagement while using BCI-VR. Table-2 summarizes the MI-based BCI research conducted so far, categorising the navigation as self-paced (at the pace of the subject) or synchronous (subjects are given progress cues). The MI commands used in the studies may be either low-level or high-level, based on the complexity of the action.

3.1.2 Navigation using Event Related Potentials

Research groups are also consistently working on creating P300-based virtual navigation environments. Navigation in the MazeSuite virtual environment was achieved by using a 3x3 P300 matrix containing navigation icons; the generated ERPs are processed, and online classification commands are issued to achieve the desired result [107]. Curtin et al. used this approach for spatial navigation control in the MindMaze virtual environment shown on one computer screen, with the 3x3 matrix on another; accuracies of 82-89% were found depending on the complexity of the environment. Cattan et al. showed that there is little difference in BCI accuracy between a P300-VR and a P300-PC setup [108]; only the P300 waveform is wider in the virtual case.
Table 2: BCI-VR Navigation

Reference | VE/Object | No. of MI Tasks | Type | Commands | Target Area
[102] | Google Earth | 3 | Self-paced | Low-level | Games
[103] | Free Space | 3 | Self-paced | Low-level | Games
[106] | Maze | 3 | Self-paced | Low-level | Games
[109] | Museum | 3 | Self-paced | High-level | General
[99] | Street | 2 | Synchronous | Low-level | General
[110] | Street | 2 | Synchronous | Low-level | General
[97] | City | 2 | Self-paced | Low-level | General
[98] | Apartment | 2 | Semi-synchronous | Low-level | General
[98] | Pub | 2 | Synchronous | Low-level | General
[95] | Conference Room | 2 | Self-paced | Low-level | General
[105] | Bar and Street | 2 | Synchronous | Low-level | General
[36] | Apartment | 2 | Self-paced | Low-level | Medical
[111] | Maze or Park | 1-2 | Self-paced | Low-level | Medical
[112] | Park | 1-2 | Self-paced | Low-level | Medical
[100] | Library | 1 | Self-paced | Low-level | Medical
[113] | Wheelchair | 1 | Synchronous | High-level | Medical
[114] | Space Ship | 1 | Self-paced | Low-level | General
[100] | Street | 1 | Self-paced | Low-level | Medical

3.2 ENTERTAINMENT

The creative industry and researchers have devised ways to connect VR and BCI for entertainment. EEG signals can be used to interact with VEs in leisure VR. As discussed before, EEG signal acquisition takes time and such systems are not easy to use; but with robust algorithms, BCI and VR have emerged as a promising mode of human-computer interaction. Andujar et al. [115] recently published work on P300-based painting in VR, which can be used by patients with ALS as a channel to communicate more creatively by painting objects using a 6x6 grid. BCI games are also used in rehabilitation applications, for example to give biofeedback therapies to ADHD and trauma patients. Real-time games use synchronous BCI for online classification. As noted above, the processes differ in terms of feature extraction, classification, ease of use and accuracy. Games can be classified into categories based on the user's mental state or attention level, SSVEP and P300.
Mental state detection (attention: focused/relaxed, or an emotional state) can be used to trigger actions in VEs. Other than noise, these signals also contain eye artifacts produced by eye movements and blinks; a good overview of these signals and related concepts can be found in [116]. Some gaming systems even make use of these artifacts as part of the control signal. For understanding a subject's mental state, the band power of the signals gives the amounts of δ (0.5-4 Hz), θ (4-8 Hz), α (8-15 Hz), β (15-30 Hz) and γ (30-70 Hz) waves. θ waves are connected to meditative concentration and deep relaxation. δ waves are only observed in adults when they are in deep sleep. α rhythms are related to visual processing and memory functions; a suppression in α waves can be seen during an increase in mental effort, while strong alpha activity indicates that the brain is in a state of relaxation. β rhythms are connected to excitement of the brain; they are desynchronized during real movements and motor imagery, and their symmetrical distribution changes. Relaxation, attention and focus levels are calculated from the band-power values by taking the required α, β and θ ratios [117]. Power Spectral Density (PSD) points can also be selected as feature vectors (FVs) to train machine learning classifiers with labeled examples. Different emotional states can be analyzed using the band-power values at different frequencies, with the required frequencies selected uniformly from the alpha, beta and theta ranges irrespective of the frequency band length. A number of games that make use of these states are discussed in this section (computer games, followed by immersive virtual games in conjunction with BCI).
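The band-power computation described above can be sketched as follows, using Welch's PSD estimate; the band edges follow the text, and the beta/(alpha+theta) engagement index is one common choice among the possible α, β, θ ratios:

```python
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 15),
         "beta": (15, 30), "gamma": (30, 70)}

def band_powers(eeg: np.ndarray, fs: float) -> dict:
    """Integrate the Welch PSD over each canonical EEG band."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    df = freqs[1] - freqs[0]
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
            for name, (lo, hi) in BANDS.items()}

def attention_index(eeg: np.ndarray, fs: float) -> float:
    """One common engagement index: beta / (alpha + theta)."""
    p = band_powers(eeg, fs)
    return p["beta"] / (p["alpha"] + p["theta"])
```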
Several research articles have been published that utilize attention-based BCI for gaming. Jiang et al. created a BCI-controlled 3D hand that took hold of fruits on a plate [118]; the authors use a concentration measure as a parameter to change the position of the hand and the grasping force. Joselli et al. created a game similar to Fruit Ninja, where the aim is to slice the maximum number of fruits in one minute [119]; sustaining attention during gameplay keeps the measured value above an initially set limit, allowing the player to progress. Neuro-feedback games have far-reaching potential beyond entertainment: they can be used to check students' attention levels and help boost the learning process in cases of attention deficit. A drawing game was presented by Moon et al. which also utilizes gaze tracking to obtain the coordinates of the player's focus points in addition to attention-state analysis [120]; high player attention leads to the creation of patterns, whereas low attention creates negative feedback. Lalor et al. in 2005 developed a game that uses attention along with SSVEP, in which the aim is to keep a character balanced on a rope in 1D [121]. Two phase-reversing checkerboards (17 and 20 Hz) were placed beside the character, and the player has to focus on the appropriate checkerboard to ensure the stability of the character. Koo et al. designed a VR maze game with the goal of guiding a ball into a given space in the 3D VE using a 4-option SSVEP [122]. The experiments showed a 10% increase in information transfer rate when using an HMD rather than a monitor screen, demonstrating that SSVEP stimulation through an HMD is more engaging for BCI.
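Below is a hedged sketch of how the SSVEP choice between two checkerboard frequencies could be decoded from a hypothetical single-channel occipital EEG segment, by comparing spectral power at the two stimulus frequencies:

```python
import numpy as np

def ssvep_decision(eeg: np.ndarray, fs: float,
                   stim_freqs=(17.0, 20.0)) -> float:
    """Return the stimulus frequency with the strongest spectral power."""
    # Windowed FFT power spectrum of the EEG segment.
    spectrum = np.abs(np.fft.rfft(eeg * np.hanning(len(eeg)))) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    scores = []
    for f in stim_freqs:
        # Sum power in a narrow band around each stimulus frequency.
        mask = np.abs(freqs - f) < 0.5
        scores.append(spectrum[mask].sum())
    return stim_freqs[int(np.argmax(scores))]
```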

3.3 REHABILITATION AND ROBOTICS

One of the common causes of paralysis is stroke, which causes cognitive and motor impairment due to damage to a large number of brain cells. Low-cost, effective BCI-VR solutions capable of presenting neuro-feedback and analysing the BCI performance of subjects can increase the functional capabilities of stroke rehabilitation systems [123]. Table-3 compiles various studies conducted on BCI-VR-based rehabilitation. Several companies are now using BCI-VR to help stroke survivors. The REINVENT platform [6] lets stroke patients use VR for body immersion and BCI for limb movements, providing patients as well as doctors with a better observation and rehabilitation environment for interpreting brain signals. This makes it possible for researchers to assess brain damage by observing gamified brain signals, without invasive methods. It can be used for physiotherapy for paralysed patients who need to exercise every day. Further extensions could include rehabilitation for astronauts who have spent so long in space that they must re-learn earthly actions like walking. Apart from regular rehabilitation techniques, video games have also proven to be an engaging means of rehabilitation: they often require cognitive involvement from players, thus offering an engrossing way to treat stroke rehabilitation patients. MI can be used to control in-game objects and avatars, with multiple possible tasks for the patients to perform. This helps promote motor learning and may be less frustrating during the recovery phase of motor function [124].
Motor imagery has proven to be an effective approach for post-stroke rehabilitation. MI-based BCI-VR methods have been shown to provide upper-limb rehabilitation to post-stroke patients, with improvements observed in both motor function and neuro-plasticity [139]. By providing a simulated world with which the patients can interact, data can be collected and patients can be monitored in real time. Multisensory feedback (visual, auditory and haptic) may be provided to make training less monotonous whilst promoting motor learning and enhancing participation [25]. A neurorehabilitation system using MI was developed in [140]: the proposed BCI detected three MI classes that produced virtual stimulation to control the VR environment. Discrete wavelet transforms (DWT) and multi-layer perceptron (MLP) neural networks were used for preprocessing and classification respectively, with misclassifications removed using an expounder.
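As a sketch of the DWT-plus-MLP pipeline described above, the snippet below extracts sub-band energies with PyWavelets and trains a small MLP; the wavelet family, decomposition depth, network shape and data are illustrative assumptions rather than the values used in [140]:

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def dwt_features(epoch: np.ndarray, wavelet: str = "db4",
                 level: int = 4) -> np.ndarray:
    """Energy of each DWT sub-band as a compact feature vector."""
    coeffs = pywt.wavedec(epoch, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Hypothetical training data: 100 one-second MI epochs at 250 Hz,
# with three MI classes as in the cited study.
rng = np.random.default_rng(2)
epochs = rng.normal(size=(100, 250))
labels = rng.integers(0, 3, size=100)

X = np.vstack([dwt_features(e) for e in epochs])
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500).fit(X, labels)
```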
BCI-FES rehabilitation can lead to significant recovery of motor function and purposeful plasticity due to the activation of efferent (central nervous system (CNS) to limbs) and afferent (sensory organs to CNS) body pathways [141]. BCI-FES techniques combined with VR have also been proposed to empower stroke rehabilitation. Patients were exposed to a VE using an Oculus Rift device, where they performed exercises as guided by a therapist. Proper execution of the exercises was checked using an electroencephalograph and an electrooculograph; the BCI system predicted the intention of the patient, and the classified intent was then converted into a command for controlling the FES device [29, 28].

Table 3: BCI-VR-based rehabilitation studies [125]

Reference | Disorder | Paradigm | VE action | No. of subjects | Accuracy
[126] | Nerve injury | MI | Character control | 3 | 97%
[127] | Stroke | MI | Hand control | 8 | 96.7%
[128] | Stroke | MI | Arm control | 10 | 93%
[129] | Stroke | MI | Hand control | 3 | 90.4%
[48] | Disability | MI | Character control | 7 | 90%
[130] | Spinal injury | MI | Navigation | 1 | 67.5%
[131] | Cognitive impairment | SSVEP | Navigation | 1 | 91%
[132] | Stroke | SSVEP | Mole game | 5 | 90%
[133] | Stroke | SSVEP | Character control | 3 | 81.4%
[134] | Alzheimer's | P300 | Navigation | 22 | 90%
[135] | Spinal injury | P300 | Character control | 21 | 89%
[136] | Cognitive impairment | P300 | Navigation | 50 | 87%
[137] | Paralysis | P300 | Sequence spelling | 18 | 80.5%
[138] | Autism | P300 | Flickering object | 17 | 80%

(a) VR-MI Limb Control (b) Virtual Boat Rowing Game

Fig. 7: VR-MI Rehabilitation

Simulating a prosthetic arm on a virtual L-shaped workbench display system with the help of BCI is another useful combination of the two technologies [142]. In this experiment, users were evaluated on criteria such as depth recognition and distance recognition. Simulating arms is an excellent entry point into artificial wearables for people with special needs, since limb signals are comparatively easier to recognise and map; starting with basic mapping and hand movement, the approach can be extended to other body parts with more difficult implementations. In [143], the subject was made to control the arm of a VR avatar, as shown in Figure-7a, using a soft-robotics rehabilitation device with a BCI for upper-limb classification. The use of hybrid BCIs for 3D object control in VR is also gaining traction among research groups [144]. Chun et al. use an eye-tracking module to track and select a 3D object in the VE; users are asked to concentrate on the object to manipulate it using BCI. The manipulations of the objects in the experiment range from basic to complex interactions, which help observers gather the right data. One group also developed a multiplayer checkers game in which a robotic arm moves the piece to the square the player wants to move to.

Another game-based real-time rehabilitation BCI was developed in [145]. The 3D VR game was controlled by the BCI after preprocessing, feature extraction and classification of MI signals, using a CSP feature extractor and an SVM classifier. An MI-based boat-rowing VR game for entertainment and rehabilitation, shown in Figure-7b, was presented in [19].
Using a wheelchair is extremely difficult for injured people. In one case study, users were placed in a VE with avatars on the streets and needed to travel from one point to another [100]. This is a small application in an extremely resource-deprived industry. Patients (especially older ones) find it difficult to adapt to newer technologies that might make their lives easier; such VEs and games, along with BCI, can be used to train people on advanced technology as well as to collect surveys and observations on how useful the technology is for patients.
Vision-impaired patients are unable to use BCIs based on visual feedback, and auditory methods can act as a good replacement in such cases. A hybrid auditory system combining a P300-BCI with the auditory steady-state response (ASSR) was presented in [35]. The system provides audio stimuli at different amplitude-modulated (AM) frequencies, along with beep sounds inserted arbitrarily between the sound sources. The different auditory responses lead to different ASSRs, and the beeps elicit P300s. The system is vision-independent and provides slightly lower accuracy compared to traditional P300 BCI systems.

4 BCI-AR/MR APPLICATIONS

Augmented and Mixed Reality enable better human-computer interaction through the new communication channels they offer, and BCI is a powerful way to interact with an AR headset in real time. The Epson Moverio, which acts like a "thinking mouse", is one such instance [153]. The most common applications used in conjunction with BCI employ optical see-through HMDs. Such HMD-BCI systems are well matched and can tolerate small head movements while using the BCI [154]. Table-4 provides the details of a few studies conducted on BCI and AR in the domains of robotics and medicine.
Researchers at TU Munich proposed a Superman-like X-ray vision application for doctors to use during surgeries [152]. Switching between normal vision and X-ray vision would traditionally require a UI, but the need for this is removed by a BCI employing EMG signals and gaze tracking. The Neural Impulse Actuator (NIA) can acquire brain waves (α, β) as well as EMG and EOG signals; it is traditionally marketed for computer gaming and for hands-free computer access for people with disabilities. Due to the importance of gaze position, eye movement could not be used for switching between X-ray and normal vision. Furthermore, utilizing α and β brain waves to control the system would require training on the part of the medical doctors (MDs). Thus a learning-free EMG model with gaze tracking was proposed: instead of augmenting the entire anatomy onto the patient's body, only small windows where the gaze lies are augmented. The augmentation is switched on either as a toggle at a certain muscle-tension threshold, or it stays on only while the tension is above the threshold and drops when
the muscle is relaxed. Such a system could be improved with additional functionality if alpha and beta waves were used instead, but this would require additional training time for the MDs.

Table 4: Brain-Computer Interface and Augmented Reality

Technologies | AR Display | Field | Objective | Reference
P300-BCI | Computer screen | Robotics | Robot steering | [146]
P300-BCI | HMD | Robotics | Robot limb control | [147]
SSVEP-BCI | Computer screen | Medical | Wheelchair control | [148]
SSVEP-BCI | HMD | Robotics | Robot limb control | [149]
MI-BCI | Computer screen | Medical | Phantom pain therapy | [150]
MI-BCI | Computer screen | Robotics | Robot limb control | [151]
MI-BCI + EMG | Computer screen | Medical | Phantom pain therapy | [150]
SSVEP-BCI + EMG | HMD | Medical | X-ray vision | [152]
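The switching logic described above can be sketched as a tiny state machine; this version implements the "on while tensed" mode, with a hysteresis margin (both threshold values are hypothetical) added to prevent flicker around a single threshold:

```python
class XRayToggle:
    """Switch X-ray augmentation on muscle-tension threshold crossings.

    Hysteresis (separate on/off thresholds) prevents flicker when the
    EMG envelope hovers near a single threshold. Values are illustrative.
    """
    def __init__(self, on_thresh: float = 0.6, off_thresh: float = 0.4):
        self.on_thresh = on_thresh
        self.off_thresh = off_thresh
        self.xray_on = False

    def update(self, emg_envelope: float) -> bool:
        if not self.xray_on and emg_envelope > self.on_thresh:
            self.xray_on = True     # muscle tensed: show X-ray window
        elif self.xray_on and emg_envelope < self.off_thresh:
            self.xray_on = False    # muscle relaxed: back to normal view
        return self.xray_on
```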
A BCI-AR system for industrial monitoring was proposed by Angrisani et al., where a smart transducer provided the required information [155]. This removes any requirement to physically measure the industrial-system parameters, saving time and providing safety from potential accidents. The model is SSVEP-based, with the stimulus provided through AR glasses; being semi-transparent and semi-reflective, the glasses allow the user to see both real and virtual scenes. AR with a P300-based BCI was employed by Takano et al. to build an environment control system [156]. When the camera attached to the HMD detects an AR marker on a device, the control panel of that device is rendered in the HMD with flickering visual stimuli, giving the user the ability to control it. This system does not require much training from the user. Such BCI-AR systems using an HMD may become focal in building intelligent environments.

5 WAY FORWARD: FUTURE SCOPE OF XR BASED BCI APPLICATIONS

As we progress further, radical changes are expected in how BCI and VR will be combined and in what problems this combination will address. Experimenting with the synchronisation of various modules with BCI can help make VR a completely hands-free experience. As a trend-setting example, Neuralink, founded by Elon Musk, is creating minimally invasive neural implants intended to suit a wide range of future applications. Its first-generation N1 implantable sensors are claimed to provide a real-world experience to their immediate target group: people without the use of their limbs. Neuralink has also teased a neural mesh-like device called the 'Neural Lace' that could give more access to the brain, since it would be injected into the brain's capillaries. To put it briefly, the future of the fusion of these technologies looks bright, but it still has a long way to go before becoming an integral part of our lives. Following the example of Neuralink, as our knowledge of other modules and of BCI increases, so will the possibilities in XR. With that said, some suggestive applications that can be looked into in the near future are given below.
Google has released its Glass Enterprise Edition 2 [157], a wearable AR device that aims to streamline business processes in different areas. It also aims to help employees work smarter, faster and safer by providing them with voice-activated or glance-based assistance. Such an AR device can be combined with a BCI and applied to several areas of work, including surgery and aviation. We discussed research on X-ray vision with BCI-AR earlier in this paper; if doctors could be provided with a quick retrieval system for crucial information through AR glasses with ocular or EEG-based control, it would make surgical operations far less error-prone and more precise. Researchers are also interested in moving away from 'head-down displays' in cockpits (where pilots need to look down at screens to see information regarding aircraft path coordinates and upcoming obstacles) towards technologies that provide head-up displays. These project important information onto head-mounted transparent screens to avoid unnecessary distractions that might be fatal; they also make information retrieval faster, without the need for unnecessary head movement, and enable pilots to focus better. Adding a very high-accuracy BCI-based control to the AR display, for instance for searching for an alternative path or selecting from multiple options, would further reduce the chances of a mishap and improve the quality of operation for pilots by making information retrieval easier. Such BCI-AR (or BCI-MR) devices could also be developed to show details regarding the stores a person is interested in while on the street, such as reviews, ratings, or even names and history.
EEG is one of the lowest-resolution methods of reading brain activity. Semi-invasive methods such as those employed by Neuralink may give better results across all current and future applications of BCI. This would open up the possibility of transcending the five senses and the limited bandwidth of our brains, and of achieving symbiosis with AI. Since BCI-controlled weapons are already on the radar, BCI-VR-based war training for soldiers can be an important application for the future of defense and security. VR and BCI together can make specialised doctors accessible everywhere: they would be able to perform operations without the need for physical presence or movement of their hands. BCI-tracked thoughts and signals can potentially open avenues for neuro-marketing, a highly targeted marketing technique that is bound to change the future of interactive and predictive marketing campaigns. BCI-VR can help build inventory-less shopping, letting users shop whenever they want. Neuro-gaming can provide a hands-free, immersive gaming experience. BCI-VR will also enable people to experience immersive environments that are otherwise inaccessible to them, whether for luxury or for immersive entertainment.

6 CONCLUSIONS

This paper presented the state-of-the-art combination of BCI and XR and covered major contributions at this intersection of research domains. It reviewed previous works on the fusion of these technologies in navigation, entertainment, rehabilitation, robotics and home control. Combining XR and BCIs is useful in scenarios favoring hands-free interaction, and future works in this domain may explore this combination in many more use cases than the ones discussed here in depth. This review also gives direction for the design of new interaction techniques and feedback modalities that would take advantage of the conjunction of these two technologies. In addition, it presents some possible future developments that can be pursued with the combination of BCI and XR.

Acknowledgement

This work is supported by BITS ACRG funding under Project Grant File no. PLN/AD/2018-19/6 for the project titled "Brain Computer Interface Controlled Humanoid".

References

1. L. Jiang, S. Ye, X. Yang, X. Ma, L. Lu, A. Ahmad, and G. Jeon, “An adaptive anchored neighbor-
hood regression method for medical image enhancement,” Multimedia Tools and Applications,
vol. 79, no. 15, pp. 10 533–10 550, 2020.
2. J. Wang, K. Han, A. Alexandridis, Z. Chen, Z. Zilic, Y. Pang, G. Jeon, and F. Piccialli, “A
blockchain-based ehealthcare system interoperating with wbans,” Future Generation computer
systems, vol. 110, pp. 675–685, 2020.
3. V. Chamola, V. Hassija, V. Gupta, and M. Guizani, “A comprehensive review of the covid-19
pandemic and the role of IoT, drones, AI, blockchain, and 5G in managing its impact,” IEEE Access,
vol. 8, pp. 90 225–90 265, 2020.
4. K. Muhammad, S. Khan, J. Del Ser, and V. H. C. de Albuquerque, “Deep learning for multigrade
brain tumor classification in smart healthcare systems: A prospective survey,” IEEE Transac-
tions on Neural Networks and Learning Systems, vol. 32, no. 2, pp. 507–522, 2020.
5. B. O. Rothbaum, L. Hodges, R. Alarcon, D. Ready, F. Shahar, K. Graap, J. Pair, P. Hebert,
D. Gotz, B. Wills et al., “Virtual reality exposure therapy for ptsd vietnam veterans: A case
study,” Journal of Traumatic Stress: Official Publication of The International Society for Trau-
matic Stress Studies, vol. 12, no. 2, pp. 263–271, 1999.
6. A. Vourvopoulos, O. Marin-Pardo, M. Neureither, D. Saldana, E. Jahng, and S.-L. Liew, “Mul-
timodal head-mounted virtual-reality brain-computer interface for stroke rehabilitation,” in In-
ternational Conference on Human-Computer Interaction. Springer, 2019, pp. 165–179.
7. J. R. Wolpaw, G. E. Loeb, B. Z. Allison, E. Donchin, O. F. do Nascimento, W. J. Heetderks,
F. Nijboer, W. G. Shain, and J. N. Turner, “Bci meeting 2005-workshop on signals and recording
methods,” IEEE Transactions on neural systems and rehabilitation engineering, vol. 14, no. 2,
pp. 138–141, 2006.
8. D. Mattia, L. Astolfi, J. Toppi, M. Petti, F. Pichiorri, and F. Cincotti, “Interfacing brain
and computer in neurorehabilitation,” in 2016 4th International Winter Conference on Brain-
Computer Interface (BCI). IEEE, 2016, pp. 1–2.
9. Z. Lv, C. Esteve, J. Chirivella, and P. Gagliardo, “Serious game based personalized healthcare
system for dysphonia rehabilitation,” Pervasive and Mobile Computing, vol. 41, pp. 504–519,
2017.
10. D. P.-O. Bos, H. Gürkök, B. Van de Laar, F. Nijboer, and A. Nijholt, “User experience evaluation
in bci: Mind the gap!” International Journal of Bioelectromagnetism, vol. 13, no. 1, pp. 48–49,
2011.
11. Y. Li, J. Pan, F. Wang, and Z. Yu, “A hybrid bci system combining p300 and ssvep and
its application to wheelchair control,” IEEE Transactions on Biomedical Engineering, vol. 60,
no. 11, pp. 3156–3166, 2013.
12. G. Edlinger, C. Holzner, and C. Guger, “A hybrid brain-computer interface for smart home
control,” in International conference on human-computer interaction. Springer, 2011, pp. 417–
426.

13. X. Gu, Z. Cao, A. Jolfaei, P. Xu, D. Wu, T.-P. Jung, and C.-T. Lin, “Eeg-based brain-computer
interfaces (bcis): A survey of recent studies on signal sensing technologies and computational
intelligence approaches and their applications,” arXiv preprint arXiv:2001.11337, 2020.
14. M. Bamdad, H. Zarshenas, and M. A. Auais, “Application of bci systems in neurorehabilitation:
a scoping review,” Disability and Rehabilitation: Assistive Technology, vol. 10, no. 5, pp. 355–
364, 2015.
15. Y. Li, J. Long, T. Yu, Z. Yu, C. Wang, H. Zhang, and C. Guan, “An eeg-based bci system for
2-d cursor control by combining mu/beta rhythm and p300 potential,” IEEE Transactions on
Biomedical Engineering, vol. 57, no. 10, pp. 2495–2505, 2010.
16. U. Tripathi, R. Saran, V. Chamola, A. Jolfaei, and A. Chintanpalli, “Advancing remote health-
care using humanoid and affective systems,” IEEE Sensors Journal, 2021.
17. V. Chamola, A. Vineet, A. Nayyar, and E. Hossain, “Brain-computer interface-based humanoid
control: A review,” Sensors, vol. 20, no. 13, p. 3620, 2020.
18. A. Athanasiou, I. Xygonakis, N. Pandria, P. Kartsidis, G. Arfaras, K. R. Kavazidi, N. Foroglou,
A. Astaras, and P. D. Bamidis, “Towards rehabilitation robotics: off-the-shelf bci control of
anthropomorphic robotic arms,” BioMed research international, vol. 2017, 2017.
19. A. Vourvopoulos, A. Ferreira, and S. B. i Badia, “Neurow: an immersive vr environment for
motor-imagery training with the use of brain-computer interfaces and vibrotactile feedback,”
in International Conference on Physiological Computing Systems, vol. 2. SCITEPRESS, 2016,
pp. 43–53.
20. J. J. Shih, D. J. Krusienski, and J. R. Wolpaw, “Brain-computer interfaces in medicine,” in
Mayo Clinic Proceedings, vol. 87, no. 3. Elsevier, 2012, pp. 268–279.
21. M. Arvaneh, C. Guan, K. K. Ang, and C. Quek, “Optimizing the channel selection and clas-
sification accuracy in eeg-based bci,” IEEE Transactions on Biomedical Engineering, vol. 58,
no. 6, pp. 1865–1873, 2011.
22. J. A. Wilson, E. A. Felton, P. C. Garell, G. Schalk, and J. C. Williams, “Ecog factors underlying
multimodal control of a brain-computer interface,” IEEE transactions on neural systems and
rehabilitation engineering, vol. 14, no. 2, pp. 246–250, 2006.
23. L. F. Nicolas-Alonso and J. Gomez-Gil, “Brain computer interfaces, a review,” Sensors, vol. 12,
no. 2, pp. 1211–1279, 2012.
24. J.-H. Han, S. Ji, C. Shi, S.-B. Yu, and J. Shin, “Recent progress of non-invasive optical modality
to brain computer interface: A review study,” in The 3rd International Winter Conference on
Brain-Computer Interface. IEEE, 2015, pp. 1–2.
25. W. Wang, B. Yang, C. Guan, and B. Li, “A vr combined with mi-bci application for upper limb
rehabilitation of stroke,” in 2019 IEEE MTT-S International Microwave Biomedical Conference
(IMBioC), vol. 1. IEEE, 2019, pp. 1–4.
26. R. D. Walker, S. B. Andersson, C. A. Belta, and P. E. Dupont, “In-haptics: Interactive navigation
using haptics,” in 2010 IEEE Haptics Symposium. IEEE, 2010, pp. 463–466.
27. J. R. Kim, R. H. Osgouei, and S. Choi, “Effects of visual and haptic latency on touchscreen
interaction: A case study using painting task,” in 2017 IEEE World Haptics Conference (WHC).
IEEE, 2017, pp. 159–164.
28. R. G. Lupu, D. C. Irimia, F. Ungureanu, M. S. Poboroniuc, and A. Moldoveanu, “Bci and fes
based therapy for stroke rehabilitation using vr facilities,” Wireless Communications and Mobile
Computing, vol. 2018, 2018.
29. F. A. Jure, L. C. Carrere, G. G. Gentiletti, and C. B. Tabernig, “Bci-fes system for neuro-
rehabilitation of stroke patients,” in Journal of Physics: Conference Series, vol. 705, no. 1.
IOP Publishing, 2016, p. 012058.
30. S. N. Abdulkader, A. Atia, and M.-S. M. Mostafa, “Brain computer interfacing: Applications
and challenges,” Egyptian Informatics Journal, vol. 16, no. 2, pp. 213–230, 2015.
31. S. Vaid, P. Singh, and C. Kaur, “Eeg signal analysis for bci interface: A review,” in 2015 fifth
international conference on advanced computing & communication technologies. IEEE, 2015,
pp. 143–147.
32. P. Herman, G. Prasad, T. M. McGinnity, and D. Coyle, “Comparative analysis of spectral
approaches to feature extraction for eeg-based motor imagery classification,” IEEE Transactions
on Neural Systems and Rehabilitation Engineering, vol. 16, no. 4, pp. 317–326, 2008.
33. T. White, N. C. Andreasen, and P. Nopoulos, “Brain volumes and surface morphology in
monozygotic twins,” Cerebral cortex, vol. 12, no. 5, pp. 486–493, 2002.
34. T. Radman, L. Parra, and M. Bikson, “Amplification of small electric fields by neurons; implica-
tions for spike timing,” in 2006 International Conference of the IEEE Engineering in Medicine
and Biology Society. IEEE, 2006, pp. 4949–4952.
35. N. Kaongoen and S. Jo, “A novel hybrid auditory bci paradigm combining assr and p300,”
Journal of neuroscience methods, vol. 279, pp. 44–51, 2017.
36. D. Suh, H. S. Cho, J. Goo, K. S. Park, and M. Hahn, “Virtual navigation system for the
disabled by motor imagery,” in Advances in Computer, Information, and Systems Sciences, and
Engineering. Springer, 2007, pp. 143–148.
37. J. A. Detre, “fmri: applications in epilepsy,” Epilepsia, vol. 45, pp. 26–31, 2004.
38. E. Debie, N. Moustafa, and M. T. Whitty, “A privacy-preserving generative adversarial net-
work method for securing eeg brain signals,” in 2020 International Joint Conference on Neural
Networks (IJCNN). IEEE, 2020, pp. 1–8.
39. G. C. Burdea and P. Coiffet, Virtual reality technology. John Wiley & Sons, 2003.
40. T. Nokuo and T. Sumii, “Head mounted display,” Dec. 23, 2014, US Patent App. 29/502,182.
41. Z. Lv, J. Chirivella, and P. Gagliardo, “Bigdata oriented multimedia mobile health applications,”
Journal of medical systems, vol. 40, no. 5, p. 120, 2016.
42. B. Egliston and M. Carter, “Oculus imaginaries: The promises and perils of facebook’s virtual
reality,” New Media & Society, p. 1461444820960411, 2020.
43. P. Dempsey, “The teardown: Htc vive vr headset,” Engineering & Technology, vol. 11, no. 7-8,
pp. 80–81, 2016.
44. V. Angelov, E. Petkov, G. Shipkovenski, and T. Kalushkov, “Modern virtual reality headsets,”
in 2020 International Congress on Human-Computer Interaction, Optimization and Robotic
Applications (HORA). IEEE, 2020, pp. 1–5.
45. A. Zook, S. Lee-Urban, M. O. Riedl, H. K. Holden, R. A. Sottilare, and K. W. Brawner, “Au-
tomated scenario generation: toward tailored and optimized military training in virtual envi-
ronments,” in Proceedings of the international conference on the foundations of digital games,
2012, pp. 164–171.
46. S. Helsel, “Virtual reality and education,” Educational Technology, vol. 32, no. 5, pp. 38–42,
1992.
47. M. Farshid, J. Paschen, T. Eriksson, and J. Kietzmann, “Go boldly!: Explore augmented reality
(ar), virtual reality (vr), and mixed reality (mr) for business,” Business Horizons, vol. 61, no. 5,
pp. 657–663, 2018.
48. F. Lotte, J. Faller, C. Guger, Y. Renard, G. Pfurtscheller, A. Lécuyer, and R. Leeb, “Combining
bci with virtual reality: towards new applications and improved bci,” in Towards Practical Brain-
Computer Interfaces. Springer, 2012, pp. 197–220.
49. S. Mandal, “Brief introduction of virtual reality & its challenges,” International Journal of
Scientific & Engineering Research, vol. 4, no. 4, pp. 304–309, 2013.
50. A.-M. Brouwer, J. Van Der Waa, and H. Stokking, “Predicting head rotation using eeg to
enhance streaming of images to a virtual reality headset,” 2018.
51. D. Steyrl, R. J. Kobler, G. R. Müller-Putz et al., “On similarities and differences of invasive
and non-invasive electrical brain signals in brain-computer interfacing,” Journal of Biomedical
Science and Engineering, vol. 9, no. 08, p. 393, 2016.
52. D. A. Heldman and D. W. Moran, “Local field potentials for bci control,” in Handbook of Clinical
Neurology. Elsevier, 2020, vol. 168, pp. 279–288.
53. A. K. Bansal, W. Truccolo, C. E. Vargas-Irwin, and J. P. Donoghue, “Decoding 3d reach and
grasp from hybrid signals in motor and premotor cortices: spikes, multiunit activity, and local
field potentials,” Journal of neurophysiology, vol. 107, no. 5, pp. 1337–1355, 2012.
54. P. Aricò, G. Borghini, G. Di Flumeri, N. Sciaraffa, and F. Babiloni, “Passive bci beyond the
lab: current trends and future directions,” Physiological measurement, vol. 39, no. 8, p. 08TR02,
2018.
55. B. Kerous, F. Skola, and F. Liarokapis, “Eeg-based bci and video games: a progress report,”
Virtual Reality, vol. 22, no. 2, pp. 119–135, 2018.
56. M. Marchetti, F. Piccione, S. Silvoni, and K. Priftis, “Exogenous and endogenous orienting
of visuospatial attention in p300-guided brain computer interfaces: A pilot study on healthy
participants,” Clinical Neurophysiology, vol. 123, no. 4, pp. 774–779, 2012.
57. S.-I. Choi, C.-H. Han, G.-Y. Choi, J. Shin, K. S. Song, C.-H. Im, and H.-J. Hwang, “On the
feasibility of using an ear-eeg to develop an endogenous brain-computer interface,” Sensors,
vol. 18, no. 9, p. 2856, 2018.
58. E. Marx, M. Benda, and I. Volosyak, “Optimal electrode positions for an ssvep-based bci,” in
2019 IEEE International Conference on Systems, Man and Cybernetics (SMC). IEEE, 2019,
pp. 2731–2736.
59. A. Ravi, N. H. Beni, J. Manuel, and N. Jiang, “Comparing user-dependent and user-independent
training of cnn for ssvep bci,” Journal of Neural Engineering, vol. 17, no. 2, p. 026028, 2020.
60. R. W. Homan, J. Herman, and P. Purdy, “Cerebral location of international 10–20 system
electrode placement,” Electroencephalography and clinical neurophysiology, vol. 66, no. 4, pp.
376–382, 1987.
61. I. Mehmood, Z. Lv, Y.-D. Zhang, K. Ota, M. Sajjad, and A. K. Singh, “Mobile cloud-assisted
paradigms for management of multimedia big data in healthcare systems: Research challenges
and opportunities,” International Journal of Information Management, no. 45, pp. 246–249,
2019.
62. J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, “Brain–
computer interfaces for communication and control,” Clinical neurophysiology, vol. 113, no. 6,
pp. 767–791, 2002.
63. F. Schiliro, N. Moustafa, and A. Beheshti, “Cognitive privacy: Ai-enabled privacy using eeg
signals in the internet of things,” in 2020 IEEE 6th International Conference on Dependability
in Sensor, Cloud and Big Data Systems and Application (DependSys). IEEE, 2020, pp. 73–79.
64. Emotiv, “Brain data measuring hardware and software solutions,” http://www.emotiv.com/.
65. NeuroSky, “Biosensors technology,” http://neurosky.com/.
66. N. M. Incorporated, “Natus,” https://natus.com/.
67. Mindmaze, “Neurotechnology company,” https://www.mindmaze.com/.
68. M. Capogrosso, T. Milekovic, D. Borton, F. Wagner, E. M. Moraud, J.-B. Mignardot, N. Buse,
J. Gandar, Q. Barraud, D. Xing et al., “A brain–spine interface alleviating gait deficits after
spinal cord injury in primates,” Nature, vol. 539, no. 7628, pp. 284–288, 2016.
69. F. Yousefi, H. Kolivand, and T. Baker, “Sas-bci: a new strategy to predict image memorability
and use mental imagery as a brain-based biometric authentication,” Neural Computing and
Applications, pp. 1–15, 2020.
70. H. Wang, W. Chang, and C. Zhang, “Functional brain network and multichannel analysis for
the p300-based brain computer interface system of lying detection,” Expert Systems with Ap-
plications, vol. 53, pp. 117–128, 2016.
71. A. Sezer, Y. İnel, A. Seçkin, and U. Uluçınar, “An investigation of university students’ attention
levels in real classroom settings with neurosky’s mindwave mobile (eeg) device,” in International
Educational Technology Conference, İstanbul, Turkey, 2015, pp. 27–29.
72. R. Lievesley, M. Wozencroft, and D. Ewins, “The emotiv epoc neuroheadset: an inexpensive
method of controlling assistive technologies using facial expressions and thoughts?” Journal of
Assistive Technologies, 2011.
73. O. E. Krigolson, C. C. Williams, A. Norton, C. D. Hassall, and F. L. Colino, “Choosing muse:
Validation of a low-cost, portable eeg system for erp research,” Frontiers in neuroscience, vol. 11,
p. 109, 2017.
74. G. Ruffini, S. Dunne, E. Farrés, Í. Cester, P. C. Watts, S. Ravi, P. Silva, C. Grau, L. Fuentemilla,
J. Marco-Pallares et al., “Enobio dry electrophysiology electrode; first human trial plus wireless
electrode system,” in 2007 29th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society. IEEE, 2007, pp. 6689–6693.
75. G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, and J. R. Wolpaw, “Bci2000:
a general-purpose brain-computer interface (bci) system,” IEEE Transactions on biomedical
engineering, vol. 51, no. 6, pp. 1034–1043, 2004.
76. Y. Renard, F. Lotte, G. Gibert, M. Congedo, E. Maby, V. Delannoy, O. Bertrand, and
A. Lécuyer, “Openvibe: An open-source software platform to design, test, and use brain–
computer interfaces in real and virtual environments,” Presence: teleoperators and virtual envi-
ronments, vol. 19, no. 1, pp. 35–53, 2010.
77. A. García-Soler, U. Diaz-Orueta, R. Ossmann, G. Nussbaum, C. Veigl, C. Weiss, and K. Pecyna,
“Addressing accessibility challenges of people with motor disabilities by means of asterics: A
step by step definition of technical requirements,” in International Conference on Computers
for Handicapped Persons. Springer, 2012, pp. 164–171.
78. V. Chamola, V. Hassija, S. Gupta, A. Goyal, M. Guizani, and B. Sikdar, “Disaster and pandemic
management using machine learning: A survey,” IEEE Internet of Things Journal, 2020.
79. S. Li, A. Leider, M. Qiu, K. Gai, and M. Liu, “Brain-based computer interfaces in virtual
reality,” in 2017 IEEE 4th International Conference on Cyber Security and Cloud Computing
(CSCloud). IEEE, 2017, pp. 300–305.
80. F.-B. Vialatte, M. Maurice, J. Dauwels, and A. Cichocki, “Steady-state visually evoked poten-
tials: focus on essential paradigms and future perspectives,” Progress in neurobiology, vol. 90,
no. 4, pp. 418–438, 2010.
81. T. W. Picton, S. A. Hillyard, H. I. Krausz, and R. Galambos, “Human auditory evoked poten-
tials. i: Evaluation of components,” Electroencephalography and clinical neurophysiology, vol. 36,
pp. 179–190, 1974.
82. C. Hintermüller, C. Kapeller, G. Edlinger, and C. Guger, “Bci integration: application inter-
faces,” Brain-Computer Interface Systems-Recent Progress and Future Prospects, pp. 21–41,
2013.
83. P. Milgram, H. Takemura, A. Utsumi, and F. Kishino, “Augmented reality: A class of displays on
the reality-virtuality continuum,” in Telemanipulator and telepresence technologies, vol. 2351.
International Society for Optics and Photonics, 1995, pp. 282–292.
84. A. B. Holm, “Virtual hrm: A case of e-recruitment,” in 11th International Conference on En-
terprise Information Systems. INSTICC Press, 2009, pp. 49–68.
85. P. Cipresso, I. A. C. Giglioli, M. A. Raya, and G. Riva, “The past, present, and future of virtual
and augmented reality research: a network and cluster analysis of the literature,” Frontiers in
psychology, vol. 9, p. 2086, 2018.
86. F. Remondino and S. El-Hakim, “Image-based 3d modelling: a review,” The photogrammetric
record, vol. 21, no. 115, pp. 269–291, 2006.
87. F. Messaoudi, G. Simon, and A. Ksentini, “Dissecting games engines: The case of unity3d,” in
2015 international workshop on network and systems support for games (NetGames). IEEE,
2015, pp. 1–6.
88. A. Sanders, An introduction to Unreal Engine 4. CRC Press, 2016.
89. Z. Lv, A. Halawani, S. Feng, S. Ur Réhman, and H. Li, “Touch-less interactive augmented reality
game on vision-based wearable device,” Personal and Ubiquitous Computing, vol. 19, no. 3, pp.
551–567, 2015.
90. B. Jiang, J. Yang, Z. Lv, and H. Song, “Wearable vision assistance system based on binocular
sensors for visually impaired users,” IEEE Internet of Things Journal, vol. 6, no. 2, pp. 1375–
1383, 2018.
91. J. Carmigniani, B. Furht, M. Anisetti, P. Ceravolo, E. Damiani, and M. Ivkovic, “Augmented
reality technologies, systems and applications,” Multimedia tools and applications, vol. 51, no. 1,
pp. 341–377, 2011.
92. F. Putze, A. Vourvopoulos, A. Lécuyer, D. Krusienski, S. B. i Badia, T. Mullen, and C. Herff,
“Brain-computer interfaces and augmented/virtual reality,” Frontiers in Human Neuroscience,
vol. 14, 2020.
93. J. Allanson and J. Mariani, “Mind over virtual matter: Using virtual environments for neuro-
feedback training,” in Proceedings IEEE Virtual Reality (Cat. No. 99CB36316). IEEE, 1999,
pp. 270–273.
94. M. Alimardani, S. Nishio, and H. Ishiguro, “Brain-computer interface and motor imagery train-
ing: The role of visual feedback and embodiment,” Evolving BCI Therapy-Engaging Brain State.
Dynamics, vol. 2, p. 64, 2018.
95. R. Leeb, R. Scherer, C. Keinrath, C. Guger, and G. Pfurtscheller, “Exploring virtual environ-
ments with an eeg-based bci through motor imagery/erkundung von virtuellen welten durch
bewegungsvorstellungen mit hilfe eines eeg-basierten bci,” Biomedizinische Technik/Biomedical
Engineering, vol. 50, no. 4, pp. 86–91, 2005.
96. R. Leeb, R. Scherer, F. Lee, H. Bischof, and G. Pfurtscheller, “Navigation in virtual environ-
ments through motor imagery,” in 9th Computer Vision Winter Workshop, CVWW, vol. 4,
2004, pp. 99–108.
97. R. Leeb and G. Pfurtscheller, “Walking through a virtual city by thought,” in The 26th Annual
International Conference of the IEEE Engineering in Medicine and Biology Society, vol. 2.
IEEE, 2004, pp. 4503–4506.
98. R. Leeb, F. Lee, C. Keinrath, R. Scherer, H. Bischof, and G. Pfurtscheller, “Brain–computer
communication: motivation, aim, and impact of exploring a virtual apartment,” IEEE Trans-
actions on Neural Systems and Rehabilitation Engineering, vol. 15, no. 4, pp. 473–482, 2007.
99. R. Leeb, C. Keinrath, D. Friedman, C. Guger, R. Scherer, C. Neuper, M. Garau, A. Antley,
A. Steed, M. Slater et al., “Walking by thinking: The brainwaves are crucial, not the muscles!”
Presence: Teleoperators and Virtual Environments, vol. 15, no. 5, pp. 500–514, 2006.
100. R. Leeb, V. Settgast, D. Fellner, and G. Pfurtscheller, “Self-paced exploration of the austrian
national library through thought,” International Journal of Bioelectromagnetism, vol. 9, no. 4,
pp. 237–244, 2007.
101. T. P. Luu, Y. He, S. Nakagome, and J. L. Contreras-Vidal, “Eeg-based brain-computer interface
to a virtual walking avatar engages cortical adaptation,” in 2017 IEEE International Conference
on Systems, Man, and Cybernetics (SMC). IEEE, 2017, pp. 3054–3057.
102. R. Scherer, A. Schloegl, F. Lee, H. Bischof, J. Janša, and G. Pfurtscheller, “The self-paced
graz brain-computer interface: methods and applications,” Computational intelligence and neu-
roscience, vol. 2007, 2007.
103. R. Scherer, F. Lee, A. Schlogl, R. Leeb, H. Bischof, and G. Pfurtscheller, “Toward self-paced
brain–computer communication: navigation through virtual worlds,” IEEE Transactions on
Biomedical Engineering, vol. 55, no. 2, pp. 675–682, 2008.
104. J. Fujisawa, H. Touyama, and M. Hirose, “Eeg-based navigation of immersing virtual environ-
ment using common spatial patterns,” in 2008 IEEE Virtual Reality Conference. IEEE, 2008,
pp. 251–252.
105. D. Friedman, R. Leeb, A. Antley, M. Garau, C. Guger, C. Keinrath, A. Steed, G. Pfurtscheller,
and M. Slater, “Navigating virtual reality by thought: First steps,” in Proceedings of the 7th
Annual International Workshop on Presence, vol. 160, 2004, p. 167.
106. B. Reyhani-Masoleh and T. Chau, “Navigating in virtual reality using thought: The devel-
opment and assessment of a motor imagery based brain-computer interface,” arXiv preprint
arXiv:1912.04828, 2019.
107. A. Curtin, H. Ayaz, Y. Liu, P. A. Shewokis, and B. Onaral, “A p300-based eeg-bci for spa-
tial navigation control,” in 2012 Annual International Conference of the IEEE Engineering in
Medicine and Biology Society. IEEE, 2012, pp. 3841–3844.
108. G. H. Cattan, A. Andreev, C. Mendoza, and M. Congedo, “A comparison of mobile vr display
running on an ordinary smartphone with standard pc display for p300-bci stimulus presenta-
tion,” IEEE Transactions on Games, 2019.
109. F. Lotte, A. Van Langhenhove, F. Lamarche, T. Ernest, Y. Renard, B. Arnaldi, and A. Lécuyer,
“Exploring large virtual environments by thoughts using a brain–computer interface based on
motor imagery and high-level commands,” Presence: teleoperators and virtual environments,
vol. 19, no. 1, pp. 54–70, 2010.
110. G. Pfurtscheller, R. Leeb, C. Keinrath, D. Friedman, C. Neuper, C. Guger, and M. Slater,
“Walking from thought,” Brain research, vol. 1071, no. 1, pp. 145–152, 2006.
111. R. Ron-Angevin, A. Diaz-Estrella, and F. Velasco-Alvarez, “A two-class brain computer interface
to freely navigate through virtual worlds/ein zwei-klassen-brain-computer-interface zur freien
navigation durch virtuelle welten,” Biomedizinische Technik/Biomedical Engineering, vol. 54,
no. 3, pp. 126–133, 2009.
112. F. Velasco-Álvarez, R. Ron-Angevin, and M. J. Blanca-Mena, “Free virtual navigation using
motor imagery through an asynchronous brain–computer interface,” Presence: teleoperators and
virtual environments, vol. 19, no. 1, pp. 71–81, 2010.
113. F. Velasco-Álvarez, R. Ron-Angevin, L. da Silva-Sauer, and S. Sancha-Ros, “Audio-cued motor
imagery-based brain–computer interface: Navigation through virtual and real environments,”
Neurocomputing, vol. 121, pp. 89–98, 2013.
114. F. Lotte, Y. Renard, and A. Lécuyer, “Self-paced brain-computer interaction with virtual worlds:
A quantitative and qualitative study “out of the lab”,” 2008.
115. W. McClinton, D. Caprio, D. Laesker, B. Pinto, S. Garcia, and M. Andujar, “P300-based 3d
brain painting in virtual reality,” in Extended Abstracts of the 2019 CHI Conference on Human
Factors in Computing Systems, 2019, pp. 1–6.
116. J. Gomez-Gil et al., “Brain computer interfaces, a review,” Sensors, pp. 1211–1279, 2012.
117. S. Lim, M. Yeo, and G. Yoon, “Comparison between concentration and immersion based on eeg
analysis,” Sensors, vol. 19, no. 7, p. 1669, 2019.
118. L. Jiang, C. Guan, H. Zhang, C. Wang, and B. Jiang, “Brain computer interface based 3d game
for attention training and rehabilitation,” in 2011 6th IEEE conference on industrial electronics
and applications. IEEE, 2011, pp. 124–127.
119. M. Joselli, F. Binder, E. Clua, and E. Soluri, “Mindninja: Concept, development and evaluation
of a mind action game based on eegs,” in 2014 Brazilian Symposium on Computer Games and
Digital Entertainment. IEEE, 2014, pp. 123–132.
120. J.-H. Moon, K.-H. Park, and S.-W. Lee, “Neurodrawing: Neurofeedback for enhancing attention
by drawing,” in 2016 4th International Winter Conference on Brain-Computer Interface (BCI).
IEEE, 2016, pp. 1–2.
121. E. C. Lalor, S. P. Kelly, C. Finucane, R. Burke, R. Smith, R. B. Reilly, and G. Mcdarby, “Steady-
state vep-based brain-computer interface control in an immersive 3d gaming environment,”
EURASIP Journal on Advances in Signal Processing, vol. 2005, no. 19, p. 706906, 2005.
122. B. Koo, H.-G. Lee, Y. Nam, and S. Choi, “Immersive bci with ssvep in vr head-mounted display,”
in 2015 37th annual international conference of the IEEE engineering in medicine and biology
society (EMBC). IEEE, 2015, pp. 1103–1106.
123. M. McMahon and M. Schukat, “A low-cost, open-source, bci-vr prototype for real-time signal
processing of eeg to manipulate 3d vr objects as a form of neurofeedback,” in 2018 29th Irish
Signals and Systems Conference (ISSC). IEEE, 2018, pp. 1–6.
124. H. Arora, A. P. Agrawal, and A. Choudhary, “Conceptualizing bci and ai in video games,” in
2019 International Conference on Computing, Communication, and Intelligent Systems (ICC-
CIS). IEEE, 2019, pp. 404–408.
125. D. Wen, Y. Fan, S.-H. Hsu, J. Xu, Y. Zhou, J. Tao, X. Lan, and F. Li, “Combining brain–
computer interface and virtual reality for rehabilitation in neurological diseases: A narrative
review,” Annals of physical and rehabilitation medicine, vol. 64, no. 1, p. 101404, 2021.
126. M. Y. W. Y.-J. Gao and X.-R. G. Shang-Kai, “Virtual reality rehabilitation training platform
based on brain computer interface (bci),” Chinese Journal of Biomedical Engineering, vol. 3,
2007.
127. J. E. Muñoz, L. H. Ríos, and O. A. Henao, “Low cost implementation of a motor imagery
experiment with bci system and its use in neurorehabilitation,” in Conf. Proc. IEEE Eng. Med.
Biol. Soc, 2014, pp. 1230–1233.
128. M. D. Rinderknecht, “Device for a novel hand and wrist rehabilitation strategy for stroke pa-
tients based on illusory movements induced by tendon vibration,” in 2012 16th IEEE Mediter-
ranean Electrotechnical Conference. IEEE, 2012, pp. 926–931.
129. R. Ortner, D.-C. Irimia, J. Scharinger, and C. Guger, “A motor imagery based brain-computer
interface for stroke rehabilitation.” Annual Review of Cybertherapy and Telemedicine, vol. 181,
pp. 319–323, 2012.
130. Z. Y. Chin, K. K. Ang, C. Wang, and C. Guan, “Navigation in a virtual environment using mul-
ticlass motor imagery brain-computer interface,” in 2013 IEEE Symposium on Computational
Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB). IEEE, 2013, pp. 152–157.
131. J. Legény, R. V. Abad, and A. Lécuyer, “Navigating in virtual worlds using a self-paced ssvep-
based brain–computer interface with integrated stimulation and real-time feedback,” Presence,
vol. 20, no. 6, pp. 529–544, 2011.
132. X. Zeng, G. Zhu, L. Yue, M. Zhang, and S. Xie, “A feasibility study of ssvep-based passive
training on an ankle rehabilitation robot,” Journal of healthcare engineering, vol. 2017, 2017.
133. X. Zhang, G. Xu, J. Xie, M. Li, W. Pei, and J. Zhang, “An eeg-driven lower limb rehabilita-
tion training system for active and passive co-stimulation,” in 2015 37th Annual International
Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). IEEE, 2015,
pp. 4582–4585.
134. M. de Tommaso, K. Ricci, M. Delussi, A. Montemurno, E. Vecchio, A. Brunetti, and V. Bevilac-
qua, “Testing a novel method for improving wayfinding by means of a p3b virtual reality visual
paradigm in normal aging,” Springerplus, vol. 5, no. 1, pp. 1–12, 2016.
135. E. Tidoni, M. Abu-Alqumsan, D. Leonardis, C. Kapeller, G. Fusco, C. Guger, C. Hintermüller,
A. Peer, A. Frisoli, F. Tecchia et al., “Local and remote cooperation with virtual and robotic
agents: a p300 bci study in healthy and people living with spinal cord injury,” IEEE Transactions
on Neural Systems and Rehabilitation Engineering, vol. 25, no. 9, pp. 1622–1632, 2016.
136. I. A. Tarnanas, N. Laskaris, and M. Tsolaki, “On the comparison of vr-responses, as performance
measures in prospective memory, with auditory p300 responses in mci detection.” 2012.
137. I. Käthner, A. Kübler, and S. Halder, “Rapid p300 brain-computer interface communication
with a head-mounted display,” Frontiers in Neuroscience, vol. 9, p. 207, 2015.
138. C. P. Amaral, M. A. Simões, S. Mouga, J. Andrade, and M. Castelo-Branco, “A novel brain
computer interface for classification of social joint attention in autism and comparison of 3
experimental setups: a feasibility study,” Journal of neuroscience methods, vol. 290, pp. 105–
115, 2017.
139. A. T. Vourvopoulos, C. Jorge, R. Abreu, P. Figueiredo, J.-C. Fernandes, and S. Bermúdez i Ba-
dia, “Efficacy and brain imaging correlates of an immersive motor imagery bci-driven vr system
for upper limb motor rehabilitation: A clinical case report,” Frontiers in Human Neuroscience,
vol. 13, p. 244, 2019.
140. F. Parivash, L. Amuzadeh, and A. Fallahi, “Design expanded bci with improved efficiency for
vr-embedded neurorehabilitation systems,” in 2017 Artificial Intelligence and Signal Processing
Conference (AISP). IEEE, 2017, pp. 230–235.
141. A. Biasiucci, R. Leeb, I. Iturrate, S. Perdikis, A. Al-Khodairy, T. Corbet, A. Schnider,
T. Schmidlin, H. Zhang, M. Bassolino et al., “Brain-actuated functional electrical stimulation
elicits lasting arm motor recovery after stroke,” Nature communications, vol. 9, no. 1, pp. 1–13,
2018.
142. G. Heisenberg, Y. Rezaei, T. Rothdeutsch, and W. Heiden, “Arm prosthesis simulation on a
virtual reality l-shaped workbench display system using a brain computer interface,” Journal of
Pain Management, vol. 9, no. 3, pp. 205–214, 2016.
143. M. Wairagkar, I. Zoulias, V. Oguntosin, Y. Hayashi, and S. Nasuto, “Movement intention based
brain computer interface for virtual reality and soft robotics rehabilitation using novel autocor-
relation analysis of eeg,” in 2016 6th IEEE International Conference on Biomedical Robotics
and Biomechatronics (BioRob). IEEE, 2016, pp. 685–685.
144. J. Chun, B. Bae, and S. Jo, “Bci based hybrid interface for 3d object control in virtual reality,”
in 2016 4th International Winter Conference on Brain-Computer Interface (BCI). IEEE, 2016,
pp. 1–4.
145. A. Aamer, A. Esawy, O. Swelam, T. Nabil, A. Anwar, and A. Eldeib, “Bci integrated with vr
for rehabilitation,” in 2019 31st International Conference on Microelectronics (ICM). IEEE,
2019, pp. 166–169.
146. C. Escolano, J. M. Antelis, and J. Minguez, “A telepresence mobile robot controlled with a
noninvasive brain–computer interface,” IEEE Transactions on Systems, Man, and Cybernetics,
Part B (Cybernetics), vol. 42, no. 3, pp. 793–804, 2011.
147. A. Lenhardt and H. Ritter, “An augmented-reality based brain-computer interface for robot
control,” in International Conference on Neural Information Processing. Springer, 2010, pp.
58–65.
148. S. M. T. Müller, T. F. Bastos-Filho, and M. Sarcinelli-Filho, “Using a ssvep-bci to command a
robotic wheelchair,” in 2011 IEEE International Symposium on Industrial Electronics. IEEE,
2011, pp. 957–962.
149. N. Martens, R. Jenke, M. Abu-Alqumsan, C. Kapeller, C. Hintermüller, C. Guger, A. Peer,
and M. Buss, “Towards robotic re-embodiment using a brain-and-body-computer interface,” in
2012 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2012, pp.
5131–5132.
150. E. Correa-Agudelo, A. M. Hernandez, C. Ferrin, and J. D. Gomez, “Vilimbs: Improving phan-
tom limb treatment through multisensory feedback,” in Proceedings of the 33rd Annual ACM
Conference Extended Abstracts on Human Factors in Computing Systems, 2015, pp. 1313–1318.
151. T. Lampe, L. D. Fiederer, M. Voelker, A. Knorr, M. Riedmiller, and T. Ball, “A brain-computer
interface for high-level remote control of an autonomous, reinforcement-learning-based robotic
system for reaching and grasping,” in Proceedings of the 19th international conference on Intel-
ligent User Interfaces, 2014, pp. 83–88.
152. T. Blum, R. Stauder, E. Euler, and N. Navab, “Superman-like x-ray vision: Towards brain-
computer interfaces for medical augmented reality,” in 2012 IEEE International Symposium on
Mixed and Augmented Reality (ISMAR). IEEE, 2012, pp. 271–272.
153. L. Angrisani, P. Arpaia, N. Moccaldi, and A. Esposito, “Wearable augmented reality and brain
computer interface to improve human-robot interactions in smart industry: A feasibility study
for ssvep signals,” in 2018 IEEE 4th International Forum on Research and Technology for Society
and Industry (RTSI). IEEE, 2018, pp. 1–5.
154. H. Si-Mohammed, J. Petit, C. Jeunet, F. Argelaguet, F. Spindler, A. Evain, N. Roussel,
G. Casiez, and A. Lécuyer, “Towards bci-based interfaces for augmented reality: Feasibility,
design and evaluation,” IEEE transactions on visualization and computer graphics, 2018.
155. L. Angrisani, P. Arpaia, A. Esposito, and N. Moccaldi, “A wearable brain-computer interface
instrument for augmented reality-based inspection in industry 4.0,” IEEE Transactions on In-
strumentation and Measurement, 2019.
156. K. Takano, N. Hata, and K. Kansaku, “Towards intelligent environments: an augmented reality–
brain–machine interface operated with a see-through head-mount display,” Frontiers in neuro-
science, vol. 5, p. 60, 2011.
157. Google. Glass enterprise edition 2: faster and more helpful. [Online]. Available:
https://blog.google/products/devices-services/glass-enterprise-edition-2/