Review
Navigation Systems for the Blind and Visually
Impaired: Past Work, Challenges, and Open Problems
Santiago Real * and Alvaro Araujo
B105 Electronic Systems Lab, ETSI Telecomunicación, Universidad Politécnica de Madrid, Avenida
Complutense 30, 28040 Madrid, Spain
* Correspondence: [email protected]; Tel.: +34-91-0672-244
Received: 18 July 2019; Accepted: 30 July 2019; Published: 2 August 2019
Abstract: Over the last decades, the development of navigation devices capable of guiding the blind
through indoor and/or outdoor scenarios has remained a challenge. In this context, this paper’s
objective is to provide an updated, holistic view of this research, in order to enable developers to
exploit the different aspects of its multidisciplinary nature. To that end, previous solutions will
be briefly described and analyzed from a historical perspective, from the first “Electronic Travel
Aids” and early research on sensory substitution or indoor/outdoor positioning, to recent systems
based on artificial vision. Thereafter, user-centered design fundamentals are addressed, including
the main points of criticism of previous approaches. Finally, several technological achievements
are highlighted as they could underpin future feasible designs. In line with this, smartphones and
wearables with built-in cameras will then be indicated as potentially feasible options with which to
support state-of-the-art computer vision solutions, thus allowing for both the positioning and monitoring
of the user’s surrounding area. These functionalities could then be further boosted by means of remote
resources, leading to cloud computing schemas or even remote sensing via urban infrastructure.
Keywords: assisting systems; navigation systems; perception; situation awareness; visually impaired
1. Introduction
Recent studies on global health estimate that 217 million people suffer from visual impairment, and
36 million from blindness [1]. Those affected have their autonomy jeopardized in many everyday
tasks, with the emphasis placed on those that involve moving through an unknown environment.
Generally, individuals rely primarily on vision to know their own position and direction in the
environment, recognizing numerous elements in their surroundings, as well as their distribution and
relative location. Those tasks are usually grouped under the categories of “orientation” or “wayfinding,”
while the capability to detect and avoid nearby obstacles relates to “mobility.” A lack of vision heavily
hampers the performance of such tasks, requiring a conscious effort to integrate perceptions from the
remaining sensory modalities, memories, or even verbal descriptions. Past work described this as a
“cognitive collage” [2].
In this regard, a navigation system's purpose is to provide users with the data required and/or helpful
to reach a destination point, monitoring their position on previously modeled maps. As we will see,
researchers working in this field have yet to find effective, efficient, safe, and cost-effective technical
solutions for both the outdoor and indoor guidance needs of blind and visually impaired people.
Nevertheless, in recent years, we have seen unprecedented scientific and technical improvements,
and new tools are now at our disposal to face this challenge. Thus, this study was undertaken
to re-evaluate the perspective of navigation systems for the blind and visually impaired (BVI)
in this new context, attempting to integrate key elements of what is frequently a disaggregated
multidisciplinary background.
Given the purpose of this work, its content and structure differ from recent reviews on the same
topic (e.g., [3,4]). Section 2 presents a historical overview that gathers together previous systems in
order to present a novel survey of the principles, key points, strategies, rules, and approaches of
assistive device design that are currently applicable. This is particularly important in the field of
non-visual human-machine interfaces, as the underlying perceptual and cognitive processes remain the same. Next,
Section 3, on related innovation fields, reviews several representative devices to introduce a set of
technical resources that are yet to be fully exploited, e.g., remote processing techniques, simultaneous
localization and mapping (SLAM), wearable haptic displays, etc. Finally, Sections 4 and 5 include
a brief introduction to user-centered design approaches, and a discussion of the currently available
technical resources, respectively.
though it took a long time, the skin of the back could not handle the "vibratory-image" resolution [14],
and the large amount of data captured in outdoor tests easily overloaded the user.
Despite its limitations, this project led to a number of similar systems, culminating in BrainPort,
a version currently available on the market [15]. This device is based on a tongue electrotactile
interface: tactile stimuli are artificially induced with surface currents on the tongue, targeting
the afferent nerves of each area. Some years later (2016), a study evaluated the functional
performance of BrainPort in profoundly blind individuals [16], with encouraging results in
object recognition and basic orientation and mobility tasks.
Most subsequent visual-tactile sensory substitution devices kept mapping point-for-point camera
images into haptic replicas by means of mechanical elements or electrotactile interfaces (e.g., Forehead
Retina System [17], HamsaTouch [18]). On the other hand, systems like the Electro-Neural Vision
System (ENVS) or Haptic Radar [19], which will both be further described in later sections, focused on
providing users with distance measurements from nearby obstacles so as to give them a rough, yet
intuitive, notion of their surroundings.
Conversely, visual-auditory sensory substitution has experienced far more improvements over
the years. The first devices developed were Kay's Sonic Torch [20] and Sonic Guide [21]. These
devices shifted the reflected sonar signal spectrum into the hearing range, leaving the task of feature
recognition through sound up to the user. Some of them could even identify elements such as poles,
vegetation, etc. Again, large amounts of data overloaded the user, a fact that led to
solutions such as using a narrower beam width to attenuate background noise.
Later visual-auditory designs focused on mapping images and/or proximity sensor readings into
sounds in a way that could be easily deciphered by the brain. Leaving aside projects like UMS’s
NAVI [22] or Hokkaido University’s mobility aid, which tried to enhance the Sonic Guide original
design by replicating the echolocation of bats [23], one of the most well-known projects was Peter B. L.
Meijer’s vOICe [24].
Throughout the years since its development, vOICe has been studied from different perspectives,
from achieved visual acuity [25] to its potential when integrated with 3D cameras. Also, some
user interviews revealed what seems to be acquired synesthesia, as users reported recovering visual
perceptions such as depth [26].
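As documented for vOICe [24] and echoed by the cross-modal associations noted later in this text, the mapping scans the image from left to right, encoding vertical position as pitch and brightness as loudness. The following Python sketch illustrates the principle only; the scan time, frequency range, and normalization chosen here are illustrative assumptions, not the actual vOICe parameters.

```python
import numpy as np

def image_to_sound(image, duration_s=1.0, fs=22050, f_min=500.0, f_max=5000.0):
    """Rough sketch of a vOICe-style visual-to-auditory mapping.

    `image` is a 2D grayscale array (uint8). It is scanned column by column
    from left to right; each pixel contributes a sinusoid whose frequency
    grows with the pixel's height in the image and whose amplitude grows
    with its brightness. Parameter values are illustrative.
    """
    rows, cols = image.shape
    samples_per_col = int(duration_s * fs / cols)
    t = np.arange(samples_per_col) / fs
    freqs = np.linspace(f_min, f_max, rows)[::-1]        # top row -> highest pitch
    out = np.zeros(cols * samples_per_col)
    for c in range(cols):
        col = image[:, c].astype(float) / 255.0           # brightness -> amplitude
        segment = (col[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        out[c * samples_per_col:(c + 1) * samples_per_col] = segment
    peak = np.max(np.abs(out))
    return out / peak if peak > 0 else out                # normalized mono waveform
```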
Lastly, another approach exemplified by La Laguna University’s Virtual Acoustic Space [27]
resorts to human hearing to recognize some 3D space characteristics from room reverberation, sound
tone, etc. By means of stereo-vision 3D recording and head-related transfer function (HRTF) processed
sounds, the device could reproduce virtual sound sources located over the captured surfaces through
the user’s headphones. As the researchers stated: “the musical effect of hearing this stimulus could be
described as perceiving a large number of raindrops striking the surface of a pane of glass.” Later tests
showed how blind subjects could make use of these sounds to build a basic schematic diagram of their
surroundings [27].
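The rendering principle described above lends itself to a compact illustration: each captured 3D point emits a short "raindrop" burst that is filtered with the head-related impulse responses (HRIRs) of its direction and attenuated with distance. The sketch below assumes an external `hrir_lookup` function backed by a measured HRTF database; the burst length, scene duration, and attenuation law are illustrative choices, not those of Virtual Acoustic Space.

```python
import numpy as np

def spatialize_points(points, hrir_lookup, fs=44100, click_ms=5):
    """Sketch of Virtual-Acoustic-Space-style rendering.

    `points` is an iterable of (azimuth_deg, elevation_deg, distance_m);
    `hrir_lookup(az, el)` is assumed to return a (left, right) pair of
    impulse responses for that direction, e.g., from a measured HRTF set.
    Returns a stereo buffer with one spatialized burst per point.
    """
    click = np.hanning(int(fs * click_ms / 1000.0))       # short broadband burst
    length = int(0.5 * fs)                                 # half-second scene
    scene = np.zeros((length, 2))
    for az, el, dist in points:
        hrir_l, hrir_r = hrir_lookup(az, el)
        delay = int(fs * dist / 343.0)                     # propagation delay
        if delay >= length:
            continue
        gain = 1.0 / max(dist, 0.5) ** 2                   # distance attenuation
        for ch, hrir in enumerate((hrir_l, hrir_r)):
            sig = gain * np.convolve(click, hrir)
            end = min(delay + len(sig), length)
            scene[delay:end, ch] += sig[:end - delay]
    peak = np.max(np.abs(scene))
    return scene / peak if peak > 0 else scene
```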
enhanced GNSS positioning for BVI people guidance. For example, PERNASVIP’s technical objectives
included locating “visually disabled pedestrians in urban environments within a 4-m accuracy, 95% of
the time, with less than 15 s of the time to first fix.” Regrettably, mainly due to multipath errors in some
urban areas, these specifications were only partially achieved.
As can be seen, locating technology became the backbone of navigation systems. Therefore, because
of the limited coverage of GNSS—e.g., indoor signal obstruction—and the accumulated error of inertial
navigation, complementary systems were needed to keep track of users along their route.
Some of the preferred solutions were networks consisting of:
• Ultrasound transmitters: As an illustrative case, the University of Florida’s Drishti project [38]
(2004) applied this kind of technology to BVI people guidance, combining differential GPS
outdoors and an ultrasound transmitter infrastructure indoors. As for the latter, a mean error of
approximately 10 cm and 22 cm maximum, was observed. However, the accuracy may be easily
degraded due to signal obstruction, reflection, etc.
• Optical transmitters: By 2001, researchers from Tokyo University developed a BVI guidance
system made of optical beacons, which were installed in a hospital [39]. The transmitters
were positioned on the ceiling, with each one sending an identification code associated with its
position. The equipment carried by the users read the code in range and then reproduced recorded
messages accordingly. Another system worth mentioning belongs to the National University of
Singapore [40] (~2004). This time the position was inferred by means of fluorescent lights, each of
which had its own code to identify the illuminated area. As can be seen, this line of work
has similar features to those of Li-Fi.
• RFID tags: Many of the technical solutions for positioning services were based on an infrastructure
of beacons, be they radio frequency, infrared, etc. However, the subsequent costs of installation and
maintenance, or their rigidity against changes in the environment (e.g., furniture rearrangement),
were points against their implementation. To make up for these problems, RFID tag networks
were proposed. Whereas active tags usually cost tens of dollars, passive tags cost only
tens of cents. Also, as batteries are dispensed with, the network lifetime increases while maintenance
costs are lowered, thus making them attractive solutions for locating systems. Even though their
range only covers a few meters, range measuring techniques based on received signal strength
(RSS), received signal phase (RSP), or time of arrival (TOA) could be applied [41] (a minimal RSS
ranging sketch follows this list). However, the estimation of the user's position is usually that of the
tag in range. As an example of this line of work, the University of Utah launched an indoor Robotic
Guide project for the visually impaired in 2003 [42]. One year later, their prototype collected positioning
data from a passive RFID network with a range of 1.5 m, effectively guiding test users along a 40-m
route. By 2005, their installation in shopping carts was proposed [43]. In line with this, the PERCEPT
project [44] provided acoustic guidance messages by means of a deployment of passive RFID tags and
an RFID reader embedded in a glove. RFID positioning would be widely adopted in the following years,
becoming one of the classic solutions. Nevertheless, the applications are not limited to this
area; for example, RFID tags were also found suitable for searching for or identifying distant objects [45].
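As referenced in the list above, RSS-based ranging typically relies on a log-distance path-loss model; the following minimal sketch shows the idea. The reference power and path-loss exponent are chosen purely for illustration, since in practice both must be calibrated per tag and per environment.

```python
import math

def rss_to_distance(rss_dbm, rss_at_1m_dbm=-45.0, path_loss_exponent=2.0):
    """Log-distance path-loss model commonly used for RSS ranging:
    RSS(d) = RSS(1 m) - 10 * n * log10(d), solved here for d.
    Reference power and exponent are illustrative assumptions.
    """
    return 10 ** ((rss_at_1m_dbm - rss_dbm) / (10.0 * path_loss_exponent))

# Example: with these parameters, a reading of -65 dBm maps to ~10 m.
print(round(rss_to_distance(-65.0), 1))
```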
From then on, most navigation systems for the BVI would resort to a combination of technologies,
which are usually classified as indoor and/or outdoor solutions. Also, they started to gather
complementary data from external sources over the network.
This can be exemplified by the schematic diagram of the SmartVision project [48] shown in
Figure 2. As illustrated there, stereo vision was applied for vision-based positioning and, in
subsequent projects, for obstacle recognition, although it again showed poor reliability,
accuracy, etc. [49]. Therefore, the locating system effectively relied on external infrastructure
(GPS, RFID, Wi-Fi). Positioning data were then combined with maps and points of interest (POI)
available on a geographic information system (GIS) server, and thereafter offered directly to users.
From then on, various indoor positioning technologies were tested, some of which were based on
Ultra-Wide Band (UWB) [50,51], passive Infrared Radiation (IR) tags [52], or Bluetooth low energy
(BLE) beacons [53] combined with inertial sensors [54], and even some that exploited the magnetic
signature of a building's steel frame [55]. Among them, UWB technology stands out mainly because
of its sub-meter accuracy (e.g., 15–20 cm in [50]) and its robustness to multipath interference, an issue
inherent to both indoor and outdoor positioning. However, navigation through indoor scenarios
usually does not require sub-meter accuracy, due to the similar patterns between scenarios, the reduced
set of potentially hazardous elements, or the reduced size of the environment, all of which ease
orientation and mobility tasks.
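To illustrate how such ranging technologies yield a position fix, the sketch below performs a linearized least-squares multilateration from TOA range estimates to fixed anchors. It is a didactic example under idealized, noise-free assumptions, not a description of the systems cited above, which also filter outliers and fuse consecutive estimates.

```python
import numpy as np

def toa_position(anchors, distances):
    """Linearized least-squares multilateration from TOA range estimates.

    `anchors` is an (N, 2) array of known beacon coordinates and `distances`
    the N measured ranges. Subtracting the first anchor's equation from the
    others removes the quadratic terms, leaving a linear system A x = b.
    """
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    x0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0 ** 2 - d[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(x0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Synthetic check: four anchors in a 10 m x 10 m room, user at (3, 4).
anchors = [(0, 0), (10, 0), (0, 10), (10, 10)]
true_pos = np.array([3.0, 4.0])
ranges = [np.linalg.norm(np.array(a) - true_pos) for a in anchors]
print(toa_position(anchors, ranges))   # ~[3. 4.]
```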
Nevertheless, as navigation systems continued their development, and the amount of information
collected for blind navigation grew larger, the need for efficient user interfaces became even
more apparent.
Several classic solutions involved speech, beginning with recorded messages (e.g., Guide Dog
Robot, Sonic Pathfinder); later, speech synthesis and recognition were also gradually incorporated (e.g.,
Tyflos [56]). At this point, sensory substitution became an attractive solution for blind navigation system
user interfaces, more so when the user needed the system to rapidly provide detailed information
regarding their immediate surroundings, while maintaining a low cognitive load.
In line with this, the ENVS project [57] is another representative example that conveys depth
perceptions through haptics. Again, it makes use of a pair of cameras to capture the 3D environment
and present it to the user as tactile stimuli on their fingers. Distance data were encoded in the pulse
width of electrotactile stimulation signals. If the gloves were aligned with the cameras, it seemed as if
things were being touched at a distance. Furthermore, the tests showed how this solution allowed
users to intuitively assimilate information from 10 virtual proximity sensors (Figure 3) with a relatively
low cognitive load.
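A minimal sketch of this kind of encoding is given below: distance readings from the virtual proximity sensors are clamped to a working range and mapped linearly to a stimulation pulse width, so that closer obstacles produce wider (stronger) pulses. All numeric ranges are assumptions made for the example; the ENVS publications define their own values.

```python
def distance_to_pulse_width_us(distance_m, d_min=0.3, d_max=5.0,
                               pw_min_us=50.0, pw_max_us=500.0):
    """Illustrative encoding of obstacle distance into the pulse width of an
    electrotactile stimulation signal: close obstacles yield wide pulses,
    far ones narrow pulses. All numeric ranges are assumed for the example.
    """
    d = min(max(distance_m, d_min), d_max)
    span = (d_max - d) / (d_max - d_min)      # 1.0 at d_min, 0.0 at d_max
    return pw_min_us + span * (pw_max_us - pw_min_us)

# Ten virtual proximity sensors, one per finger, each mapped to a pulse width.
readings_m = [0.8, 1.2, 2.5, 4.0, 5.0, 5.0, 3.1, 1.9, 0.9, 0.5]
pulse_widths = [distance_to_pulse_width_us(d) for d in readings_m]
```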
By 2005, the device incorporated a built-in GPS and compass to allow for outdoor guidance [58].
Orientation data were passed on to the user through the electrotactile gloves, overlapping the
distance-encoding signals.
residual vision by enhancing 3D perceptions with simplified images emphasizing depth (Figure 4) [64].
They recently tried to access the market with their Smart Specs glasses [65], through the start-up VA-ST.
Alternatively, mixed reality allows users to interact with virtual elements overlaid on
their actual surroundings, thus providing intuitive cues about orientation, the distance to objects,
their shapes, etc.
The usage of virtual sound sources to guide pedestrians along a route is one of the classic solutions
seen in projects like UCSB PGS, or even Haptic Radar. The latter combined its original IR-based obstacle
avoidance system with virtual sound guidance, which resulted in positive after-test appraisals [66].
Nevertheless, some criticisms and suggestions were made, mainly in relation to the area covered by
the IR sensors and the vibrational interface.
Also, virtual sounds could not only be applied for guidance, but also for at least several tasks that
involved 3D enhanced perception, as previously seen in Virtual Acoustic Space.
Aside from solutions based on sound, virtual tactile elements were also studied, albeit apparently
less. The Virtual Haptic Radar project [67], originating from Haptic Radar, is a representative example.
It replaced its predecessor's IR sensors with the combination of a three-dimensional model of the
surroundings plus an ultrasonic-based motion capture system worn by the user. As described in Figure 5,
once the user reached a certain area near the object, warning vibrations were triggered accordingly.
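The triggering logic can be sketched as follows: the tracked position of the wearer is checked against the 3D model of the surroundings (represented here as a point cloud), and vibration intensity ramps up as the nearest modeled surface enters a warning radius. The radius and the linear ramp are assumptions made for illustration only.

```python
import numpy as np

def vibration_level(user_pos, obstacle_points, warn_radius_m=1.0):
    """Sketch of a Virtual-Haptic-Radar-style trigger: vibration intensity
    grows from 0.0 (at the warning radius) to 1.0 (on contact) as the
    nearest point of the environment model approaches the wearer.
    """
    dists = np.linalg.norm(np.asarray(obstacle_points) - np.asarray(user_pos), axis=1)
    nearest = dists.min()
    if nearest >= warn_radius_m:
        return 0.0
    return 1.0 - nearest / warn_radius_m
```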
However, one of the main problems hampering tactile-based solutions is the haptic interfaces
available. Most portable designs seem to resort to mechanical components, thus causing a conflict
Figure 6. Lazzus (a); HamsaTouch (b).
Nevertheless, the focus of attention was placed on GNSS-based outdoor navigation. Next, some
representative examples of available applications are briefly described:
• Moovit [75]: a free, effective, and easy-to-use tool that offers guidance on the public transport
network, managing schedules, notifications, and even warnings in real time. It is one of the assets
for mobility tasks recommended by ONCE (National Organization of Spanish Blind People).
• BlindSquare [76]: specifically designed for the BVI, this application conveys the relative
location of previously recorded POIs through speech. It makes use of Foursquare’s and
OpenStreetMap’s databases.
• Lazzus [77]: a paid application, again designed for BVI users, which coordinates GPS and built-in
motion capture and orientation sensors to provide users with intuitive cues about the location
of diverse POIs in the surrounding area, even including zebra crossings. It offers two modes of
operation: the 360° mode verbally informs of the distance and orientation to nearby POIs, whereas
the beam mode describes any POI in a virtual field of view in front of the smartphone. Its main
sources of data are Google Places and OpenStreetMap.
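The beam mode described above reduces to a simple geometric test: compute the bearing from the user's position to each POI and announce only those whose bearing lies within a virtual field of view centered on the device heading. The sketch below uses an equirectangular approximation and an assumed 60° aperture, both of which are illustrative simplifications rather than the application's actual parameters.

```python
import math

def poi_in_beam(user_lat, user_lon, heading_deg, poi_lat, poi_lon, fov_deg=60.0):
    """Check whether a POI falls inside a virtual field of view centered on
    the smartphone's heading. Uses an equirectangular approximation, which
    is adequate for the short distances involved in pedestrian guidance.
    """
    d_lat = math.radians(poi_lat - user_lat)
    d_lon = math.radians(poi_lon - user_lon) * math.cos(math.radians(user_lat))
    bearing = math.degrees(math.atan2(d_lon, d_lat)) % 360.0   # 0 deg = north
    offset = (bearing - heading_deg + 180.0) % 360.0 - 180.0   # signed difference
    return abs(offset) <= fov_deg / 2.0
```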
Some of these functionalities are also shared by an increasing number of commercially available
applications, each with specific characteristics and improvements. For example, Seeing Eye GPS [78]
includes solutions analogous to the 360° and beam modes of Lazzus, plus pre-journey information; Nearby
Explorer offers several POI notification filters, etc.
3.3. Wearables
So far, bone conduction headphones and smart glasses with a built-in camera have mainly been
used for BVI mobility support. Furthermore, as the size and cost of sensors and microprocessors further
decreased, and given the advantages of wearable devices, the development of designs specifically
aimed at this population has gradually gained momentum.
Some of the main points in favor of wearable designs include the sensors’ wider field-of-view, the
usage of immersive user interfaces, or users’ request for discreet, hands-free solutions. In Figure 7,
some strategic placements of these sensors and interfaces are shown, including a few examples of
market-available products.
Firstly, regarding the sensors’ field-of-view, some devices rely on the user to scan their surroundings,
whereas others resort to intermediary systems that monitor the scene. For the first strategy, the aim
was therefore to look for placements that eased "scanning movements," placing sensors on the
wrist (Figure 7B), the head (Figure 7A), or embedded in the cane (Figure 7C). Specifically, systems
corresponding with Figure 7B,C tended to imitate the features of the first ETAs. This was exemplified
by Ultracane, SmartCane (Figure 7C) or Sunu-band [79] (Figure 7B), as all of them offered obstacle
detection functionalities supported by ultrasound proximity sensors via a vibrational user interface.
On the other hand, the third category of wearables (Figure 7A) was usually seen in camera-based
sensory substitution or artificial vision systems, e.g., Seeing AI, Orcam MyEye [80], BrainPort, or
even vOICe.
Conversely, the second strategy generally opts for a wider field-of-view, thus sensors were often
positioned in relatively static and non-occlusive placements all over the torso (red dots in Figure 7).
That was the case with Toyota’s Project Blaid [81], a camera-based, inverted-U-shaped wearable that
rested on the user’s shoulders. Among its functionalities, it pursued object and face recognition, with
an emphasis placed on elements related to mobility such as stairs, signals, etc.
Regarding user interfaces, speech and Braille made up the first solutions for acoustic and tactile
verbal interfaces, coupled with headphones and braille displays. As an example, Figure 7B shows the
“Dot” braille smartwatch.
Other kinds of solutions strived for a reduced cognitive load by means of intuitive guidance cues,
usually exploiting the innate space perception capabilities of touch and hearing. Many examples have
been mentioned in this text, from Virtual Acoustic Space or UCSB PGS to Haptic Radar. Non-occlusive
headphones and vibratory interfaces are some of the devices most commonly used as they benefit from
a low cost, a reduced-weight design, etc., while still being able to generate immersive perceptions such
as virtual sound sources, or the approach to tactile virtual objects, as seen initially in Haptic Radar, and
later in Virtual Haptic Radar.
This latter approach is also found in the Spatial Awareness project, based on Intel RealSense.
The developed prototype conveys distance measurements through the vibration of eight haptic
actuators distributed over the user’s torso and legs.
1. “The presence, location, and preferably the nature of obstacles immediately ahead of the traveller.”
This relates to obstacle avoidance support.
2. Data on the “path or surface on which the traveller is walking, such as texture, gradient, upcoming
steps,” etc.
3. “The position and nature of objects to the sides of the travel path,” i.e., hedges, fences,
doorways, etc.
4. Information that helps users to “maintain a straight course, notably the presence of some type of
aiming point in the distance,” e.g., distant traffic sounds.
5. “Landmark location and identification,” including those previously seen, particularly in (3).
6. Information that “allows the traveller to build up a mental map, image, or schema for the chosen
route to be followed.” This point involves the study of what is frequently termed “cognitive
mapping” in blind individuals [83].
Whilst the first ETAs were oriented to the first category of information, solutions that placed
virtual sound sources over POIs easily covered points (4) and (5), and solutions based on artificial
vision could provide data in any category.
One key factor to be aware of in this context is the theory behind the development of sensory
substitution devices, which has been mentioned throughout the text when describing the “cognitive
load” or “intuitiveness” of some user interfaces. At this point, the work in [84] is highlighted as it
introduces the basics.
In the first place, some major constraints to be considered are the difference in data throughput
capability between sensory modalities (bandwidth) and the compatibility with higher-level cognitive
processes [84]. Two respective examples of these constraints would be the overloading of touch
seen in numerous attempts to convey visual perceptions [85], and the inability to decipher visual
representations of sounds, even though vision has comparatively more ‘bandwidth’ than hearing.
Some other main factors would be the roles of synesthesia and neuroplasticity, or even how
intelligent algorithms can be used to filter the information needed in particular scenarios [84].
Once it was proven that distant elements can be recognized through perceptions induced by
sensory substitution devices of vision (Section 2.2), thus straying into the field of "distal attribution"
(e.g., [84,85]), an ambitious pursuit of general-purpose visual-tactile and visual-auditory
devices began. Several recent studies in neuroscience showed the high potential of this field [86,87], as areas
of the brain thought to be associated with visual-type tasks, e.g., involved in shape recognition, showed
activity under visually encoded auditory stimulation.
Nevertheless, given the limitations of the remaining senses to collect visual-type information, it is
usually necessary to focus on what users require to carry out specific tasks [88,89].
Lastly, the poor acceptance of past designs by their intended public should be taken into account;
a recent discussion on this topic can be found in [88]. In line with this, an aspect that was recently
taken advantage of is the growing penetration of technology in the daily routines of BVI people, with
an emphasis placed on the usage of smartphones.
Figure 8 shows the growing use of mobile phones and computers, including how many
BVI people use these devices to access the Internet, a tendency likely to continue among younger
generations. This trend is also reflected in the creation of entities such as Amovil, which promotes
the accessibility of these devices for BVI people, or the smartphone-compatible infrastructure of
London’s WayFindr [90] (similar to [91,92]), Bucharest’s Smart Public Transport [93], or Barcelona’s
NaviLens [93], which are oriented to boosting the autonomy of BVI individuals when using public
transportation. In line with this, Carnegie Mellon University’s NavCog, based on a BLE network,
recently added Pittsburgh International Airport to the list of supported locations [94].
Figure 8. Percentages of Spanish BVI users of mobile phones (blue) and computers (orange); percentage
of those who access the Internet (gray), and references to the overall population (green). Data obtained
from INE and [51] (2013).
These last factors can be exemplified by the natural cross-modal associations observed in the
project vOICe, such as volume-to-brightness and pitch-to-spatial height (see “weak synesthesia” in [98]).
This was even evident in Disney-supported research on color-vibration correspondences [99], which
came from the pursuit of more immersive experiences. Other illustrative cases include individuals
exploiting the spatially rich information of sound to extreme levels, e.g., the echolocation techniques
shown by Daniel Kish. These techniques might be reminiscent of the first ETA described in Section 1.
Another remarkable aspect to point out is the effect on distal attribution of the correspondence
between body movement and perceptions [100]. For example, in Bach-y-Rita et al.'s visual-tactile
experiments, it was observed that users needed to manipulate the camera themselves to notice the
“contingencies between motor activity and the resulting changes in tactile stimulation” [84].
The use of these proprioception correspondences might be a fundamental element in the design
of future orientation and mobility aids, given the good performance of past projects.
Several of the mentioned projects incorporate mixed-reality-type user interfaces, such as the
virtual sound sources seen in UCSB PGS and Virtual Acoustic Space, or the virtual tactile objects of
Virtual Haptic Radar. Another system worth highlighting is Lazzus, which tracks the smartphone’s
position and orientation to trigger verbal descriptions according to which direction it is being pointed
in. As seen with Talking Signs, these approaches have users’ support [101].
Nevertheless, some of these solutions are also affected by technical limitations. While
bone-conduction earphones and head motion tracking techniques are sufficient for most sound-based
applications, portable haptic interfaces are heavily constrained. Even though haptic displays such
as those commercialized by Blitab could promote tactile-map approaches, portable alternatives are
limited to vibrational interfaces. These devices by no means exploit the full capabilities of touch, thus
hampering further exploration in fields such as the application of extended touch [96] in a context of
mixed reality. However, recent advances might boost the growth of a versatile classic solution known
as “electrotactile.”
This technology, which benefits from low cost, low power consumption, and lightweight design,
encompasses a wide range of virtual perceptions. Nevertheless, it has an insufficient theoretical
foundation in terms of neural stimulation, and several designs have revealed problems related to poor
electrical contact through the skin. This could be partially compensated for by choosing placements
with more adequate electrical conditions, such as the tongue (BrainPort), or by the use of a hydrogel
for better control of the flow of the electrical current (e.g., Forehead Retina System), etc.
Nowadays, BrainPort itself is a market-available device that shows the feasibility of this
haptic technology for some applications. In addition, over the years, subsequent prototypes have
strived for various improvements, such as combining electrotactile technology with mechanical
stimuli [102,103], stabilizing the transcutaneous electrode-neuron electrical contact, e.g., with
closed-loop designs [104] or micro-needle interfaces [105,106], etc. Furthermore, the neural stimulation
theoretical basis continues to advance through research in related fields, e.g., when developing
myoelectric prostheses that provide a sense of touch via the electrical stimulation of afferent nerves.
6. Conclusions
Numerous devices have been developed to guide and assist BVI individuals along indoor/outdoor
routes. However, they have not completely met the technical requirements and user needs.
Most such unmet aspects are currently being addressed separately in several research fields,
ranging from indoor positioning, computation offloading, or distributed sensing, to the analysis of
spatial-related perceptual and cognitive processes of BVI people. On the other hand, smartphones
and similar tools are rapidly making their way into their daily routines. In this context, old and novel
solutions have become feasible, some of which are currently available in the market as smartphone
applications or portable devices.
In line with this, the present article attempts to provide a holistic, multidisciplinary view of the
research on navigation systems for this population. The feasibility of classic and new designs is then
briefly discussed according to a new architecture scheme proposal.
Author Contributions: Conceptualization, S.R. and A.A.; Methodology, S.R.; Formal Analysis, S.R.;
Writing—Original Draft Preparation, S.R.; Writing—Review & Editing, A.A.; Supervision, A.A.
Funding: This study received no external funding.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Bourne, R.R.A.; Flaxman, S.R.; Braithwaite, T.; Cicinelli, M.V.; Das, A.; Jonas, J.B.; Keeffe, J.; Kempen, J.;
Leasher, J.; Limburg, H.; et al. Magnitude, temporal trends, and projections of the global prevalence of
blindness and distance and near vision impairment: A systematic review and meta-analysis. Lancet Glob.
Health 2017, 5, e888–e897. [CrossRef]
2. Tversky, B. Cognitive Maps, Cognitive Collages, and Spatial Mental Models; Springer: Berlin/Heidelberg, Germany,
1993; pp. 14–24.
3. Tapu, R.; Mocanu, B.; Zaharia, T. Wearable assistive devices for visually impaired: A state of the art survey.
Pattern Recognit. Lett. 2018. [CrossRef]
4. Elmannai, W.; Elleithy, K. Sensor-based assistive devices for visually-impaired people: Current status,
challenges, and future directions. Sensors 2017, 17, 565. [CrossRef] [PubMed]
5. Working Group on Mobility Aids for the Visually Impaired and Blind; Committee on Vision. Electronic
Travel Aids: New Directions for Research; National Academies Press: Washington, DC, USA, 1986;
ISBN 978-0-309-07791-0.
6. Benjamin, J.M. The laser cane. Bull. Prosthet. Res. 1974, 443–450.
7. Russel, L. Travel Path Sounder. In Proceedings of the Rotterdam Mobility Research Conference; American
Foundation for the Blind: New York, NY, USA, 1965.
8. Armstrong, J.D. Summary Report of the Research Programme on Electronic Mobility Aids; University of Nottingham:
Nottingham, UK, 1973.
9. Pressey, N. Mowat sensor. Focus 1977, 11, 35–39.
10. Heyes, A.D. The Sonic Pathfinder—A new travel aid for the blind. In High Technology Aids for the Disabled;
Elsevier: Edinburgh, UK, 1983; pp. 165–171.
11. Maude, D.R.; Mark, M.U.; Smith, R.W. AFB’s Computerized Travel Aid: Two Years of Research. J. Vis. Impair.
Blind. 1983, 77, 71, 74–75.
12. Collins, C.C. On Mobility Aids for the Blind. In Electronic Spatial Sensing for the Blind; Springer: Dordrecht,
The Netherlands, 1985; pp. 35–64.
13. Collins, C.C. Tactile Television-Mechanical and Electrical Image Projection. IEEE Trans. Man-Mach. Syst.
1970, 11, 65–71. [CrossRef]
14. Rantala, J. Spatial Touch in Presenting Information with Mobile Devices; University of Tampere:
Tampere, Finland, 2014.
15. BrainPort, Wicab. Available online: https://www.wicab.com/brainport-vision-pro (accessed on 29 July 2019).
16. Grant, P.; Spencer, L.; Arnoldussen, A.; Hogle, R.; Nau, A.; Szlyk, J.; Nussdorf, J.; Fletcher, D.C.; Gordon, K.;
Seiple, W. The Functional Performance of the BrainPort V100 Device in Persons Who Are Profoundly Blind.
J. Vis. Impair. Blind. 2016, 110, 77–89. [CrossRef]
17. Kajimoto, H.; Kanno, Y.; Tachi, S. Forehead electro-tactile display for vision substitution. In Proceedings of
the EuroHaptics, Paris, France, 3–6 July 2006.
18. Kajimoto, H.; Suzuki, M.; Kanno, Y. HamsaTouch: Tactile Vision Substitution with Smartphone and
Electro-Tactile Display. In Proceedings of the 32nd Annual ACM Conference on Human Factors in
Computing Systems: Extended Abstracts, Toronto, ON, Canada, 26 April–1 May 2014; pp. 1273–1278.
19. Cassinelli, A.; Reynolds, C.; Ishikawa, M. Augmenting spatial awareness with haptic radar. In Proceedings
of the 10th IEEE International Symposium on Wearable Computers (ISWC 2006), Montreux, Switzerland,
11–14 October 2006; pp. 61–64.
20. Kay, L. An ultrasonic sensing probe as a mobility aid for the blind. Ultrasonics 1964, 2, 53–59. [CrossRef]
21. Kay, L. A sonar aid to enhance spatial perception of the blind: Engineering design and evaluation. Radio
Electron. Eng. 1974, 44, 605. [CrossRef]
22. Sainarayanan, G.; Nagarajan, R.; Yaacob, S. Fuzzy image processing scheme for autonomous navigation of
human blind. Appl. Soft Comput. J. 2007, 7, 257–264. [CrossRef]
23. Ifukube, T.; Sasaki, T.; Peng, C. A blind mobility aid modeled after echolocation of bats. IEEE Trans. Biomed.
Eng. 1991, 38, 461–465. [CrossRef] [PubMed]
24. Meijer, P.B.L. An Experimental System for Auditory Image Representations. IEEE Trans. Biomed. Eng. 1992,
39, 112–121. [CrossRef] [PubMed]
25. Haigh, A.; Brown, D.J.; Meijer, P.; Proulx, M.J. How well do you see what you hear? The acuity of
visual-to-auditory sensory substitution. Front. Psychol. 2013, 4. [CrossRef] [PubMed]
26. Ward, J.; Meijer, P. Visual experiences in the blind induced by an auditory sensory substitution device.
Conscious. Cognit. 2010, 19, 492–500. [CrossRef] [PubMed]
27. Gonzalez-Mora, J.L.; Rodriguez-Hernandez, A.F.; Burunat, E.; Martin, F.; Castellano, M.A. Seeing the world
by hearing: Virtual Acoustic Space (VAS) a new space perception system for blind people. In Proceedings of
the 2006 2nd International Conference on Information & Communication Technologies, Damascus, Syria,
24–28 April 2006; Volume 1, pp. 837–842.
28. Hersh, M.A.; Johnson, M.A. Assistive Technology for Visually Impaired and Blind People; Springer: London, UK,
2008; ISBN 9781846288661.
29. Ultracane. Available online: https://www.ultracane.com/ (accessed on 29 July 2019).
30. Tachi, S.; Komoriya, K. Guide dog robot. In Autonomous Mobile Robots: Control, Planning, and Architecture;
Mechanical Engineering Laboratory: Ibaraki, Japan, 1985; pp. 360–367.
31. Borenstein, J. The guidecane—A computerized travel aid for the active guidance of blind pedestrians.
In Proceedings of the 1997 International Conference on Robotics and Automation (ICRA 1997), Albuquerque, NM,
USA, 20–25 April 1997; IEEE: Piscataway, NJ, USA; Volume 2, pp. 1283–1288.
32. Shoval, S.; Borenstein, J.; Koren, Y. Mobile robot obstacle avoidance in a computerized travel aid for the
blind. In Proceedings of the 1994 IEEE International Conference on Robotics and Automation, San Diego,
CA, USA, 8–13 May 1994; pp. 2023–2028.
33. Loomis, J.M. Digital Map and Navigation System for the Visually Impaired; Department of Psychology, University
of California-Santa Barbara; Unpublished work; 1985.
34. Loomis, J.M.; Golledge, R.G.; Klatzky, R.L.; Marston, J.R. Assisting wayfinding in visually impaired travelers.
In Applied Spatial Cognition: From Research to Cognitive Technology; Lawrence Erlbaum Associates, Inc.:
Mahwah, NJ, USA, 2007; pp. 179–203.
35. Crandall, W.; Bentzen, B.L.; Myers, L.; Brabyn, J. New orientation and accessibility option for persons with
visual impairment: Transportation applications for remote infrared audible signage. Clin. Exp. Optom. 2001,
84, 120–131. [CrossRef]
36. Loomis, J.M.; Klatzky, R.L.; Golledge, R.G. Auditory Distance Perception in Real, Virtual, and Mixed
Environments. In Mixed Reality; Springer: Berlin/Heidelberg, Germany, 1999; pp. 201–214.
37. PERNASVIP—Final Report. 2011. Available online: pernasvip.di.uoa.gr/DELIVERABLES/D14.doc (accessed
on 1 August 2019).
38. Ran, L.; Helal, S.; Moore, S. Drishti: An Integrated Indoor/Outdoor Blind Navigation System and Service.
In Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications,
Orlando, FL, USA, 17–17 March 2004.
39. Harada, T.; Kaneko, Y.; Hirahara, Y.; Yanashima, K.; Magatani, K. Development of the navigation system for
visually impaired. In Proceedings of the 26th Annual International Conference of the IEEE Engineering in
Medicine and Biology Society, San Francisco, CA, USA, 1–5 September 2004; pp. 4900–4903.
40. Cheok, A.D.; Li, Y. Ubiquitous interaction with positioning and navigation using a novel light sensor-based
information transmission system. Pers. Ubiquitous Comput. 2008, 12, 445–458. [CrossRef]
41. Bouet, M.; Dos Santos, A.L. RFID tags: Positioning principles and localization techniques. In Proceedings of
the 1st IFIP Wireless Days, Dubai, UAE, 24–27 November 2008; pp. 1–5.
42. Kulyukin, V.; Nicholson, J. RFID in Robot-Assisted Indoor Navigation for the Visually Impaired.
In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS),
Sendai, Japan, 28 September–2 October 2004; Volume 2, pp. 1979–1984.
43. Kulyukin, V.; Gharpure, C.; Nicholson, J. RoboCart: Toward robot-assisted navigation of grocery stores by
the visually impaired. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and
Systems, Edmonton, AB, Canada, 2–6 August 2005; pp. 2845–2850.
44. Ganz, A.; Schafer, J.; Gandhi, S.; Puleo, E.; Wilson, C.; Robertson, M. PERCEPT Indoor Navigation System for
the Blind and Visually Impaired: Architecture and Experimentation. Int. J. Telemed. Appl. 2012. [CrossRef]
45. Lanigan, P.; Paulos, A.; Williams, A.; Rossi, D.; Narasimhan, P. Trinetra: Assistive Technologies for Grocery
Shopping for the Blind. In Proceedings of the 2006 10th IEEE International Symposium on Wearable
Computers, Montreux, Switzerland, 11–14 October 2006; pp. 147–148.
46. Hub, A.; Diepstraten, J.; Ertl, T. Design and development of an indoor navigation and object identification
system for the blind. In Proceedings of the 6th International ACM SIGACCESS Conference on Computers
and Accessibility, Atlanta, GA, USA, 18–20 October 2004; pp. 147–152.
47. Hub, A.; Hartter, T.; Ertl, T. Interactive tracking of movable objects for the blind on the basis of environment
models and perception-oriented object recognition methods. In Proceedings of the Eighth International
ACM SIGACCESS Conference on Computers and Accessibility, Portland, OR, USA, 23–25 October 2006;
pp. 111–118.
48. Fernandes, H.; Costa, P.; Filipe, V.; Hadjileontiadis, L.; Barroso, J. Stereo vision in blind navigation assistance.
In Proceedings of the World Automation Congress, Kobe, Japan, 19–23 September 2010; pp. 1–6.
49. Fernandes, H.; Costa, P.; Paredes, H.; Filipe, V.; Barroso, J. Integrating Computer Vision Object Recognition with
Location Based Services for the Blind; Springer: Cham, Switzerland, 2014; pp. 493–500.
50. Martinez-Sala, A.S.; Losilla, F.; Sánchez-Aarnoutse, J.C.; García-Haro, J. Design, implementation and
evaluation of an indoor navigation system for visually impaired people. Sensors 2015, 15, 32168–32187.
[CrossRef]
51. Riehle, T.H.; Lichter, P.; Giudice, N.A. An indoor navigation system to support the visually impaired.
In Proceedings of the 2008 30th Annual International Conference of the IEEE Engineering in Medicine and
Biology Society, Vancouver, BC, Canada, 20–25 August 2008; pp. 4435–4438.
52. Legge, G.E.; Beckmann, P.J.; Tjan, B.S.; Havey, G.; Kramer, K.; Rolkosky, D.; Gage, R.; Chen, M.; Puchakayala, S.;
Rangarajan, A. Indoor Navigation by People with Visual Impairment Using a Digital Sign System. PLoS ONE
2013, 8, 14–15. [CrossRef]
53. Ahmetovic, D.; Gleason, C.; Ruan, C.; Kitani, K. NavCog: A navigational cognitive assistant for the blind.
In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices
and Services (MobileHCI ’16), Florence, Italy, 6–9 September 2016; pp. 90–99.
54. Murata, M.; Ahmetovic, D.; Sato, D.; Takagi, H.; Kitani, K.M.; Asakawa, C. Smartphone-Based Indoor
Localization for Blind Navigation across Building Complexes. In Proceedings of the 2018 IEEE International
Conference on Pervasive Computing and Communications (PerCom), Athens, Greece, 19–23 March 2018;
pp. 1–10.
55. Giudice, N.A.; Whalen, W.E.; Riehle, T.H.; Anderson, S.M.; Doore, S.A. Evaluation of an Accessible, Free
Indoor Navigation System by Users Who Are Blind in the Mall of America. J. Vis. Impair. Blind. 2019, 113,
140–155. [CrossRef]
56. Dakopoulos, D. Tyflos: A Wearable Navigation Prototype for Blind & Visually Impaired; Design, Modelling and
Experimental Results; Wright State University and OhioLINK: Dayton, OH, USA, 2009.
57. Meers, S.; Ward, K. A vision system for providing 3D perception of the environment via transcutaneous
electro-neural stimulation. In Proceedings of the Eighth International Conference on Information Visualisation,
London, UK, 16–16 July 2004; pp. 546–552.
58. Meers, S.; Ward, K. A Substitute Vision System for Providing 3D Perception and GPS Navigation via
Electro-Tactile Stimulation. In Proceedings of the International Conference on Sensing Technology, Palmerston
North, New Zealand, 21–23 November 2005; pp. 551–556.
59. Zöllner, M.; Huber, S.; Jetter, H.-C.; Reiterer, H. NAVI—A Proof-of-Concept of a Mobile Navigational Aid for Visually
Impaired Based on the Microsoft Kinect; Human-Computer Interaction—INTERACT; Springer: Berlin/Heidelberg,
2011; pp. 584–587.
60. Zhang, H.; Ye, C. An Indoor Wayfinding System Based on Geometric Features Aided Graph SLAM for the
Visually Impaired. IEEE Trans. Neural Syst. Rehabilit. Eng. 2017, 25, 1592–1604. [CrossRef] [PubMed]
61. Li, B.; Munoz, J.P.; Rong, X.; Chen, Q.; Xiao, J.; Tian, Y.; Arditi, A.; Yousuf, M. Vision-based Mobile Indoor
Assistive Navigation Aid for Blind People. IEEE Trans. Mob. Comput. 2019, 18, 702–714. [CrossRef] [PubMed]
62. Jafri, R.; Campos, R.L.; Ali, S.A.; Arabnia, H.R. Visual and Infrared Sensor Data-Based Obstacle Detection
for the Visually Impaired Using the Google Project Tango Tablet Development Kit and the Unity Engine.
IEEE Access 2017, 6, 443–454. [CrossRef]
63. Neto, L.B.; Grijalva, F.; Maike, V.R.M.L.; Martini, L.C.; Florencio, D.; Baranauskas, M.C.C.; Rocha, A.;
Goldenstein, S. A Kinect-Based Wearable Face Recognition System to Aid Visually Impaired Users. IEEE Trans.
Hum.-Mach. Syst. 2017, 47, 52–64. [CrossRef]
64. Hicks, S.L.; Wilson, I.; Muhammed, L.; Worsfold, J.; Downes, S.M.; Kennard, C. A Depth-Based Head-Mounted
Visual Display to Aid Navigation in Partially Sighted Individuals. PLoS ONE 2013, 8, e67695. [CrossRef]
[PubMed]
65. VA-ST Smart Specs—MIT Technology Review. Available online: https://www.technologyreview.com/s/
538491/augmented-reality-glasses-could-help-legally-blind-navigate/ (accessed on 29 July 2019).
66. Cassinelli, A.; Sampaio, E.; Joffily, S.B.; Lima, H.R.S.; Gusmo, B.P.G.R. Do blind people move more confidently
with the Tactile Radar? Technol. Disabil. 2014, 26, 161–170. [CrossRef]
67. Zerroug, A.; Cassinelli, A.; Ishikawa, M. Virtual Haptic Radar. In Proceedings of the ACM SIGGRAPH ASIA
2009 Sketches, Yokohama, Japan, 16–19 December 2009.
68. Fundación Vodafone España. Acceso y uso de las TIC por las personas con discapacidad; Fundación Vodafone
España: Madrid, España, 2013. Available online: http://www.fundacionvodafone.es/publicacion/acceso-y-
uso-de-las-tic-por-las-personas-con-discapacidad (accessed on 1 August 2019).
69. Apostolopoulos, I.; Fallah, N.; Folmer, E.; Bekris, K.E. Integrated online localization and navigation for
people with visual impairments using smart phones. In Proceedings of the International Conference on
Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 1322–1329.
70. BrainVisionRehab. Available online: https://www.brainvisionrehab.com/ (accessed on 29 July 2019).
71. EyeMusic. Available online: https://play.google.com/store/apps/details?id=com.quickode.eyemusic&hl=en
(accessed on 29 July 2019).
72. The vOICe. Available online: https://www.seeingwithsound.com/ (accessed on 29 July 2019).
73. Microsoft Seeing AI. Available online: https://www.microsoft.com/en-us/ai/seeing-ai (accessed on
29 July 2019).
74. TapTapSee—Smartphone application. Available online: http://taptapseeapp.com/ (accessed on 29 July 2019).
75. Moovit. Available online: https://company.moovit.com/ (accessed on 29 July 2019).
76. BlindSquare. Available online: http://www.blindsquare.com/about/ (accessed on 29 July 2019).
77. Lazzus. Available online: http://www.lazzus.com/en/ (accessed on 29 July 2019).
78. Seeing AI GPS. Available online: https://www.senderogroup.com/ (accessed on 29 July 2019).
79. Sunu Band. Available online: https://www.sunu.com/en/index.html (accessed on 29 July 2019).
80. Orcam MyEye. Available online: https://www.orcam.com/en/myeye2/ (accessed on 29 July 2019).
81. Project Blaid. Available online: https://www.toyota.co.uk/world-of-toyota/stories-news-events/toyota-
project-blaid (accessed on 29 July 2019).
82. Schinazi, V. Representing Space: The Development, Content and Accuracy of Mental Representations by the
Blind and Visually Impaired. Ph.D. Thesis, University College, London, UK, 2008.
83. Ungar, S. Cognitive Mapping without Visual Experience. In Cognitive Mapping: Past Present and Future;
Routledge: London, UK, 2000; pp. 221–248.
84. Loomis, J.M.; Klatzky, R.L.; Giudice, N.A. Sensory substitution of vision: Importance of perceptual and
cognitive processing. In Assistive Technology for Blindness and Low Vision; CRC Press: Boca Ratón, FL, USA,
2012; pp. 162–191.
85. Spence, C. The skin as a medium for sensory substitution. Multisens. Res. 2014, 27, 293–312. [CrossRef]
86. Maidenbaum, S.; Abboud, S.; Amedi, A. Sensory substitution: Closing the gap between basic research and
widespread practical visual rehabilitation. Neurosci. Biobehav. Rev. 2014, 41, 3–15. [CrossRef]
87. Proulx, M.J.; Brown, D.J.; Pasqualotto, A.; Meijer, P. Multisensory perceptual learning and sensory substitution.
Neurosci. Biobehav. Rev. 2014, 41, 16–25. [CrossRef]
88. Giudice, N.A. Navigating without Vision: Principles of Blind Spatial Cognition; Handbook of Behavioral
and Cognitive Geography; Edward Elgar Publishing: Cheltenham, UK; Northampton, MA, USA, 2018;
pp. 260–288.
89. Giudice, N.A.; Legge, G.E. Blind Navigation and the Role of Technology. In Engineering Handbook of Smart
Technology for Aging, Disability, and Independence; John Wiley & Sons: Hoboken, NJ, USA, 2008; pp. 479–500.
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).