Developing A Flexible Navigational Assistive Application For The Visually Impaired and Blind Using Depth Imaging Analysis
Abstract: When travelling, visually impaired and blind people (VIB) usually rely on others for assistance, which limits their independence. Assistive devices have been developed to aid blind navigation, but current devices lack portability and flexibility, creating an inconvenient travelling experience for the VIB. Thus, this paper proposes a simple mobile application that assists navigation through cognitive and spatial awareness by implementing obstacle and object detection, with other assistive features serving as safety functions. It provides a user-friendly interface with gesture-based inputs combined with audio accessibility features that the VIB can easily use. The proposed system uses depth imaging analysis to calculate the distance to an obstacle, warning users of obstacles within 1.6 meters. The object detection also includes a unique interactable feature that enables interactivity in finding objects by providing audio and vibration feedback, giving VIBs an independent means of locating their desired objects with 83% detection accuracy.
Keywords: Visually impaired and blind people (VIBs), mobile application, depth imaging analysis, object detection
1. Introduction
The number of visually impaired and blind people (VIB) has been increasing at an alarming rate, to the point that international organizations, including the World Health Organization (WHO), highlight it as a serious concern [1, 2]. In 2020, more than 200 million individuals were estimated to have some level of visual impairment, and the number of those with moderate to severe vision impairment is projected to reach a staggering 550 million people by 2050 [3]. There are numerous causes for the increase in visual impairment and blindness, which can be attributed to different sources such as genetics, accidents, diseases, or the ageing population trend in both developing and developed countries. The severity of visual impairment and blindness varies between genders, and a country's economic income also shapes how it responds to this crisis; inclusive access to research and development and to medication could have prevented or reduced the worst cases of this disability [4].
Visually impaired and blind people face difficulties in their daily lives because they lack the ability to perceive visual information. This limits their capability to process their surroundings and interact with society, hindering their day-to-day activities and decreasing their quality of life (QoL). VIBs are therefore most in need of assistance with their mobility constraints, which requires the adoption of assistive devices and accessible infrastructure [5]. Assistive technologies for the VIBs have gradually evolved over years of development, and electronic travel aids for the blind have been developed by researchers globally in response to these complications.
However, using the current assistive technologies comes with challenges, as flexibility and portability remain an issue due to hardware and usability limitations [6]. Traditional assistive devices such as eyeglasses, tactile symbols, magnifiers, and the walking cane are still introduced to and used by the visually impaired and blind, as they allow them to get by with basic daily tasks. However, these devices are focused on only one aspect, and their assistive capability can be further expanded by exploring and applying smart sensing technologies [7].

In this paper, the researchers developed an application that caters to both cognitive and spatial awareness using an embedded 3D depth camera on a smartphone, which analyses depth images to process and calculate distances for obstacle and object detection. An inclusive user interface is developed for the VIBs to navigate with ease using both gesture and voice commands, with additional assistive features to call for help and open a map for safe navigation.

2. Related Studies

The inconvenience of current assistive devices for the VIBs has always been of great interest to researchers around the world, as some developed technology has proven unportable, too technical, or impractical to use. As technology continuously develops, these limitations are slowly being improved. Existing studies have examined different problem aspects in terms of spatial awareness, cognitive awareness, and inclusive user interface design, which are discussed in Sections 2.1 to 2.3.

2.1 Spatial awareness

Assistive devices are continuously evolving and expanding with the rapid development of technology and the extensive research to improve the QoL of the VIBs. Earlier research on assistive technologies focused mainly on mobility and orientation, cognitive and context awareness, obstacle detection, and similar problems. See, et al. developed a personal wearable assistive device (PAD) with a modular architecture based on a robot operating system (ROS) that detects and warns about obstacles using an Intel RealSense camera, contributing significantly to the inclusivity and independence of the VIBs in outdoor navigation scenarios [8]. Navigation is a huge concern for the VIBs, as safety concerns arise from the lack of visual information. Obstacles, terrain, and overall environmental inconsistencies must be taken into account when travelling from one destination to another. Sighted people perceive spatial awareness through a sense of direction, acquiring their specific location and relative position [9]. VIBs lack the capability to do so; hence, assistive navigation technologies for spatial awareness are developed.

A study by Fernandes, et al. developed a multi-module navigational assistance system for blind people that generates landmarks from various points of interest which adjust based on the environment. Orientation and location are provided to the user by means of audio feedback and vibration actuators, allowing non-intrusive and reliable navigation. However, the research is still in progress and actual testing is yet to be delivered [10]. A study by Patra, et al. utilized an eBox 2300TM connected to a USB camera, an ultrasonic sensor, and headphones to detect obstacles up to 300 cm away through ultrasound-based distance measurement. An additional human presence algorithm detects face, skin, and clothing within 120 cm [11]. The developed module shows promising results, detecting obstacles with 95.45% accuracy, but the sensor unit is mounted on a helmet while the user carries the eBox 2300TM weighing about 500 g, which compromises portability and flexibility. Research by Li, et al. developed an application called ISANA on the Google Tango tablet, taking advantage of the embedded RGB-D camera that provides depth information and enables navigation and novel way-point path finding. A smart cane was also developed to handle interaction between the tablet and the user, providing vibration feedback and tactile input whenever an obstacle is detected [12]. The developed application is a robust system that also implements semantic map construction and a multi-modal user interface; however, support for the Google Tango tablet was shut down in 2018 in favour of a newer augmented reality system [13].

2.2 Cognitive awareness
Visually impaired and blind people struggle to partake in education due to the lack of visual information, although specialized educational materials do exist for learning. Tactile materials such as braille books, audio books, screen readers, refreshable displays, and many more are used to teach in blind schools [14]. However, perceiving and acquiring common daily objects on their own is a limitation that makes them dependent on other people. Independence is a valuable aspect for the VIB, as it reduces social stigma such as overly helpful individuals and other misconceptions that society assumes [15]. Cognitive awareness allows one to be aware of the surrounding environment and enables interaction with objects by utilizing different senses and reasoning [16]. This is an asset that VIBs struggle to achieve on their own; however, with an adaptive assistive technology that focuses on a cognitive solution, independence can be achieved.

A study by Joshi, et al. developed an efficient multi-object detector trained on a deep learning model with a custom dataset on a YOLO v3 framework installed on a DSP processor with a camera and distance sensor, capturing different angles and lighting conditions to achieve a real-time detection accuracy of 99.69% [17]. The developed system is a robust assistive detection device that provides broad capability through its integration of artificial intelligence, although system maintenance appears difficult, as each object has to be manually established in the system. Research by Rahman, et al. developed automated object recognition through Internet of Things enabled devices such as a Pi camera, GPS module, Raspberry Pi, accelerometer, and more. Objects are detected through the installed laser sensors with a single-shot-detector model and are assigned to different directions such as front, left, right, and ground [18]. The developed system has an accuracy of 99.31% in detecting objects and 98.43% in recognizing their type, although it is currently limited to five types of objects, and its size and weight can be further reduced in future work.

2.3 Inclusive User Interface Design

The demand for integrating inclusive usability into technology has escalated as these gadgets become a necessary day-to-day means of communication, labor, and entertainment for society. Additionally, according to the World Health Organization, the number of people with disabilities is expected to increase over the years due in part to an ageing population and a rise in chronic health conditions [19]. A means of adaptability can bridge the complications disabled people face in using technology that already exists, and this is what developers term accessibility features. Accessibility features have been around for a long time and are not necessarily meant only for disabled people; they are a set of options that can be activated for convenience, preference, or easier navigation for those who cannot use a smartphone in its default setup. In an article by Kriti, she states that accessible design capitalizes on ease of use for all levels of ability, so that anyone can utilize it, with the specific goal of inclusivity for all kinds of users [20]. Different forms of accessibility, such as touch, visual, hearing, and speaking, are available to cater to permanent, temporary, or situational needs. Examples of accessibility in touch include the fingerprint scanner, assistive touch, and even the capability to adjust button sizes. When it comes to visuals, increasing contrast, reducing motion, enlarging text, using the magnifier, inverting colors, turning on subtitles and captions, and even guided access are available for personal use. For hearing and speaking, accessibility features such as VoiceOver or text-to-speech are in popular use. These features are mostly available on smartphones and other similar devices.

With the availability of a smartphone's built-in features such as VoiceOver, text-to-speech, haptic feedback, screen magnifier, larger text size, inverted colors, etc., an inclusive user interface can be constructed, although designing a unified interface that accommodates the VIBs' special needs requires extensive research, especially on user experience design (UXD), to provide the best user experience and usability. There are already existing assistive mobile applications for the blind and visually impaired, such as a study by Nayak, et al. that aims to solve the difficulty of appointing schedules, writing emails, and reading SMS on a smartphone completely based on voice commands [21]. However, the usability of that application is limited by it being solely voice-command-based navigation, creating the possibility of complications with the pronunciation and audibility of the commands.
3. Materials and Methods

Developing a system is a rigorous task; it involves many factors that must work together as a whole. System development starts with a clear overall objective for the project and then expands with all the details included to ensure that every part of the system works well and serves a purpose aligned with the project's goal [22]. The objective of this research is to develop a safe, portable, and flexible navigational solution that caters to both the cognitive and spatial awareness of the VIBs.

The smartphone selected for this research is the Samsung Galaxy A80 (Samsung Corporation, Seoul, South Korea) running the Android 11 operating system with a Qualcomm Snapdragon 730G processor, 8 GB of RAM, a 6.7" Infinity display, and a 48 MP + 8 MP rotating rear camera that supports 3D depth estimation with an embedded Time-of-Flight (ToF) sensor. The developed user interface for the mobile application combines different smartphone accessibility features that help users with vision impairment through different gestures and voice commands. The main workflow of the system is shown in Figure 1, where the process starts with specific voice commands to open the features. The grey area shows the three main components: object detection, obstacle detection, and the other assistive features.

Figure 1. The proposed system workflow: users (left), mobile application interface (middle), cloud back-end component (right)

The object detection feature utilizes the TensorFlow Lite framework with a custom-trained COCO SSD MobileNet model and an additional unique interactable feature for VIBs to use. The obstacle detection takes advantage of the depth map generated by Google's ARCore, which is planted with coordinates in multiple directions to locate nearby obstacles; depth map generation performance is further improved with the help of the ToF sensor. Other assistive features are implemented as risk management for emergency situations, enabling the user to call for help or open a map within the application. For all of the features, the output is vibration and audio feedback. The application requires Android 7 or later, as the minimum SDK version is 24. Figure 3 shows the proposed wearable system, where the smartphone is inserted into a portable, foldable fabric sling bag attached around the chest of the user.

Figure 3. Proposed wearable mobility assistive system using an Android smartphone (right) and SEER sling (left)

3.1 Obstacle Detection Integration

Figure 2. Original image (left), generated depth map at 1.6 m (middle), depth map with planted coordinates at different locations (right)
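Before these planted coordinates can be sampled, a depth image has to be obtained from ARCore's Depth API. The Kotlin fragment below is a minimal sketch assuming the public ARCore SDK, with session handling and error cases simplified; it is illustrative only and not the authors' exact implementation.

```kotlin
// Sketch: enabling ARCore's Depth API and reading the latest depth image.
// Method names follow the public ARCore SDK; lifecycle handling is omitted.
import android.media.Image
import com.google.ar.core.Config
import com.google.ar.core.Frame
import com.google.ar.core.Session

fun enableDepth(session: Session) {
    val config = Config(session)
    if (session.isDepthModeSupported(Config.DepthMode.AUTOMATIC)) {
        config.depthMode = Config.DepthMode.AUTOMATIC   // uses the ToF sensor when available
    }
    session.configure(config)
}

fun latestDepthImage(frame: Frame): Image? =
    try {
        frame.acquireDepthImage16Bits()   // DEPTH16: the low 13 bits of each pixel are millimetres
    } catch (e: Exception) {
        null                              // depth not yet available for this frame
    }
```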
The planted coordinates work together in detecting distances, enabling the different combinations shown in Table 1. The number of coordinates for each direction is listed; only one coordinate per direction is required to activate a combined direction such as full right, full left, full ground, or full body obstruction. The output for detecting obstacles is an audio warning based on the direction of the obstacle; a minimal sketch of this threshold check is given after Table 1.
Table 1. List of directions and combinations of obstacles detected within 1.6 m, with the number of coordinates per direction and the audio warning feedback

Direction                Combination                 Coordinates    Audio warning
Full left obstruction    Left Torso + Left Ground    10             "Full Left"
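The following is a minimal sketch of that threshold check; the coordinate positions, direction names, and DEPTH16 decoding are illustrative assumptions rather than the exact SEER implementation.

```kotlin
// Illustrative sketch: checks depth values at pre-planted pixel coordinates
// against the 1.6 m warning threshold described above.

const val WARN_DISTANCE_MM = 1600            // 1.6 m threshold from the paper

// Hypothetical grouping of planted coordinates (pixel positions) per direction.
val plantedCoordinates: Map<String, List<Pair<Int, Int>>> = mapOf(
    "Left Torso"  to listOf(40 to 90, 60 to 110),
    "Left Ground" to listOf(40 to 200, 60 to 220),
    "Right Torso" to listOf(200 to 90, 220 to 110)
)

// depthMm holds one DEPTH16-style value (millimetres) per pixel, row-major.
fun directionsWithinRange(depthMm: ShortArray, width: Int): Set<String> =
    plantedCoordinates.filterValues { points ->
        points.any { (x, y) ->
            val mm = depthMm[y * width + x].toInt() and 0x1FFF   // low 13 bits = depth in mm
            mm in 1..WARN_DISTANCE_MM
        }
    }.keys

// Combines single directions into composite warnings such as those in Table 1.
fun audioWarning(triggered: Set<String>): String? = when {
    "Left Torso" in triggered && "Left Ground" in triggered -> "Full Left"
    triggered.isNotEmpty() -> triggered.first()
    else -> null
}
```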
As previously stated, generating the depth map requires only a single moving RGB camera, although a performance improvement can be achieved by using a dedicated depth-sensing technology such as the ToF sensor, which instantly provides a depth map without calibrating the camera motion. ToF works by measuring the travel time of a light source emitted onto an object and bounced back to the camera, as shown in Figure 4, giving the equation
distance = (speed of light × time) / 2  [24]
Figure 4. Time-of-Flight concept. Measuring the travel time of IR light from the camera to the target and back [24]
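For illustration, the relation can be checked numerically; the round-trip time below is a hypothetical value, not a measurement from the paper.

```kotlin
// Worked example of distance = (speed of light × travel time) / 2.
const val SPEED_OF_LIGHT_M_PER_S = 299_792_458.0

fun tofDistanceMetres(roundTripSeconds: Double): Double =
    SPEED_OF_LIGHT_M_PER_S * roundTripSeconds / 2.0

fun main() {
    // A round-trip time of about 10.7 ns corresponds to a target roughly 1.6 m away.
    println(tofDistanceMetres(10.7e-9))   // ≈ 1.60
}
```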
3.2 Object Detection Integration
The object detection module utilizes the TensorFlow Lite framework, an on-device inference engine for various machine learning models that is typically applied on mobile and IoT devices. For the proposed system, a custom COCO SSD MobileNet v2 model is trained specifically to address the VIBs' daily struggle of finding desired objects, covering over 90 classes such as walking cane, handbag, umbrella, tie, eyeglasses, and hat, as well as street assets such as signage, traffic lights, benches, and pedestrian lanes. Figure 5 shows the user interface for the voice command (left image) and the object detection result with confidence values (right image).
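One possible way to wire such a model into an Android app is through the TensorFlow Lite Task Library, sketched below; the model asset name, score threshold, and result limit are assumptions for illustration rather than the exact SEER configuration.

```kotlin
// A minimal sketch of on-device detection with the TensorFlow Lite Task Library.
import android.content.Context
import android.graphics.Bitmap
import org.tensorflow.lite.support.image.TensorImage
import org.tensorflow.lite.task.vision.detector.Detection
import org.tensorflow.lite.task.vision.detector.ObjectDetector

class SeerObjectDetector(context: Context) {
    private val detector: ObjectDetector = ObjectDetector.createFromFileAndOptions(
        context,
        "coco_ssd_mobilenet_v2.tflite",             // hypothetical asset name
        ObjectDetector.ObjectDetectorOptions.builder()
            .setMaxResults(5)                        // report at most 5 objects per frame
            .setScoreThreshold(0.5f)                 // drop low-confidence detections
            .build()
    )

    /** Returns label/confidence pairs for objects found in a camera frame. */
    fun detect(frame: Bitmap): List<Pair<String, Float>> =
        detector.detect(TensorImage.fromBitmap(frame)).map { d: Detection ->
            val top = d.categories.first()
            top.label to top.score
        }
}
```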
Figure 5. Object detection module. Voice command interface (left), object detection result
(right)
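The voice command entry point shown in Figure 5 could be realized with Android's built-in speech recognizer; the snippet below is a hypothetical sketch of launching recognition and reading the best transcript, not the application's actual code.

```kotlin
// Launching Android's built-in speech recognizer and reading the top result.
import android.app.Activity
import android.content.Intent
import android.speech.RecognizerIntent

const val REQUEST_SPEECH = 42   // arbitrary request code

fun startListening(activity: Activity) {
    val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH)
        .putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                  RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
        .putExtra(RecognizerIntent.EXTRA_PROMPT, "Say a command, e.g. \"find bottle\"")
    activity.startActivityForResult(intent, REQUEST_SPEECH)
}

// Call from onActivityResult: returns the best transcript, or null if unavailable.
fun bestTranscript(requestCode: Int, resultCode: Int, data: Intent?): String? =
    if (requestCode == REQUEST_SPEECH && resultCode == Activity.RESULT_OK)
        data?.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS)?.firstOrNull()
    else null
```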
Figure 6. Object detection unique interactable feature that activates whenever gesture input
within the bounding box is detected.
Table 2. List of different touch gestures and voice commands with their functions and feedback
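The interactable behaviour of Figure 6 and the gesture mappings of Table 2 can be pictured with a short sketch: when a touch lands inside a detected bounding box, the app vibrates and speaks the label. The wiring below uses standard Android APIs but is a hypothetical simplification, not the authors' code.

```kotlin
// Sketch of the interactable feature: vibrate and speak when a touch falls inside
// a detected object's bounding box.
import android.graphics.RectF
import android.os.VibrationEffect
import android.os.Vibrator
import android.speech.tts.TextToSpeech

fun onTouch(x: Float, y: Float, boxes: Map<String, RectF>, vibrator: Vibrator, tts: TextToSpeech) {
    // Find the first detection whose bounding box contains the touch point.
    val hit = boxes.entries.firstOrNull { it.value.contains(x, y) } ?: return
    vibrator.vibrate(VibrationEffect.createOneShot(100, VibrationEffect.DEFAULT_AMPLITUDE))
    tts.speak(hit.key, TextToSpeech.QUEUE_FLUSH, null, "object-hit")
}
```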
4. Results
The concept of the study revolves around integrating obstacle and object detection in one application to provide a portable and flexible mobility assistive solution for the VIBs. The following sections describe the testing of each module under different settings with five blindfolded individuals aged 21 to 26, observing the interaction between the application and the user during testing.
Table 3. Battery life duration evaluation based on activated connections while running SEER app
Figure 8. Depth map color distance representations. Red is < 1.7 m, yellow or green is > 1.7 m, and blue is >= 2 m

4.1 Evaluation of Obstacle Detection Module

The SEER mobile is a wearable device placed on the chest area of the user. The smartphone is placed backside vertically inside a sling cloth bag, as seen in Figure 10. During the testing, the participants were blindfolded and given a walking cane, as it is recommended to use one together with the application to determine direction and the terrain of the environment safely.

Figure 10. Blindfolded participants with walking cane to test the obstacle detection
Table 4. Obstacle detection evaluation with finish time, obstacles successfully detected & evaded, and detection failures of the application

Obstacle Detection    Finish Time    # Obstacles successfully    # Obstacles failed to be
Test Results                         detected & evaded           detected by the APP
Participant 1         4:57 mins      8                           2
Participant 2         6:53 mins      7                           4
Participant 3         4:08 mins      3                           6
Participant 4         5:58 mins      6                           3
Participant 5         5:15 mins      7                           3
The obstacle detection test results are listed in Table 4, which includes the finish time, the number of obstacles successfully detected and evaded, and the number of obstacles the application failed to detect. The five participants were asked to navigate at their own pace. Participant 1 and participant 5 finished at an average time of about five minutes; these two scored the highest number of obstacles successfully detected and evaded and the fewest obstacles missed by the application. Participants 2 and 4 walked at a slower pace, taking an average of about 6 minutes to finish the course, and their evaded and missed obstacle counts were also around the average. Participant 3 finished the course very quickly, with a total time of 4 minutes and 8 seconds; it was observed that the processing time of the application could not keep up with the user's movement, as some parts of the area take time to focus and update the distance calculation. It is therefore recommended to use the application while walking at an average pace.

4.2 Evaluation of Object Detection Module and Other Assistive Features

The object detection experiment took place indoors, where selected common objects were placed randomly on a table for the blindfolded participants to find using the unique interactable feature. Different angles, lighting, shapes, and other variables of an object were observed during the testing, which affected the detection capability, as seen in Table 5, where the average confidence score and the successful grab counts for each object are listed.

Table 5. Object detection evaluation with average confidence score and successful grab counts for each object

Objects       Average Confidence Score    Successful grab counts for each participant [max of 5]
AC Remote     72.31%                      4
Bottle        96.59%                      5
Cup           92.01%                      5
Scissors      65.23%                      2
Mouse         86.72%                      4
The confidence scores and grab counts for the bottle and cup were the highest, as their positions and overall visibility were consistent throughout the testing, followed by the mouse and AC remote, which scored lower because they are rather flat objects and were sometimes misinterpreted. The scissors scored the lowest, with an average confidence score of 65.23%, as they lie very flat on the table, can change shape, and their metal parts reflected light toward the camera, complicating detection. This limitation can be addressed by training a more robust object detection model that accounts for image variables such as angle and lighting.
The voice command and text-to-speech capabilities of the other assistive features were tested to evaluate their reliability during an emergency situation. The participants tried to navigate to a desired place, set an emergency contact, and call an emergency contact using voice commands and gestures only; the results are shown in Figure 11, with success rates in blue and error rates in red. Navigating to a desired place via voice command experienced difficulties, as some place names are hard to pronounce and the speech-to-text system misinterprets the words. Similarly, when setting an emergency contact and calling a contact, some names were misspelled. The emergency feature scored a 100% success rate throughout the testing, as the command is easy for users to say.
Figure 11. Other assistive feature evaluation. Blue as success rates and red as error rates for the
voice command
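As a hedged sketch of how the call-for-help and open-map actions could be issued with standard Android intents, the snippet below is illustrative; the emergency number handling and destination query are placeholders, not the exact SEER behaviour.

```kotlin
// Launching the dialer and a maps application via standard Android intents.
import android.content.Context
import android.content.Intent
import android.net.Uri

fun callEmergencyContact(context: Context, number: String) {
    // ACTION_DIAL opens the dialer without requiring the CALL_PHONE permission.
    context.startActivity(Intent(Intent.ACTION_DIAL, Uri.parse("tel:$number")))
}

fun openMapTo(context: Context, place: String) {
    // A geo URI with a query lets the installed maps app resolve the destination.
    context.startActivity(Intent(Intent.ACTION_VIEW, Uri.parse("geo:0,0?q=" + Uri.encode(place))))
}
```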
5. Discussion
During the development of the system, the spectrum of vision loss was researched extensively, and it was found that perceiving and adapting to information differs depending on the severity of blindness [25]. For example, individuals who are born blind have no perception of colors and, at first, no basic concept of shapes; thus, they need more supervision compared with those whose vision was gradually lost or worsened over time, as the latter already have the basic concepts in mind. Navigating independently can still be very stressful; hence, assistive devices like the SEER mobile are developed. After the experiment, the participants were asked to evaluate their experience using the SEER mobile; the survey results are shown in Table 6. Feedback gathered during the obstacle detection experiment indicated that the device was very lightweight and comfortable to use. The audio warning was helpful, but some users found the looping audio warning annoying to listen to at times. The overall assistive capability of the obstacle detection is relatively safe, but caution and the use of a walking cane are always recommended, as blind spots still exist within its detection range limitation. Feedback on the object detection states that it requires some learning to use effectively.
Table 6. Overall SEER mobile evaluation
6. Limitations
The current navigational capability of the mobile application is limited to calculating the distance to any obvious material in the camera's path; it cannot recognize what the upcoming object is. In cases such as stairs, people, animals, or vehicles, it cannot warn users about the specific hazard ahead, as it simply categorizes any upcoming material within 1.6 m as an obstacle. Thus, a synchronous implementation of both object and obstacle detection could be integrated in future work. The object detection is also currently limited in the number of objects it can detect; a more robust deep learning model could be trained to include the objects VIBs most need to recognize. The application is developed as an open system into which new features can be integrated quite easily as the developer wishes, allowing easier access for future fixes and improvements.
7. Conclusion
A system was successfully developed that provides users with a portable and flexible navigational assistive device featuring obstacle detection and object detection in a single application controllable via gestures and a voice-command-enabled user interface. The obstacle detection uses depth map generation to calculate the distance to a material, treating it as an obstacle within 1.6 m, and the object detection is capable of detecting objects relevant to the VIBs with an 83% accuracy rate. Other assistive features are implemented to open a map and call for help in case of emergency situations.
Expansion of the platform is possible because of the versatile implementation of the Android system, especially within its software and hardware components. As smartphones continue to evolve over time, their capability to assist people with disabilities will improve further, possibly even leading to a new generation of assistive devices. The platform can also be further improved with the use of different technologies such as stereo imaging [26], radar [27], and LiDAR [28], though some of these are expensive or would lead to a more complicated implementation.
References
[1] World Health Organization. "Blindness and vision impairment."
https://www.who.int/news-room/fact-sheets/detail/blindness-and-visual-impairment (accessed January, 2022).
[2] J. D. Steinmetz et al., "Causes of blindness and vision impairment in 2020 and trends over 30 years, and
prevalence of avoidable blindness in relation to VISION 2020: the Right to Sight: an analysis for the Global
Burden of Disease Study," The Lancet Global Health, vol. 9, no. 2, pp. e144-e160, 2021.
[3] United Nations Conference on Trade and Development, The Impact of Rapid Technological Change on Sustainable Development.
United Nations, 2020.
[4] W. Wang, W. Yan, A. Müller, S. Keel, and M. He, "Association of Socioeconomics With Prevalence of Visual
Impairment and Blindness," JAMA Ophthalmology, vol. 135, no. 12, pp. 1295-1302, 2017, doi:
10.1001/jamaophthalmol.2017.3449.
[5] T. Litman, "Evaluating Accessibility for Transportation Planning," 2007.
[6] H. Hoenig, J. Donald H. Taylor, and F. A. Sloan, "Does Assistive Technology Substitute for Personal Assistance
Among the Disabled Elderly?," American Journal of Public Health, vol. 93, no. 2, pp. 330-337, 2003, doi:
10.2105/ajph.93.2.330.
[7] S. S. Senjam, "Assistive Technology for People with Visual Loss," The Official Scientific Journal of Delhi
Ophthalmological Society, vol. 30, pp. 7-12, 2020. [Online]. Available:
https://www.djo.org.in/articles/30/2/Assistive-Technology-for-People-with-Visual-Loss.html.
[8] A. R. See, "Development of a Modular Assistive Device for the Visually Impaired and the Blind using ROS."
[9] CogniFit. "What is Spatial Perception? Cognitive Ability."
https://www.cognifit.com/science/cognitive-skills/spatial-perception (accessed January, 2022).
[10] H. Fernandes, P. Costa, V. Filipe, L. Hadjileontiadis, and J. Barroso, "Stereo vision in blind navigation
assistance," in 2010 World Automation Congress, 2010: IEEE, pp. 1-6.
[11] A. Kumar, R. Patra, M. Manjunatha, J. Mukhopadhyay, and A. K. Majumdar, "An electronic travel aid for
navigation of visually impaired persons," in 2011 Third International Conference on Communication Systems
and Networks (COMSNETS 2011), 2011: IEEE, pp. 1-5.
[12] B. Li et al., "Vision-based mobile indoor assistive navigation aid for blind people," IEEE transactions on mobile
computing, vol. 18, no. 3, pp. 702-714, 2018.
[13] The Indian Express. "Google kills Project Tango AR platform, as focus shifts to ARCore."
https://indianexpress.com/article/technology/mobile-tabs/google-kills-project-tango-ar-project-as-focus-shifts-to-
arcore/ (accessed January, 2022).
[14] M. R. Rony, "Information Communication Technology to support and include Blind students in a school for all An
Interview study of teachers and students’ experiences with inclusion and ICT support to blind students," 2017.
[15] E. Brady, M. R. Morris, Y. Zhong, S. White, and J. P. Bigham, "Visual challenges in the everyday lives of blind
people," in Proceedings of the SIGCHI conference on human factors in computing systems, 2013, pp. 2117-2126.
[16] lumencandela. "What is cognition? Introduction to Psychology." https://courses.lumenlearning.com/wmopen-
psychology/chapter/what-is-cognition/ (accessed January, 2022).
[17] R. C. Joshi, S. Yadav, M. K. Dutta, and C. M. Travieso-Gonzalez, "Efficient Multi-Object Detection and Smart
Navigation Using Artificial Intelligence for Visually Impaired People," Entropy, vol. 22, no. 9, p. 941, 2020.
[18] M. A. Rahman and M. S. Sadi, "IoT Enabled Automated Object Recognition for the Visually Impaired," Computer
Methods and Programs in Biomedicine Update, p. 100015, 2021.
[19] World Health Organization. "Disability and health." https://www.who.int/news-room/fact-sheets/detail/disability-and-
health (accessed January, 2022).
[20] K. Krishan. "Accessibility in UX: The case for radical empathy." https://uxmag.com/articles/accessibility-in-ux-
the-case-for-radical-empathy (accessed January, 2022).
[21] S. Nayak and C. Chandrakala, "Assistive mobile application for visually impaired people," 2020.
[22] R. Patnayakuni, A. Rai, and A. Tiwana, "Systems development process improvement: A knowledge integration
perspective," IEEE Transactions on Engineering Management, vol. 54, no. 2, pp. 286-300, 2007.
[23] R. Du et al., "DepthLab: Real-Time 3D Interaction With Depth Maps for Mobile Augmented Reality," in
Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology, 2020, pp. 829-
843.
[24] Stemmer Imaging. "3D time of flight cameras." https://www.stemmer-imaging.com/en-ie/knowledge-base/cameras-3d-
time-of-flight-cameras/ (accessed January, 2022).
[25] Perkins School for the Blind. "Four prevalent, different types of blindness." https://www.perkins.org/four-prevalent-different-
types-of-blindness/ (accessed January, 2022).
[26] W. Kazmi, S. Foix, G. Alenyà, and H. J. Andersen, "Indoor and outdoor depth imaging of leaves with time-of-flight
and stereo vision sensors: Analysis and comparison," ISPRS journal of photogrammetry and remote sensing, vol.
88, pp. 128-146, 2014.
[27] F. Nobis, M. Geisslinger, M. Weber, J. Betz, and M. Lienkamp, "A deep learning-based radar and camera sensor
fusion architecture for object detection," in 2019 Sensor Data Fusion: Trends, Solutions, Applications (SDF),
2019: IEEE, pp. 1-7.
[28] C. Ton et al., "LIDAR assist spatial sensing for the visually impaired and performance analysis," IEEE
Transactions on Neural Systems and Rehabilitation Engineering, vol. 26, no. 9, pp. 1727-1734, 2018.