
Conference Paper · October 2012
DOI: 10.1109/IECON.2012.6389214



3D CMOS sensor based acoustic object detection
and navigation system for blind people
Larisa Dunai, Beatriz Defez Garcia, Ismael Lengua, Guillermo Peris-Fajarnés
Universitat Politècnica de València, Camino de Vera s/n, 46022, Valencia, Spain
[email protected]

Abstract— The paper presents a new wearable Cognitive Aid System for Blind People (CASBliP). The prototype device was developed as an obstacle detection, orientation and navigation Electronic Travel Aid (ETA) for blind people. The device provides a binaural acoustic image representation. The environmental information acquisition system is based on an array of 1×64 CMOS Time-of-Flight sensors. Through stereoscopic (binaural) acoustic sounds, the device relays the surrounding near and far environment. Experimental results demonstrate that blind users are able to detect obstacles and navigate through known and unknown environments safely and confidently. CASBliP works accurately in a range of 15 m in distance and 64º in azimuth, providing significant advantages in comparison with currently existing ETA systems.

I. INTRODUCTION

There are over 314 million blind and partially sighted people in the world, and over 45 million of them are totally blind [1]. Besides limitations in independence and communication, the main problem for blind people is their mobility. Mobility has been defined by Foulke [2] as "the ability to travel safely, comfortably, gracefully and independently through the environment".

Since the early decades of sensor development, many efforts have been made to develop new and sophisticated Electronic Travel Aids (ETAs) able to perceive and represent the surrounding environment. The idea underlying the development of such devices was to overcome human sense limitations, such as blindness. These devices would help blind people to perceive their surrounding environment through tactile, vibrotactile, vocal or acoustic senses.

Traditionally, three classes of ETAs are known: obstacle detectors, environmental sensors, and orientation and navigation aids. The first two classes make use of ultrasonic, laser and artificial vision technology to measure the distance to the nearest objects or receptors. The third class uses global positioning systems (GPS) for route prediction and orientation tasks.

The first remarkable obstacle detector ETA is the Lindsay Russell Pathsounder [3], [4]; the Mowat Sonar Sensor is one of the simplest and most popular [5]. Both devices use ultrasonic transducers which measure the distance to the nearest object, providing tactile or vibration feedback [6], [7]. This subgroup of ultrasonic obstacle detector ETAs also includes other devices such as the Nottingham Obstacle Detector, Sonic Torch, Sonic Pathfinder, Polaron™ Sensory 6 and WalkMate [8], [6], [9], [10]. The existing obstacle detector ETA devices are able to detect obstacles in a range covering up to 15 feet and 15º in azimuth.

On the other hand, the most popular laser obstacle detector ETA is the Laser Cane [8], [11]. The Laser Cane is based on laser technology implemented into the cane, making use of the triangulation principle. Its photodetectors receive the light beams which are reflected after collision with the obstacle surface. This enables the device to calculate the distance to the obstacle. The detection range is up to 4 m. Through acoustic signals, the blind user is informed about the detected object.

The second class, environmental sensors, includes devices that try to go further than obstacle detectors [6], [11], providing additional features. The Bat K Sonar Cane [12], Ultracane, KASPA, Trisensor, Miniguide, Sonic Torch, Sonicguide and Navbelt [13] belong to this class. The most influential ETA among environmental sensors is the Binaural Sonar Electronic Travel Aid, also known as the Sonicguide [14]. The Sonicguide is an improved version of the Sonic Torch, developed by Kay in 1959 [8], [15]. The device incorporates two acoustic channels which help blind users not only to detect objects but also to recognize and count them in a range of up to 4 m in distance and 55º in azimuth [16], [14]. On the other hand, the vOICe device is one of the latest ETA systems using stereovision technology for environmental image acquisition. The device, via acoustic signals and a tactile interface, informs users about their surrounding environment [17], [18].

The third class of ETAs corresponds to the orientation and navigation aids; the main objective of these devices is familiarization with the environment as well as navigation [19], [20], [21]. The pioneering orientation device is Talking Signs [22]. The device, by means of an infrared beam pattern, controls the range and coverage of each sign within a range of up to 30 m in outdoor environments. SONA, the Sonic Orientation Navigation Aid, is based on a similar idea to Talking Signs [23]; the system was developed at the Georgia Institute of Technology in 1979. Other devices, such as Verbal Landmark [24], Easy Walker [11] and Pilot Light, belong to this class too. Global Positioning System (GPS) based aids [10] are also included in this group [25]; among them, one of the best known is the GPS ETA developed at the University of California [26], known as the Personal Guidance System [27].



Fig.1. Measurement principle of the Time of Flight (TOF) method

This device is based on radio signals from satellites, which provide real information about every point of the Earth's surface [28], [29]. Through synthetic speech [30] or tactile feedback [31], the user is informed about his location. The system range is 20 m. Within this group we can also find the Mobility for Blind and Elderly People Interacting with Computers (MoBIC) device [32], Makino [27], Electronic Guide [27], GPS BrailleNote [33], Geotact [34], Easy Walker [35], Loadstone, Trekker [36], Tormes [37], the Tanea prototype [38], [39], the System for Wearable Audio Navigation (SWAN) [40], [41] and Tyflos [42], [43], [44], [45], [46]. All these devices provide artificial speech, acoustic or tactile feedback to the user.

The aim of this paper is to propose a new system able not only to perform mere object detection, but also to provide orientation and navigation aid. This system is specially suited for blind and partially blind people. It provides significant advantages over other conventional ETA systems, such as a wider detection range, the possibility of use both in indoor and outdoor environments, and simple and easy operation.

The paper is structured as follows: Section II provides a general description of the developed device, listing its main advantages and drawbacks. Section III details the technical implementation of the sensor module, whereas Section IV describes the acoustic module. In Section V, the experimental results are presented and analyzed. Finally, Section VI presents the conclusions of the work.

II. DESCRIPTION OF THE ACOUSTIC SYSTEM

Advances in sensor technology and electronics make it possible to provide environmental information to visually impaired people. In the present section, the developed object detection and navigation Electronic Travel Aid is described.

In general terms, the CASBliP system detects the objects in the frontal view of the user and converts the information regarding object location into binaural virtual sounds which are delivered to the user via stereophonic headphones. Object detection is achieved by emitting laser light beams which are reflected after colliding with the surrounding objects (see Fig.1). These light beams are received by the sensor system of the device and converted into sounds by means of its internal software.

As will be detailed later, the object detection system of the device is based on an improved 1×64 3D CMOS image sensor, originally developed for pedestrian recognition by cars [47], [48]. This sensor is implemented into a pair of glasses which are also equipped with the laser electronics for light beam emission. In addition, the Field Programmable Gate Array (FPGA), in charge of information processing and conversion into sounds, is implemented in an additional module of the device. This FPGA module contains the functions for sensor control, data acquisition, distance calculation and the application software, together with the sound files, the sound reproducing system, the power supply and the connectors between the parts of the device. In conclusion, the device is based on two main parts: the sensor module, which includes the glasses and the part of the FPGA in charge of laser beam emission and reception, and the acoustic module, based on the part of the FPGA in charge of information processing and conversion into sounds and on the stereo headphones which deliver the sounds to the user.
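As a rough illustration of this two-part structure, the sketch below (in Python, with hypothetical function names; the real processing runs on the FPGA, not in software like this) shows the intended data flow: a 1×64 distance profile acquired by the sensor module is turned into one binaural sound per detected pixel by the acoustic module.

```python
# Illustrative sketch of the two-module data flow (hypothetical names,
# not the actual CASBliP/FPGA implementation).
from typing import List, Tuple

N_PIXELS = 64          # 1x64 CMOS Time-of-Flight line sensor
RANGE_M = (0.5, 5.0)   # detection range reported in the paper

def acquire_distance_profile() -> List[float]:
    """Sensor module: one laser frame -> 64 distances in metres (placeholder)."""
    return [float("inf")] * N_PIXELS  # inf = no object in range

def play_binaural_sound(pixel: int, distance: float) -> None:
    """Acoustic module: stereo playback through the headphones (placeholder)."""
    pass

def to_sound_events(profile: List[float]) -> List[Tuple[int, float]]:
    """Keep only the pixels where an object falls inside the detection range."""
    return [(px, d) for px, d in enumerate(profile) if RANGE_M[0] <= d <= RANGE_M[1]]

def render_frame() -> None:
    """One frame: distances in, binaural sounds out."""
    for pixel, distance in to_sound_events(acquire_distance_profile()):
        play_binaural_sound(pixel, distance)  # look up the pre-generated HRTF sound
```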
The aim of the Acoustic System is not only to inform the user about the presence of objects in the range of the device, but also to inform him about object direction and distance. The device operates in an on-line mode.

This means that it delivers environmental information in real time. Moreover, the device works properly both in known and unknown environments, and both in simple and in complex situations. The acoustic sounds perceived via the headphones allow the human brain to create a map of the detected objects, leaving the cognition task to the user. By hearing the virtual sounds, the user is in charge of deciding whether or not to avoid the obstacles in his path. In this context, the functions of the Acoustic System are the detection and information tasks, whereas the navigation and route planning tasks are carried out by the user.

Although there are many Electronic Travel Aids that implement acoustic sounds, such as some of those discussed in Section I, there are significant differences between the developed acoustic system and other acoustic ETAs:

- The acoustic system leaves route and path planning decisions to the user. The device detects all objects appearing in the area of view; at the same time it informs the user about their presence, and the user is free to decide his way.
- The device provides a full environmental image thanks to the 1×64 pixel 3D CMOS image sensor, which covers 60º in azimuth and a distance range between 0,5 m and 5 m. The device simultaneously provides 64 sounds covering the whole detected environment.
- The device conveys object depth, since the different distances are represented by specific sounds.
- The virtual acoustic sounds are binaural sounds; the human brain easily perceives them as sounds coming from the environmental objects.
- The device provides environmental information in real time. The sensor and acoustic modules operate so fast that the device does not require supplementary time to interpret the environmental information.
- There are no false noises: when there is no object in the area of view of the system, the device is silent and no sounds are reproduced.
- The system works in darkness and on sunny days, in cold and in hot weather, because the 3D CMOS sensor works with infrared laser pulses.
- The acoustic sounds do not represent any impact material and do not disturb the user when the device is used for a long time.
- The environmental acquisition module of the device is placed at the level of the human eyes, which allows the user to perceive the surroundings from head level. Since it is a head-mounted device, the user is able to move the head and scan the environment in his direction of view.
- Because of the binaural cues of the acoustic sounds, the sounds do not overlap when there is more than one object in the area of view.
- With the device the user is able to perceive the object shape, i.e., whether it has a plane, convex or concave surface.
- The sounds used do not interfere with external noises such as traffic lights, human noises, etc.

Fig.2. Multiple Double Short-Time Integrated measurement principle (sensor module block diagram: interface with I/O, Flash & RAM, power supply and FPGA; sensor electronics with the 3D CMOS sensor and ADC; laser electronics with the laser diodes on the glasses).

III. IMPLEMENTATION OF THE SENSOR MODULE

The environmental image acquisition device, or sensor module, consists of a 4×64 3D CMOS image sensor based on 1×64 photodetector pulses.

The design of the sensor module is based on three parts: interface, sensor electronics and laser electronics. Fig. 2 shows the sensor system design, where the laser pulse emitters are placed close together at the left and right side of the 3D CMOS sensor on a pair of glasses. The processing unit of the sensor module is implemented in the FPGA, which is in charge of sensor control, data acquisition, distance calculation and the application software.

The sensor module transmits and receives identical 1×64 pixel laser pulses with a refresh rate of 25 fps in a horizontal line at the user's eye level, where the distance range is strictly dependent on the sensor module. The sensor module range for object detection is between 0,5 m and 5 m in distance and up to 60º in azimuth. The 3D CMOS sensor generates pulses in the range of 20 to 200 ns, the rise and fall times are in the 10 ns range, the maximum shutter speed is 30 ns, the pulse repetition frequency is up to 20 kHz and the laser pulse peak power is up to 2 kW. The wavelength is 905 nm, in the near-infrared (NIR) laser range; therefore the laser light is invisible to the human eye. The maximum frame rate of the image sensor is 19500 fps for a synchronous clock of 66 MHz, each pixel clock is 5 MHz, and the output swing is 1,9 V. The generated pulses are strongly rectangular. The pixel pitch is 130 µm in the horizontal direction, resulting in a sensitive area of 64×130 µm = 8,32 mm². The chip is completely controlled by the FPGA.
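For quick reference, the sketch below simply collects the figures quoted above as constants and derives the per-pixel azimuth (about 0,94º), the length of the sensitive line and the frame period. It is a summary aid, not code for the actual chip or FPGA.

```python
# Sensor-module figures from the text, collected as constants (summary sketch only).
N_PIXELS        = 64
FOV_AZIMUTH_DEG = 60.0        # field of view in azimuth
FRAME_RATE_FPS  = 25          # laser pulse refresh rate
PIXEL_CLOCK_HZ  = 5e6         # per-pixel clock
PIXEL_PITCH_M   = 130e-6      # 130 um pixel pitch
WAVELENGTH_M    = 905e-9      # NIR laser wavelength

az_per_pixel = FOV_AZIMUTH_DEG / N_PIXELS   # ~0.94 deg covered by each pixel
line_length  = N_PIXELS * PIXEL_PITCH_M     # 64 x 130 um = 8.32 mm sensitive line
frame_period = 1.0 / FRAME_RATE_FPS         # 40 ms between laser frames

print(f"{az_per_pixel:.2f} deg/pixel, {line_length * 1e3:.2f} mm line, "
      f"{frame_period * 1e3:.0f} ms/frame")
```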
Laser pulses generated by the NIR laser diodes and defocused by diffractive optics are emitted to the environment, illuminating the entire field of view [49]. When an object appears in the laser range, the pulses are reflected towards the camera lenses of the 3D CMOS image sensor, and the distance between the object surface and the sensor system is calculated for every one of the 1×64 pixels simultaneously (see Fig.1).

The time delay τ_TOF of the laser light between the sensor and an object at distance d can be calculated as:

    τ_TOF = 2·d / c    (1)
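A minimal illustration of (1): the pulse travels to the object and back, so the delay grows with twice the distance. The helper names below are ours, not part of the system software.

```python
# Round-trip Time-of-Flight relation of (1): tau_TOF = 2*d/c and d = c*tau_TOF/2.
C = 299_792_458.0  # speed of light, m/s

def tof_from_distance(d_m: float) -> float:
    """Round-trip delay (s) for an object at distance d_m (m)."""
    return 2.0 * d_m / C

def distance_from_tof(tau_s: float) -> float:
    """Object distance (m) recovered from the round-trip delay tau_s (s)."""
    return C * tau_s / 2.0

# Example: an object at the 5 m range limit gives a delay of about 33 ns.
print(f"{tof_from_distance(5.0) * 1e9:.1f} ns")
```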

The Time-of-Flight (TOF) measuring method is based on the fact that the emitted light pulse is not measured directly, but through the accumulation of charges on the sensor chip. This procedure is equivalent to the continuous-wave distance measuring method when using a short single pulse [47]. The camera lenses transmit part of the reflected laser pulse to the 3D CMOS sensor chip surface, whose light-sensitive pixels operate as electronic short-time windowed integrators. The distance between the object surface and the 3D CMOS sensor is then calculated as:

    d_x = c·τ_TOF / 2 = (ν_c / 2)·(T1 − Tw·U1/U2)    (2)

where T1 is the short integration time, Tw is the pulse width of the linear laser, U1 is the sensor signal measured at integration time T1, U2 is the sensor signal measured at integration time T2, d_x is the measured distance and ν_c is the light propagation speed. The propagation time can be calculated by computing the quotient of the two integrated shutter intensities, U1/U2.

Fig.1 shows the measurement principle of the indirect TOF method [47] used in the development of the Acoustic System. The travel time of the laser pulse between emission and reception at pixel x depends on the travel distance, and can be calculated as:

    T_Travel,x = 2·d_x / ν_c    (3)

Because the near-infrared laser pulses illuminate the entire scene simultaneously, avoiding any sequential scanning, the Time-of-Flight system processes images in real time. In order to reduce the required laser power and to increase the measurement accuracy, multiple double short-time integration (MDSI) algorithms were used.
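Assuming the reading of (2) given above, a minimal sketch of the per-pixel distance estimate from the two integrated shutter intensities could look as follows. The numeric values in the example are only placeholders chosen within the ranges quoted for the sensor, not measured data.

```python
# Sketch of the indirect-ToF distance estimate of (2): the ratio of the two
# integrated shutter intensities U1/U2 recovers the propagation time, which is
# then scaled by half the light propagation speed. Illustrative only.
C = 299_792_458.0  # light propagation speed nu_c, m/s

def mdsi_distance(t1_s: float, tw_s: float, u1: float, u2: float) -> float:
    """Distance d_x = (nu_c / 2) * (T1 - Tw * U1 / U2), as in equation (2)."""
    if u2 <= 0.0:
        raise ValueError("U2 must be a positive integrated intensity")
    return 0.5 * C * (t1_s - tw_s * u1 / u2)

# Example with a 30 ns shutter and a 20 ns laser pulse (placeholder intensities).
print(f"{mdsi_distance(30e-9, 20e-9, 0.2, 1.0):.2f} m")
```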
IV. IMPLEMENTATION OF THE ACOUSTIC MODULE

Whereas the sensor module provides the acoustic module with the linear image of the surrounding environment, the acoustic module is in charge of transmitting this information to the blind user by using virtual acoustic sounds. The principle of the acoustic module is to assign an acoustic sound to each one of the 64 pixels, for different distances. The acoustic sounds are reproduced through the headphones from the position of the detected object whenever the sensor sends distance values to the acoustic module. The sound module contains a bank of acoustic sounds previously generated for a spatial area between 0,5 m and 5 m, for 64 image pixels, where each pixel covers 0,94º in azimuth, spanning 30º towards the left ear and 30º towards the right ear of the human head. A delta sound of 2040 samples at a frequency of 44,1 kHz was used to generate the acoustic information of the environment. In order to define the distances, 16 distance planes were generated, increasing exponentially from 0,5 m up to 5 m. The refresh rate of the sounds is 2 fps, and 16 Mb of memory are needed for the acoustic module. Distance is conveyed mainly by the sound intensity and pitch: at nearer distances the sound is stronger than at farther distances, and the more the distance increases, the more the sound intensity decreases.
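As an illustration of how such a bank can be indexed, the sketch below spreads 64 pixel azimuths over ±30º (about 0,94º per pixel) and spaces 16 distance planes exponentially between 0,5 m and 5 m. The helper names are ours, and the exact spacing rule used in the device may differ.

```python
# Illustrative indexing of the sound bank: 64 azimuth pixels over +/-30 deg and
# 16 exponentially spaced distance planes between 0.5 m and 5 m.
N_PIXELS, N_PLANES = 64, 16
D_MIN, D_MAX = 0.5, 5.0

# Pixel centre azimuths: -30 deg (far left) ... +30 deg (far right).
azimuths_deg = [-30.0 + (i + 0.5) * 60.0 / N_PIXELS for i in range(N_PIXELS)]

# Exponentially increasing plane distances from 0.5 m up to 5 m.
ratio = (D_MAX / D_MIN) ** (1.0 / (N_PLANES - 1))
plane_distances = [D_MIN * ratio ** k for k in range(N_PLANES)]

def bank_index(pixel: int, distance_m: float) -> tuple:
    """Map a detected (pixel, distance) pair to the nearest stored bank sound."""
    plane = min(range(N_PLANES), key=lambda k: abs(plane_distances[k] - distance_m))
    return pixel, plane

print(round(plane_distances[0], 2), round(plane_distances[-1], 2))  # 0.5 ... 5.0
print(bank_index(32, 2.3))  # centre pixel, nearest stored distance plane
```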
Virtual sounds were obtained by convolving an acoustic sound with non-individual Head-Related Transfer Functions (HRTF), previously measured using the KEMAR manikin as described below. Fig. 3 shows the measurement model of the Head-Related Transfer Function.

The working principle of the acoustic module is similar to 'read and play': the acoustic module reads the output data from the sensor module, which represent coordinates in distance and in azimuth, and plays the sound with the same coordinates. The time interval between sounds is 8 ms when sounds are playing. When there are no sounds, the sound module polls the sensor module again after 5 ms.
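A hedged sketch of this 'read and play' loop is given below; `read_sensor` and `play_sound` are placeholders for the real FPGA interfaces, and only the 8 ms / 5 ms timing described above is taken from the paper.

```python
import time

SOUND_GAP_S = 0.008   # 8 ms between consecutive sounds
IDLE_POLL_S = 0.005   # 5 ms before polling the sensor again when nothing plays

def read_and_play_loop(read_sensor, play_sound):
    """'Read and play' principle: read (pixel, distance) coordinates from the
    sensor module and play the pre-generated sound with the same coordinates.
    `read_sensor` and `play_sound` stand in for the real hardware interfaces."""
    while True:
        detections = read_sensor()          # list of (pixel, distance) pairs
        if not detections:
            time.sleep(IDLE_POLL_S)         # silent scene: recall the sensor after 5 ms
            continue
        for pixel, distance in detections:
            play_sound(pixel, distance)     # binaural sound taken from the bank
            time.sleep(SOUND_GAP_S)         # 8 ms interval between sounds
```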
To measure the HRTFs, a Maximum Length Binary Sequence (MLBS) was used to generate the sound source. HRTF measurement is based on the calculation of the impulse response for both ears by using the filter in the frequency domain.

For a sound signal x1(n) reproduced by the loudspeaker, the registered response can be calculated as:

    Y1 = X1·L·F·M    (4)

where X1 is the representation of the sound x1(n) in the frequency domain, L is the transfer function of the loudspeaker and all the reproduction equipment, F is the transfer function of the space between the loudspeaker and the human ear, and M is the transfer function of the microphone at the human ear and all the recording equipment.

In order to calculate the filter, it was necessary to generate a sound x2(n), reproduced by the headphones, such that the registered response Y2 would be equal to Y1. Y2 can be calculated as:

    Y2 = X2·H·M    (5)

where H is the transfer function of the headphone and all its reproduction equipment. The filter T is then obtained from (4) and (5):

    T = L·F / H    (6)

In this way, the filter T is obtained from the impulse responses measured for each ear. The impulse response is obtained by circular cross-correlation between the MLBS at the system input and the response at the output:

    h(n) = Ω_sy(n) = s(n) Φ y(n) = (1/(L+1)) · Σ_{k=0..L−1} s(k)·y(n+k)    (7)

where Φ denotes the circular cross-correlation.
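As an illustration of (4)-(6): if the same MLBS excitation is used for the loudspeaker and the headphone measurements (X1 = X2), the microphone term M cancels and the filter reduces to T = Y1/Y2 for each ear. The sketch below computes this in the frequency domain; the regularization term `eps` is our addition, not part of the paper.

```python
import numpy as np

def estimate_filter_T(y1: np.ndarray, y2: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Filter T = L*F/H for one ear, from the ear-microphone responses to the
    same excitation played over the loudspeaker (y1) and over the headphone (y2).
    With X1 = X2 and a common microphone path M, T = Y1 / Y2 in the frequency
    domain; `eps` regularizes near-zero bins (our addition)."""
    n = max(len(y1), len(y2))
    Y1, Y2 = np.fft.rfft(y1, n), np.fft.rfft(y2, n)
    T = Y1 * np.conj(Y2) / (np.abs(Y2) ** 2 + eps)
    return np.fft.irfft(T, n)  # impulse response of the filter T for this ear

# Usage sketch: t_left = estimate_filter_T(y1_left, y2_left), likewise for the right ear.
```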

Fig.3. Head-Related Transfer Function measurement model (sound source x(n); left and right ear signals xL(n), xR(n) obtained through the impulse responses hL(n), hR(n)).

Direct implementation of (7) becomes impossible for large sequences because of its computational time. Therefore, the correlation is computed through convolution:

    a(n) Φ b(n) = (1/(L+1)) · a(−n) ∗ b(n)    (8)
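A compact sketch of (7)-(8): the impulse response is recovered by circularly cross-correlating the MLBS s(n) with one period of the recorded output y(n). For brevity the sketch uses the FFT, whereas the paper, as described next, uses the Fast Hadamard Transform for the same operation.

```python
import numpy as np

def circular_xcorr_impulse_response(s: np.ndarray, y: np.ndarray) -> np.ndarray:
    """h(n) = (1/(L+1)) * sum_k s(k)*y(n+k): circular cross-correlation of the
    MLBS input s with one recorded period of the output y, as in (7)-(8).
    Computed here with the FFT; the paper uses the Fast Hadamard Transform."""
    L = len(s)                                   # MLBS period (L = 2**m - 1 samples)
    S = np.fft.fft(s)
    Y = np.fft.fft(y, n=L)                       # one period of the measured response
    h = np.real(np.fft.ifft(np.conj(S) * Y)) / (L + 1)
    return h

# Usage sketch: h_left = circular_xcorr_impulse_response(mlbs, recording_left)
```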
Even then the computational cost is high, because the Fast Fourier Transform used is of length 2^k − 1. For that reason, the Fast Hadamard Transform (FHT) was used to reduce the computational time of the HRTF generation by a factor of two:

    h(n) = (1 / ((L+1)·s[0])) · P2·S2·{H_(L+1)·[S1·(P1·y(n))]}    (9)

where P1, P2 are permutation matrices, S1, S2 are redimensioning matrices and H_(L+1) is the Hadamard matrix of degree L+1.

In conclusion, binaural cues drive the generation of the virtual sounds. The azimuth displacement of the virtual sound source is obtained from the interaural phase shift of the left and right signals, expressed as the time difference between the left and right ears. Each virtual sound corresponds to one pixel of the sensor module, and together the 64 sounds cover an area of 60º.
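To make this concrete, the sketch below renders the virtual sound for one pixel by convolving the bank sound with the left- and right-ear impulse responses measured for that pixel's azimuth, so that the interaural time (phase) and level differences come from the measured filters themselves. Array names are illustrative.

```python
import numpy as np

FS = 44_100  # sampling rate of the 2040-sample delta sound

def render_pixel_sound(mono: np.ndarray, hrir_left: np.ndarray,
                       hrir_right: np.ndarray) -> np.ndarray:
    """Binaural rendering for one pixel: convolve the source sound with the
    left/right head-related impulse responses for that pixel's azimuth.
    Returns an (n, 2) stereo buffer for headphone playback."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    n = max(len(left), len(right))
    out = np.zeros((n, 2))
    out[: len(left), 0] = left
    out[: len(right), 1] = right
    return out

# Usage sketch: stereo = render_pixel_sound(delta_sound, hrir_L[pixel], hrir_R[pixel])
```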
to localize spatial sounds. It is important to remark that, for
V. EXPERIMENTAL RESULTS blind people, each spatial sound represents an obstacle which
In this section, the tests with the CASBliP System are must be avoided. Training period depended of the user ability
described. Twenty blind users took part in the experiments. to localize the sounds. After the training period, blind users
These experiments were developed both in laboratory and in proceeded to the laboratory and outdoor experimental trials.
outdoor environments not known ‘a priori’, under the The experiments aimed to test the Acoustic System as
supervision of blind people trainers and engineers. In order to obstacle detector and navigation mobility aid for blind
avoid unnecessary displacements, two groups, group A (users people. The tests consisted of two tasks: navigation through a

In the first experiment, subjects walked through a 14 m artificial laboratory path delimited by soft carton boxes (see Fig.4). All obstacles were identical in weight and height. After the device had been connected and prepared, the users faced the scenario. The columns were placed in an asymmetric order, separated horizontally by 2 m, with a distance of 2,5 m between the last pair of columns and the wall. In the laboratory experiment all obstacles remained static. The objective of the experiment was to navigate through the path avoiding all the obstacles. Two routes were prepared for the experiment; the second route is the opposite of the first, i.e., the way back along the first route.

Fig.5. Average Absolute Walking Time (AWT) in minutes for the laboratory experiment, plotted for runs 1 to 4.

In the experiment, the absolute walking time (AWT), the number of hits (NUH) and the number of corrections (NOC) were measured. The blind subjects were able to perceive object thickness and texture. The device can also reliably detect objects with small diameters thanks to its 64 pixels, which scan only 60º in azimuth.

The data are presented for clear inspection in Fig. 5. Runs 1 and 2 are the runs of group A, and runs 3 and 4 are the runs of group B. Note that after the training exercises, the participants performed the subsequent experiments without any white cane, trainer instructions or support. The time, in minutes, at which the subject reported the end of the trajectory is plotted on the y-axis. Fig.5 shows that group A required a longer time to complete the experiment. This is because some subjects who had difficulties with sound externalization and orientation took part in the experiment. The difficulties were related to the duration of blindness; for example, in group A, one of the subjects has been blind from birth and had never used a navigation device for blind people. This user found it difficult to learn that the sounds represented something he had never seen, and that the sounds heard via the headphones were not inside his head but came from the environment.

The goal of the second experiment, named Mobility Test I, was to test the perceptual authenticity of the navigation, by evaluating whether blind users were able to detect the objects and navigate through a 29 m scenario in an outdoor environment. In the evaluation of object detection and spatial sound localization in an open environment, there is always the problem that the sounds are not perceived exactly as in the laboratory since, in the laboratory, the user is protected from external noises such as traffic noise and human, animal and bird sounds. Blind users also require longer times to take decisions about the perceived objects and the environment.

TABLE I
MOBILITY TEST RESULTS FOR TWENTY SUBJECTS, REPRESENTING THE ABSOLUTE WALKING TIME (AWT), THE NUMBER OF HITS (NUH) AND THE NUMBER OF CORRECTIONS (NOC)

Mean time, min    Group A    Group B
AWT-0             1,00       -
AWT-1             9,02       8,27
AWT-11            7,31       6,46
AWT-2             -          6,49
AWT-21            -          5,55
NUH-1             2,80       0,70
NOC-1             1,40       0,20
NUH-11            2,00       0,60
NOC-11            1,20       1,00

The blind users' tasks were to localize the objects, to judge the differences between the perceived objects, to analyze the environment, to select the free path and to walk through the scenario. External noises had a slight influence on the object detection performance.

Four different scenarios were used for this test. Group A, from Germany, used two outdoor scenarios, both located in the patio of the School for Blind People of Berlin (ABSV). The blind user had to navigate along a 29 m long way, where a variety of obstacles were placed, from the school entrance up to the school door. The test started with the blind user facing a wide space where no object intersected his direction of view. Because no objects appeared in the system's direction of view, no signals were sent by the system to the user; thus, the user knew that in an area of 5 meters there were no obstacles. The user also knew that when he heard a sound, there were objects in his area of view. Despite the total silence of the system, the user knew that this silence did not mean that the system was broken, so he did not have to worry. Group B also used two different scenarios; the tests were carried out in the IFC Institute courtyard in Italy.

Group A carried out two sessions of this test (on two different days), whereas group B carried out four sessions. The location of the soft obstacles was modified, except for the real obstacles such as the building wall, columns and bench. This time, the walking time, the number of hits and the number of corrections were compared between the two groups (Fig. 6).

Overall, group A spent 90,19 minutes for the first run and 75,09 minutes for the second run. Group B registered better results with respect to the total and mean time: for the first run group B spent 84,32 minutes, for the second run 67,37 minutes, for the third run 68,09 minutes and for the fourth run 59,07 minutes. Table I confirms that a slight improvement was detected for the repeated runs. We can also observe that all subjects showed a very similar pattern in the first run, except for four subjects who had great difficulties in orientation. Three of them, apart from requiring more time to navigate through the trajectory, were less accurate in the navigation. These subjects had difficulties in object detection, making more hits and confusions and needing more corrections. According to the results from Fig.5 and Fig.6, navigation time is affected mainly by the lack of training.

Fig.6. Mean absolute walking times (MT, min) from Mobility Test I and mean number of hits (NUH average). The clusters of bars show the mean results for the ten subjects from group A as well as the mean results from the four runs of group B; the second graph shows the average number of hits for both groups over the previously carried out runs.

VI. CONCLUSIONS

In this paper, a new wearable cognitive real-time Acoustical System is described, which was developed as an obstacle detection, orientation and navigation Electronic Travel Aid for blind people. An array of 1×64 CMOS Time-of-Flight sensors scans the environment and, through binaural acoustic sounds, the device relays the surrounding near and far environment. The prototype aims to be a portable device constructed from commercially available components.

Experimental results in laboratory conditions and in real, uncontrolled outdoor environments demonstrate that blind users are able to detect obstacles and navigate through known and unknown environments safely and confidently. The Acoustic System works accurately in a range between 0 and 5 m in distance and 60º in azimuth. The resolution of the device is sufficiently high both in object detection and in the acoustic representation of the environment.

Finally, it must also be said that future improvements are possible in order to obtain more complete environmental information:
- Improvement of the image acquisition techniques by adding more arrays of the 1×64 CMOS image sensor, or by using stereovision instead of the CMOS image sensor.
- Improvement of the acoustic module by adding sounds in elevation.

ACKNOWLEDGMENT

The authors wish to thank the participants of the CASBliP project and all the blind people for their active participation in the system development, tests and evaluation. The project was developed within the 6th European Framework Programme, Cognitive Systems and Robotics.

REFERENCES

[1] World Blind Union, "White cane safety day," World Blind Union Press release, Canada, October 2009.
[2] E. Foulke, "The perceptual basis for mobility," AFB Res. Bull., vol. 23, pp. 1, 1971.
[3] L. Russell, "Travel Path Sounder," Proceedings of the Rotterdam Mobility Research Conference, N.Y.: American Foundation for the Blind, 1965.
[4] R. W. Mann, "Mobility aids for the blind—An argument for a computer-based, man-device-environment, interactive, simulation system," Proc. Conf. on Evaluation of Mobility Aids for the Blind, Washington, DC: Com. on Interplay of Eng. with Biology and Medicine, National Academy of Engineering, pp. 101-116, 1970.
[5] D. L. Morrissette, G. L. Goodrich, J. J. Henesey, "A follow-up study of the Mowat sensor's applications, frequency of use and maintenance reliability," J. Vis. Impairment Blindness, vol. 75, no. 6, pp. 244-247, 1981.
[6] H. A. M. Freiberger, "Mobility Aids for the Blind," Bulletin of Prosthetic Research, pp. 73-78, 1974.
[7] "Electronic travel aids: new directions for research," National Research Council, chap. 6, pp. 67-90, 1986.
[8] L. W. Farmer, "Mobility devices," Bulletin of Prosthetic Research, pp. 47-118, 1978.
[9] N. Debnath, J. B. Tangiah, S. Pararasaingam, A. S. A. Kader, "A mobility aid for the blind with discrete distance indicator and hanging object detection," IEEE TENCON, vol. D, pp. 664-667, 2004.
[10] L. Farmer, D. Smith, "Adaptive technology," in Foundations of Orientation and Mobility, 2nd ed., pp. 231-260, 1997.
[11] B. Ando, "Electronic sensory systems for the visually impaired," IEEE Instrumentation & Measurement Magazine, pp. 62-67, 2003.
[12] B. S. Hoyle, "The Batcane – mobility aid for the vision impaired and the blind," IEE Symposium on Assistive Technology, pp. 18-22, 2003.
[13] S. Shoval, J. Borenstein, Y. Koren, "The Navbelt - A computerized travel aid for the blind based on mobile robotics technology," IEEE Transactions on Biomedical Engineering, vol. 45, no. 11, pp. 1376-1386, November 1998.
[14] R. Kuc, "Binaural sonar electronic travel aid provides vibrotactile cues for landmark, reflector motion and surface texture classification," IEEE Transactions on Biomedical Engineering, vol. 49, no. 10, pp. 1173-1180, 2002.

[15] L. Kay, "An ultrasonic sensing probe as an aid to the blind," Ultrasonics, vol. 2, no. 2, pp. 53-59, 1964.
[16] L. Farmer, D. Smith, "Adaptive technology," in Foundations of Orientation and Mobility, 2nd ed., pp. 231-260, 1997.
[17] P. B. L. Meijer, "An experimental system for auditory image representations," IEEE Transactions on Biomedical Engineering, vol. BME-39, no. 2, pp. 112-121, 1992.
[18] P. B. L. Meijer, "A modular synthetic vision and navigation system for the totally blind," World Congress Proposals, 2005.
[19] J. Brabyn, "New developments in mobility and orientation aids for the blind," IEEE Transactions on Biomedical Engineering, vol. BME-29, no. 4, pp. 285-289, 1982.
[20] J. Brabyn, "A review of mobility aids and means of assessment," in D. H. Warren & E. R. Strelow (Eds.), Electronic Spatial Sensing for the Blind, Martinus Nijhoff, Boston, pp. 13-27, 1985.
[21] J. Brabyn, "Technology as a support system for orientation and mobility," American Rehabilitation, 1997.
[22] J. Brabyn, "New developments in mobility and orientation aids for the blind," IEEE Transactions on Biomedical Engineering, vol. BME-29, no. 4, pp. 285-289, 1982.
[23] R. Kuc, "Binaural sonar electronic travel aid provides vibrotactile cues for landmark, reflector motion and surface texture classification," IEEE Transactions on Biomedical Engineering, vol. 49, no. 10, pp. 1173-1180, 2002.
[24] B. Bentzen, P. Mitchell, "Audible signage as a wayfinding aid: comparison of Verbal Landmark and Talking Signs," Draft Report to the American Council of the Blind, Sept. 1993.
[25] C. C. Collins, "On mobility aids for the blind," in Electronic Spatial Sensing for the Blind, D. H. Warren & E. R. Strelow, Eds., Dordrecht: Martinus Nijhoff, pp. 35-64, 1985.
[26] J. Loomis, R. Golledge, "Personal guidance system using GPS, GIS, and VR technologies," Proceedings, CSUN Conference on Virtual Reality and Persons with Disabilities, San Francisco, June 17-18, 2003.
[27] J. M. Loomis, R. G. Golledge, and R. L. Klatzky, "GPS-based navigation systems for the visually impaired," in W. Barfield & T. Caudell, Eds., Fundamentals of Wearable Computers and Augmented Reality, pp. 429-446, Mahwah, NJ: Lawrence Erlbaum Associates, 2001.
[28] J. M. Loomis, R. G. Golledge, R. L. Klatzky, and J. R. Marston, "Assisting wayfinding in visually impaired travelers," in G. Allen (Ed.), Applied Spatial Cognition: From Research to Cognitive Technology, Mahwah, NJ: Lawrence Erlbaum Associates, 2006.
[29] J. M. Loomis, R. G. Golledge, and R. L. Klatzky, "Navigation system for the blind: Auditory display modes and guidance," Presence: Teleoperators and Virtual Environments, 7, pp. 193-203, 1998.
[30] J. M. Loomis, J. R. Marston, R. G. Golledge, and R. L. Klatzky, "Personal guidance system for people with visual impairment: A comparison of spatial displays for route guidance," Journal of Visual Impairment & Blindness, 99, pp. 219-232, 2005.
[31] R. L. Klatzky, J. R. Marston, N. A. Giudice, R. G. Golledge, J. M. Loomis, "Cognitive load of navigating without vision when guided by virtual sound versus spatial language," Journal of Experimental Psychology: Applied, vol. 12, no. 4, pp. 223-232, 2006.
[32] H. Petrie, V. Johnson, T. Strothotte, A. Raab, R. Michel, L. Reichert, A. Schalt, "MoBIC: an aid to increase the independent mobility of blind travellers," British Journal of Visual Impairment, vol. 15, no. 2, pp. 63-66, 1997.
[33] HumanWare, "BrailleNote GPS," www.humanware.com
[34] R. Fancy, R. Leroux, A. Jucha, "Electronic travel aids and electronic orientation aids for blind people: Technical, rehabilitation and everyday life points of view," CVHI Conference, 2006.
[35] Action for Blind People, "Easy Walk – a new system for people with sight loss," www.actionforblindpeople.org.uk
[36] Y. Lagace, "Trekker GPS system creates handheld personal guide with HP iPAQ Pocket PC," HumanWare, HP notice, April 2005.
[37] T. M. Morales, A. M. Berrocal, "El GPS como sistema de ayuda a la orientacion de personas ciegas," IIICV-INTEREVISUAL Autonomia Personal, October 2005.
[38] A. Hub, S. Kombrink, K. Bosse and T. Ertl, "TANIA – a tactile-acoustical navigation and information assistant," Proceedings of the California State University, Northridge Center on Disabilities' 22nd Annual International Technology and Persons with Disabilities Conference (CSUN), Los Angeles, CA, USA, March 19-24, 2007.
[39] A. Hub, J. Diepstraten, and T. Ertl, "Design and development of an indoor navigation and object identification system for the blind," Proceedings of the ACM SIGACCESS Conference on Computers and Accessibility (Designing for Accessibility), Atlanta, GA, USA, pp. 147-152, 2004.
[40] J. Wilson, B. N. Walker, J. Lindsay, C. Cambias, F. Dellaert, "SWAN: system for wearable audio navigation," 11th IEEE International Symposium on Wearable Computers (ISWC 07), pp. 91-98, Oct. 11-13, 2007.
[41] B. N. Walker, J. Lindsay, "Using virtual environments to prototype auditory navigation displays," Assistive Technology, 17, pp. 72-81, 2005.
[42] N. Bourbakis, D. Kavraki, "Intelligent assistants for handicapped people's independence: case study," IEEE International Joint Symposia on Intelligence and Systems, Rockville, MD, pp. 337-344, Nov. 4-5, 1996.
[43] D. Dakopoulos, N. Bourbakis, "A 2D vibration array as an assistive device for visually impaired," IEEE Int. Conf. on BIBE07, Boston, MA, vol. II, pp. 930-937, Oct. 15-17, 2007.
[44] N. Bourbakis, "Sensing surrounding 3-D space for navigation of the blind - A prototype system featuring vibration arrays and data fusion provides a near real-time feedback," IEEE Engineering in Medicine and Biology Magazine, vol. 27, no. 1, pp. 49-55, 2008.
[45] N. Bourbakis, P. Kakumanu, "Skin-based face detection - extraction and recognition of facial expressions," Applied Pattern Recognition, book chapter, vol. 91, 2008.
[46] D. Dakopoulos, N. Bourbakis, "Preserving visual information in low resolution images during navigation of blind," 1st International Conference on Pervasive Technologies Related to Assistive Environments, Athens, Greece, July 15-19, 2008.
[47] P. Mengel, G. Doemens, L. Listl, "Fast range imaging by CMOS sensor array through multiple double short time integration (MDSI)," Proc. IEEE International Conference on Image Processing (ICIP 2001), Thessaloniki, Greece, Oct. 7-10, 2001, pp. 169-172.
[48] P. Mengel, L. Listl, B. Konig, C. Toepfer, M. Pellikofer, W. Brockherde, B. Hosticka, O. Elkhalili, O. Schrey, W. Ulfig, "Three-dimensional CMOS image sensor for pedestrian protection and collision mitigation," in J. Valldorf and W. Gessner, Eds., Advanced Microsystems for Automotive Applications 2006, Berlin: Springer, 2006 (VDI-Buch), pp. 23-39.
[49] O. Elkhalili, O. M. Schrey, P. Mengel, M. Petermann, W. Brockherde, "A 4×64 pixel CMOS image sensor for 3-D measurement applications," IEEE Journal of Solid-State Circuits, vol. 39, no. 7, pp. 1208-1212, 2004.
