Vision-Based Autonomous Navigation in Unstructured Environments
Abstract: This paper presents an algorithm for real-time vision-based autonomous navigation for mobile ground robots in an unstructured static environment. Obstacle detection is based on Canny edge detection and a suite of algorithms for extracting the locations of all obstacles in the robot's current view. To avoid obstacles, we designed a reasoning process that successively builds an environment representation from the locations of the detected obstacles. This environment representation is then used to make optimal obstacle avoidance decisions.
Keywords: autonomous robot, robot vision, image processing algorithms, unstructured environment,
Canny edge detection, agglomerative clustering.
1. INTRODUCTION

Computer vision is a field of computer science that has been heavily researched in recent years. Its applications in robotics are diverse, ranging from face recognition to autonomous navigation (see Forsyth, Akella, Browning). General object recognition for mobile robots is of primary importance in order to generate a representation of the environment that robots can use for their reasoning processes. The ability to localize obstacles in real time, and with a high degree of accuracy, based on vision provides a foundation for the development of many state-of-the-art autonomous robotic systems (see Gopalakrishnan, Alenya, Stronger, Novischi 2009). Directions such as optimal implementation, robust detection (especially when using limited-performance cameras) and optimal avoidance decisions continue to be of primary interest.

This paper focuses on real-time vision-based autonomous optimal navigation for basic mobile robots in an unstructured static environment (see also Zhao). Simply stated, the robot has to travel through an unknown environment, where still objects are randomly placed, in order to reach a specified destination. For this, the robot relies only on visual information for its reasoning process. Furthermore, the camera and the robot hardware and sensing are rather limited in terms of performance (see section 2). For obstacle identification we used an approach based on Canny edge detection and a suite of algorithms for extracting the locations of all obstacles in the robot's field of view. We also designed a higher reasoning process that allows the robot to make optimal decisions to avoid collisions in order to reach the specified destination.

In order to ensure maximum flexibility during algorithm development, we implemented the main image processing tasks in Matlab and interfaced the robot with Matlab. Thus, the robot acquires the images and sends each image to Matlab, which localizes the obstacles, makes an optimal decision and sends a trajectory update to the robot. The application we designed runs in real time due to the efficiency of the developed algorithms.

Of course, the algorithms can then be implemented directly on the robot controller, and we present a processing time estimation for this situation.

2. HARDWARE & SOFTWARE USED

The hardware used for physically implementing the robotic system includes a Blackfin SRV-1 robot (Fig. 1) and a personal computer. The SRV-1 robot is equipped with a DSP microcontroller running at 500 MHz, a 1.3 MP camera with adjustable image resolution and a Wi-fi hardware module. It also includes four DC brushless motors.

Fig. 1. The Blackfin SRV-1 robot used for implementation and testing.
The application algorithms were implemented in Java and in Matlab. These languages were selected because of the overall performance and flexibility that their combination offers, mainly: complex data structures, cross-platform support and fast matrix operations.

In order to interface the SRV-1 robot with a PC we developed a Wi-fi software communication module. From the application standpoint, the PC can thus be viewed as the processing unit and the robot as the sensing and actuating unit. This module supports requests such as movement commands, setting the image resolution and reading data from the sensors, such as the RGB colour image. A minimal MATLAB sketch of one control cycle over such a link is given below.
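As an illustration only: the address, port, command bytes and frame layout in the following sketch are hypothetical placeholders, not the actual SRV-1 protocol.

% One PC-side control cycle over the Wi-fi link (hypothetical protocol:
% the address, port, command bytes and frame layout are illustrative only).
robot = tcpclient("192.168.1.10", 10001);  % connect to the robot's Wi-fi module

write(robot, uint8('I'));                  % hypothetical "send one RGB image" request
raw = read(robot, 320*240*3);              % raw 320x240 colour frame (assumed format)
img = permute(reshape(raw, 3, 320, 240), [3 2 1]);  % bytes -> 240x320x3 image

% ... localize obstacles and compute a trajectory update (section 3) ...

write(robot, uint8('L'));                  % hypothetical movement command: steer left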
3. APPLICATION ALGORITHMS

3.1 The main Algorithm

3.3 Obstacle Detection Task

The obstacle detection task is responsible for localizing all obstacles in the current field of view that are on possible collision courses and for signalling this to the decision task. For this purpose, obstacle detection is performed in two separate steps, namely object feature detection and feature analysis. In the first step the objects' edges are detected using a Canny edge detector. This detector was selected because of its higher performance in comparison with others such as the Sobel, Roberts or Laplacian detectors (see Forsyth). After applying the Canny edge detector, we separate the edges of the objects of interest from the edges belonging to objects in the background by selecting only the bottom part of the image. This is because obstacles that are closer to the robot and influence its travel path are always located in the bottom part of the 2D image plane. The algorithm for the feature detection and its results for every step are presented below.
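A minimal MATLAB sketch of this first step (assuming the Image Processing Toolbox; the one-half crop fraction is an illustrative choice, not necessarily the paper's exact parameter):

% Feature detection sketch: Canny edges restricted to the bottom of the frame.
img  = imread('scene.jpg');      % a 320x240 RGB image from the robot camera
gray = rgb2gray(img);            % gray-scale transformation
bw   = edge(gray, 'canny');      % binary edge map from the Canny detector

% Keep only the bottom part of the 2D image plane, where obstacles that
% influence the travel path appear; the one-half fraction is an assumption.
[m, ~] = size(bw);
bw(1:floor(m/2), :) = false;     % discard background edges in the top half

[rows, cols] = find(bw);         % edge-pixel coordinates for feature analysis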
… detection and the start node is set before the computation in question.

When the robot first starts, the graph is initialized to contain only two nodes, namely the start node and the goal node, depicted in Fig. 9.

… position for avoiding the intermediary obstacle on the same side as the previous obstacle. This situation is presented in Fig. 11 below, where the intermediary obstacle is detected after the robot tried to avoid the first obstacle on the right side.

In the second setting the objects were placed such that the robot would also detect a secondary obstacle, and the distance to avoid that obstacle would be greater than the distance to avoid the first obstacle on the other side. Thus the optimal decision made by the robot in this case is to turn back and avoid the first obstacle on the other side. This can be seen in Fig. 14 below.

Fig. 15. The graph representation of the environment built by the robot for the path shown in Fig. 14.
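To make this graph representation concrete, here is a minimal MATLAB sketch (using the graph functions available since R2015b); the waypoint positions and edge weights are invented for illustration and do not reproduce the paper's exact construction:

% Environment graph sketch: the graph starts with the start and goal nodes;
% two candidate waypoints (left/right avoidance positions) are added after
% an obstacle detection. Positions, and therefore weights, are illustrative.
pos = [ 0.0  0.0;    % node 1: start
        0.0  2.0;    % node 2: goal
       -0.5  1.0;    % node 3: avoid the obstacle on the left
        0.5  1.0];   % node 4: avoid the obstacle on the right

e = [1 3; 1 4; 3 2; 4 2];                          % start -> waypoint -> goal
w = vecnorm(pos(e(:,1),:) - pos(e(:,2),:), 2, 2);  % Euclidean edge lengths
G = graph(e(:,1), e(:,2), w);

route = shortestpath(G, 1, 2);  % single-source shortest path (Dijkstra for w >= 0)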
The computational complexity of the application that we designed is determined mainly by the image transformation to gray scale and the three core algorithms, namely the Canny edge detection algorithm, the agglomerative clustering algorithm and the single-source shortest path algorithm. The gray-scale image transformation and the edge detection are the prime contributors to the application's computational overhead, because their computational complexity is O(m×n), where m = 240 and n = 320 (for an input image of 320×240). The agglomerative clustering has a computational complexity of O(n²), where n is variable and equals the number of edge pixels; the average number of edge pixels is about 1000. Thus, the agglomerative clustering algorithm generates a smaller overhead because its input size is considerably smaller than that of the Canny edge detection.
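For illustration, a minimal MATLAB sketch of bottom-up agglomerative clustering of the edge-pixel coordinates (rows, cols) produced by the detection sketch in section 3.3. It uses the Statistics and Machine Learning Toolbox; the 15-pixel cutoff is an assumed parameter, and the paper's modified algorithm may differ:

% Feature analysis sketch: group edge pixels into individual obstacles with
% bottom-up agglomerative (single-linkage) clustering on pixel coordinates.
pts = [rows, cols];                 % n x 2 edge-pixel coordinates, n ~ 1000
Z   = linkage(pts, 'single');       % merge tree over O(n^2) pairwise distances
lbl = cluster(Z, 'cutoff', 15, 'criterion', 'distance');  % cut tree into obstacles

for k = 1:max(lbl)                  % one centroid per detected obstacle
    c = mean(pts(lbl == k, :), 1);
    fprintf('obstacle %d at row %.1f, col %.1f\n', k, c(1), c(2));
end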
The single-source shortest path algorithm generates the smallest overhead in the application and has a computational complexity of O(n log n). The algorithm's input size comprises the number of nodes and the number of edges in the graph at a given moment.
In order for the robot to avoid a certain obstacle, these algorithms are executed in sequential order, so the running time of each processing step depends on the sum of machine instructions given by each algorithm's computational complexity. On average, this sum equals approximately 480k machine instructions. For the SRV-1 robot's DSP microcontroller at 500 MHz, the running time of the algorithm is around 0.96 milliseconds.
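This timing figure follows directly from the instruction count and the clock rate, assuming roughly one instruction per clock cycle, as the estimate implies:

\[
t \approx \frac{480 \times 10^{3}\ \text{instructions}}{500 \times 10^{6}\ \text{instructions/s}} = 0.96\ \text{ms}
\]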
For comparison, the round-trip exchange of information and processing using Matlab takes around 1 s, which still ensures smooth movement of the robot.

5. CONCLUSIONS

This paper presents a real-time application of vision-based autonomous navigation in unstructured static environments for mobile robots.

The results showed that the robot successfully navigates in an unstructured environment, making optimal decisions to avoid all the obstacles.

The application was structured according to the tasks the robot has to perform. The main idea was a continuous interaction between the robot and the surrounding environment.

Obstacles on the robot's travel path were localized in two steps. In the first step we detected and separated the edges of the objects of interest. In the second step we analyzed these edges to determine the individual objects and compute their coordinates. The detection of edges is based on the Canny edge detector, and the edge analysis is based on a modified bottom-up agglomerative clustering algorithm.

In order to make an optimal decision to avoid a certain obstacle, we maintain and successively update a graph-based representation of the environment. The collision avoidance decisions are based on Dijkstra's single-source shortest path algorithm.

The overall results prove that autonomous navigation in unstructured environments is possible with low-cost hardware such as the SRV-1 robot. Therefore, in the future we expect many such robotic systems to be used in a wide range of applications, thus continuing to have an increasing impact on people's lives.

REFERENCES

Akella, M.R. (2005). Vision-Based Adaptive Tracking Control of Uncertain Robot Manipulators. IEEE Transactions on Robotics, vol. 21, no. 4, pp. 747-753.

Alenya, G., Escoda, J., Martinez, A.B., and Torras, C. (2005). Using Laser and Vision to Locate a Robot in an Industrial Environment: A Practical Experience. Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 3528-3533.

Browning, B., and Veloso, M. (2005). Real-time, Adaptive Color-based Robot Vision. Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3871-3876.

Dumitrescu, E., Paturca, S., and Ilas, C. (2009). Optimal Path Computation for Mobile Robots using MATLAB. Revista de Robotica si Management, vol. 14, no. 2, p. 36.

Forsyth, D., and Ponce, J. (2003). Computer Vision: A Modern Approach. Prentice Hall, Upper Saddle River, New Jersey.

Gopalakrishnan, A., Greene, S., and Sekmen, A. (2005). Vision-based Mobile Robot Learning and Navigation. Proceedings of the 2005 IEEE International Workshop on Robot and Human Interactive Communication, pp. 48-53.

Novischi, D., Ilas, C., and Paturca, S. (2010). Obstacle Avoidance Based on Vision with Low Level Hardware Robots: Performance Comparison. Eurobot International Conference, Rapperswil, Switzerland.

Novischi, D., Paturca, S., and Ilas, C. (2009). Obstacle Avoidance Algorithm for Autonomous Mobile Robot. Revista de Robotica si Management, vol. 14, no. 2, p. 40.

Stronger, D., and Stone, P. (2007). A Comparison of Two Approaches for Vision and Self-Localization on a Mobile Robot. Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pp. 3915-3920.

Tanaka, K., Yamano, K., Kondo, E., and Kimuro, Y. (2004). A Vision System for Detecting Mobile Robots in Office Environments. Proceedings of the 2004 IEEE International Conference on Robotics and Automation, vol. 3, pp. 2279-2284.

Zhao, Y., Cheah, C.C., and Slotine, J.J.E. (2007). Adaptive Vision and Force Tracking Control of Constrained Robots with Structural Uncertainties. Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pp. 2349-2354.