

CEAI, Vol. 13, No. 2, pp. 26-31, 2011. Printed in Romania.

Vision Based Autonomous Navigation in Unstructured Static Environments for Mobile Ground Robots

D. Novischi*, C. Ilas*, S. Paturca*, M. Ilas**

*Dept. of Electrical Engineering, Politehnica University of Bucharest, Bucharest, Romania
(e-mail: [email protected], [email protected], [email protected])
**Dept. of Electronics and Telecommunications, Politehnica University of Bucharest, Bucharest, Romania
(e-mail: [email protected])

Abstract: This paper presents an algorithm for real-time vision based autonomous navigation for mobile ground robots in an unstructured static environment. The obstacle detection is based on Canny edge detection and a suite of algorithms for extracting the location of all obstacles in the robot's current view. In order to avoid obstacles we designed a reasoning process that successively builds an environment representation using the locations of the detected obstacles. This environment representation is then used for making optimal decisions on obstacle avoidance.

Keywords: autonomous robot, robot vision, image processing algorithms, unstructured environment, Canny edge detection, agglomerative clustering.

1. INTRODUCTION

Computer vision is a field of computer science that has been heavily researched in recent years. Its applications in robotics are diverse, ranging from face recognition to autonomous navigation (see Forsyth, Akella, Browning). General object recognition for mobile robots is of primary importance in order to generate a representation of the environment that robots can use for their reasoning processes. The ability to localize obstacles in real-time, with a high degree of accuracy, based on vision provides a foundation for the development of many state of the art autonomous robotic systems (see Gopalakrishnan, Alenya, Stronger, Novischi 2009). Directions such as optimal implementation, robust detection (especially when using limited performance cameras) and optimal avoidance decisions continue to be of primary interest.

This paper focuses on real-time vision based autonomous optimal navigation for basic mobile robots in an unstructured static environment (see also Zhao). Simply stated, the robot has to travel through an unknown environment, where still objects are randomly placed, in order to reach a specified destination. For this, the robot relies only on visual information for the reasoning process. Furthermore, the camera and the robot hardware and sensing are rather limited in terms of performance (see section 2). For obstacle identification we used an approach based on Canny edge detection and a suite of algorithms for extracting the location of all obstacles in the robot's field of view. We also designed a higher reasoning process that allows the robot to make optimal decisions to avoid collisions in order to reach the specified destination.

In order to ensure maximum flexibility during algorithm development, we implemented the main image processing tasks in Matlab and interfaced the robot with Matlab. Thus, the robot acquires the images and sends them to Matlab, which localizes the obstacles, makes an optimal decision and sends a trajectory update back to the robot. The application we designed runs in real-time due to the efficiency of the developed algorithms.

Of course, the algorithms can then be implemented directly on the robot controller, and we present a processing time estimation for this situation.

2. HARDWARE & SOFTWARE USED

The hardware used for physically implementing the robotic system includes a Blackfin SRV-1 robot (Fig. 1) and a personal computer. The SRV-1 robot is equipped with a DSP microcontroller running at 500 MHz, a 1.3 MP camera with adjustable image resolution and a Wi-fi hardware module. It also includes four DC brushless motors.

Fig. 1. The Blackfin SRV-1 robot used for implementation and testing.

The application algorithms were implemented in Java and in Matlab. These languages were selected because of the overall performance and flexibility that their combination offers, mainly: complex data structures, cross platform support and fast matrix operations.

In order to interface the SRV-1 robot with a PC we developed a Wi-fi software communication module. So, from the application standpoint, the PC can be viewed as the processing unit and the robot as the sensing and actuating unit. This module supports requests such as movement commands, setting the image resolution and reading data from the sensors, such as the RGB colour image.
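As an illustration, the sketch below shows what the PC side of such a request/response exchange could look like in Matlab. The address, port, command bytes and frame layout are placeholders, not the documented SRV-1 protocol, and the tcpclient interface is assumed to be available.

% Hypothetical PC-side exchange with the robot over Wi-fi.
% Address, port, command bytes and frame layout are placeholders.
robot = tcpclient('192.168.1.10', 10001);      % assumed robot address

write(robot, uint8('8'));                      % placeholder: drive forward
write(robot, uint8('I'));                      % placeholder: request an image

hdr = read(robot, 10);                         % placeholder frame header
len = double(typecast(hdr(7:10), 'uint32'));   % payload length (assumed layout)
jpg = read(robot, len);                        % JPEG payload

fname = [tempname '.jpg'];                     % decode the frame via a temp file
fid = fopen(fname, 'w'); fwrite(fid, jpg); fclose(fid);
img = imread(fname);                           % RGB image for the vision pipeline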
3. APPLICATION ALGORITHMS

3.1 The Main Algorithm

The main algorithm of our application is structured according to the three tasks that the robot has to perform, namely: the obstacle detection task, the decision task and the travel task. These tasks are interleaved in order for the robot to reach its given goal. The main logic of the algorithm starts from the idea of a permanent interaction between the robot and the surrounding environment. In this interaction the robot not only detects obstacles, but also successively builds a representation of the environment in order to make an optimal decision to avoid certain obstacles. After an obstacle has been avoided the robot repositions itself to reach the given goal. The main algorithm diagram is presented in Fig. 3.

Fig. 3. Main algorithm diagram.

3.2 The Travel Task

The travel task is responsible for driving the robot. This is accomplished by sending movement commands via the Wi-fi communication module. Because the SRV-1 robot is not equipped with any odometric sensor, such as encoders, the trajectory is not expressed in fixed coordinates, but in adjustable movement segments.

3.3 Obstacle Detection Task

The obstacle detection task is responsible for localizing all obstacles in the current field of view that are on possible collision courses and for signalling this to the decision task. For this purpose the obstacle detection is performed in two separate steps, namely: object feature detection and feature analysis. In the first step the object edges are detected using a Canny edge detector. This detector was selected because of its higher performance in comparison with others, such as Sobel, Roberts or Laplacian (see Forsyth). After applying the Canny edge detector, we separate the edges of the objects of interest from the edges belonging to objects in the background by selecting only the bottom part of the image. This is because obstacles that are close to the robot and influence its travel path are always located in the bottom part of the 2D image plane. The algorithm for the feature detection and its results for every step are presented below.

Fig. 4. SRV-1 camera images: a – RGB colour image, b – gray scale image, c – edge image, d – bottom part of edge image.

Objects Feature Detection Algorithm

Get RGB image from the robot
Transform the image to gray scale
Apply the Canny edge detector
Select the bottom part of the image
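A minimal Matlab sketch of these four steps is given below, assuming the Image Processing Toolbox and an RGB frame img already received from the robot; keeping the lower half of the image is our illustrative choice for "the bottom part".

% Object feature detection: gray scale -> Canny edges -> bottom crop.
gray  = rgb2gray(img);                    % transform to gray scale
edges = edge(gray, 'canny');              % binary edge map
rows  = size(edges, 1);
bottom = edges(floor(rows/2)+1:end, :);   % keep only the bottom part
[r, c] = find(bottom);                    % edge pixel coordinates
pts = [r, c];                             % input for the clustering step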
In the second step we analyze the detected edges in order to compute the location of the obstacles in the 2D image plane. In the ideal case, where object edges are perfectly detected, such as the one in Fig. 5, we could compute the object coordinates in the 2D image plane directly.

Fig. 5. Ideal case edge detection: left – bottom part of the camera captured image, right – ideal edge image.

Despite the fact that the tests were conducted in an office environment (see also Tanaka), the detection of edges is influenced by various factors such as ambient light conditions, shadows and similar overlapped objects. Due to these factors the object edges appear as in Fig. 6.

Fig. 6. Real case edge detection: left – bottom part of the camera captured image, right – real edge image.

In order to distinguish which edges belong to which object, we developed a modified version of the bottom-up agglomerative clustering algorithm. The elements that are clustered in this algorithm are the individual edge pixels. The merging step of two clusters is based on a threshold rather than on a rule such as the single linkage of the standard algorithm. The modified version of the algorithm is presented below:

Agglomerative Clustering Algorithm

Put each edge pixel into its own cluster
Compute the distance matrix
While (there are clusters to merge)
    For every cluster i
        For every cluster j
            If (distance between clusters <= threshold)
                Merge clusters
                Update distance matrix
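A direct Matlab transcription of this procedure is sketched below; pdist and squareform come from the Statistics Toolbox, and the threshold value is an assumption, not a figure from the paper.

function labels = cluster_edge_pixels(pts, threshold)
% Modified agglomerative clustering: each edge pixel starts in its own
% cluster and two clusters merge when their minimum inter-cluster
% distance falls below the threshold (instead of a linkage rule).
% pts: N-by-2 matrix of [row, col] edge pixel coordinates.
N = size(pts, 1);
labels = (1:N)';                         % one cluster per pixel
D = squareform(pdist(pts));              % N-by-N distance matrix
merged = true;
while merged                             % repeat until no clusters merge
    merged = false;
    ids = unique(labels);
    for i = 1:numel(ids)
        for j = i+1:numel(ids)
            mi = labels == ids(i);
            mj = labels == ids(j);
            if min(min(D(mi, mj))) <= threshold
                labels(mj) = ids(i);     % merge cluster j into cluster i
                merged = true;
            end
        end
    end
end
end

For example, labels = cluster_edge_pixels(pts, 3); would group edge pixels at most 3 pixels apart into the same object (the value 3 being illustrative).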
The output of this algorithm is a set of separate edge images, each containing an individual object. The results for the images in Fig. 6 are shown in Fig. 7.

Fig. 7. Agglomerative clustering results.

After the image segmentation into individual obstacles, the coordinates of each obstacle are computed in the 2D image plane. Then we compute the coordinates of a central obstacle, which includes all obstacles that are on a direct collision course with the robot. This approach is depicted in Fig. 8. Finally, the central obstacle is merged with those obstacles that are not on the direct collision course, but for which the distance between them and the central obstacle is smaller than the robot width, so that the robot could not pass between these obstacles.

Fig. 8. Obstacle localization diagram – including obstacle merging for close obstacles (O1, O2 and O4).

The algorithm for obtaining the location of the merged obstacle in the 2D image plane is presented below:

Obstacle Localization Algorithm

Segment the image into individual obstacles
Compute each obstacle's coordinates
Compute the coordinates of the central obstacle
Merge obstacles
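The sketch below is one possible Matlab rendering of these steps, using axis-aligned bounding boxes; the corridor (the image columns swept by the robot's heading) and the robot width in pixels are illustrative parameters, not values from the paper.

function box = localize_central_obstacle(pts, labels, corridor, robotWidthPx)
% Bounding box per cluster, a central obstacle covering the clusters on
% the collision course, then merging of clusters whose horizontal gap to
% the central obstacle is smaller than the robot width.
% box format: [minRow, minCol, maxRow, maxCol]; corridor: [leftCol, rightCol].
ids = unique(labels);
boxes = zeros(numel(ids), 4);
for k = 1:numel(ids)
    p = pts(labels == ids(k), :);
    boxes(k, :) = [min(p(:,1)), min(p(:,2)), max(p(:,1)), max(p(:,2))];
end
% clusters whose columns overlap the corridor are on a collision course
hit = boxes(:,2) <= corridor(2) & boxes(:,4) >= corridor(1);
if ~any(hit), box = []; return; end      % nothing on the collision course
box = [min(boxes(hit,1)), min(boxes(hit,2)), ...
       max(boxes(hit,3)), max(boxes(hit,4))];
% merge nearby clusters that the robot could not pass between
for k = find(~hit)'
    gap = max(boxes(k,2) - box(4), box(2) - boxes(k,4));  % horizontal gap
    if gap < robotWidthPx
        box = [min(box(1), boxes(k,1)), min(box(2), boxes(k,2)), ...
               max(box(3), boxes(k,3)), max(box(4), boxes(k,4))];
    end
end
end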
3.4 The Decision Task

The decision task is responsible for successively making optimal decisions on how to avoid obstacles in order to reach a given goal. This means that, ideally, the shortest travelling path should be selected. In order to make such decisions when avoiding obstacles, we maintain and successively update an environment representation in the form of a graph. In this graph the nodes represent the possible positions that the robot can reach and the edge weights represent the distances the robot must travel in order to reach these positions. These distances are estimated from the processed image, by counting the number of pixels between a given pair of positions. Making an optimal decision as to which successive positions the robot must reach in order to avoid the obstacles is based on Dijkstra's single source shortest path algorithm (see also Dumitrescu 2009). To correctly compute the shortest path, the graph is updated after each obstacle detection and the start node is set before the computation in question.
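A standard Matlab formulation of Dijkstra's algorithm over the weight matrix of this graph is sketched below; W(i,j) holds the pixel-estimated distance between positions i and j, with Inf where no edge exists.

function [dist, prev] = dijkstra(W, src)
% Single source shortest path over a weighted adjacency matrix W.
% dist(v): shortest distance from src to v; prev(v): predecessor of v
% on that path (0 if none), from which the path itself is recovered.
n = size(W, 1);
dist = inf(n, 1);  dist(src) = 0;
prev = zeros(n, 1);
visited = false(n, 1);
for k = 1:n
    d = dist;  d(visited) = Inf;
    [du, u] = min(d);                  % closest unvisited node
    if isinf(du), break; end           % remaining nodes are unreachable
    visited(u) = true;
    for v = find(~visited' & W(u, :) < Inf)
        if dist(u) + W(u, v) < dist(v) % relax edge (u, v)
            dist(v) = dist(u) + W(u, v);
            prev(v) = u;
        end
    end
end
end

The robot's next positions are obtained by following prev backwards from the goal node to the start node.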

When the robot first starts, the graph is initialized to contain only two nodes, namely the start node and the goal node, as depicted in Fig. 9.

Fig. 9. Environment graph for the path decision task; initial graph – V1 is the start node and V2 is the goal node.

As the robot travels and detects obstacles on its travel path, the graph is successively updated according to two situations: the detection of a new obstacle and the detection of a secondary obstacle. A new obstacle is an obstacle detected while the robot is moving straight toward the goal. By secondary obstacle, we refer to an obstacle detected while the robot is trying to avoid a previously detected obstacle. In the first situation, the graph is updated with two nodes that are placed between the start node and the goal node. These nodes correspond to the left and right positions that the robot can reach in order to avoid the obstacle edges, as depicted in Fig. 10.

Fig. 10. Found new obstacle graph update: left – the nodes' positions relative to the obstacle and the robot, right – the updated graph.

The algorithm for this graph update is presented below.

Found New Obstacle Graph Update Algorithm

Estimate the distances to avoid the obstacle from the 2D image plane obstacle coordinates
Create two nodes
Update the source adjacencies and edge weights
Update the new nodes' adjacencies and edge weights

In the second situation the robot detects an obstacle while trying to avoid another. In this case, the graph is updated with only one node, corresponding to the current position of the robot. This node is placed between the starting node and the node that the robot was trying to reach. The latter node is also updated so that it corresponds to the position for avoiding the intermediary obstacle on the same side as the previous obstacle. This situation is presented in Fig. 11, where the intermediary obstacle is detected after the robot tried to avoid the first obstacle on the right side.

Fig. 11. Found intermediary obstacle graph update: left – the node's position relative to the obstacle and the robot, right – the updated graph.

The algorithm for this graph update is presented below.

Found Intermediary Obstacle Graph Update Algorithm

Estimate the distances to avoid the obstacle from the 2D image plane obstacle coordinates
Create a new node
Update the source adjacencies and edge weights
Update the adjacencies and edge weights for the new node
Update the adjacencies and edge weights for the node that the robot was trying to reach
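A possible Matlab sketch of the first of these two updates (the two-node, found-new-obstacle case) on the weight matrix W is given below; dL and dR are the estimated distances from the source to the left and right avoidance positions, and dLg, dRg the estimated distances from those positions to the goal (all names are illustrative).

function W = add_new_obstacle(W, s, g, dL, dR, dLg, dRg)
% Found-new-obstacle update: insert the left/right avoidance nodes
% between the current source s and the goal g; Inf marks absent edges.
n = size(W, 1);
W = [W, inf(n, 2); inf(2, n + 2)];   % append nodes L = n+1, R = n+2
L = n + 1;  R = n + 2;
W(s, L) = dL;   W(L, s) = dL;        % source <-> left position
W(s, R) = dR;   W(R, s) = dR;        % source <-> right position
W(L, g) = dLg;  W(g, L) = dLg;       % left position <-> goal
W(R, g) = dRg;  W(g, R) = dRg;       % right position <-> goal
W(s, g) = Inf;  W(g, s) = Inf;       % the direct path is now blocked
end

The intermediary-obstacle update is analogous: it appends a single node and rewires the node the robot was trying to reach.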
4. RESULTS

In the experiments that we conducted, the behaviour of the robot was tested in various obstacle configurations. In this section we present two of the environment settings (obstacle configurations) that are representative for the application we designed. The objects that were used as obstacles in the two settings are various cups and glasses, small packages and note pads.

In the first setting we placed the obstacles so that the robot would successively detect only new obstacles on the travelling path to the goal object. This setting and the travel path of the robot from the start to the goal are shown in Fig. 12.

The graph representation of the environment built by the robot for this obstacle course is presented in Fig. 13. It can be seen that the trajectory that corresponds to the smallest distance on the graph matches the trajectory depicted in Fig. 12.

Fig. 12. Obstacle setting 1 and the robot travel path.

Fig. 13. The graph representation of the environment built by the robot for the path shown in Fig. 12.

In the second setting the objects were placed such that the robot would also detect a secondary obstacle, with the distance to avoid that obstacle greater than the distance to avoid the first obstacle on the other side. Thus the optimal decision made by the robot in this case is to turn back and avoid the first obstacle on the other side. This can be seen in Fig. 14.

Fig. 14. Obstacle setting 2 and the robot travel path.

The graph representation of the environment built by the robot for this obstacle course is presented in Fig. 15.

Fig. 15. The graph representation of the environment built by the robot for the path shown in Fig. 14.

The computational complexity of the application we designed is in principle given by the image transformation to gray scale and the three core algorithms, namely: the Canny edge detection algorithm, the agglomerative clustering algorithm and the single source shortest path algorithm. The gray scale transformation and the edge detection are the prime contributors to the application's computational overhead, because their computational complexity is O(m x n), where m = 240 and n = 320 (for an input image of 320x240). The agglomerative clustering has a computational complexity of O(n^2), where n is variable and equals the number of edge pixels; the average number of edge pixels is about 1000. Thus, the agglomerative clustering algorithm generates a smaller overhead, because its input size is considerably smaller compared to that of the Canny edge detection. The single source shortest path algorithm generates the smallest overhead in the application and has a computational complexity of O(n log n); its input size comprises the number of nodes and the number of edges in the graph at a given moment.

In order for the robot to avoid a certain obstacle, these algorithms are executed in sequential order. So, the running time of each processing step depends on the sum of machine instructions given by each algorithm's computational complexity. This sum equals, on average, approximately 480k machine instructions. For the SRV-1 robot's DSP microcontroller at 500 MHz, the running time of the algorithm is around 0.96 milliseconds.
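Assuming, as a rough estimate, one instruction per clock cycle, this figure follows directly:

t = 480,000 instructions / (500 x 10^6 instructions per second) = 0.96 ms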
For comparison, the round trip exchange of information and processing using Matlab takes around 1 s, which still ensures a smooth movement of the robot.

5. CONCLUSIONS

This paper presents a real-time application of vision based autonomous navigation in unstructured static environments for mobile robots.

The results showed that the robot successfully navigates in an unstructured environment, making optimal decisions to avoid all the obstacles.

The application was structured according to the tasks the robot has to perform. The main idea was a continuous interaction between the robot and the surrounding environment.

Obstacles on the robot's travel path were localized in two steps. In the first step we detected and separated the edges of the objects of interest. In the second step we analyzed these edges to determine the individual objects and compute their coordinates. The detection of edges is based on the Canny edge detector and the edge analysis is based on a modified bottom-up agglomerative clustering algorithm.

In order to make an optimal decision to avoid a certain obstacle, we maintain and successively update a graph-based representation of the environment. The collision avoidance decisions are based on Dijkstra's single source shortest path algorithm.

The overall results prove that autonomous navigation in unstructured environments is possible with low cost hardware such as the SRV-1 robot. Therefore, in the future we expect many such robotic systems to be used in a wide range of applications, thus continuing to have an increasing impact on people's lives.

REFERENCES

Akella, M.R. (2005). Vision-Based Adaptive Tracking Control of Uncertain Robot Manipulators. IEEE Transactions on Robotics, vol. 21, no. 4, pp. 747-753.

Alenya, G., Escoda, J., Martinez, A.B., and Torras, C. (2005). Using Laser and Vision to Locate a Robot in an Industrial Environment: A Practical Experience. Proceedings of the 2005 IEEE International Conference on Robotics and Automation, pp. 3528-3533.

Browning, B. and Veloso, M. (2005). Real-time, adaptive color-based robot vision. Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3871-3876.

Dumitrescu, E., Paturca, S., and Ilas, C. (2009). Optimal Path Computation for Mobile Robots using MATLAB. Revista de Robotica si Management, vol. 14, no. 2, p. 36.

Forsyth, D. and Ponce, J. (2003). Computer Vision: A Modern Approach. Prentice Hall, Upper Saddle River, New Jersey.

Gopalakrishnan, A., Greene, S., and Sekmen, A. (2005). Vision-based mobile robot learning and navigation. Proceedings of the 2005 IEEE International Workshop on Robot and Human Interactive Communication, pp. 48-53.

Novischi, D., Ilas, C., and Paturca, S. (2010). Obstacle Avoidance Based on Vision with Low Level Hardware Robots: Performance Comparison. Eurobot International Conference, Rapperswil, Switzerland.

Novischi, D., Paturca, S., and Ilas, C. (2009). Obstacle Avoidance Algorithm for Autonomous Mobile Robot. Revista de Robotica si Management, vol. 14, no. 2, p. 40.

Stronger, D. and Stone, P. (2007). A Comparison of Two Approaches for Vision and Self-Localization on a Mobile Robot. Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pp. 3915-3920.

Tanaka, K., Yamano, K., Kondo, E., and Kimuro, Y. (2004). A vision system for detecting mobile robots in office environments. Proceedings of the 2004 IEEE International Conference on Robotics and Automation, vol. 3, pp. 2279-2284.

Zhao, Y., Cheah, C.C., and Slotine, J.J.E. (2007). Adaptive Vision and Force Tracking Control of Constrained Robots with Structural Uncertainties. Proceedings of the 2007 IEEE International Conference on Robotics and Automation, pp. 2349-2354.
