

Vision-based robotic path following

Abdullah Mohammed and Lihui Wang*


Virtual Systems Research Centre,
University of Skövde,
541 28 Skövde, Sweden
E-mail: [email protected]
E-mail: [email protected]
*Corresponding author

Abstract: Most robotic manufacturing applications today require tedious and expensive pre-programming of the chosen robot each time a new task is introduced. In order to eliminate this tedious robot programming and improve productivity, this research proposes an adaptive approach that allows a robot to follow any arbitrary path defined by an operator. The developed system is designed to monitor and control the path-following operation locally or remotely through an established web-based architecture, without the need for extra programming. The objective of the research is achieved by integrating an image processing module with a robotic system. The real benefits of such a system are the ability to control and monitor the stepwise processing stages, as well as to automate the whole operation to a level of control defined in advance. In particular, this paper introduces a prototype that can be extended to various industrial applications, such as arc welding, laser cutting and water-jet cutting, which require controlling the 2D or 3D path of a robot.
Keywords: robotic path following; image processing; vision-based system;
adaptive manufacturing.
Reference to this paper should be made as follows: Mohammed, A. and
Wang, L. (2013) ‘Vision-based robotic path following’, Int. J. Mechanisms and
Robotic Systems, Vol. 1, No. 1, pp.95–111.
Biographical notes: Abdullah Mohammed earned his BS in Computer Engineering from the University of Mosul, Iraq, in 2002 and his MS in Industrial Informatics from the University of Skövde, Sweden, in 2011. Since then, he has been working as a Research Assistant with the Adaptive Manufacturing Group of the university. His work focuses on utilising vision systems in adaptive manufacturing and robotic environments.
Lihui Wang received his PhD and MS in Mechanical Engineering from Kobe University, Japan, in 1993 and 1990, respectively, and his BS in Machine Design from Tsinghua University (former Academy of Arts and Design), China, in 1982. He was an Assistant Professor at Kobe University and Toyohashi University of Technology (Japan) prior to joining the National Research Council of Canada in 1998, where he was a Senior Research Scientist before moving to Sweden in 2008. Currently, he is a Professor of Virtual Manufacturing at the University of Skövde, Sweden. His research interests are focused on human-robot collaboration, real-time monitoring and control, and adaptive process planning. He has published six books, eight journal special issues and over 220 archival journal articles and refereed conference papers in these areas. He is also a Professional Engineer, a member of CIRP, and a Fellow of the Society of Manufacturing Engineers.




1 Introduction

In general, industrial robotic applications need to be specified in sufficient detail in advance to ensure their success in real operations. It is, therefore, required to pre-programme and plan every single step in those applications. This normally leads to two main problems: first, extra hours need to be reserved for programming and calibrating a robot cell; second, the abilities of the robot are limited to certain predefined operations. These problems also limit the adaptability of the robot to work in a dynamic industrial environment.
Many researchers have been developing different approaches to overcome these limitations. One of the main aspects of these approaches is to improve the robot's ability to sense the surrounding environment. The sensing can take different forms, ranging from primitive sensors to highly sophisticated laser scanning systems. However, building a robust and adaptable vision-based robotic system remains a challenge and needs further development.
As documented in this paper, our research introduces a new approach that provides the facility for both local and remote operators to monitor and control an industrial robot. The aim of the developed system is to plan robot paths that precisely follow any arbitrary sketch drawn by an operator on a workpiece. This approach demonstrates the advantages of introducing image processing techniques into a robotic path-following system, which represents one step towards the adaptive decision-making capability of robotic systems.
This paper is organised as follows. Section 2 reviews the literature related to the topic
of this research. Section 3 overviews the developed system and discusses its architecture.
Section 4 describes the stages of the implementation. The experimental results are
presented in Section 5. Finally, the discussions and conclusions of the proposed research are given in Sections 6 and 7.

2 Literature review

Modern low-cost camera systems provide an acceptable level of accuracy in captured images. This accuracy can satisfy many industrial robotic operations and has therefore been investigated by many researchers attempting to integrate vision into robotic systems. Some of these studies focused on integrating the vision system with force control techniques to build an adaptive robot path planning system (Olsson et al., 2002; Baeten and Schutter, 2002; Chang and Wu, 2002). However, such approaches are limited to a narrow range of applications, namely those that require the robot's end effector to be in contact with the workpiece during the process.
Other approaches focused on using auxiliary devices to assist the vision system. Examples are those of Gonzalez-Galvan et al. (2008) and Lumia et al. (2010), which used a laser pointer together with a stereo vision system to track the arbitrary trajectory of a robot. Although these approaches were successful in their reported case studies, they needed a relatively high-cost vision system to accomplish the required objective and, in addition, cannot guarantee high accuracy in their final results.
Other research groups depended on online feedback from the vision system to track the desired path. This feedback was based on analysing camera images in real time. The cameras can be mounted on the end effector of the robot (Lange and Hirzinger, 2003), or they can move around the workpiece to track the path of the robot (Garcia et al., 2008). These approaches, however, lack the possibility to optimise the total path of the robot, because they do not have complete knowledge of the robot path before the robot starts.
Other researchers focused on solving common robot path-following problems. Some approaches addressed collision issues (Okada et al., 2001; Chan et al., 2011), whereas others focused on dynamic issues such as the torque limits of actuators (Antonelli et al., 2007). Moreover, Li et al. (2010) focused on the dynamic singularity and velocity constraints of the robot, studying its dynamic and kinematic models to resolve these issues in advance.
Bieman and Rutledge (1998) as well as Zhang et al. (2006) based their approaches on vision sensing abilities without the need for extra information. However, those approaches focused on developing a real-time closed-loop control system and did not consider optimising the entire robotic operation.

3 System overview

Different from the reported systems in the literature, the objective of our integrated
system is to provide the ability for an operator to locally or remotely monitor and control
the needed image processing and robotic path-following processes, effectively and
efficiently. While the level of this ability is decided in advance by system designers and application developers, an authorised operator has the freedom to access the local machines/robots within the system using a web browser. This is accomplished through a Java applet that the operator uses to access the system via a secured user account.

Figure 1 System architecture



Our research is conducted using a system that consists of a network camera and an
industrial robot, connected to a central application server through an Ethernet network. A
user interface is developed to control the camera and the robot in addition to the image
processing stages. The entire application includes camera control, image processing, and
robot path planning and control.
As shown in Figure 1, the system consists of four main components. The central component is the application server, which provides the functions needed to communicate with the other components of the system and is also responsible for image processing and path planning.
The second component is the controller of an industrial robot. It prepares the data the server needs to monitor the robot and, at the same time, receives control commands from the server and performs the required tasks. The controller is connected to the robot manipulator on one side and to the application server on the other.
The third component is an IP-based network camera, which can be controlled remotely through a URL. It is used to take a snapshot of an operator's hand-drawn sketch and then send the image to the server for image processing.
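As an illustration of this component, the following is a minimal sketch (in Python) of how the server side might request a snapshot from such a camera over HTTP. The address and snapshot path shown are hypothetical; the actual URL depends on the camera model and its firmware documentation.

```python
import urllib.request

# Hypothetical camera address and snapshot path (model/firmware dependent).
CAMERA_URL = "http://192.168.0.90/snapshot.jpg"

def capture_snapshot(filename="sketch.jpg"):
    # Request a single still image over HTTP and save it for processing.
    with urllib.request.urlopen(CAMERA_URL, timeout=5) as response:
        data = response.read()
    with open(filename, "wb") as f:
        f.write(data)
    return filename
```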
The remote operator, connected to the system via a web browser, represents the fourth component. The operator has access to the system to monitor and control the process through the facilities made available in the developed Java applet (the user interface) of the system.
The system is designed to
1 capture an image using the network camera
2 send the image to the server
3 retrieve its contours after the server processes the image
4 send back the points of the contours to the industrial robot, point by point and
contour by contour.
The robot can thus follow the targets to trace the contours of the image for path
following. In the current implementation, the camera is a Sony SNC-CH110, and the
robot is an ABB IRB140.

4 System implementation

The processing procedures presented in this paper can be divided into two parts: the first
part containing the steps of image processing, and the second part outlining the steps of
path planning, as detailed in the following subsections.

4.1 Image processing


The processing starts by capturing the hand-drawn paths through the network camera, continues with analysing the captured image, and ends with extracting the set of points that represent the robot paths.

4.1.1 Working area adjustment


The image is cropped and rotated to prepare it for the next processing step. This adjustment is based on defining a region of interest in the image. The original snapshot also contains typical lens distortion; therefore, one of the OpenCV (Intel Corporation, 2012) library tools is used to correct it.
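To make this step concrete, the following is a minimal sketch using OpenCV's standard undistortion. The calibration parameters and crop coordinates shown are placeholders; real values would come from an actual calibration of the camera (e.g., with cv2.calibrateCamera).

```python
import cv2
import numpy as np

# Placeholder intrinsics and distortion coefficients from a prior calibration.
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.25, 0.1, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

img = cv2.imread("sketch.jpg")
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

# Crop to the working area (region of interest); coordinates are illustrative.
x, y, w, h = 40, 30, 560, 420
roi = undistorted[y:y + h, x:x + w]
```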

4.1.2 Converting from coloured to greyscale


The colour information in the captured image is redundant; removing it at this early stage of the processing is therefore crucial for reducing computation time.

4.1.3 Moving average thresholding


Thresholding needs to be performed in order to identify the pixels that belong to the contours of the image. This process segments the image into two levels of pixel intensity: contour pixels are assigned one intensity level and background pixels another.
After analysing the captured image, it was found that the background pixels have a wide range of intensity values; therefore, the moving average thresholding technique (Woods and Gonzalez, 2007) was chosen to achieve the desired results at this stage. This technique scans the pixels of the image one by one; at each pixel, the average intensity of the pixels in the region surrounding that pixel is calculated. This average forms a local threshold against which the pixel's intensity is compared, and depending on the result the pixel is set to white or black. The size of the region surrounding the pixel was predefined based on several investigations of the image.
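The following is a minimal sketch of this thresholding step, assuming a greyscale image (Section 4.1.2) with dark contours on a light background. The window size n and scale factor b are illustrative values, not those used in the reported system.

```python
import numpy as np

def moving_average_threshold(gray, n=20, b=0.7):
    """Per-pixel thresholding against a running average of the last n pixels
    along the scan path; pixels darker than b times the local average become
    contour (1), the rest become background (0)."""
    h, w = gray.shape
    zig = gray.astype(np.float64).copy()
    zig[1::2] = zig[1::2, ::-1]            # zigzag scan: reverse odd rows
    z = zig.ravel()
    s = np.cumsum(z)
    s[n:] = s[n:] - s[:-n]                 # windowed sum of the last n pixels
    counts = np.minimum(np.arange(1, z.size + 1), n)
    m = s / counts                         # moving average at each pixel
    binary = (z < b * m).astype(np.uint8)  # dark pixels -> contour
    binary = binary.reshape(h, w)
    binary[1::2] = binary[1::2, ::-1]      # undo the zigzag reversal
    return binary
```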

4.1.4 Median smoothing


The previous image processing steps may introduce what is known as salt-and-pepper noise, which appears as small regions of spurious white or black pixels. These regions do not represent any part of the desired contours and need to be eliminated. Therefore, a median smoothing filter (Maheswari and Radha, 2010) was used to remove the effect of the noise.
The median filter works by going through the pixels of the image one by one. At each pixel, the median value of the pixel's neighbours is determined and assigned to the pixel. A 3 × 3 square kernel is used, as it provided the best results compared to other kernel types.

4.1.5 Converting the image to a binary array


The image is converted to a two-dimensional array of zeros and ones, in which each element represents one pixel of the image. This step is important as it prepares the image for the subsequent steps, where more advanced image processing algorithms are implemented.
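As a brief illustration of these two steps, the sketch below applies OpenCV's median filter to the thresholded image from the earlier sketch and then converts the result to the 0/1 array used by the subsequent algorithms; the 3 × 3 aperture follows the text above.

```python
import cv2
import numpy as np

# Suppress salt-and-pepper noise with a 3 x 3 median filter
# (cv2.medianBlur expects an 8-bit image).
binary_255 = (binary * 255).astype(np.uint8)
smoothed = cv2.medianBlur(binary_255, 3)

# Convert back to the two-dimensional 0/1 array described above.
binary = (smoothed > 0).astype(np.uint8)
```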

4.1.6 Thinning the contours


The contours of the image need to be one pixel thick. This allows a robot to trace the contours pixel by pixel and in the correct order. The most common algorithms used for thinning are template-based mark-and-delete algorithms. Two algorithms of this kind were tested in this research: the Zhang-Suen algorithm and the Stentiford algorithm (Stentiford and Mortimer, 1983). The Stentiford algorithm proved more efficient at thinning the contours with minimal side effects and was therefore adopted to achieve the required contour thickness.
The thinning process matches a set of templates against the processed image. Where a match is found, the algorithm deletes the pixel located at the centre of the template. This process is repeated over a number of iterations, gradually removing the outer layers of each contour in the image and resulting in contours of only one-pixel width (Martin and Tosunoglu, 2000). Figure 2 depicts the thinning process through an example.

Figure 2 Example of contour thinning process

The details of the Stentiford algorithm can be described as follows:


1 Find the pixel locations where the pixels in the image match the first template shown in Figure 3. Using this template, the algorithm can remove the pixels located at the top of the contour by scanning the pixels from left to right and from top to bottom.
2 Check the central pixel. If it is not an endpoint (i.e., it has at least two neighbouring pixels that belong to the contour), mark it for later deletion.
3 Repeat Step 1 and Step 2 for all pixel locations that match the first template.

4 Repeat Step 1 to Step 3 for the other templates as shown in Figure 3:


a The second template will detect the pixels that are located at the left of the
contour, by scanning the image from bottom to top and from left to right.
b The third template will detect the pixels that are located at the bottom of the
contour, by scanning the image from right to left and from bottom to top.
c The fourth template will detect the pixels that are located at the right side of the
contour, by scanning the image from top to bottom and from right to left.
5 Reset the pixels that are marked in the previous steps to zeros.

Figure 3 Templates used in the Stentiford algorithm
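The following is a compact sketch of this procedure. Two simplifications should be noted: all four templates are scanned in a single raster order rather than the four scan directions listed above, and, in addition to the endpoint check of Step 2, a connectivity check from Stentiford's original formulation is included so that deleting a pixel cannot split a contour.

```python
import numpy as np

# Clockwise 8-neighbour offsets, starting from the pixel directly above.
NBRS = [(-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1)]

def connectivity(img, r, c):
    # Number of 0 -> 1 transitions around the pixel; a value of 1 means the
    # pixel can be deleted without splitting its contour.
    v = [img[r + dr, c + dc] for dr, dc in NBRS]
    return sum(v[i] == 0 and v[(i + 1) % 8] == 1 for i in range(8))

# Each template: (offset that must be 0, offset that must be 1) around the
# central pixel -- top, left, bottom and right edges respectively.
TEMPLATES = [((-1, 0), (1, 0)), ((0, -1), (0, 1)),
             ((1, 0), (-1, 0)), ((0, 1), (0, -1))]

def stentiford_thin(img):
    """Mark-and-delete thinning of a 0/1 array (zero border assumed)."""
    while True:
        marked = set()
        for zero_off, one_off in TEMPLATES:
            for r in range(1, img.shape[0] - 1):
                for c in range(1, img.shape[1] - 1):
                    if (img[r, c] == 1
                            and img[r + zero_off[0], c + zero_off[1]] == 0
                            and img[r + one_off[0], c + one_off[1]] == 1
                            and sum(img[r + dr, c + dc] for dr, dc in NBRS) > 1
                            and connectivity(img, r, c) == 1):
                        marked.add((r, c))   # not an endpoint, safe to remove
        if not marked:
            return img
        for r, c in marked:                  # Step 5: reset the marked pixels
            img[r, c] = 0
```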

4.1.7 Detecting the intersection points of the contours


Some of the contours in the image intersect with each other, owing to the nature of hand-drawn sketches for welding/cutting applications. In the presence of intersections, the contour labelling algorithm applied in the next step may detect intersecting contours as a single contour, making it difficult for a robot to follow at the intersections.
To solve this problem, the intersecting pixels of the contours are detected first and then set to zero. This divides the intersecting contours into two or more subsections, each of which can be assigned a label in the subsequent labelling process. Eventually, these subsections are merged accordingly to retrieve the original contours.
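The paper does not spell out the detection rule itself; one common criterion for a one-pixel-wide contour, sketched below, is to treat any pixel with three or more eight-connected neighbours as an intersection.

```python
import numpy as np

def remove_intersections(img):
    """Zero out pixels where thinned contours cross.  On a one-pixel-wide
    contour, a pixel with three or more 8-connected neighbours lies at an
    intersection (this criterion is an assumption, not from the paper)."""
    to_clear = []
    for r in range(1, img.shape[0] - 1):
        for c in range(1, img.shape[1] - 1):
            if img[r, c] == 1:
                n = img[r - 1:r + 2, c - 1:c + 2].sum() - 1  # neighbour count
                if n >= 3:
                    to_clear.append((r, c))
    for r, c in to_clear:
        img[r, c] = 0
    return img
```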

4.1.8 Contour labelling


The contours in the image need to be labelled in order to extract them later one by one.
Therefore, each contour must possess a unique label to distinguish it from other contours
and the background.
The connected component recursive algorithm (Dillencourt and Samet, 1992) is used
to label the contours in the image. The labelling process can be described as follows
(Mukundan, 1999):
1 Scan the image from top to bottom and left to right. If a pixel is not labelled yet, do
the following:
a Assign a new label to the pixel.

b If the pixel has connected pixels (at least one of the pixel's four-neighbours has a value of 1; the four-neighbours of a pixel are the neighbours that share one side with it), move to the first connected pixel.
c Assign the same label to the neighbour pixel.
d Repeat Step b to Step c until there are no connected pixels left.
2 Continue the image scanning. If an unlabelled pixel is found, repeat Step 1.
Figure 4 describes the contour labelling process: it starts with the binary image, which consists of ones for the foreground and zeros for the background, and ends with a labelled image.

Figure 4 Labelling of contours
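A sketch of the labelling process is given below. The recursion of the original algorithm is replaced by an explicit stack, which behaves identically but avoids exceeding the recursion limit on long contours; the four-neighbour rule follows the description above (for thinned strokes with diagonal steps, the eight-neighbour offsets of the thinning sketch could be substituted).

```python
import numpy as np

def label_contours(img):
    """Connected-component labelling of a 0/1 image with 4-connectivity."""
    labels = np.zeros(img.shape, dtype=np.int32)
    next_label = 1
    for r0 in range(img.shape[0]):
        for c0 in range(img.shape[1]):
            if img[r0, c0] == 1 and labels[r0, c0] == 0:
                stack = [(r0, c0)]           # seed a new component
                labels[r0, c0] = next_label
                while stack:
                    r, c = stack.pop()
                    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        nr, nc = r + dr, c + dc
                        if (0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]
                                and img[nr, nc] == 1 and labels[nr, nc] == 0):
                            labels[nr, nc] = next_label
                            stack.append((nr, nc))
                next_label += 1
    return labels
```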

4.1.9 Contour tracing


The nature of the robot path-following application requires that the points (pixels) in each contour be organised in the correct sequence, so that the points of each contour are arranged in sequential order from the beginning of the contour to the end.
The process for contour tracing can be described as follows:
1 Scan the pixels in the image to find one that has a label.
2 Calculate the connectivity value of the pixel, defined as the number of transitions between zero and the contour value among the eight-connected neighbours of the pixel. If it equals 1 (indicating a starting or ending pixel of a contour), then:
a Add the first pixel to a new list that represents the contour.
b Find the neighbour pixel of the current pixel that belongs to the same contour
and add it to the list.
c Assign a zero value to the original pixel in the image.
d Repeat Step b and Step c until the last pixel is reached.
3 Continue scanning the image. If another pixel with a label is found, repeat Step 2.
Figure 5 illustrates the tracing of the pixels of a contour.

Figure 5 Contour tracing
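The following sketch implements this tracing procedure, reusing the NBRS offsets from the thinning sketch; traced pixels are cleared from the image so that each is visited exactly once, and a zero border around the image is assumed.

```python
def connectivity_number(img, r, c):
    # 0 -> 1 transitions around the eight neighbours (see the thinning sketch).
    v = [img[r + dr, c + dc] for dr, dc in NBRS]
    return sum(v[i] == 0 and v[(i + 1) % 8] == 1 for i in range(8))

def trace_contours(img, labels):
    """Return each labelled contour as an ordered list of (row, col) points."""
    contours = []
    for r0 in range(1, img.shape[0] - 1):
        for c0 in range(1, img.shape[1] - 1):
            # A connectivity number of 1 marks the start or end of a contour.
            if img[r0, c0] == 1 and connectivity_number(img, r0, c0) == 1:
                lab = labels[r0, c0]
                path = [(r0, c0)]
                img[r0, c0] = 0
                r, c = r0, c0
                moved = True
                while moved:
                    moved = False
                    for dr, dc in NBRS:
                        nr, nc = r + dr, c + dc
                        if img[nr, nc] == 1 and labels[nr, nc] == lab:
                            path.append((nr, nc))   # next pixel on the contour
                            img[nr, nc] = 0
                            r, c = nr, nc
                            moved = True
                            break
                contours.append(path)
    return contours
```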

4.1.10 Merging the broken contours


As a side effect of resetting the intersection pixels of the contours, the contours are divided into shorter sub-contours. These sub-contours need to be merged again to restore the original (and final) contours for better traceability and smoother robotic motion.
An approach is presented in this paper to merge the broken contours of the image. The decision to merge takes into consideration both the distance between the endpoints of these sub-contours and the average direction (tangent) at their endpoints. The approach can be specified by the following procedure:

1 Take the first contour and calculate the closest distance between its endpoints and those of the other contours in the image. If there is a contour that is close enough (the distance is equal to or less than a specified threshold value), then do the following:
a Calculate the tangent of the first contour at the endpoint that is closest to the second contour. This is done by calculating the average angle over a few contour points near the endpoint, as expressed in equation (1).

$$\text{Average angle} = \frac{\sum_{n=0}^{5} \operatorname{arctan2}\left( x_{n1} - x_{n2},\ y_{n1} - y_{n2} \right)}{n} \qquad (1)$$
b Repeat Step a for the second contour found in Step 1.
c Compare the two average angles of the two contours. If the angular difference is
within a specified range, then merge the contours in the correct order.

2 Repeat Step 1 until all the contours are checked.

3 Repeat Step 1 and Step 2 until no contours need to be merged.

Figure 6 shows the merging process through an example.



Figure 6 Merging the broken contours
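The following is one possible sketch of this merging procedure; the distance and angle thresholds are illustrative, and contours are assumed to contain at least two points each. Because the endpoint tangents of two sub-contours that continue one another point towards each other, directions that agree up to roughly π are accepted.

```python
import math

def endpoint_angle(contour, at_start, k=5):
    """Average tangent direction at an endpoint, following equation (1):
    the mean of arctan2 over the k point pairs nearest the endpoint.
    Points are (row, col), so col plays the role of x and row the role of y."""
    pts = contour[:k + 1] if at_start else contour[::-1][:k + 1]
    angles = [math.atan2(pts[i][1] - pts[i + 1][1], pts[i][0] - pts[i + 1][0])
              for i in range(len(pts) - 1)]
    return sum(angles) / len(angles)

def merge_contours(contours, dist_thresh=10.0, angle_thresh=0.5):
    """Greedily merge sub-contours whose endpoints are close and whose
    endpoint directions roughly agree (thresholds are illustrative)."""
    merged = True
    while merged:
        merged = False
        for i in range(len(contours)):
            for j in range(i + 1, len(contours)):
                # Examine all four endpoint pairings of the two contours.
                for ei, ej in ((0, 0), (0, -1), (-1, 0), (-1, -1)):
                    p, q = contours[i][ei], contours[j][ej]
                    if math.dist(p, q) > dist_thresh:
                        continue
                    a = endpoint_angle(contours[i], ei == 0)
                    b = endpoint_angle(contours[j], ej == 0)
                    d = abs(a - b) % (2 * math.pi)
                    d = min(d, 2 * math.pi - d)  # wrapped difference in [0, pi]
                    if d <= angle_thresh or abs(d - math.pi) <= angle_thresh:
                        # Orient both pieces so contour i ends where j begins.
                        ci = contours[i][::-1] if ei == 0 else contours[i]
                        cj = contours[j] if ej == 0 else contours[j][::-1]
                        contours[i] = ci + cj
                        del contours[j]
                        merged = True
                        break
                if merged:
                    break
            if merged:
                break
    return contours
```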

4.2 Path planning


Before the robot starts the path-following process, two issues must be handled in advance. The first is to reduce the number of points in each contour by removing redundant points; the second is to find an optimal or near-optimal sequence of paths among those contours, so as to minimise the robot's travelling time. This is treated as path planning for robotic path following. More details are explained in the subsequent sections.

Figure 7 Point reduction process of one contour



4.2.1 Reducing points in contours


One of the most efficient algorithms for reducing the number of points in a contour is the Douglas-Peucker algorithm (Douglas and Peucker, 1973). It is commonly used due to its straightforward implementation and high performance (Wu and Marquez, 2003; Ebisch, 2002). The algorithm was originally developed to control the level of detail in geographical maps, and has since been used extensively for reducing the points of polylines and polygons in image processing.
This algorithm is also adopted in this research. It starts by saving the first and last points of the contour to a simplification list, and then finds the point with the maximum distance to the straight line connecting the first and last points. If the distance is greater than a specified threshold value, the point is added to the simplification list; this addition takes into consideration the sequence of the points in the list, so that the new point is inserted between the two original points. The process is repeated to simplify the remaining subsections of the contour, with the procedure calling itself recursively until the distance becomes less than the threshold value. Figure 7 explains the iterative steps of this procedure for one contour.
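A standard recursive implementation of the Douglas-Peucker simplification is sketched below; epsilon corresponds to the threshold value mentioned above.

```python
import math

def perpendicular_distance(p, a, b):
    # Distance from point p to the straight line through a and b.
    if a == b:
        return math.dist(p, a)
    (x1, y1), (x2, y2), (x0, y0) = a, b, p
    num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return num / math.dist(a, b)

def douglas_peucker(points, epsilon):
    """Recursively keep only the points that deviate from the chord by more
    than epsilon; the first and last points are always retained."""
    if len(points) < 3:
        return points
    dmax, index = 0.0, 0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            dmax, index = d, i
    if dmax > epsilon:
        # Split at the farthest point and simplify both halves.
        left = douglas_peucker(points[:index + 1], epsilon)
        right = douglas_peucker(points[index:], epsilon)
        return left[:-1] + right
    return [points[0], points[-1]]
```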

4.2.2 Optimising the order of contours


For practical industrial applications such as robotic welding and laser cutting, productivity and efficiency are of importance. This motivates the work of optimising the sequence of contours (paths) to be followed by a robot for the minimum air-travel time.
The nearest neighbour algorithm is chosen to optimise the order of contours for path following. This algorithm is much faster than genetic algorithms, although it does not guarantee convergence to the optimal solution. The reason behind the speedy processing is that the algorithm is a local optimisation heuristic and does not require heavy computational operations. Nevertheless, it stands a good chance of converging to the optimal solution.
The nearest neighbour algorithm implemented in this research can be described by the
following steps (Edan et al., 1991):
1 Pick a random contour and add it to the sequence of contours.
2 Find the nearest neighbouring contour to the current contour. The evaluation is done
through the following steps, as depicted in Figure 8:
a Pick the far end of the current contour.
b Calculate the distance between the chosen endpoint and those of other contours
that have not yet been added to the sequence.
c Select the contour that has the shortest distance to the current contour, and this
contour is the nearest neighbour to the current contour.
3 Add the selected contour in Step 2 to the sequence.
4 Repeat Step 2 and Step 3 until all the contours are added to the sequence.

Figure 8 Finding the nearest neighbouring contour
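The following sketch implements this ordering; starting from the first contour rather than a random one is an inessential simplification, and contours are reversed as needed so that the robot always enters each contour from its nearest end.

```python
import math

def order_contours(contours):
    """Greedy nearest-neighbour ordering to shorten the robot's air moves.
    Each contour is an ordered point list; the far end of the contour just
    added is compared against both endpoints of every remaining contour."""
    remaining = list(contours)
    sequence = [remaining.pop(0)]          # Step 1: pick a starting contour
    while remaining:
        tail = sequence[-1][-1]            # far end of the current contour
        best, best_d, flip = 0, float("inf"), False
        for k, c in enumerate(remaining):
            for end, fl in ((c[0], False), (c[-1], True)):
                d = math.dist(tail, end)
                if d < best_d:
                    best, best_d, flip = k, d, fl
        nxt = remaining.pop(best)
        sequence.append(nxt[::-1] if flip else nxt)  # enter from nearest end
    return sequence
```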

5 Experimental results

A snapshot of an arbitrary hand-drawn sketch was taken by a robot-mounted camera and used to study and test the results at each stage of the image processing and path planning. The implemented system is able to maintain a balance between the quality of the final results and the speed of performance. In addition, the results revealed that the system can fulfil the expectations of several common industrial applications.

Figure 9 Experimental results at six processing stages

The image processing and path planning were developed and performed using the Wise-ShopFloor framework (Wang, 2008). The application server in the current setup has the following configuration: an Intel Core i5 2.8 GHz processor, 4 GB of RAM and an NVidia GeForce GT 240 graphics card, running the Windows Vista operating system. Figure 9 shows the end results of the image processing and path planning at different stages.
The scenario starts when an operator sketches an arbitrary path on the drawing board (for lab testing). The sketch is then captured using the network camera, after which the server processes the image to retrieve its contours and plan the paths for the robot; these paths are represented as a set of points in 3D space. The server then sends these points to the robot as network communication data packets. The robot unpacks the packets and retrieves the required targets. Finally, the robot follows the paths and re-sketches the contours on the board. During the operation, the operator can monitor the robot and supervise the process remotely through a web-based Java applet. Figure 10 shows the robot sketching the paths while being monitored by a remote operator through the web.

Figure 10 A robot following the defined paths

6 Discussions

This paper presents a web-based system for image processing and path planning for robotic path following. It can be extended to other industrial applications such as robotic welding and laser cutting. The remote monitoring and control functions (beyond the scope of this paper) make the system practical for distant operations in distributed manufacturing environments.
The results of this research indicate that the developed image processing and robot path planning system can perform as efficiently as commercial tools when applied to path following. The real advantages of this system are its consistency, user-friendliness, seamless integration with robots, and robustness towards industrial applications, both locally and remotely.

The prototype system has been tested using five different snapshots with varying numbers of contours, ranging from ten contours in a simple hand sketch to 46 in a complex one. The time spent in each step of the image processing and path planning was recorded and analysed. Figure 11 presents the detailed processing times for each step.

Figure 11 Detailed analyses of the experimental results. The chart data are reproduced below as the processing time (in milliseconds) of each step, for snapshots containing 10 to 46 contours:

No. of contours | Greyscale | Thresholding | Median | To array | Thinning | Labelling | Tracing | Reduction | Sequencing
10 | 24 | 382 | 496 | 225 | 115 | 14 | 57 | 280 | 0
14 | 54 | 493 | 489 | 304 | 107 | 17 | 88 | 139 | 8
21 | 23 | 358 | 451 | 280 | 352 | 13 | 54 | 304 | 0
31 | 26 | 381 | 441 | 279 | 183 | 15 | 62 | 213 | 0
46 | 24 | 358 | 442 | 274 | 240 | 13 | 62 | 350 | 1

(Columns, in processing order: converting to greyscale; moving average thresholding; median smoothing; converting the image to an array; thinning the contours; labelling the contours; tracing the contours; reducing the points; optimising the contours' sequence.)

Analysing the processing times shows that the time spent on moving average thresholding and median smoothing does not depend on the number of contours. This is due to the nature of the algorithms used, which scan the image and manipulate its pixels one by one; therefore, the most influential factor in their processing time is the resolution of the image.
As shown in Figure 12, a rather high percentage of the processing time is consumed by reducing the points in the contours. The results also show that the tracing of the contours depends on the number of contours, because the contour tracing algorithm works in a recursive loop to process the contours in the image.

Figure 12 Percentage of time needed for each processing step: median smoothing 27%, moving average thresholding 23%, converting the image to an array 16%, reducing the points 15%, thinning the contours 12%, tracing the contours 4%, converting to greyscale 2%, labelling the contours 1%, optimising the contours' sequence 0%

7 Conclusions and future work

This research demonstrates one example of a programming-free robotic system. The experimental results show that an operator can utilise the system directly without prior skills in robot programming. It saves not only the extra programming hours but also the training sessions that operators would otherwise need.
Further reducing the processing time is the aim of our future work. This can be achieved by improving the efficiency of the algorithms used in the time-consuming processes. As mentioned earlier, the system developed in this research opens the door to many industrial applications by enhancing the adaptability of robots in dynamic industrial environments. One such application would rely on the reported algorithms and the prototype to build a robotic cutting application (e.g., water-jet or laser cutting) with the ability to follow arbitrary paths. The system can also be utilised to improve the adaptability of arc welding and friction-stir welding, where the welding seams can be detected by a robot-mounted camera. Remote operation of an automated robotic cell will also be tested for distributed manufacturing.
Another example of a direct industrial application is using the reported system in the soldering of electronic circuit boards, as utilising robots in that kind of application is becoming more and more common. The gluing or sealing of parts in the automotive industry is yet another possible industrial application where both accuracy and flexibility are quite important.
Probe testing of electronic circuit boards is another example of a future application that can benefit from the proposed system, where image processing techniques help the system identify the locations of the testing points on the board.

References
Antonelli, G., Chiaverini, S., Gerio, G., Palladino, M. and Renga, G. (2007) ‘SmartMove4: an
industrial implementation of trajectory planning for robots’, Industrial Robot: An
International Journal, Vol. 34, No. 3, pp.217–224.
Baeten, J. and Schutter, J. (2002) ‘Hybrid vision/force control at corners in planar robotic-contour
following’, IEEE/ASME Transactions on Mechatronics, Vol. 7, No. 2, pp.143–151.
Bieman, L. and Rutledge, G. (1998) Vision Guided Automatic Robotic Path, US Patent.
Chan, A., Leonard, S., Croft, E. and Little, J. (2011) ‘Collision-free visual servoing of an
eye-in-hand manipulator via constraint-aware planning and control’, American Control
Conference, San Francisco, USA, pp.4642–4648.
Chang, W. and Wu, C. (2002) ‘Integrated vision and force control of a 3-DOF planar surface’,
IEEE International Conference on Control Applications, Glasgow, Scotland, UK, pp.748–753.
Dillencourt, M.B. and Samet, H. (1992) ‘A general approach to connected-components labeling for
arbitrary image representations’, Journal of Association for Computing Machinery, Vol. 39,
No. 2, pp.253–280.
Douglas, D. and Peucker, T. (1973) ‘Algorithms for the reduction of the number of points
required to represent a digitized line or its caricature’, The Canadian Cartographer, Vol. 10,
No. 2, pp.112–122.
Ebisch, K. (2002) ‘A correction to the Douglas-Peucker line generalization algorithm’, Computers
and Geosciences, Vol. 28, No. 8, pp.995–997.
Edan, Y., Flash, T., Peiper, U., Shmulevich, I. and Sarig, Y. (1991) ‘Near-minimum-time task
planning for fruit-picking robots’, IEEE Transactions on Robotics and Automation, Vol. 7,
No. 1, pp.48–56.
Garcia, G., Pomares, J. and Torres, F. (2008) ‘Automatic robotic tasks in unstructured environments
using an image path tracker’, Control Engineering Practice, Vol. 17, No. 5, pp.597–608.
Gonzalez-Galvan, E., Loredo-Flores, A., Cervantes-Sanchez, J., Aguilera-Cortes, L. and Skaar, S.
(2008) ‘An optimal path-generation algorithm for manufacturing of arbitrarily curved surfaces
using uncalibrated vision’, Robotics and Computer-Integrated Manufacturing, Vol. 24, No. 1,
pp.77–91.
Intel Corporation (2012) OpenCV Library, [online] http://opencv.willowgarage.com/wiki/
(accessed 11 April 2012).

Lange, F. and Hirzinger, G. (2003) ‘Predictive visual tracking of lines by industrial robots’, The
International Journal of Robotics Research, Vol. 22, Nos. 10–11, pp.889–903.
Li, H., Liu, J., Li, Lu, X., Yu, K. and Sun, L. (2010) ‘Trajectory planning for visual servoing with
some constraints’, Proceedings of the 29th Chinese Control Conference, Beijing, China,
pp.3636–3642.
Lumia, R., Gatla, C., Wood, J. and Starr, G. (2010) ‘Laser tagging: an approach for rapid robot
trajectory definition’, IEEE International Conference on Industrial Technology (ICIT), Chile,
pp.535–540.
Maheswari, D. and Radha, V. (2010) ‘Noise removal in compound image using median filter’,
International Journal of Computer Science and Engineering, Vol. 2, No. 4, pp.1359–1362.
Martin, A. and Tosunoglu, S. (2000) ‘Image processing techniques for machine vision’, Florida
Conference on Recent Advances in Robotics, Florida Atlantic University, Boca Raton, Florida.
Mukundan, R. (1999) ‘Binary vision algorithms in Java’, International Conference on Image and Vision Computing IVCNZ’99, New Zealand, pp.145–150.
Okada, N., Minamoto, K. and Kondo, E. (2001) ‘Collision avoidance for a visuo-motor system
with a redundant manipulator using a self-organizing visuo-motor map’, IEEE International
Symposium on Assembly and Task Planning, Fukuoka, Japan, pp.104–109.
Olsson, T., Bengtsson, J., Johansson, R. and Malm, H. (2002) ‘Force control and visual
servoing using planar surface identification’, IEEE International Conference on Robotics and
Automation, Washington, USA, pp.4211–4216.
Stentiford, F. and Mortimer, R. (1983) ‘Some new heuristics for thinning binary handprinted
characters for OCR’, IEEE Transactions on Systems, Man and Cybernetics, Vol. 13, No. 1,
pp.81–84.
Wang, L. (2008) ‘Wise-ShopFloor: an integrated approach for web-based collaborative
manufacturing’, IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications
and Reviews, Vol. 38, No. 4, pp.562–573.
Woods, R.E. and Gonzalez, R.C. (2007) Digital Image Processing, 3rd ed., Prentice Hall,
New Jersey.
Wu, S-T. and Marquez, M.R.G. (2003) ‘A non-self-intersection Douglas-Peucker algorithm’,
Proceedings of the XVI Brazilian Symposium on Computer Graphics and Image Processing,
Brazil, pp.60–66.
Zhang, H., Chen, H., Xi, N., Zhang, G. and He, J. (2006) ‘On-line path generation for robotic
deburring of cast aluminum wheels’, IEEE/RSJ International Conference of Intelligent Robots
and Systems, Beijing, China, pp.2400–2405.
