1 Introduction
2 Literature review
The cameras can be mounted on the end effector of the robot (Lange and Hirzinger, 2003), or they can move around the workpiece to track the path of the robot (Garcia et al., 2008). These approaches, however, cannot optimise the total path of the robot, because the complete robot path is not known before the robot starts.
Other researchers focused on solving common robot path-following problems. Some approaches addressed collision issues (Okada et al., 2001; Chan et al., 2011), whereas others tackled dynamic issues such as the torque limits of actuators (Antonelli et al., 2007). Li et al. (2010) considered the dynamic singularities and velocity constraints of the robot, studying its dynamic and kinematic models to resolve these issues in advance.
Bieman and Rutledge (1998) as well as Zhang et al. (2006) based their approaches on vision sensing alone, without the need for extra information. However, those approaches focused on developing real-time closed-loop control systems and did not consider optimising the entire robotic operation.
3 System overview
Different from the systems reported in the literature, the objective of our integrated system is to enable an operator to monitor and control the required image processing and robotic path-following processes, locally or remotely, both effectively and efficiently. While the level of access is decided in advance by system designers and application developers, an authorised operator is free to access the local machines/robots within the system using a web browser. This is accomplished through a Java applet that the operator can use to access the system via a secured user account.
Our research is conducted using a system that consists of a network camera and an
industrial robot, connected to a central application server through an Ethernet network. A
user interface is developed to control the camera and the robot in addition to the image
processing stages. The entire application includes camera control, image processing, and
robot path planning and control.
As shown in Figure 1, the system consists of four main components. The central component is an application server that provides the functions needed to communicate with the other components of the system; it is also responsible for image processing and path planning.
The second component is the controller of an industrial robot, which prepares the data for the server to monitor the robot and, at the same time, receives control commands from the server and performs the required tasks. This controller is connected to the robot manipulator on one side and to the application server on the other.
The third component is an IP-based network camera, which can be controlled remotely through a URL. It is used to take a snapshot of an operator's hand drawing (a sketch) and then send the image to the server for image processing.
The fourth component is the remote operator, connected to the system via a web browser. The operator has access to the system to monitor and control the process, using the facilities made available in the developed Java applet (user interface) of the system.
The system is designed to
1 capture an image using the network camera
2 send the image to the server
3 retrieve its contours after the server processes the image
4 send back the points of the contours to the industrial robot, point by point and
contour by contour.
The robot can thus follow the targets to trace the contours of the image for path
following. In the current implementation, the camera is a Sony SNC-CH110, and the
robot is an ABB IRB140.
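
As a concrete illustration of steps 2 to 4, the following minimal sketch shows how planned contour points could be streamed from the server to the robot controller, point by point and contour by contour. It is written in Java to match the applet-based implementation; the port number, packet layout and all class and method names are illustrative assumptions, not the paper's actual protocol.

import java.io.DataOutputStream;
import java.net.Socket;
import java.util.List;

// Streams planned contours to the robot controller, point by point and
// contour by contour, as described in steps 2 to 4 above.
public class ContourSender {

    public static void send(String robotHost, List<double[][]> contours) throws Exception {
        // Port number and packet layout are assumptions for illustration.
        try (Socket socket = new Socket(robotHost, 5000);
             DataOutputStream out = new DataOutputStream(socket.getOutputStream())) {
            out.writeInt(contours.size());            // number of contours
            for (double[][] contour : contours) {
                out.writeInt(contour.length);         // number of points in this contour
                for (double[] p : contour) {
                    out.writeDouble(p[0]);            // x coordinate of the target
                    out.writeDouble(p[1]);            // y coordinate
                    out.writeDouble(p[2]);            // z coordinate
                }
            }
            out.flush();                              // push the remaining packets
        }
    }
}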
4 System implementation
The processing procedures presented in this paper can be divided into two parts: the first
part containing the steps of image processing, and the second part outlining the steps of
path planning, as detailed in the following subsections.
b If the pixel has connected pixels (at least one of the pixel’s four-neighbours has a value of 1; the four-neighbours of a pixel are the neighbours that share one side with it), move to the first connected pixel.
c Assign the same label to the neighbour pixel.
d Repeat Step b to Step c until there are no connected pixels left.
2 Continue the image scanning. If an unlabeled pixel is found, repeat Step 1.
Figure 4 illustrates the contour labelling process: it starts with the binary image, which consists of ones for the foreground and zeros for the background, and ends with a labelled image.
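
A minimal sketch of this labelling pass is given below, assuming the binary image is stored as an int[][] with ones for the foreground and zeros for the background. An explicit stack replaces the pixel-to-pixel recursion of Steps b to d so that long contours cannot overflow the call stack; the class and method names are illustrative.

import java.util.ArrayDeque;
import java.util.Deque;

// Labels connected foreground pixels: scanning finds an unlabelled pixel
// (Step 1), then the label is propagated through 4-connected neighbours
// (Steps b to d), here with an explicit stack instead of recursion.
public class ContourLabeller {

    public static int[][] label(int[][] binary) {
        int h = binary.length, w = binary[0].length;
        int[][] labels = new int[h][w];               // 0 means unlabelled
        int nextLabel = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                if (binary[y][x] == 1 && labels[y][x] == 0) {
                    labels[y][x] = ++nextLabel;       // Step 1: start a new label
                    Deque<int[]> stack = new ArrayDeque<>();
                    stack.push(new int[]{y, x});
                    while (!stack.isEmpty()) {        // Steps b to d: follow neighbours
                        int[] p = stack.pop();
                        int[][] n4 = {{p[0] - 1, p[1]}, {p[0] + 1, p[1]},
                                      {p[0], p[1] - 1}, {p[0], p[1] + 1}};
                        for (int[] q : n4) {
                            if (q[0] >= 0 && q[0] < h && q[1] >= 0 && q[1] < w
                                    && binary[q[0]][q[1]] == 1 && labels[q[0]][q[1]] == 0) {
                                labels[q[0]][q[1]] = nextLabel;  // Step c: same label
                                stack.push(q);
                            }
                        }
                    }
                }
            }
        }
        return labels;                                // Step 2 is the outer double loop
    }
}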
1 Take the first contour and calculate the closest distance between its endpoints and those of the other contours in the image. If there is a contour that is close enough (the distance is equal to or less than a specified threshold value), then do the following:
a Calculate the tangent of the first contour at the endpoint that is closest to the
second contour. This is done by calculating the average angle of a few contour
points that are near the endpoint, as expressed in equation (1).
\[
\text{Average angle} = \frac{1}{n}\sum_{n=0}^{5} \operatorname{arctan2}\bigl((x_{n1} - x_{n2}),\, (y_{n1} - y_{n2})\bigr) \qquad (1)
\]
b Repeat Step a for the second contour found in Step 1.
c Compare the two average angles of the two contours. If the angular difference is
within a specified range, then merge the contours in the correct order.
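
The sketch below illustrates the endpoint test of Steps a to c together with equation (1), assuming each contour is an ordered array of (x, y) points. The argument order of arctan2 follows the equation as printed; the sample count and both thresholds are illustrative assumptions.

// Estimates the endpoint tangent of a contour as the average arctan2
// angle over a few successive point pairs (equation (1)) and merges two
// contours only if both the distance and the angle tests pass.
public class ContourMerger {

    static final int SAMPLES = 5;                     // point pairs used, as in equation (1)

    // Average tangent angle near the start point of a contour.
    static double endpointAngle(double[][] contour) {
        int n = Math.min(SAMPLES, contour.length - 1);
        double sum = 0.0;
        for (int i = 0; i < n; i++) {
            sum += Math.atan2(contour[i][0] - contour[i + 1][0],   // (x_n1 - x_n2)
                              contour[i][1] - contour[i + 1][1]);  // (y_n1 - y_n2)
        }
        return sum / n;
    }

    // Step 1 plus Steps a to c: endpoints close enough and roughly collinear.
    static boolean shouldMerge(double[][] a, double[][] b,
                               double distThreshold, double angleThreshold) {
        double dist = Math.hypot(a[0][0] - b[0][0], a[0][1] - b[0][1]);
        double angleDiff = Math.abs(endpointAngle(a) - endpointAngle(b));
        return dist <= distThreshold && angleDiff <= angleThreshold;
    }
}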
5 Experimental results
The image processing and path planning were developed and performed using the
Wise-ShopFloor framework (Wang, 2008). The application server in the current setup
has the following configuration: Intel Core i5 2.8 GHz processor, 4 GB RAM, with
NVidia GeForce GT 240 graphics card, and running on Windows Vista operating system.
Figure 9 shows the end results of the image processing and path planning at different
stages.
The scenario starts when an operator sketches an arbitrary path on the drawing board (for lab testing). The sketch is then captured using the network camera. After that, the server processes the image to retrieve its contours and plan the paths for the robot; these paths are represented as sets of points in 3D space. The server then sends these points to the robot as network data packets. The robot unpacks these packets and retrieves the required targets. Finally, the robot follows the paths and re-sketches the contours on the board. During the operation, the operator can monitor the robot and supervise the process remotely through a web-based Java applet. Figure 10 shows the robot sketching the paths while being monitored at the same time by a remote operator through the web.
6 Discussions
This paper presents a web-based system for image processing and path planning in robotic path-following. It can be extended to other industrial applications such as robotic welding and laser cutting. The remote monitoring and control functions (beyond the scope of this paper) make the system practical for distant operations in distributed manufacturing environments.
The results of this research indicate that the developed image processing and robot path planning system can perform as efficiently as commercial tools when applied to path following. In addition, the real advantages of this system are its consistency, user-friendliness, seamless integration with robots, and robustness towards industrial applications, both locally and remotely.
This prototype system has been tested using five different snapshots with varying
numbers of contours, ranging from ten contours in a simple hand sketch to 46 contours in
a complex one. The time spent in each step during image processing and path planning
was recorded and analysed. Figure 11 reveals the detailed processing times for each step.
[Figure 11: bar chart of the process time (in milliseconds, 0 to 500) of each image processing and path planning step for the five test sketches, grouped by number of contours: 10, 14, 21, 31 and 46.]
Analysing the processing times shows that the time spent on moving average thresholding and median smoothing does not depend on the number of contours. This is due to the nature of the algorithms used in the application, as these algorithms scan the
image to read and manipulate pixels one by one. Therefore, the most influential factor in
the processing time is the resolution of an image.
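
As an illustration of this resolution-bound behaviour, the following sketch of a moving-average thresholding pass scans the image exactly once, so its cost depends only on the image dimensions, regardless of how many contours the sketch contains. The window size and the 0.85 comparison factor are assumptions for illustration, not the paper's actual parameters.

// Binarises a greyscale image against a running mean of recent pixels.
// The image is scanned exactly once, so the cost is O(width * height)
// and independent of the number of contours in the sketch.
public class MovingAverageThreshold {

    public static int[][] threshold(int[][] gray) {
        int h = gray.length, w = gray[0].length;
        int window = Math.max(1, w / 8);              // averaging window (assumption)
        int[][] binary = new int[h][w];
        for (int y = 0; y < h; y++) {
            double avg = 127.0;                       // running mean, seeded at mid-grey
            for (int x = 0; x < w; x++) {
                avg += (gray[y][x] - avg) / window;   // exponential moving average
                // Dark ink below a fraction of the local mean becomes foreground.
                binary[y][x] = gray[y][x] < 0.85 * avg ? 1 : 0;
            }
        }
        return binary;
    }
}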
As shown in Figure 12, a considerable percentage of the processing time is consumed by reducing the points in the contours. The results also show that the time for tracing the contours depends on the number of contours, because the contour tracing algorithm works in a recursive loop to manipulate the contours in the image.
[Figure 12: percentage of total processing time per step: median smoothing 27%; moving average thresholding 23%; converting image to array 16%; reducing the points 15%; thinning the contours 12%; tracing the contours 4%; converting to grayscale 2%; labelling the curves 1%; optimising the contours' sequence 0%.]
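
The point-reduction step itself is not detailed in this section, but the reference list cites Douglas and Peucker (1973) for line simplification, so a minimal recursive sketch of that algorithm is given here, assuming a contour as an ordered list of (x, y) points; the tolerance parameter and all names are illustrative.

import java.util.ArrayList;
import java.util.List;

// Recursive Douglas-Peucker simplification: keep the chord between the
// endpoints if every interior point lies within the tolerance, otherwise
// split at the farthest point and simplify both halves.
public class PointReducer {

    public static List<double[]> reduce(List<double[]> pts, double tolerance) {
        if (pts.size() < 3) {
            return new ArrayList<>(pts);
        }
        double[] a = pts.get(0), b = pts.get(pts.size() - 1);
        int farIdx = 0;
        double farDist = 0.0;
        for (int i = 1; i < pts.size() - 1; i++) {    // farthest point from the chord
            double d = pointToSegment(pts.get(i), a, b);
            if (d > farDist) { farDist = d; farIdx = i; }
        }
        if (farDist <= tolerance) {                   // chord is close enough
            List<double[]> out = new ArrayList<>();
            out.add(a);
            out.add(b);
            return out;
        }
        List<double[]> left = reduce(pts.subList(0, farIdx + 1), tolerance);
        List<double[]> right = reduce(pts.subList(farIdx, pts.size()), tolerance);
        left.remove(left.size() - 1);                 // drop the duplicated split point
        left.addAll(right);
        return left;
    }

    // Perpendicular distance from p to the segment a-b.
    static double pointToSegment(double[] p, double[] a, double[] b) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double len2 = dx * dx + dy * dy;
        double t = (len2 == 0) ? 0 : ((p[0] - a[0]) * dx + (p[1] - a[1]) * dy) / len2;
        t = Math.max(0.0, Math.min(1.0, t));
        return Math.hypot(a[0] + t * dx - p[0], a[1] + t * dy - p[1]);
    }
}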
References
Antonelli, G., Chiaverini, S., Gerio, G., Palladino, M. and Renga, G. (2007) ‘SmartMove4: an
industrial implementation of trajectory planning for robots’, Industrial Robot: An
International Journal, Vol. 34, No. 3, pp.217–224.
Baeten, J. and De Schutter, J. (2002) ‘Hybrid vision/force control at corners in planar robotic-contour following’, IEEE/ASME Transactions on Mechatronics, Vol. 7, No. 2, pp.143–151.
Bieman, L. and Rutledge, G. (1998) Vision Guided Automatic Robotic Path, US Patent.
Chan, A., Leonard, S., Croft, E. and Little, J. (2011) ‘Collision-free visual servoing of an
eye-in-hand manipulator via constraint-aware planning and control’, American Control
Conference, San Francisco, USA, pp.4642–4648.
Chang, W. and Wu, C. (2002) ‘Integrated vision and force control of a 3-DOF planar surface’,
IEEE International Conference on Control Applications, Glasgow, Scotland, UK, pp.748–753.
Dillencourt, M.B. and Samet, H. (1992) ‘A general approach to connected-components labeling for
arbitrary image representations’, Journal of the Association for Computing Machinery, Vol. 39,
No. 2, pp.253–280.
Douglas, D. and Peucker, T. (1973) ‘Algorithms for the reduction of the number of points
required to represent a digitized line or its caricature’, The Canadian Cartographer, Vol. 10,
No. 2, pp.112–122.
Ebisch, K. (2002) ‘A correction to the Douglas-Peucker line generalization algorithm’, Computers
and Geosciences, Vol. 28, No. 8, pp.995–997.
Edan, Y., Flash, T., Peiper, U., Shmulevich, I. and Sarig, Y. (1991) ‘Near-minimum-time task
planning for fruit-picking robots’, IEEE Transactions on Robotics and Automation, Vol. 7,
No. 1, pp.48–56.
Garcia, G., Pomares, J. and Torres, F. (2008) ‘Automatic robotic tasks in unstructured environments
using an image path tracker’, Control Engineering Practice, Vol. 17, No. 5, pp.597–608.
Gonzalez-Galvan, E., Loredo-Flores, A., Cervantes-Sanchez, J., Aguilera-Cortes, L. and Skaar, S.
(2008) ‘An optimal path-generation algorithm for manufacturing of arbitrarily curved surfaces
using uncalibrated vision’, Robotics and Computer-Integrated Manufacturing, Vol. 24, No. 1,
pp.77–91.
Intel Corporation (2012) OpenCV Library, [online] http://opencv.willowgarage.com/wiki/
(accessed 11 April 2012).
Lange, F. and Hirzinger, G. (2003) ‘Predictive visual tracking of lines by industrial robots’, The
International Journal of Robotics Research, Vol. 22, Nos. 10–11, pp.889–903.
Li, H., Liu, J., Li, L., Lu, X., Yu, K. and Sun, L. (2010) ‘Trajectory planning for visual servoing with
some constraints’, Proceedings of the 29th Chinese Control Conference, Beijing, China,
pp.3636–3642.
Lumia, R., Gatla, C., Wood, J. and Starr, G. (2010) ‘Laser tagging: an approach for rapid robot
trajectory definition’, IEEE International Conference on Industrial Technology (ICIT), Chile,
pp.535–540.
Maheswari, D. and Radha, V. (2010) ‘Noise removal in compound image using median filter’,
International Journal of Computer Science and Engineering, Vol. 2, No. 4, pp.1359–1362.
Martin, A. and Tosunoglu, S. (2000) ‘Image processing techniques for machine vision’, Florida
Conference on Recent Advances in Robotics, Florida Atlantic University, Boca Raton, Florida.
Mukundan, R. (1999) ‘Binary vision algorithms in Java’, International Conference
on Image and Vision Computing IVCNZ’99, New Zealand, pp.145–150.
Okada, N., Minamoto, K. and Kondo, E. (2001) ‘Collision avoidance for a visuo-motor system
with a redundant manipulator using a self-organizing visuo-motor map’, IEEE International
Symposium on Assembly and Task Planning, Fukuoka, Japan, pp.104–109.
Olsson, T., Bengtsson, J., Johansson, R. and Malm, H. (2002) ‘Force control and visual
servoing using planar surface identification’, IEEE International Conference on Robotics and
Automation, Washington, USA, pp.4211–4216.
Stentiford, F. and Mortimer, R. (1983) ‘Some new heuristics for thinning binary handprinted
characters for OCR’, IEEE Transactions on Systems, Man and Cybernetics, Vol. 13, No. 1,
pp.81–84.
Wang, L. (2008) ‘Wise-ShopFloor: an integrated approach for web-based collaborative
manufacturing’, IEEE Transactions on Systems, Man, and Cybernetics – Part C: Applications
and Reviews, Vol. 38, No. 4, pp.562–573.
Gonzalez, R.C. and Woods, R.E. (2007) Digital Image Processing, 3rd ed., Prentice Hall,
New Jersey.
Wu, S-T. and Marquez, M.R.G. (2003) ‘A non-self-intersection Douglas-Peucker algorithm’,
Proceedings of the XVI Brazilian Symposium on Computer Graphics and Image Processing,
Brazil, pp.60–66.
Zhang, H., Chen, H., Xi, N., Zhang, G. and He, J. (2006) ‘On-line path generation for robotic
deburring of cast aluminum wheels’, IEEE/RSJ International Conference of Intelligent Robots
and Systems, Beijing, China, pp.2400–2405.