Conveyor Visual Tracking using Robot Vision
Ik Sang Shin (1, 3), Sang-Hyun Nam (2), Hyun Geun Yu (3), Rodney G. Roberts (3), Seungbin B. Moon (1)

(1) Dept. of Computer Engineering, Sejong Univ., Seoul, Korea
(2) Chungnam Human Resource Development Institute, Korea
(3) Dept. of Electrical & Computer Eng., Florida State Univ., Tallahassee, FL

ABSTRACT
Robot conveyor tracking is a task in which a robot follows and obtains objects on a conveyor belt. Before obtaining an object from an automation line, a robot needs information about the object, such as its position, orientation, velocity, and size. Compared to ultrasonic and infrared sensors, vision sensors can provide more information about the objects on the conveyor belt. Generally, an object tracking system has several steps: obtaining an image, recognizing objects, and extracting object position and orientation. The object tracking process for a conveyor system should be fast enough to support a real-time environment. In this article we describe a robotic tracking system that uses vision information extracted from sequential image frames. For fast extraction of vision information we applied a difference image technique, which performs a subtraction between two sequential frames separated by a small time interval. This simple method is useful for obtaining information about an image when the background effects caused by the conveyor system are minimal. We present results of experiments using an integrated robot, conveyor, vision system, and robot user interface software.

Keywords
Conveyor visual tracking, vision sensor, object recognition

1. INTRODUCTION
Conveyor tracking consists of tracking and catching an object on a conveyor belt using a robot. To catch objects, a robot needs object information such as position, orientation, velocity, and size. The recognition of objects using vision sensors [1] is applicable to areas such as military and medical applications, biology, engineering, education, and factory operation. A vision sensor can efficiently process the information needed to recognize and track objects on the conveyor belt and to correct robot position errors while the robot assembles electric parts into products.

Generally, a single camera or a stereo pair can be used for recognizing and tracking the objects. The former has the disadvantage that it cannot obtain depth information by itself, but it offers a simple configuration, no correspondence problem between views, and uncomplicated processing; the latter, by contrast, can acquire depth information by itself. Researchers in this area currently use a variety of methods for detecting objects in sequential image frames, including block matching, model-based methods, motion-based methods, and methods which use color information.

In this article we introduce a method based on analyzing a differential image. Akec [2] calculated the pose parameters of a robot from feature points measured in the image plane of a camera to find object motion, but this method has the disadvantage of requiring prior object information. Luo [3] calculated the motion of a rigid body using an optical flow technique, which works well when several objects move with different velocities. Nomura [4] applied a hybrid Kalman filter to calculate the velocity of the end-effector and to measure the velocity of a moving object precisely. Papanikolopoulos calculated the motion of objects using SSD (Sum of Squared Differences) optical flow to obtain the vision information needed to track an object on the conveyor [5][6]. Rembold applied a template matching method to calculate the position and orientation of objects, together with a method which continuously estimates their acquisition region using a Kalman filter [7].

In this paper we introduce a system which tracks objects on a conveyor. We performed conveyor tracking experiments using the method of analyzing a differential image captured from a camera on the end-effector of a robot, and we obtained information about the objects moving on the conveyor. This approach is useful when the background effect is small. Previous methods converted the full image into a binary image; for real-time operation we convert only the relevant part of the image. The rest of the paper is organized as follows: we describe the development environment and robot vision language in Sections 2 and 3; Section 4 describes the steps for visual tracking; Section 5 presents experimental results; and Section 6 presents the conclusions.

2. DEVELOPMENT ENVIRONMENT
As shown in Figure 1, the system proposed here consists of a robot system with a 6-axis robot and controller, a vision system with a stand-alone vision processing PC, a camera on the end-effector, a frame grabber on the PC, and a conveyor system which moves objects.

Figure 1. Experiment environment

The operating system for the robot controller is LynxOS, which operates in real time. As shown in Figures 2 and 3, the vision system consists of a stand-alone PC for vision processing and a vision sensor attached to the robot arm to improve the openness of the system architecture. Figure 2 shows the block diagram of the robot control part and the vision part.

Figure 2. System architecture
Figure 3. Eye-in-hand vision sensor

As shown in Figure 2, once the robot controller part requests vision processing, the vision system decodes the vision command, performs the service routine, and then transmits the result to the robot part through TCP/IP using a communication protocol developed for this project.
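The protocol itself is not detailed in the paper; as a rough illustration, the following Python sketch shows a request/reply exchange of this kind, assuming a simple length-prefixed text format. The host, port, and command string are hypothetical.

import socket
import struct

# Hypothetical address of the stand-alone vision processing PC.
VISION_HOST, VISION_PORT = "192.168.0.10", 5000

def send_msg(sock, text):
    # Frame each ASCII message with a 4-byte big-endian length prefix.
    payload = text.encode("ascii")
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def recv_exact(sock, n):
    # Read exactly n bytes, or fail if the peer closes the connection.
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("vision system closed the connection")
        buf += chunk
    return buf

def recv_msg(sock):
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    return recv_exact(sock, length).decode("ascii")

# Robot controller side: request recognition, block until the result arrives.
with socket.create_connection((VISION_HOST, VISION_PORT)) as s:
    send_msg(s, "VCOM RECOGNIZE")          # assumed command syntax
    print("vision reply:", recv_msg(s))    # e.g. "OBJ 1 x=212 y=87 area=1450"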
3. DEVELOPMENT OF ROBOT VISION LANGUAGE
To make designing vision systems easier, we developed a robot language architecture which is consistent with previous robot vision languages and is able to interface with vision functions efficiently. The main goal of this development is to have a smaller number of commands than previous languages and to interface functions through the arguments of commands so as to support a variety of vision libraries. This architecture handles the current language well and can easily interface with additional libraries later.

As shown in Figure 4, the vision I/F (interface) language consists of five command sets: VINIT, VPIC, VCOM, VTRA, and VOUT. Each command set includes many functions. VINIT performs vision initialization, such as initializing the image grabber and the vision library. VPIC saves images acquired from a camera to memory and shows still or sequential images to the user. VCOM is in charge of image processing such as preprocessing, object recognition, and graphic processing.


Figure 4. Vision Language
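The paper does not list the individual functions behind each command set, so the sketch below only illustrates the design idea: functions are selected through command arguments, so that new vision-library routines can be registered without adding commands. The function names and registry layout are assumptions.

from typing import Callable, Dict

# Each command set maps a function name (a command argument) to a callable,
# so additional vision-library routines can be registered later.
REGISTRY: Dict[str, Dict[str, Callable[..., object]]] = {
    "VINIT": {},  # grabber / library initialization
    "VPIC":  {},  # image acquisition and display
    "VCOM":  {},  # preprocessing, recognition, graphics
    "VTRA":  {},  # speed calculation, position transfer, start/pause
    "VOUT":  {},  # save results to user variables / memory
}

def register(cmd_set, name):
    def wrap(fn):
        REGISTRY[cmd_set][name] = fn
        return fn
    return wrap

@register("VCOM", "threshold")
def threshold(image, level=128):
    # Illustrative preprocessing routine: binarize a list-of-lists image.
    return [[1 if px >= level else 0 for px in row] for row in image]

def execute(command, *args, **kwargs):
    # Execute e.g. execute("VCOM threshold", image, level=100).
    cmd_set, name = command.split()
    return REGISTRY[cmd_set][name](*args, **kwargs)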

To make the image processing more efficient, unnecessary components are reduced in the preprocessing step, which is composed of functions such as Threshold, Histogram, and Binarize [8]. In the object recognition step, object information such as labels, area, and perimeter is extracted. In the graphic step, object information is displayed as characters or figures. Similarly, the VTRA language is used for calculating conveyor speed, transferring object positions to the robot, and starting and pausing vision processing. Finally, VOUT is a language which obtains the resulting data and saves it to computer memory in the special variable indicated by the user.
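As one example of how the Histogram and Threshold routines might combine, the sketch below picks a global threshold from the gray-level histogram (simply the mean gray level here, an arbitrary choice) and binarizes the image with NumPy; the paper does not specify its threshold-selection rule.

import numpy as np

def binarize_by_histogram(gray):
    # Histogram of an 8-bit grayscale image (256 bins).
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    # Arbitrary rule for this sketch: threshold at the mean gray level.
    level = (np.arange(256) * hist).sum() / hist.sum()
    binary = (gray >= level).astype(np.uint8)   # Binarize step
    return binary, level

gray = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
binary, level = binarize_by_histogram(gray)
print(f"threshold={level:.1f}, foreground pixels={int(binary.sum())}")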

4. STEPS FOR CONVEYOR VISUAL TRACKING
Conveyor visual tracking requires two stages: (1) object recognition and (2) object tracking. There are several approaches to object recognition: self-organizing maps [9], template matching [10], recognition using color information [11], and the method of using the time difference of two sequential image samples. Self-organizing maps are too slow to operate in real time. Template matching requires a priori knowledge of object information to match objects and, due to its computational load, also cannot be performed in real time. The color information method avoids the former two disadvantages but cannot be used on binary images. The object recognition method using the difference of two images is useful when the environment does not change quickly over time, and it is the method adopted here; a sketch follows the steps below.

[Steps for object recognition]
Step 1: Initialize vision functions.
Step 2: Create a binary image.
Step 3: Specify a range which includes objects.
Step 4: If the user selects a specific object among the probed objects, save the object information.
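As a concrete illustration of Steps 2-4, the sketch below differences two grayscale frames, binarizes the result, and labels connected regions. OpenCV stands in for the paper's unnamed vision library, and the threshold value is an arbitrary assumption.

import cv2
import numpy as np

def recognize(prev_frame, frame, thresh=30):
    # Step 2: difference image between two sequential grayscale frames,
    # then binarize with a fixed threshold.
    diff = cv2.absdiff(frame, prev_frame)
    _, binary = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    # Steps 3-4: label connected regions and collect per-object information.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    objects = []
    for k in range(1, n):                    # label 0 is the background
        x, y, w, h, area = stats[k]
        objects.append({"bbox": (x, y, w, h),
                        "area": int(area),
                        "centroid": tuple(centroids[k])})
    return objects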
For object tracking we need to measure the object speed, i.e., the speed of the belt. In this work we considered two approaches, one using an encoder and the other using a vision sensor. The former is exactly computable, but it requires a device attached to the conveyor system and is inflexible. The vision sensor has additional advantages since it can provide a variety of information, such as the area of an object [6], [12].

The equation for the conveyor belt speed is given by

    V_c = \frac{X(T_k) - X(T_{k-1})}{T_k}    (3)

where V_c is the speed of an object (the conveyor belt), X is the position of the object, and T_k is the k-th time. The position of the object is defined to be

    X = (\bar{x}, \bar{y})    (4)

where the center of gravity (\bar{x}, \bar{y}) of the object is given by

    \bar{x} = \frac{1}{A} \sum_{i=initial}^{n} \sum_{j=initial}^{m} j B[i,j]    (5)

    \bar{y} = \frac{1}{A} \sum_{i=initial}^{n} \sum_{j=initial}^{m} i B[i,j]    (6)

where m and n define the maximum region of an object; i and j are the current x, y coordinates on the image plane; and the initial values are the initial point of each object. The binary image is denoted by B[i,j]. The area A of each object is given by

    A = \sum_{i=initial}^{n} \sum_{j=initial}^{m} B[i,j].    (7)

By equation (3) the future position of an object is calculated and then sent to the robot controller. Using this information, the robot can acquire the object and, according to the user command, move it to another location.
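The sketch below works equations (5)-(7) on a small binary image and then applies the speed estimate, reading T_k in (3) as the elapsed interval between the two samples (the usual finite-difference form). All variable names are illustrative.

import numpy as np

def centroid_and_area(B):
    # B is a 0/1 binary image indexed B[i, j] (row i, column j).
    A = B.sum()                      # equation (7): object area
    i_idx, j_idx = np.nonzero(B)
    x_bar = j_idx.sum() / A          # equation (5): x_bar = (1/A) sum j*B[i,j]
    y_bar = i_idx.sum() / A          # equation (6): y_bar = (1/A) sum i*B[i,j]
    return (x_bar, y_bar), A

def belt_speed(X_k, X_km1, dt):
    # Equation (3), with dt the interval between the two measurements.
    return (np.asarray(X_k) - np.asarray(X_km1)) / dt

B = np.zeros((8, 8), dtype=np.uint8)
B[2:5, 3:6] = 1                              # a 3x3 object
(center, area) = centroid_and_area(B)        # center = (4.0, 3.0), area = 9

B2 = np.roll(B, 2, axis=1)                   # same object shifted 2 px right
(center2, _) = centroid_and_area(B2)
print(belt_speed(center2, center, dt=0.1))   # [20.  0.] px/s along x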

[Steps for object tracking]
Step 1: Recognize object entry by calculating the difference of two images.
Step 2: Measure the speed of the object and identify whether it is the object defined by the user. If it is, go to Step 3; otherwise, go to Step 1 and the robot waits.
Step 3: Send the information to the robot by TCP/IP communication.
Step 4: Conduct object tracking by the robot.

5. RESULTS
As shown in Figure 5, as the conveyor belt moves from left to right, the robot chases a moving object.

Figure 5. A robot and conveyor system

As shown in Figure 6, we designed a GUI consisting of a menu bar, an output window, a communication mode, object information, etc. The menu bar provides access to the vision libraries, and the current image acquired from the camera is displayed in an output window (view port). Additionally, results from the object recognition process and the tracking situation are displayed for the user. We also added a communication mode to the GUI to communicate with the robot and to debug and develop programs conveniently.

Figure 6. Vision GUI
Figure 7 shows the object recognition steps for one experimental case. The step in Figure 7(a) is the initial step, i.e., initializing the vision functions. The step in Figure 7(b) is the preprocessing step, in which the image is converted into a binary image using a threshold. This step is needed to improve the speed of operation and to separate objects from the background. In the step of Figure 7(c), the range of objects is chosen in the binarized image and, within that range, labels are assigned to each object using a special labeling algorithm. As shown in Figure 7(d), the last step is the object recognition stage: once the user selects an object using the mouse, information about that object (area, position, perimeter, compactness, etc.) is saved to a structure assigned for object information.

Figure 7. Object recognition steps: (a) Vision initialization (b) Image preprocessing (c) Set object range (d) Classify objects
The steps for conveyor visual tracking are shown in Figure 8. In the step of Figure 8(a), the vision system recognizes object entry by calculating the difference of two sequential images and then calls the tracking language. In the step of Figure 8(b), the system measures object speed and decides whether or not the object is identical to the reference object. If the object matches the reference, the robot starts to track it; otherwise the robot remains at its nominal configuration and the vision system remains in state (a). In the next step, shown in Figure 8(c), the system transmits the object information to the robot. As shown in Figure 8(d), the system predicts the next position of the object using the current position and speed and then delivers the information to the robot to track the object with respect to the camera-centered position. After this step, the robot catches the object using its gripper and places it at a point defined by the user. After all steps are completed, the system returns to the first step, in which the robot is ready and the vision system is initialized.
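The prediction in Figure 8(d) is not spelled out in the paper; under a constant-velocity assumption for the belt it reduces to a linear extrapolation, sketched below with illustrative numbers.

def predict_position(x, y, vx, vy, dt):
    # Constant-velocity prediction over a lookahead interval dt (seconds).
    return (x + vx * dt, y + vy * dt)

# Example: an object at (212, 87) px moving 40 px/s along x; where will it
# be in 0.5 s, when the robot is commanded to intercept it?
print(predict_position(212, 87, 40.0, 0.0, 0.5))   # (232.0, 87.0)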

Figure 8. Object tracking steps: (a) Object entry into the camera's field of view (b) Calculate speed and discriminate objects (c) Transmit the information to the robot (d) Object tracking

6. CONCLUSION
In this article we presented a conveyor system for catching an object. This included the development of a vision language, a vision library, and steps for object recognition and tracking for conveyor visual tracking. Object tracking can be applied to security systems, entertainment robots, automated plant operations, etc., but we focused on catching an object in an automated factory. To achieve fast tracking and catching of an object, the system must decide on an optimal robot trajectory and interception point. For remote control of a robot [13], vision data are transported by Ethernet communication, and a differential image method was applied to object recognition for real-time calculation. We believe that these studies will help improve the efficiency of repetitive robot tasks in an automated line.

7. REFERENCES
[1] Denker, A., Sabanovic, A., and Kaynak, O. Vision-controlled robotic tracking and acquisition. In Proceedings of the IEEE/RSJ/GI International Conference on Intelligent Robots and Systems (IROS '94), vol. 3 (Sept. 12-16, 1994), 2000-2006.
[2] Akec, J. A., Steiner, S. J., and Stenger, F. An experimental visual feedback control system for tracking applications using a robotic manipulator. In Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society (IECON '98), vol. 2 (Aug. 31-Sept. 4, 1998), 1125-1130.
[3] Luo, R. C., Mullen, R. E., and Wessell, D. E. An adaptive robotic tracking system using optical flow. In Proceedings of the IEEE International Conference on Robotics and Automation, vol. 1 (1988), 568-573.
[4] Papanikolopoulos, N. P. and Khosla, P. K. Feature based robotic visual tracking of 3-D translational motion. In Proc. of the 30th IEEE Conference on Decision and Control (1991), 1877-1882.
[5] Papanikolopoulos, N. P., Khosla, P. K., and Kanade, T. Visual tracking of a moving target by a camera mounted on a robot: a combination of control and vision. IEEE Transactions on Robotics and Automation, vol. 9 (1993), 14-35.
[6] Rembold, D., Zimmermann, U., Langle, T., and Worn, H. Detection and handling of moving objects. In Proceedings of IEEE IECON (1998), 1332-1337.
[7] Chowdhury, M. H. and Little, W. D. Image thresholding techniques. In Proceedings of the IEEE Pacific Rim Conference on Communications, Computers, and Signal Processing (1995), 585-589.
[8] Kohonen, T. The self-organizing map. Proceedings of the IEEE, vol. 78, no. 9 (1990), 1464-1480.
[9] Rembold, D., Zimmermann, U., Langle, T., and Worn, H. Detection and handling of moving objects. In Proceedings of the 24th Annual Conference of the IEEE Industrial Electronics Society (IECON '98), vol. 3 (1998), 1332-1337.
[10] Seungmin, B. and Taeyong, K. An implementation of real-time color tracking vision system using a low cost overlay board. In Proc. of the 14th KACC, vol. 14 (1999), 443-445.
[11] Sitti, M., Bozma, I., and Denker, A. Visual tracking for moving multiple objects: an integration of vision and control. In Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE '95), vol. 2 (1995), 535-540.
[12] Safaric, R., Jezernik, K., Calkin, D. W., and Parkin, R. M. Telerobot control via Internet. In Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE '99), vol. 1 (1999), 298-303.
