
2012 Fifth International Symposium on Computational Intelligence and Design

Research of Sorting Technology based on Industrial Robot of Machine Vision

Liu Zhen-yu, Zhao Bin, Zhu Hai-bo
Information Science and Engineering School
Shenyang University of Technology
Shenyang 110870, China
e-mail: [email protected], [email protected], [email protected]

Abstract: Aimed at the problems of industrial sorting technology, this paper researches the relevant algorithms of image processing and analysis and completes the construction of a robot vision sense. The operational process is as follows: the camera acquires image sequences of the metal workpieces in the sorting region, and the image sequence is analyzed using algorithms for image pre-processing, Hough circle detection, corner detection and contour recognition. This paper also explains the characteristics of the three main function modules (image pre-processing, corner detection and contour recognition), and proposes an algorithm for multi-object center and corner recognition. The simulation results show that the sorting system can effectively solve the sorting problem of regular geometric workpieces, and can accurately calculate the center and edges of a geometric workpiece to achieve the sorting purpose.

Keywords: machine vision; industrial robot; sorting; target recognition; image processing

I. INTRODUCTION

Machine vision technology refers to techniques that simulate the human vision function with a camera and computer [1]; it is widely used in the fields of spaceflight, automobiles, pharmacy, industry and electronics. Vision sorting technology is an important part of most industrial production lines. In the traditional production line, industrial robots used to be operated by means of teach-in and playback [2]. All actions must be pre-set, and the target workpieces must be regularly placed; once the work environment changes, grabbing errors occur, so this mode cannot satisfy high-volume, high-speed production [2]. Compared with traditional sorting operations, vision sorting technology has irreplaceable advantages of high quality, high speed and high intelligence. At present, vision technology is developing from the laboratory stage toward the application stage. In the United States, Japan and Europe, robot vision technology has become quite popular. For example, Siemens applies machine vision systems to assembly production lines [3], solving the error-prone problem of long-duration manual assembly. ABB has developed the Delta vision mechanical arm IRB340 FlexPicker [4], which can complete normative placing of target objects. ISRA has developed an intelligent recognition system [5], which can imitate the interaction between the human eye and arm. At present, Chinese robotic sorting systems are mainly at the laboratory stage; for example, Adtech has developed sorting robots used for sorting smaller regular workpieces. The Institute of Automation of the Chinese Academy of Sciences has designed the CASIA-I robot, which achieves basic functions including obstacle avoidance and automatic tracking [6]. Harbin Institute of Technology has built an MEMS system based on 2D visual servoing, which can accomplish workpiece handling and precision assembly [7].

978-0-7695-4811-1/12 $26.00 © 2012 IEEE
DOI 10.1109/ISCID.2012.23

II. COMPOSITION OF THE INDUSTRIAL ROBOT SORTING SYSTEM

This article builds the vision system shown in Figure 1. The industrial robot sorting system is mainly composed of three units: the workpiece platform unit, the vision unit and the robot control unit.

The workpiece platform unit is composed of the workpiece platform and the placement trenches. The black workpiece platform forms a contrast between the metal workpieces and the background, which is convenient for validating the algorithms. The workpiece trenches are used to place the different workpieces by classification.

The vision unit is composed of a CCD camera, a camera stand and the vision software. First, the camera captures video images; second, image pre-processing is used to eliminate noise; third, corner detection extracts the characteristics of the workpiece; then the centroid and placement direction of the workpiece are calculated; and finally contour identification is used to identify the target species. According to the relationship between the image and object coordinate systems, the vision unit calculates the relative position and direction, and then transmits the computed arguments to the control cabinet.

The robot control unit consists of the control cabinet and a six-axis mechanical arm. The unit is responsible for analyzing parameters and completing the related grabbing and placing operations. The control cabinet operates the robotic arm to complete tracking and grabbing assignments, and places each workpiece into the corresponding workpiece slot.

To simplify the recognition algorithm and the solution of the problem, special requirements are as follows: (1) To prevent sorting interference and collisions, the mechanical workpieces are placed dispersedly, with sufficient gaps. (2) Workpieces of the same variety are not placed next to each other, which helps to test the reliability and accuracy of the algorithm. (3) Mechanical workpieces of regular geometric shape are selected, because almost all mechanical workpieces have regular, simple geometric shapes.
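The processing chain of the vision unit described above (pre-processing, corner detection, centroid calculation, classification) can be sketched as follows. This is a minimal illustration with hypothetical stub functions, not the authors' implementation; only the orchestration of the stages is taken from the text.

```python
# Minimal sketch of the vision-unit processing chain described above.
# Every stage function here is a hypothetical stub standing in for the
# real pre-processing, corner-detection and contour-recognition modules.

def preprocess(frame):
    """Smooth the frame to suppress noise (stub)."""
    return frame

def detect_corners(frame):
    """Return workpiece corner points (stub: a fixed square)."""
    return [(0, 0), (0, 10), (10, 10), (10, 0)]

def centroid(corners):
    """Average of the corner coordinates as a simple centroid."""
    n = len(corners)
    return (sum(x for x, _ in corners) / n, sum(y for _, y in corners) / n)

def classify(corners):
    """Classify by corner count: 3 -> triangle, 4 -> rectangle, ..."""
    names = {3: "triangle", 4: "rectangle", 5: "pentagon", 6: "hexagon"}
    return names.get(len(corners), "circle")

def sort_frame(frame):
    """Run one frame through the chain; return (species, centroid)."""
    smoothed = preprocess(frame)
    corners = detect_corners(smoothed)
    return classify(corners), centroid(corners)

species, center = sort_frame(frame=None)
print(species, center)  # rectangle (5.0, 5.0)
```

In the real system the result of this chain (position and direction) would be transmitted to the control cabinet rather than printed.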

Figure 1. The industrial robot sorting system.

III. THE REALIZATION OF THE SORTING PROCESS

The process of the machine vision sorting experiment is shown in Figure 2. The image sequence of the target mechanical workpieces is acquired through the CCD camera. Each frame is automatically analyzed to recognize the general regular shapes, including round, rectangular, triangular, hexagonal and other regular geometric workpieces. The entire sorting process is divided into four parts: image pre-processing, object extraction, single-target analysis and classification grabbing [8]. (1) Image pre-processing: smooth the collected image to remove noise and eliminate noise interference. (2) Object extraction: use the Canny operator to extract the target image from the background image. (3) Single-target analysis: because the targets are regular geometric workpieces and corner detection cannot detect circles, this paper uses the Hough algorithm to identify circles. (4) Classification grabbing: for each single workpiece target, the feature points, centroid and axis lengths are acquired for grabbing. The vision sorting system sends the feature information to the robot control cabinet through RS-232, and the six-axis robot is then controlled by the control cabinet to sort.

Image feature extraction is the most important part of image recognition. The general target characteristics include perimeter, area, location, direction, curvature, centroid, distance, long axis and short axis. These geometric characteristics are the main features used for tracking and recognition.

IV. RESEARCH OF ALGORITHMS

A. Calibration of the camera

In the vision system, camera calibration is the key step that decides the precision of the whole sorting system [9]. The purpose of camera calibration is to establish the relevant coordinate systems and obtain the relationship model between the image coordinate system and the space coordinate system [10]. The sorting system takes the pinhole model as the calibration model and uses Zhang Zhengyou's calibration method, which needs only at least three photos; the calibration result is shown in Figure 3. According to the correspondence between characteristic points, the vision sorting system calculates the camera's intrinsic and extrinsic parameters [11]. The system calibration directly affects target tracking and grabbing, and provides the foundation for the follow-up grabbing work.

Figure 3. The camera calibration result.
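Under the pinhole model used for calibration, a point in the camera coordinate frame maps to pixel coordinates through the intrinsic parameters. The sketch below illustrates that mapping only; the focal lengths fx, fy and principal point cx, cy are made-up values for illustration, not calibration results from this system.

```python
# Sketch of the pinhole camera model behind the calibration step:
# a 3D point (X, Y, Z) in the camera frame projects to pixel
# coordinates (u, v) through the intrinsic parameters.
# fx, fy, cx, cy are illustrative values, not real calibration data.

def project(point, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Project a camera-frame 3D point to pixel coordinates."""
    X, Y, Z = point
    u = fx * X / Z + cx   # perspective division, then shift to principal point
    v = fy * Y / Z + cy
    return (u, v)

# A point 2 m in front of the camera, 0.1 m to the right, 0.05 m up:
print(project((0.1, -0.05, 2.0)))  # (360.0, 220.0)
```

Calibration estimates fx, fy, cx, cy (plus the extrinsic pose) from the correspondences between known points on the calibration pattern and their observed projections.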

Figure 2. Flow chart of the sorting process.

B. Image pre-processing

Image pre-processing mainly improves the SNR. This paper selects the Canny operator for edge detection to distinguish the different shapes of workpieces. Canny edge detection is similar to Marr edge detection [13]; both compute the first derivative of a Gaussian function after smoothing [14]. The Canny algorithm uses the two-dimensional Gaussian function to smooth the image and remove noise; the general method is convolution between a Gaussian template and the image. The convolution formula is as follows:

    I_G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)) * I(x, y) = G(x, y) * I(x, y)    (1)

where I(x, y) is the original image and σ is a scale parameter. Difference operations on I_G(x, y) then calculate the gradient to obtain the gradient amplitude image and the gradient direction image. G_x(i, j) and G_y(i, j) are the partial derivatives in the two directions, given by:

    G_x(i, j) = (I_G(i, j+1) − I_G(i, j) + I_G(i+1, j+1) − I_G(i+1, j)) / 2    (2)
    G_y(i, j) = (I_G(i, j) − I_G(i+1, j) + I_G(i, j+1) − I_G(i+1, j+1)) / 2

The gradient magnitude and direction are calculated directly by the transformation between Cartesian and polar coordinates:

    M(i, j) = [G_x(i, j)² + G_y(i, j)²]^(1/2),    θ = arctan[(∂I_G/∂y) / (∂I_G/∂x)]    (3)

The greater the amplitude M, the greater the image gradient. To search for the edge, the points with larger local variation amplitude are kept to generate the refined edge. Here θ is the smoothed gradient direction, that is, the direction orthogonal to the edge [14]. The results are shown in Figure 4(a). After Canny operator processing, imerode makes the edge more apparent, as shown in Figure 4(b). Imerode belongs to the morphological approach: a "structure element" is defined, and the structure element and the binary image undergo a particular logical operation at each pixel position. The operation is defined as:

    I ⊕ S = {(x, y) | S_(x,y) ∩ I ≠ ∅}    (4)

If the intersection of S and I is non-empty after the structure element S is translated to (x, y), the point (x, y) belongs to the result of dilating I with the structure element S.

Figure 4. The results of Canny operator and imerode processing.

C. Hough circle detection

Image features can be extracted and identified after image pre-processing. If the target workpiece is circular, corner features do not exist, and the Hough transform is an effective method for circle detection. The Hough transform makes use of the correspondence between the image space and the parameter space to complete the detection mission. In the image space, the radius and center of a circle are r and (x, y), and the circle equation is:

    (a − x)² + (b − y)² = r²    (5)

The formula has three parameters a, b, r, so it is necessary to establish a three-dimensional accumulation array A(a, b, r) in the parameter space. Generally, the Hough transform maps the circle boundary points to the three-dimensional parameter space and finally identifies the peak point in the parameter space. In the detection process, each boundary pixel (x, y) of the image is substituted into equation (5); if the equation holds, the related accumulator adds 1. Finally the peak is selected: r₀ is regarded as the radius of the circle, and (a₀, b₀) is regarded as the center coordinates in the image plane [15].

D. Corner detection

Corner detection is used to extract the characteristics of the polygonal workpieces after the circular workpieces have been detected. Corner points are points with large changes in both the horizontal direction I_x and the vertical direction I_y. Dreschler and Nagel put forward a detection method based on the principle of Gaussian curvature [15]. Smith proposed the famous SUSAN corner detection operator [16]. Harris and Stephens improved the Plessey corner detector in 1988, advancing the Harris operator [17]. This paper uses the Harris operator to detect corner features; the formula is:

    E(u, v) = Σ_(x,y) w(x, y) [I(x + u, y + v) − I(x, y)]²    (6)

E(u, v) is the gray-level change produced by moving the window by [u, v]; w(x, y) is the window function, generally a Gaussian smoothing factor; I(x+u, y+v) is the gray value after image I(x, y) moves u in the x-direction and v in the y-direction. For a locally small movement [u, v], ignoring the higher-order terms gives:

    E(u, v) ≈ [u, v] M [u, v]^T,    M = Σ_(x,y) w(x, y) [ I_x²    I_x I_y
                                                          I_x I_y  I_y² ]    (7)

Here I_x and I_y are the derivatives of the image in the horizontal and vertical directions, and λ₁, λ₂ are the eigenvalues of M. When λ₂ >> λ₁ or λ₁ >> λ₂, the point I(x, y) lies on an edge; when λ₁ and λ₂ are both comparatively large and of equivalent magnitude, the point I(x, y) is a corner. The corner response R is as follows:

    R = det M − k (trace M)²
    det M = λ₁λ₂ = I_x² I_y² − (I_x I_y)²    (8)
    trace M = λ₁ + λ₂ = I_x² + I_y²

k usually takes 0.04–0.2, and the value of R is related only to the eigenvalues of M. A positive maximum of R indicates a corner; a negative minimum of R indicates an edge; a small |R| indicates a flat area. Threshold processing extracts the local maxima of the response R. In order to validate the corner extraction algorithm, a number of different geometries are placed; the corner detection results are shown in Figure 5.

Figure 5. Corner detection.
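The Harris response of equations (6)–(8) can be verified by hand on a tiny synthetic image. The sketch below is a pure-Python illustration (not the paper's implementation): a bright square with its top-left corner at pixel (4, 4), central-difference gradients, a 3x3 summation window, and k = 0.04 as in the text.

```python
# Harris corner response R = det(M) - k * trace(M)^2 on a tiny
# synthetic image: a bright square whose top-left corner is at (4, 4).
# Pure-Python sketch for illustration; k = 0.04 as in the text.

SIZE, K = 10, 0.04
img = [[1.0 if y >= 4 and x >= 4 else 0.0 for x in range(SIZE)]
       for y in range(SIZE)]

def grad(y, x):
    """Central-difference gradients (Ix, Iy) at an interior pixel."""
    ix = (img[y][x + 1] - img[y][x - 1]) / 2.0
    iy = (img[y + 1][x] - img[y - 1][x]) / 2.0
    return ix, iy

def harris_response(y, x):
    """Sum Ix^2, Iy^2, IxIy over a 3x3 window, then R = det - k*trace^2."""
    sxx = syy = sxy = 0.0
    for j in range(y - 1, y + 2):
        for i in range(x - 1, x + 2):
            ix, iy = grad(j, i)
            sxx += ix * ix
            syy += iy * iy
            sxy += ix * iy
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - K * trace * trace

print(harris_response(4, 4))  # corner of the square: positive R
print(harris_response(6, 4))  # straight edge: negative R
print(harris_response(1, 1))  # flat background: R = 0
```

The sign pattern matches the text: positive R at the corner, negative R on the edge, near-zero R on the flat background.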

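The accumulator voting of the Hough circle detection described in Section IV-C can likewise be sketched in a few lines. The edge points below are synthetic: the twelve integer points lying exactly on a circle of radius 5 centered at (10, 10). This is an illustrative brute-force version, not the paper's implementation.

```python
# Sketch of Hough circle detection (eq. (5)): every boundary pixel
# votes for all (a, b, r) triples whose circle passes through it;
# the accumulator peak gives the detected center and radius.

from collections import Counter

# Synthetic edge pixels: integer points on a circle, center (10, 10), r = 5.
edge_points = [(10 + dx, 10 + dy)
               for dx in range(-5, 6) for dy in range(-5, 6)
               if dx * dx + dy * dy == 25]

acc = Counter()
for x, y in edge_points:
    for a in range(0, 21):
        for b in range(0, 21):
            for r in range(1, 8):
                if (a - x) ** 2 + (b - y) ** 2 == r * r:
                    acc[(a, b, r)] += 1  # circle (a, b, r) passes through (x, y)

(a0, b0, r0), votes = acc.most_common(1)[0]
print((a0, b0, r0), votes)  # (10, 10, 5) 12
```

All twelve edge points vote for the true triple, so the peak recovers the center (a₀, b₀) and radius r₀ exactly; in a real image, coarser parameter binning and a vote threshold would be needed.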

E. Contour finding and contour approximation

Although algorithms like the Canny edge detector can be used to find the edge pixels that separate different segments in an image, they do not tell you anything about those edges as entities in themselves [18]. A contour is a list of points that represent, in one way or another, a curve in an image [18]. First the target image contour is calculated with the Canny operator, then the contours are found in the binary image (FindContours), and finally each contour is researched as a whole. A contour generally corresponds to a series of points forming the peripheral curve of the target workpiece. In the OpenCV vision library, it is convenient to recover the outline; contours are represented by a binary tree [18]. For drawing a polygon and for shape analysis, the outline is usually saved as a polygonal curve approximation with a specified accuracy. The approximation has two purposes: to obtain the corners again and to get the target contour. I(x, y) is the location array of the Harris corners, and J(x, y) is the location array of the outline approximation corners. The relative error formula is as follows:

    RE = √((x_I − x_J)² + (y_I − y_J)²) / √(x_I² + y_I²)    (9)

The formula compares the contour corners with the Harris corners and removes the needless corners. According to the number of corner points, the workpieces are classified.

F. Calculating the geometric center

The vision sorting system generally uses the centroid to describe the positional information of a workpiece. The process of calculating the geometric center is as follows. The moment features are used to describe the shape features. For a digital image f(i, j) of size M×N, its (p+q)-order moment is:

    M_pq = Σ_(i=0)^(M−1) Σ_(j=0)^(N−1) i^p j^q f(i, j)    (p, q = 0, 1, 2, ...)    (10)

In the formula, f(i, j) is equivalent to the mass of a pixel, and M_pq is the moment of the image for different p, q values. The centroid is calculated using the central moments; the zero-order moment M₀₀ is the sum of the regional density:

    M₀₀ = Σ_i Σ_j f(i, j)    (11)

The first-order moment M₁₀ is the moment of inertia of the image about the j-axis, and M₀₁ is the moment about the i-axis:

    M₁₀ = Σ_i Σ_j i f(i, j),    M₀₁ = Σ_i Σ_j j f(i, j)    (12)

The object center (ī, j̄) is given by the first-order moments M₁₀ and M₀₁ respectively divided by the zero-order moment M₀₀:

    (ī, j̄) = (M₁₀ / M₀₀, M₀₁ / M₀₀)    (13)

The algorithm is relatively simple and applicable to any shape, but the above calculation acquires the center of mass of only a single object in the image. This paper presents a method for the centers of a multi-target image. First the corner points of each convex object contour are computed; then the minimum and maximum coordinate values of all corner coordinates in the x-axis and y-axis directions are calculated:

    Min(point1.x, point2.x, point3.x, ...) = j_min
    Max(point1.x, point2.x, point3.x, ...) = j_max    (14)
    Min(point1.y, point2.y, point3.y, ...) = i_min
    Max(point1.y, point2.y, point3.y, ...) = i_max

An image can then be divided into different blocks Block_N(i, j) based on the N targets. For accurate calculation, each coordinate range is expanded by 5 pixels so that the target workpiece is surrounded by an outline rectangle: each target block ranges over (j_min − 5, j_max + 5) on the x-axis and (i_min − 5, i_max + 5) on the y-axis. After division into Block_N(i, j), the centroid of each target is calculated separately.

V. EXPERIMENTAL RESULTS AND CONCLUSIONS

This article describes sorting technology based on an industrial robot with machine vision, applying monocular vision technology to industrial robots. According to the characteristics of the mechanical arm and the control methods, the software is programmed with the Computer Vision Library (OpenCV) in the VC++ environment. Workpieces can be accurately sorted by means of corner detection, contour finding and quadrature features. Figure 6(a) shows the results of the vision software identifying workpieces of the same category. Experiments show that the sorting system can effectively identify hexagons, circles, triangles, pentagons and squares. Figure 6(b) shows the results of the vision software identifying different types of geometrical workpieces. Experiments show that the vision system has good robustness over various geometrical workpieces. In the identified images, the contour of each object is marked by a colored curve and the center is marked by a cross; furthermore, the results of identifying the different shapes are marked by text.

Table I records the workpiece centroid coordinates of Figure 6(a); the unit of the centroid is the pixel. Recording the coordinates can accurately confirm the position of each workpiece.

TABLE I. WORKPIECE CENTROID COORDINATES

The type of workpiece | Center of mass (pixel) | The type of workpiece | Center of mass (pixel)
Hexagon   | (664,466) | Triangle | (534,523)
Hexagon   | (284,451) | Triangle | (752,429)
Hexagon   | (455,182) | Triangle | (226,452)
Circle    | (484,178) | Triangle | (356,249)
Circle    | (750,458) | Triangle | (685,218)
Circle    | (456,434) | Pentagon | (511,432)
Circle    | (690,248) | Pentagon | (262,274)
Circle    | (196,208) | Pentagon | (745,267)
Circle    | (226,456) |          |

The simulation results show that the technology in this article is simple and effective. The sorting system suits fast, accurate sorting of regular geometric workpieces, and can be widely used on production lines with practical value.
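The centroid computation of equations (10)–(13) can be checked on a small binary image, as in the illustrative pure-Python sketch below (not the paper's OpenCV implementation):

```python
# Centroid from image moments (eqs. (10)-(13)): M00 sums the pixel
# "mass", M10 and M01 weight it by the row index i and column index j,
# and the center is (M10/M00, M01/M00).
# The image is a synthetic 3x4 blob of ones inside a 10x10 frame.

def moment(img, p, q):
    """(p+q)-order moment M_pq of a 2D image f(i, j)."""
    return sum((i ** p) * (j ** q) * v
               for i, row in enumerate(img)
               for j, v in enumerate(row))

img = [[1.0 if 2 <= i <= 4 and 3 <= j <= 6 else 0.0 for j in range(10)]
       for i in range(10)]

m00 = moment(img, 0, 0)                 # blob "area": 12 pixels
center = (moment(img, 1, 0) / m00,      # i-bar: mean row index
          moment(img, 0, 1) / m00)      # j-bar: mean column index
print(center)  # (3.0, 4.5)
```

For the uniform 3x4 blob the moment centroid coincides with the geometric center of the rectangle, as expected.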

REFERENCES

[1] K. L. Zhuang, J. Z. Wang, and J. Zhou, "Application of Machine Vision in Angle Inspection," Equipment Manufacturing Technology, vol. 4, pp. 4-10, 2011.
[2] H. Qian, Z. Y. Yang, and T. Huang, "Key Technology to the Vision Software Development of High-speed and Parallel Manipulator," Manufacturing Technology & Machine Tool, vol. 10, pp. 34-37, 2005.
[3] Y. B. Wang, Y. Z. Hu, and M. Y. Lu, "Siemens machine vision system and its application in automobile engine assembly line," Automation Systems Engineering, vol. 1, pp. 20-22, 2006.
[4] R. Clavel, "Device for the movement and positioning of an element in space," U.S. Patent No. 4976582, 1990.
[5] ISRA Co. (2006-11-21). Flexible, the recognition system. [Online]. www.china-vision.net
[6] Y. Lv, and J. Q. Wang, Application of Vision Guiding Technique in Intelligent Grasp of Industrial Robot. Anhui: Hefei University of Technology, 2009.
[7] Y. Song, M. T. Li, and L. N. Sun, et al., "Global Visual Servoing of Miniature Mobile Robot inside a Micro-Assembly Station," Proceedings of the IEEE International Conference on Mechatronics and Automation, pp. 1586-1591, 2005.
[8] H. Qian, Z. Y. Yang, and T. Huang, Key Technology to the Vision Software Development of High-speed and Parallel Manipulator. Tianjin: Tianjin University, 2005.
[9] S. D. Ma, and Z. Y. Zhang, Computer Vision: Computational Theory and Algorithmic Foundation, Beijing: Science and Technology Press, 2003.
[10] Z. Y. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11, pp. 1330-1334, 2000.
[11] E. E. Hemayed, "A Survey of Camera Self-Calibration," Proceedings of the IEEE Conference on Advanced Video and Signal Based Surveillance, 2003.
[12] H. L. Li, Q. Guan, and K. H. Hu, "Development of camera calibration system based on OpenCV in VC++ environment," Computer Applications and Software, vol. 6, pp. 19-22, 2011.
[13] Z. Wang, and S. X. He, "An adaptive edge-detection method based on Canny algorithm," China Journal of Image and Graphics, vol. 8, pp. 957-962, 2004.
[14] H. S. Yu, and Y. N. Wang, "An Improved Canny Edge Detection Algorithm," Computer Engineering and Applications, pp. 27-29, 2004.
[15] L. Kitchen, and A. Rosenfeld, "Gray-Level Corner Detection," Pattern Recognition Letters, vol. 1, pp. 95-102, 1982.
[16] S. M. Smith, and J. M. Brady, "SUSAN - a new approach to low level image processing," International Journal of Computer Vision, vol. 1, pp. 45-78, 1997.
[17] E. Rafajlowicz, "SUSAN Edge Detector Reinterpreted, Simplified and Modified," Multidimensional Systems, pp. 69-74, 2007.
[18] G. Bradski, and A. Kaehler, Learning OpenCV, Beijing: Tsinghua University Press, 2008.

(a)

(b)
Figure 6. Experimental renderings.
