A Study of Computer Vision For Measuring Surface Roughness
The paper presents a system for measuring the surface roughness of turned parts using computer vision. The images of specimens grabbed by the computer vision system are processed to obtain parameters of their grey levels (spatial frequency, arithmetic mean value, and standard deviation). These parameters are used as input data to a polynomial network. Using the trained polynomial network, the experimental results show that the surface roughness of a turned part made of S55C steel, measured by the computer vision system over a wide range of turning conditions, can be obtained with reasonable accuracy compared to that measured by a traditional stylus method. Compared with the stylus method, the computer vision system constructed is a useful means of measuring the surface roughness of this material faster, at lower cost, and with lower environmental noise.

Keywords: Image; Surface roughness; Turning

Correspondence and offprint requests to: Professor B.Y. Lee, Department of Mechanical Manufacture Engineering, National Huwei Institute of Technology, 64 Wun Hua Road, Huwei Yunlin 632, Taiwan. E-mail: leebyin@sunws.nhit.edu.tw

1. Introduction

Surface roughness of workpieces is an important mechanical property. The traditional stylus method is the most widely used method in industry for this measurement: a precision diamond stylus is drawn over the surface being measured, with the perpendicular motion of the stylus being amplified electronically [1]. The accuracy of the stylus method depends on the radius of the diamond tip. When the surface roughness is less than 2.5 μm, stylus instruments produce a large system error. The major disadvantage of the method is that it requires direct physical contact and provides only line sampling, which may not represent the real characteristics of the surface [2]. The stylus method must also be applied off-line, which does not suit adaptive control and automation in industry [3].

In recent years, many optical measuring methods have been applied to overcome the limitations of the stylus method in measuring the surface roughness of workpieces. Galante et al. applied an image-processing technique to measure the surface roughness of tools [4]. Al-Kindi et al. [5] applied a machine vision system in the automated inspection of an engineering surface; the definition of surface roughness and the arrangement of the system were described clearly in Table 1 and Fig. 1 of their paper. Damodarasamy and Raman used a computer vision system to analyse the surface texture of workpieces successfully [1]. Pre-processing for eliminating effects due to illumination problems and noise was reported by Kiran et al. [3].

In this paper, a polynomial network is used to construct the relationships between the cutting parameters (cutting speed, feedrate, and depth of cut) and cutting performance (surface roughness). The polynomial network is a self-organising adaptive modelling tool [6] with the ability to construct the relationships between input variables and output feature spaces. A comparison between a polynomial network and a back-propagation network has shown that the polynomial network has higher prediction accuracy and fewer internal network connections [7].

In this work, we construct a computer vision system for measuring surface roughness automatically in the turning process. First, a simple image modelling procedure and the theory of polynomial networks are introduced. An experimental set-up for measuring texture parameters and surface roughness in turning operations is then described. A polynomial network constructed using the measured parameters is developed, and an experimental verification of the network is presented. Finally, a computer vision-based surface roughness measuring system is developed for the turning process.

2. A Simple Image Modelling

The term image refers to a 2D light-intensity function, denoted by g(x, y), where the value (amplitude) of g at spatial coordinates (x, y) gives the intensity (brightness) of the image at that point [8]. As light is a form of energy, g(x, y) must be non-zero and finite, that is,
296 B. Y. Lee et al.
Table 1. Experimental texture of turned workpiece surface and surface roughness for training database.
Note: V, cutting speed; f, feedrate; D, depth of cut; F, spatial frequency of grey level; GRa, arithmetic mean value of grey level; STR, standard deviation of grey level.
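The three grey-level parameters in the note to Table 1 can be estimated from a one-dimensional grey-level profile. The paper does not spell out the computation in full, so the sketch below is only a plausible reading: GRa and STR follow the definitions given later in the paper, while taking the dominant FFT bin as the spatial frequency F is an assumption, and all names are illustrative.

```python
import numpy as np

def texture_parameters(profile):
    """Estimate the three texture parameters of Table 1 from a 1-D grey-level profile."""
    profile = np.asarray(profile, dtype=float)
    deviations = profile - profile.mean()       # g_i: grey levels measured from the mean
    gra = np.mean(np.abs(deviations))           # GRa, arithmetic mean value of grey level
    str_dev = np.std(deviations)                # STR, standard deviation of grey level
    spectrum = np.abs(np.fft.rfft(deviations))  # amplitude spectrum of the profile
    f = int(np.argmax(spectrum[1:]) + 1)        # F: dominant non-DC spatial frequency bin
    return f, gra, str_dev

# Synthetic profile resembling periodic turning feed marks: 8 cycles over 256 pixels
x = np.arange(256)
profile = 128 + 20 * np.sin(2 * np.pi * 8 * x / 256)
f, gra, str_dev = texture_parameters(profile)   # f is 8 for this profile
```

For a purely periodic profile like the one above, F recovers the feed-mark frequency, while GRa and STR scale with the mark amplitude, which is why these parameters track roughness.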
0 < g(x,y) < ∞   (2.1)

The basic nature of g(x, y) may be characterised by two components:

1. The amount of light incident on the object being viewed.
2. The amount of light reflected by the object.

They are called the illumination and reflectance components, respectively, and are denoted by i(x,y) and r(x,y). The functions i(x,y) and r(x,y) combine as a product to form g(x,y):

g(x,y) = i(x,y) r(x,y)   (2.2)

The nature of i(x,y) is determined by the light source, and r(x,y) is determined by the characteristics of the object.

We call the intensity of a monochrome image g at coordinates (x,y) the grey level l of the image at that point. It is evident that l lies in the range

Lmin ≤ l ≤ Lmax   (2.3)

In theory, the only requirement on Lmin is that it be positive, and on Lmax that it be finite. In practice, Lmin = imin rmin and Lmax = imax rmax. Using the preceding values of illumination and reflectance as a guideline, the values Lmin ≈ 0.005 and Lmax ≈ 100 may be expected for indoor image-processing applications [8].

The interval [Lmin, Lmax] is called the grey scale. Common practice is to shift this interval numerically to the interval [0, L], where l = 0 is considered black and l = L is considered white in the scale. All intermediate values are shades of grey varying continuously from black to white.

To be suitable for computer processing, an image function g(x,y) must be digitised both spatially and in amplitude. Digitisation of the spatial coordinates (x,y) is called image sampling, and amplitude digitisation is called grey-level quantisation. Suppose that a continuous image, g(x,y), is approximated by equally spaced samples arranged in the form of an N × M array:

          | g(0,0)     g(0,1)     ···  g(0,M−1)   |
g(x,y) =  | g(1,0)     g(1,1)     ···  g(1,M−1)   |   (2.4)
          | ⋮                           ⋮          |
          | g(N−1,0)   g(N−1,1)   ···  g(N−1,M−1) |

The right-hand side of Eq. (2.4) represents what is commonly called a digital image. This digitisation process requires decisions about values for N, M, and the number of discrete grey levels allowed for each pixel. Common practice in digital image processing is to let these quantities be integer powers of two; that is,

N = 2^n,  M = 2^k   (2.5)

and

G = 2^m   (2.6)

where G denotes the number of grey levels. The assumption in this section is that the discrete levels are equally spaced between 0 and L in the grey scale. Using Eqs (2.5) and (2.6) yields the number, b, of bits required to store a digitised image:

b = N × M × m   (2.7)

Fig. 1. Types of polynomial functional node.

3. Description of Polynomial Networks

The polynomial network proposed by Ivakhnenko [9] is a group method of data handling (GMDH) technique [10]. In a polynomial network, complex systems are decomposed into smaller, simpler subsystems and grouped into several layers using polynomial function nodes. The inputs of the network are subdivided into groups and transmitted to individual functional nodes. These nodes evaluate a limited number of inputs by a polynomial function and generate an output to serve as an input to subsequent nodes of the next layer. The general methodology of dealing with a limited number of inputs at a time, summarising the input information, and then passing the summarised information to a higher reasoning level is related directly to human behaviour, as observed by Miller [11]. Polynomial networks can be considered as a special class of biologically inspired networks with machine intelligence and can be used effectively as a predictor for estimating the output of complex systems.

3.1 Polynomial Functional Nodes

The general polynomial function, known as the Ivakhnenko polynomial, in a polynomial functional node can be expressed as

y0 = w0 + Σ(i=1..m) wi xi + Σ(i=1..m) Σ(j=1..n) wij xi xj + Σ(i=1..m) Σ(j=1..m) Σ(k=1..n) wijk xi xj xk + ···   (3.1)
where xi, xj, and xk are the inputs, y0 is the output, and w0, wi, wij, and wijk are the coefficients of the polynomial functional node.

In the present study, several specific types of polynomial function node (Fig. 1) are used in the polynomial network for measuring surface roughness. An explanation of these polynomial function nodes is given as follows:

(i) Normaliser. A normaliser transforms the original input into a normalised input, where the corresponding polynomial function can be expressed as

y1 = w0 + w1 x1   (3.2)

in which x1 is the original input, y1 the normalised input, and w0 and w1 are the coefficients of the normaliser. During this normalisation process, the normalised input y1 is adjusted to have a mean value of zero and a variance of unity.

(ii) Unitiser. A unitiser converts the output of the network to the real output value. The polynomial equation of the unitiser can be expressed as

y1 = w0 + w1 x1   (3.3)

where x1 is the output of the network, y1 is the real output, and w0 and w1 are the coefficients of the unitiser. The mean and variance of the real output must be equal to those of the output used to synthesise the network.

(iii) Single node. A single node has only one input and the polynomial equation is limited to the third degree, i.e.

y0 = w0 + w1 x1 + w2 x1^2 + w3 x1^3   (3.4)

where x1 is the input to the node, y0 the output of the node, and w0, w1, w2, and w3 are the coefficients of the single node.

(iv) Double node. A double node takes two inputs at a time, and the third-degree polynomial equation has a cross-term to consider the interaction between the two inputs, i.e.

y0 = w0 + w1 x1 + w2 x2 + w3 x1^2 + w4 x2^2 + w5 x1 x2 + w6 x1^3 + w7 x2^3   (3.5)

where x1 and x2 are the inputs to the node, y0 is the output of the node, and w0, w1, w2, ···, w7 are the coefficients of the double node.

(v) Triple node. Similar to the single and double nodes, a triple node, with three inputs, has a more complicated polynomial equation allowing for the interaction among these inputs, i.e.

y0 = w0 + w1 x1 + w2 x2 + w3 x3 + w4 x1^2 + w5 x2^2 + w6 x3^2 + w7 x1 x2 + w8 x1 x3 + w9 x2 x3 + w10 x1 x2 x3 + w11 x1^3 + w12 x2^3 + w13 x3^3   (3.6)

where x1, x2, and x3 are the inputs to the node, y0 is the output of the node, and w0, w1, w2, ···, w13 are the coefficients of the triple node.

3.2 Synthesis of Polynomial Networks

To build a polynomial network, a training database with the information of inputs and outputs is required first. Then, an algorithm for the synthesis of polynomial networks (ASPN), based on the predicted-squared-error (PSE) criterion [12], is used to determine an optimal network structure. The principle of the PSE criterion is to select an accurate network with as simple a structure as possible. To accomplish this, the PSE is composed of two terms, i.e.

PSE = FSE + KP   (3.7)

where FSE is the average-square-error of the network for fitting the training data, and KP is the complex penalty of the network.

The average-square-error of the network FSE can be expressed as

FSE = (1/N) Σ(i=1..N) [ŷi − yi]^2   (3.8)

where N is the number of training data items, ŷi the desired value in the training set, and yi is the predicted value from the network.

The complex penalty of the network KP can be expressed as

KP = CPM (2 σp^2 K / N)   (3.9)

where CPM is the complex penalty multiplier, K is the number of coefficients in the network, and σp^2 is a prior estimate of the model error variance, which is also equal to a prior estimate of FSE.

Usually, a complex network has a high fitting accuracy. Hence, FSE, Eq. (3.8), decreases as the complexity of the network increases. However, the more complex the network is, the larger the value of KP, Eq. (3.9). Therefore, the PSE criterion, Eq. (3.7), performs a trade-off between model accuracy and complexity. CPM, Eq. (3.9), can be used to adjust this trade-off: a complex network is penalised more in the PSE criterion as CPM is increased; conversely, a complex network will be selected if CPM is decreased [13].

4. Experimental Set-up and Training Database

To build a polynomial network for a computer vision system to measure surface roughness under varying cutting conditions, a training database must be established for different cutting parameters and surface roughnesses. A number of turning experiments on S55C steel were carried out on a PC-based CNC lathe (ECOCA PC4610) using a carbide tool (Mitsubishi TNMG160404R-2G). A digital camera (Olympus C1400L) captured the image of the machined surface with 1280 × 1024 resolution, a grabbing speed of 1/30 s, and 8-bit digital output.

Illumination of the specimens was accomplished by a diffused blue light source which was situated at an angle of
approximately 40° incidence with respect to the specimen surface. The equipment used for measuring surface roughness was a surface roughness tester, Surfcorder SE-1100 (Kosaka Laboratory Ltd). The cutting parameters were selected by varying the cutting speed in the range 53.44–199.49 m min−1, the feedrate in the range 0.16–0.52 mm rev−1, and the depth of cut in the range 0.5–1.5 mm; the resulting average surface roughness Ra was in the range 1.87–26.39 μm. The average surface roughness Ra, which is the most widely used surface finish parameter in industry, is selected in this study. It is the arithmetic average of the absolute value of the heights of roughness irregularities from the mean value, measured within a sampling length of 8 mm.

Fig. 2. Experimental results for texture analysis: (a) turned workpiece surface image (cutting speed, 57.58 m min−1; feedrate, 0.4 mm rev−1; depth of cut, 0.5 mm); (b) distribution of grey level on the centre-line of (a); (c) the amplitude of grey level in (b).

Fig. 3. Structure of the polynomial network for measuring surface roughness by the vision system.

In this study, a feature of the surface image, called the arithmetic average of the grey level, is used to predict the actual surface roughness of the workpiece. The arithmetic average of the grey level GRa can be expressed as

GRa = (1/n) Σ(i=1..n) |gi|

where gi is the grey level of the surface image deviating from the mean grey level [14].

The parameters of surface texture are shown in Fig. 2. The image of the workpiece surface captured by the digital camera is shown in Fig. 2(a). Extracting the digital data on the central line of Fig. 2(a), the distribution could be obtained and is shown in Fig. 2(b). The amplitude of the grey level is obtained
Fig. 4. Schematic diagram of the computer vision system for measuring roughness.
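As a concrete check of Eq. (2.7), the storage needed for one frame from the camera used in this system (1280 × 1024 pixels, 8-bit grey-level output) works out as follows; the variable names are illustrative only.

```python
# Eq. (2.7): bits required to store one digitised image, b = N * M * m
N, M = 1280, 1024          # spatial resolution of the camera frames
m = 8                      # 8-bit output, so G = 2**m = 256 grey levels
b = N * M * m              # total bits for one frame
bytes_per_frame = b // 8   # storage per captured image in bytes
```

At 1/30 s per grab this implies a sustained data rate of roughly 37.5 MB per second if every frame were stored, which is why only selected frames are processed.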
Table 2. Experimental turning parameters and surface roughness for verification tests.
by filtering out the direct-current (mean) component of the grey level, and is shown in Fig. 2(c). The spatial frequency (F), the arithmetic mean value of grey level (GRa), and the standard deviation of grey level (STR) can then be calculated from the amplitude of the grey level by statistical methods. These were the three parameters used for training the polynomial network. In the experiments, 55 turned specimens were made, based on the cutting parameter combinations. The experimental results are given in Table 1 for the training database.

Using the developed training database, a three-layer polynomial network for measuring turned surface roughness is developed using the PSE criterion. Figure 3 shows the polynomial network developed for the vision system for measuring surface roughness. All of the polynomial equations used in the network shown in Fig. 3 are listed in the Appendix. A schematic diagram of the computer vision system for measuring surface roughness is shown in Fig. 4.

which are listed in the Appendix. The comparisons of R̃a (measured by vision) and Ra (measured by stylus) are shown in Fig. 5. The surface roughness R̃a evaluated by the vision measuring system is close to the value Ra measured by the stylus instrument.

6. Conclusions

In this paper, a self-organised polynomial network to model the vision measuring system for surface roughness has been established. Several verification tests on S55C steel have shown that the maximum absolute error between the surface roughness measured by the vision system and that measured by the stylus instrument is less than 11.32%. In other words, the developed measuring system using computer vision can be used effectively to measure the surface roughness of this material over a wide range of cutting conditions in turning. The direct imaging approach is effective and easy to apply at the shop-floor level.

References

1. S. Damodarasamy and S. Raman, "Texture analysis using computer vision", Computers in Industry, 16, pp. 25–34, 1991.
2. M. A. Younis, "On line surface roughness measurements using image processing towards an adaptive control", Computers and Industrial Engineering, 35(1–2), pp. 49–52, 1998.
3. M. B. Kiran, B. Ramamoorthy and V. Radhakrishnan, "Evaluation of surface roughness by vision system", International Journal of Machine Tools and Manufacture, 38(5–6), pp. 685–690, 1998.
4. G. Galante, M. Piacentini and V. F. Ruisi, "Surface roughness detection by tool image processing", Wear, 148, pp. 211–220, 1991.
5. G. A. Al-Kindi, R. M. Baul and K. F. Gill, "An application of machine vision in the automated inspection of engineering surfaces", International Journal of Production Research, 30(2), pp. 241–253, 1992.
6. R. L. Barron, A. N. Mucciardi, F. J. Cook and A. R. Barron, "Adaptive learning networks: development and application in the United States of algorithms related to GMDH", in S. J. Farlow (ed.), Self-Organizing Methods in Modeling: GMDH Type Algorithms, Marcel Dekker, New York, 1984.
7. G. J. Montgomery and K. C. Drake, "Abductive reasoning network", Neurocomputing, 2, pp. 97–104, 1991.
8. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, pp. 28–32, 1992.
9. A. G. Ivakhnenko, "Polynomial theory of complex systems", IEEE Transactions on Systems, Man and Cybernetics, 1(4), pp. 364–378, 1971.
10. S. J. Farlow (ed.), "The GMDH algorithm", in Self-Organizing Methods in Modeling: GMDH Type Algorithms, Marcel Dekker, New York, 1984.
11. G. A. Miller, "The magical number seven, plus or minus two: some limits on our capacity for processing information", Psychological Review, 63, pp. 81–97, 1956.
12. A. R. Barron, "Predicted-squared-error: a criterion for automatic model selection", in S. J. Farlow (ed.), Self-Organizing Methods in Modeling: GMDH Type Algorithms, Marcel Dekker, New York, 1984.
13. H. S. Liu, B. Y. Lee and Y. S. Tarng, "In-process prediction of corner wear in drilling operations", Journal of Materials Processing Technology, 101, pp. 152–158, 2000.
14. D. E. P. Hoy and F. Yu, "Surface quality assessment using computer vision methods", Journal of Materials Processing Technology, 28, pp. 265–274, 1991.

Appendix

A. Normaliser

y01 = −2.3 + 0.637 F
y02 = −2.35 + 0.0587 GRa
y03 = −2.47 + 0.0548 STR

B. Unitiser

R̃a = 11.4 + 8.01 y31

C. Single Node

y13 = −0.553 − 0.92 y01 + 0.707 y01^2 − 0.217 y01^3
y14 = −0.553 − 0.92 y01 + 0.707 y01^2 − 0.217 y01^3

D. Double Node

y12 = −0.541 − 0.853 y01 + 0.0874 y03 + 0.627 y01^2 − 0.034 y03^2 − 0.096 y03 y01 − 0.203 y01^3
y31 = 1.34 y21 − 0.356 y14 + 3.59 y21^2 + 3.08 y14^2 − 6.68 y21 y14 − 0.108 y21^3 + 0.108 y14^3

E. Triple Node

y11 = −0.552 − 0.875 y01 + 0.119 y02 − 0.0461 y03 + 0.827 y01^2 + 4.97 y02^2 + 5.14 y03^2 + 0.5 y01 y02 − 0.0851 y01 y03 − 9.85 y02 y03 − 0.188 y01 y02 y03 − 0.129 y01^3 − 0.173 y02^3 + 0.0428 y03^3
y21 = −0.0202 − 3.07 y11 + 1.63 y12 + 2.39 y13 − 12.8 y11^2 + 50.7 y12^2 + 57.3 y13^2 + 21 y11 y12 + 5.28 y11 y13 − 121 y12 y13 + 24.6 y11 y12 y13 − 7.2 y11^3 − 9.13 y12^3 − 8.27 y13^3
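Read as a whole, the appendix defines a single predictor: the normalisers map the texture parameters (F, GRa, STR) into y01–y03, three layers of functional nodes combine them, and the unitiser rescales the final node output y31 to R̃a. The sketch below chains these equations in Python. The coefficients are transcribed from the appendix as printed; some superscripts are ambiguous in the original, so the cubic terms are a best reading, and y14 duplicates y13 exactly as printed.

```python
def predict_ra(F, GRa, STR):
    """Chain the appendix equations: texture parameters -> estimated roughness R~a."""
    # A. Normalisers: map raw texture parameters to zero-mean, unit-variance inputs
    y01 = -2.3 + 0.637 * F
    y02 = -2.35 + 0.0587 * GRa
    y03 = -2.47 + 0.0548 * STR
    # First layer: one triple node, one double node, and two single nodes
    y11 = (-0.552 - 0.875*y01 + 0.119*y02 - 0.0461*y03
           + 0.827*y01**2 + 4.97*y02**2 + 5.14*y03**2
           + 0.5*y01*y02 - 0.0851*y01*y03 - 9.85*y02*y03
           - 0.188*y01*y02*y03
           - 0.129*y01**3 - 0.173*y02**3 + 0.0428*y03**3)
    y12 = (-0.541 - 0.853*y01 + 0.0874*y03 + 0.627*y01**2
           - 0.034*y03**2 - 0.096*y03*y01 - 0.203*y01**3)
    y13 = -0.553 - 0.92*y01 + 0.707*y01**2 - 0.217*y01**3
    y14 = -0.553 - 0.92*y01 + 0.707*y01**2 - 0.217*y01**3  # printed identically to y13
    # Second layer: triple node combining the first-layer outputs
    y21 = (-0.0202 - 3.07*y11 + 1.63*y12 + 2.39*y13
           - 12.8*y11**2 + 50.7*y12**2 + 57.3*y13**2
           + 21*y11*y12 + 5.28*y11*y13 - 121*y12*y13
           + 24.6*y11*y12*y13
           - 7.2*y11**3 - 9.13*y12**3 - 8.27*y13**3)
    # Third layer: double node producing the network output
    y31 = (1.34*y21 - 0.356*y14 + 3.59*y21**2 + 3.08*y14**2
           - 6.68*y21*y14 - 0.108*y21**3 + 0.108*y14**3)
    # B. Unitiser: rescale the network output to roughness units
    return 11.4 + 8.01 * y31
```

Note that the normaliser and unitiser coefficients tie this model to the training data of Table 1, so the predictor is only meaningful for texture parameters within the experimental ranges reported in Section 4.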