
Int J Adv Manuf Technol (2002) 19:295–301
© 2002 Springer-Verlag London Limited

A Study of Computer Vision for Measuring Surface Roughness in the Turning Process

B. Y. Lee, H. Juan and S. F. Yu
National Huwei Institute of Technology, Huwei, Yunlin, Taiwan

The paper presents a system for measuring the surface roughness of turned parts using a computer vision system. The images of specimens grabbed by the computer vision system are processed to obtain parameters of their grey levels (spatial frequency, arithmetic mean value, and standard deviation). These parameters are used as input data to a polynomial network. Using the trained polynomial network, the experimental results show that the surface roughness of a turned part made of S55C steel, measured by the computer vision system over a wide range of turning conditions, can be obtained with reasonable accuracy compared to that measured by a traditional stylus method. Compared with the stylus method, the computer vision system constructed here measures the surface roughness of this material faster, at a lower cost, and with lower environmental noise.

Keywords: Image; Surface roughness; Turning

1. Introduction

Surface roughness of workpieces is an important mechanical property. The traditional stylus method is the most widely used measurement method in industry: a precision diamond stylus is drawn over the surface being measured, and the perpendicular motion of the stylus is amplified electronically [1]. The accuracy of the stylus method depends on the radius of the diamond tip. When the surface roughness is less than 2.5 μm, stylus instruments produce a large system error. The major disadvantage of the method is that it requires direct physical contact and provides only line sampling, which may not represent the real characteristics of the surface [2]. The stylus method must also be applied off-line, which does not suit adaptive control and automation in industry [3].

In recent years, many optical measuring methods have been applied to overcome the limitations of the stylus method in measuring the surface roughness of workpieces. Galante et al. applied an image-processing technique to measure the surface roughness of tools [4]. Al-Kindi et al. [5] applied a machine vision system to the automated inspection of engineering surfaces; the definition of surface roughness and the arrangement of the system are described clearly in Table 1 and Fig. 1 of their paper. Damodarasamy and Raman successfully used a computer vision system to analyse the surface texture of workpieces [1]. Pre-processing for eliminating the effects of illumination problems and noise was reported by Kiran et al. [3].

In this paper, a polynomial network is used to construct the relationships between the cutting parameters (cutting speed, feedrate, and depth of cut) and cutting performance (surface roughness). The polynomial network is a self-organising adaptive modelling tool [6] with the ability to construct the relationships between input variables and output feature spaces. A comparison between a polynomial network and a back-propagation network has shown that the polynomial network has higher prediction accuracy and fewer internal network connections [7].

In this work, we construct a computer vision system for measuring surface roughness automatically in the turning process. First, a simple image modelling procedure and the theory of polynomial networks are introduced. An experimental set-up for measuring texture parameters and surface roughness in turning operations is then described, and a polynomial network constructed from the measured parameters is developed. An experimental verification of the network is then presented. Finally, a computer vision-based surface roughness measuring system is developed for the turning process.

Correspondence and offprint requests to: Professor B. Y. Lee, Department of Mechanical Manufacture Engineering, National Huwei Institute of Technology, 64 Wun Hua Road, Huwei, Yunlin 632, Taiwan. E-mail: leebyin@sunws.nhit.edu.tw

Table 1. Experimental texture of the turned workpiece surface and surface roughness for the training database.

Test number   V (m min−1)   f (mm rev−1)   D (mm)   F (line mm−1)   GRa (grey level)   STR (grey level)   Stylus instrument Ra (μm)
1 57.59 0.52 0.5 1.92 62.66 69.33 21.790
2 81.58 0.52 0.5 1.92 62.18 69.41 23.150
3 115.17 0.52 0.5 1.92 56.61 65.03 23.460
4 191.95 0.52 0.5 1.92 50.13 56.55 22.440
5 59.85 0.52 1.0 1.92 59.82 67.74 25.710
6 84.78 0.52 1.0 1.92 74.39 81.83 23.710
7 119.69 0.52 1.0 1.92 63.54 71.58 24.720
8 199.49 0.52 1.0 1.92 61.34 68.60 26.390
9 53.44 0.52 1.5 1.92 57.62 63.97 22.250
10 75.10 0.52 1.5 1.92 58.72 65.71 22.330
11 106.88 0.52 1.5 1.92 52.88 59.87 22.720
12 178.13 0.52 1.5 1.92 48.16 53.99 23.260
13 57.59 0.4 0.5 2.50 47.44 52.68 14.420
14 81.58 0.4 0.5 2.50 51.50 56.88 14.750
15 115.17 0.4 0.5 2.50 51.36 56.89 15.970
16 191.95 0.4 0.5 2.50 50.40 55.06 15.080
17 59.85 0.4 1.0 2.50 65.25 72.00 15.820
18 84.78 0.4 1.0 2.50 47.46 52.96 14.810
19 119.69 0.4 1.0 2.50 53.65 59.30 15.130
20 199.49 0.4 1.0 2.50 46.76 64.88 15.260
21 53.44 0.4 1.5 2.50 59.83 58.20 17.160
22 75.70 0.4 1.5 2.50 53.00 58.20 15.050
23 106.88 0.4 1.5 2.50 50.01 55.16 17.900
24 178.13 0.4 1.5 2.50 45.35 50.46 17.010
25 57.59 0.29 0.5 3.44 37.02 41.37 6.595
26 81.58 0.29 0.5 3.44 38.65 43.04 6.815
27 115.17 0.29 0.5 3.44 36.69 40.59 7.176
28 191.95 0.29 0.5 3.44 38.83 43.32 7.273
29 59.85 0.29 1.0 3.44 45.17 49.86 7.042
30 84.78 0.29 1.0 3.44 41.46 45.83 6.432
31 119.69 0.29 1.0 3.44 40.77 45.16 6.713
32 199.49 0.29 1.0 3.44 38.85 43.07 7.026
33 53.44 0.29 1.5 3.44 42.63 47.49 9.625
34 75.70 0.29 1.5 3.44 39.61 43.82 9.573
35 106.88 0.29 1.5 3.44 39.95 44.41 8.332
36 178.13 0.29 1.5 3.44 35.84 40.12 9.034
37 88.79 0.26 1.0 3.85 36.17 40.98 6.284
38 129.12 0.26 1.0 3.85 35.16 40.01 5.332
39 202.63 0.26 1.0 3.85 32.79 37.53 6.667
40 88.79 0.23 1.0 4.35 33.07 37.10 4.911
41 129.12 0.23 1.0 4.35 33.13 37.56 4.555
42 88.79 0.2 1.0 5.00 28.85 34.45 3.596
43 129.12 0.2 1.0 5.00 27.73 31.90 3.797
44 202.63 0.2 1.0 5.00 30.48 35.59 3.707
45 57.59 0.16 0.5 6.25 19.58 24.70 2.525
46 81.58 0.16 0.5 6.25 11.06 13.95 2.331
47 115.17 0.16 0.5 6.25 10.60 13.61 2.184
48 191.95 0.16 0.5 6.25 8.90 11.65 2.161
49 59.85 0.16 1.0 6.25 13.56 16.55 2.167
50 84.78 0.16 1.0 6.25 12.44 16.07 1.871
51 119.69 0.16 1.0 6.25 12.61 15.81 2.265
52 199.49 0.16 1.0 6.25 12.76 16.25 2.385
53 53.44 0.16 1.5 6.25 13.69 17.15 2.213
54 75.70 0.16 1.5 6.25 11.72 14.22 2.465
55 106.88 0.16 1.5 6.25 14.99 19.42 2.545

Note: V, cutting speed; f, feedrate; D, depth of cut; F, spatial frequency of grey level; GRa, arithmetic mean value of grey level; STR, standard deviation of grey level.

2. A Simple Image Modelling

The term image refers to a 2D light-intensity function, denoted by g(x, y), where the value of the amplitude of g at spatial coordinates (x, y) gives the intensity (brightness) of the image at that coordinate [8]. As light is a form of energy, g(x, y) must be non-zero and finite, that is,

0 < g(x,y) < ∞   (2.1)

The basic nature of g(x, y) may be characterised by two components:

1. The amount of light incident on the object being viewed.
2. The amount of light reflected by the object.

They are called the illumination and reflectance components, respectively, and are denoted by i(x,y) and r(x,y). The functions i(x,y) and r(x,y) combine as a product to form g(x,y):

g(x,y) = i(x,y) r(x,y)   (2.2)

The nature of i(x,y) is determined by the light source, and r(x,y) is determined by the characteristics of the object.

We call the intensity of a monochrome image g at coordinates (x,y) the grey level l of the image at that point. It is evident that l lies in the range

Lmin ≤ l ≤ Lmax   (2.3)

In theory, the only requirement on Lmin is that it be positive, and on Lmax that it be finite. In practice, Lmin = imin rmin and Lmax = imax rmax. Using the preceding values of illumination and reflectance as a guideline, the values Lmin ≈ 0.005 and Lmax ≈ 100 may be expected for indoor image-processing applications [8].

The interval [Lmin, Lmax] is called the grey scale. Common practice is to shift this interval numerically to the interval [0, L], where l = 0 is considered black and l = L is considered white in the scale. All intermediate values are shades of grey varying continuously from black to white.

To be suitable for computer processing, an image function g(x,y) must be digitised both spatially and in amplitude. Digitisation of the spatial coordinates (x,y) is called image sampling, and amplitude digitisation is called grey-level quantisation. Suppose that a continuous image, g(x,y), is approximated by equally spaced samples arranged in the form of an N × M array, as shown in Eq. (2.4), where each element of the array is a discrete quantity:

          | g(0,0)      g(0,1)      ···   g(0,M−1)    |
g(x,y) =  | g(1,0)      g(1,1)      ···   g(1,M−1)    |   (2.4)
          | ⋮           ⋮                 ⋮           |
          | g(N−1,0)    g(N−1,1)    ···   g(N−1,M−1)  |

The righthand side of Eq. (2.4) represents what is commonly called a digital image. This digitisation process requires decisions about values for N, M, and the number of discrete grey levels allowed for each pixel. Common practice in digital image processing is to let these quantities be integer powers of two; that is,

N = 2^n,  M = 2^k   (2.5)

and

G = 2^m   (2.6)

where G denotes the number of grey levels. The assumption in this section is that the discrete levels are equally spaced between 0 and L in the grey scale. Using Eqs (2.5) and (2.6) yields the number, b, of bits required to store a digitised image:

b = N × M × m   (2.7)
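As a quick illustration of Eqs (2.5)–(2.7), the following sketch (our addition, not from the paper) computes the storage requirement for a hypothetical 512 × 512 image with 256 grey levels; the chosen values of n, k, and m are examples only.

```python
# Storage requirement of a digitised image, following Eqs (2.5)-(2.7).
# Illustrative values: N = M = 512 (n = k = 9) and G = 256 (m = 8);
# these are common choices, not settings taken from the paper.

n, k, m = 9, 9, 8
N, M, G = 2**n, 2**k, 2**m      # Eqs (2.5) and (2.6)

b = N * M * m                   # Eq. (2.7): bits needed to store the image
print(f"{N} x {M} image with {G} grey levels -> {b} bits ({b // 8} bytes)")
# 512 x 512 image with 256 grey levels -> 2097152 bits (262144 bytes)
```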
3. Description of Polynomial Networks

The polynomial network proposed by Ivakhnenko [9] is a group method of data handling (GMDH) technique [10]. In a polynomial network, complex systems are decomposed into smaller, simpler subsystems and grouped into several layers using polynomial function nodes. The inputs of the network are subdivided into groups and transmitted to individual functional nodes. These nodes evaluate a limited number of inputs by a polynomial function and generate an output to serve as an input to subsequent nodes of the next layer. The general methodology of dealing with a limited number of inputs at a time, summarising the input information, and then passing the summarised information to a higher reasoning level is related directly to human behaviour, as observed by Miller [11]. Polynomial networks can be considered as a special class of biologically inspired networks with machine intelligence, and can be used effectively as a predictor for estimating the output of complex systems.

Fig. 1. Types of polynomial functional node.

3.1 Polynomial Functional Nodes

The general polynomial function in a polynomial functional node, known as the Ivakhnenko polynomial, can be expressed as

y0 = w0 + Σ_{i=1..m} wi xi + Σ_{i=1..m} Σ_{j=1..n} wij xi xj + Σ_{i=1..m} Σ_{j=1..n} Σ_{k=1..n} wijk xi xj xk + ···   (3.1)

where xi, xj, and xk are the inputs, y0 is the output, and w0, wi, wij, and wijk are the coefficients of the polynomial functional node.

In the present study, several specific types of polynomial function node (Fig. 1) are used in the polynomial networks for predicting surface roughness. An explanation of these polynomial function nodes is given in the list below, and an illustrative coding of the node equations follows it:

(i) Normaliser. A normaliser transforms the original input into a normalised input, where the corresponding polynomial function can be expressed as

y1 = w0 + w1 x1   (3.2)

in which x1 is the original input, y1 is the normalised input, and w0 and w1 are the coefficients of the normaliser. During this normalisation process, the normalised input y1 is adjusted to have a mean value of zero and a variance of unity.

(ii) Unitiser. A unitiser converts the output of the network to the real output value. The polynomial equation of the unitiser can be expressed as

y1 = w0 + w1 x1   (3.3)

where x1 is the output of the network, y1 is the real output, and w0 and w1 are the coefficients of the unitiser. The mean and variance of the real output must be equal to those of the output used to synthesise the network.

(iii) Single node. A single node has only one input, and its polynomial equation is limited to the third degree, i.e.

y0 = w0 + w1 x1 + w2 x1² + w3 x1³   (3.4)

where x1 is the input to the node, y0 is the output of the node, and w0, w1, w2, and w3 are the coefficients of the single node.

(iv) Double node. A double node takes two inputs at a time, and its third-degree polynomial equation has cross-terms to consider the interaction between the two inputs, i.e.

y0 = w0 + w1 x1 + w2 x2 + w3 x1² + w4 x2² + w5 x1 x2 + w6 x1³ + w7 x2³   (3.5)

where x1 and x2 are the inputs to the node, y0 is the output of the node, and w0, w1, w2, …, w7 are the coefficients of the double node.

(v) Triple node. Similar to the single and double nodes, a triple node, with three inputs, has a more complicated polynomial equation allowing for the interaction among these inputs, i.e.

y0 = w0 + w1 x1 + w2 x2 + w3 x3 + w4 x1² + w5 x2² + w6 x3² + w7 x1 x2 + w8 x1 x3 + w9 x2 x3 + w10 x1 x2 x3 + w11 x1³ + w12 x2³ + w13 x3³   (3.6)

where x1, x2, and x3 are the inputs to the node, y0 is the output of the node, and w0, w1, w2, …, w13 are the coefficients of the triple node.
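A minimal sketch of the node equations follows, assuming nothing beyond Eqs (3.4)–(3.6) themselves; the coefficient values in the example call are arbitrary, not taken from the paper.

```python
# Illustrative evaluation of the polynomial functional nodes,
# Eqs (3.4)-(3.6). Coefficient vectors w are indexed as w[0] = w0, etc.

def single_node(w, x1):
    # Eq. (3.4): third-degree polynomial in one input, w = (w0, ..., w3)
    return w[0] + w[1]*x1 + w[2]*x1**2 + w[3]*x1**3

def double_node(w, x1, x2):
    # Eq. (3.5): third-degree polynomial in two inputs with a
    # cross-term, w = (w0, ..., w7)
    return (w[0] + w[1]*x1 + w[2]*x2 + w[3]*x1**2 + w[4]*x2**2
            + w[5]*x1*x2 + w[6]*x1**3 + w[7]*x2**3)

def triple_node(w, x1, x2, x3):
    # Eq. (3.6): third-degree polynomial in three inputs with all
    # pairwise and triple cross-terms, w = (w0, ..., w13)
    return (w[0] + w[1]*x1 + w[2]*x2 + w[3]*x3
            + w[4]*x1**2 + w[5]*x2**2 + w[6]*x3**2
            + w[7]*x1*x2 + w[8]*x1*x3 + w[9]*x2*x3
            + w[10]*x1*x2*x3
            + w[11]*x1**3 + w[12]*x2**3 + w[13]*x3**3)

# Example with arbitrary coefficients:
print(single_node((0.1, 0.5, -0.2, 0.05), 1.5))   # -> 0.56875
```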
3.2 Synthesis of Polynomial Networks

To build a polynomial network, a training database with the information of inputs and outputs is required first. Then an algorithm for the synthesis of polynomial networks (ASPN), based on the predicted-squared-error (PSE) criterion [12], is used to determine an optimal network structure. The principle of the PSE criterion is to select the most accurate network that is as simple as possible. To accomplish this, the PSE is composed of two terms, i.e.

PSE = FSE + KP   (3.7)

where FSE is the average square error of the network in fitting the training data, and KP is the complexity penalty of the network.

The average square error of the network, FSE, can be expressed as

FSE = (1/N) Σ_{i=1..N} (ŷi − yi)²   (3.8)

where N is the number of training data items, ŷi is the desired value in the training set, and yi is the predicted value from the network.

The complexity penalty of the network, KP, can be expressed as

KP = CPM (2 σp² K / N)   (3.9)

where CPM is the complexity penalty multiplier, K is the number of coefficients in the network, and σp² is a prior estimate of the model error variance, which is also equal to a prior estimate of FSE.

Usually, a complex network has a high fitting accuracy. Hence FSE, Eq. (3.8), decreases as the complexity of the network increases. However, the more complex the network is, the larger the value of KP, Eq. (3.9). Therefore, the PSE criterion, Eq. (3.7), performs a trade-off between model accuracy and complexity. CPM, Eq. (3.9), can be used to adjust the trade-off: a complex network is penalised more in the PSE criterion as CPM is increased; on the contrary, a more complex network will be selected if CPM is decreased [13].
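The PSE trade-off is easy to see numerically. A minimal sketch follows (our addition; the data, K, σp², and CPM values are invented for illustration):

```python
# Predicted-squared-error criterion, Eqs (3.7)-(3.9).
# All numerical values below are illustrative only.

def fse(y_pred, y_true):
    # Eq. (3.8): average square error over the N training items
    N = len(y_true)
    return sum((yh - y) ** 2 for yh, y in zip(y_pred, y_true)) / N

def pse(y_pred, y_true, K, sigma2_p, cpm=1.0):
    # Eq. (3.7) with the complexity penalty of Eq. (3.9):
    # PSE = FSE + CPM * (2 * sigma2_p * K / N)
    N = len(y_true)
    return fse(y_pred, y_true) + cpm * 2.0 * sigma2_p * K / N

y_true = [2.1, 3.4, 5.0, 4.2]            # desired outputs (invented)
y_pred = [2.0, 3.5, 4.8, 4.4]            # network predictions (invented)

# More coefficients K -> larger penalty, so the criterion favours the
# simplest network that still fits the training data acceptably.
print(pse(y_pred, y_true, K=8, sigma2_p=0.05, cpm=1.0))   # -> 0.225
```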
4. Experimental Set-up and Training Database

To build a polynomial network for a computer vision system that measures surface roughness under varying cutting conditions, a training database must be established for different cutting parameters and surface roughnesses. A number of turning experiments on S55C steel workpieces were carried out on a PC-based CNC lathe (ECOCA PC4610) using a carbide tool (Mitsubishi TNMG160404R-2G). A digital camera (Olympus C1400L) captured the image of the surface with 1280 × 1024 resolution, a grabbing speed of 1/30 s, and 8-bit digital output.

Illumination of the specimens was accomplished by a diffused blue light source situated at an angle of approximately 40° incidence with respect to the specimen surface. The equipment used for measuring surface roughness was a surface roughness tester, Surfcorder SE-1100 (Kosaka Laboratory Ltd). The cutting parameters were selected by varying the cutting speed in the range 53.44–199.49 m min−1, the feedrate in the range 0.16–0.52 mm rev−1, and the depth of cut in the range 0.5–1.5 mm, giving an average surface roughness Ra in the range 1.87–26.39 μm. The average surface roughness Ra, the most widely used surface finish parameter in industry, is selected in this study. It is the arithmetic average of the absolute values of the heights of roughness irregularities from the mean value, measured within a sampling length of 8 mm.

In this study, a feature of the surface image, called the arithmetic average of the grey level, is used to predict the actual surface roughness of the workpiece. The arithmetic average of the grey level, GRa, can be expressed as:

GRa = (1/n) Σ_{i=1..n} |gi|

where gi is the grey level of the surface image deviating from the mean grey level [14].

The parameters of surface texture are shown in Fig. 2. The image of the workpiece surface captured by the digital camera is shown in Fig. 2(a). Extracting the digital data on the centre-line of Fig. 2(a), the grey-level distribution shown in Fig. 2(b) is obtained.

Fig. 2. Experimental results for texture analysis: (a) turned workpiece surface image (cutting speed 57.58 m min−1, feedrate 0.4 mm rev−1, depth of cut 0.5 mm); (b) distribution of grey level on the centre-line of (a); (c) amplitude of the grey level in (b).

Fig. 3. Structure of the polynomial network for measuring surface roughness by the vision system.

Fig. 4. Schematic diagram of the computer vision system for measuring roughness.

Table 2. Experimental turning parameters and surface roughness for verification tests.

Test number   V (m min−1)   f (mm rev−1)   D (mm)   F (line mm−1)   GRa (grey level)   STR (grey level)   Vision R̃a (μm)   Stylus Ra (μm)   Error (%)
1 75.44 0.45 0.80 1.95 60.92 67.61 21.923 20.030 4.81
2 75.44 0.45 1.20 2.18 69.51 75.67 20.778 19.930 4.25
3 177.50 0.45 1.20 2.21 48.11 53.66 18.857 20.060 6.00
4 177.50 0.45 0.80 2.21 43.91 49.45 22.898 21.160 8.21
5 405.50 0.35 0.80 2.92 46.76 52.02 11.530 11.550 0.17
6 177.50 0.35 0.80 2.95 41.50 46.28 11.150 11.280 1.15
7 75.44 0.35 0.80 2.97 42.96 48.03 11.062 11.710 5.53
8 75.44 0.32 0.08 3.27 39.51 43.84 8.645 8.143 6.16
9 75.44 0.26 1.00 3.89 34.69 38.90 5.868 5.462 7.43
10 202.63 0.26 1.00 3.91 32.79 37.53 5.912 6.667 11.32
11 129.12 0.26 1.00 3.93 35.16 40.01 5.848 5.332 9.68
12 88.80 0.26 1.00 3.98 36.17 40.98 5.648 6.284 10.12
13 88.80 0.23 1.00 4.41 33.07 37.10 4.466 4.911 9.06
14 75.44 0.23 1.00 4.52 33.13 37.56 4.285 4.555 5.93
15 75.44 0.20 1.00 5.10 28.85 34.45 3.521 3.596 2.09
16 129.12 0.20 1.00 5.17 27.73 31.90 3.550 3.797 6.51
17 202.63 0.20 1.00 5.19 30.48 35.59 3.394 3.707 8.44

Note: V, cutting speed; f, feedrate; D, depth of cut.

The amplitude of the grey level, shown in Fig. 2(c), is obtained by filtering out the direct-current component of the grey level. The spatial frequency (F), the arithmetic mean value of the grey level (GRa), and the standard deviation of the grey level (STR) can then be calculated from the amplitude of the grey level by statistical methods. These are the three parameters used for training the polynomial network. In the experiments, 55 turned specimens were made, based on the cutting parameter combinations. The experimental results are given in Table 1 as the training database.
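As an illustration of how these three texture parameters might be extracted from a centre-line grey-level profile, the sketch below computes GRa and STR exactly as defined above and estimates F from zero crossings of the mean-removed signal. The profile data, the spatial calibration, and the zero-crossing estimator are all our assumptions; the paper states only that the parameters are obtained by statistical methods.

```python
# Illustrative extraction of F, GRa, and STR from a grey-level profile.
# The profile is synthetic; in the paper it comes from the image
# centre-line (Fig. 2(b)). The zero-crossing estimate of the spatial
# frequency is an assumption on our part.

profile = [52, 60, 71, 64, 48, 41, 50, 63, 70, 61, 47, 42, 55, 66]
pixels_per_mm = 10.0                  # assumed spatial calibration

mean = sum(profile) / len(profile)
g = [v - mean for v in profile]       # deviation from the mean grey level

# GRa: arithmetic average of |gi| (equation in Section 4)
GRa = sum(abs(v) for v in g) / len(g)

# STR: standard deviation of the grey level
STR = (sum(v * v for v in g) / len(g)) ** 0.5

# F: periods per millimetre, counting two zero crossings per period
# of the feed-mark pattern
crossings = sum(1 for a, b in zip(g, g[1:]) if a * b < 0)
length_mm = len(profile) / pixels_per_mm
F = (crossings / 2) / length_mm       # line mm^-1

print(f"F = {F:.2f} line/mm, GRa = {GRa:.2f}, STR = {STR:.2f}")
```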
Using the developed training database, a three-layer polynomial network for measuring turned surface roughness is developed using the PSE criterion. Figure 3 shows the polynomial network developed for the vision system for measuring surface roughness. All of the polynomial equations used in the network of Fig. 3 are listed in the Appendix. A schematic diagram of the computer vision system for measuring surface roughness is shown in Fig. 4.

5. Experimental Verification and Discussion

To evaluate the developed network for measuring the surface roughness of turned workparts, 17 more turned specimens using different cutting parameters were made (Table 2). The spatial frequency, arithmetic mean value, and standard deviation of the grey level are fed into the polynomial network, and the surface roughness measured by the vision system can then be calculated directly using the polynomial functions listed in the Appendix. A comparison of R̃a (measured by the vision system) and Ra (measured by the stylus method) is presented in Table 2; the maximum error is less than 11.32%. Once the grey-level database (F, GRa, STR in Table 1) together with the verification database (Table 2) are fed into the polynomial network, the surface roughness measured by the vision system can be calculated automatically using the polynomial functions listed in the Appendix. The comparisons of R̃a (measured by vision) and Ra (measured by stylus) are shown in Fig. 5. The surface roughness R̃a evaluated by the vision measuring system is close to the value Ra measured by the stylus instrument.

Fig. 5. Comparisons between R̃a (measured by the vision system) and Ra (measured by the stylus instrument).
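The error column of Table 2 is consistent with the relative deviation of the vision reading from the stylus reading, |R̃a − Ra| / Ra × 100%. The short check below (our addition) reproduces the tabulated values for tests 5 and 10:

```python
# Percentage error between vision and stylus readings, as tabulated
# in Table 2: |Ra_vision - Ra_stylus| / Ra_stylus * 100.

def error_percent(ra_vision, ra_stylus):
    return abs(ra_vision - ra_stylus) / ra_stylus * 100.0

print(f"{error_percent(11.530, 11.550):.2f} %")   # test 5  -> 0.17 %
print(f"{error_percent(5.912, 6.667):.2f} %")     # test 10 -> 11.32 %
```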
6. Conclusions

In this paper, a self-organising polynomial network to model the vision measuring system for surface roughness has been established. Several verification tests on S55C steel have shown that the maximum absolute error between the surface roughness measured by the vision system and that measured by the stylus instrument is less than 11.32%. In other words, the developed measuring system using computer vision can be used effectively to measure the surface roughness of this material over a wide range of cutting conditions in turning. The direct imaging approach is effective and easy to apply at the shop floor level.

References

1. S. Damodarasamy and S. Raman, "Texture analysis using computer vision", Computers in Industry, 16, pp. 25–34, 1991.
2. M. A. Younis, "On line surface roughness measurements using image processing towards an adaptive control", Computers and Industrial Engineering, 35(1–2), pp. 49–52, 1998.
3. M. B. Kiran, B. Ramamoorthy and V. Radhakrishnan, "Evaluation of surface roughness by vision system", International Journal of Machine Tools and Manufacture, 38(5–6), pp. 685–690, 1998.
4. G. Galante, M. Piacentini and V. F. Ruisi, "Surface roughness detection by tool image processing", Wear, 148, pp. 211–220, 1991.
5. G. A. Al-Kindi, R. M. Baul and K. F. Gill, "An application of machine vision in the automated inspection of engineering surfaces", International Journal of Production Research, 30(2), pp. 241–253, 1992.
6. R. L. Barron, A. N. Mucciardi, F. J. Cook and A. R. Barron, "Adaptive learning networks: development and application in the United States of algorithms related to GMDH", in S. J. Farlow (ed.), Self-Organizing Methods in Modeling: GMDH Type Algorithms, Marcel Dekker, New York, 1984.
7. G. J. Montgomery and K. C. Drake, "Abductive reasoning networks", Neurocomputing, 2, pp. 97–104, 1991.
8. R. C. Gonzalez and R. E. Woods, Digital Image Processing, Addison-Wesley, pp. 28–32, 1992.
9. A. G. Ivakhnenko, "Polynomial theory of complex systems", IEEE Transactions on Systems, Man and Cybernetics, 1(4), pp. 364–378, 1971.
10. S. J. Farlow (ed.), "The GMDH algorithm", in Self-Organizing Methods in Modeling: GMDH Type Algorithms, Marcel Dekker, New York, 1984.
11. G. A. Miller, "The magical number seven, plus or minus two: some limits on our capacity for processing information", Psychological Review, 63, pp. 81–97, 1956.
12. A. R. Barron, "Predicted-squared-error: a criterion for automatic model selection", in S. J. Farlow (ed.), Self-Organizing Methods in Modeling: GMDH Type Algorithms, Marcel Dekker, New York, 1984.
13. H. S. Liu, B. Y. Lee and Y. S. Tarng, "In-process prediction of corner wear in drilling operations", Journal of Materials Processing Technology, 101, pp. 152–158, 2000.
14. D. E. P. Hoy and F. Yu, "Surface quality assessment using computer vision methods", Journal of Materials Processing Technology, 28, pp. 265–274, 1991.

Appendix

A. Normaliser

y01 = −2.3 + 0.637 F
y02 = −2.35 + 0.0587 GRa
y03 = −2.47 + 0.0548 STR

B. Unitiser

R̃a = 11.4 + 8.01 y31

C. Single Node

y13 = −0.553 − 0.92 y01 + 0.707 y01² − 0.217 y01³
y14 = −0.553 − 0.92 y01 + 0.707 y01² − 0.217 y01³

D. Double Node

y12 = −0.541 − 0.853 y01 + 0.0874 y03 + 0.627 y01² − 0.034 y03² − 0.096 y03 y01 − 0.203 y01³
y31 = 1.34 y21 − 0.356 y14 + 3.59 y21² + 3.08 y14² − 6.68 y21 y14 − 0.108 y21³ + 0.108 y14³

E. Triple Node

y11 = −0.552 − 0.875 y01 + 0.119 y02 − 0.0461 y03 + 0.827 y01² + 4.97 y02² + 5.14 y03² + 0.5 y01 y02 − 0.0851 y01 y03 − 9.85 y02 y03 − 0.188 y01 y02 y03 − 0.129 y01³ − 0.173 y02³ + 0.0428 y03³
y21 = −0.0202 − 3.07 y11 + 1.63 y12 + 2.39 y13 − 12.8 y11² + 50.7 y12² + 57.3 y13² + 21 y11 y12 + 5.28 y11 y13 − 121 y12 y13 + 24.6 y11 y12 y13 − 7.2 y11³ − 9.13 y12³ − 8.27 y13³
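To show how the Appendix equations chain together, the sketch below (our addition, not part of the paper) evaluates the full network for one set of texture parameters from Table 2. The coefficients are transcribed from the Appendix as printed above; a few exponents and one cross-term were reconstructed from garbled typesetting using the node forms of Eqs (3.4)–(3.6), so the output should be treated as indicative.

```python
# End-to-end evaluation of the Appendix polynomials: normalisers ->
# functional nodes -> unitiser. Coefficients follow the Appendix;
# some exponents there are reconstructions, so results are indicative.

def predict_ra(F, GRa, STR):
    # A. Normalisers
    y01 = -2.3  + 0.637  * F
    y02 = -2.35 + 0.0587 * GRa
    y03 = -2.47 + 0.0548 * STR

    # E. Triple node, first layer
    y11 = (-0.552 - 0.875*y01 + 0.119*y02 - 0.0461*y03
           + 0.827*y01**2 + 4.97*y02**2 + 5.14*y03**2
           + 0.5*y01*y02 - 0.0851*y01*y03 - 9.85*y02*y03
           - 0.188*y01*y02*y03
           - 0.129*y01**3 - 0.173*y02**3 + 0.0428*y03**3)

    # D. Double node, first layer
    y12 = (-0.541 - 0.853*y01 + 0.0874*y03 + 0.627*y01**2
           - 0.034*y03**2 - 0.096*y03*y01 - 0.203*y01**3)

    # C. Single nodes, first layer (identical as printed)
    y13 = -0.553 - 0.92*y01 + 0.707*y01**2 - 0.217*y01**3
    y14 = -0.553 - 0.92*y01 + 0.707*y01**2 - 0.217*y01**3

    # E. Triple node, second layer
    y21 = (-0.0202 - 3.07*y11 + 1.63*y12 + 2.39*y13
           - 12.8*y11**2 + 50.7*y12**2 + 57.3*y13**2
           + 21*y11*y12 + 5.28*y11*y13 - 121*y12*y13
           + 24.6*y11*y12*y13
           - 7.2*y11**3 - 9.13*y12**3 - 8.27*y13**3)

    # D. Double node, third layer
    y31 = (1.34*y21 - 0.356*y14 + 3.59*y21**2 + 3.08*y14**2
           - 6.68*y21*y14 - 0.108*y21**3 + 0.108*y14**3)

    # B. Unitiser: map the network output back to roughness in um
    return 11.4 + 8.01 * y31

# Texture parameters of test 5 in Table 2:
print(predict_ra(F=2.92, GRa=46.76, STR=52.02))
# -> roughly 11.6, close to the vision reading of 11.530 um in Table 2
```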
