CHAPTER 7

Machine Vision for Visual Testing
Machine vision is the application of computer vision to industry and manufacturing. It is a specialization within system engineering, which encompasses computer science, optics, mechanical engineering and industrial automation. One definition of machine vision is "the use of devices for optical, noncontact sensing to automatically receive and interpret an image of a real scene in order to obtain information and/or control machines or processes."1

For nondestructive inspection, visual inspection is usually performed by an experienced inspector. However, a tedious and difficult task may cause the inspector to tire prematurely and degrade the quality of inspection. Repetitive and dangerous inspection demands machine vision to replace human inspection so that precise information can be extracted and interpreted consistently. With technological advances in computers, cameras, illumination and communication, widespread application of machine vision systems to nondestructive testing is foreseen.

The basic architecture of a personal computer based machine vision system is given in Fig. 1. The main components include light source, detector, optics, frame grabber and computer. Machine vision does not necessarily mean the use of a computer: specialized image processing hardware is capable of achieving an even higher processing speed and can replace the computer.1 Modern approaches may use a camera with the capability to interface directly with a personal computer, a system designed on an image processing board, or a vision engine that plugs into a personal computer.2

A smart camera is a self-contained, standalone unit with communication interfaces. A typical smart camera may consist of the following components: image sensor, image digitization circuit, memory, digital signal processor, communication interface, input/output ports and a built-in illumination device. An embedded vision computer, which is a standalone box with frame storage and intelligence, is intermediate between the personal computer based vision system and the smart camera.2 It differs from a smart camera in that the camera is tethered to the unit rather than self-contained. Different system configurations have their own advantages for different applications. A personal computer based machine vision system has the greatest flexibility and the capability of handling the widest range of applications.
FIGURE 1. Typical architecture of machine vision system: light source, specimen, cameras, frame grabber and computer.

FIGURE 2. Four basic parameters for optics: working distance, depth of view, resolution and field of view.
Illumination
Illumination can be provided by one or more of the following techniques: front lighting, backlighting, coaxial lighting, structured illumination, strobed illumination or polarized light.

As illustrated in Fig. 3a, the bright field mode for front lighting uses any light source in the line of sight of the camera upon direct reflection from the test surface. Matte surfaces will appear darker than specular surfaces because the scattering of the matte surface returns less light to the camera. In contrast, sharp reflection returns more light. Dark field is any light source that is outside the line of sight of the camera upon direct reflection. In a dark field, light scattering from a matte surface will reach the camera and create a bright region. Similarly, a bright field for backlighting is any light source in the line of sight of the camera upon direct transmission.
FIGURE 5. Illumination: (a) backlighting; (b) coaxial illumination.
Fluorescent lighting: illumination using electricity to excite mercury vapor to produce shortwave ultraviolet radiation, which causes a phosphor to fluoresce, producing visible light.
Quartz halogen lamp: incandescent light bulb with an envelope made of quartz and with the filament surrounded by halogen gas.
Light emitting diode (LED): semiconductor diode that emits light when electrical current is applied.
Metal halide lamp: lamp that produces light by passing an electric arc through a high pressure mixture of argon, mercury and various metal halides.
Xenon: element used in arc and flash lamps. Xenon arc lamps use ionized xenon gas to produce bright white light; xenon flash lamps are electric glow discharge lamps that produce flashes of very intense, incoherent, full spectrum white light.
Sodium: element used in some vapor lamps. Sodium gas discharge lamps use sodium in an excited state to produce light. There are two types: low pressure and high pressure lamps.
(Figure: color image acquisition by a color wheel in front of the detector, by a mosaic of red, green and blue filters on a single charge coupled device array, and by a beam splitter directing the red, green and blue channels to three separate charge coupled devices.)

FIGURE 7. Pinhole camera: (a) model; (b) transformation between camera frame coordinates and pixel coordinates.14
(Panel a shows the optical center O, the focal length f, the optical axis and the image plane; panel b shows world coordinates Xw, Yw, Zw and camera coordinates Xc, Yc, Zc for points Pw and Pc.)
Camera Link® (Automated Imaging Association, Ann Arbor, MI): serial communication protocol that extends the base technology of Channel Link® for vision applications.19
USB (USB Implementers Forum): universal serial bus.
GigE Vision® (Automated Imaging Association, Ann Arbor, MI): based on the gigabit ethernet standard, with fast data transfer allowing standard, long, low cost cables.21
IEEE 1394 (IEEE, New York, NY): interface standard for high speed communication and isochronous (real time) data transfer for high performance and time sensitive applications.22
Comparison (in column order: Camera Link®; USB;a GigE Vision®; IEEE 1394b):
Topology: master and slave; master and slave ("on the fly"); networked, peer to peer; peer to peer.
Maximum bit rate:c 2380 Mbps; 480 Mbps; 1000 Mbps; ~400 to ~800 Mbps.
Isochronous mode: yes; yes; no; yes.
Maximum sustained bit rate: 2380 Mbps; 432 Mbps; 930 Mbps; ~320 to ~640 Mbps.
Cable distance (copper): 10 m; 5 m; 25 m; ~4.5 to ~100 m.
Bus power: none; up to 0.5 A; none; up to 1.5 A.
a. USB = universal serial bus.
b. IEEE = IEEE [formerly Institute of Electrical and Electronics Engineers], New York, NY.
c. Mbps = 10^6 bits per second.
(Figure: convolution of image I(x,y) with a difference operator w(i,j).)
value of tan⁻¹[Iy(x,y)/Ix(x,y)] is the direction of the edge. A simpler version of the magnitude of the gradient is |Ix(x,y)| + |Iy(x,y)|.

Moreover, there are other operators to calculate differences, such as the roberts, prewitt and sobel operators (Fig. 10). The roberts operator can detect edges in the direction of a slant. The edges detected by the prewitt and sobel operators tend to be thick.

Edge parts can be extracted by using first order and second order differentiation. A typical operator is the laplacian operator. Three commonly used small kernels are given in Fig. 11. This operator expresses the magnitude of the edge, combining the x and y directions. According to the definitions, there are some variations. The position of the edge is the zero crossing point because large gradient values are found around the edge points.

The edge parts correspond to the high frequency parts of the image intensity. Therefore, the edge extraction process is a high pass filtering in the frequency domain. (This is proved mathematically.)
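A minimal sketch of these gradient based edge measures, written in Python with NumPy (the helper names are illustrative, not from the handbook), convolves the image with the standard sobel kernels of the family shown in Fig. 10c, forms the gradient magnitude and direction described above, and applies the laplacian 1 kernel of Fig. 11a for second order edge extraction.

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-region 2D convolution of a grayscale image with a small kernel."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    flipped = kernel[::-1, ::-1]          # true convolution flips the kernel
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * flipped)
    return out

# Standard sobel kernels for the x and y derivatives.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

# Laplacian 1 kernel of Fig. 11a.
LAPLACIAN_1 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)

def edge_maps(image):
    """Return gradient magnitude, gradient direction and laplacian response."""
    ix = convolve2d(image, SOBEL_X)       # Ix(x,y)
    iy = convolve2d(image, SOBEL_Y)       # Iy(x,y)
    magnitude = np.hypot(ix, iy)          # sqrt(Ix^2 + Iy^2); |Ix| + |Iy| is the simpler version
    direction = np.arctan2(iy, ix)        # tan^-1(Iy/Ix), the edge direction
    laplacian = convolve2d(image, LAPLACIAN_1)
    return magnitude, direction, laplacian
```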
FIGURE 10. Operators for edge detection: (a) roberts operators; (b) prewitt operators; (c) sobel operators.

FIGURE 11. Laplacian operators: (a) laplacian 1; (b) laplacian 2; (c) laplacian 3.
(The kernels are, respectively, [0 1 0; 1 –4 1; 0 1 0], [1 1 1; 1 –8 1; 1 1 1] and [–1 2 –1; 2 –4 2; –1 2 –1].)

Noise Reduction

In image data, there are various noises, of which the noise called salt and pepper is typical. Such noises are expressed by pixels in random positions, not fixed. The intensities of these noises differ from those of the surrounding pixels, so the noise is conspicuous. The salt and pepper noise is caused by the flicker of the illumination and the variation in the performance of imaging sensor elements.

The operation called smoothing is a simple way to reduce such noises. This process is used to obtain the local averaged intensity value. Such a calculation can be performed by the convolution operation. Figure 12 shows a 3 × 3 smoothing operator, in which every coefficient is 1/9. Smoothing blurs the image. The smoothing operation is a kind of low pass filtering in the frequency domain.

(Figure: median filtering example. The values of a 3 × 3 neighborhood, [4 4 3; 2 10 3; 5 2 4], are sorted into 2 2 3 3 4 4 4 5 10 and the center value is replaced by the median, 4.)
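The smoothing operation just described, and the median filter illustrated by the sorting example above, can be sketched as follows in Python with NumPy (names are illustrative). The median filter is included for comparison because it suppresses salt and pepper noise with less blurring than averaging.

```python
import numpy as np

def mean_filter_3x3(image):
    """3 x 3 smoothing: replace each interior pixel by the local average
    (equivalent to convolution with the Fig. 12 kernel whose coefficients are all 1/9)."""
    out = image.astype(float).copy()
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            out[y, x] = image[y - 1:y + 2, x - 1:x + 2].mean()
    return out

def median_filter_3x3(image):
    """3 x 3 median filter, better suited than averaging for salt and pepper noise."""
    out = image.astype(float).copy()
    for y in range(1, image.shape[0] - 1):
        for x in range(1, image.shape[1] - 1):
            # For the neighborhood [4 4 3; 2 10 3; 5 2 4] the sorted values are
            # 2 2 3 3 4 4 4 5 10, so the pixel is replaced by the median, 4.
            out[y, x] = np.median(image[y - 1:y + 2, x - 1:x + 2])
    return out
```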
(10) σ²bc / σ²wc = σ²bc / (σ² − σ²bc)

Because σ² is fixed, when σ²bc reaches its maximum value, the separation metric is maximized. Therefore, in the discriminant analysis technique, σ²bc is calculated by changing the value of the threshold t and searching for the threshold that maximizes σ²bc.

By using discriminant analysis, the threshold is uniquely estimated for any monochrome image. Although only the case of binarization (two classes, black and white) is demonstrated, this technique can also be applied to estimate multiple thresholds.

FIGURE 15. Discriminant analysis technique.
(The figure shows an intensity histogram, frequency versus intensity, divided at threshold t into class 1 with ω1, m1, σ1 and class 2 with ω2, m2, σ2.)
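A compact sketch of this discriminant analysis threshold search, in Python with NumPy (function and variable names are illustrative), scans every candidate threshold t, computes the between class variance σ²bc from the class weights and means, and keeps the t that maximizes it.

```python
import numpy as np

def discriminant_threshold(image, levels=256):
    """Discriminant analysis thresholding: choose the t that maximizes the
    between class variance of the two classes split by t."""
    histogram, _ = np.histogram(image, bins=levels, range=(0, levels))
    probabilities = histogram / histogram.sum()
    intensities = np.arange(levels)

    best_t, best_sigma_bc = 0, -1.0
    for t in range(1, levels):
        w1 = probabilities[:t].sum()              # class 1 weight (omega 1)
        w2 = 1.0 - w1                             # class 2 weight (omega 2)
        if w1 == 0.0 or w2 == 0.0:
            continue
        m1 = np.sum(intensities[:t] * probabilities[:t]) / w1   # class 1 mean
        m2 = np.sum(intensities[t:] * probabilities[t:]) / w2   # class 2 mean
        sigma_bc = w1 * w2 * (m1 - m2) ** 2       # between class variance
        if sigma_bc > best_sigma_bc:
            best_t, best_sigma_bc = t, sigma_bc
    return best_t

# Usage: binary = (gray_image >= discriminant_threshold(gray_image)).astype(np.uint8)
```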
Dilation of an image X by a structuring element Y is defined as:

(11) X ⊕ Y = {z | z = x + y for x ∈ X, y ∈ Y}

FIGURE 16. Examples of dilation with different structuring elements: (a) 2 × 2 set; (b) four connected sets.
(Each panel shows the image X and the structuring element Y with its origin.)
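The set definition in Equation 11 translates directly into code. The Python sketch below (names are illustrative) dilates a binary image by shifting the foreground set by every offset of the structuring element and taking the union, which is exactly the set {z | z = x + y for x ∈ X, y ∈ Y}.

```python
import numpy as np

def dilate(binary_image, structuring_element, origin=(0, 0)):
    """Binary dilation per Equation 11: union of the image translated by every
    offset of the structuring element (offsets taken relative to its origin)."""
    out = np.zeros_like(binary_image)
    rows, cols = np.nonzero(structuring_element)
    for r, c in zip(rows, cols):
        dy, dx = r - origin[0], c - origin[1]
        # np.roll wraps at the borders; a production version would pad instead.
        shifted = np.roll(np.roll(binary_image, dy, axis=0), dx, axis=1)
        out |= shifted
    return out

# Example with a 2 x 2 structuring element, as in Fig. 16a.
element = np.ones((2, 2), dtype=np.uint8)
image = np.zeros((5, 5), dtype=np.uint8)
image[2, 2] = 1
dilated = dilate(image, element)   # the single pixel grows into a 2 x 2 block
```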
Equation 17 gives translation by tx and ty:

(17) \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

Equation 18 gives rotation as an angle θ from the X axis around the origin (Fig. 18d):

(18) \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}

Equation 19 gives shear, with shear factors p and q (shear angles θp and θq):

(19) \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & p & 0 \\ q & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}
The normalized cross correlation between the input image and the template is:

(25) R_{NCC}(x,y) = \sum_{i=0}^{t_w-1} \sum_{j=0}^{t_h-1} I(x+i,\,y+j)\, T(i,j) \Big/ \sqrt{\; \sum_{i=0}^{t_w-1} \sum_{j=0}^{t_h-1} I(x+i,\,y+j)^2 \; \sum_{i=0}^{t_w-1} \sum_{j=0}^{t_h-1} T(i,j)^2 \;}

(Figure: template matching, in which a template T(i,j) of size Tw × Th is scanned over the image I(x,y).)
For the normalized cross correlation, a higher value indicates a higher similarity. The similarity estimation is carried out over the whole input image, and the position at which the highest similarity is obtained is the location of the object.

The computational costs of the sum of absolute differences and the sum of squared differences are small and they can be estimated rapidly. However, if the brightness (or gain) of the input image is different from that of the template image (in other words, if the input and template images are of different illumination conditions), the similarity will be low. Therefore, an accurate match will not be achieved. The normalized cross correlation estimates the correlation between the input and template images, which is less likely to be affected by illumination changes, but comes with a higher computational cost.

So far as the template matching technique is concerned, the size and pose of the object in the template image and those of the corresponding patterns in the input image need to be the same. If not, the similarity becomes low and an accurate match will not be obtained. In this case, it is necessary to apply the geometric transformations, described in the previous section, to the template image. However, this step requires estimating the transformation parameters, so it is not efficient. When the size and pose of the object in the template image vary from those in the input image, it is still possible to do the matching with color information or other high dimensional features.
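The matching loop described above can be sketched in a few lines of Python with NumPy (names are illustrative): the template is slid over the image, the normalized cross correlation of Equation 25 is computed at every position, and the location of the highest similarity is returned.

```python
import numpy as np

def match_template_ncc(image, template):
    """Return the (x, y) position where the normalized cross correlation
    (Equation 25) between the image window and the template is highest."""
    img = image.astype(float)
    tmpl = template.astype(float)
    th, tw = tmpl.shape
    ih, iw = img.shape
    template_energy = np.sqrt(np.sum(tmpl ** 2))

    best_score, best_xy = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            window = img[y:y + th, x:x + tw]
            window_energy = np.sqrt(np.sum(window ** 2))
            if window_energy == 0.0:
                continue
            score = np.sum(window * tmpl) / (window_energy * template_energy)
            if score > best_score:
                best_score, best_xy = score, (x, y)
    return best_xy, best_score
```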
a-b Hough Transform

The template matching technique can be applied to any pattern of an object. If the object can be expressed by a mathematical model (such as a line or circle), it can be searched more efficiently in the input image with a hough transform.

The line detection with the hough transform is described first. A straight line in x-y space is modeled by parameters a and b:

(26) y = ax + b

(27) b = (−x) a + y

This formulation shows another straight line in a-b space. Here, the a-b space is called the parameter space.

As shown in Fig. 22a, when a line crosses two points (x1,y1) and (x2,y2) in x-y space, the parameters â and b̂ of this line can be expressed:

(28) (â, b̂) = ((y2 − y1)/(x2 − x1), (x2y1 − y2x1)/(x2 − x1))

On the other hand, points (x1,y1) and (x2,y2) in x-y space correspond to the following lines in the parameter space:

(29) b = (−x1) a + y1

(30) b = (−x2) a + y2

The cross point of these lines is equal to (â, b̂) (Fig. 22b).

FIGURE 22. a-b hough transform: (a) x-y image space; (b) a-b parameter space.

Therefore, points on a straight line in an image (x-y space) correspond to lines that cross a point in the parameter space. By estimating the crossing point of straight lines in the parameter space, we can obtain the parameters (a and b) of a straight line in an image (x-y space).

However, there are many cross points of the straight lines in the parameter space, so it is difficult to estimate adequate points mathematically. To overcome this problem, the cross points are obtained by a voting process in the hough transform. In this technique, the
parameter space is divided into small rectangular regions along each axis. Each small rectangular region is called a cell. The cell works as a counter: if a line crosses a cell, the counter increases by one. Finally, the coordinates of the cell that has the maximum voting value are the estimated parameters of the line in the image. The voting process in the parameter space can be considered as a parameter estimation process for a line that crosses (or is supported by) many points.

The least squares technique can also be used to estimate line parameters. Basically, that technique estimates only one line. The hough transform can estimate multiple lines simultaneously by picking up cells whose voting number is beyond a threshold.

ρ-θ Hough Transform

Generally, the range of line parameters is from −∞ to +∞, which is the same as the range of the parameter space. The range of the image (in x-y space) is limited, but the range of parameter a (the slant of the line) is from −∞ to +∞. It is difficult to prepare cells in such a range for computation. Therefore, the model of a straight line can be rewritten as:

(31) ρ = x cos θ + y sin θ

For an image of width w and height h, the parameters are bounded:

(32) −√(w² + h²) ≤ ρ ≤ √(w² + h²)

(33) 0 ≤ θ ≤ π

In the ρ-θ parameterization, each point in the image corresponds to a sinusoidal trajectory in the parameter space. By using the voting process along this trajectory, the cross points can be estimated (Fig. 23).

(Figure 23: the ρ-θ parameter space, with the voting trajectories and the estimated parameters ρ̂ and θ̂.)

The ρ-θ hough transform has a limited parameter space, but the computational cost is still high because of the calculation of sine waves. To avoid such a problem, another technique is proposed: the γ-ω hough transform uses piecewise line segments in the parameter space to perform the voting process rapidly.27 Moreover, because the coordinates of the image plane are discrete values, line parameters can be estimated more accurately by considering the cell redivision in the parameter space.28

If the parameter space is expanded to three dimensions, circles can be detected in an image. Moreover, if a high dimensional parameter space is considered, it is possible to detect various patterns. However, the higher the dimension of the parameter space, the greater the computational cost. In this case, huge numbers of cells are needed and it is difficult to perform any pattern detection. To overcome this problem, a generalized hough transform, which detects the position and pose of a pattern by the voting process, is an option.

(36) α = tan⁻¹ (x1/y1)
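The voting process of the ρ-θ hough transform can be sketched as follows in Python with NumPy (the names, cell counts and threshold are illustrative choices). Each edge pixel votes along its sinusoidal trajectory ρ = x cos θ + y sin θ, and the cells with the most votes give the line parameters.

```python
import numpy as np

def hough_lines(edge_image, num_theta=180, num_rho=400, vote_threshold=50):
    """rho-theta hough transform: accumulate votes for each edge pixel along its
    sinusoidal trajectory and return (rho, theta) pairs whose cells exceed a threshold."""
    h, w = edge_image.shape
    rho_max = np.hypot(w, h)                                    # bound from Equation 32
    thetas = np.linspace(0.0, np.pi, num_theta, endpoint=False)  # range from Equation 33
    rhos = np.linspace(-rho_max, rho_max, num_rho)
    accumulator = np.zeros((num_rho, num_theta), dtype=np.int64)

    ys, xs = np.nonzero(edge_image)                              # edge pixels vote
    for x, y in zip(xs, ys):
        rho_values = x * np.cos(thetas) + y * np.sin(thetas)     # Equation 31
        rho_indices = np.round(
            (rho_values + rho_max) / (2.0 * rho_max) * (num_rho - 1)).astype(int)
        accumulator[rho_indices, np.arange(num_theta)] += 1

    peaks = np.argwhere(accumulator >= vote_threshold)
    return [(rhos[i], thetas[j]) for i, j in peaks], accumulator
```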
(Figure: high frequency emphasis; the blocks include the live image, a high pass filter, a low pass filter, a fraction block and the high frequency emphasized image.)
1. High frequency emphasis algorithms are applied to the left and right images (a sketch follows this list).
2. To identify the features of interest, the images are dynamically threshold filtered.
3. The original left and right images are overlaid with the depth offset desired for identified features.
4. The processed images are displayed stereoscopically on the screen. The eyewear of the inspector or operator can help highlight features of interest.
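Steps 1 and 2 can be illustrated with a brief Python sketch (the kernel size, emphasis gain, offset and helper names are assumptions for illustration, not the implementation of the cited study). High frequency emphasis adds a scaled high pass component back to the image, and dynamic thresholding compares each pixel with its local mean.

```python
import numpy as np

def box_blur(image, size=15):
    """Local mean over a size x size window (slow reference implementation)."""
    padded = np.pad(image.astype(float), size // 2, mode="edge")
    out = np.zeros(image.shape, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = padded[y:y + size, x:x + size].mean()
    return out

def high_frequency_emphasis(image, gain=1.5):
    """Add a scaled high pass component (image minus its low pass version) back to the image."""
    low_pass = box_blur(image)
    high_pass = image - low_pass
    return low_pass + gain * high_pass

def dynamic_threshold(image, offset=10.0):
    """Mark pixels that exceed their local mean by more than an offset."""
    return (image.astype(float) > box_blur(image) + offset).astype(np.uint8)
```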
The second type of result is binary, that is, crack or noncrack. Binary results are useful for the inspection of a specific part, where a binary accept/reject decision may follow. As described in one study,34 a crack detection algorithm shown in Fig. 27 was developed to identify surface cracks on aircraft skin. Cracks frequently happen near rivets; therefore, the first step is to detect rivets by detecting the circular arcs in the image. Once the edge maps of the rivets are detected, the region of interest can be determined with the centroid of the rivet.

Once the region of interest is identified, multiscale edge detection is applied to the region of interest to generate a list of edges at different scales. This technique helps discriminate cracks from noncracks according to the size of a typical crack in comparison to other objects, such as scratches and repair plates, appearing on the surface. A coarse-to-fine edge linking process traced an edge from the coarse resolution (high scale) to a fine resolution (low scale). The propagation depth of all edges present at scale one was found; here, the propagation depth means the number of scales in which the edge appears. A feature vector for each edge at scale one was generated so that the edges of cracks could be discriminated from those of noncracks. The feature vector includes the following: the average wavelet magnitude of the active pixels that belong to the edge; the propagation depth number; the average wavelet magnitudes of any linked edges in scale two and scale four; the signs of sum WX and sum WY, where WX and WY are the wavelet coefficients in the x and y directions of an active pixel at scale one; and the number of active pixels.34

FIGURE 27. Surface crack detection algorithm.
(Blocks: image; rivet detection and region-of-interest identification; multiscale edge detection; edge linking; feature vector calculation; classification.)
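The notion of propagation depth can be illustrated with a simplified Python sketch. Instead of the wavelet transform of the cited study,34 the sketch below detects edges at several smoothing scales and counts, for each pixel, the number of scales at which it remains an edge; the names, scales and threshold are illustrative.

```python
import numpy as np

def smooth(image, passes):
    """Apply a 3 x 3 mean filter repeatedly to approximate coarser scales."""
    out = image.astype(float)
    for _ in range(passes):
        padded = np.pad(out, 1, mode="edge")
        out = sum(padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return out

def gradient_magnitude(image):
    gy, gx = np.gradient(image)
    return np.hypot(gx, gy)

def propagation_depth(image, scales=(0, 1, 2, 4), threshold=10.0):
    """Count, for each pixel, the number of scales at which it is an edge pixel."""
    depth = np.zeros(image.shape, dtype=int)
    for passes in scales:
        edges = gradient_magnitude(smooth(image, passes)) > threshold
        depth += edges.astype(int)
    return depth      # edges of real cracks tend to persist across several scales
```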
A neural network, as shown in Fig. 28, was trained to classify the inputs (feature vectors of edges in the region of interest) into cracks and noncracks. The feature vectors used for the training may represent the cracks that need immediate repair. In this case, a classification result indicating a crack calls for further investigation of the corresponding region of interest for repair. An accuracy rate of 71.5 percent and a false alarm rate of 27 percent were reported for the neural network based classification.

FIGURE 28. Neural network used for crack classification.
(The network maps input feature vectors through a hidden layer to the output.)
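As an illustration of this classification stage (not the cited study's network34), the Python sketch below defines a small feed forward network that maps an edge feature vector, such as the seven features listed above, through one hidden layer to a crack/noncrack score. The layer sizes and weights are placeholders; in practice the weights would be learned from labeled crack and noncrack edges.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder weights for a 7-input, 8-hidden-unit, 1-output network.
W_hidden = rng.normal(scale=0.1, size=(7, 8))
b_hidden = np.zeros(8)
W_out = rng.normal(scale=0.1, size=(8, 1))
b_out = np.zeros(1)

def classify_edge(feature_vector):
    """Forward pass: feature vector -> hidden layer (tanh) -> sigmoid crack score."""
    hidden = np.tanh(feature_vector @ W_hidden + b_hidden)
    score = 1.0 / (1.0 + np.exp(-(hidden @ W_out + b_out)))
    return float(score[0])        # a score above 0.5 is treated as "crack" in this sketch

# Illustrative feature vector: [average wavelet magnitude, propagation depth,
# linked-edge magnitude at scale 2, linked-edge magnitude at scale 4,
# sign of sum WX, sign of sum WY, number of active pixels]
example = np.array([0.8, 3.0, 0.5, 0.4, 1.0, -1.0, 42.0])
is_crack = classify_edge(example) > 0.5
```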
The third type of result is more informative, allowing quantitative information to be extracted.

(Figure: wavelet subbands, where HH = high high, HL = high low, LH = low high and LL = low low; and the general classification flow: image, image preprocessing (enhancement, denoising, segmentation, others), feature extraction (spatial domain, transform domain), classification, postprocessing and result, with a parallel flow that applies a wavelet transform to the image before feature extraction, classification and postprocessing.)
Such techniques have been described in the previous section of this chapter. Sometimes, postprocessing is needed to further characterize the classified results, as described in the example above. The measurement results can also be compared with calibrated samples for quantitative analysis. Such comparison can also be done in the feature space.

Conclusion

This chapter provides a general description of machine vision techniques for nondestructive testing. Both the system architecture and the algorithm implementation for machine vision are described. A good understanding of the application's requirements is essential to the success of a machine vision system. The technical advances in machine vision make it applicable to varied nondestructive test applications. The capability of a machine vision system can be further expanded and enhanced by incorporating multiple image modalities or other nondestructive test techniques, which may provide complementary information.
References
1. Batchelor, B.G. and P.F. Whelan. Intelligent Vision Systems for Industry. Bruce G. Batchelor, Cardiff, United Kingdom; Paul F. Whelan, Dublin, Republic of Ireland (2002).
2. Zuech, N. "Smart Cameras vs. PC-Based Machine Vision Systems." Machine Vision Online. Ann Arbor, MI: Automated Imaging Association (October 2002).
3. Zuech, N. "Optics in Machine Vision Applications." Machine Vision Online. Ann Arbor, MI: Automated Imaging Association (August 2005).
4. Fales, G. "Ten Lens Specifications You Must Know for Machine-Vision Optics." Test and Measurement World. Web page. Waltham, MA: Reed Elsevier (27 October 2003).
5. "What is Structured Light?" Web page. Salem, NH: StockerYale (2009).
6. Casasent, D.[P.], Y.F. Cheu and D. Clark. Chapter 4: Part 4, "Machine Vision Technology." Nondestructive Testing Handbook, second edition: Vol. 8, Visual and Optical Testing. Columbus, OH: American Society for Nondestructive Testing (1993): p 92-107.
7. Forsyth, D.A. and J. Ponce. Computer Vision: A Modern Approach. Upper Saddle River, NJ: Prentice Hall (2002).
8. Martin, D. Practical Guide to Machine Vision Lighting. Web pages. Austin, TX: National Instruments Corporation (November 2008).
9. Hainaut, O.R. "Basic Image Processing." Web pages. Santiago, Chile: European Organisation for Astronomical Research in the Southern Hemisphere, European Southern Observatory (December 1996).
10. Users Manual MTD/PS-0218, Kodak Image Sensors. Revision 2.0. Rochester, NY: Eastman Kodak (July 2008).
11. Peterson, C. "How It Works: The Charged-Coupled Device, or CCD." Journal of Young Investigators. Vol. 3, No. 1. Durham, NC: Journal of Young Investigators, Incorporated (March 2001).
12. Litwiller, D. "CCD vs. CMOS: Facts and Fiction." Photonics Spectra. Vol. 35, No. 1. Pittsfield, MA: Laurin Publishing (January 2001): p 154-158.
13. Charge Injection Device Research at RIT. Web site. Rochester, NY: Rochester Institute of Technology, Center for Imaging Science (2009).
14. Trucco, E. and A. Verri. Introductory Techniques for 3-D Computer Vision. Upper Saddle River, NJ: Prentice Hall (1998).
15. Bouguet, J.-Y. Camera Calibration Toolbox for Matlab. Web pages. Pasadena, CA: California Institute of Technology (2009).
16. Wang, J., F. Shi, J. Zhang and Y. Liu. "A New Calibration Model of Camera Lens Distortion." Pattern Recognition. Vol. 41, No. 2. Amsterdam, Netherlands: Elsevier, for Pattern Recognition Society (February 2008): p 607-615.
17. IEEE 1394, High-Performance Serial Bus. New York, NY: IEEE (2008).
18. Wilson, A. "Camera Connections." Vision Systems Design. Tulsa, OK: PennWell Corporation (April 2008).
19. Specifications of the Camera Link Interface Standard for Digital Cameras and Frame Grabbers. Ann Arbor, MI: Automated Imaging Association (Annex D, 2007).
20. "Universal Serial Bus." Web site. Beaverton, OR: USB Implementers Forum (2008).
21. "GigE Vision®." Web page. Machine Vision Online. Ann Arbor, MI: Automated Imaging Association (2009).
22. 1394 Technology. Web page. Southlake, TX: 1394 Trade Association (2008).
23. Sgro, J. "USB Advantages Offset Other Interfaces." Vision Systems Design. Tulsa, OK: PennWell Corporation (September 2003).
24. Jain, R., R. Kasturi and B.G. Schunck. Machine Vision. New York, NY: McGraw-Hill (1995).
25. Gonzalez, R.C. and R.E. Woods. Digital Image Processing. Upper Saddle River, NJ: Prentice Hall (2002).
26. Otsu, N. "A Threshold Selection Method from Gray-Level Histograms." IEEE Transactions on Systems, Man, and Cybernetics. Vol. SMC-9, No. 1. New York, NY: IEEE (January 1979): p 62-66.
27. Wada, T., T. Fujii and T. Matsuyama. "γ-ω Hough Transform — Linearizing Voting Curves in an Unbiased ρ-θ Parameter Space" [in Japanese]. IEICE D-II. Vol. J75, No. 1. Minato-ku, Tokyo, Japan: Institute of Electronics, Information and Communication Engineers (1992): p 21-30.
28. Wada, T., M. Seki and T. Matsuyama. "High Precision γ-ω Hough Transform Algorithm to Detect Arbitrary Digital Lines" [in Japanese]. IEICE D-II. Vol. J77, No. 3. Minato-ku, Tokyo, Japan: Institute of Electronics, Information and Communication Engineers (1994): p 529-530.
29. Malamas, E.N., E.G.M. Petrakis, M.E. Zervakis, L. Petit and J.D. Legat. "A Survey on Industrial Vision Systems, Applications and Tools." Image and Vision Computing. Vol. 21, No. 2. Amsterdam, Netherlands: Elsevier (February 2003): p 171-188.
30. Komorowski, J.P. and D.S. Forsyth. "The Role of Enhanced Visual Inspection in the New Strategy for Corrosion Management." Aircraft Engineering and Aerospace Technology. Vol. 72, No. 1. Bingley, United Kingdom: Emerald Group Publishing (2000): p 5-13.
31. Komorowski, J.P., N.C. Bellinger, R.W. Gould, A. Marincak and R. Reynolds. "Quantification of Corrosion in Aircraft Structures with Double Pass Retroreflection." Canadian Aeronautics and Space Journal. Vol. 42, No. 2. Kanata, Ontario, Canada: Canadian Aeronautics and Space Institute (1996): p 76-82.
32. Komorowski, J.P., S. Krishnakumar, R.W. Gould, N.C. Bellinger, F. Karpala and O.L. Hageniers. "Double Pass Retroreflection for Corrosion Detection in Aircraft Structures." Materials Evaluation. Vol. 54, No. 1. Columbus, OH: American Society for Nondestructive Testing (January 1996): p 80-86.
33. Siegel, M. and P. Gunatilake. "Remote Inspection Technologies for Aircraft Skin Inspection." ETVSIM'97, IEEE Workshop on Emergent Technologies and Virtual Systems for Instrumentation and Measurement [Niagara Falls, Ontario, Canada, May 1997]. New York, NY: IEEE (1997): p 1-10.
34. Gunatilake, P., M.W. Siegel, A.G. Jordan and G.W. Podnar. "Image Understanding Algorithms for Remote Visual Inspection of Aircraft Surface." Machine Vision Applications in Industrial Inspection V [Orlando, FL, April 1997]. Proceedings Vol. 3029. Bellingham, WA: International Society for Optical Engineering (Society of Photo-Optical Instrumentation Engineers) (1997): p 2-13.