Fundamentals of Machine Vision
– Key Points:
• Automated/Non-Contact
• Acquisition
• Analysis
• Data
Introduction and Overview
• What is Machine Vision
– Image Acquisition
• Sensors
• Optics
• Lighting
– Analysis
• Components
• Software
• Algorithms
– Integration and Applications
• Results
• Communications
• Automation
Introduction and Overview
• The Machine Vision Market
– Choices
• Well over 400 manufacturers and suppliers
• Diverse product offerings
– Confusion
• Product/component differentiation is sometimes unclear
• End-users (the buyers) often don’t understand what they are
getting
– What’s important
• Components and techniques need to be better understood at
the end-user level
• Advanced technology skills are necessary for competent
specification and integration
Introduction and Overview
• The Machine Vision Market
– General Purpose Machine Vision Systems
[Diagram] Lens → Camera (imager, electronics, signal conversion, power/control) → Frame grabber or other signal conversion → Computer (digital image)
– PC-based system
– Single or multiple cameras interfaced to a computer running a standard
operating system (Windows, Linux)
– Diverse imaging devices available
• analog (RS170), and digital (GigE Vision, FireWire, Camera Link, USB) interfaces
Introduction and Overview
[Chart] Machine vision markets by industry: Wood, Electronics/Electrical, Automotive, Semiconductor, Pharma/Medical Device
IMAGE ACQUISITION
– Sensors & Imaging
– Optics
– Lighting
Image Acquisition
• Nothing happens in a machine vision application without the
successful capture of a very high quality image
– Image quality: correct resolution for the target application with
best possible feature contrast
• Resolution – determined by sensor size and quality of optics
• Feature contrast – determined by correct lighting technique
and quality of optics
[Figure] Grid of grey-level pixel values – a digital image is simply an array of intensity numbers
The defect diameter should span about 2 pixels, so a pixel must cover 0.05”. Over 36”,
therefore, there must be 720 pixels (36 / 0.05). We must select a camera with at least
that resolution in the minor (vertical) axis – probably one with a 1024x768 sensor.
At minimum, a differentiable object must cover 5 pixels. Due to the low contrast, we
decide to double that coverage to 10 pixels. The target pixel size will be 0.1” (1” / 10),
and the field of view must be no larger than 48” (480 × 0.1).
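The two worked examples above can be sketched as a small calculation, assuming the pixels-per-feature rule stated in the text (the function name is illustrative):

```python
def required_pixels(fov, feature_size, pixels_per_feature):
    """Minimum sensor pixels along one axis so that a feature of the
    given real-world size spans the desired number of pixels."""
    pixel_size = feature_size / pixels_per_feature  # real-world size of one pixel
    return fov / pixel_size

# Example 1: a 0.1" defect must span 2 pixels over a 36" field of view
print(required_pixels(36.0, 0.1, 2))    # 720.0
# Example 2: a 1" object at 10 pixels over a 48" field of view
print(required_pixels(48.0, 1.0, 10))   # 480.0
```

The same relation can be run in reverse: given a fixed sensor, divide its pixel count by the required pixels-per-feature to get the largest permissible field of view.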
Image Acquisition
• Optics
– Application of optical components
• Machine vision requires fundamental understanding of the
physics of lens design and performance
• Goal: specify the correct lens
– Create a desired field of view (FOV)
– Achieve a specific or acceptable working distance (WD)
– Project the image on a selected sensor based on sensor
size – primary magnification (PMAG)
– Create the highest level of contrast between features of
interest and the surrounding background; with the
greatest possible imaging accuracy
Image Acquisition
• Optics
– Considerations for lens selection
• Magnification, focal length, depth of focus (DOF), f-number, resolution, diffraction limits,
aberrations (roll-off, chromatic, spherical, field curvature, distortion), parallax, image size, etc.
– Some geometric aberration may be corrected in calibration
• The physics of optical design is well known and can be mathematically modeled and/or
empirically tested
– Specification or control of most of the lens criteria is out of our hands
• Optics
– Considerations for lens selection
• Practical specifications for machine vision:
PMAG (as dictated by focal length) and
WD to achieve a desired FOV
– Use a simple lens calculator and/or
manufacturer lens specifications
– Simple – state the required FOV, the
sensor size based on physical
selection of camera and resolution,
and a desired working distance –
calculate the lens focal length
» Note – specified working
distance may not be available
for a given lens – review
specifications
– Test your results
[Images: PPT Vision; pptvision.com]
• Always use a high-resolution machine
vision lens NOT a security lens
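A minimal lens-calculator sketch, using the common thin-lens approximation f ≈ WD·m/(1+m) with primary magnification m = sensor size / FOV; the example numbers are hypothetical, and a real selection should always be checked against the manufacturer's lens specifications:

```python
def focal_length(sensor_size, fov, working_distance):
    """Approximate lens focal length needed to image a field of view
    of width `fov` onto a sensor of width `sensor_size` from a given
    working distance.  All arguments in the same length unit."""
    pmag = sensor_size / fov                      # primary magnification
    return working_distance * pmag / (1 + pmag)   # thin-lens approximation

# e.g. a 6.4 mm wide sensor, 200 mm FOV, 500 mm working distance
print(round(focal_length(6.4, 200.0, 500.0), 1))  # ~15.5 (mm)
```

In practice one would pick the nearest stock focal length (here, probably 16 mm) and adjust the working distance to recover the exact field of view.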
Image Acquisition
• Optics
– Why use machine vision lenses only
• Light gathering capability
and resolution
• Lighting
– Illumination for machine vision must
be designed for imaging, not human
viewing
• Selection must be made relative
to light structure, position,
color, diffusion
• We need to know how light
works so our light selections are
not “hit and miss” guesswork
• Light is both absorbed and
reflected to some degree from
all surfaces
[Diagram] Light source striking a surface: diffuse reflection, specular reflection, refraction, absorption
• Lighting Techniques
– Direct bright-field
illumination
• Sources: high-angle
ring lights (shown),
spot-lights, bar-lights
(shown); LEDs or
Fiber-optic guides
• Uses: general
illumination of
relatively high-
contrast objects; light
reflection to camera is
mostly specular
• Lighting Techniques
– Diffuse bright-field
illumination
• Sources: high-angle
diffuse ring lights (shown),
diffuse bar-lights; LEDs or
fluorescent
• Uses: general illumination
of relatively high-contrast
objects; light reflection to
camera is mostly diffuse
• Lighting Techniques
– Direct dark-field illumination
• Sources: low-angle ring
lights (shown), spot-lights,
bar-lights; LEDs or Fiber-
optic guides
• Uses: illumination of
geometric surface features;
light reflection to camera is
mostly specular
• “Dark field” is misleading –
the “field” or background
may be light relative to
surface objects
• Lighting Techniques
– Diffuse dark-field illumination
• Sources: diffuse, low-angle
ring lights (shown), spot-
lights, bar-lights; LEDs or
fluorescent
• Uses: non-specular
illumination of surfaces,
reducing glare; may hide
unwanted surface features
• Lighting Techniques
– Diffuse backlight
• Sources: highly
diffused LED or
fluorescent area
lighting
• Uses: provide an
accurate silhouette of
a part
• Lighting Techniques
– Structured light
• Sources: Focused LED
linear array, focused or
patterned lasers
• Uses: highlight
geometric shapes, create
contrast based upon
shape, provide 3D
information in 2D images
• Lighting Techniques
– On-axis (coaxial) illumination
• Sources: directed, diffused
LED or fiber optic area
• Uses: produce more even
illumination on specular
surfaces, may reduce low-
contrast surface features,
may highlight high-contrast
geometric surface features
depending on reflective angle
• Lighting Techniques
• Collimated illumination
• Sources: specialty illuminator
(LED, Fiber) utilizing optics to
guide the light
• Uses: highly accurate
backlighting, reducing stray light,
highlighting surface features as a
front light
• Lighting Techniques
– Constant Diffuse Illumination
(CDI – “cloudy day
illumination”)
• Sources: specialty
integrated lighting
• Uses: provides completely
non-specular, non-
reflecting continuous
lighting from all reflective
angles; good for reflective
or specular surfaces
IMAGE ANALYSIS
– Machine Vision Software
– General Machine Vision Algorithms
Image Analysis
• Machine Vision Software
– Machine vision software drives component capability, reliability,
and usability
– Main machine vision component differentiation is in the
software implementation
• Available image processing and analysis tools
• Ability to manipulate imaging and system hardware
• Method for inspection task configuration/programming
• Interface to hardware, communications and I/O
• Operator interface and display capability
– Often, system software complexity increases with system
capability
– AND greater ease of use usually is at the expense of some
algorithmic and/or configuration capabilities
Image Analysis
• Machine Vision Software
– A dizzying variety of software packages and libraries
Image Analysis
• Machine Vision Software
– What’s Important
• Sufficient algorithm depth and capability to perform the
required inspection tasks
– Consider:
» Speed of processing
» Level of tool parameterization
» Ease with which tools can be combined
• Adequate flexibility in the process configuration to service
the automation requirements
• Enough I/O and communications capability to interface with
existing automation as necessary
• Appropriate software/computer interfacing to implement an
operator interface as needed for the application
Image Analysis
• General Machine Vision Algorithms
– Image transformation/geometric manipulation
– Content Statistics
– Image enhancement/preprocessing
– Connectivity
– Edge Detection
– Correlation
– Geometric Search
– OCR/OCV
– Color processing
[Figure] Search process: locating a target image within a larger scene
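As a toy illustration of one algorithm from the list above, a 1-D edge detector can be sketched as a threshold on the intensity gradient (a real tool operates on 2-D images, with filtering and sub-pixel interpolation; names and values here are illustrative):

```python
def edge_positions(profile, threshold):
    """Return indices in a 1-D intensity profile where the absolute
    grey-level step between neighbouring pixels meets the threshold."""
    edges = []
    for i in range(1, len(profile)):
        if abs(profile[i] - profile[i - 1]) >= threshold:
            edges.append(i)
    return edges

# One dark-to-bright step between pixels 3 and 4
print(edge_positions([40, 42, 41, 45, 180, 182, 181, 179], 50))  # [4]
```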
Integration
[Diagram] The inspection cycle:
– Initiate Inspection – external event
– Acquire Image – hardware execution: camera and (if applicable) strobe trigger
– Analysis – software execution of the inspection program; multiple images/lights, recipe changeovers, part tracking
– Results – determine part status and communicate results: part tracking, multiple results, other data
– Process Result – external event
Integration and Applications
• Integration
– Utilize appropriate handshaking where applicable, particularly with
interface to a PLC or external control system (not timers)
– When part is in motion, interface from vision device to the automation
must be discrete, digital I/O to avoid variable latencies
[Timing diagram] Trigger (to vision system) → Acquisition → Processing → Result (from vision system)
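The handshaking rule above can be sketched as a small state machine in which every transition is driven by a discrete signal rather than a timer, so variable latency cannot cause a missed or mismatched result (state and signal names are hypothetical):

```python
def next_state(state, signal):
    """Advance a minimal trigger/acquire/process/result handshake.
    Out-of-order signals are ignored rather than acted on."""
    transitions = {
        ("IDLE", "trigger"): "ACQUIRING",
        ("ACQUIRING", "frame_done"): "PROCESSING",
        ("PROCESSING", "inspect_done"): "RESULT_READY",
        ("RESULT_READY", "result_ack"): "IDLE",  # PLC acknowledge closes the loop
    }
    return transitions.get((state, signal), state)

s = "IDLE"
for sig in ["trigger", "frame_done", "inspect_done", "result_ack"]:
    s = next_state(s, sig)
print(s)  # IDLE
```

The key design point is the final `result_ack`: the vision system will not honour a new trigger until the control system confirms it has consumed the previous result.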
• Integration
– Configuring a basic
machine vision inspection
• Processing steps
– Locate the part and extract
a nominal origin
– Adjust regions relative to
the origin
– Extract appropriate data
within regions of interest
– Make decisions based
upon application
parameters for that feature
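The locate-then-adjust steps above can be sketched as a simple region-of-interest translation by the offset between the nominal origin and the origin found in the live image (a real tool would also correct for rotation):

```python
def shift_roi(roi, nominal_origin, found_origin):
    """Translate an inspection region (x, y, w, h) by the offset
    between the nominal part origin and the located origin."""
    dx = found_origin[0] - nominal_origin[0]
    dy = found_origin[1] - nominal_origin[1]
    x, y, w, h = roi
    return (x + dx, y + dy, w, h)

# Part found 5 px right and 3 px down of nominal: the region follows it
print(shift_roi((100, 50, 32, 32), (0, 0), (5, 3)))  # (105, 53, 32, 32)
```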
Integration and Applications
• Results and Communications
– Installation/testing/startup
• Always implement control handshaking first
– Image acquisition, data exchange, and reject timing/coordination
make up the largest part of on-site systems
integration with an automation device
– Determine final camera/lighting mounting based upon
live production imaging
» Lock/pin all mounts once image is correct for all parts
• Test inspection algorithms on all samples with representative
failure modes
Integration and Applications
• Results and Communications
– Discrete digital I/O may be required due to timing/signal latency
– Other communication protocols can be considered where
appropriate
– Incorporate proper handshaking when implementing signals with
external control devices
– Non-critical data communications (not requiring deterministic
timing), such as part recipes, can be implemented with serial
(RS232/422), TCP/IP, “Ethernet/IP”, or specialty (Modbus,
DataHighway) interfaces.
– The system design must take into account inherent latencies in
these protocols
– About “network” communications
Integration and Applications
• Basic Application Concepts
– Configure the desired inspection task utilizing appropriate tools
provided by the selected components
– Typical general-purpose factory floor inspections
• Flaw detection
• Assembly Verification/Recognition
• Gauging/Metrology
• Location/Guidance
• OCR/OCV
– Note that virtually all applications will require the
implementation of multiple “tools” to successfully extract the
image data
Integration and Applications
• Basic Application Concepts
– Defect/Flaw Detection
• A flaw is an object that is different from the normal immediate
background
• Imaging Issues
– Must have sufficient contrast and geometric features to be
differentiable from the background and other “good” objects
– Typically must be a minimum of 3x3 pixels in size and possibly
up to 50x50 pixels if contrast is low and defect classification is
required
– Reliable object classification may not be possible depending
upon geometric shape of the flaws
• Machine vision tools
– Binary pixel counting, morphology, edge extraction and
counting, image subtraction/golden template, blob analysis
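Binary pixel counting, the simplest of the tools listed, can be sketched as follows (the threshold, limit, and image values are illustrative):

```python
def flaw_pixel_count(image, threshold):
    """Count pixels darker than the threshold; a region is flagged
    as a flaw candidate when the count exceeds an application limit.
    `image` is a list of rows of grey levels."""
    return sum(1 for row in image for p in row if p < threshold)

img = [
    [200, 201, 199, 200],
    [200,  40,  38, 200],   # two dark flaw pixels
    [200, 201, 200, 199],
]
print(flaw_pixel_count(img, 100))  # 2
```

Note the connection to the imaging requirement above: the count is only meaningful if the flaw has enough contrast to fall cleanly on one side of the threshold.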
Integration and Applications
• Basic Application Concepts
– Assembly Verification/Object Recognition
• Feature presence/absence, identification, differentiation of similar
features
• Imaging Issues
– Must create adequate contrast between feature and
background
– Accommodate part and process variations – locate and correct
for part positional changes
– May require flexible lighting/imaging for varying features
– For feature presence/absence, feature should cover approx.
1% of the field of view (med. resolution camera), more for
identification or differentiation
• Machine vision tools
– Edge detection and measurement tools, blob analysis,
normalized correlation and pattern matching
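Normalized correlation, mentioned in the tools list, can be sketched in its 1-D form; the score is invariant to linear lighting changes, which is why it is preferred over raw template differencing for presence/absence checks:

```python
def normalized_correlation(a, b):
    """Normalized correlation between a template and a candidate
    patch (both flattened to equal-length lists); 1.0 is a perfect
    match regardless of brightness offset or gain."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

template = [10, 20, 30, 40]
brighter = [110, 120, 130, 140]   # same pattern under brighter lighting
print(round(normalized_correlation(template, brighter), 6))  # 1.0
```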
Integration and Applications
• Basic Application Concepts
– Gauging/Metrology
• Note: There are physical differences between gauging
features in an image produced by a camera, and the use of a
gage that contacts a part. These differences usually cannot
be reconciled
• Gauging concepts
– Resolution, repeatability, accuracy
– Sub-pixel measurement
– Measurement tolerances
– Resolution must be approximately 1/10 of required
accuracy in order to achieve gauge
reliability/repeatability
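Sub-pixel measurement can be illustrated with linear interpolation of a threshold crossing in a 1-D intensity profile; real gauging tools fit edge models over many pixels, but the principle of locating an edge between pixel centres is the same (values are illustrative):

```python
def subpixel_edge(profile, threshold):
    """Locate the first rising threshold crossing in a 1-D intensity
    profile with sub-pixel precision via linear interpolation.
    Returns None if the profile never crosses the threshold."""
    for i in range(1, len(profile)):
        lo, hi = profile[i - 1], profile[i]
        if lo < threshold <= hi:
            return (i - 1) + (threshold - lo) / (hi - lo)
    return None

# The crossing of 100 lies 3/4 of the way between pixels 2 and 3
print(subpixel_edge([20, 25, 40, 120, 200], 100))  # 2.75
```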
Integration and Applications
• Basic Application Concepts
– Gauging/Metrology
• Imaging Issues
– Lighting to get a repeatable edge
» Backlighting, collimated light
– Telecentric lenses
– Calibration
» Correction for image perspective/plane
» Calibration error stack-up
• Machine vision tools
– Edge detection and measurement, blob analysis
Integration and Applications
• Basic Application Concepts
– Location/Guidance
• Identification and location of an object in 2D or 3D space
• May be in a confusing field of view
• Imaging Issues
– Measurement tolerances and accuracies as described for
gauging/metrology applications
– Sub-pixel resolutions may be better than discrete gauging
results
– For guidance applications, the stack-up error in robot
motion may be significant
• Machine vision tools
– Blob analysis, normalized correlation, pattern matching
Integration and Applications
• Basic Application Concepts
– OCR/OCV
• Optical Character Recognition/Verification – reading or
verifying printed characters
• Can be fooled by print variations
• Verification is difficult depending upon the application
• Imaging Issues
– Consistent presentation of the character string
– May require extensive pre-processing
• Machine vision tools
– OCR/OCV, golden template match
Integration and Applications
• Basic Application Concepts
– Camera Calibration
• Mapping real-world coordinates to the camera (observed)
pixel coordinates
• Correction of planar and optical distortion
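The simplest form of this mapping is a single scale factor taken from a reference target of known length; this sketch deliberately ignores the perspective and lens-distortion corrections mentioned above, which a real calibration must also model:

```python
def calibrate_scale(world_distance, pixel_distance):
    """Single-scale camera calibration: world units per pixel,
    derived from a reference feature of known physical length."""
    return world_distance / pixel_distance

scale = calibrate_scale(25.4, 508.0)   # a 1" (25.4 mm) target spans 508 pixels
print(round(scale, 4))        # mm per pixel
print(round(320 * scale, 2))  # a 320-pixel measurement converted to mm
```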