Training
Course Description:
VisionPro Standard gives new or potential VisionPro users a 2-day overview of the hardware and
software used to prototype and deploy basic VisionPro applications. The class focuses on grayscale and color
tool usage while building a single application that is then used to simulate deployment via the Application
Wizard. The class also features basic input and output using digital I/O and TCP/IP connections.
Length: 2 days
Locations: Natick, MA; Onsite
Price: $395 (Cognex facility)
Onsite Available – Call 770-814-7920 for Details
Registration: Online at http://www.cognex.com/training
Topic List
1. Hardware Overview
2. Software and Image Acquisition
3. PatMax Basics
4. Search Tool Strategies
5. Histogram, Fixturing & Coordinate Spaces
6. Blob
7. Caliper & Geometry
8. Checkerboard & N-Point Calibration
9. PatInspect
10. OCVMax
11. Color Tools
12. Data Analysis and Results Analysis
13. Input/Output and Application Wizard
Expected Outcomes:
You will benefit from this course by learning:
About the hardware supported by VisionPro
How to prototype, develop, and test vision applications in QuickBuild
How to acquire images
How and when to use each vision tool
How and when to use calibration
How to add digital I/O and TCP/IP communication to an application
How to create new and modify existing applications built using the Application Wizard
Prerequisites:
Familiarity with Windows environment
Cognex Corporation One Vision Drive Natick, MA 01760-2059 (508) 650-3000 fax (508) 650-3333 www.cognex.com
Hardware, Connections, & Security
Session 1
Course Expected Outcomes
You will benefit from this course by learning
How to prototype, develop, and test vision applications in QuickBuild
Hardware
Objectives
• The student will correctly:
Become acquainted with the hardware supported by VisionPro
Understand the different utilities available for camera support
Be aware of the different resources available
Describe the different types of security options that VisionPro employs for its software and tools
Acquisition Hardware
1394DCAM FireWire
• Each camera acts as a frame grabber
GigE Vision Camera Support
• GigE Vision falls between FireWire and Camera Link
– Line scan support is possible
– Less bandwidth than Camera Link, more than FireWire
(Chart: relative bandwidth of Camera Link, GigE Vision, and FireWire)
MVS-8504 PCI Frame Grabber
• 4 independent channels provide mixed camera format support for asynchronous or simultaneous acquisitions
• 8500Le
– For price-sensitive applications
– Single camera applications
MVS-8600 PCI Camera Link
• 8601
– single channel frame grabber
– supports 1 area scan or 1 line scan camera
• 8602
– dual channel frame grabber
– supports 2 area scan, 2 line scan, or 1 area scan and 1 line scan camera simultaneously
• Variety of cameras:
– Line Scan: 1/2K, 2K, 4K and 8K cameras
– Area Scan: 640x480 to 2Kx2K or greater cameras
Hardware Overview
(Table: frame grabber comparison by PCI slot requirement, analog support, digital support, line scan support, maximum number of cameras, acquisition channels, and I/O)
Wiring Options
• PC Wiring Guide installed by default
• Defines different “kits” available
• States part numbers for cables used in various kits
• Detailed illustrations on wiring multiple scenarios depending upon hardware used
I/O Available
FireWire and GigE
• Two flavors
– PCI flavor (PCI-DIO24/S)
– USB flavor (USB-1024LS)
• Provide up to 24 bi-directional programmable Opto I/O lines
• Drivers are shipped with VisionPro and are installed as an option
• Can be used directly in the QuickBuild Communication Explorer or programmatically (as of 5.2)
MVS 8500 Options
• TTL
– 16 bi-directional programmable I/O lines
– Each line configured individually
• Opto-isolated
– 8 pairs of programmable Opto input and output lines
• Half Opto / Half TTL (Split)
– 4 pairs of Opto input and output lines
– 8 bi-directional TTL lines
Other Accessories
To simplify and speed up the system integration process, Cognex offers a wide range of optional accessories including cameras, lighting options and lenses for all types of machine vision applications.
Lighting
In order to achieve the highest quality images possible, Cognex offers a wide array of light modules and integrated LED lighting options.
Lenses
Cognex offers a full range of high-quality compact camera lenses designed specifically for machine vision applications.
Supported Cameras
(Tables of supported camera models for each acquisition platform:)
1394DCAM FireWire
GigE Vision
MVS-8504 & MVS-8501
MVS-8600 Series
Camera Utilities
FireWire DCAM Doctor Utility
• Topology information includes the FireWire bus speed
• Bus speed should be either S400 or S800
• A bus speed of S100 for an IEEE 1394b device indicates that Windows XP SP2 is running with the SP2 Windows FireWire bus driver
MVS-8600 Camera Initialization
• Uses the serial communication port built into the Camera Link cable
• Protocol commands for each camera vendor are different!
• Use the Cognex serial communication utility, cogclserial.exe
• Use a CLC file to initialize the camera through the utility
VisionPro Resources
• VisionPro provides different levels of resources
– QuickBuild Navigator panel
– On-Line Help
• System level access
– VisionPro Library
– Vision Tool information
• QuickBuild
– Shortcut to tool information
– Samples
• QuickBuild
• .Net & C#
• Scripting
Installed Documentation
• Documentation files are installed and accessible directly through the Windows Start menu
– Hardware Manuals
– FireWire and GigE User’s Guides
– PC Vision Wiring Guide
– PC Configuration Guide
VisionPro Help Files
• Help files are installed and accessible directly through the Windows Start menu:
– VisionPro Online Documentation
– through the Help selection of QuickBuild
The Browsigator
Integrated Help topic on all Tool Edit controls
VisionPro Samples
• Two types of CogJob samples available
1. QuickBuild interface
– Sample programs and scripts
2. Programmatic (VB.NET and C#)
– An HTML link is installed for the VisionPro samples included in the installation on the system
– Using Windows Explorer, the user can drill down to the directories that contain the sample files
Application Samples
• QuickBuild samples
– Selecting a sample will add it as a job in QuickBuild
– Navigating through the sample job will illustrate its use
• Scripting examples can also be viewed through the Navigator
VisionPro Tool Suite
• VisionPro currently offers 3 levels of tools to suit your needs for performance and price
VisionPro Base
– provides fundamental machine vision tools
VisionPro Plus
– adds PatQuick geometric pattern matching, OCV, and ID tools
VisionPro Max
– completes the suite, including PatMax and all VisionPro tools for ultimate flexibility
Security Options
• VisionPro supports various methods for authorized use of the software
• Framegrabber/dongle detection
– VisionPro detects installed Cognex hardware and authorizes use of VisionPro software
• Software licensing
– Used with GigE and FireWire cameras (non-Cognex hardware)
– Can operate in online mode and offline mode using file exchange
• Emergency licenses
– Up to 5 emergency licenses can be used per installation
– Useful when license keys are not immediately available
– Authorizes VisionPro software use for 3 days
Software Security Licensing Center
(Screenshot: the Licensing Center shows the currently installed software licenses and their status, and a software license management menu including access to emergency licenses)
• Deployment licenses
– Set with tools at time of purchase
– Do not have a time limit
VisionPro
3. Confirm your PC and camera configuration (we’ll need this info later in class).
a. Camera:
i. Serial Number:________________________
ii. IP Address:___________________________
b. PC:
i. IP Address:___________________________
4. If not already set, be sure to set the performance driver by clicking the “Set Performance Driver”
button in the Cognex GigE Configuration tool.
Software & Acquisition
Session 2
Objectives
• The student will correctly:
Become acquainted with VisionPro, QuickBuild, and the development methods available
Identify how to create a QuickBuild job, save the job, & deploy an application
Save and load VisionPro projects into QuickBuild
Create and configure an Image Source acquisition
Software
What is VisionPro?
Four Development Models
Path 1 Development Model
Advantages:
• No programming required
• Fast
• Can continue to use QuickBuild to modify vision, jobs, and I/O
Disadvantages:
• Operator interface limited by Application Wizard
Path 2 Development Model
Advantages:
• Easy customization of generated application
• Can still use QuickBuild to modify the underlying vision application
Disadvantages:
• Requires some programming
• Must work within framework of Wizard-generated code
• Cannot re-run the Wizard to update modified Wizard-generated code without losing your modifications
Path 3 Development Model
Advantages:
• Total control over operator interface appearance and behavior
• Can still use QuickBuild to modify the underlying vision application
Disadvantages:
• Requires programming
Path 4 Development Model
Advantages:
• Total application flexibility
Disadvantages:
QuickBuild
What is QuickBuild?
• QuickBuild is the interactive window into VisionPro
• Many components created can be re-used in applications
– Written using the VPro API
QuickBuild Object Structure
(Diagram: an application contains Jobs; each Job holds an image in RAM and a set of Tools. Tools generate results from their inspection on the image.)
Applications
• Applications contain jobs which contain tools
• You can also set system settings at the application level
(Screenshot callouts: application name; job name; quick access to sample code, jobs, help and tutorials; Job View allows individual job management as well as communications configuration)
Application Shortcuts
• Open, save, or save application as; includes all jobs and tools within those jobs
• Resets application or runs application continuously
• Application settings
• Run application including all jobs within it
• Create, import, or open a job to be added to the application; also allows saving of selected jobs
• Online/Offline toggle; communications settings and posted items
• Shows application and job results in a floating display
• Tool tips, sample code, and help
Jobs
(Screenshot callouts: individual tool status/feedback; camera settings; tools; job feedback/status)
Job Shortcuts
• Resets application; runs job continuously
• Runs job; enables image container or separate window
• Shows job and tool results in a floating display
• Image selector
• Job properties
• Tool tips and help
Tools
(Screenshot callouts: tool name; tool feedback/status)
Tool Shortcuts
• Open, save, or save tool as
• Runs tool; electric mode; enables image container or separate window
• Resets application or runs job continuously
• Image selector
• Shows tool results in a floating display
• Tool tips and help
Add Tools to a Job
• A Tool is a VisionPro object that performs a specific
analysis on the designated image
Job
(Diagram: a Job contains an Image Source and Vision Tools, linked through Input Terminals and Output Terminals)
Run/Test and Make Decisions
• Make decisions about the Pass/Fail status of the scene using
either:
– Vision Tool Results
– Data Analysis Tool
– Results Analysis Tool
– Scripting
• Output results to QuickBuild Posted Items or the
Communications Explorer
Multilanguage Support
• QuickBuild and VisionPro use the system locale to set the active language of the system during installation
Image Acquisition
Acquisition Basics (Frame Grabber)
(Diagram: the camera signal is digitized by the frame grabber and sent as a digital signal from the frame grabber to the PC)
Image Representation
• Images are stored as 2-D arrays (tables) of points of light intensity called pixels or pels
• The light intensity value, or grey value, of each pixel is mapped to an integer between 0 and 255 (for 8 bit images)
• 0 = black
• 255 = white
• Left-handed image coordinate system: the origin (0,0) is at the upper left, x increases to the right, y increases downward
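This mapping is easy to see in code. Below is a minimal, self-contained C# sketch (independent of the Cognex API) that models an 8-bit grayscale image as a 2-D byte array and reads back one pixel; the image dimensions and values are invented for illustration.

```csharp
using System;

class GreyImageDemo
{
    static void Main()
    {
        // An 8-bit grayscale image: each element is a grey value 0..255.
        // Row index = y (downward), column index = x (rightward),
        // matching the left-handed image coordinate system described above.
        byte[,] image = new byte[480, 640];   // 640x480 image, all black (0)

        image[100, 200] = 255;                // set pixel (x=200, y=100) to white

        byte grey = image[100, 200];
        Console.WriteLine($"Pixel at (x=200, y=100) has grey value {grey}");
    }
}
```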
Example: 8 bit vs. 12 bit
(Figure: an original image shown mapped as an 8 bit image and as a 12 bit image)
Image Source
• The tool used to acquire images from a camera in VisionPro is the Image Source
• Initialize acquisition using the Initialize button
Image Source
• First, choose whether the image comes from an Image Database or a Camera
• You can also load a folder of images and cycle through them
Image Source
• Frame grabber
– The Cognex board from which images will be acquired
• Video Format
– Choose the camera (and its format) from which you will acquire this image
• Camera Port
– Which port this camera is connected to
Run the Job
• When you Run the Job, it acquires an image from the camera and puts it into LastRun.OutputImage
Getting a Better Image
• Exposure
– Exposure duration for electronically shuttered cameras
• Brightness and Contrast
– Contrast determines the “spread” of the grey values of the image
– Brightness shifts the collective grey values higher or lower
Strobe & Trigger
– Enabling Strobe
– Setting pulse duration and polarity of strobe
– Selecting a trigger mode
– Selecting whether trigger will be low to high or high to low transition
Trigger Modes
Trigger Type | Description | Use
Manual | Software triggering | Press Run in application
Free Run | Allows for acquisition as fast as possible | Pseudo live mode – often used with line scan to negate image lag
Hardware Auto | Acquires when it detects a transition on an external line | Proximity switch detecting part
Hardware Semi-Auto | Acquires when the software Run is enabled AND the external line sees a transition | Application is running and waiting for external source – more control over system
Additional Parameters
• A final set of parameters is for specialized acquisition settings
– Strobed acquisition
– Using auxiliary lighting modules
– Partial image acquisition with progressive scan cameras
– Using lookup tables
Display Live Video
• Use the Display Live Video button to open a Live Video Display and show a live image
Displays
• Optionally use the Floating Display to open a separate window to display the acquired image
• Notice the extra information at the bottom
Displays
• On any of the displays, you can right click and choose to zoom in, zoom out, pan, etc.
• Zoom Wheel allows the wheel on the mouse to control the zoom of the display
Image File Tool
• Used to save images to file or process images from an existing file
• File types supported:
– Image databases: .idb & .cdb
• Multiple images in one file
• Grayscale only
– Bitmaps: .bmp
• One image per file
• Grayscale and color
– Tagged Image File Format: .tif
• Multiple images in one file
• Grayscale and color
• Examples:
– To save and read back test images for prototyping, development, and documentation
– To save and read back images from a production run
• e.g. all failed parts
Image File Modes
• Toggle between Read and Write mode using the Record button
– In Read mode you are reading images from an image file
– In Write mode you are appending images to an image file
• We’ll first address how to read images from an existing image file
Reading Images
• Example: You need to prototype and test vision tools on a fixed set of saved images
• Load a file using the Open Image File button
– Browse for the image file
Reading Images
(Screenshot callouts: opened image file; total image count; navigation buttons; currently selected image)
Three Images
• Most tools have several images to work with
• The Image File Tool has three images
• Choose the image to view from the Display pull-down
Selected Image
• When you first open an image file, the first image is selected by default
– The thumbnail of the selected image is highlighted in blue
– The selected image is shown in the display
LastRun.OutputImage
• When we grab an image from an image file, it becomes the LastRun.OutputImage
Writing Images to File
• Example: Your Quality Assurance department wants images saved of all parts that failed in production
• When in Write mode, running the Image File tool
– Appends the Current.InputImage to the image file
– Takes the Current.InputImage and puts it in the LastRun.OutputImage
Current.InputImage
• Current.InputImage is the image to be written to the Image File on the next run (in record mode only)
Adding Images to an Image File
• Start by creating a new image file or opening an existing file to append to
(Buttons: New Image File; Load Existing Image File)
Link Images
• Drag and drop from the OutputImage of the Image Source to the InputImage of the ImageFile tool
– Now every time the Job runs, the acquired image will be appended to the image file
Load / Save
• In the Image File control, there are two Save buttons
– One saves the entire tool and all of its settings to a .vpp file
– The other saves the images in the currently open file to an image file
• .bmp, .cdb, .idb, or .tif
(Screenshot callouts: one button saves a single image; the other saves the complete image file tool)
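The same read workflow is available programmatically through the VisionPro .NET API. The sketch below is a hedged illustration, not sample code from this course: the class and member names (CogImageFileTool, Operator.Open, CogImageFileModeConstants, Operator.Count) are written from memory of the API and should be verified against the installed VisionPro documentation, and the file path is invented.

```csharp
// Hedged sketch: verify class/member names against the VisionPro class reference.
using System;
using Cognex.VisionPro;
using Cognex.VisionPro.ImageFile;

class ReadImageFile
{
    static void Main()
    {
        var fileTool = new CogImageFileTool();

        // Open an existing image database in Read mode (hypothetical path).
        fileTool.Operator.Open(@"C:\Images\parts.cdb", CogImageFileModeConstants.Read);

        // Each Run() is assumed to load the next image into the tool's
        // OutputImage (LastRun.OutputImage in QuickBuild terms).
        for (int i = 0; i < fileTool.Operator.Count; i++)
        {
            fileTool.Run();
            ICogImage img = fileTool.OutputImage;
            Console.WriteLine($"Image {i}: {img.Width} x {img.Height}");
        }
    }
}
```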
VisionPro
4. Click the “Show Live Display” button and verify you can affect the FOV at which your camera is
pointing (move something unique under the camera and see if it moves on the live display).
5. Use the two ring controls on the lens to adjust aperture and focus.
The top ring, Aperture, adjusts the amount of light allowed to pass through the lens.
The bottom ring, Focus, adjusts the sharpness of an image.
Click anywhere in the Image view window to stop the live acquisition.
When to adjust lens:
6. Open the Floating Display and identify where the following information can be found:
a. X & Y position coordinate
b. Zoom level for viewing the image
c. Intensity or grayscale value
7. Open the VisionPro Toolbox and add a CogImageFileTool.
8. Make the proper connections between the Image Source and the newly inserted
CogImageFileTool.
9. Configure the Image File tool to record an image to a CDB file on your hard drive.
10. Use the tool to record 8 PASS images of the demo plate and 8 FAIL ones.
11. Reopen your Image Source and configure it to acquire images from the newly created CDB file
instead of the camera.
Hint: You’ll have trouble using the Image Source on this file and recording at the same time so
only do one at a time.
12. Confirm that you are able to acquire from the file instead of the actual camera.
13. Revert back to camera acquisition instead of file acquisition.
14. Remove the Image File tool from the job (we no longer need it after the CDB file was
generated).
15. Save your APPLICATION as MyQuickBuild.vpp.
PatMax® – Getting Started
Session 3
Objectives
• The student will correctly:
Identify applications where PMAlign can be used to inspect
Understand the concepts behind how the tool works
Create and configure a PMAlign tool to find a pattern under various run-time conditions
Train a pattern and determine if the automatically extracted features are valid for the application
Evaluate parameter settings to determine which are needed for basic run-time conditions
PMAlign
Introducing PatMax
• PatMax is a pattern-location search technology
– PatMax patterns are not dependent on the pixel grid
• A feature is a contour that represents the boundary between dissimilar regions in an image
• Feature-based representation can be transformed more quickly and more accurately than pixel-grid representations
PatMax Capabilities
• With one tool measure
– Position of the pattern
– Size relative to the originally trained pattern
– Angle relative to the originally trained pattern
• Unprecedented accuracy
– Up to 1/40 pixel translation
– Up to 1/50 degree rotation
– Up to 0.05% scale
• Increased speed
– Basic pattern finding is faster
– Angle and size determined quickly
PatMax Capabilities
• Improved alignment yield
– Handles wide range of image contrast
– Defocus, partial occlusion, and unexpected features can be tolerated
• Easier to use
– Direct measurement of angle and size in one step
– Patterns may be transported between machines without loss of fidelity
– Single tool functions more accurately and efficiently than the multiple-tool solutions previously needed
PatMax Applications
• Align a printed circuit board based on fiducials (alignment)
PatMax Applications
• Locate tabs on peach cans; variations in translation, rotation, and lighting (presence / absence detection)
(Screenshot: four results found; the top result reports Score: 0.97, Contrast: 0.94, Fit Error: 0.02, Location: x=351.08 y=245.92, Angle: 0.09, X-Scale: 1.0, Y-Scale: 1.0)
PatMax Applications
• Identify engine block by type despite extreme similarity between types, lighting variations, and part rotation (sorting and classification)
PatMax Algorithms
PatQuick
• Best for speed
• Best for three-dimensional or poor quality parts
• Tolerates more image variations
• Example: pick and place
PatMax
• Best for high accuracy
• Great on two-dimensional parts
• Best for fine details
• Example: wafer alignment
PatFlex
• Designed for highly flexible patterns
• Great on curved and uneven surfaces
• Extremely flexible, but less accurate
• Example: label location
High Sensitivity
• For low contrast/high noise images
• Used with very noisy backgrounds
• Good for images that have significant video noise or image degradation
• Example: obscured part in bag
** PatQuick is the cursory part of the PatMax algorithm
The Big Picture
1. Train a Pattern
2. Set Run-time Parameters
3. Run PatMax on the Image
Pattern Training
1. Get a Train Image
2. Set Training Parameters
3. Train the Pattern
4. Evaluate the Trained Features
Linking Tools
• You need images for:
– Pattern training
– Run-time inspection
Training a Pattern
• The PMAlign Tool has three images associated with it
Current.InputImage
• The PMAlign Tool also has a Current.InputImage that can either be a run-time image or can be “grabbed” as a training image (Current.TrainImage)
Pattern Region and Origin
• When using graphics
– Drag and resize the training box around the pattern
– Position the origin at the appropriate location
Model Origin
• The model origin identifies the point which will be reported to you when PatMax locates an instance of the model in the search scene
Train Pattern
• Press the Train button to train the pattern
– PatMax finds the features in the Region
PatMax Patterns
• When you train a pattern, PatMax determines the features contained in that pattern
• A feature is a contour that represents the boundary between dissimilar regions in an image
• A feature is described by a list of boundary points that lie along the contour
– Boundary points are defined by position (x, y) in the image and direction normal to the contour
Pattern Features
• To see what PatMax has detected as features to look for with this pattern, check the Train Features Graphics
Pattern Features
• Yellow lines indicate coarse features
– Those used by PatQuick
• Green lines indicate fine features
– Those used by PatMax
Pattern Features
• Zoom in to get a closer look at the detected features
InfoStrings
• Watch for any InfoStrings
– These will indicate if the pattern training was successful
– They will also warn of potential problems with the trained pattern
Pattern Training
General guidelines for PatMax pattern training:
• Select a representative pattern with consistent features
• Reduce needless features and image noise
• Train only important features
• Consider masking to create a representative pattern
• Larger patterns will provide greater accuracy
• In general, the more boundary points, the greater the accuracy
“Bad” Patterns
• What happens if you look at the trained pattern and don’t like it?
– Too much detail
– Not enough detail
– Missed features
Granularity
• Granularity indicates which features PatMax detects in an image
Granularity
• Granularity is expressed as the radius of interest, in pixels, within which features are detected
• Increasing the granularity decreases the number of finer features PatMax will use
Granularity Limits
• PatMax uses a range of granularities between fine and coarse limits
• Making granularity coarser (higher):
– Increases speed
– Decreases accuracy
– Detects coarse and attenuates fine features (which may be good or bad)
• Making granularity finer (lower):
– Decreases speed
– Increases accuracy linearly
– Detects fine and attenuates coarse features (which may be good or bad)
The Big Picture
1. Train a Pattern
2. Set Run-time Parameters
3. Run PatMax on the Image
Run-time Parameters
• Choose the run-time algorithm
• Then a Search Mode
– Search Image uses the entire image
– Refine Start Pose uses another tool’s results for the start
• Then specify the number of instances to find in the run-time image
• Indicate the Accept threshold
Accept Threshold
• The Accept Threshold is a score (between 0 and 1.0) that PatMax uses to determine if a match represents a valid instance of the model within the search image. Increasing the acceptance value reduces the time required for search.
(Figure: the accept threshold marked on a 0 to 1.0 score scale)
Coarse Accept Threshold
(Screenshot: the coarse threshold can be set manually)
Degrees of Freedom
• Set either a nominal value or a range of values
– Use the arrows to toggle between which you use
– Also toggle between degrees and radians for angle
– ScaleX and ScaleY are advanced parameters
Search Region
• By default, PatMax searches the entire image for potential matches
• To have PatMax look in only a portion of the image, use a Region Shape
– Either type in values or use graphics to set size and position
Graphics
• Last, select the graphics to be shown at run-time
– Remember graphics take time to update
Run PatMax
• Press the Run button to run PatMax on the current input image
• If an instance is found, designated graphics will appear on the last run input image
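For deployment outside QuickBuild, the same train/run cycle can be driven from code. The following is a hedged sketch of the VisionPro .NET API, not course sample code: the member names (Pattern.TrainImage, Pattern.Train, RunParams.AcceptThreshold, Results) are written from memory and should be checked against the VisionPro class reference, and the images are assumed to come from elsewhere in your application.

```csharp
// Hedged sketch of programmatic PMAlign use; verify names against the docs.
using System;
using Cognex.VisionPro;
using Cognex.VisionPro.PMAlign;

class PMAlignSketch
{
    // trainImage and runImage are assumed to be acquired elsewhere.
    static void FindPattern(ICogImage trainImage, ICogImage runImage)
    {
        var pmAlign = new CogPMAlignTool();

        // Train: give the tool a train image and train the pattern.
        pmAlign.Pattern.TrainImage = trainImage;
        pmAlign.Pattern.Train();

        // Run-time parameters: accept threshold and number of instances.
        pmAlign.RunParams.AcceptThreshold = 0.7;        // scores range 0..1.0
        pmAlign.RunParams.ApproximateNumberToFind = 1;

        // Run on the run-time image and read back results.
        pmAlign.InputImage = runImage;
        pmAlign.Run();

        if (pmAlign.Results != null && pmAlign.Results.Count > 0)
        {
            var r = pmAlign.Results[0];
            Console.WriteLine($"Found: score={r.Score:F2}");
        }
        else
        {
            Console.WriteLine("No instance found above the accept threshold.");
        }
    }
}
```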
Results
• Results are displayed under the Results tab
• If multiple instances are found, they are returned in descending order of score
Results
• Score
– How well the result features match the trained pattern features
• X, Y
– The location of the found pattern in terms of the specified origin point
• Angle
– The angle of the found pattern relative to the originally trained pattern
– If nominal angle is used, this always equals the nominal value
Results
• Fit Error (PatMax algorithm only)
– A measure of the variance between the shape of the trained pattern and the shape of the pattern instance found in the search image
Results
• Scale
– The size of the found pattern compared to the originally trained pattern
– If nominal scale is used, this always equals the nominal value
– a.k.a. Uniform Scale
• Scale X, Scale Y
– The size of the found pattern compared to the originally trained pattern in the X and Y directions
– If nominal scale is used, this always equals the nominal value
VisionPro
2. Open up CogJob 1.
3. Open the VisionPro Toolbox and add a CogPMAlign Tool.
4. Make the proper connections between the Image Source and the newly inserted CogPMAlign
Tool. Remember to run the Job once after you make the Image Source connection.
5. Configure the CogPMAlign Tool to find the COGNEX logo printed on the demo plate. The
tool should account for at least 45 degrees of rotation in either direction.
Objectives
• The student will correctly:
Identify applications where PMAlign or SearchMax may be part of the vision solution
Create and configure a PMAlign tool to find a pattern under various run-time conditions
Evaluate parameter settings to determine which are needed for various run-time conditions
Optimize execution time and accuracy
Understand parameters to search more successfully
Create and configure a SearchMax tool
Decide when to use PMAlign or SearchMax
Train and set run-time parameters
PMAlign – revisited
• Repeating Patterns
– Ability to tell PatMax that elements repeat, such as a grid or a set of bars or a pattern of parallel lines
• Elasticity
– Amount of variance (in pixels) allowed for the perimeter
• More on Granularity
Pattern Polarity
Pattern polarity is defined at every point along a boundary as the direction toward darkness, without regard to magnitude.
(Figure: matching polarity vs. mismatched polarity)
Polarity
• Check the box to ignore polarity (allow for polarity changes)
Ignoring Pattern Polarity
Polarity is a hint to PatMax which can make a pattern less ambiguous. You should use polarity unless the object is subject to polarity changes. Notice the potentially ambiguous object illustrated below.
(Figure: expected match vs. inadvertent match)
Repeating Patterns
• These can create a special challenge for PatMax
– The human eye has problems with repeating patterns AND alignment
– PatMax MUST be selected as the Algorithm
(Figure: train pattern; runtime pattern found)
Elasticity
• Elasticity is an Advanced Parameter (enable Show Advanced Parameters) that can be valuable in finding parts with some geometric change from the originally trained pattern
• Elasticity, a train-time parameter, is used to specify the degree to which you will allow PatMax to tolerate nonlinear geometric changes between the pattern and the image
• Elasticity is measured in pixels, typically 0 to 8
Granularity
• Coarse granularity controls the level of detail used by the PatQuick algorithm
• Fine granularity controls the level of detail used by the PatMax algorithm
• By default, both are set to 0, allowing PatMax to automatically determine good values
(Figure: features detected at granularity = 6 vs. granularity = 1)
Relationship Between Boundary Points
• In the end PatMax creates a compilation of vectors which include boundary point information, direction (polarity), and a relationship to one another
• Contrast
– Greyscale difference between edge and background
• Overlap
– Percentage of one part covering another
Clutter
• The model consists of inter-related boundary points
• Clutter is a term used to describe extra features present and adjacent to the original boundary features of the image. They were not trained as part of the original model.
Clutter in Score
• The Score Using Clutter parameter allows you to factor in or ignore clutter when the score is calculated
• If checked, the score is lowered based on the amount of clutter
• If unchecked, the score is not affected by the presence of clutter
(Figure: the same instance scores 94 when clutter is ignored and 68 when clutter is factored in)
Contrast
• Contrast sets the minimum contrast required in order to consider a change in grayscale a potential boundary point
(Figure: example boundary contrasts of 92 and 62)
Outside Region
• Outside Region allows a percentage of the model to be outside the search region and still be found
• Those missing boundary points outside the field of view are not counted against the score
Degrees of Freedom
Remember: Tell PatMax what you know about the part – do not enable freedoms your application does not demand
Degrees of Freedom
• Each degree of freedom may have a low to high zone of values
• Multiple degrees of freedom can be enabled
• Multiple degrees of freedom can cause unintended matches
• Of the three scale degrees of freedom, have at most two enabled – the third would be redundant
(Figure: a pattern shown at scales 0.50, 0.67, 1.00 (original), 1.17, 1.33, 1.67, and 2.00)
PatMax Score
• Score ranges from 0 (no match) to 1.0 (perfect match)
• Brightness, Contrast, and Polarity do NOT affect scores; they may only affect whether a pattern is detected or not
• Factors considered in scoring include:
– Degree of pattern shape fit
– Fit within the Degree of Freedom range
– Missing features
– Extraneous features (PatMax algorithm only)
How To Make PatMax Fast
• Control what you can & tell PatMax what you know about the part
• Understand what parameters affect execution time
Guidelines for Run-time Accuracy
• Never ask PatMax to figure out what you already know or should know
– Prefer “consider polarity”
– Prefer elasticity very close to 0.0
– Prefer nominal DOF settings
– If you need to use DOF zones, set them based on realistic expectations of object variation
Guidelines for High Accuracy
• Camera
– Use a quality lens to minimize distortion
– Stick to the middle of the field of view
– Focus carefully
– Adjust aperture to avoid saturation
– Calibrate the camera to the system
• Larger patterns are more accurate
• Make sure fine granularity is 1.0
– If the automatic selection picks a larger value, you will get a warning
SearchMax
• Specialized search tool that combines features from both PMAlign and CNLSearch
– CNLSearch – normalized correlation to match features at runtime
– PMAlign – find instances at different rotations and scale
Differences
Aspect | PatMax | SearchMax
Color Image | Must be transformed to grayscale | Handles color images
Outside Region | Model can be outside ROI | Part MUST be in ROI
Skewing | Cannot handle skew | Can find in skew range
Small Model | The bigger the model, the better (more info) | Can handle small models
Noisy Background | Very good at finding model | Cannot handle background noise very well
Open Shapes | Not as reliable on open shapes (like a corner) | Can give more reliable results
Many DOFs | Increase in tool time, but good | Tool time becomes extremely high
Where to use SearchMax
• Grey level images with small models
– e.g. 15x15 pixels
• Images that would create too many features for PatMax
– Page of written text
– Textured objects
• Objects that don’t segment well due to color variations
• Skewed objects
SearchMax Capabilities
• Intensity based alignment (intensity correlation)
– Grey scale, RGB
• DOF
– Rotation 0-360 degrees, Scale 50-200%, Skew 0-30 degrees
• Accuracy
– Depends on image size but varies between 1/4 and 1/10 of a pixel
• Benefits
– Can handle very small patterns (15x15 or smaller)
– Works on many images where PatMax has a hard time
• Blurry images
• Confusing or too many geometries created by noise
• Skewed images
SearchMax
• Training is similar to PatMax, except for its modes
Results
• SearchMax is able to find all four results – even the skewed one
VisionPro
2. Open up CogJob 1.
3. Open the VisionPro Toolbox and add a CogSearchMax Tool.
4. Make the proper connections between the Image Source and the newly inserted CogSearchMax
Tool. Remember to run the Job once after you make the Image Source connection.
5. Configure the CogSearchMax Tool to find the lower right fastener on the back panel view.
a. Set the Train Mode to be “Evaluate DOFs at Runtime”
b. Set the search region so that it is only picking up that fastener and not the one on the
other side.
6. Run the CogJob and verify that the fastener is found.
Objectives:
• The student will correctly:
Analyze an image for the presence/absence of a part using a Histogram tool
Choose the appropriate fixture tool as needed in a vision application
Create and configure a Fixture Tool
Use terminals to pass data between tools
Identify the use of Coordinate Spaces in vision applications
Histogram Tool
The Histogram tool creates statistics and a plot of the grey values found within a specified area of the image
Histogram
• A histogram may be used to:
– Detect the presence or absence of something in the image
– Monitor the output from a light source
• A software light meter
– Measure the uniformity of the grey values within an image
– Determine the grey-value distribution in an image to set up other vision objects
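Conceptually, a histogram just counts pixels per grey value. Here is a minimal, self-contained C# sketch (independent of the Cognex API) that computes a 256-bin histogram for an 8-bit image along with one of the statistics such a plot supports; the tiny image is invented for illustration.

```csharp
using System;

class HistogramDemo
{
    static void Main()
    {
        // A tiny 8-bit "image" (grey values 0..255), invented for illustration.
        byte[,] image = { { 10, 10, 200 }, { 10, 200, 200 }, { 10, 10, 10 } };

        // One bin per possible grey value.
        int[] histogram = new int[256];
        long sum = 0;
        foreach (byte grey in image)
        {
            histogram[grey]++;
            sum += grey;
        }

        int pixelCount = image.Length;
        double mean = (double)sum / pixelCount;

        Console.WriteLine($"Pixels: {pixelCount}, mean grey value: {mean:F1}");
        Console.WriteLine($"Count at grey 10: {histogram[10]}, at grey 200: {histogram[200]}");
    }
}
```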
Histogram Images
• Histogram has three images associated with it (tool dialog)
• Current.InputImage is the image Histogram will analyze on the next run
– In this case, the image comes from the OutputImage of the Image Source
Histogram Images
• LastRun.InputImage is the image on which the last execution of Histogram took place
Histogram Images
• LastRun.Histogram is a plot of the grey-level distribution
Region of Interest
• By default, Histogram runs on the entire image
• To analyze a single area of the image, choose a region shape and manipulate it on the Current.InputImage
Graphics
• Optionally, change which graphics appear at run-time
Results
• Results appear in the control and the floating results grid
• May also be accessed in VB or C# code
Coordinate Spaces
Calibration and Fixturing
• Coordinate Spaces can be achieved through:
– Fixture Tool (this section)
– FixtureNPointToNPoint Tool (this section)
– CalibNPointToNPoint Tool (later section)
– Checkerboard Calibration Tool (later section)
– Manually configuring and passing a 2D Transform (later section)
Root Space
• The Root Space is a left-handed coordinate system perfectly aligned with the pixels of an acquired image prior to any image processing
– May be different for synthetic or linescan images
Root Space
• VisionPro automatically re-adjusts the root space as an image undergoes image processing or sub-sampling
(Figure: the processed image has fewer pixels; note that the root grid lines no longer correspond to the pixel boundaries)
User Space
• VisionPro lets you define any number of additional coordinate systems
User Space
• You determine:
– Units
– Handedness
– How it relates to the image’s root space
(Figure: a point labeled 2.3, 8.5 in user-space units)
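Relating a user space to the root space comes down to a 2-D transform. The sketch below (plain C#, no Cognex types) shows a rigid transform of the kind such a space might carry: it maps a root-space pixel coordinate into a hypothetical user space defined by a translation, a rotation, and a scale; all numbers are invented for illustration.

```csharp
using System;

class UserSpaceDemo
{
    // Map a root-space point into a user space defined by scale,
    // rotation (radians), and the user-space origin in root coordinates.
    static (double u, double v) RootToUser(
        double x, double y,
        double originX, double originY, double angle, double scale)
    {
        double dx = x - originX;
        double dy = y - originY;
        double cos = Math.Cos(angle), sin = Math.Sin(angle);
        // Rotate, then scale from pixels into user units.
        double u = (dx * cos + dy * sin) / scale;
        double v = (-dx * sin + dy * cos) / scale;
        return (u, v);
    }

    static void Main()
    {
        // Hypothetical space: origin at pixel (320, 240), rotated 30 degrees,
        // 12.5 pixels per user unit (e.g. per millimeter).
        var (u, v) = RootToUser(351.0, 246.0, 320, 240, Math.PI / 6, 12.5);
        Console.WriteLine($"User-space coordinates: ({u:F2}, {v:F2})");
    }
}
```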
Pixel Space
• A pixel space is like the root space in that
– Its origin is always in the upper-left corner
– Its space corresponds to the image pixels
Coordinate Space Trees
• Coordinate space trees contain
– An image’s root space
– All user spaces you created
– How all the spaces are related to each other
• a.k.a. transformations
Selected Space
• At all times, one space within the tree is the Selected Space for the image
Selected Space
• Creating a new image through some transformation adds a new coordinate space to the coordinate space tree
– And automatically selects that space as the new image’s selected space
Fixture Tool
• The Fixture Tool is used to create a fixture coordinate system when you already have a coordinate transform calculated
– In our example, we’ll find our part using PMAlign; it produces a transform in its results
Our Problem:
• Then we’ll create a Caliper to measure the width of the center “tab”
Getting Started
• Create and configure an Image Source and a PMAlign Tool trained to find the right “ear” of the bracket
Add Fixture Tool
• Then add a CogFixtureTool and connect its InputImage to the Image Source’s OutputImage
Connect Transforms
• Take the transform determined by PMAlign and use it as our Fixture
• Connect the Pose result of PMAlign to the Transform of the Fixture
– If you wanted to supply X, Y, and rotation individually, you could connect to those terminals individually
Run the ToolGroup
• Run the ToolGroup to pass the image and transform to the Fixture Tool
Settings
• In most applications, that’s it
• In some cases, you may want to manipulate the transformation before running the subsequent vision tools
Add a Caliper
• Now add the Caliper and connect its InputImage to the OutputImage of the Fixture
FixtureNPointToNPoint
Reference Fixturing Method
Our Problem
• Measure the width of the tab on the bracket, using the centers of the holes to indicate where the part is in the FOV
Add Tools
• Create and configure an Image Source and a Blob Tool
Add FixtureNPointToNPoint Tool
• Now add a FixtureNPointToNPoint Tool
Adding Terminals
• Right click on the Blob Tool and Add Terminals
Link Terminals
• Connect the newly exposed terminals to the Fixture input points
Degrees of Freedom
• In the FixtureNPoint control, choose the degrees of freedom used when determining the best-fit transformation between fixtured and unfixtured points
– In other words, how do you expect your part to change from image to image?
– Then be sure you have enough points to perform the appropriate transformation
Type | # of Points
Translation | 1
Rotation and Translation | 2
Scaling, Aspect, Rotation, and Translation | 3
Scaling, Aspect, Rotation, Skew, and Translation | 4 (or 3 if they are not collinear)
Grab Reference Image
• Press the Grab Reference Image and Points button
Using FixtureNPoint
• Now add a Caliper and connect its InputImage to the OutputImage of the FixtureNPoint Tool
VisionPro
1. Open the VisionPro Toolbox and add a CogFixture Tool under the CogPMAlign Tool. Hint: It’s in
the Calibration & Fixturing folder.
2. Make the proper connections between the Image Source, the PMAlign Tool and the newly
inserted CogFixture Tool. Remember to run the Job once after you make the connections.
3. Save your APPLICATION as MyFixture.vpp.
4. Open the VisionPro Toolbox and add a CogHistogram Tool under the CogFixture Tool. Hint: It’s
in the Image Processing folder.
5. Make the proper connections between the CogFixture Tool and the newly inserted
CogHistogram Tool. Remember to run the Job once after you make the connections.
6. Configure the CogHistogram Tool to check for the presence of pins in the camera’s rear
connection.
Objectives
• The student will correctly:
Blob Overview
• Blob analysis is the detection and analysis of two-dimensional shapes within an image
• Blob finds objects by identifying groups of pixels that fall into a user-defined grey-scale range
• Blob reports many properties:
– Area
– Center of Mass
– Perimeter
– Principal Axes
– Extrema
(Figure: a blob annotated with its center of mass (CM), principal axes (PA), and extrema)
• Sample applications:
– Inspect for number, size, and shape of dispensed epoxy dots
– Inspect for correct position and size of ink dots indicating bad wafer dies
– Inspect for fragmentation and size of pharmaceutical tablets
– Sort or classify objects according to their size, shape, or position
Segmentation
• The first thing Blob does when it runs is image segmentation, determining which pixels are blob pixels and which are background pixels
• There are several modes to specify what separates blob from background pixels
(Figure: blob pixels vs. background pixels)
Segmentation
• Most segmentation modes will require:
– Polarity
• Dark blobs on light
• Light blobs on dark
– Threshold
• The value(s) that separate blob pixels from background pixels
Fixed Thresholding
• In Fixed Thresholding, the division between blob pixels and background pixels is determined by grey values
• Set a grey-level threshold:
(Figure: histogram of pixel counts by grey value, with a grey-value threshold at 140)
Relative Thresholding
• Relative thresholds are expressed as percentages of the total pixels between the left and right tails
• Tails represent noise-level pixels that lie at the extremes of the histogram
(Figure: an image and its histogram with the left and right tails marked)
Using Relative Thresholds
• Relative thresholds adjust for linear lighting changes
(Figure: the same 40% relative threshold maps to grey-level thresholds of 30, 100, and 140 as overall image brightness changes; a fixed threshold of 100 would not adapt to the lighter image)
Fixed vs. Relative Thresholding
• Fixed is faster than relative, because with a fixed threshold the grey levels corresponding to the background and “object” are known in advance, while a relative threshold must first be computed from the histogram
Hard Thresholding
• The examples so far have all used Hard Thresholding
– One value (grey level or percentage) divides blob pixels from background pixels
(Figure: applying a threshold value of 150 to a row of pixel values between 80 and 220 splits blob from background; examine a histogram to determine the threshold grey value)
Hard Thresholding
(Screenshot callouts for the segmentation modes:)
• Dynamic – threshold dynamically chosen; good for images with a bimodal distribution of grey values
• Fixed – specify a single grey value
• Relative – specify a single percentage & tails
Pixel Weighting
• Spatial quantization error can be eliminated by applying pixel weighting
• As the blob moves relative to the pixel grid, the total weight remains the same
(Example: as the blob shifts, the pixel weights go from 0 0 1 1 1 0 0 to 0 .4 1 1 .6 0 0 to 0 0 .8 1 1 .2 0 – the total weight stays 3)
Soft Thresholding
• Create a pixel weighting scheme by using soft thresholding
• Soft thresholding uses a range of thresholds
(Figure: weight ramps from 0 to 1 across the softness band rather than stepping at a single value)
Soft Thresholding
• Soft Thresholding example
– Low Threshold = 50
– High Threshold = 65
– Softness = 3
(Figure: weighting rises from 0 through 0.25, 0.50, and 0.75 to 1.0 between the low threshold of 50 and the high threshold of 65)
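The weighting is essentially a linear ramp between the two thresholds. Here is a minimal C# sketch of that idea (independent of the Cognex API), using the example values of low = 50 and high = 65 from the slide; treat it as an illustration of the concept rather than the tool's exact formula.

```csharp
using System;

class SoftThresholdDemo
{
    // Weight 0 below 'low', 1 above 'high', linear ramp in between.
    static double Weight(byte grey, int low, int high)
    {
        if (grey <= low) return 0.0;
        if (grey >= high) return 1.0;
        return (grey - low) / (double)(high - low);
    }

    static void Main()
    {
        int low = 50, high = 65;   // example values from the slide
        foreach (byte g in new byte[] { 40, 50, 55, 60, 65, 200 })
            Console.WriteLine($"grey {g,3} -> weight {Weight(g, low, high):F2}");
    }
}
```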
Using a Subtraction Image
• Use a Subtraction Image when the image consists of similar background and blob grey values
• The threshold image contains only background information
• Every pixel in the image that differs from the corresponding pixel in the threshold image by a specified amount is a blob pixel
(Figure: subtraction image, image to segment, and the resulting segmented image)
Pixel Mapping
• Use a pixel map (lookup table) for images that cannot be segmented with hard or soft binary thresholds
• Requires a scaling factor which gets applied to the pixel map values
Pixel Mapping
• Supply an output value for each grey value
Connectivity Analysis
• After segmenting the image, Blob performs Connectivity Analysis
• Whole Image blob analysis returns one result for all blob pixels in the image
• Grey Scale analysis identifies discrete, connected blobs
(Figure: whole blob analysis)
Connected-Blob Analysis
• Object pixels must be eight-connected (see the sketch below)
– Connected vertically, horizontally, or diagonally
• Background pixels are four-connected
– Connected vertically or horizontally only
• Order matters!
– To reorder or delete an operation, use the buttons in the dialog
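To make eight-connectivity concrete, here is a small, self-contained C# sketch (not the Cognex implementation) that labels connected blobs in a binary image using a flood fill over the eight neighbor directions; the test image is invented.

```csharp
using System;
using System.Collections.Generic;

class ConnectivityDemo
{
    static void Main()
    {
        // 1 = blob pixel, 0 = background (invented test image).
        int[,] img =
        {
            { 1, 1, 0, 0 },
            { 0, 1, 0, 1 },
            { 0, 0, 1, 1 },
            { 0, 0, 0, 0 },
        };
        int rows = img.GetLength(0), cols = img.GetLength(1);
        int[,] labels = new int[rows, cols];
        int nextLabel = 0;

        for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
        {
            if (img[r, c] != 1 || labels[r, c] != 0) continue;

            // Flood-fill one blob using eight-connectivity.
            nextLabel++;
            var stack = new Stack<(int r, int c)>();
            stack.Push((r, c));
            labels[r, c] = nextLabel;
            int size = 0;
            while (stack.Count > 0)
            {
                var (cr, cc) = stack.Pop();
                size++;
                for (int dr = -1; dr <= 1; dr++)
                for (int dc = -1; dc <= 1; dc++)
                {
                    int nr = cr + dr, nc = cc + dc;
                    if (nr < 0 || nr >= rows || nc < 0 || nc >= cols) continue;
                    if (img[nr, nc] == 1 && labels[nr, nc] == 0)
                    {
                        labels[nr, nc] = nextLabel;
                        stack.Push((nr, nc));
                    }
                }
            }
            // The diagonal touches in this image join everything into one
            // blob of 6 pixels; with four-connectivity it would be three blobs.
            Console.WriteLine($"Blob {nextLabel}: {size} pixel(s)");
        }
    }
}
```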
Pruning and Filling
• Pruning ignores, but does not remove, features which are below a specified size
• Filling fills in pruned features with grey values from neighboring pixels on the left
Region
• By default, the blob analysis is done on the entire image
• To only detect blobs in a portion of the acquired image, use a Region Shape
– May graphically position and size it on the input image
Measurements
• Allows you to specify measurements calculated on each blob
Measurements
• For each selected measurement, choose:
– Grid
– Runtime
– Filter
Measurements
• Use Filter to exclude blobs outside a certain range for any property
– Or include only those in a certain range
Measurements
• Results may be sorted in order for any of the selected measurements
– Ascending or descending order
Graphics
• Choose to display Result or Diagnostic graphics
– Remember that graphics add time
Results
• N
– Index of the blob
• ID
– A unique blob identification number independent of sorting criteria
• Measurements
– Calculated for those selected measurements
Geometric Properties
• Geometric properties are blob measurements that are constant regardless of the orientation of the blob
– Area
– Perimeter
– Center of mass
– Second moments of inertia about the principal axes
– Geometric extents
– Principal bounding box
(Figure: a blob annotated with its center of mass, major and minor axes, and the bounding box for geometric extents)
Non-geometric Properties
• Non-geometric properties are those that change as the blob rotates or changes position
– Blob median
– Second moment of inertia about the coordinate axes
– Coordinate extents
– Arbitrary bounding box
(Figure: a blob annotated with its medians in the x and y axes and the bounding box for coordinate extents)
Topological Properties
• Identifies blobs, holes, and blobs within holes
VisionPro
3. Open the VisionPro Toolbox and add a CogBlob Tool under the CogHistogram Tool.
4. Make the proper connections between the CogFixture Tool and the newly inserted CogBlob
Tool. Remember to run the Job once after you make the connection.
5. Configure the CogBlob Tool to count the number of LEDs on the camera’s rear plate.
a. Set the proper thresholding and polarity.
b. Use the Results tab to identify the properties of the blobs of interest.
c. Use the Measurement tab to discriminate against unwanted blobs and only count those
blobs that represent LEDs. Hint: You may need to add more criteria to be used in
distinguishing LED blobs from others.
3 LEDs 2 LEDs
6. Verify that the CogBlob Tool region tracks the movement of the demo plate.
d. What image must be selected in order to see all the blobs found (before they are
eliminated using the Measurements tab)?
e. What image must be selected in order to see only the blobs that match your specific
criteria (after they are eliminated using the Measurements tab)?
f. What image must be selected to view the graphics from all the tools in one image?
7. Save your APPLICATION as MyBlob.vpp.
Caliper & Geometry
Session 7
Objectives
• The student will correctly:
Identify applications where Caliper and Geometry tools may be part of the vision solution
Create and configure a Caliper tool to detect edges under various run-time conditions
Choose an appropriate region of interest for finding edges
Evaluate parameter settings to determine the best values to use for different edges
Assess when additional scoring functions are necessary and implement them when needed
Create and configure geometry tools
Caliper
Introducing Caliper
• Identifies edges and edge pairs in an object
• Reports edge location and distance between edges in an edge pair
Caliper Applications
• Ideal for gauging applications
– Measure the width of a part
– Measure the distance between parts
• Useful for fixturing a part
– When a part has positional uncertainty
The Task:
• Measure the width across this metal bracket
Define a Region of Interest
• The Caliper Region is the area of the image in which edges will be detected
(Figure: the region with its scan direction and projection direction arrows, rotation handle, and skew handle)
Define a Region of Interest
• Region criteria:
– Contains the edges of interest
– Edges should be parallel to the projection direction
• May have to rotate the Region
– Exclude features other than the edges of interest when possible
• May have to skew
Caliper Parameters
• Next, set the parameters for Caliper
5
The Big Picture – Run-time
Create the
projection image
Apply the
edge filter
Score remaining
edge candidates
Return highest
scoring edges
11
Projection
• Projection reduces a 2-D image to a 1-D image
– Reduces processing time and storage
– Maintains, and in some cases, enhances edge information
(Figure: scan arrow and projection arrow over the image)
6
Edge Filtering
• The purpose of the edge filter is to eliminate noise from the
input image
Projection
Edge peaks
13
Edge Filtering
• Caliper performs filtering by convolving the one-dimensional projection image with a filter operator
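To illustrate those two run-time steps, here is a small, self-contained C# sketch (not the Cognex implementation): it projects a 2-D grayscale patch into a 1-D profile by collapsing along the projection direction, then convolves the profile with a simple step-edge filter so that edges show up as peaks; the image values are invented.

```csharp
using System;

class CaliperConceptDemo
{
    static void Main()
    {
        // Invented 4x8 grayscale patch with a dark-to-light edge at column 4.
        int[,] img =
        {
            { 20, 22, 21, 23, 200, 202, 201, 199 },
            { 21, 20, 22, 22, 201, 200, 203, 200 },
            { 22, 21, 20, 23, 199, 201, 200, 202 },
            { 20, 23, 21, 22, 200, 200, 202, 201 },
        };

        // Projection: collapse rows so the 2-D patch becomes a 1-D profile.
        int rows = img.GetLength(0), cols = img.GetLength(1);
        double[] profile = new double[cols];
        for (int c = 0; c < cols; c++)
        {
            for (int r = 0; r < rows; r++) profile[c] += img[r, c];
            profile[c] /= rows;
        }

        // Edge filter: convolve with [-1 -1 +1 +1] (filter half size = 2).
        // A dark-to-light edge produces a strong positive peak.
        double[] kernel = { -1, -1, 1, 1 };
        for (int c = 0; c <= cols - kernel.Length; c++)
        {
            double response = 0;
            for (int k = 0; k < kernel.Length; k++)
                response += kernel[k] * profile[c + k];
            Console.WriteLine($"filter centered near column {c + 2}: {response,8:F1}");
        }
    }
}
```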
Edge Filtering
• A filter size close to the edge size produces stronger edge peaks
• A filter size too large or small flattens peaks
(Figure: peak shapes at filter widths 2, 4, and 6)
Settings Parameters
• Set Filter Half Size
– We’ll see a graphic on the image that will visually indicate if we’ve chosen a good number for our image
• Set the Contrast Threshold
– 0 through 255
– Difference in greyscale value from both sides of the edge
Contrast Threshold
• The contrast threshold eliminates edges that do not meet minimum contrast (peak height or depth)
(Figure: filtered profile with +min. contrast and −min. contrast bands around 0.0)
Edge Polarity
• Edge models describe edges or edge pairs as:
– Light to dark
– Dark to light
– Any polarity
Edge Polarity
• Choose Single Edge or Edge Pair
• Then indicate the expected polarity
• For edge pairs, also specify the expected distance between the edges
Maximum Results
• Specify the maximum number of edges or edge pairs to return in the results
Run
• Use the Run button to detect edges on the current input image
Graphics
• Use graphics to indicate results of executing the Caliper
Graphics
• Show Edges Found draws green lines in the LastRun.InputImage at the reported edges
Graphics
• The remaining result graphics appear in the LastRun.RegionData
Graphics
• Show Affine Transformed Image adds the pixels from the Region to the RegionData
Results Grid
• Results appear in the Results grid in order from highest to lowest scores
Results
• Score
– The score received based on the scoring functions you created
• Edge 0 / Edge 1
– Which edge along the Region this is (an index)
• Measured Width
– For edge pairs only, the distance between the two edges
Results
• Position
– A one-dimensional measurement along the search direction relative to the center of the input region
Results
• X, Y
– The location of the edge in the image
• Function Scores
– The score this edge received for a single scoring function
“Bad” Edges
• What happens when the edges you want to detect are not being reported?
Scoring
• By default, single edges are scored only by their contrast across the edge, and edge pairs are scored by how well the measured distance between the edges matches the expected distance
Scoring
• Specify the scoring method(s) to apply to this edge detection
Scoring
• Scores between Xc and X1 are mapped to Y1
• Scores between X0 and X1 are mapped linearly between Y1 and Y0
• Scores above X0 are mapped to Y0
Scoring Method
• Contrast – expressed in terms of the change in pixel values
– For edge pairs, the contrast is the average contrast of the two edges
• Straddle – whether or not the edges straddle the center of the projection window
– Score = 1 if they do
– Score = 0 if they do not
Scoring Method
• Size – based on how much the width between edges varies from the edge model
– w = width of the edge model
– d = width of edge pair candidate
– 0 – Size_Diff_Norm: |w−d|/w
– 1 – Size_Norm: d/w
– 2 – Size_Diff_Norm_Asym: (w−d)/w
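As a worked example (values invented): with an edge model width w = 20 pixels and a candidate pair width d = 16 pixels, Size_Diff_Norm = |20−16|/20 = 0.20, Size_Norm = 16/20 = 0.80, and Size_Diff_Norm_Asym = (20−16)/20 = 0.20. A candidate wider than the model, d = 24, gives Size_Diff_Norm_Asym = (20−24)/20 = −0.20, which is how the asymmetric form distinguishes too-wide from too-narrow pairs.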
Scoring Method
• Position – distance of the edge(s) from the center of the projection window
– a = distance between the origin of the edge candidate and the center of the edge window
– 0 – Pos: |a|
– 1 – Pos_Norm: |a|/w
– 2 – Pos_Neg: a
– 3 – Pos_Norm_Neg: a/w
Scoring
• The raw score computed for each constraint is converted to a final score that ranges from 0.0 to 1.0 via the scoring functions defined
• All scores for each edge or edge pair are geometrically averaged to obtain a final score
• Report only the edges or edge pairs with the highest scores, up to the number of edges or edge pairs requested
Geometry Tools
• VisionPro contains many tools that will do geometric calculations for you
– You provide the inputs and it does the appropriate calculations
Creation Tools
• Create the designated shape based on inputs provided
– e.g. the CreateCircle Tool will output a circle, given an X, Y center point and radius
Finding & Fitting Tools
• Find Tools create the designated shape using the results of Calipers included in the tool
Intersection Tools
• Calculate the intersection point(s) from input shapes
Measurement Tools
• Calculate angle and/or distances between inputted shapes
VisionPro
3. Open the VisionPro Toolbox and add a CogCaliper Tool under the CogBlob Tool.
4. Make the proper connections between the CogFixture Tool and the newly inserted CogCaliper
Tool. Remember to run the Job once after you make the connection.
5. Configure the CogCaliper Tool to count the number of increments on the camera’s lens.
a. Set the proper Edge Mode.
b. Set the proper Edge Polarity settings.
c. Set the proper Edge Pair Width. Hint: You may need to view pixel coordinate locations to
determine the proper Edge Pair Width.
d. Set the proper Maximum Results.
8 Pairs 4 Pairs
6. Verify that the CogCaliper Tool region tracks the movement of the demo plate.
e. What image must be selected in order to see the graphical representation of edge
transitions and strengths?
f. What value would you change in the Scoring tab to make score drop faster when edge
pairs deviate from the Edge Pair Width setting?
7. Save your APPLICATION as MyCaliper.vpp.
8. Open the VisionPro Toolbox and add a CogFindLine Tool (available in the Geometry – Finding
and Fitting group) under the CogCaliper Tool.
9. Make the proper connections between the CogFixture Tool and the newly inserted CogFindLine
Tool. Remember to run the Job once after you make the connection.
10. Configure the CogFindLine Tool to find the rear edge in the camera’s side view.
11. Repeat steps 8-10 to find the edge of the connector angle.
CogFindLine1 CogFindLine2
12. Open the VisionPro Toolbox and add a CogAngleLineLine Tool (available in the Geometry –
Measurement group) under the CogFindLine Tools.
13. Make the proper connections between the CogFixture Tool, both CogFindLine Tools, and the
newly inserted CogAngleLineLine Tool. Remember to run the Job once after you make the
connection.
14. Verify the CogAngleLineLine Tool measures distinct and repeatable values for PASS images and
FAIL images.
PASS FAIL
Objectives
• The student will correctly:
– Create and configure a calibration routine using the CalibNPoint Tool
– Identify applications where Checkerboard Calibration is necessary
– Create and configure a Checkerboard Calibration Tool
– Use the result of a nonlinear calibration in subsequent vision tools
CogCalibNPointToNPoint Tool
• The CogCalibNPointToNPoint Tool calculates a 2-D transform that maps image coordinates to “real-world” coordinates
Calibration
• Calibrating your vision system creates a fixed coordinate system that represents real-world measurement and location
(Figure: an image with its origin (0,0), a 92.7 mm measurement, and the robot home position)
Calibration Image
• Typically, calibration is done on a part other than the part to be inspected
• Some calibration plate criteria:
– Contains features at known locations
• The number of features needed depends on the number of degrees of freedom calculated
– e.g. translation, rotation, scaling, aspect, and skew require three known locations
– Occupies approximately 50-70% of the FOV at the same optical set-up (same plane) as when running on the inspected parts
Acquire Calibration Image
• Acquire an image of the part from which you want to calibrate
Determine Locations
• There are many ways we
could determine the
location of the corners of
the calibration square
Create Calib Tool
• Add a CalibNPointToNPoint tool to the Job
Entering Coordinates
• Connect the X & Y
coordinates of the
corners to the
uncalibrated points of
the Calib tool
Grab Calibration Image
• Open the Calib control and press the Grab Calibration Image button
– This passes the Current.InputImage to the Current.CalibrationImage
Enter Coordinates
• Notice the coordinates of the three corners have been passed to the Calib Tool
• Enter the real-world coordinates of each point
Degrees of Freedom
• Next, choose the Degrees of Freedom to use when computing the best-fit transformation between uncalibrated and calibrated points
Origin
• Optionally, indicate an additional origin translation or rotation, or swap the handedness of the coordinate axes
Graphics
• Also optionally, indicate graphics to show for calibration
Compute Calibration
• Finally, press the Compute Calibration button
– In the Current.CalibrationImage, notice the calibrated image’s coordinate axes graphic
Results
• Check that the
Calibration Results make
sense for the calibration
image you just used
Calibration Errors
• If there is a large RMS error, a message will appear in the control
– Note the possible reasons that this could be large (the RMS quantity itself is sketched below)
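The RMS figure is the root-mean-square distance between where the fitted transform places each point and where you said it should be. A sketch of that quantity (illustrative only; VisionPro computes and reports its own value):

    import numpy as np

    def rms_error(mapped_points, expected_points):
        """Root-mean-square distance between mapped points and their expected positions."""
        residuals = np.linalg.norm(np.asarray(mapped_points) - np.asarray(expected_points), axis=1)
        return float(np.sqrt(np.mean(residuals ** 2)))

    # Three calibrated corners versus the real-world coordinates entered for them:
    print(rms_error([[0.1, 0.0], [30.0, 0.2], [29.9, 22.4]],
                    [[0.0, 0.0], [30.0, 0.0], [30.0, 22.5]]))   # ~0.15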
Disable Corner-finding Tools
• Now that we’ve calculated the calibration transform, the corner-finding tools don’t need to run again until we need to recalibrate our vision system
– e.g. if the distance between the part and camera changes
Analyze Part
• Now add the vision analysis tools to the Tool Group
– In this example we’ll add a Blob Tool
Checkerboard Calibration
• Checkerboard calibration uses a checkerboard plate to calculate the transform between pixels and real-world units
• Can calculate either a linear or non-linear transform
– Non-linear transforms account for optical and/or perspective distortions
Non-linear Distortions
• There are three common types of distortion to account for:
– Aspect
– Perspective
– Radial (see the sketch below)
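For radial distortion, a common textbook model scales each point away from the image center by a polynomial in its squared radius. A minimal first-order sketch (a generic model with a hypothetical coefficient, not necessarily the exact form VisionPro's nonlinear transform uses):

    def undistort_radial(x, y, cx, cy, k1):
        """Correct first-order radial distortion about the image center (cx, cy)."""
        dx, dy = x - cx, y - cy
        r2 = dx * dx + dy * dy          # squared distance from the center
        scale = 1.0 + k1 * r2           # first-order radial term
        return cx + dx * scale, cy + dy * scale

    print(undistort_radial(400.0, 300.0, 320.0, 240.0, 1e-7))   # (400.08, 300.06)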
Calibration Plate Guidelines
• The plate itself:
– Black and white tiles must be arranged in an alternating pattern
– Black and white tiles must be the same size
– Tiles must be rectangular with an aspect ratio within the range 0.90
through 1.10
• The acquired image:
– Acquired image must include at least 9 full tiles
– Tiles in the acquired image must be at least 15x15 pixels
– In general, increasing the number of tiles visible in the calibration
image (by reducing the size of the tiles on the calibration plate),
improves the accuracy of the calibration
• Also see documentation for complete explanation
Plate Origin
• Optionally, your calibration plate
may have an origin point, indicated
by two intersecting rectangles
• If found, this point will become the
origin of the raw calibrated space
• If not found, the origin of the raw
calibrated space is the vertex
closest to the center of the
calibration image
Get an Image
• First, get an image of the calibration plate at the same optical set-up as the production inspection
Calibration Set-up
• Next, specify Linear or
Nonlinear Mode
• Enter the single tile size in
both X and Y
– Units are irrelevant, but must
be the same for X and Y
• Indicate whether a fiducial (origin) mark exists on your plate
• Grab a Calibration Image
Optionally, Change Origin
• You can optionally change the origin point of the calibrated space
– Translation
– Axis Rotation
– Axis Handedness
Compute Calibration
• Press the Compute
Calibration Button
• Look at the
Undistorted Calibration
Image
– This is the “corrected”
image
Calibration Result
• See the results for
each corner of each
tile
– Both uncalibrated
and raw calibrated
coordinates
Calibration Result
• Also see the
coefficients for the
nonlinear equation
calculated
• RMS Error
What the Calibration Tool Did
• Obtains two sets of points:
– Points of vertices in the acquired image
– Points of vertices in raw calibrated space, based on the tile size information you provided
Using Calibration Results
• Once the transform is calculated, simply pass the OutputImage of the Calibration to the InputImage of an inspection tool
• The tool will be passed the “corrected” image
VisionPro
7. Add four CogIntersectLineLine Tools inside the newly created CogToolGroup under the
CogFindLine Tools.
a. Each one will find the points of intersection between the CogFindLine Tools.
8. Make the proper connections between the CogFixture Tool and the newly inserted
CogIntersectLineLine Tools. Remember to run the Job once after you make the connection.
9. Rename the CogIntersectLineLine Tools to reflect their position (Top-Left, Top-Right, Bottom-
Right, and Bottom-Left).
10. Make the proper connections between the CogFindLine Tools and the CogIntersectLineLine
Tools. Remember to run the Job once after you make the connection.
11. Add a CogCalibNPointToNPoint Tool inside the newly created CogToolGroup under the
CogIntersectLineLine Tools. Notice the 3 point pairs Calibration.SetUncalibratedPointX/Y(0-2).
12. Open the CogCalibNPointToNPoint Tool and add another point pair for a total of 4 point pairs (0-
3).
13. Close the tool and right–click on the CogCalibNPointToNPoint Tool and select Add Terminals…
14. Find and add the input terminals for SetUncalibratedPointX(3) and SetUncalibratedPointY(3).
Hint: Choose Expanded from the Browse drop-down list.
18. Configure the CogCalibNPointToNPoint Tool so that the Raw Calibrated X and Y values reflect
the real world dimensions of the points of intersection. This dimension is 30mm.
19. Click the Compute Calibration button, and close the tool.
20. Disable all 4 CogIntersectLineLine Tools. If you do not, the calibration will be reset and you
will have to click on Compute Calibration each time an image is acquired. We want to set the
calibration and then disable the intersection points from getting passed into the calibration each
time.
21. Add a CogCaliper Tool inside the CogToolGroup under the CogCalibNPointToNPoint Tool to
measure the distance across the side of the camera. Make the proper connections between the
CogCalibNPointToNPoint and the newly inserted CogCaliper Tool. Remember to run the Job
once after you make the connection.
22. Set the region to measure across the side of the camera as seen in the image below:
23. Configure the CogCaliper Tool to measure the distance across those edges.
a. Set the proper edge mode to find a pair of edges.
b. Set the proper polarity.
c. Set the proper Expected Edge Width. Hint: Since you are passing a calibrated image
into this CogCaliper Tool, this distance must be communicated in the real-world units to
which the image was calibrated (mm).
24. Check your results to make sure the distance is being measured properly, and that the results are
being reported in real-world units.
PASS
FAIL
Objectives
• The student will correctly:
Identify applications where PatInspect may be part of the vision solution
Create and configure a PatInspect tool to detect defects
under various run-time conditions
PatInspect
• Purpose is to detect defects using the PatMax technology
PatInspect
• Detects differences in pixel grey-scale values between analogous regions in a trained image and a run-time image
• Supports image normalization
– Minimizes effects of lighting variations on results
(Figure: trained image, run-time image, and intensity difference image.)
Using PatInspect
• Basic steps to using PatInspect:
– Train an alignment pattern
– Train inspection pattern(s)
– Set run-time parameters
– Run PatInspect
– Extract results from PatInspect or perform further analysis with other vision tools on the Difference Image
Alignment Image
• Typically, your run-time images and training images will not always be in the exact same location in the image
– Even tiny variations in position will cause problems UNLESS accounted for in an alignment step
Inspection Pattern Training
• One or more images can be used as the trained pattern
• PatInspect will statistically combine these images into a
single pattern
– A pattern model is created
– It provides information on where to expect high variability in a
run-time image
• Currently, you may supply only one inspection pattern when training for Boundary Difference. This limitation will be removed in a future release.
Training the Inspection Pattern
• For the first training image:
Statistical Pattern Training
• For subsequent images:
– Pass in the image to the
InputImage
– Run Statistically Train
Current Pattern
– The training region number
will increase
• No limit to how many
images
– The TrainImage will NOT change
Masking Training Images
• Optionally, you can mask any of the training images to ignore certain pixels in training
Threshold Image
• PatInspect also calculates a threshold image
– The Threshold Image sets a threshold value for each pixel
• PatInspect uses this threshold image to eliminate differences that do not represent defects, by assigning a higher threshold where variability occurs and a lower one where less variability occurs
Threshold Image
• When PatInspect runs, it will subtract the pixels in the run-time image from the template image and compare the result to the threshold image.
• Therefore, the higher the threshold, the more differences there can be in the run-time image without counting as a defect.
Calculating Thresholds
• Computing a pixel’s threshold value T from the Scale and Offset coefficients: T = Scale × StdDev + Offset
Calculating Thresholds
• Example with default values:
– If the standard deviation for a particular pixel across all training images is 2.4, the threshold value for that pixel is also 2.4
– This means that the grey value for that pixel in an affine-transformed run-time region may vary by up to 2.4 grey levels from that pixel’s grey value in the corresponding trained pattern image, and not count as a defect
Calculating Thresholds
• Example with other values:
– The standard deviation for a particular pixel is 2.4
– Scale = 3.0
– Offset = 10.0
– This means that the grey value for that pixel in an affine-transformed run-time region may vary by up to 17.2 grey levels from that pixel’s grey value in the corresponding trained pattern image, and not count as a defect
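Both examples follow the same linear rule, T = Scale × StdDev + Offset. A quick arithmetic check in Python (plain arithmetic, not the VisionPro API; the defaults of Scale = 1.0 and Offset = 0.0 are inferred from the first example):

    def threshold(std_dev, scale=1.0, offset=0.0):
        """Per-pixel threshold from the training-image standard deviation."""
        return scale * std_dev + offset

    print(threshold(2.4))                          # 2.4 with the default coefficients
    print(threshold(2.4, scale=3.0, offset=10.0))  # 17.2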
Calculating Thresholds with One Image
• If you are training with only one image, an artificial standard deviation (StdDev) value is constructed for each pixel
• Edge pixels have higher values
• Edge pixels also tend to be where multiple images show the greatest variation, image to image
• Therefore, the Sobel image can be the basis for a reasonable, if artificial, StdDev value
The Difference Image - Normalization
• Four types of normalization:
– Tails matching
• Appropriate for images with large defects that alter the shape of the histogram, but not its range
– Mean and Standard Deviation
• Appropriate for images with “moderately-sized” defects
– Histogram equalization
• Appropriate where the total defect area is small or when the typical defect amplitude is small
• Well-suited for applications where lighting or optical variations can lead to nonlinear grey-scale variations
– Identity Transformation
• No change in the image
Thresholding the Difference Image
• PatInspect compares the pixel grey-levels in the Raw Difference Image with the values in the Threshold Image
• Any Difference Image pixels whose grey-levels exceed the corresponding Threshold Image pixels will remain untouched in the Difference Image
• Any Difference Image pixels whose grey-levels are less than the corresponding Threshold Image pixels will be assigned grey-level zero
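This per-pixel rule is easy to picture with a toy array. A NumPy sketch of the comparison described above (illustrative only, not how PatInspect is implemented internally):

    import numpy as np

    def threshold_difference(raw_diff, threshold_img):
        """Keep difference pixels that exceed their per-pixel threshold; zero out the rest."""
        raw_diff = np.asarray(raw_diff)
        return np.where(raw_diff > np.asarray(threshold_img), raw_diff, 0)

    print(threshold_difference([[5, 30], [12, 3]],
                               [[10, 10], [10, 10]]))
    # [[ 0 30]
    #  [12  0]]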
Match Image
• PatInspect also produces a Match Image that indicates the matched regions between the run-time image and the trained image
– May be useful in highly confusing images
Match Image
• Also able to show a graphic representing the thresholded difference images displayed over the match images
• Cyan pixels represent a grey scale difference between 1 and 19
• Red pixels represent a grey scale difference greater than or equal to 20
PatInspect Images
• Optionally generate additional images for different kinds of analysis
Now What?
• The usual final result for a PatInspect tool is the Difference Image
• Perform analysis on this image using other vision tools such as Blob or Histogram
VisionPro
Section 9: PatInspect
3. Open the VisionPro Toolbox and add a CogPatInspect Tool under the Calibration Tools.
4. Make the proper connections between the Image Source, the CogPMAlign Tool and the newly
inserted CogPatInspect Tool. Remember to run the Job once after you make the connection.
5. Configure the CogPatInspect Tool to generate an image showing the differences between the
PASS and FAIL side of the demo plate.
a. Grab the train image and origin.
b. Set the proper region of interest to train as the ideal demo plate. Hint: Look at the
Current.TrainImage.
c. Train the new pattern.
d. Check that the new pattern is representative of an ideal demo plate. Hint: Look at the
Current.TrainedImage.
e. Check to see if there are any differences between the model and the current image.
Hint: Look at the LastRun.DifferenceImageAbsolute image.
6. Change over to the FAIL demo plate side. Run the CogJob again and notice the difference in the
LastRun.DifferenceImageAbsolute.
7. Send that image from the CogPatInspect Tool to a new CogBlob Tool and configure the CogBlob
Tool to find all the defects and give you information about them. Configure the tool to report
only those defects which represent real defects and not run-time variations.
8. Save your APPLICATION as MyPatInspect.vpp.
OCVMax
Session 10
Objectives
• The student will correctly:
Understand where OCVMax is used
Setup an OCVMax pattern based on an existing FONT file
Train for OCVMax Pattern layout
Specify a reasonable OCVMax pattern position
Analyze the results returned from the OCVMax tool
What is OCV?
• Optical Character Verification is used to verify that a given character string is present
• Commonly used to verify
– Date Codes
– Lot Codes
– Expiration Date
• Returns TRUE if all characters in the string are correctly
identified; FALSE if not
Example
• Verify the lot number “04149”
What is OCVMax?
• The OCVMax tool uses the Cognex PatMax technology
– Based on the font file, which defines the layout of each character
– Determines the best possible search parameters for reliably locating the string
– Optimizes various search parameters to improve performance
Add OCVMax Tool
• Add tool and pass an image to the tool group
Font Tab
• Used to select the font file to use to verify
• Click on the browse
button to search for
the font file on the
system
Font Tab
• With the font selected that represents the font type being verified
• Select the alphanumeric characters that will be part of the string
• Select the polarity
Text Tab
• Text tab is where the pattern is trained
• Enter the text to be trained
• Click on the “Adjust
Position” button
Text Tab
• The string will appear in the display window in the style
of the font selected under the font tab.
Wildcards Tab
• The OCVMax tool allows you to insert wildcards so that the string can change at runtime, as is the case with serialization
• To set a wildcard
– Choose position
– Select potential
characters that
would be found
– Add Selected Keys
– Retrain the font
Image Params Tab – Search Mode
• Character string can be located differently in each acquired
image
• Search Mode
– Position +/- Shift
• Based on trained
– Region
• Specific region
– Whole Image
Character Params tab
• The degrees of freedom in the search parameters are set on a per-character basis
• After the string has been
trained, a confusion matrix
is populated with the
characters
• The matrix will indicate
where character confusion
may occur
Advanced Params tab
• Timeout sets how long the tool may run
• Early Accept Threshold is the percentage of characters that must have passed before no further searching is carried out
• Early Fail Threshold is the opposite of the Early Accept Threshold
Result tab
• Displays overall results for the string and individual characters
• Overall results
• Cumulative score
• Individual results
OCVMax Results
• Character results return values for each character in the string – in this case, a different serial number is used
Optimizing
• Use the fixture tool and search mode Position +/- Shift
• Use the Character Search parameter on curved surfaces
• Set the noise level
• Use a separate OCVMax tool on paragraphs containing characters of high confusion
OCVMax vs. OCV tool
Both tools can be used to verify one or more characters in an image, but here are three areas where they differ:
• Image based training
• OCVMax is quicker to set up as it is derived from a font file
• OCV is purely image based, so it can handle non-western fonts
• Search Reliability
• OCVMax uses PatMax technology, allowing for position variability
• Distortion
• OCVMax can handle greater image-to-image character distortion
VisionPro
3. Create a CogImageFile Tool (introduced the first day of class). Use this CogImageFile Tool to
create a database of at least one PASS and one FAIL image.
4. Launch the Image Font Extractor found under Start > All Programs > Cognex > VisionPro >
Utilities. Use the Image Font Extractor to load the image database and extract a custom font
from the image of the demo plate.
a. Click Browse and find the recently created image database file.
b. Find an image with a representative set of PASS characters (ABC123).
c. Type ABC123 in the Chars field and click Extract.
d. Select the Character tab and view the models extracted for each character. Ensure they
are representative of what you wish to verify.
e. Click Save, name the file “demo_plate_font”, and save it to a location you will easily
find. Close the Image Font Extractor.
5. Open the VisionPro Toolbox and add a CogOCVMax Tool. Hint: Find it under the ID &
Verification folder.
6. Make the proper connections between the CogFixture Tool and the newly inserted CogOCVMax
Tool. Remember to run the Job once after you make the connection.
PASS FAIL
Color Tools
• Color, Color, Color!
– FireWire white balance
– Color Match
– Composite Color Match
– Color Segmentation
New to VisionPro 5.2
• ColorExtractor!!
Color Match Tool
Color Match & Composite Color Match
• Color Match tool provides a matching score
– Analogous to “Color Distance”
• Simple match compares against the average RGB value
• Composite match compares against the distributions
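The “Color Distance” analogy can be sketched as a Euclidean distance in RGB space (all values here are hypothetical; VisionPro reports a normalized match score rather than a raw distance, so this only illustrates the idea that simple match compares average colors):

    import math

    def color_distance(avg_rgb, reference_rgb):
        """Euclidean distance between a region's average color and a trained color."""
        return math.dist(avg_rgb, reference_rgb)

    # Score a region's average RGB against each trained model; smallest distance wins.
    region_avg = (180, 40, 60)
    models = {"Black Cherry": (190, 35, 55), "Grape": (110, 40, 120)}
    best = min(models, key=lambda name: color_distance(region_avg, models[name]))
    print(best)   # Black Cherry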
Using Simple Match to Select Colors
• Correctly identify all the flavors
– Although Grape and Black Cherry are similar
Examples for Composite Color Match
• All samples have similar average values, but different distributions
and scores
(Figure: original sample and test samples with scores .930, .370, .886, .152, .111, .102.)
How do we do it?
Simple and composite color match are very similar to color segmentation…
1. Learn new color regions
2. Enable all color models that you want to compare against
3. Test results
Color Extractor Tool
Defining Colors
• Colors can be defined as regions
– Select the region of color that you want to find
– Subtract any additional colors that may get added
Adjusting Colors
• Dilation to 1
– On the original trained color (not on subtracted colors)
– Incorporates colors on the edge of the model
How do we do it?
1. Extract the desired color by enclosing that region with an appropriate shape.
2. Acquire a new image to see the extraction results.
3. Adjust Minimum Pixel Value to 1 as well as Dilation to 1.
4. Subtract any items that were erroneously found during the first training step (do not set dilation for any subtracted colors)
VisionPro
Up until now, all the labs have been done with a monochrome camera. For the color
section, we will be using images that are loaded in the Images directory of your
VisionPro installation.
Objectives:
Notes:
Notice the italicized words in the objectives. The color extractor tool searches a defined
inspection region for pixels that match the learned color models. The result is a binary
(or color) image that contains only the pixels that match the learned color models. The
color matching tools on the other hand don’t search, but rather they consider all the
pixels in the defined inspection area to score how well an object matches each learned
color model. The highest score can be considered the best match, but perhaps only if it
is above a user defined threshold.
Instructions:
Color Extractor
Start a new CogJob and reference the color_flowers.bmp file that is installed in
the Images folder of a default VisionPro installation.
Create a CogColorExtractorTool from the Tools>>Color folder.
The window should automatically display the ‘Color From Region’ parameters.
Change the Region Shape to a circle since that fits the shape of the rose well.
Also change the name to ‘Rose’.
On the Current.Input image, move the learning circle over one of the roses and
resize it to fit inside the border of the rose. Click ‘Accept’ when you are done.
Set the Minimum Pixel Count to 1 and look at your results under the
LastRun.OverallColorImage
In order to reduce the amount of black spots in the image, set the dilation to 1.
Note that now some of the orange flowers are being picked up.
To get rid of the other flower, we need to create a new color, but this time it will
be subtracted from the result. Make sure to set the Minimum Pixel Count to 1,
but DO NOT set the dilation.
Note the resulting image. You may need to do this to both of the large orange
flowers. Your resulting image should look like the following:
Objectives:
To identify the color of the smiley face using the Color Match Tool
File in Images directory to use: smiley.bmp
Instructions:
Color Matching
1. Start a new CogJob and reference the smiley.bmp file that is installed in the
Images folder of a default VisionPro installation.
2. Create a CogColorMatchTool from the Tools>>Color folder
3. Connect the Image Source output image to the input image of the
CogColorMatch tool.
4. In the color match window, create a new color region from the ‘Colors’ tab by
clicking on the button. Choose “Region” as opposed to “Point”.
5. The window should automatically display the ‘Color From Region’ parameters.
Change the Region Shape to a circle since that fits the shape of the face well.
Also change the name to ‘Yellow’.
6. Hit ‘Accept’. Repeat steps 4 and 5 for Blue, Pink, and Green. When all four colors are
trained, your tool should look similar to the one below.
7. To set the region to find, go to the Region tab and choose the CogCircle.
8. Now, wherever you place the circle, you can run the tool and it will tell you the
color that is within the circle.
9. Save your application as MyColorMatch.vpp
Thought question: Will the eyes and mouth affect the results?
Why choose region over point when defining a color?
Data Analysis and Results Analysis
Session 12
Objectives
• The student will correctly:
Data Analysis Tool
• Used to set Tolerance Ranges
– Pass / Fail / Warn
• Also collect aggregate stats about tool results
(Screenshots: click to add a new item; a new item is added.)
Data Analysis Tool
• To pass a value from a tool, you need to create an input terminal
• Right click on the Data Analysis tool and select “Add Terminal”
Data Analysis Tool
• Pass the tool’s value to the new input
Data Analysis
• In the Results Tab, set buffering for cumulative statistics
• Determine if the tool should fail if no update occurs on a channel, as the data will not be current
Results Analysis Tool
• Define a set of criteria that will allow the last run of the tool group to give a passing, warn-level, or reject-level result
– Can combine the results from one, several, or all the vision tools in a tool group and generate a Warn or Reject status
– VisionPro ultimately uses this Warn or Reject status to determine the value of the RunStatus property for the tool group
– It is not both Warn and Reject like the Data Analysis tool. The selection is made on the Output parameter in the tool
Results Analysis Tool
Creating an Input Terminal
Creating a Function
When creating a function, Operation is chosen first:
Then arguments are selected. To enter a number, just type it into the field
Results Analysis Tool
• Results Tab
– Shows results from the last run
– The result is the AND of all the functions (default), or a particular parameter is selected
Which to Use?
Both Tools are considered the Decision Making tools of
the VisionPro Product:
VisionPro
3. Be sure the following output terminals are exposed from their respective tools:
a. CogPMAlign
i. Score
b. CogHistogram
i. Standard Deviation
c. CogBlob
i. Count
d. CogCaliper
i. Edge Pair Count
e. CogAngleLineLine
i. Angle
f. CogCaliper from Calibrated Units
i. Measured Width
g. CogBlob from PatInspect
i. Count
h. CogOCVMax
i. Text Score
4. Create a CogDataAnalysis Tool and add a data entry channel for each of the items above.
5. Remove the default RunParams.Item[“Channel 0”].CurrentValue and add the input terminals for
each one of the newly created channels. Also expose the ToleranceStatus of each of these
values. Make the proper connection between the inspection results and the newly created
channels.
6. Open the CogDataAnalysis Tool and set the proper Reject Low, Warn Low, Warn High, and
Reject High for your inspection. Test your inspection to make sure the PASS side always passes
and the FAIL side always FAILS.
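The tolerance test in step 6 amounts to classifying each channel value against four limits. A Python sketch of that decision (the limits are hypothetical, and the handling of values exactly on a limit is an assumption; check the tool's documented behavior for boundary cases):

    def tolerance_status(value, reject_low, warn_low, warn_high, reject_high):
        """Classify a measurement against Data Analysis style tolerance ranges."""
        if value < reject_low or value > reject_high:
            return "REJECT"
        if value < warn_low or value > warn_high:
            return "WARN"
        return "ACCEPT"

    # A blob-count channel with hypothetical limits:
    print(tolerance_status(6, reject_low=4, warn_low=5, warn_high=7, reject_high=8))  # ACCEPT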
Objectives
• The student will correctly:
I/O: Getting Data Out
Adding Posted Items
All items that are posted (along with path) will be listed
Application Properties
Settings:
• Control of some memory resources
Language:
• Language used
Multithreading:
• Enabling more efficient use of cores
Posted Items:
• Results of the job and all posted items
Discrete I/O Through QuickBuild
• In order for QB to generate I/O signals, enable I/O settings
Communications Explorer
• First you need to add the appropriate device
Communications Explorer
• Usage - Then you would set the lines to be input or output
– The module that you use will dictate the polarity available for the lines
Communications Explorer
• Field – Select the appropriate parameter or tool result
Check Configuration
• To make sure the configuration is correct, click on “Configuration Checking”.
Final Notes on Discrete IO
• IO must be enabled to “run”
– When enabled, parameters cannot be changed
• The IO is global
– All jobs associated with the application use the same IO
– Different applications will reset the IO
– * Take care – best to keep IO consistent as it is hardwired
TCP/IP
• TCP/IP packets exchange application data and results with
other Windows applications
Adding TCP/IP IO
• Right Click to Add TCP/IP
– Choose from Client or
Server
Configuring TCP/IP IO
• Click in the field to add data
Configuring TCP/IP IO
• The data string can then be configured with control
character(s)
– Encoder – data format that the remote device expects to receive
– Output Terminator
• Carriage Return
• New Line
– Output Delimiter
• Comma
• Tab
• Space
• Semi Colon
• Underscore
Configuring TCP/IP IO
• The result string that will be transferred to the remote device is shown in the Output String window at the bottom
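In other words, the Output String is just the selected values joined by the chosen delimiter and closed with the terminator. A Python sketch with hypothetical result values:

    results = [0.873, 5, 12.25]         # e.g. a score, a blob count, a measured width

    delimiter = ","                     # the configured Output Delimiter
    terminator = "\r\n"                 # Output Terminator: carriage return + new line

    output_string = delimiter.join(str(v) for v in results) + terminator
    print(repr(output_string))          # '0.873,5,12.25\r\n'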
Configuring TCP/IP IO
• If this is a multi-job application, repeat the last three steps after selecting the job to configure the TCP/IP IO data
Application Wizard
• Creates a full-featured application from a QuickBuild project file (.vpp file) that includes a customized operator interface
– Does not require a Visual Studio or other development environment license
– Output is a .NET application
• May optionally generate the source code in C# or VB.NET for that application
Application Wizard
Application Wizard – Use of Posted Items
Application Wizard – Input Property
Advantages vs. Disadvantages
• Advantages
– Quick creation of a runtime application for the user
– Do not need to know programming
– User sees just the image, results, and parameters brought forward
– Allows users to tweak Vision Tools through QuickBuild (optional)
• Disadvantages
– Layout is confined to a basic model
VisionPro
TCP/IP Communication
1. Load application MyDIO.vpp
2. Open Communication Explorer and go to TCP/IP.
3. Set the Device Type to Server and the Port to 5001. You should notice that this new item is
added under the TCP/IP folder in Communication Explorer.
7. Select Output Terminator to add carriage return and new line as well as your delimiter of choice.
8. Enable the IO.
9. Start a Command screen and type in “ipconfig” to get the IP Address of your PC. Make note of
this.
IP Address: _____________________________
10. Go to Windows Start > All Programs > Accessories > Communications and choose
HyperTerminal.
11. Under Connect using: select “TCP/IP (Winsock)”.
12. Take the IP Address that you wrote down and insert under the Host Address. Set the port to
5001.
13. Select OK and then connect this session by selecting the icon that looks like a phone with the
receiver down.
14. Now run the QB job. You should see the data being sent to the Hyperterminal dialog.
15. Save application as MyTCPIP.vpp and completely exit the QuickBuild environment.
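If HyperTerminal is not available (it was dropped from Windows releases after XP), a small Python socket client can stand in for steps 10-14; the host and port below assume the values configured in this lab:

    import socket

    HOST = "127.0.0.1"   # the IP address you noted in step 9
    PORT = 5001          # the port configured for the QuickBuild TCP/IP server

    with socket.create_connection((HOST, PORT)) as conn:
        while True:
            data = conn.recv(4096)            # each job run sends one result string
            if not data:
                break                         # the server closed the connection
            print(data.decode(errors="replace"), end="")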
Application Wizard
1. Launch the VisionPro Application Wizard.
a. Step through the wizard.
i. Select the recently saved MyResults.vpp APPLICATION file.
ii. Name your application Demo Plate Inspection.
iii. Ensure the Include QuickBuild access checkbox is checked.
iv. Add 8 tabs at the Operator Security Level, one for each of the data items analyzed
above. Also add their respective Posted Items and an input field that affects
that Posted Item for each tab. Try to achieve this structure:
1. Cognex Logo
a. Input Field: Accept Threshold
b. Posted Item: Score
c. Posted Item: Score Tolerance Status
2. Connector Pins
a. Posted Item: Standard Deviation
b. Posted Item: Standard Deviation Tolerance Status
3. LEDs
a. Posted Item: Blob Count
b. Posted Item: Blob Count Tolerance Status
4. Focus Ring
a. Input Field: Maximum Results
b. Posted Item: Edge Count
c. Posted Item: Edge Count Tolerance Status
5. Connector Angle
a. Posted Item: Angle
b. Posted Items: Angle Tolerance Status
6. Camera Width
a. Posted Item: Measured Width
b. Posted Item: Measured Width Tolerance Status
7. Defects
a. Posted Item: Blob Count
b. Posted Item: Blob Count Tolerance Status
8. Text
a. Posted Item: String Result
b. Posted Item: String Result Tolerance Status.
v. Tab one should look like this:
vi. Select where the application files should be created.
vii. Choose your preferred language for code generation.
viii. Optionally, save the configuration so you can import it next time you use the
Application Wizard.
ix. Launch the application.
2. Run the application continuously and change the demo plate image to track the results. Make
sure that PASS images are passing and FAIL images are failing. If this is not the case, click the
configuration button and open up QuickBuild to adjust settings. Be sure to save any changes
you make to the APPLICATION file.
(Calibration plate printout: grid spacing = 10.000 millimeters.)
Course
Did the course fulfill the learning outcomes and objectives listed at the beginning of the course notes?
Completely Partially Not at all
If not, please explain:_______________________________________________________________________
Were the learning outcomes and objectives appropriate?
Completely Partially Not at all
If not, please explain:_______________________________________________________________________
Which topics were most relevant to your job?_________________________________________________
________________________________________________________________________________________
Are there other topics not covered you would like to see included in the course?
Yes No
If yes, please list: _____________________________________________________________________
Speed of presentation
Much too fast Too fast Just right Too slow Much too slow
Knowledge of material
Excellent Very good Good Fair Poor
Cognex Course Evaluation Form
Materials
Slides
Excellent Very good Good Fair Poor N/A
Handouts
Excellent Very good Good Fair Poor N/A
Technical documentation
Excellent Very good Good Fair Poor N/A
Videos
Excellent Very good Good Fair Poor N/A
Lab Exercises
Quality and availability of assistance during lab sessions
Excellent Very good Good Fair Poor
Were the lab exercises helpful in reinforcing your understanding of the course material?
Yes No
If not, please explain:_______________________________________________________________________
________________________________________________________________________________________
Overall Rating
Overall rating of the course
Excellent Very good Good Fair Poor
Thank you very much for your input! We look forward to working with you in the future.
COGNEX
Customer Education Center 2008-09-09