
TRN-VP-210-C-02-AM: VisionPro Standard

Course Description:

VisionPro Standard gives new or potential VisionPro users a 2 day overview of the hardware and
software used to prototype and deploy basic VisionPro applications. The class focuses on grayscale and color
tool usage while building a single application that is then used to simulate deployment via the Application
Wizard. The class also features basic input and output using digital I/O and TCP/IP connections.

Length: 2 days
Locations: Natick, MA; Onsite
Price: $395 (Cognex facility)
Onsite Available – Call 770-814-7920 for Details
Registration: Online at http://www.cognex.com/training

Topic List
1. Hardware Overview
2. Software and Image Acquisition
3. PatMax Basics
4. Search Tool Strategies
5. Histogram, Fixturing & Coordinate Spaces
6. Blob
7. Caliper & Geometry
8. Checkerboard & N-Point Calibration
9. PatInspect
10. OCVMax
11. Color Tools
12. Data Analysis and Results Analysis
13. Input/Output and Application Wizard

Expected Outcomes:
You will benefit from this course by learning:
 About the hardware supported by VisionPro
 How to prototype, develop, and test vision applications in QuickBuild
 How to acquire images
 How and when to use each vision tool
 How and when to use calibration
 How to add digital I/O and TCP/IP communication to an application
 How to create new and modify existing applications built using the Application Wizard

Recommended Reading at http://www.cognex.com/support:


TRN-IS-210-M-02-AM: VisionPro Standard Class Manual
TRN-IS-210-V-02-AM: VisionPro Standard Video Downloads Sections 1-11

Prerequisites:
Familiarity with Windows environment

Cognex Corporation One Vision Drive Natick, MA 01760-2059 (508) 650-3000 fax (508) 650-3333 www.cognex.com
Hardware, Connections, & Security
Session 1

Before We Officially Get Started…


• Introductions
– Name?
– Company?
– Location?
– Position there / responsibilities?
– Previous vision experience?
– Your VisionPro application?
• Orientation
• Course Agenda

1
Course Expected Outcomes
You will benefit from this course by learning
 How to prototype, develop, and test vision applications in
QuickBuild

 How to use the Application Wizard to build a deployable application

 How to acquire images

 How and when to use each vision tool

 How and when to use calibration

Hardware

2
Objectives
• The student will correctly:
 Become acquainted with hardware supported by VisionPro
 Understand the different utilities available for camera
support
 Be aware of the different resources available
 Describe the different types of security options that
VisionPro employs for its software and tools

Acquisition Hardware

3
1394DCAM FireWire
• Each camera acts as a frame grabber

• FireWire has two different bandwidth rates based on the version of
1394 DCAM
– 1394a is rated to 32MB/sec
– 1394b is rated to 64MB/sec

• FireWire-based systems can be configured with as many as 63 cameras,
from multiple vendors and in varying resolutions

• FireWire camera ordering is done by port number, making field changes
simple even if the camera type changes

GigE Vision Acquisition


• “Direct connect” technology
– Similar to FireWire
• Each camera acts as a frame grabber
• 100 MB/s practical bandwidth
• 100 m cable length possible
• Timestamped images to check for lost data
• Supports up to 255 cameras per Ethernet port
– Camera enumeration by IP address
• VisionPro implements the “GigE Vision” standard
– Not all GigE cameras are “GigE Vision”
• QuickBuild Scripts demonstrate advanced features for GigE cameras

4
GigE Vision Camera Support
• GigE Vision falls between FireWire and Camera Link
– Line scan support is possible
– Less bandwidth than Camera Link, more than FireWire
– No frame grabber required, like FireWire
– Can support multi-camera apps, like FireWire

[Chart: practical bandwidth (MB/sec), 0–600, compared across FireWire A,
FireWire B, Gigabit Ethernet, Camera Link Base, and Camera Link Medium]

MVS-8501 PCI Frame Grabber


• Four multiplexed analog cameras; one image acquired at a time

• Allows fast and reliable image transfer through 32-bit/33 MHz bus
architecture and 8MB FIFO

• 16 bi-directional TTL I/O
– Opto-isolation available via a breakout box

10

5
MVS-8504 PCI Frame Grabber
• 4 independent channels provide mixed camera format support for
asynchronous or simultaneous acquisitions

• 32-bit/66 MHz bus architecture and 16MB FIFO provides fast and
reliable image transfer

• 16 bi-directional TTL I/O
– Opto-isolation available via a breakout box

11

8504 PCI Express Frame Grabbers


• 8504e
– When quad-speed camera support is required
– Maximum performance for double-speed cameras
• For example: 4x 1600x1200 double-speed cameras
simultaneously at full speed
– When the PC offers PCI Express slots only
– Better performance when multiple analog grabbers are required
– x1 PCI Express connector

• 8500Le
– For price-sensitive applications
– Single camera applications

12

6
MVS-8600 PCI Camera Link
• 8601
- single channel frame grabber
- supports 1 area scan or 1 line scan camera

• 8602
- dual channel frame grabber
- supports 2 area scan, 2 line scan, or 1 area scan
and 1 line scan camera simultaneously

• Variety of cameras:
– Line Scan: 1/2K, 2K, 4K and 8K cameras
– Area Scan: 640x480 to 2Kx2K or greater cameras

13

MVS-8600e PCI Express Camera Link


• 8602e
– dual channel frame grabber
– supports two “Base” cameras, either area or line scan
• 2 area scan, 2 line scan or 1 of each
– Supports one “Medium” camera, either area or line
scan
• 85MHz pixel clock
• PCI Express (PCIe) x4 configuration
• PoCL (Power over Camera Link)
• Single cable for power and image transfer
• Requires new PoCL cable
• SafePower scheme senses if camera is PoCL or not
• Cameras are specifically marked if PoCL
• Only works on CameraLink Base cameras
• All 8602e shipping are PoCL

14

7
Hardware Overview
Board              | PCI Slot Req. | Analog | Digital | Linescan | Max # Cameras | Acq Channels | I/O
Firewire 1394 DCAM | Maybe         | No     | Yes     | No       | 63            | N/A          | Up to 24 via PCI card
GigE               | N/A           | No     | Yes     | Yes      | 255           | N/A          | Up to 24 via PCI card
8501               | PCI           | Yes    | No      | No       | 4             | 1            | Up to 16
8504               | PCI           | Yes    | No      | No       | 4             | 4            | Up to 16
8601               | PCI           | No     | Yes     | Yes      | 1             | 1            | 8 in, 8 out
8602               | PCI           | No     | Yes     | Yes      | 2             | 2            | 8 in, 8 out
8602e              | PCIe x4       | No     | Yes     | Yes      | 2             | 2            | 8 in, 8 out
8500Le             | PCIe x1       | Yes    | No      | No       | 2             | 2            | 8 in, 8 out
8504e              | PCIe x1       | Yes    | No      | No       | 4             | 4            | Up to 16
15

I/O and Accessories

8
Wiring Options
• PC Wiring Guide installed by
default
• Defines different “kits” available
• States part numbers for cables
used in various kits
• Detailed illustrations on wiring
multiple scenarios depending upon
hardware used

17

I/O Available
FireWire and GigE

• Two flavors
– PCI flavor (PCI-DIO24/S)
– USB flavor (USB-1024LS)
• Provide up to 24 bi-directional
programmable Opto I/O lines
• Drivers are shipped with VisionPro
and are installed as an option
• Can be used directly in QuickBuild
Communication Explorer or
programmatically (as of 5.2)

18

9
MVS 8500 Options
• TTL
– 16 bi-directional
programmable I/O lines
– Each line configured
individually
• Opto- Isolated
– 8 pairs of programmable
Opto input and output lines
• Half Opto / Half TTL (Split)
– 4 pairs of Opto input and
output lines
– 8 bi-directional TTL lines

19

MVS 8600 Options (Trigger/Strobe/Encoder)


                    | LVDS (default)         | TTL           | Dual LVDS
Trigger             | Up to 2 (TTL and/or Opto)
Strobe              | Up to 2 (TTL and/or Opto)
Encoder             | LVDS (one)             | TTL (up to 2) | LVDS (up to 2)
General Purpose I/O | 8 opto inputs and 8 opto outputs (connects to P4 and P6 from the framegrabber)

20

10
Other Accessories
To simplify and speed up the system integration process, Cognex offers a wide
range of optional accessories including cameras, lighting options and lenses
for all types of machine vision applications.

Lighting
In order to achieve the highest quality images
possible, Cognex offers a wide array of light
modules and integrated LED lighting options.

Lenses
Cognex offers a full range of high-quality compact
camera lenses designed specifically for machine
vision applications.

21

Supported Cameras

11
1394DCAM FireWire

For details on Camera Support for current version of VisionPro:


http://www.cognex.com/ProductsServices/VisionSoftware/SupportedCameras.aspx

23

GigE Vision

For details on Camera Support for current version of VisionPro:


http://www.cognex.com/ProductsServices/VisionSoftware/SupportedCameras.aspx

24

12
MVS-8504 & MVS-8501

For details on Camera Support for current version of VisionPro:


http://www.cognex.com/ProductsServices/VisionSoftware/SupportedCameras.aspx

25

MVS-8600 Series

For details on Camera Support for current version of VisionPro:


http://www.cognex.com/ProductsServices/VisionSoftware/SupportedCameras.aspx

26

13
Camera Utilities

FireWire DCAM Doctor Utility


• A troubleshooting utility for driver and acquisition problems
• A verification tool for newly created FireWire CCFs
• DCAM register access
• Get your PC’s FireWire topology
• Find out what video formats a FireWire camera supports
• Report module, driver and bus driver versions with problem reports

[Screenshot callouts: DCAM Module Version; Microsoft IEEE 1394 Bus
Driver Version; DCAM Driver Version]

28

14
FireWire DCAM Doctor Utility
• Topology information includes the FireWire bus speed
• Bus speed should be either S400 or S800
• A bus speed of S100 for an IEEE 1394b device indicates that Windows
XP SP2 is running with the SP2 Windows FireWire bus driver

[Screenshot callouts: FireWire Bus Speed; Key for “board” sort order]

• See also: FireWire Cameras User’s Guide in VisionPro Documentation
29

Cognex GigE Vision Configuration Tool


• Use to configure the Topology of a GigE Vision network
• See also: GigE Vision Cameras User’s Guide in VisionPro
Documentation

30

15
MVS-8600 Camera Initialization
• Uses Serial Communication port built
into the camera link cable
• Protocol commands for each camera
vendor are different!
• Use Cognex Serial Communication
Utility called cogclserial.exe.
• Use CLC file to initialize camera
through the utility.

31

VisionPro Resources

16
VisionPro Resources
• VisionPro provides different levels of resources
– QuickBuild Navigator panel
– On-Line Help
• System level access
– VisionPro Library
– Vision Tool information
• QuickBuild
– Shortcut to tool information
– Samples
• QuickBuild
• .Net & C#
• Scripting

33

Installed Documentation
• Documentation files are installed and accessible
directly through the Windows Start menu
– Hardware Manuals
– FireWire and GigE User’s Guides
– PC Vision Wiring Guide
– PC Configuration Guide

34

17
VisionPro Help Files
• Help files are installed
and accessible directly
through the Windows
Start menu :
– VisionPro Online
Documentation
– through Help Selection
of QuickBuild

35

VisionPro Help Files (cont.)


• Broken into 5 areas:
– Application Development Guide
• Overall suggestions on Good Programming Practices
– User’s Guide
• Explanation of each tool (How To and Theory)
– Control Reference
• Controls of each tool (as used in QuickBuild)
– Programming Reference
• Methods, classes, properties
– Release Information
• New features / Release Histories
• Supported Platforms, Cameras, and Video for release
• Fixed and known bugs

36

18
The Browsigator
Integrated Help topics on all Tool Edit controls

37

VisionPro Samples
• Two types of CogJob samples available
1. QuickBuild interface
– Sample programs and scripts
2. Programmatic (VB.NET and C#)
– An HTML link is installed for VisionPro samples included in the installation on
the system
– Using Windows Explorer, the user can drill down to the directories that contain
the sample files

38

19
Application Samples
• QuickBuild samples
– Selecting a
sample will add it
as a job in
QuickBuild
– Navigating
through the
sampled job will
illustrate its use.
• Scripting examples
can also be viewed
through the Navigator

39

VisionPro Licensing and Security

20
VisionPro Tool Suite
• VisionPro currently offers 3 levels of tools to suit
your needs for performance and price

VisionPro Base
-provides fundamental machine vision tools
VisionPro Plus
-adds PatQuick geometric pattern matching, OCV,
and ID tools
VisionPro Max
-completes the suite, including PatMax and all
VisionPro tools for ultimate flexibility

• Software and tools are secured through license bits

41

Security Options
• VisionPro supports various methods for authorized use of the
software
• Framegrabber/Dongle detection
• VisionPro detects installed Cognex hardware and
authorizes use of VisionPro software
• Software licensing
• Used with GigE and FireWire cameras (non-Cognex
hardware)
• Can operate in online mode and offline mode using file
exchange
• Emergency licenses
• Up to 5 emergency licenses can be used per installation
• Useful when license keys are not immediately available
• Authorizes VisionPro software use for 3 days

42

21
Software Security Licensing Center
Currently installed software
licenses and their status

Software license
management
menu including
access to
emergency
licenses

Answers to commonly asked questions about Cognex
software licensing

43

Security Dongles – Development vs. Deployment


• Development
• Allows for all tools to be used
• Does not need to be reprogrammed for new
tool releases
• Has a time limit of up to a year
• Can be extended through Cognex Product
Services provided that the company has an
active Service Update Program (SUP)

• Deployment
• Set with tools at time of purchase
• Does not have a time limit

44

22
VisionPro

Section 1: Hardware & Connections

1. Confirm your camera has the proper connections to:


a. Power
b. I/O (not connected)
c. Network
2. Launch the Cognex GigE Configuration tool.

3. Confirm your PC and camera configuration (we’ll need this info later in class).
a. Camera:
i. Serial Number:________________________
ii. IP Address:___________________________
b. PC:
i. IP Address:___________________________
4. If not already set, be sure to set the performance driver by clicking the “Set Performance Driver”
button in the Cognex GigE Configuration tool.
Software & Acquisition
Session 2

Objectives
• The student will correctly:
 Become acquainted with VisionPro, QuickBuild, and the
development methods available
 Identify how to create a QuickBuild job, save the job, &
deploy an application
 Save and load VisionPro projects into QuickBuild
 Create and configure an Image Source acquisition

1
Software

What is VisionPro?

2
Four Development Models

Path 1 Development Model (featured)

Advantages:
• No Programming Required
• Fast
• Can continue to use QuickBuild to modify vision, jobs, and I/O

Disadvantages:
• Operator interface limited by Application Wizard

3
Path 2 Development Model

Advantages:
 Easy customization of generated application
• Can still use QuickBuild to modify the underlying vision application

Disadvantages:
• Requires some programming
• Must work within framework of Wizard-generated code
• Cannot re-run the Wizard to update modified Wizard-generated code without losing
your modifications

Path 3 Development Model (featured)

Advantages:
• Total control over operator interface appearance and behavior.
• Can still use QuickBuild to modify the underlying vision application

Disadvantages:

• Requires programming

4
Path 4 Development Model

Advantages:
• Total application flexibility

Disadvantages:

• Requires the most programming

QuickBuild

5
What is QuickBuild?
• QuickBuild is the
interactive window into
VisionPro

• Many components
created can be re-
used in applications
– Written using VPro
API

• Almost all users of VisionPro will start building their
application with QuickBuild
11

QuickBuild Manages Jobs


• Each Job has:
– One Image Source
that provides images
– Some combination of
Vision Tools that run
on these images
– Multiple Jobs execute
in parallel

12

6
QuickBuild Object Structure

• Application – stores one or more jobs as well as all system
settings; can be saved as a .vpp file

• Job – found inside an application; contains tools as well as
camera settings like gain and exposure; can be saved as a .vpp file

• Tool – found inside a job; an individual application task
performed on an image; can be saved as a .vpp file

• Image – acquired by the camera or loaded from a file to emulate
a live image capture

13
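Because every level of this hierarchy serializes to a .vpp file, a
deployed application can load and run the saved objects in code. A
minimal C# sketch, assuming a QuickBuild application saved as
MyQuickBuild.vpp (the file name comes from this course's labs; verify
the exact API against the Programming Reference for your release):

    using Cognex.VisionPro;
    using Cognex.VisionPro.QuickBuild;

    class RunVpp
    {
        static void Main()
        {
            // Load the saved QuickBuild application (.vpp) back into code.
            CogJobManager manager = (CogJobManager)
                CogSerializer.LoadObjectFromFile("MyQuickBuild.vpp");

            CogJob job = manager.Job(0);  // access an individual job if needed
            manager.Run();                // run the application's jobs once

            manager.Shutdown();           // release acquisition resources
        }
    }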

The Big Picture: Object Structure


[Diagram: an Application contains Job 1 and Job 2; each Job holds an
image in RAM and its Tools; Tools generate results from their
inspection of the Image]

14

7
Applications
• Applications contain jobs which contain tools
• You can also set system settings at the application level
[Screenshot callouts: Application name; Job name; Quick access to
sample code, jobs, help and tutorials; Job View allows individual job
management as well as communications configuration]
15

Application Shortcuts
• Open, save, or save application as; includes all jobs and tools
within those jobs
• Resets application or runs application continuously
• Application settings
• Create, import, or open a job to be added to the application;
also allows saving of selected jobs
• Run application including all jobs within it
• Online/Offline toggle; communications settings and posted items
• Shows application and job results in a floating display
• Tool tips, sample code, and help
16

8
Jobs

[Screenshot callouts: Individual tool status/feedback; Camera settings;
Image & graphics; Tools; Job feedback/status]

17

Job Shortcuts
• Resets application; runs job continuously
• Shows job and tool results in a floating display
• Runs job; enables image container or separate window
• Image selector
• Job properties
• Tool tips and help
• Image settings and live display
• Opens the VisionPro toolbox

18

9
Tools
[Screenshot callouts: Tool name; Tool settings tabs; Image & graphics;
Tool feedback/status]

19

Tool Shortcuts
• Open, save, or save tool as
• Runs tool; electric mode; enables image container or separate window
• Resets application or runs job continuously
• Image selector
• Tool tips and help
• Shows tool results in a floating display

20

10
Add Tools to a Job
• A Tool is a VisionPro object that performs a specific
analysis on the designated image

21

How To Add a Tool


• Select a tool in the Toolbox and drag it into the Job
– The insertion marker indicates the position of the tool in the
tool group
• When multiple tools are added, execution order is serial from
top to bottom

[Screenshot callouts: Tools available for use; Click here to open the
VisionPro toolbox]

22

11
Job

[Screenshot callouts: Image Source; Vision Tool; Input Terminals;
Output Terminals]

• Tools are listed in execution order
- Drag and drop to change the order

23

Use Terminals to Pass Data


• Terminals are data elements exposed for a tool or tool group
– Drag and drop to pass data between tools or tool groups

24

12
Run/Test and Make Decisions
• Make decisions about the Pass/Fail status of the scene using
either:
– Vision Tool Results
– Data Analysis Tool
– Results Analysis Tool
– Scripting
• Output results to QuickBuild Posted Items or the
Communications Explorer

25

Makes the Pieces into Executable Code


• Import any Saved VisionPro object into code
• Create a customized front end to your app using
VB.NET or C#
• Import application into the Application Wizard

26

13
Multilanguage Support
• QuickBuild and VisionPro use the System locale to set the active
language of the system during installation

• Use QuickBuild Options to change language after installation:


English, Japanese, German, Korean, and Simplified Chinese

27

Image Acquisition

14
Acquisition Basics (Frame Grabber)
[Diagram: scene → camera → frame grabber → PC; the digital signal
travels from the frame grabber to the PC]

• The field of view (FOV), also called the scene, is the
physical area seen by the camera and the lens
• The camera converts light energy into a signal (analog
or digital)
• The signal is passed through the frame grabber to the
PC for analysis
• The grey values are reassembled into rows and
columns of picture elements or pixels
29
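Outside QuickBuild, the same pipeline is reachable through the
acquisition FIFO classes. A hedged C# sketch; the video format string
is device-specific and a placeholder here, so check the class members
against the Programming Reference for your release:

    using Cognex.VisionPro;

    class AcquireOnce
    {
        static void Main()
        {
            // Enumerate frame grabbers; GigE and FireWire cameras show up
            // here too, since each such camera acts as its own grabber.
            CogFrameGrabbers grabbers = new CogFrameGrabbers();
            ICogFrameGrabber fg = grabbers[0];

            // Create an acquisition FIFO (format name is a placeholder).
            ICogAcqFifo fifo = fg.CreateAcqFifo(
                "Generic GigEVision (Mono)",
                CogAcqFifoPixelFormatConstants.Format8Grey,
                0,      // port number
                true);  // start acquiring immediately

            int numPending;
            ICogImage image = fifo.Acquire(out numPending); // one frame

            System.Console.WriteLine("Acquired {0}x{1} image",
                image.Width, image.Height);
        }
    }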

Image Representation
• Images are stored as 2-D arrays (tables) of points of light
intensity called pixels or pels
• The light intensity value, or grey value, of each pixel is
mapped to an integer between 0 and 255 (for 8 bit images)
• 0 = black
• 255 = white
• Left-handed image coordinate system: origin (0,0) at the upper
left, x increasing to the right, y increasing downward
30
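The mapping is easy to picture in code. An illustrative C# sketch,
using a plain 2-D array to stand in for an 8-bit image (not the
VisionPro API):

    // An 8-bit greyscale image as a 2-D array: image[y, x] is the grey
    // value (0 = black, 255 = white). Origin at the upper left, matching
    // the left-handed coordinate system described above.
    byte[,] image = new byte[480, 640];

    byte v = image[245, 351];   // read the pixel at x = 351, y = 245
    image[0, 0] = 255;          // set the upper-left pixel to white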

15
Example: 8 bit vs. 12 bit

[Figure: an original image and its histogram, next to the same image
mapped to 8 bits (range 19..34) and to 12 bits (range 342..536); the
12 bit mapping preserves far more grey-level resolution]

31

Image Source
• The tool used to
acquire images from
a camera in
VisionPro is the
Image Source
• Initialize Acquisition
using the Initialize
button

32

16
Image Source
• First, choose
whether image
comes from an
Image Database or
Camera
• You can also load a
folder of images
and cycle through
them

33

Image Source
• Frame grabber
– The Cognex board from which images will be acquired
• Video Format:
– Choose the camera (and its format) from which you will acquire this
image
• Camera Port
– Which port this camera is connected to

34

17
Run the Job
• When you Run the Job, it
acquires an image from
the camera and puts it into
LastRun.OutputImage

35

Getting a Better Image


• Making changes to the physical set-up for acquisition is
always the first thing you should do to try and improve
your image
– Lighting
– Focus
– Aperture

• There are a few parameters in the Image Source


Configuration that could also help improve the image

36

18
Getting a Better Image
• Exposure
– Exposure duration for electronically shuttered cameras
• Brightness and Contrast
• Contrast determines the “spread” of the grey
values of the image
• Brightness shifts the collective grey values
higher or lower

37

Strobe & Trigger


• There are several settings in
the Image Source related to
triggering acquisition at run-
time

Strobe & Trigger
– Enabling Strobe
– Setting pulse duration
and polarity of strobe
– Selecting a trigger mode
– Selecting whether trigger
will be low to high or
high to low transition

38

19
Trigger Modes
Trigger Type       | Description                      | Use
Manual             | Software triggering              | Press Run in application
Free Run           | Allows for acquisition as fast   | Pseudo live mode – often used with
                   | as possible                      | Linescan to negate image lag
Hardware Auto      | Acquires when it detects a       | Proximity switch detecting part
                   | transition on an external line   |
Hardware Semi-Auto | Acquires when the software Run   | Application is running and waiting for
                   | is enabled AND the external      | external source – more control over
                   | line sees a transition           | system
39

Additional Parameters
• A final set of parameters is for specialized acquisition
settings
– Strobed acquisition
– Using auxiliary lighting modules
– Partial image acquisition with progressive scan cameras
– Using Lookup Tables

40

20
Display Live Video
• Use the Display Live Video button to open a Live Video
Display and show a live image

Live Display

41

Displays
• Optionally use the
Floating Display to
open a separate
window to display
the acquired image
• Notice extra
information at
bottom.

location grey value

42

21
Displays
• On any of the
displays, you can
right click and choose
to zoom in, zoom out,
pan, etc.
• Zoom Wheel allows
for wheel on mouse
to control the zoom of
the display

43

Image File Tool

22
Image File Tool
• Used to save images to file or process images from an existing file
• File types supported:
– Image databases: .idb & .cdb
• Multiple images in one file
• Grayscale only
– Bitmaps: .bmp
• One image per file
• Grayscale and color
– Tagged Image File Format: .tif
• Multiple images in one file
• Grayscale and color
• Examples:
– To save and read back test images for prototyping, development,
and documentation
– To save and read back images from a production run
• i.e. All failed parts

45

ImageFile Tool
Used to save images to file or process images from an existing file

[Screenshot callouts: Resolution; Index of image; Relative paths]

Format support by pixel type (✓ = supported):

Pixel type  | IDB | CDB | BMP | TIFF | PNG | JPEG
8 bit grey  |  ✓  |  ✓  |  ✓  |  ✓   |  ✓  |  ✓
24 bit RGB  |  ✓  |  ✓  |  ✓  |  ✓   |  ✓  |  ✓
16 bit grey |  ✓  |  ✓  |     |  ✓   |  ✓  |
46
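Reading and writing these files in code goes through the same tool. A
hedged C# sketch (the .cdb path is a placeholder; member names follow
the VisionPro API but should be verified against the Programming
Reference for your release):

    using Cognex.VisionPro;
    using Cognex.VisionPro.ImageFile;

    class ReadImageDb
    {
        static void Main()
        {
            CogImageFileTool fileTool = new CogImageFileTool();

            // Open an existing image database in read mode.
            fileTool.Operator.Open("plate_images.cdb",
                                   CogImageFileModeConstants.Read);

            // Each Run() grabs the next image into OutputImage, just
            // like stepping through the file in the QuickBuild control.
            fileTool.Run();
            ICogImage image = fileTool.OutputImage;

            System.Console.WriteLine("First image: {0}x{1}",
                image.Width, image.Height);
        }
    }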

23
Image File Modes
• Toggle between Read and
Write mode using the
Record button
– In Read mode you are
reading images from an
image file
– In Write mode you are
appending images to an
image file
• We’ll first address how to
read images from an
existing image file

47

Reading Images
• Example: You need to
prototype and test
vision tools on a fixed
set of saved images
• Load a file using the
OpenImageFile button
– Browse for the image
file

48

24
Reading Images
[Screenshot callouts: Opened image file; Total image count; Navigation
buttons; Currently selected image]
49

Three Images
• Most tools have
several images to
work with
• The Image File Tool
has three images
• Choose the image to
view from the
Display pull-down

50

25
Selected Image
• When you first open an
image file, the first
image is selected by
default
– The thumbnail of
the selected image
is highlighted in
blue
– The selected image
is shown in the
display

51

LastRun.OutputImage
• When we grab an image from
an image file, it becomes the
LastRun.OutputImage

52

26
Writing Images to File
• Example: Your Quality Assurance
department wants images saved of
all parts that failed in production
• When in Write Mode, running the
Image File
– Appends the Current.InputImage to
the image file
– Takes the Current.InputImage and
puts it in the LastRun.OutputImage

53

Current.InputImage
• Current.InputImage is the
image to be written to the
Image File on the next run
when in record mode only

54

27
Adding Images to an Image File
• Start by creating a new file or opening an existing file to
append to
New Image File

Load Existing
Image File

55

Link Images
• Drag and drop from the
OutputImage of the Image
Source to the InputImage of the
ImageFile tool
– Now every time the Job runs, the
acquired image will be appended
to the image file

56

28
Load / Save
• In the Image File control, there are two Save buttons
– One saves the entire tool and all of its settings to a .vpp file
– The other saves the images in the currently open file to an
image file
• .bmp, .cdb, .idb, or .tif
Saves single
image
Saves complete
image file tool

57

29
VisionPro

Section 2: Software & Acquisition

1. Launch the VisionPro software by launching QuickBuild.

2. Double-click on the Image Source object to open the acquisition settings.


3. Configure the camera settings so that you will acquire from the camera at your station.

4. Click the “Show Live Display” button and verify you can affect the FOV at which your camera is
pointing (move something unique under the camera and see if it moves on the live display).
5. Use the two ring controls on the lens to adjust aperture and focus.
The top ring, Aperture, adjusts the amount of light allowed to pass through the lens.
The bottom ring, Focus, adjusts the sharpness of an image.
Click anywhere in the Image view window to stop the live acquisition.
When to adjust lens:

Too Dark Too Light Too Blurry

What to look for:

 Maximum difference between light and dark areas (good contrast)


 Sharp features (not blurry)
Ideal Image

6. Open the Floating Display and identify where the following information can be found:
a. X & Y position coordinate
b. Zoom level for viewing the image
c. Intensity or grayscale value
7. Open the VisionPro Toolbox and add a CogImageFileTool.
8. Make the proper connections between the Image Source and the newly inserted
CogImageFileTool.
9. Configure the Image File tool to record an image to a CDB file on your hard drive.
10. Use the tool to record 8 PASS images of the demo plate and 8 FAIL ones.
11. Reopen your Image Source and configure it to acquire images from the newly created CDB file
instead of the camera.
Hint: You’ll have trouble using the Image Source on this file and recording at the same time so
only do one at a time.
12. Confirm that you are able to acquire from the file instead of the actual camera.
13. Revert back to camera acquisition instead of file acquisition.
14. Remove the Image File tool from the job (we no longer need it after the CDB file was
generated).
15. Save your APPLICATION as MyQuickBuild.vpp.
PatMax® – Getting Started
Session 3

Objectives
• The student will correctly:
 Identify applications where PMAlign can be used to inspect
 Understand the concepts behind how the tool works
 Create and configure a PMAlign tool to find a pattern under
various run-time conditions
 Train a pattern and determine if the automatically extracted
features are valid for the application
 Evaluate parameter settings to determine which are needed for
basic run-time conditions

1
PMAlign

Introducing PatMax
• PatMax is a pattern-location
search technology
– PatMax patterns are not
dependent on the pixel grid
• A feature is a contour that
represents the boundary
between dissimilar regions in
an image
• Feature-based representation
can be transformed more
quickly and more accurately
than pixel-grid representations

2
PatMax Capabilities
• With one tool measure
– Position of the Pattern
– Size relative to the originally trained pattern
– Angle relative to the originally trained pattern
• Unprecedented accuracy
– Up to 1/40 pixel translation
– Up to 1/50 degree rotation
– Up to 0.05% scale
• Increased speed
– Basic pattern finding is faster
– Angle and size determined quickly

PatMax Capabilities
• Improved alignment yield
– Handles wide range of image contrast
– Defocus, partial occlusion, and unexpected features can be
tolerated
• Easier to use
– Direct measurement of angle and size in one step
– Patterns may be transported between machines without loss
of fidelity
– Single tool functions more accurately and efficiently than
previously needed multiple tool solution

3
PatMax Applications
• Align a printed circuit
board based on
fiducials (alignment)

PatMax Applications
• Locate tabs on peach
cans; variations in
translation, rotation, and
lighting (presence /
absence detection)

[Figure: four found instances; e.g. Result 1: Score 0.97,
Contrast 0.94, Fit Error 0.02, Location x=351.08 y=245.92, Angle 0.09,
X-Scale 1.0, Y-Scale 1.0]

4
PatMax Applications
• Identify engine block by
type despite extreme
similarity between types,
lighting variations, and
part rotation (sorting and
classification)

PatMax Algorithms
PatQuick PatMax PatFlex High Sensitivity

• Best for speed • Best for high • Designed for • For low
• Best for three- accuracy highly flexible constrast/high
dimensional or • Great on two- patterns noise images
poor quality parts dimensional • Great on • Used with very
parts curved and noisy
• Tolerates more uneven backgrounds
image variations • Best for fine surfaces • Good for images
details • Extremely that have
• Example: Pick flexible, but significant video
and Place • Example: less acurate noise or image
Wafer degradation
** PatQuick is the alignment • Example:
cursory part of the Label location • Example:
PatMax algorithm Obscured part
in bag

10

5
The Big Picture
Train a Pattern

Set Run-time
Parameters

Run PatMax
on the Image

Get PatMax Results

11

Pattern Training
Get a Train Image

Set Train Region and Origin

Set Training Parameters

Train the Pattern

Evaluate the Trained Features

12

6
Linking Tools
• You need images for:
– Pattern training
– Run-time inspection

• Link the OutputImage of the Image Source to the InputImage of
PMAlign
– Drag and drop

13

Training a Pattern
• The PMAlign Tool has three images associated with it

• To train our pattern, we need a Current.TrainImage

14

7
Current.InputImage
• PMAlign Tool also has a Current.InputImage that can either be a
run-time image or can be “grabbed” as a training image
(Current.TrainImage)

15

Grab Train Image


• Press the Grab Train Image button in the control

16

8
Pattern Region and Origin
• When using graphics
– Drag and resize the training box around the pattern
– Position the origin at an appropriate location

17

Pattern Region and Origin


• Next, define the region of pixels containing the pattern to be
trained and the pattern origin
– Use graphics or enter values in the Train Region & Origin tab

18

9
Model Origin
• Model origin identifies the point which will be reported
to you when PatMax locates an instance of the model
in the search scene

• To maintain the greatest accuracy, the origin point should be
placed at the center of the pattern region

[Figure: an origin at the center of the pattern region is most
accurate; an origin outside the region is less accurate]

19

Train Pattern
• Press the Train button to train the pattern
– PatMax finds the features in the Region

20
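The same train step can be scripted. A hedged C# sketch, assuming
trainImage is an ICogImage grabbed earlier; the region coordinates are
placeholders, and member names should be verified against the
Programming Reference for your release:

    using Cognex.VisionPro;
    using Cognex.VisionPro.PMAlign;

    static CogPMAlignTool TrainPattern(ICogImage trainImage)
    {
        CogPMAlignTool pmAlign = new CogPMAlignTool();

        pmAlign.Pattern.TrainImage = trainImage;   // grabbed train image

        // Region of pixels containing the pattern (placeholder values).
        CogRectangle region = new CogRectangle();
        region.SetXYWidthHeight(100, 80, 120, 90);
        pmAlign.Pattern.TrainRegion = region;

        pmAlign.Pattern.Train();                   // extract the features
        return pmAlign;
    }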

10
PatMax Patterns
• When you train a pattern,
PatMax determines the features
contained in that pattern
• A feature is a contour that
represents the boundary
between dissimilar regions in
an image
• A feature is described by a list
of boundary points that lie along
the contour
– Boundary points are defined
by position (x, y) in the image
and its direction normal to
the contour

21

Pattern Features
• To see what
PatMax has
detected as
features to look for
with this pattern,
check the Train
Features Graphics

22

11
Pattern Features
• Yellow lines indicate
coarse features
– Those used by PatQuick
• Green lines indicate fine
features
– Those used by PatMax

23

Pattern Features
• Zoom in to get a
closer look at the
detected features

24

12
InfoStrings
• Watch for any InfoStrings
– These will indicate if the pattern training was successful
– They will also warn of potential problems with the trained
pattern

25

Pattern Training
General guidelines for PatMax pattern training:
• Select a representative pattern with consistent features
• Reduce needless features and image noise
• Train only important features
• Consider masking to create a representative pattern
• Larger patterns will provide greater accuracy
• In general, the more boundary points, the greater the accuracy

26

13
“Bad” Patterns
• What happens if you look at the trained pattern and
don’t like it?
– Too much detail
– Not enough detail
– Missed features

27

Granularity
• Granularity indicates which features PatMax detects in an image

• In most cases, the granularity range PatMax selects for you is
best

28

14
Granularity
• Granularity is expressed
as the radius of interest in
pixels within which
features are detected
• Increasing the granularity
decreases the amount of
finer features PatMax will
use

29

Granularity Limits
• PatMax uses a range of granularities between fine
and coarse limits
• Making granularity coarser (higher):
– Increases speed
– Decreases accuracy
– Detects coarse and attenuates fine features (which may be good
or bad)
• Making granularity finer (lower):
– Decreases speed
– Increases accuracy linearly
– Detects fine and attenuates coarse features (which may be good
or bad)

30

15
The Big Picture
Train a Pattern

Set Run-time
Parameters

Run PatMax
on the Image

Get PatMax Results

31

Run-time Parameters
• Choose the run-time algorithm
• Then a Search Mode
– Search Image uses entire image
– Refine Start Pose uses another tool’s results for start
• Then specify the number of instances to find in the run-
time image
• Indicate the Accept threshold

32

16
Accept Threshold
• Accept Threshold is a score (between 0 and 1.0) that PatMax uses
to determine if a match represents a valid instance of the model
within the search image. Increasing the acceptance value reduces
the time required for search.

[Figure: a 0 to 1.0 score scale; matches scoring below the Accept
Threshold are not valid, matches at or above it are valid]

33

Coarse Accept Threshold


• Known PatMax behavior
– Accept Threshold = 0.5 -> no pattern found
– Accept Threshold = 0.49 -> pattern found with Score 0.85!

[Figure: PatQuick model, PatMax model, and runtime model]

• Now exposing the intermediate coarse accept threshold
– You can now modify this if candidates don’t pass

34

17
Coarse Accept Threshold

[Screenshot callouts: Manually set Coarse Threshold; Use Coarse Score
to set Coarse Threshold]
35

Six Degrees of Freedom


X Translation Y Translation

Rotation Uniform Scale

X Scale Y Scale

** If multiple degrees of freedom are used, scale is always applied first **


36

18
Degrees of Freedom
• Set either a nominal value or range of values
– Use the arrows to toggle between which you use
– Also toggle between degrees and radians for angle
– ScaleX and ScaleY are advanced parameters

37

Search Region
• By default, PatMax searches the entire image for potential
matches
• To have PatMax look in only a portion of the image, use a
Region Shape
– Either type in values or use graphics to set size and position

38

19
Graphics
• Last, select the graphics to be shown at run-time
– Remember graphics take time to update

39

Run PatMax
• Press the Run button to
run PatMax on the current
input image

• If an instance is found,
designated graphics will
appear on the last run
input image

40

20
Results
• Results are displayed
under the Results tab

• If multiple instances
are found, they are
returned in descending
order of score

41

Results
• Score
– How well the result features match the trained pattern features

• X, Y
– The location of the found pattern in terms of the specified origin point

• Angle
– The angle of the found pattern relative to the originally trained pattern
– If a nominal angle is used, this always equals the nominal value

42

21
Results
• Fit Error (PatMax algorithm only)
– a measure of the variance between the shape of the trained
pattern and the shape of the pattern instance found in the
search image

• Coverage (PatMax algorithm only)


– a measure of the extent to which all parts of the trained pattern are
also present in the search image

• Clutter (PatMax algorithm only)


– a measure of the extent to which the found object contains features
that are not present in the trained pattern

43

Results
• Scale
– The size of the found pattern compared to the originally
trained pattern
– If nominal scale is used, this always equals the nominal value
– a.k.a. Uniform Scale

• Scale X, Scale Y
– The size of the found pattern compared to the originally
trained pattern in X and Y directions
– If nominal scale is used, this always equals the nominal value

44
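These results are also available in code. A hedged C# sketch, assuming
pmAlign is a trained CogPMAlignTool that has just run (member names
follow the Programming Reference; verify on your release):

    using Cognex.VisionPro;
    using Cognex.VisionPro.PMAlign;

    // Read back score and pose from the best match, if any.
    static void PrintBestResult(CogPMAlignTool pmAlign)
    {
        if (pmAlign.Results == null || pmAlign.Results.Count == 0)
            return;                                   // nothing found

        CogPMAlignResult result = pmAlign.Results[0]; // best score first

        double score = result.Score;                  // 0 (no match) .. 1.0
        CogTransform2DLinear pose =
            (CogTransform2DLinear)result.GetPose();

        System.Console.WriteLine(
            "Score {0:F2} at ({1:F2}, {2:F2}), angle {3:F3} rad",
            score, pose.TranslationX, pose.TranslationY, pose.Rotation);
    }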

22
VisionPro

Section 3: PMAlign Tool

1. Load your MyQuickBuild.vpp APPLICATION file.

2. Open up CogJob 1.
3. Open the VisionPro Toolbox and add a CogPMAlign Tool.
4. Make the proper connections between the Image Source and the newly inserted CogPMAlign
Tool. Remember to run the Job once after you make the Image Source connection.

5. Configure the CogPMAlign Tool to find the COGNEX logo printed on the demo plate. The
tool should account for at least 45 degrees of rotation in either direction.

6. Save your APPLICATION as MyPMAlign.vpp.


Search Tools and Strategies
Session 4

Objectives
• The student will correctly:
 Identify applications where PMAlign or SearchMax may be
part of the vision solution
 Create and configure a PMAlign tool to find a pattern under
various run-time conditions
 Evaluate parameter settings to determine which are needed for
various run-time conditions
 Optimize execution time and accuracy
 Understand parameters to search more successfully
 Create and configure a SearchMax tool
 Decide when to use PMAlign or SearchMax
 Train and set run-time parameters

1
PMAlign - revisited

Additional Train-time Parameters


• Ignore Polarity
– Allow for a shape to be found regardless of part or background
color

• Repeating Patterns
– Ability to tell PatMax that elements repeat, such as a grid or a
set of bars or a pattern of parallel lines

• Elasticity
– Amount of variance (in pixels) allowed for perimeter

• More on Granularity

2
Pattern Polarity
Pattern polarity is defined at every point along a boundary
as the direction toward darkness, without regard to magnitude.

By default, PatMax only


finds patterns with the same
polarity as the trained pattern.

You can configure PatMax


to ignore the polarity of the Trained pattern
pattern and use only feature
shapep information

Matching Mismatched
polarity polarity

Polarity
• Check the box to
ignore
g p
polarity
y
(allow for polarity
changes)

3
Ignoring Pattern Polarity
Polarity is a hint to PatMax which can make a pattern less
ambiguous. You should use polarity unless the object is subject
to polarity changes
changes. Notice the potentially ambiguous object
illustrated below.

Object PatMax using Polarity PatMax ignoring Polarity

EXPECTED
MATCH

PatMax Pattern EXPECTED MATCH

INADVERTENT
MATCH

Repeating Patterns
• These can create a special challenge to PatMax
- Human eye has problems with repeating AND alignment
- PatMax MUST be selected as Algorithm
Train Runtime Pattern
Pattern found

4
Elasticity

• Elasticity is an Advanced Parameter (revealed by the Shows
Advanced Parameters control) that can be valuable in finding
parts with some geometric change from the originally trained
pattern

Elasticity
Elasticity, a train-time parameter, is used to specify the degree
to which you will allow PatMax to tolerate nonlinear geometric
changes between the pattern and the image.

Elasticity is measured in pixels, typically 0 to 8.

As you increase elasticity, PatMax may find unintended matches,
and accuracy decreases.

10

5
Granularity
• Coarse granularity controls the level of detail used by the
PatQuick algorithm.
• Fine granularity controls the level of detail used by the
PatMax algorithm.
• By default, both are set to 0, allowing PatMax to automatically
determine good values.

[Figure: the same pattern's features extracted at granularity = 6
and at granularity = 1]
11

How Granularity Works


• Granularity works to limit the number of boundary points
extracted from features in an image.
• A granularity value of 6 means that boundary points will have a
radius of 6 pixels where no other boundary points can exist.
• A granularity value of 1 means that boundary points will have a
radius of 1 pixel where no other boundary points can exist.

[Figure: the same contour yielding 32 boundary points at
granularity 6 and 104 boundary points at granularity 1]


12

6
Relationship Between Boundary Points
• In the end PatMax creates a compilation of vectors which include
boundary point information, direction (polarity), and a
relationship to one another.

13

Run-time Parameters Revisited


• Clutter
- Extraneous features at
runtime

• Contrast
- Greyscale difference
between edge and
background

• Overlap
- Percentage of one part
covering another

14

7
Clutter
• The model consists of inter-related boundary points.
• Clutter is a term used to describe extra features
present and adjacent to the original boundary
features of the image. They were not trained as
part of the original model.

15

Clutter in Score
• The Score Using Clutter parameter allows you to factor or
ignore clutter when the score is calculated.
• If checked, score is lowered based on the amount of clutter.
• If unchecked, score is not affected by the presence of clutter.

[Figure: the same cluttered match scores 68 with clutter factored
in and 94 with clutter ignored]

16

8
Contrast
• Contrast sets the minimum contrast required in order to consider
a change in grayscale a potential boundary point.

• For a boundary point to be detected its feature contrast must
exceed this value.

[Figure: example edges with feature contrasts of 92, 62, 31, 16,
and 5]
17

Discarding Overlapped Results

18

9
Outside Region
• Outside Region allows a percentage of the model to be outside
the search region and still be found.
• Those missing boundary points outside the field of view are not
counted against the score.

19

Degrees of Freedom
Remember: Tell PatMax what you know about the part -- do not
enable freedoms your application does not demand

• Nominal values should be set to the value a part is known to
have
• If a pattern is trained at a different scale than the image, set
the nominal value of Uniform Scale to represent the image scale

Example: Training a pattern such as "ë" at a small size may make
it difficult for PatMax to know whether the umlaut is a feature or
image noise. Training it at a larger size and setting the nominal
scale value to 50% ensures the full "ë" character is trained as
features.
20

10
Degrees of Freedom
Each degree of freedom may have a low to high zone of values

• Multiple degrees of freedom can be enabled
• Multiple degrees of freedom can cause unintended matches
• Of the three scale degrees of freedom, only have at most two
enabled - the third would be redundant

Training the "ë" character at one size and setting a range of
50 - 200% scale would allow matches at 0.50, 0.67, 1.00
(original), 1.17, 1.33, 1.67, and 2.00 scale.

21

PatMax Score
• Score ranges from 0 (no match) to 1.0 (perfect match)
• Brightness, Contrast, and Polarity do NOT affect scores. They
may only affect if a pattern is detected or not.
• Factors considered in scoring include:
• Degree of Pattern Shape Fit
• Fit within Degree of Freedom range
• Missing Features
• Extraneous Features (PatMax algorithm only)

22

11
How To Make PatMax Fast
• Control what you can & Tell PatMax what you know
about the part
p
• Understand what parameters affect execution time

23

Parameters & Execution Time


• The larger the Search “Volume”, the longer the execution time will
be
– (width) x (height) x (angle zone) x (scale zone)
• Lowering accept threshold forces more exploration
• Larger number of results asked for makes execution time slightly
longer
• Lower fine granularity limit increases time (more detail to resolve)
• Increasing the coarse granularity limit decreases time (but be sure
necessary features are being detected)
• Consider polarity to slightly increase speed
• Set a Contrast Threshold > 0.0 for faster execution

24

12
Guidelines for Run-time Accuracy
• Never ask PatMax to figure out what you already know
or should know
– Prefer “consider polarity”
– Prefer elasticity very close to 0.0
– Prefer nominal DOF settings
– If you need to use DOF zones, set them based on realistic
expectations of object variation

25

Guidelines for High Accuracy


• Object Appearance
– Objects must be consistent in relative geometry
– Objects must be consistent in appearance
– Object features should be sharply defined

• Presentation and illumination
– Minimize specular reflections, shadows, non-linear changes,
occlusion, non-uniform contrast variations

26

13
Guidelines for High Accuracy
• Camera
– Use a quality lens to minimize distortion
– Stick to the middle of the field of view
– Focus carefully
– Adjust aperture to avoid saturation
– Calibrate the camera to the system
• Larger patterns are more accurate
• Make sure fine granularity is 1.0
– If the automatic selection picks a larger value, you will get a
warning

27

SearchMax

14
SearchMax
• Specialized search tool that combines features from both
PMAlign and CNLSearch
– CNLSearch – normalized correlation to match features at
runtime
– PMAlign – find instances at different rotations and scale

• In most situations, PatMax will be faster and more accurate –
but…

29

Differences
Aspect           | PatMax                     | SearchMax
Color Image      | Must be transformed to     | Handles color images
                 | grayscale                  |
Outside Region   | Model can be outside ROI   | Part MUST be in ROI
Skewing          | Cannot handle skew         | Can find in skew range
Small Model      | The bigger the model, the  | Can handle small models
                 | better (more info)         |
Noisy Background | Very good at finding model | Cannot handle background
                 |                            | noise very well
Open Shapes      | Not as reliable on open    | Can give more reliable
                 | shapes (like a corner)     | results
Many DOFs        | Increase in tool time,     | Tool time becomes
                 | but good                   | extremely high

30

15
Where to use SearchMax
• Grey level images with small models
– e.g. 15x15 pixels
• Images that would create too many features for PatMax
– Page of written text
– Textured objects
• Object doesn’t segment well due to color variations
• Skewed objects

• Where not to use SearchMax


– If many degrees of freedom are required at the same time
– Noisy background
– Tool may end up being too slow
31

SearchMax Capabilities
• Intensity based alignment (intensity correlation)
– Grey Scale, RGB
• DOF
– Rotation 0-360 degrees, Scale 50-200%, Skew 0-30 degrees
• Accuracy
– Depends on image size but varies between a ¼ and a 1/10 of
a pixel
• Benefits
– Can handle very small patterns (15x15 or smaller)
– Works on many images where PatMax has a hard time
• Blurry images
• Confusing or too many geometries created by noise
• Skewed images
32

16
SearchMax
Training is similar to PatMax except for its modes.

You can either:


• Set your degrees of
freedom at runtime
Or
• Train on the degrees
of freedom required

33

Results

• SearchMax is able to find all four results – even the skewed one

• PatMax is not able to find the skewed item

34

17
VisionPro

Section 4: SearchMax Tool

1. Load your MyPMAlign.vpp APPLICATION file.

2. Open up CogJob 1.
3. Open the VisionPro Toolbox and add a CogSearchMax Tool.
4. Make the proper connections between the Image Source and the newly inserted CogSearchMax
Tool. Remember to run the Job once after you make the Image Source connection.

5. Configure the CogSearchMax Tool to find the lower right fastener on the back panel view.
a. Set the Train Mode to be “Evaluate DOFs at Runtime”
b. Set the search region so that it is only picking up that fastener and not the one on the
other side.
6. Run the CogJob and verify that the fastener is found.

7. Save your APPLICATION as MySearchMax.vpp.


Histogram, Fixtures & Coordinate Spaces
Session 5

Objectives:
• The student will correctly:
 Analyze an image for the presence/absence of a part using
a Histogram tool
 Choose the appropriate fixture tool as needed in a vision
application
 Create and configure a Fixture Tool
 Use terminals to pass data between tools
 Identify the use of Coordinate Spaces in vision applications

1
Histogram Tool

Histogram
Histogram creates statistics and a plot of the grey values found
within a specified area of the image.

A Histogram is a plot of the count of image pixels (y axis) at
each possible pixel intensity (x axis) throughout the image.

The height of the graph at each pixel intensity position along the
x axis indicates the number of pixels in the tool’s region that
have that intensity.

X axis positions can represent grey value groups instead of
individual intensities.
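The computation itself is simple. An illustrative C# sketch of
building a 256-bin histogram from an 8-bit image stored as a 2-D
array (plain arrays, not the VisionPro API):

    // Build a 256-bin histogram: counts[v] is the number of pixels
    // whose grey value equals v.
    static int[] ComputeHistogram(byte[,] image)
    {
        int[] counts = new int[256];
        foreach (byte pixel in image)   // visit every pixel in the region
            counts[pixel]++;
        return counts;
    }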

2
Histogram
• A histogram may be used to:
– Detect the presence or absence of something in the image
– Monitor the output from a light source
• A software light meter
– Measure the uniformity of the grey values within an image
– Determine the grey-value distribution in an image to set-up
other vision objects

Add Histogram & Link Images


• Drag and drop the Image Source OutputImage to the Histogram
InputImage
– Now, any time you run the tool group, the output image becomes
the image on which Histogram will run

3
Histogram Images
• Histogram has three images associated with it (tool dialog)
• Current.InputImage is the image Histogram will analyze on the
next run
– In this case, the image comes from the Output image of the
Image Source

Histogram Images
• LastRun.InputImage is the image on which the last execution of
Histogram took place

4
Histogram Images
• LastRun.Histogram is a plot of the grey-level distribution

Region of Interest
• By default, Histogram runs on
the entire image
• To analyze a single area of the
image, choose a region shape
and manipulate on the
Current.InputImage

10

5
Graphics
• Optionally, change which graphics appear at run-time

11

Results
• Results appear in the control and the floating results grid
• May also be accessed in VB or C# code

12

6
Coordinate Spaces

What Are Coordinate Spaces?


• Coordinate spaces provide a numerical framework for expressing
the locations of points

14

7
Calibration and Fixturing
• Coordinate Spaces can be achieved through:
– Fixture Tool (this section)
– FixtureNPointToNPoint Tool (this section)
– CalibNPointToNPoint Tool (later section)
– Checkerboard Calibration Tool (later section)
– Manually configuring and passing a 2D Transform (later
section)

15

Root Space
• The Root Space is a left-handed coordinate system perfectly
aligned with the pixels of an acquired image prior to any image
processing
– May be different for synthetic or linescan images

16

8
Root Space
• VisionPro automatically re-adjusts the root space as an image
undergoes image processing or sub-sampling

[Figure: the sub-sampled image has fewer pixels; note that the
root grid lines no longer correspond to the pixel boundaries.
VisionPro automatically adjusted the root so that image features
(such as the "C" in "COGNEX") retained the same locations]

17

User Space
• VisionPro lets you define any number of additional coordinate
systems

• Typically, user spaces are used to create and manipulate
calibrated spaces and fixtures

18

9
User Space
• You determine:
– Units
– Handedness
– How it relates to the image’s root space


19
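The relation to the root space is just a 2-D transform. An
illustrative C# sketch mapping a user-space point into root space
under an assumed translation, rotation, and scale (plain math, not
the VisionPro API):

    using System;

    // Map a user-space point into root space:
    // root = t + s * R(theta) * user. Values are arbitrary examples.
    static (double X, double Y) UserToRoot(double ux, double uy)
    {
        double theta = Math.PI / 6;   // user space rotated 30 degrees
        double s = 2.0;               // 1 user unit = 2 pixels
        double tx = 100, ty = 50;     // user origin in root coordinates

        double rx = tx + s * (ux * Math.Cos(theta) - uy * Math.Sin(theta));
        double ry = ty + s * (ux * Math.Sin(theta) + uy * Math.Cos(theta));
        return (rx, ry);
    }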

Pixel Space
• A pixel space is like the root space in that
– Its origin is always in the upper-left corner
– Its space corresponds to the image pixels

• However, the pixel space does not adjust to reflect the effects
of image processing

• Rarely used in applications

20

10
Coordinate Space Trees
• Coordinate space trees contain
– An image’s root space
– All user spaces you created
– How all the spaces are related to each other
• a.k.a. Transformation

21

Coordinate Space Trees

22

11
Selected Space
• At all times, one space within the tree is the Selected Space
for the image

• The coordinate system in which all VisionPro tools that operate
on an image
– Return results
– Interpret input data
• i.e. regions of interest

23

Selected Space
• Creating a new image through some transformation adds a new
coordinate space to the coordinate space tree
– And automatically selects that space as the new image's
selected space

• Allows you to automatically map coordinates from a processed
image back to the original image or vice-versa

24

12
Fixture Tool

Fixture Tool
• The Fixture Tool is used to create a fixture coordinate system
when you already have a coordinate transform calculated
– In our example, we’ll find our part using PMAlign; it produces
a transform in its results

26

13
Our Problem:
• Then we’ll create a Caliper to
measure the width of the center
“tab”

• The Caliper’s region of interest should move in relation to
where the “ear” is found in the image

27

Getting Started
• Create and configure an Image Source and a PMAlign Tool trained
to find the right “ear” of the bracket

28

14
Add Fixture Tool
• Then add a CogFixtureTool and connect its InputImage to the
Image Source’s OutputImage

29

Connect Transforms
• Take the transform
determined by PMAlign and
use it as our Fixture
Fi t
• Connect the Pose Result of
PMAlign to the Transform of
the Fixture
– If you individually wanted to
supply X, Y, and rotation,
you could connect to those
terminals individuallyy

30
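The same link can be made in code by copying the found pose into
the fixture tool. A hedged C# sketch, assuming pmAlign has run and
found the part (property names follow the Programming Reference;
verify on your release):

    using Cognex.VisionPro;
    using Cognex.VisionPro.CalibFix;
    using Cognex.VisionPro.PMAlign;

    // Use the PMAlign result pose as the fixture transform.
    static ICogImage FixtureFromPMAlign(CogPMAlignTool pmAlign)
    {
        CogFixtureTool fixtureTool = new CogFixtureTool();

        fixtureTool.InputImage = pmAlign.InputImage;   // same image
        fixtureTool.RunParams.UnfixturedFromFixturedTransform =
            pmAlign.Results[0].GetPose();              // found pose
        fixtureTool.Run();

        return fixtureTool.OutputImage;  // feed this to the Caliper
    }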

15
Run the ToolGroup
• Run the ToolGroup to pass the image and transform to
the Fixture Tool

31

Settings
• In most applications,
that’s it
• In some cases, you
may want to
manipulate the
transformation before
running the
subsequent vision
tools

32

16
Add a Caliper
• Now add the Caliper
and connect its
InputImage to the
OutputImage of the
Fixture

• Configure the Caliper

33

Running with a Fixture


• Why is it important to create and configure the Fixture before
creating and configuring the Caliper?

34

17
FixtureNPointToNPoint
Reference Fixturing Method

Reference Fixturing Method


• Use reference fixturing if you do not know the geometric
dimensions of your object or the real-world coordinates of
points to find within the object before performing fixturing
– In this method, you supply a reference image that shows the
physical object to be fixtured
– You specify the desired location and orientation of the fixtured
coordinate space on the reference image and designate the
reference-image coordinates of important object features as
raw fixtured points

36

18
Our Problem
• Measure the width of the tab on the bracket, using the centers
of the holes to indicate where the part is in the FOV

37

Add Tools
• Create and configure an
Image
g Source and a Blob
Tool

38

19
Add FixtureNPointToNPoint Tool
• Now add a
FixtureNPointToNPoint Tool

• We need the X and Y


centers of mass of our
blobs to connect to the
Fixture Points

39

Adding Terminals
• Right Click on Blob
Tool and Add
Terminals

40

20
Link Terminals
• Connect the newly exposed terminals to the Fixture input points

• You may add additional points and expose additional terminals
as needed

41

Degrees of Freedom
• In the FixtureNPoint control, choose the degrees of freedom
used when determining the best-fit transformation between
fixtured and unfixtured points
– In other words, how do you expect your part to change
from image to image?
– Then be sure you have enough points to perform the
appropriate transformation (see the table and sketch below)

Type                                           # of Points
Translation                                    1
Rotation and Translation                       2
Scaling, Aspect, Rotation, and Translation     3
Scaling, Aspect, Rotation, Skew,               4 (or 3 if they are
and Translation                                not collinear)
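For reference, the rotation-and-translation case of this best-fit problem is a classic least-squares (Procrustes) fit. The NumPy sketch below illustrates the underlying math only; it is not the VisionPro implementation, and the point values are hypothetical:

    import numpy as np

    def best_fit_rigid(src, dst):
        # Best-fit rotation + translation mapping src points onto dst
        # (2-D Procrustes/Kabsch). src and dst are (N, 2) arrays, N >= 2.
        src, dst = np.asarray(src, float), np.asarray(dst, float)
        sc, dc = src.mean(axis=0), dst.mean(axis=0)
        H = (src - sc).T @ (dst - dc)          # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # forbid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dc - R @ sc
        return R, t

    # Two hole centers (reference positions vs. where Blob found them):
    R, t = best_fit_rigid([[100, 50], [200, 50]], [[112, 61], [211, 66]])
    print(np.degrees(np.arctan2(R[1, 0], R[0, 0])), t)  # rotation (deg), shift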
Grab Reference Image
• Press the Grab Reference Image and Points button
Fixture Coordinate Axes
• By default, the fixture origin is in the upper-left corner
of the image
• You may choose to move it anywhere by dragging it to the
new location or entering values in the control
• May also rotate axes, change aspect ratio, skew, etc.
Using FixtureNPoint
• Now add a Caliper and connect its InputImage to the
OutputImage of the FixtureNPoint Tool
VisionPro
Section 5: Histogram & Fixturing Tools
1. Load your MySearchMax.vpp APPLICATION file.
2. Open up CogJob 1.
3. Open the VisionPro Toolbox and add a CogFixture Tool under the CogPMAlign Tool. Hint: It’s in
the Calibration & Fixturing folder.
4. Make the proper connections between the Image Source, the PMAlign Tool and the newly
inserted CogFixture Tool. Remember to run the Job once after you make the connections.
5. Save your APPLICATION as MyFixture.vpp.
6. Open the VisionPro Toolbox and add a CogHistogram Tool under the CogFixture Tool. Hint: It’s
in the Image Processing folder.
7. Make the proper connections between the CogFixture Tool and the newly inserted
CogHistogram Tool. Remember to run the Job once after you make the connections.
8. Configure the CogHistogram Tool to check for the presence of pins in the camera’s rear
connector.

PINS PRESENT / PINS MISSING

9. Verify that the CogHistogram Tool region tracks the movement of the demo plate. Hint: You
may need to enable the REGION graphics in the tool to make the region show up.
a. What image must be selected in order to see the CogHistogram Tool region of interest?
b. What image must be selected in order to see the actual graphical histogram of the
region of interest?
c. What happens if the CogHistogram Tool region is forced outside of the field of view by
movement of the demo plate?
10. Save your APPLICATION as MyHistogram.vpp.
Blob
Session 6

Objectives
• The student will correctly:
 Identify applications where a Blob tool may be part of a
vision solution
 Create and configure a blob tool that
 Finds blobs in a designated grey-level range
 Filters blobs based on given criteria
Blob Overview
• Blob analysis is the detection and analysis of two-
dimensional shapes within an image
• Blob finds objects by identifying groups of pixels that fall
into a user-defined grey-scale range
• Blob reports many properties:
– Area
– Center of Mass
– Perimeter
– Principal Axes
When to Use Blob
• Blob analysis is well-suited for applications where:
– Objects vary greatly in size, shape, and/or orientation (Difficult or
impossible to train a model)
– Objects are of a distinct shade of grey not found in the background
– Objects are not overlapping or touching

• Sample applications:
– Inspect for number, size, and shape of dispensed epoxy dots
– Inspect for correct position and size of ink dots indicating bad wafer dies
– Inspect for fragmentation and size of pharmaceutical tablets
– Sort or classify objects according to their size, shape, or position
Segmentation
• The first thing Blob does when it runs is image segmentation,
determining which pixels are blob pixels and which are
background pixels
• There are several modes to specify what separates blob
pixels from background pixels
Segmentation
• Most segmentation modes will require:
– Polarity
• Dark blobs on light
• Light blobs on dark
– Threshold
• The value(s) that separate blob pixels from background pixels
Fixed Thresholding
• In Fixed Thresholding, the division between blob pixels and
background pixels is determined by grey values.
• Set a grey-level threshold:
– Example: with a grey-value threshold of 140 and dark blobs on a
light background, pixels with grey values below 140 are blob
pixels and the rest are background
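A minimal NumPy sketch of fixed thresholding (illustrative only; the threshold and pixel values are hypothetical):

    import numpy as np

    def fixed_threshold(image, threshold=140, dark_blobs=True):
        # Segment a grey image into blob (True) and background (False)
        # pixels using a single fixed grey-level threshold.
        img = np.asarray(image)
        return img < threshold if dark_blobs else img >= threshold

    print(fixed_threshold(np.array([[10, 200], [130, 150]])))
    # -> [[ True False]
    #     [ True False]]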
Relative Thresholding
• Relative thresholds are expressed as percentages of the total pixels
between the left and right tails
• Tails represent noise-level pixels that lie at the extremes of the
histogram
– Example: with 5% tails, the pixels with the 5% lowest and the 5%
highest values are ignored; a 40% relative threshold then falls at
the grey value below which 40% of the remaining pixels lie
Using Relative Thresholds
• Relative thresholds adjust for linear lighting changes
– The same 40% relative threshold yields a grey-level threshold of
30 in a dark image, 100 in an average image, and 140 in a light image
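A sketch of how a relative threshold can be derived from an image's own histogram (illustrative only; VisionPro's exact tail handling may differ):

    import numpy as np

    def relative_threshold(image, fraction=0.40, tail=0.05):
        # Grey level below which `fraction` of the pixels lie, after
        # discarding `tail` noise pixels at each end of the histogram.
        values = np.sort(np.asarray(image).ravel())
        cut = int(len(values) * tail)
        kept = values[cut:len(values) - cut]
        return kept[int(len(kept) * fraction)]

    dark  = np.full((64, 64), 40) + np.arange(64)   # dim image
    light = dark + 100                              # same scene, brighter
    print(relative_threshold(dark), relative_threshold(light))
    # -> 66 166 : the threshold tracks the lighting change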
Fixed vs. Relative Thresholding
• Fixed grey-level thresholds do not accommodate linear lighting changes
– A fixed threshold of 100 segments a dark image (pixel values
10-90), an average image (80-160), and a light image (120-200)
very differently
Fixed vs. Relative Thresholding
• Fixed is faster than relative because the grey levels
corresponding to the percentages do not have to be computed
• Fixed thresholding can test for absence of a feature in a scene,
whereas relative thresholding will always find a blob in the scene
Hard Thresholding
• The examples so far have all used Hard Thresholding
– One value (grey level or percentage) divides blob pixels from
background pixels
– Example: applying a threshold value of 150 to pixel values 80,
100, 120, 200, and 220 puts 200 and 220 on one side and the
rest on the other
– Examine a histogram to determine the threshold grey value
Hard Thresholding
• Fixed: specify a single grey value
• Relative: specify a single percentage & tails
• Dynamic: threshold dynamically chosen; good for images
with a bimodal distribution of grey values
Spatial Quantization Error
• Occurs with hard thresholding when the object falls
differently on the pixel grid from image to image
• May result in erroneous results for blob size, perimeter,
and location
• Error becomes more pronounced as the perimeter of the
object increases
– The same object can report, e.g., 64, 81, 44, or 25 pixels
depending on where it falls on the grid
Pixel Weighting
• Spatial Quantization Error can be eliminated by applying
pixel weighting
• As the blob moves relative to the pixel grid, the total weight
remains the same
– e.g., weights 0 0 1 1 1 0 0, 0 .4 1 1 .6 0 0, and 0 0 .8 1 1 .2 0
all sum to a total area of 3
Soft Thresholding
• Create a pixel weighting scheme by using soft thresholding
• Soft thresholding uses a range of thresholds
– Between the low and high thresholds, the pixel weight ramps
from 0 to 1 (light object on a dark background) or from 1 to 0
(dark object on a light background)
Soft Thresholding
• Soft Thresholding example
– Low Threshold = 50
– High Threshold = 65
– Softness = 3
– The weighting ramps from 0 at grey value 50 up to 1.0 at
grey value 65
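A simplified sketch of soft-threshold weighting as a linear ramp (illustrative only; it ignores the exact shaping that the Softness parameter applies):

    def soft_weight(pixel, low=50, high=65, light_on_dark=True):
        # Pixel weight ramps linearly from 0 to 1 between the low and
        # high thresholds; reversed for a dark object on light background.
        if pixel <= low:
            w = 0.0
        elif pixel >= high:
            w = 1.0
        else:
            w = (pixel - low) / float(high - low)
        return w if light_on_dark else 1.0 - w

    print([round(soft_weight(v), 2) for v in (50, 55, 60, 65)])
    # -> [0.0, 0.33, 0.67, 1.0]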
Soft Thresholding
• One mode uses grey values for the thresholds; the other uses
percentages for the thresholds and tails
Using a Subtraction Image
• Use a Subtraction Image when the image consists of similar
background and blob grey values
• The threshold image contains only background information
• Every pixel in the image that differs from the corresponding
pixel in the threshold image by a specified amount is a blob pixel
Pixel Mapping
• Use a pixel map (lookup table) for images that cannot be
segmented with hard or soft binary thresholds
• Requires a scaling factor which gets applied to the pixel
map values
Pixel Mapping
• Supply an output value for each grey value
Connectivity Analysis
• After segmenting the image, Blob performs Connectivity Analysis
• Whole Image blob analysis returns one result for all blob
pixels in the image
• Grey Scale analysis identifies discrete, connected blobs
Connected-Blob Analysis
• Object pixels must be eight-connected
– Connected vertically, horizontally, or diagonally
• Background pixels are four-connected
– Connected vertically or horizontally only
• How many blobs are in this image?
Applying Morphological Filters
• First choose the filter(s) from the pull-down list
• Order matters!
– To reorder or delete an operation, use the buttons in the dialog
Pruning and Filling
• Pruning ignores, but does not remove, features which are
below a specified size
• Filling fills in pruned features with grey values from
neighboring pixels on the left
• Example: an initial image with 1 blob enclosing 9 holes (blob
area 900) prunes to 1 blob enclosing 1 hole (area 900; the small
holes still exist but are not reported) and fills to 1 blob
enclosing 1 hole (area 980; the pruned holes are filled in)
Region
• By default, the blob analysis is done on the entire image
• To only detect blobs in a portion of the acquired image,
use a Region Shape
– May graphically position and size it on the Input Image
Measurements
• Allows you to specify measurements calculated on each blob
Measurements
• For each selected measurement, choose:
– Grid
– Runtime
– Filter
Measurements
• Use Filter to exclude blobs outside a certain range for
any property
– Or include only blobs in a certain range
Measurements
• Results may be sorted in order for any of the selected
measurements
– Ascending or descending order
Graphics
• Choose to display Result or Diagnostic graphics
– Remember that graphics add time
Results
• N
– Index of the blob
• ID
– A unique blob identification number independent of sorting criteria
• Measurements
– Calculated for those selected measurements
Geometric Properties
• Geometric properties are blob measurements that are
constant regardless of the orientation of the blob
– Area
– Perimeter
– Center of Mass
– Second moments of inertia about the principal axes
– Geometric extents
– Principal bounding box
Non-geometric Properties
• Non-geometric properties are those that change as the
blob rotates or changes position
– Blob median
– Second moment of inertia about the coordinate axes
– Coordinate extents
– Arbitrary bounding box
Topological Properties
• Identifies blobs, holes, and blobs within holes
VisionPro
Section 6: Blob Tool
1. Load your MyHistogram.vpp APPLICATION file.
2. Open up CogJob 1.

3. Open the VisionPro Toolbox and add a CogBlob Tool under the CogHistogram Tool.
4. Make the proper connections between the CogFixture Tool and the newly inserted CogBlob
Tool. Remember to run the Job once after you make the connection.
5. Configure the CogBlob Tool to count the number of LEDs on the camera’s rear plate.
a. Set the proper thresholding and polarity.
b. Use the Results tab to identify the properties of the blobs of interest.
c. Use the Measurement tab to discriminate against unwanted blobs and only count those
blobs that represent LEDs. Hint: You may need to add more criteria to be used in
distinguishing LED blobs from others.

3 LEDs 2 LEDs

6. Verify that the CogBlob Tool region tracks the movement of the demo plate.
d. What image must be selected in order to see the all the blobs found (before they are
eliminated using the Measurements tab)?
e. What image must be selected in order to see only the blobs that match your specific
criteria (after they are eliminated using the Measurements tab)?
f. What image must be selected to view the graphics from all the tools in one image?
7. Save your APPLICATION as MyBlob.vpp.
Caliper & Geometry
Session 7

Objectives
• The student will correctly:
 Identify applications where Caliper and Geometry tools may
be part of the vision solution
 Create and configure a Caliper tool to detect edges under
various run-time conditions
 Choose an appropriate region of interest for finding edges
 Evaluate parameter settings to determine the best values to use
for different edges
 Assess when additional scoring functions are necessary and
implement when needed
 Create and configure geometry tools
Caliper
Introducing Caliper
• Identifies edges and edge pairs in an object
• Reports edge location and distance between edges in an
edge pair
• Example: apply large calipers to find the left and bottom edge
points of an IC, determine the position and angle of the IC,
then apply a smaller caliper to measure lead spacing
Caliper Applications
• Ideal for gauging applications
– Measure the width of a part
– Measure the distance between parts
• Useful for fixturing a part
– When a part has positional uncertainty
The Task:
• Measure the width across this metal bracket
Define a Region of Interest
• The Caliper Region is the area of the image in which
edges will be detected
• Graphically indicated by the blue box in the
Current.InputImage
Define a Region of Interest
[Region graphic showing the resize handles, scan direction,
projection direction, rotation handle, and skew handle]
Define a Region of Interest
• Region criteria:
– Contains the edges of interest
– Edges should be parallel to the projection direction
• May have to rotate the Region
– Exclude features other than the edges of interest when possible
• May have to skew
Caliper Parameters
• Next, set the parameters for Caliper
• Setting some of these parameters requires knowledge of
how the tool executes
– We’ll explain some “under the hood” operations as we go
through the parameters
The Big Picture – Run-time
1. Create the projection image
2. Apply the edge filter
3. Apply contrast & polarity filters
4. Score remaining edge candidates
5. Return the highest scoring edges
Projection
• Projection reduces a 2-D image to a 1-D image
– Reduces processing time and storage
– Maintains, and in some cases, enhances edge information
• Adds pixel grey values along parallel rays that lie in a
specified direction
Edge Filtering
• The purpose of the edge filter is to eliminate noise from the
input image
[Figure: the projection’s pixel values, and the filtered output
showing distinct edge peaks]
Edge Filtering
• Caliper performs filtering by convolving the one-dimensional
projection image with a filter operator
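A rough sketch of both run-time steps so far, projection followed by filtering, in plain NumPy (illustrative only, not the VisionPro implementation):

    import numpy as np

    def caliper_profile(region, filter_half_size=1):
        # Project the 2-D region to 1-D by averaging each column,
        # then convolve with a simple difference-of-boxes edge filter.
        projection = np.asarray(region, float).mean(axis=0)
        k = filter_half_size
        kernel = np.concatenate([np.ones(k), -np.ones(k)]) / k
        return np.convolve(projection, kernel, mode="same")

    region = np.tile([10, 10, 10, 200, 200, 200], (5, 1))  # one rising edge
    print(caliper_profile(region))  # the large peak marks the edge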
Edge Filtering
• A filter size close to the edge size produces stronger edge peaks
• A filter size too large or too small flattens peaks
– The figure compares a sharp edge (1 pixel wide) and a dull edge
(5 pixels wide) filtered at widths 2, 4, and 6
Settings Parameters
• Set Filter Half Size
– We’ll see a graphic on the image that will visually indicate
if we’ve chosen a good number for our image
• Set the Contrast Threshold
– 0 through 255
– Difference in greyscale value from both sides of the edge
Contrast Threshold
• Contrast threshold eliminates edges that do not meet
minimum contrast (peak height or depth)
– Filtered peaks must rise above +min. contrast or fall below
-min. contrast to remain edge candidates
Edge Polarity
• Edge models describe edges or edge pairs as:
– Light to dark
– Dark to light
– Any polarity
Edge Polarity
• Choose Single Edge or Edge Pair
• Then indicate the expected polarity
• For edge pairs, also specify the expected distance
between the edges
Maximum Results
• Specify the
maximum number of
edges or edge pairs
to return in the
results
Run
• Use the Run button to detect edges on the current input image
Graphics
• Use graphics to indicate results of executing the Caliper
• Graphics increase execution time, so use accordingly
Graphics
• Show Edges Found draws green lines in the
LastRun.InputImage at the reported edges
Graphics
• The remaining result graphics appear in the
LastRun.RegionData
• Useful for interpreting what’s happening in your image
Graphics
• Show Affined Transformed Image adds the pixels from
the Region to the RegionData
Results Grid
• Results appear in the Results grid in order from highest
to lowest scores
Results
• Score
– The score received based on the scoring functions you created
• Edge 0 / Edge 1
– Which edge along the Region this is (an index)
• Measured Width
– For edge pairs only, the distance between the two edges
Results
• Position
– A one-dimensional measurement along the search direction
relative to the center of the input region
Results
• X, Y
– The location of the edge in the image
• Function Scores
– The score this edge received for a single scoring function
“Bad” Edges
• What happens when the edges you want to detect are
not being reported?
• Or, edges you don’t want detected are being reported
as results?
Scoring
• By default, single edges are scored only by their contrast
across the edge, and edge pairs are scored by how well the
measured distance between the edges matches the expected
distance.
• Sometimes, you need to modify how the edges are scored to
reliably return the ones you really want to find. This is where
additional scoring functions can be added.
Scoring
• Specify the scoring method(s) to apply to this edge detection
• The goal is to give the highest possible scores to the edge
candidates that best match our expected edges
Scoring
• Raw scores below X1 are mapped to Y1
• Raw scores between X0 and X1 are mapped linearly
between Y1 and Y0
• Raw scores above X0 are mapped to Y0
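A sketch of such a piecewise-linear scoring function (illustrative only; it assumes X1 < X0 and the mapping of low raw values to Y1):

    def score_map(raw, x1, x0, y1=0.0, y0=1.0):
        # Piecewise-linear scoring: raw values at or below x1 map to y1,
        # at or above x0 map to y0, and in between interpolate linearly.
        if raw <= x1:
            return y1
        if raw >= x0:
            return y0
        return y1 + (raw - x1) * (y0 - y1) / (x0 - x1)

    # e.g. contrast of 20 or less scores 0.0, 60 or more scores 1.0:
    print([score_map(c, 20, 60) for c in (10, 40, 80)])  # [0.0, 0.5, 1.0]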
Scoring Method
• Contrast - Expressed in terms of the change in pixel
values
- For edge pairs, the contrast is the average contrast of the
two edges
• Straddle - whether or not the edges straddle the
center of the projection window
– Score = 1 if they do
– Score = 0 if they do not
Scoring Method
• Size – based on how much the width between edges varies
from the edge model
– w = width of the edge model
– d = width of the edge pair candidate
– 0 – Size_Diff_Norm: |w - d| / w
– 1 – Size_Norm: d / w
– 2 – Size_Diff_Norm_Asym: (w - d) / w
Scoring Method
• Position – distance of the edge(s) from the center of the
projection window
– a = distance between the origin of the edge candidate and
the center of the edge window
– 0 – Pos: |a|
– 1 – Pos_Norm: |a| / w
– 2 – Pos_Neg: a
– 3 – Pos_Norm_Neg: a / w
Scoring
• The raw score computed for each constraint is converted to a
final score that ranges from 0.0 to 1.0 via the scoring functions defined
• All scores for each edge or edge pair are geometrically
averaged to obtain a final score
• Only the edges or edge pairs with the highest scores are
reported, up to the number of edges or edge pairs requested
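The geometric averaging step can be illustrated in a few lines (illustrative only):

    import math

    def final_score(function_scores):
        # Geometric mean of the individual scoring-function results;
        # note that a zero in any one function zeroes the final score.
        product = math.prod(function_scores)
        return product ** (1.0 / len(function_scores))

    print(final_score([0.9, 0.8, 1.0]))  # ~0.896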
Geometry Tools
Geometry Tools
• VisionPro contains many tools that will do geometric
calculations for you
– You provide the inputs and it does the appropriate calculations
Creation Tools
• Create the designated shape based on inputs provided
– i.e. the CreateCircle Tool will output a circle, given an X, Y
center point and a radius
Finding & Fitting Tools
• Find Tools create the designated shape using the results
of Calipers included in the tool
• Fit Tools create a Best Fit shape using the inputs from
other tools
Intersection Tools
• Calculate the intersection point(s) from input shapes
Measurement Tools
• Calculate angles and/or distances between input shapes
VisionPro
Section 7: Caliper & Geometry Tools
1. Load your MyBlob.vpp APPLICATION file.
2. Open up CogJob 1.

3. Open the VisionPro Toolbox and add a CogCaliper Tool under the CogBlob Tool.
4. Make the proper connections between the CogFixture Tool and the newly inserted CogCaliper
Tool. Remember to run the Job once after you make the connection.
5. Configure the CogCaliper Tool to count the number of increments on the camera’s lens.
a. Set the proper Edge Mode.
b. Set the proper Edge Polarity settings.
c. Set the proper Edge Pair Width. Hint: You may need to view pixel coordinate locations to
determine the proper Edge Pair Width.
d. Set the proper Maximum Results.

8 Pairs 4 Pairs

6. Verify that the CogCaliper Tool region tracks the movement of the demo plate.
e. What image must be selected in order to see the graphical representation of edge
transitions and strengths?
f. What value would you change in the Scoring tab to make score drop faster when edge
pairs deviate from the Edge Pair Width setting?
7. Save your APPLICATION as MyCaliper.vpp.
8. Open the VisionPro Toolbox and add a CogFindLine Tool (available in the Geometry – Finding
and Fitting group) under the CogCaliper Tool.
9. Make the proper connections between the CogFixture Tool and the newly inserted CogFindLine
Tool. Remember to run the Job once after you make the connection.

10. Configure the CogFindLine Tool to find the rear edge in the camera’s side view.
11. Repeat steps 8-10 to find the edge of the connector angle.

CogFindLine1 CogFindLine2
12. Open the VisionPro Toolbox and add a CogAngleLineLine Tool (available in the Geometry –
Measurement group) under the CogFindLine Tools.
13. Make the proper connections between the CogFixture Tool, both CogFindLine Tools, and the
newly inserted CogAngleLineLine Tool. Remember to run the Job once after you make the
connection.

14. Verify the CogAngleLineLine Tool measures distinct and repeatable values for PASS images and
FAIL images.

PASS FAIL

15. Save your APPLICATION as MyGeometry.vpp.
Checkerboard & N-Point Calibration
Session 8

Objectives
• The student will correctly:
– Create and configure a calibration routine using the
CalibNPoint Tool
– Identify applications where Checkerboard Calibration is necessary
– Create and configure a Checkerboard Calibration Tool
– Use the result of a nonlinear calibration in subsequent vision tools
CogCalibNPointToNPoint Tool
CogCalibNPointToNPoint Tool
• The CogCalibNPointToNPoint Tool calculates a 2-D transform
that maps image coordinates to “real-world” coordinates
• It also attaches the computed coordinate space to the
coordinate space tree
– As discussed in the Coordinate Space section
Calibration
• Calibrating your vision system creates a fixed coordinate
system that represents real-world measurement and location
– e.g., reporting part positions in millimeters relative to a
(0,0) robot home position
Calibration Image
• Typically, calibration is done on a part other than the
part to be inspected
• Some calibration plate criteria:
– Contains features at known locations
• Number of features needed depends on the number of degrees
of freedom calculated
– i.e. translation, rotation, scaling, aspect, and skew requires
three known locations
– Occupies approximately 50-70% of the FOV at the same
optical set-up (same plane) as when running on the inspected parts
Acquire Calibration Image
• Acquire an image of the part from which you want to calibrate
• In our example, we’ll use a 100mm calibration square
– Use its corners as the known locations
Determine Locations
• There are many ways we
could determine the
location of the corners of
the calibration square
Create Calib Tool
• Add a CalibNPointToNPoint tool to the Job
• Connect the OutputImage terminal of the Image Source
to the InputImage of the Calib tool

Entering Coordinates
• Connect the X & Y
coordinates of the
corners to the
uncalibrated points of
the Calib tool
Grab Calibration Image
• Open the Calib control and press the Grab Calibration
Image button
– This passes the Current.InputImage to the
Current.CalibrationImage
Enter Coordinates
• Notice the coordinates of the three corners have been
passed to the Calib Tool
• Enter the real-world coordinates of each point
Degrees of Freedom
• Next, choose the Degrees of Freedom to use when computing
the best-fit transformation between uncalibrated and
calibrated points
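For reference, the full affine case of this best-fit computation (translation, rotation, scaling, aspect, and skew) can be illustrated as a least-squares solve. This NumPy sketch shows the underlying math only, not the VisionPro API; the pixel coordinates are hypothetical:

    import numpy as np

    def fit_affine(uncal, cal):
        # Least-squares 2-D affine transform (translation, rotation,
        # scaling, aspect, skew) mapping uncalibrated image points to
        # calibrated real-world points; needs at least 3 point pairs.
        uncal, cal = np.asarray(uncal, float), np.asarray(cal, float)
        A = np.hstack([uncal, np.ones((len(uncal), 1))])   # rows [x, y, 1]
        M, *_ = np.linalg.lstsq(A, cal, rcond=None)        # 3x2 matrix
        return M

    # Three corners of a 100 mm calibration square as found in the image:
    M = fit_affine([[120, 90], [620, 95], [115, 590]],
                   [[0, 0], [100, 0], [0, 100]])
    print(np.array([300.0, 300.0, 1.0]) @ M)  # image point -> mm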
Origin
• Optionally, indicate additional origin translation, rotation,
or swap the handedness of the coordinate axes
Graphics
• Also optionally, indicate graphics to show for calibration
Compute Calibration
• Finally, press the Compute Calibration button
– In the Current.CalibrationImage, notice the calibrated
image’s coordinate axes graphic
Results
• Check that the
Calibration Results make
sense for the calibration
image you just used
Calibration Errors
• If there is a large RMS error, a message will appear
in the control
– Note the possible reasons that this could be large
Disable Corner-finding Tools
• Now that we’ve calculated the calibration transform, the
corner-finding tools don’t need to run again until we need
to recalibrate our vision system
– e.g., if the distance between the part and camera changes
• Right-click on each tool or tool group and disable it
– When you run the tool group, the tool will not execute
• Leave the Calibration Tool enabled
Analyze Part
• Now add the vision analysis tools to the Tool Group
– In this example we’ll add a Blob Tool
• Connect the InputImage of the Blob Tool to the
OutputImage of the Calibration Tool
• All results will now be in real-world units
Checkerboard Calibration
Checkerboard Calibration
• Checkerboard calibration uses a checkerboard plate to
calculate the transform between pixels and real-world units
• Can calculate either a linear or non-linear transform
– Non-linear transforms account for optical and/or
perspective distortions
Non-linear Distortions
• There are three common types of distortion to account for:
– Aspect
– Perspective
– Radial
The Big Picture
Calibration Plate Guidelines
• The plate itself:
– Black and white tiles must be arranged in an alternating pattern
– Black and white tiles must be the same size
– Tiles must be rectangular with an aspect ratio within the range
0.90 through 1.10
• The acquired image:
– Acquired image must include at least 9 full tiles
– Tiles in the acquired image must be at least 15x15 pixels
– In general, increasing the number of tiles visible in the calibration
image (by reducing the size of the tiles on the calibration plate)
improves the accuracy of the calibration
• Also see documentation for a complete explanation
Plate Origin
• Optionally, your calibration plate may have an origin point,
indicated by two intersecting rectangles
• If found, this point will become the origin of the raw
calibrated space
• If not found, the origin of the raw calibrated space is the
vertex closest to the center of the calibration image
Get an Image
• First, get an image of the calibration plate at the same
optical set-up as the production inspection
Calibration Set-up
• Next, specify Linear or Nonlinear Mode
• Enter the single tile size in both X and Y
– Units are irrelevant, but must be the same for X and Y
• Indicate whether a fiducial (origin) mark exists on your plate
• Grab a Calibration Image
Optionally, Change Origin
• You can optionally change the origin point of the calibrated space
– Translation
– Axis Rotation
– Axis Handedness
Compute Calibration
• Press the Compute
Calibration Button
• Look at the
Undistorted Calibration
Image
– This is the “corrected”
image
Calibration Result
• See the results for
each corner of each
tile
– Both uncalibrated
and raw calibrated
coordinates
Calibration Result
• Also see the
coefficients for the
nonlinear equation
calculated
• RMS Error
What the Calibration Tool Did
• Obtains two sets of points:
– Points of vertices in the acquired image
– Points of vertices in raw calibrated space, based on the tile
size information you provided
What the Calibration Tool Did
• The tool then calculates a nonlinear transform, indicating the
“map” between corresponding points in the two images
• The tool uses this calibration transformation to warp the
run-time image to remove the distortion detected during calibration
Using Calibration Results
• Once the transform is calculated, simply pass the
OutputImage of the Calibration to the InputImage of an
inspection tool
• The tool will be passed the “corrected” image
VisionPro
Section 8: Calibration Tools
1. Load your MyGeometry.vpp APPLICATION file.
2. Open up CogJob 1.
3. Open the VisionPro Toolbox and add a CogToolGroup Tool under the CogAngleLineLine Tool.
a. CogToolGroup Tools create a subgroup within your Job. They are useful for organizing
jobs by grouping related tools.
b. Double-click on the CogToolGroup to open it and populate it with more tools.
4. Open the VisionPro Toolbox and add four CogFindLine Tools inside the newly created
CogToolGroup.
a. Each one should find one of the sides of the rear camera view.
5. Make the proper connections between the CogFixture Tool and the newly inserted CogFindLine
Tools. Remember to run the Job once after you make the connection.
6. Configure the CogFindLine Tools to find the four sides of the rear camera view.
a. Under Caliper Settings:
i. Set the proper location and direction using the interactive graphics.
ii. Set the proper Edge Mode to find only a single edge and not an edge pair.
iii. Set the proper Edge Polarity settings that will find the first edge encountered.
b. Rename the CogFindLine Tools to reflect their position (Top, Right, Bottom, and Left).

7. Add four CogIntersectLineLine Tools inside the newly created CogToolGroup under the
CogFindLine Tools.
a. Each one will find the points of intersection between the CogFindLine Tools.
8. Make the proper connections between the CogFixture Tool and the newly inserted
CogIntersectLineLine Tools. Remember to run the Job once after you make the connection.
9. Rename the CogIntersectLineLine Tools to reflect their position (Top-Left, Top-Right, Bottom-
Right, and Bottom-Left).
10. Make the proper connections between the CogFindLine Tools and the CogIntersectLineLine
Tools. Remember to run the Job once after you make the connection.
11. Add a CogCalibNPointToNPoint Tool inside the newly created CogToolGroup under the
CogIntersectLineLine Tools. Notice the 3 point pairs Calibration.SetUncalibratedPointX/Y(0-2).

12. Open the CogCalibNPointToNPoint Tool and add another point pair for a total of 4 point pairs (0-
3).

13. Close the tool and right–click on the CogCalibNPointToNPoint Tool and select Add Terminals…
14. Find and add the input terminals for SetUncalibratedPointX(3) and SetUncalibratedPointY(3).
Hint: Choose Expanded from the Browse drop-down list.
Now notice the 4 point pairs Calibration.SetUncalibratedPointX/Y(0-3) available in the
CogCalibNPointToNPoint Tool.
15. Make the proper connections between the CogFixture Tool and the newly inserted
CogCalibNpointToNPoint Tool. Remember to run the Job once after you make the connection.
16. Make the proper connections between the CogIntersectLineLine Tools and the newly inserted
CogCalibNpointToNPoint Tool. Remember to run the Job once after you make the connection.
17. Open the CogCalibNPointToNPoint Tool and click the Grab Calibration Image button. Select the
Current.CalibrationImage image from the image drop-down. You should see the four points of
intersection being passed from the geometry tools.

18. Configure the CogCalibNPointToNPoint Tool so that the Raw Calibrated X and Y values reflect
the real world dimensions of the points of intersection. This dimension is 30mm.
19. Click the Compute Calibration button, and close the tool.
20. Disable all 4 CogIntersectLineLine Tools. If you do not, the calibration will be reset and you
will have to click on Compute Calibration each time an image is acquired. We want to set the
calibration and then disable the intersection points from getting passed into the calibration each
time.
21. Add a CogCaliper Tool inside the CogToolGroup under the CogCalibNPointToNPoint Tool to
measure the distance across the side of the camera. Make the proper connections between the
CogCalibNPointToNPoint and the newly inserted CogCaliper Tool. Remember to run the Job
once after you make the connection.
22. Set the region to measure across the side of the camera as seen in the image below:

23. Configure the CogCaliper Tool to measure the distance across those edges.
a. Set the proper edge mode to find a pair of edges.
b. Set the proper polarity.
c. Set the proper Expected Edge Width. Hint: Since you are passing a calibrated image
into this CogCaliper Tool, this distance must be communicated in the real-world units to
which the image was calibrated (mm).
24. Check your results to make sure the distance is being measured properly, and that the results are
being reported in real-world units.

PASS

FAIL

25. Save your APPLICATION as MyNPointCalibration.vpp.


26. Calibrate using a CogCalibCheckerboard Tool instead of the CogCalibNPointToNPoint.
27. Repeat steps 21-24 using the CogCalibCheckerboard Tool instead of the
CogCalibNPointToNPoint tool.
28. Save your APPLICATION as MyCheckerboardCalibration.vpp.
PatInspect
Session 9

Objectives
• The student will correctly:
 Identify applications where PatInspect may be part of the
vision solution
 Create and configure a PatInspect tool to detect defects
under various run-time conditions
PatInspect
• Purpose is to detect defects using the PatMax technology
• Defects are defined as any changes in the run-time image
beyond normal expected image variations
• Defects can be either things missing (occlusion) or extra (clutter)
PatInspect
• Detects differences in pixel grey-scale values between
analogous regions in a trained image and a run-time image
• Supports image normalization
– Minimizes effects of lighting variations on results
[Figure: the trained image and run-time image are combined
into an intensity difference image]
Using PatInspect
• Basic steps to using PatInspect:
– Train an alignment pattern
– Train inspection pattern(s)
– Set run-time parameters
– Run PatInspect
– Extract results from PatInspect or perform further analysis
with other vision tools on the Difference Image

Alignment Image
• Typically, your run-time images and training images will
not always be in the exact same location in the image
– Even tiny variations in position will cause problems UNLESS
accounted for in an alignment step
• Create and configure a PMAlign tool that finds a feature
that can be reliably used for alignment
– This may be:
• The entire part to be inspected
• A portion of the part to be inspected
• Something completely different than what you will be
inspecting, as long as it is at a consistent offset from the
inspection
Inspection Pattern Training
• One or more images can be used as the trained pattern
• PatInspect will statistically combine these images into a
single pattern
– A pattern model is created
– It provides information on where to expect high variability in a
run-time image

• Currently, you may supply only one inspection pattern when training
for Boundary Difference. This limitation will be removed in a future release.

Training the Inspection Pattern
• Pass into PatInspect:
– An InputImage
– A Pose
• Typically directly from a PMAlign result run on the
same image as the InputImage
– You can also optionally pass in a TrainImage & origin directly
Training the Inspection Pattern
• For the first training image:

Training the Inspection Pattern
• Technically, one training pattern is all you need. However,
most “real-life” inspections need to account for natural,
acceptable variability within a part
• Statistical training allows you to supply multiple images
of good, but varying, parts
Statistical Pattern Training
• For subsequent images:
– Pass in the image to the InputImage
– Run Statistically Train Current Pattern
– The training region number will increase
• No limit to how many images
– The TrainImage will NOT change
Trained Pattern Image
• The trained pattern image is the mathematical average of
the aligned images supplied for pattern training
Masking Training Images
• Optionally, you can mask any of the training images to
ignore certain pixels in training
Threshold Image
• PatInspect also calculates a threshold image
– The Threshold Image sets a threshold value for each pixel
• PatInspect uses this threshold image to eliminate differences
that do not represent defects, by assigning a higher value where
more variability occurs and a lower value where less variability occurs
Threshold Image
• When PatInspect runs, it subtracts the pixels in the run-time
image from the template image and compares the result to the
threshold image.
• Therefore, the higher the threshold, the more differences there
can be in the run-time image without being reported as a defect.
Calculating Thresholds
• Computing a pixel’s threshold value T using coeffs:

T = coeffs.x * StdDev + coeffs.y
(coeffs.x is the scale; coeffs.y is the offset)

• Default values for the coefficients are (1.0, 0.0)
• StdDev is the standard deviation of the grey values for a single
pixel across all images supplied for training
• The Scale and Offset values are set in the tool
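The formula is easy to sketch for a whole image at once (illustrative NumPy only; the tiny 1x2 training images are hypothetical):

    import numpy as np

    def threshold_image(training_images, scale=1.0, offset=0.0):
        # Per-pixel threshold T = scale * StdDev + offset, where StdDev is
        # each pixel's standard deviation across the training images.
        stack = np.stack([np.asarray(im, float) for im in training_images])
        return scale * stack.std(axis=0) + offset

    imgs = [np.array([[100, 50]]), np.array([[104, 50]]), np.array([[96, 50]])]
    print(threshold_image(imgs))            # default coeffs (1.0, 0.0)
    print(threshold_image(imgs, 3.0, 10.0)) # scale = 3.0, offset = 10.0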
Calculating Thresholds
• Example with default values:
– If the standard deviation for a particular pixel across all
training images is 2.4, the threshold value for that pixel is
also 2.4:

1.0 x 2.4 + 0.0 = 2.4

– This means that the grey value for that pixel in an affine-
transformed run-time region may vary by up to 2.4 grey
levels from that pixel’s grey value in the corresponding
trained pattern image, and not count as a defect
Calculating Thresholds
• Example with other values:
– The standard deviation for a particular pixel is 2.4
– Scale = 3.0
– Offset = 10.0

then the threshold value for that pixel is 17.2:

3.0 x 2.4 + 10.0 = 17.2

– This means that the grey value for that pixel in an affine-
transformed run-time region may vary by up to 17.2 grey levels
from that pixel’s grey value in the corresponding trained pattern
image, and not count as a defect
Calculating Thresholds with One Image
• If you are training with only one image, an artificial
standard deviation (StdDev) value is constructed for each pixel
• Edge pixels have higher values
• Edge pixels also tend to be where multiple images show
the greatest variation, image to image
• Therefore, the Sobel image can be the basis for a
reasonable, if artificial, StdDev value
The Difference Image - Normalization
• Sometimes, you want to account for changes that may
occur in lighting or environmental conditions
– So that the grey-level variations are not detected as defects
• You may normalize the run-time image before comparing it
to the trained image to eliminate the effects of these
expected variations
The Difference Image - Normalization
• Four types of normalization:
– Tails matching
• Appropriate for images with large defects that alter the shape of
the histogram, but not its range
– Mean and Standard Deviation
• Appropriate for images with “moderately-sized” defects
– Histogram equalization
• Appropriate where the total defect area is small or when the
typical defect amplitude is small
• Well-suited for applications where lighting or optical
variations can lead to nonlinear grey-scale variations
– Identity Transformation
• No change in the image
The Difference Image
• The normalized run-time image is subtracted from the
trained image (the image with the average grey values
from the training images)
• The result of this subtraction is the Raw Difference Image
Thresholding the Difference Image
• PatInspect compares the pixel grey-levels in the Raw
Difference Image with the values in the Threshold Image
• Any Difference Image pixels whose grey-levels exceed the
corresponding Threshold Image pixels remain untouched
in the Difference Image
• Any Difference Image pixels whose grey-levels are less than
the corresponding Threshold Image pixels are assigned
grey-level zero
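A minimal sketch of this thresholding step (illustrative NumPy only, with hypothetical values):

    import numpy as np

    def difference_image(run_time, trained, threshold_img):
        # Absolute run-time vs. trained difference, with every pixel at or
        # below its per-pixel threshold forced to grey level zero.
        diff = np.abs(np.asarray(run_time, float) - np.asarray(trained, float))
        return np.where(diff > threshold_img, diff, 0.0)

    trained = np.array([[100, 100]])
    run     = np.array([[103, 160]])
    thresh  = np.array([[  5,   5]])
    print(difference_image(run, trained, thresh))  # -> [[ 0. 60.]]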
Match Image
• PatInspect also produces a Match Image that indicates
the matched regions between the run-time image and
the trained image
– May be useful in highly confusing images
Match Image
• Also able to show a graphic representing the thresholded
difference images displayed over the match images
• Cyan pixels represent a grey-scale difference between 1 and 19
• Red pixels represent a grey-scale difference greater than
or equal to 20
PatInspect Images
• Optionally generate additional images for different
kinds of analysis
Now What?
• The usual final result for a PatInspect tool is the
Difference Image
• Perform analysis on this image using other vision tools
such as Blob or Histogram
VisionPro
Section 9: PatInspect
1. Load your MyNPointCalibration.vpp APPLICATION file.
2. Open up CogJob 1.

3. Open the VisionPro Toolbox and add a CogPatInspect Tool under the Calibration Tools.
4. Make the proper connections between the Image Source, the CogPMAlign Tool and the newly
inserted CogPatInspect Tool. Remember to run the Job once after you make the connection.
5. Configure the CogPatInspect Tool to generate an image showing the differences between the
PASS and FAIL sides of the demo plate.
a. Grab the train image and origin.
b. Set the proper region of interest to train as the ideal demo plate. Hint: Look at the
Current.TrainImage.
c. Train the new pattern.
d. Check that the new pattern is representative of an ideal demo plate. Hint: Look at the
Current.TrainedImage.
e. Check to see if there are any differences between the model and the current image.
Hint: Look at the LastRun.DifferenceImageAbsolute image.
6. Change over to the FAIL demo plate side. Run the CogJob again and notice the difference in the
LastRun.DifferenceImageAbsolute.

7. Send that image from the CogPatInspect Tool to a new CogBlob Tool and configure the CogBlob
Tool to find all the defects and give you information about them. Configure the tool to report
only those defects which represent real defects and not run-time variations.
8. Save your APPLICATION as MyPatInspect.vpp.
OCVMax
Session 10

Objectives
• The student will correctly:
 Understand where OCVMax is used
 Setup an OCVMax pattern based on an existing FONT file
 Train for OCVMax Pattern layout
 Specify a reasonable OCVMax pattern position
 Analyze the results returned from the OCVMax tool
What is OCV?
• Optical Character Verification is used to verify that a
given character string is present
present
• Commonly used to verify
– Date Codes
– Lot Codes
– Expiration Date
• Returns TRUE if all characters in the string are correctly
identified; FALSE if not

Example
• Verify the lot number “04149”
What is OCVMax?
• The OCVMax tool uses the Cognex PatMax technology
– Based on the font file which defines the layout of each character
– Determines the best possible search parameters for reliably
locating the string
– Optimizes various search parameters to improve performance
• Depending on the environment, the tool can be challenging
to train and run reliably.
• Please reference the OCVMax Application Guide that is
installed with the software and documentation.
Font files that can be used
• The OCVMax tool can use most font files
– Western-language TrueType ASCII fonts
– Unicode character fonts
– Your own font file, created using the Image Font Extractor
– More extensive listing in the VisionPro Help
Add OCVMax Tool
• Add tool and pass an image to the tool group
Font Tab
• Used to select the font file to verify against
• Click on the browse button to search for the font file
on the system
• Remember – you can train your own font with the Font
Extractor Tool found in the VisionPro Utilities folder
Font Tab
• With the font selected that represents the font type
being verified:
• Select the alphanumeric characters that will be part of
the string
• Select the polarity
Text Tab
• Text tab is where the pattern is trained
• Enter the text to be trained
• Click on the “Adjust Position” button
Text Tab
• The string will appear in the display window in the style
of the font selected under the font tab.
• Using the mouse, position and size the text to overlay
the string.
• Click on the “Lock Position” button
• Click on the “Train” button
Text Tab – additional adjustments
• Rendering parameters are used to adjust the position of
the render overlay
• Noise level to consider when the background may cause confusion
• Clean Print is used when text quality and position are
very consistent
Wildcards Tab
• The OCVMax tool allows you to insert wildcards so that the
string can change at runtime, as is the case with serialization
• To set a wildcard
– Choose the position
– Select potential characters that would be found
– Add Selected Keys
– Retrain the font
Image Params Tab
• The OCVMax tool uses a variety of search parameters to
locate the character strings in the images you acquire
• Search Mode
– Position
– Region
– Whole Image
• Search Parameters
– Degrees of Freedom
• Accept Threshold
– For the entire string
Image Params Tab – Search Mode
• The character string can be located differently in each
acquired image
• Search Mode
– Position +/- Shift
• Based on the trained position
– Region
• Specific region
– Whole Image
Image Params Tab – degrees of freedom
Character Params tab
• The degrees of freedom in the search parameters are set
on a per-character basis
• After the string has been trained, a confusion matrix is
populated with the characters
• The matrix will indicate where character confusion may occur
Character Params tab - Confusion
• Scores in red indicate characters potentially confused with
those in the left column
• Confusion Threshold is the score that a character must
exceed to be considered for confusion
• Confidence Threshold is the amount by which the character’s
score must exceed the score of the next “closest” character
Advanced Params tab
• Timeout sets the maximum time for the tool to run
• Early Accept Threshold is the percentage of characters that,
once passed, stops any further searching
• Early Fail Threshold is the opposite of Early Accept
Result tab
• Displays overall results for the string and individual characters
– Overall results
– Cumulative score
– Individual results
OCVMax Results
• Character results return values for each character in the
string – in this case, a different serial number is used
Optimizing
• Use a fixture tool and the Position +/- Shift search mode
• Use the Character Search parameter on curved surfaces
• Set the noise level
• Use a separate OCVMax tool on paragraphs containing
characters of high confusion
OCVMax vs. OCV tool
Both tools can be used to verify one or more characters
in an image, but here are three areas where they differ:
• Image-based training
– OCVMax is quicker to set up as it is derived from a font file
– OCV is purely image based, so it can handle non-Western fonts
• Search Reliability
– OCVMax uses PatMax technology, allowing for position variability
• Distortion
– OCVMax can handle greater image-to-image character distortion
VisionPro
Section 10: OCVMax
1. Load your MyPatInspect.vpp APPLICATION file.
2. Open up CogJob 1.

3. Create a CogImageFile Tool (introduced the first day of class). Use this CogImageFile Tool to
create a database of at least 1 PASS and 1 FAIL image.
4. Launch the Image Font Extractor found under StartAll
ProgramsCognexVisionProUtilities. Use the Image Font Extractor to load the image
database and extract a custom font from the image of the demo plate.
a. Click Browse and find the recently created image database file.
b. Find an image with a representative set of PASS characters (ABC123).
c. Type ABC123 in the Chars field and click Extract.
d. Select the Character tab and view the models extracted for each character. Ensure they
are representative of what you wish to verify.
e. Click Save, name the file “demo_plate_font”, and save it to a location you will easily
find. Close the Image Font Extractor.
5. Open the VisionPro Toolbox and add a CogOCVMax Tool. Hint: Find it under the ID &
Verification folder.
6. Make the proper connections between the CogFixture Tool and the newly inserted CogOCVMax
Tool. Remember to run the Job once after you make the connection.

7. Configure the CogOCVMax Tool to verify the string ABC123.
a. Use the Browse button to load the recently created demo_plate_font.ocm file.
b. Set all characters within the font to be used.
c. Specify the proper polarity.
d. Select the Text tab.
e. Enter the string to be verified and click Adjust Position to set the string position.
f. Check the Tune After Train checkbox and click the Train button.
8. Check to make sure the PASS side passes by verifying all characters and that the FAIL side fails
when the ABA122 string appears.

PASS FAIL

9. Save your APPLICATION as MyOCVMax.vpp.


Color Tools
Session 11

Color Tools
• Color, Color, Color!
– FireWire white balance
– Color Match
– Composite Color Match
– Color Segmentation
New to VisionPro 5.2
• ColorExtractor!!

Which platforms support color?
• FireWire
– Widest selection of color cameras
• All are 1CCD color cameras with Bayer filter
– Best value
• 8504
– Supports the Sony 3CCD analog color
• One camera per 8504
• 8600
– Can do either area scan or line scan color
– Usually the most expensive cameras
Color Match Tool
What does it do?
• Determines how well a color region matches a learned
color model, with scores from 0.0 – 1.0
Color Match & Composite Color Match
• Color Match tool provides a matching score
– Analogous to “Color Distance”
• Simple match compares against average RGB value
• Composite match compares against the distributions

Simple Color Match
• Identifies uniform colors, such as yellow and orange
• Learns the average color in the learn region
• Define colors by point or region
• Runs faster than composite match
Using Simple Match to Select Colors
• Correctly identify all the flavors
– Although Grape and Black Cherry are similar
Composite Color Match
• Used for subtle variations, such as speckles and patterns
• Learns the color distribution in the learn region
• Colors are defined by regions
Examples for Composite Color Match
• All samples have similar average values, but different
distributions and scores
– Compared with the original, the sample scores range from
.930 down to .102
How do we do it?
Simple and composite color match are very similar to
color segmentation…
1. Learn new color regions
2. Enable all color models that you want to compare against
3. Test results
Color Extractor Tool
What does it do?
• Closely resembling the Segmenter tool, an image is divided
into two parts – the learned color(s) and everything else
Defining Colors
• Colors can be defined as regions
– Select the region of color that you want to find
– Subtract any additional colors that may get added
Advantage of the ColorExtractor
• No knowledge of color spaces needed
– Add the color you want
– Subtract colors not desired
Adjusting Colors
• Minimum Pixel Count (set to 1)
– The number of pixels of a given color value
• Dilation (set to 1)
– Applied to the original trained color (not to subtracted colors)
– Incorporates colors on the edge of the model
How do we do it?
1. Extract the desired color by enclosing that region with
an appropriate shape.
2. Acquire a new image to see the extraction results.
3. Adjust Minimum Pixel Count to 1, as well as Dilation to 1.
4. Subtract any items that were erroneously found during the
first training step (do not set dilation for any subtracted colors)
VisionPro
Lab 11: Color Tools
Up until now, all the labs have been done with a monochrome camera. For the color
section, we will be using images that are loaded in the Images directory of your
VisionPro installation

By default, this will be C:/Program Files/Cognex/VisionPro/Images

Color Extractor Tool

Objectives:

 To determine amount of roses in a bouquet using the Color Extractor Tool


 File in Images directory to use: color_flowers.bmp

Notes:

Notice the italicized words in the objectives. The color extractor tool searches a defined
inspection region for pixels that match the learned color models. The result is a binary
(or color) image that contains only the pixels that match the learned color models. The
color matching tools on the other hand don’t search, but rather they consider all the
pixels in the defined inspection area to score how well an object matches each learned
color model. The highest score can be considered the best match, but perhaps only if it
is above a user defined threshold.

Instructions:

Color Extractor
Start a new CogJob and reference the color_flowers.bmp file that is installed in
the Images folder of a default VisionPro installation.
Create a CogColorExtractorTool from the Tools>>Color folder.

Connect the Image Source output image to the CogColorExtractorTool input


image.
In the color extractor window, create a new color region from the ‘Colors’ tab by
clicking on the button.

The window should automatically display the ‘Color From Region’ parameters.
Change the Region Shape to a circle since that fits the shape of the rose well.
Also change the name to ‘Rose’.
On the Current.Input image, move the learning circle over one of the roses and
resize it to fit inside the border of the rose. Click ‘Accept’ when you are done.
Set the Minimum Pixel Count to 1 and look at your results under the
LastRun.OverallColorImage

In order to reduce the amount of black spots in the image, set the dilation to 1.
Note that now some of the orange flowers are being picked up.
To get rid of the other flower, we need to create a new color, but this time it will
be subtracted from the result. Make sure to set the Minimum Pixel Count to 1,
but DO NOT set the dilation.

Note the resulting image. You may need to do this to both of the large orange
flowers. Your resulting image should look like the following:

Save your application as MyColorExtractor.vpp


Thought question: What is the image showing you? What could you do now?

Color Match Tool

Objectives:

 To identify the color of the smiley face using the Color Match Tool
 File in Images directory to use: smiley.bmp

Instructions:

Color Matching
1. Start a new CogJob and reference the smiley.bmp file that is installed in the
Images folder of a default VisionPro installation.
2. Create a CogColorMatchTool from the Tools>>Color folder

3. Connect the Image Source output image to the input image of the
CogColorMatch tool.
4. In the color match window, create a new color region from the ‘Colors’ tab by
clicking on the button. Choose the “Region” as opposed to the “point”
5. The window should automatically display the ‘Color From Region’ parameters.
Change the Region Shape to a circle since that fits the shape of the face well.
Also change the name to ‘Yellow’.

6. Hit ‘Accept’. Do steps 4 and 5 for Blue, Pink, and Green. When all four colors are
trained, your tool should look similar to the one below
7. To set to region to find, go to the Region tab and choose the CogCircle.

8. Now, wherever you place the circle, you can run the tool and it will tell you the
color that is within the circle.
9. Save your application as MyColorMatch.vpp

Thought question: Will the eyes and mouth affect the results?
Why choose region over point when defining a color?
Data Analysis and Results Analysis
Session 12

Objectives
• The student will correctly:
– Create and configure a Data Analysis Tool
– Create and configure a Results Analysis Tool
– Choose the appropriate Analysis Tool for a given application
Data Analysis Tool
• Used to set Tolerance Ranges
– Pass / Fail / Warn
• Also collects aggregate statistics about tool results
Data Analysis Tool
• Add an item to the tool to pass in a value to analyze
– Click the add button to create a new item
Data Analysis Tool
• To pass a value from a tool, you need to create an
input terminal
• Right-click on the Data Analysis tool and select
“Add Terminal”
Data Analysis Tool
• Select the property to pass the value from
• If the property is not shown, expand the selection
• Add this as an Input
Data Analysis Tool
• Pass the tool’s value to the new input
Data Analysis Tool


• Choose how to set Tolerances
– Reject Low
– Warn Low
– Warn High
– Reject High
– Nominal

When setting limits, the Value is “Up To”, not
“Up to and Including” the set points.

• Example: If you want to count only 3 blobs,
the Reject Low would be 3. The Reject High
would be 4. If 2 or less, or 4 or more are found,
the tool will reject (fail).

4
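The “Up To” rule above is easy to encode. The following C# sketch (plain code, not the
VisionPro API) reproduces the blob-count example, where Reject Low = 3 and Reject High = 4
accept exactly 3 blobs:

    using System;

    class ToleranceDemo
    {
        // "Up To" semantics: the set point itself is excluded on the high
        // side, so the passing range is [rejectLow, rejectHigh).
        static string Evaluate(int value, int rejectLow, int rejectHigh)
        {
            if (value < rejectLow)   return "Reject (low)";  // 2 or fewer blobs
            if (value >= rejectHigh) return "Reject (high)"; // 4 or more blobs
            return "Pass";                                   // exactly 3 blobs
        }

        static void Main()
        {
            for (int count = 1; count <= 5; count++)
                Console.WriteLine($"{count} blobs -> {Evaluate(count, 3, 4)}");
        }
    }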
Data Analysis
• In the Results Tab, set buffering for cumulative statistics
• Determine whether the tool should fail if no update occurs on a
channel, since the data will not be current

Data Analysis Tool


• Results for individual run and cumulative stats

10

5
Results Analysis Tool
• Define a set of criteria that will allow the last run of the
tool group to give a passing, warn-level, or reject-level
result
– Can combine the results from one, several, or all the vision
tools in a tool group and generate a Warn or Reject status
– VisionPro ultimately uses this Warn or Reject status to
determine the value of the RunStatus property for the tool
group
– Unlike the Data Analysis tool, it does not report both Warn
and Reject; the selection is made on the Output parameter
in the tool

11

Results Analysis Tool


• Can use a result expressed as
– Numeric value
– String
– Boolean
– An array (vector) of result values
• Ex. Blob tool finding multiple blobs in a result

12

6
Results Analysis Tool

13

Results Analysis Tool


• Used to evaluate results from other tools to:
– Give a Pass, Warning or Reject
– Output expression value to other tools

14

7
Creating an Input Terminal

Function: Math statement based on inputs

A * 3.41 = Value

Input: Single parameter being brought in

A = Count from Blob Tool

Array: Value from tool with multiple results

B = Area from all blobs found (B less than 100
checks that all blob areas are under 100)

15
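For the array case, the comparison is applied to every element, and the expression is true
only when all of them satisfy it. Below is a plain C# analogue of “B less than 100”; this is
an illustration only, not the tool’s expression syntax, and the area values are made up:

    using System;
    using System.Linq;

    class ArrayInputDemo
    {
        static void Main()
        {
            // Hypothetical blob areas arriving through an array input "B".
            double[] blobAreas = { 42.0, 87.5, 99.9 };

            // "B less than 100" holds only if ALL blob areas are under 100.
            bool allUnder100 = blobAreas.All(area => area < 100.0);

            Console.WriteLine(allUnder100 ? "Pass" : "Reject");
        }
    }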

Creating a Function
When creating a function, Operation is chosen first:

Then arguments are selected. To enter a number, just type it into the field

16

8
Results Analysis Tool
• Results Tab
– Shows results from the last run
– Result is the AND of all the functions (default), or a particular
parameter can be selected

17

Which to Use?
Both Tools are considered the Decision Making tools of
the VisionPro Product:

• CogResultsAnalysis: used when the Decision Making tool for your
overall machine vision application is based upon a set of “rules”. These
rules can be mathematical, string-based, or even more complex
expressions
• CogDataAnalysis: used when the Decision Making tool for your overall
machine vision application is based upon tolerance ranges and needs
statistical results of the vision tools.

18

9
VisionPro

Section 12: Data & Results Analysis with Application Wizard

1. Load your MyOCVMax.vpp APPLICATION file.


2. Open up CogJob 1.

3. Be sure the following output terminals are exposed from their respective tools:
a. CogPMAlign
i. Score
b. CogHistogram
i. Standard Deviation
c. CogBlob
i. Count
d. CogCaliper
i. Edge Pair Count
e. CogAngleLineLine
i. Angle
f. CogCaliper from Calibrated Units
i. Measured Width
g. CogBlob from PatInspect
i. Count
h. CogOCVMax
i. Text Score
4. Create a CogDataAnalysis Tool and add a data entry channel for each of the items above.

5. Remove the default RunParams.Item[“Channel 0”].CurrentValue and add the input terminals for
each one of the newly created channels. Also expose the ToleranceStatus of each of these
values. Make the proper connection between the inspection results and the newly created
channels.
6. Open the CogDataAnalysis Tool and set the proper Reject Low, Warn Low, Warn High, and
Reject High for your inspection. Test your inspection to make sure the PASS side always passes
and the FAIL side always FAILS.

7. Add all the CogDataAnalysis Tool’s ToleranceStatus terminals to Posted Items.


8. Add all the CogDataAnalysis Tool’s CurrentValue terminals to Posted Items.
9. Save your APPLICATION as MyResults.vpp.
I/O and the Application Wizard
Session 13

Objectives
• The student will correctly:

– Configure I/O settings using the Communications Explorer


• Light LEDs with result from application
• Send TCP/IP information to HyperTerminal

– Create and configure an operator user interface using the


VisionPro Application Wizard

1
I/O: Getting Data Out

Getting Results Out


• Posted Items
– Holds the results of completed runs of each job
– Required to bring data out to be utilized by VB.NET or C#

2
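As a rough sketch of what “utilized by VB.NET or C#” can look like: the pattern below
follows the style of the VisionPro QuickBuild samples, but every type and member name in it
(CogSerializer.LoadObjectFromFile, CogJobManager, UserResultAvailable, Run) should be treated
as an assumption and verified against the API documentation for your installed version.

    // Assumed SDK namespaces; verify against your VisionPro installation.
    using System;
    using Cognex.VisionPro;
    using Cognex.VisionPro.QuickBuild;

    class PostedItemsSketch
    {
        static void Main()
        {
            // Load a saved QuickBuild application; the path is an example only.
            var jobManager = (CogJobManager)CogSerializer.LoadObjectFromFile(
                @"C:\MyApp.vpp");

            // Posted items travel inside the user result produced for each run;
            // subscribe before running, then walk the record tree in the handler.
            jobManager.UserResultAvailable += (sender, e) =>
                Console.WriteLine("Run complete; posted items are available.");

            jobManager.Run(); // run the jobs once (assumed method name)
        }
    }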
Adding Posted Items

This will make the value for the number of blobs
found available to a third-party application

Adding Posted Items

All items that are posted (along with path) will be listed

3
Application Properties

Settings:
• Control of some
memory resources

Language:
• Language used

Multithreading:
• Enabling more
efficient use of
cores
7

QuickBuild Floating Results

Ability to quickly show (and


queue) results with image

Posted Items:
• Results of job and all
posted items

Failure Results Queue:


• Failures as indicated by
Application Properties

4
Discrete I/O Through QuickBuild
• In order for QB to generate I/O
signals, enable I/O settings

• Open the Communications


Explorer

Communications Explorer
• First you need to add the appropriate device

• Go into the created folder to gain access to the


individual lines

10

5
Communications Explorer
• Usage – Then you would set the lines to be input or
output.
– The module that you use will dictate the polarity available for
the lines

• Owner – Select which job (or the application as a
whole) controls the signal

11

Communications Explorer
• Field – Select the appropriate parameter or tool result

• Pulse Width and Polarity – Defines the amount of time


the pulse is on as well as whether this is high or low (for
connecting with NPN or PNP devices)

12

6
Check Configuration
• To make sure the configuration is correct, click on
“Configuration Checking”.

• This will check for missing hardware or conflicting I/O lines

13

Installing Measurement Computing Board


• Drivers are available with the installation
• To install the drivers (not installed by default)
– Select the option during installation
– You may go back and install after
the fact
• Restart Setup.exe in the
C:\Program Files\VisionPro x.xx
directory
• Select Modify and choose to
add driver
– Make sure to load the drivers as well
• Done by running the Setup.exe file in the
C:\Program Files\VisionPro x.xx\drivers directory

14

7
Final Notes on Discrete IO
• IO must be enabled to “run”
– When enabled, parameters cannot be changed

• The IO is global
– All jobs associated with the application use the same IO
– Different applications will reset the IO
– * Take care – best to keep IO consistent as it is hardwired

• If you use an output line to transmit a tool result, you
must create a Data Ready output as well.
15

TCP/IP
• TCP/IP packets exchange application data and results with
other Windows applications

• You can create as many TCP/IP devices as you need in


QuickBuild
– Be aware, this will have an impact on the performance of your
vision application

• You can configure as a Client or Server


– Client and Server – not mutually exclusive

16

8
Adding TCP/IP IO
• Right Click to Add TCP/IP
– Choose from Client or
Server

• If Client, select from the list the
device (server) to connect to
– The Client is responsible for
opening the channel

The device is added with its
information

17

Configuring TCP/IP IO
• Click in the field to add data

18

9
Configuring TCP/IP IO
• The data string can then be configured with control
character(s)
– Encoder – data format that the remote device expects to receive
– Output Terminator
• Carriage Return
• New Line
– Output Delimiter
• Comma
• Tab
• Space
• Semi Colon
• Underscore

19

Configuring TCP/IP IO
• The result string that will be transferred to the remote device is
shown in the Output String window at the bottom

20
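In the lab at the end of this session, HyperTerminal plays the remote-device role, but a few
lines of standard .NET code can do the same job. The sketch below assumes the lab’s example
values (server running on the local PC, port 5001) and uses only standard library types; with
a comma delimiter and carriage return / new line terminator, each received line would look
like 23.5,14.2,1.57:

    using System;
    using System.IO;
    using System.Net.Sockets;

    class QuickBuildTcpReader
    {
        static void Main()
        {
            // Connect to the QuickBuild TCP/IP server (lab values: local PC,
            // port 5001). The IO must be enabled and the job running.
            using (var client = new TcpClient("127.0.0.1", 5001))
            using (var reader = new StreamReader(client.GetStream()))
            {
                string line;
                // The configured Output Terminator (CR/LF) ends each result
                // string, so ReadLine() frames exactly one job result.
                while ((line = reader.ReadLine()) != null)
                {
                    // Split on the configured Output Delimiter if needed.
                    Console.WriteLine($"Received: {line}");
                }
            }
        }
    }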

10
Configuring TCP/IP IO
• If this is a multi-job application, repeat the last three steps
after selecting the job to configure the TCP/IP IO data

21

Application Wizard

11
Application Wizard
• Creates a full-featured application from a QuickBuild
project file (.vpp file) that includes a customized
operator interface
– Does not require a Visual Studio or other development
environment license
– Output is a .NET application
• May optionally generate the source code in C# or VB.NET for that
application.

• Wizard walks through step-by-step procedures to


ensure proper application creation

23

Application Wizard

24

12
Application Wizard – Use of Posted Items

25

Application Wizard – Input Property

26

13
Application Wizard – Input Property

27

28

14
Advantages vs. Disadvantages
• Advantage
– Quick creation of runtime application for user
– Do not need to know programming
– User sees just image, results, and parameters brought
forward
– Allows users to tweak Vision Tools through QuickBuild
(optional)

• Disadvantage
– Layout is confined to basic model

• Code can be brought into Visual Studio or a similar
application for more creative displays
29

15
VisionPro

Section 13: Input/Output and Application Wizard

Digital Output (if IO Module is Available)


1. Load application MyResults.vpp
2. Open the Communication Explorer and go to Discrete I/O folder
3. Choose the USB-1024-LS module (or other if instructed by instructor)
4. Set up three lines to be outputs
a. Heartbeat for the QuickBuild application
b. Data Ready Signal
c. Result from the Data Analysis tool

5. Enable the IO and note the results


6. Save application as MyDIO.vpp

TCP/IP Communication
1. Load application MyDIO.vpp
2. Open Communication Explorer and go to TCP/IP.
3. Set the Device Type to Server and the Port to 5001. You should notice that this new item is
added under the TCP/IP folder in Communication Explorer.

4. Select the new item to open the dialog to configure.


5. Select the first open cell under field to open the Select Field browser.
6. Add the PMAlign results for Translation X and Y as well as Rotation to send to a remote device via
TCP/IP.

7. Select Output Terminator to add carriage return and new line as well as your delimiter of choice.
8. Enable the IO.
9. Open a Command Prompt window and type “ipconfig” to get the IP Address of your PC. Make note of
this.
IP Address: _____________________________
10. Go to Windows Start > All Programs > Accessories > Communications and choose
HyperTerminal.
11. Under Connect using:, select “TCP/IP (Winsock)”.
12. Take the IP Address that you wrote down and insert under the Host Address. Set the port to
5001.
13. Select OK and then connect this session by selecting the icon that looks like a phone with the
receiver down.
14. Now run the QB job. You should see the data being sent to the Hyperterminal dialog.

15. Save application as MyTCPIP.vpp and completely exit the QuickBuild environment.
Application Wizard
1. Launch the VisionPro Application Wizard.
a. Step through the wizard.
i. Select the recently saved MyResults.vpp APPLICATION file.
ii. Name your application Demo Plate Inspection.
iii. Ensure the Include QuickBuild access checkbox is checked.
iv. Add 8 tabs at the Operator Security Level, one for each of the data items
analyzed above. Also add their respective Posted Items and, where applicable,
an input field that affects that Posted Item. Try to achieve this structure:
1. Cognex Logo
a. Input Field: Accept Threshold
b. Posted Item: Score
c. Posted Item: Score Tolerance Status
2. Connector Pins
a. Posted Item: Standard Deviation
b. Posted Item: Standard Deviation Tolerance Status
3. LEDs
a. Posted Item: Blob Count
b. Posted Item: Blob Count Tolerance Status
4. Focus Ring
a. Input Field: Maximum Results
b. Posted Item: Edge Count
c. Posted Item: Edge Count Tolerance Status
5. Connector Angle
a. Posted Item: Angle
b. Posted Items: Angle Tolerance Status
6. Camera Width
a. Posted Item: Measured Width
b. Posted Item: Measured Width Tolerance Status
7. Defects
a. Posted Item: Blob Count
b. Posted Item: Blob Count Tolerance Status
8. Text
a. Posted Item: String Result
b. Posted Item: String Result Tolerance Status.
v. Tab one should look like this:
vi. Select where the application files should be created.
vii. Choose your preferred language for code generation.
viii. Optionally, save the configuration so you can import it next time you use the
Application Wizard.
ix. Launch the application.

2. Run the application continuously and change the demo plate image to track the results. Make
sure that PASS images are passing and FAIL images are failing. If this is not the case, click the
configuration button and open up QuickBuild to adjust settings. Be sure to save any changes
you make to the APPLICATION file.
[Calibration grid pages: Grid spacing = 10.000 Millimeter]


Cognex Course Evaluation Form
General
Name (optional) _______________________________ Company _______________________________
Date attended ____________________________
Course attended
☐ In-Sight® Maintenance & Troubleshooting
☐ In-Sight® Standard
☐ In-Sight® Advanced
☐ Intellect®
☐ VisionPro® Intro
☐ VisionPro® Advanced
☐ Other ____________________
Location
☐ Cognex facility  ☐ Customer site

Course
Did the course fulfill the learning outcomes and objectives listed at the beginning of the course notes?
☐ Completely  ☐ Partially  ☐ Not at all
If not, please explain:_______________________________________________________________________
Were the learning outcomes and objectives appropriate?
☐ Completely  ☐ Partially  ☐ Not at all
If not, please explain:_______________________________________________________________________
Which topics were most relevant to your job?_________________________________________________
________________________________________________________________________________________

Which topics were least relevant to your job?_________________________________________________


________________________________________________________________________________________

Which topics were the most difficult to understand?____________________________________________


________________________________________________________________________________________

Were there any topics which needed to be covered more in-depth?


☐ Yes  ☐ No
If yes, please list: __________________________________________________________________________

Any topics which should have been covered in less detail?


☐ Yes  ☐ No
If yes, please list: __________________________________________________________________________

Are there other topics not covered you would like to see included in the course?
☐ Yes  ☐ No
If yes, please list: _____________________________________________________________________

Instructor Name: ________________________________________________


Clarity of presentation
☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor

Speed of presentation
☐ Much too fast  ☐ Too fast  ☐ Just right  ☐ Too slow  ☐ Much too slow

Ability to answer questions


☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor

Consistency between lecture and slides and handouts


☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor

Knowledge of material
☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor

Degree of preparation and organization


☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor

Cognex Course Evaluation Form
Materials
Slides
☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor  ☐ N/A

Handouts
☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor  ☐ N/A

Technical documentation
☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor  ☐ N/A

Videos
☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor  ☐ N/A

Please list other materials which you would consider helpful:____________________________________


________________________________________________________________________________________

Lab Exercises
Quality and availability of assistance during lab sessions
☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor

Level of difficulty of lab exercises


☐ Much too difficult  ☐ Too difficult  ☐ Just right  ☐ Too easy  ☐ Much too easy

Time devoted to lab sessions


☐ Far too much  ☐ Too much  ☐ Just right  ☐ Too little  ☐ Far too little

Were the lab exercises helpful in reinforcing your understanding of the course material?
☐ Yes  ☐ No
If not, please explain:_______________________________________________________________________
________________________________________________________________________________________

Overall Rating
Overall rating of the course
☐ Excellent  ☐ Very good  ☐ Good  ☐ Fair  ☐ Poor

Would you recommend the course to others?


☐ Yes  ☐ No
If not, please explain:_______________________________________________________________________
________________________________________________________________________________________

What other Cognex courses are you interested in attending?


☐ CVL® (MVS-8000)  ☐ Introduction to Machine Vision  ☐ Online Courses
☐ In-Sight®  ☐ VisionPro® (8000)  ☐ Other____________________
☐ Intellect  ☐ SmartView®

How did you hear about our courses? _______________________________________________________


Additional Comments:_____________________________________________________________________
________________________________________________________________________________________
________________________________________________________________________________________
________________________________________________________________________________________
________________________________________________________________________________________
________________________________________________________________________________________

Thank you very much for your input! We look forward to working with you in the future.

COGNEX
Customer Education Center 2008-09-09
