IMAQ Vision for LabVIEW
User Manual
Worldwide Offices
Australia 1800 300 800, Austria 43 0 662 45 79 90 0, Belgium 32 0 2 757 00 20, Brazil 55 11 3262 3599,
Canada (Calgary) 403 274 9391, Canada (Montreal) 514 288 5722, Canada (Ottawa) 613 233 5949,
Canada (Québec) 514 694 8521, Canada (Toronto) 905 785 0085, Canada (Vancouver) 514 685 7530,
China 86 21 6555 7838, Czech Republic 420 2 2423 5774, Denmark 45 45 76 26 00,
Finland 358 0 9 725 725 11, France 33 0 1 48 14 24 24, Germany 49 0 89 741 31 30, Greece 30 2 10 42 96 427,
India 91 80 51190000, Israel 972 0 3 6393737, Italy 39 02 413091, Japan 81 3 5472 2970,
Korea 82 02 3451 3400, Malaysia 603 9131 0918, Mexico 001 800 010 0793, Netherlands 31 0 348 433 466,
New Zealand 1800 300 800, Norway 47 0 66 90 76 60, Poland 48 0 22 3390 150, Portugal 351 210 311 210,
Russia 7 095 238 7139, Singapore 65 6226 5886, Slovenia 386 3 425 4200, South Africa 27 0 11 805 8197,
Spain 34 91 640 0085, Sweden 46 0 8 587 895 00, Switzerland 41 56 200 51 51, Taiwan 886 2 2528 7227,
Thailand 662 992 7519, United Kingdom 44 0 1635 523545
For further support information, refer to the Technical Support and Professional Services appendix. To comment
on the documentation, send email to [email protected].
Warranty
The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects
in materials and workmanship, for a period of 90 days from date of shipment, as evidenced by receipts or other documentation. National
Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives
notice of such defects during the warranty period. National Instruments does not warrant that the operation of the software shall be
uninterrupted or error free.
A Return Material Authorization (RMA) number must be obtained from the factory and clearly marked on the outside of the package before
any equipment will be accepted for warranty work. National Instruments will pay the shipping costs of returning to the owner parts which are
covered by warranty.
National Instruments believes that the information in this document is accurate. The document has been carefully reviewed for technical
accuracy. In the event that technical or typographical errors exist, National Instruments reserves the right to make changes to subsequent
editions of this document without prior notice to holders of this edition. The reader should consult National Instruments if errors are suspected.
In no event shall National Instruments be liable for any damages arising out of or related to this document or the information contained in it.
EXCEPT AS SPECIFIED HEREIN, NATIONAL INSTRUMENTS MAKES NO WARRANTIES, EXPRESS OR IMPLIED, AND SPECIFICALLY DISCLAIMS ANY WARRANTY OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. CUSTOMER’S RIGHT TO RECOVER DAMAGES CAUSED BY FAULT OR NEGLIGENCE ON THE PART OF
NATIONAL INSTRUMENTS SHALL BE LIMITED TO THE AMOUNT THERETOFORE PAID BY THE CUSTOMER. NATIONAL INSTRUMENTS WILL NOT BE LIABLE FOR
DAMAGES RESULTING FROM LOSS OF DATA, PROFITS, USE OF PRODUCTS, OR INCIDENTAL OR CONSEQUENTIAL DAMAGES, EVEN IF ADVISED OF THE POSSIBILITY
THEREOF. This limitation of the liability of National Instruments will apply regardless of the form of action, whether in contract or tort, including
negligence. Any action against National Instruments must be brought within one year after the cause of action accrues. National Instruments
shall not be liable for any delay in performance due to causes beyond its reasonable control. The warranty provided herein does not cover
damages, defects, malfunctions, or service failures caused by owner’s failure to follow the National Instruments installation, operation, or
maintenance instructions; owner’s modification of the product; owner’s abuse, misuse, or negligent acts; and power failure or surges, fire,
flood, accident, actions of third parties, or other events outside reasonable control.
Copyright
Under the copyright laws, this publication may not be reproduced or transmitted in any form, electronic or mechanical, including photocopying,
recording, storing in an information retrieval system, or translating, in whole or in part, without the prior written consent of National
Instruments Corporation.
Trademarks
CVI™, IMAQ™, LabVIEW™, Measurement Studio™, National Instruments™, NI™, NI Developer Zone™, ni.com™, and NI-IMAQ™ are
trademarks of National Instruments Corporation.
Product and company names mentioned herein are trademarks or trade names of their respective companies.
Patents
For patents covering National Instruments products, refer to the appropriate location: Help»Patents in your software, the patents.txt file
on your CD, or ni.com/patents.
Contents

Chapter 1
Introduction to IMAQ Vision
    About IMAQ Vision
    IMAQ Vision Control Palette
    IMAQ Vision Function Palettes
        Vision Utilities
        Image Processing
        Machine Vision
    Creating IMAQ Vision Applications
Chapter 2
Getting Measurement-Ready Images
    Set Up Your Imaging System
    Calibrate Your Imaging System
    Create an Image
        Input and Output Combinations
            Image Analysis
            Image Masks
            Image Filling
            Image Processing
            Arithmetic and Logical Operations
    Acquire or Read an Image
    Display an Image
        External Window Display
        Image Display Control
    Attach Calibration Information
    Analyze an Image
    Improve an Image
        Lookup Tables
Chapter 3
Grayscale and Color Measurements
    Define Regions of Interest
        Define Regions Interactively
            Defining an ROI in the Image Display Control
            Defining an ROI in an External Window
            Defining an ROI Using an ROI Constructor
            Tools Palette Transformation
        Define Regions Programmatically
        Define Regions with Masks
    Measure Grayscale Statistics
    Measure Color Statistics
        Compare Colors
        Learn Color Information
            Specifying the Color Information to Learn
            Choosing a Color Representation Sensitivity
            Ignoring Learned Colors
Chapter 4
Particle Analysis
    Create a Binary Image
    Improve the Binary Image
        Remove Unwanted Particles
        Separate Touching Particles
        Improve Particle Shapes
    Make Particle Measurements
Chapter 5
Machine Vision
    Locate Objects to Inspect
        Use Edge Detection to Build a Coordinate Transformation
        Use Pattern Matching to Build a Coordinate Transformation
        Choose a Method to Build the Coordinate Transformation
Chapter 6
Calibration
    Perspective and Nonlinear Distortion Calibration
        Define a Calibration Template
        Define a Reference Coordinate System
        Learn Calibration Information
            Specifying Scaling Factors
            Choosing a Region of Interest
            Choosing a Learning Algorithm
            Using the Learning Score
            Learning the Error Map
            Learning the Correction Table
            Setting the Scaling Method
            Calibration Invalidation
Appendix A
Vision for LabVIEW Real-Time
    About Vision for LabVIEW Real-Time
    System Components
        Development System
        Deployed System
    Installing NI-IMAQ and Vision for LabVIEW Real-Time
    Displaying Images in Vision for LabVIEW Real-Time
        Remote Display
        RT Video Out
    Determinism in Vision for LabVIEW Real-Time
        Determinism vs. Time-Bounded Execution
        Time-Bounded Execution
            Initializing the Timed Environment
            Preparing Resources
            Performing Time-Bounded Vision Operations
        Closing the Timed Environment
    Image Files
    Deployment
    Troubleshooting
        Remote Display Errors
        Programming Errors
        RT Video Out Errors
Appendix B
Technical Support and Professional Services
Glossary
Index
About This Manual

The IMAQ Vision for LabVIEW User Manual is intended for engineers and
scientists who have knowledge of the LabVIEW programming
environment and need to create machine vision and image processing
applications using LabVIEW VIs. The manual guides you through tasks
beginning with setting up your imaging system to taking measurements.
It also describes how to create a real-time vision application using
Vision for LabVIEW Real-Time.
Conventions
The following conventions appear in this manual:
» The » symbol leads you through nested menu items and dialog box options
to a final action. The sequence File»Page Setup»Options directs you to
pull down the File menu, select the Page Setup item, and select Options
from the last dialog box.
bold Bold text denotes items that you must select or click in the software, such
as menu items and dialog box options. Bold text also denotes parameter
names.
monospace Text in this font denotes text or characters that you should enter from the
keyboard, sections of code, programming examples, and syntax examples.
This font is also used for the proper names of disk drives, paths, directories,
programs, subprograms, subroutines, device names, functions, operations,
variables, filenames and extensions, and code excerpts.
monospace bold Bold text in this font denotes the messages and responses that the computer
automatically prints to the screen. This font also emphasizes lines of code
that are different from the other examples.
Related Documentation
In addition to this manual, the following documentation resources are
available to help you create your vision application.
IMAQ Vision
• IMAQ Vision Concepts Manual—If you are new to machine vision
and imaging, read this manual to understand the concepts behind
IMAQ Vision.
• IMAQ Vision for LabVIEW Help—If you need information about
IMAQ Vision palettes or individual IMAQ Vision VIs while creating
your application, refer to this help file. You can access this file by
selecting Help»IMAQ Vision from within LabVIEW.
NI Vision Assistant
• NI Vision Assistant Tutorial—If you need to install NI Vision
Assistant and learn the fundamental features of the software, follow
the instructions in this tutorial.
• NI Vision Assistant Help—If you need descriptions or step-by-step
guidance about how to use any of the functions or features of NI Vision
Assistant, refer to this help file.
Other Documentation
• Your National Instruments IMAQ device user manual—If you need
installation instructions and device-specific information, refer to your
device user manual.
• Getting Started With Your IMAQ System—If you need instructions for
installing the NI-IMAQ software and your IMAQ hardware,
connecting your camera, running Measurement & Automation
Explorer (MAX) and the NI-IMAQ Diagnostics, selecting a camera
file, and acquiring an image, refer to this getting started document.
• NI-IMAQ User Manual—If you need information about how to use
NI-IMAQ and IMAQ image acquisition devices to capture images for
processing, refer to this manual.
• NI-IMAQ VI or function reference guides—If you need information
about the features, functions, and operation of the NI-IMAQ image
acquisition VIs or functions, refer to these help files.
• IMAQ Vision Deployment Engine Note to Users—If you need
information about how to deploy your custom IMAQ Vision
applications on target computers, read this CD insert.
• Your National Instruments PXI controller user manual—If you are
using the LabVIEW Real-Time Module to develop your vision
application and need information about how to set up your
PXI controller device in a PXI-1020 chassis, refer to this manual.
• Example programs—If you want examples of how to create specific
applications, go to LabVIEW\Examples\Vision. For documentation
about these examples, refer to the help file located at Help»Search
Vision Examples from within LabVIEW 6.x or Help»Find Examples
from within LabVIEW 7.0 and later.
• Application Notes—If you want to know more about advanced
IMAQ Vision concepts and applications, refer to the Application
Notes located on the National Instruments Web site at ni.com/
appnotes.nsf/.
• NI Developer Zone (NIDZ)—If you want even more information
about developing your vision application, visit the NI Developer Zone
at ni.com/zone. The NI Developer Zone contains example
programs, tutorials, technical presentations, the Instrument Driver
Network, a measurement glossary, an online magazine, a product
advisor, and a community area where you can share ideas, questions,
and source code with vision developers around the world.
Note Refer to the release notes that came with your software for information about the
system requirements and installation procedure for IMAQ Vision for LabVIEW.
later. You also can use this control to create regions of interest (ROIs).
Classic and 3D versions are available.
• IMAQ Vision controls—Use these controls to get the functionality of
corresponding IMAQ Vision VI controls directly into your own VIs.
• Machine Vision controls—Use these controls to get the functionality
of corresponding Machine Vision VI controls directly into your
own VIs.
Note This document references many VIs from the IMAQ Vision function palette. If you
have difficulty finding a VI, use the search capability of the LabVIEW VI browser.
Vision Utilities
Vision Utilities functions allow you to manipulate and display images in
IMAQ Vision.
• Image Management—A group of VIs that manage images. Use these
VIs to create and dispose images, set and read attributes of an image
(such as its size and offset), and copy one image to another. You also
can use some of the advanced VIs to define the border region of an
image and access the pointer to the image data.
• Files—A group of VIs that read images from files, write images to files
in different file formats, and get information about the image contained
in a file.
• External Display—A group of VIs that control the display of images
in external image windows. Use these VIs to do the following:
– Get and set window attributes, such as size, position, and zoom
factor
– Assign color palettes to image windows
– Set up and use image browsers
– Set up and use different drawing tools to interactively select ROIs
on image windows
– Detect draw events
– Retrieve information about ROIs drawn on the image window
Note If you have LabVIEW 7.0 or later, you also can use the Image Display control
available from the Vision control palette.
• Region of Interest—A group of VIs that manage ROIs. Use these VIs
to programmatically define ROIs and convert ROIs to and from image
masks.
Note If you have LabVIEW 7.0 or later, you can use the property node and invoke node
of the Image Display control to perform many of these ROI tasks.
Image Processing
Use the Image Processing functions to analyze, filter, and process images
in IMAQ Vision.
• Processing—A group of VIs that process grayscale and binary images.
Use these VIs to convert a grayscale image into a binary image using
different thresholding techniques. You also can use these VIs to
transform images using predefined or custom lookup tables, change
the contrast information in the image, and invert the values in an
image.
• Filters—A group of VIs that filter an image to enhance the information
in the image. Use these VIs to smooth an image, remove noise, and
highlight or enhance edges in the image. You can use a predefined
convolution kernel or create custom convolution kernels.
• Morphology—A group of VIs that perform morphological operations
on an image. Some of these VIs perform basic morphological
operations, such as dilation and erosion, on grayscale and binary
images. Other VIs improve the quality of binary images by filling holes
in particles, removing particles that touch the image border, removing
small particles, and removing unwanted particles based on different
shape characteristics of the particle. Another set of VIs in this
subpalette separates touching particles, finds the skeleton of particles,
and detects circular particles.
• Analysis—A group of VIs that analyze the content of grayscale and
binary images. Use these VIs to compute the histogram information
and grayscale statistics of an image, retrieve pixel information and
statistics along any one-dimensional profile in an image, and detect
and measure particles in binary images.
• Color Processing—A group of VIs that analyze and process color
images. Use these VIs to compute the histogram of color images;
apply lookup tables to color images; change the brightness, contrast,
and gamma information associated with a color image; and threshold
a color image. Some of these VIs also compare the color information
in different images or different regions in an image using a color
matching process.
• Operators—A group of VIs that perform basic arithmetic and logical
operations on images. Use some of these VIs to add, subtract, multiply,
and divide an image with other images or constants. Use other VIs in
this subpalette to apply logical operations—such as AND/NAND,
OR/NOR, XOR/XNOR—and make pixel comparisons between an
image and other images or a constant. In addition, one VI in this
Machine Vision
The IMAQ Machine Vision VIs are high-level VIs that simplify common
machine vision tasks.
• Select Region of Interest—A group of VIs that allow you to select a
region of interest tool, draw specific regions of interest in the image
window, and return information about regions with very little
programming.
• Coordinate System—A group of VIs that find a coordinate system
associated with an object in an image. Use these VIs to find the
coordinate system using either edge detection or pattern matching.
You can then use this coordinate system to take measurements from
other Machine Vision VIs.
• Count and Measure Objects—A VI that thresholds an image to
isolate objects from the background and then finds and measures
characteristics of the objects. This VI also can ignore unwanted objects
in the image when making measurements.
• Measure Intensities—A group of VIs that measure the intensity of a
pixel at a point or the statistics of pixel intensities along a line or
rectangular region in an image.
• Measure Distances—A group of VIs that measure distances,
such as the minimum and maximum horizontal distance between
two vertically oriented edges or the minimum and maximum vertical
distance between two horizontally oriented edges.
• Locate Edges—A group of VIs that locate vertical, horizontal, and
circular edges.
(Figure: flowchart of the general steps in an IMAQ Vision application, including Create an Image, Analyze an Image, Improve an Image, and Machine Vision, with references to the chapters that describe them.)
Note Diagram items enclosed with dashed lines are optional steps.
(Figure: flowchart of inspection steps, including Improve a Binary Image and Particle Analysis (Chapter 4), Set Search Areas, Make Measurements, and Display Results.)
Note Diagram items enclosed with dashed lines are optional steps.
Perspective errors occur when your camera axis is not perpendicular to the
object under inspection. Nonlinear distortion may occur from aberrations
in the camera lens. Perspective errors and lens aberrations cause images to
appear distorted. This distortion displaces information in an image, but it
does not necessarily destroy the information in the image.
Create an Image
Use the IMAQ Create VI (Vision Utilities»Image Management) to create
an image reference. When you create an image, specify one of the
following image data types:
• 8-bit (default)
• 16-bit
• Float
• Complex
• RGB
• HSL
Note If you plan to use filtering or particle analysis VIs on the image, refer to their help
topics in the IMAQ Vision for LabVIEW Help for information about the appropriate border
size for the image. The default border size is 3 pixels.
During development, you may want to examine the contents of your image
at run time. With LabVIEW 7.0 or later, you can use a LabVIEW image
probe to view the contents of your image during execution. To create a
probe, right-click on the image wire and select Probe.
Most VIs belonging to the IMAQ Vision library require an input of one
or more image references. The number of image references a VI takes
depends on the image processing function and the type of image you want
to use.
IMAQ Vision VIs that analyze the image but do not modify the contents
require the input of only one image reference. VIs that process the contents
of images may require a reference to the source image(s) and to a
destination image, or the VIs may have an optional destination image. If
you do not provide a destination image, the VI modifies the source image.
At the end of your application, dispose of each image that you created using
the IMAQ Dispose VI (Vision Utilities»Image Management).
The figures in the following sections show several VI connector panes used
in IMAQ Vision.
Image Analysis
The following connector pane applies only to VIs that analyze an image
and therefore do not modify either the size or contents of the image.
Examples of these types of operations include particle analysis and
histogram calculations.
Image Masks
The following connector pane introduces an Image Mask.
Image Filling
The following connector pane applies to VIs performing an operation that
fills an image.
Image Processing
The following connector pane applies to VIs that process an image.
This connector is the most common type in IMAQ Vision. The Image Src
input receives the image to process. The Image Dst input can receive either
another image or the original, depending on your goals. If two different
images are connected to the two inputs, the original Image Src image is not
modified. As shown in the following diagrams, if the Image Dst and
Image Src inputs receive the same image, or if nothing is connected to
Image Dst, the processed image is placed into the original image, and the
original image data is lost.
The Image Dst image is the image that receives the processing results.
Depending on the functionality of the VI, this image can be either the same
or a different image type as that of the source image. The VI descriptions
in the IMAQ Vision for LabVIEW Help include the type of image that can
be connected to the Image inputs. The image connected to Image Dst is
resized to the source image size.
Two source images exist for the destination image. You can perform an
operation between two images, A and B, and then either store the result in
another image (Image Dst) or in one of the two source images, A or B.
In the latter case, you can consider the original data to be unnecessary after
the processing has occurred. The following combinations are possible in
this pane.
In the pane on the left, the three images are all different. Image Src A and
Image Src B are intact after processing and the results from this operation
are stored in Image Dst.
In the center pane, Image Src A also is connected to the Image Dst,
which therefore receives the results from the operation. In this operation,
the source data for Image Src A is overwritten.
In the pane on the right, Image Src B receives the results from the
operation and its source data is overwritten.
Most operations between two images require that the images have the
same type and size. However, arithmetic operations can work between
two different types of images.
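IMAQ Vision operators are LabVIEW VIs rather than text functions, but the destination-image behavior described above has a familiar analogue in array libraries. The following sketch uses NumPy and OpenCV rather than the NI API, and all sizes and pixel values are illustrative; it shows a result written to a separate destination versus a result written back over one of the sources.

import numpy as np
import cv2

# Two 8-bit source images of the same size (synthetic data for illustration).
src_a = np.full((240, 320), 100, dtype=np.uint8)
src_b = np.full((240, 320), 200, dtype=np.uint8)

# Result stored in a separate destination image: both sources remain intact.
dst = cv2.add(src_a, src_b)              # saturating 8-bit addition

# Result stored back into source A: the original data of A is overwritten.
cv2.add(src_a, src_b, dst=src_a)

# A pixel-wise logical operation against a constant.
masked = src_b & 0x0F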
Note You must use the IMAQ Close VI (Image Acquisition) to release resources
associated with the image acquisition device.
Use the IMAQ Read Image and Vision Info VI (Vision Utilities»Files) to
open an image file containing additional information, such as calibration
information, template information for pattern matching, or overlay
information. Refer to Chapter 5, Machine Vision, for information about
pattern matching templates and overlays.
Display an Image
You can display images in LabVIEW using two methods. If you use
LabVIEW 6.x, you can display an image in an external window using the
external display VIs on the External Display function palette. If you use
LabVIEW 7.0 or later, you can use the above method or display an image
directly on the front panel using the Image Display control on the Vision
control palette.
Note Image windows are not LabVIEW panels. They are managed directly by
IMAQ Vision.
You can use a color palette to display grayscale images by applying a color
palette to the window. You can use the IMAQ GetPalette VI (Vision
Utilities»Display) to obtain predefined color palettes. For example, if you
need to display a binary image—an image containing particle regions with
pixel values of 1 and a background region with pixel values of 0—apply the
predefined binary palette. Refer to Chapter 2, Display, of the IMAQ Vision
Concepts Manual for more information about color palettes.
Note At the end of your application, you must close all open external windows using the
IMAQ WindClose VI (Vision Utilities»Display).
Note If your Palette View is set to Express, you can access the Image Display control by
right-clicking on the front panel and selecting All Controls»Vision.
To display an image, wire the image output of an IMAQ Vision VI into the
image display terminal on the block diagram, as shown in Figure 2-2.
Figure 2-2. An Image Wired into the Image Display Control Terminal
During design time, you can customize the appearance of the control by
rearranging the elements and by configuring properties through the popup
menu. During run time, you can customize many pieces of the control using
property nodes.
Note Not all functionality available during design time is available at run time.
The following list describes a subset of the properties available for the
Image Display control:
• Snapshot Mode—Determines whether the control makes a copy of the
image or has a reference to the image. When you enable the Snapshot
Mode, if the inspection image changes later in your application, the
Image Display control continues to display the image as it was when
the image was wired into the Image Display control.
Enabling the Snapshot Mode may reduce the speed of your application
because the control makes a copy of the image. Enable this property
when you want to display a snapshot of the image in time. Disable this
Analyze an Image
When you acquire and display an image, you may want to analyze the
contents of the image for the following reasons:
• To determine whether the image quality is sufficient for your
inspection task.
• To obtain the values of parameters that you want to use in processing
functions during the inspection process.
The histogram and line profile tools can help you analyze the quality of
your images.
Use the IMAQ Histograph and IMAQ Histogram VIs (Image Processing»
Analysis) to analyze the overall grayscale distribution in the image. Use the
histogram of the image to analyze two important criteria that define the
quality of an image: saturation and contrast. If your image is underexposed
(does not have enough light), the majority of your pixels will have low
intensity values, which appear as a concentration of peaks on the left side
of your histogram. If your image is overexposed (has too much light), the
majority of your pixels will have high intensity values, which appear as
a concentration of peaks on the right side of your histogram. If your image
has an appropriate amount of contrast, your histogram will have distinct
regions of pixel concentrations. Use the histogram information to decide if
the image quality is sufficient to separate objects of interest from
the background.
If the image quality meets your needs, use the histogram to determine the
range of pixel values that correspond to objects in the image. You can use
this range in processing functions, such as determining a threshold range
during particle analysis.
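The same saturation and contrast checks can be prototyped outside LabVIEW. The sketch below uses NumPy rather than IMAQ Histogram, and the bin cutoffs and fractions are arbitrary placeholders to tune for your own images.

import numpy as np

def describe_exposure(gray):
    # Rough exposure/contrast check from a 256-bin grayscale histogram.
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = hist.sum()
    dark = hist[:32].sum() / total       # fraction of very dark pixels (placeholder cutoff)
    bright = hist[224:].sum() / total    # fraction of very bright pixels (placeholder cutoff)
    if dark > 0.5:
        return "likely underexposed: peaks concentrated on the left of the histogram"
    if bright > 0.5:
        return "likely overexposed: peaks concentrated on the right of the histogram"
    return "usable contrast: pixel values spread across distinct regions"

image = np.random.normal(128, 30, (480, 640)).clip(0, 255).astype(np.uint8)
print(describe_exposure(image))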
If the image quality does not meet your needs, try to improve the imaging
conditions to get the desired image quality. You may need to re-evaluate
and modify each component of your imaging setup: lighting equipment
and setup, lens tuning, camera operation mode, and acquisition board
parameters. If you reach the best possible conditions with your setup but
the image quality still does not meet your needs, try to improve the image
quality using the image processing techniques described in the Improve an
Image section of this chapter.
If the image quality meets your needs, use the pixel distribution
information to determine some parameters of the inspection functions you
want to use. For example, use the information from the line profile to
determine the strength of the edge at the boundary of an object. You can
input this information into the IMAQ Edge Tool VI (Machine Vision»
Caliper) to find the edges of objects along the line.
Improve an Image
Using the information you gathered from analyzing your image, you may
want to improve the quality of your image for inspection. You can improve
your image with lookup tables, filters, grayscale morphology, and Fast
Fourier transforms.
Lookup Tables
Apply lookup table (LUT) transformations to highlight image details in
areas containing significant information at the expense of other areas.
A LUT transformation converts input grayscale values in the source image
into other grayscale values in the transformed image. IMAQ Vision
provides four VIs that directly or indirectly apply lookup tables to images:
• IMAQ MathLookup (Image Processing»Processing)—Converts the
pixel values of an image by replacing them with values from a
predefined lookup table. IMAQ Vision has seven predefined lookup
tables based on mathematical transformations. Refer to Chapter 5,
Image Processing, of the IMAQ Vision Concepts Manual for more
information about these lookup tables.
• IMAQ UserLookup (Image Processing»Processing)—Converts the
pixel values of an image by replacing them with values from a
user-defined lookup table.
• IMAQ Equalize (Image Processing»Processing)—Distributes the
grayscale values evenly within a given grayscale range. Use
IMAQ Equalize to increase the contrast in images containing few
grayscale values.
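As a text-based illustration of the same ideas (not the IMAQ VIs themselves), the following NumPy/OpenCV sketch applies a mathematical lookup table, a user-defined lookup table, and histogram equalization to a synthetic 8-bit image.

import numpy as np
import cv2

# Synthetic 8-bit gradient image used in place of an acquired image.
gray = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

# Mathematical LUT: a square transformation that darkens mid-tones.
square_lut = ((np.arange(256) / 255.0) ** 2 * 255).astype(np.uint8)
squared = cv2.LUT(gray, square_lut)

# User-defined LUT: an arbitrary 256-entry mapping (here, a fixed brightness offset).
user_lut = np.clip(np.arange(256) + 40, 0, 255).astype(np.uint8)
brightened = cv2.LUT(gray, user_lut)

# Equalization spreads the grayscale values over the full range to increase contrast.
equalized = cv2.equalizeHist(gray)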
Filters
Filter your image when you need to improve the sharpness of transitions in
the image or increase the overall signal-to-noise ratio of the image. You can
choose either a lowpass or highpass filter depending on your needs.
Convolution Filter
IMAQ Convolute (Image Processing»Filters) allows you to use a
predefined set of lowpass and highpass filters. Each filter is defined by a
kernel of coefficients. Use the IMAQ GetKernel VI (Image Processing»
Filters) to retrieve predefined kernels. If the predefined kernels do not meet
your needs, define your own custom filter using a LabVIEW 2D array of
floating point numbers.
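Outside LabVIEW, the predefined-versus-custom kernel distinction can be sketched with SciPy; the kernels below are common textbook examples, not the kernels shipped with IMAQ Vision.

import numpy as np
from scipy import ndimage

gray = np.random.randint(0, 256, (240, 320)).astype(np.float32)

# Lowpass (smoothing) kernel: a 3 x 3 average.
lowpass = np.ones((3, 3), dtype=np.float32) / 9.0
smoothed = ndimage.convolve(gray, lowpass, mode="nearest")

# Custom highpass (edge-enhancing) kernel defined as a 2D array of floating point numbers.
highpass = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=np.float32)
edges = ndimage.convolve(gray, highpass, mode="nearest")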
Grayscale Morphology
Perform grayscale morphology when you want to filter grayscale
features of an image. Grayscale morphology helps you remove or
enhance isolated features, such as bright pixels on a dark background.
Use these transformations on a grayscale image to enhance non-distinct
features before thresholding the image in preparation for particle analysis.
FFT
Use the Fast Fourier Transform (FFT) to convert an image into its
frequency domain. In an image, details and sharp edges are associated
with mid to high spatial frequencies because they introduce significant
gray-level variations over short distances. Gradually varying patterns are
associated with low spatial frequencies.
You can use algorithms working in the frequency domain to isolate and
remove these unwanted frequencies from your image. Complete the
following steps to obtain an image in which the unwanted pattern has
disappeared but the overall features remain:
1. Use the IMAQ FFT VI (Image Processing»Frequency Domain) to
convert an image from the spatial domain to the frequency domain.
This VI computes the FFT of the image and results in a complex image
representing the frequency information of your image.
2. Improve your image in the frequency domain with a lowpass or
highpass frequency filter. Specify which type of filter to use with
the IMAQ ComplexAttenuate VI (Image Processing»Frequency
Domain) or the IMAQ ComplexTruncate VI (Image Processing»
Frequency Domain). Lowpass filters smooth noise, details, textures,
and sharp edges in an image. Highpass filters emphasize details,
textures, and sharp edges in images, but they also emphasize noise.
• Lowpass attenuation—The amount of attenuation is directly
proportional to the frequency information. At low frequencies,
there is little attenuation. As the frequencies increase, the
attenuation increases. This operation preserves all of the zero
frequency information. Zero frequency information corresponds
to the DC component of the image or the average intensity of
the image in the spatial domain.
• Highpass attenuation—The amount of attenuation is inversely
proportional to the frequency information. At high frequencies,
there is little attenuation. As the frequencies decrease, the
attenuation increases. The zero frequency component is removed
entirely.
• Lowpass truncation—Frequency components above the ideal
cutoff frequency are removed, and the frequencies below it remain
unaltered.
• Highpass truncation—Frequency components above the ideal
cutoff frequency remain unaltered, and the frequencies below it
are removed.
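The truncation filters described above can be mimicked in a few lines of NumPy. This sketch implements only an ideal lowpass truncation (remove everything above a cutoff radius, keep the rest); the cutoff fraction is an arbitrary placeholder, and the IMAQ frequency-domain VIs are not involved.

import numpy as np

def fft_lowpass_truncate(gray, cutoff_fraction=0.1):
    # Spatial domain -> frequency domain, with the zero frequency moved to the center.
    f = np.fft.fftshift(np.fft.fft2(gray.astype(float)))
    rows, cols = gray.shape
    y, x = np.ogrid[:rows, :cols]
    radius = np.hypot(y - rows / 2, x - cols / 2)
    cutoff = cutoff_fraction * min(rows, cols) / 2
    f[radius > cutoff] = 0               # ideal lowpass truncation
    # Frequency domain -> spatial domain.
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

gray = np.random.rand(256, 256) * 255
smoothed = fft_lowpass_truncate(gray)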
Advanced Operations
The IMAQ ImageToComplexPlane VI (Image Processing»Frequency
Domain) and IMAQ ComplexPlaneToImage VI (Image Processing»
Frequency Domain) allow you to access, process, and update
independently the magnitude, phase, real, and imaginary planes of a
complex image. You also can convert a complex image to an array and
back with the IMAQ ComplexImageToArray VI (Image Processing»
Frequency Domain) and IMAQ ArrayToComplexImage VI (Image
Processing»Frequency Domain).
Chapter 3
Grayscale and Color Measurements
This chapter describes how to take measurements from grayscale and color
images. You can make inspection decisions based on image statistics, such
as the mean intensity level in a region. Based on the image statistics, you
can perform many machine vision inspection tasks on grayscale or color
images, such as detecting the presence or absence of components, detecting
flaws in parts, and comparing a color component with a reference.
Figure 3-1 illustrates the basic steps involved in making grayscale and
color measurements.
(Figure 3-1: steps for making grayscale and color measurements, ending with Measure Grayscale Statistics and Measure Color Statistics.)
Hold down <Shift> when drawing an ROI if you want to constrain the ROI
to the horizontal, vertical, or diagonal axes, when possible. Use the
selection tool to position an ROI by its control points or vertices. ROIs are
context sensitive, meaning that the cursor actions differ depending on the
ROI with which you interact. For example, if you move your cursor over
the side of a rectangle, the cursor changes to indicate that you can click and
drag the side to resize the rectangle. If you want to draw more than one ROI
in an image display environment, hold down <Ctrl> while drawing
additional ROIs.
You can configure which ROI tools are present on the control. Complete the
following steps to configure the ROI tools palette during design time:
1. Right-click the ROI tools palette and select Visible Items»ROI Tool
Button Visibility.
2. Deselect the tools you do not want to appear in the ROI tools palette.
If you do not want any of the tools to appear, click All Hidden.
3. Click OK to implement the changes.
To get or set ROIs programmatically, use the property node for the Image
Display control.
Note If you want to draw an ROI without displaying the tools palette in an external
window, use the IMAQ WindToolsSelect VI (Vision Utilities»Region of Interest).
This VI allows you to select a contour from the tools palette without opening the palette.
You also can use the IMAQ Select Point VI, IMAQ Select Line VI, and
IMAQ Select Rectangle VI to define regions of interest. These three VIs
appear in the Machine Vision»Select Region of Interest palette.
Complete the following steps to use these VIs:
1. Call the VI to display an image in an ROI Constructor window. Only
the tools specific to that VI are available for you to use.
2. Select an ROI tool from the tools palette.
3. Draw an ROI on your image. Resize or reposition the ROI until it
covers the area you want to process.
4. Click OK to output a simple description of the ROI. You can use this
description as an input for the VIs on the Machine Vision»Intensities
palette that measure grayscale intensity:
• IMAQ Light Meter (Point)—Uses the output of IMAQ Select
Point.
• IMAQ Light Meter (Line)—Uses the output of IMAQ Select
Line.
• IMAQ Light Meter (Rect)—Uses the output of IMAQ Select
Rectangle.
(Figure: indicator outputs showing the Pixel Intensity and an image-type indicator (8-bit, 16-bit, Float, RGB, HSL, Complex).)
(Figure: a 32-bit color image is split into three 8-bit planes, such as Red/Green/Blue, Hue/Saturation/Intensity, Hue/Saturation/Luminance, or Hue/Saturation/Value, processed with 8-bit image processing, and then reassembled into a 32-bit color image.)
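The plane-splitting idea shown in the figure can be sketched with OpenCV (OpenCV calls the hue/luminance/saturation representation HLS); this illustrates the concept only and is not the IMAQ color processing API.

import numpy as np
import cv2

color = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)   # synthetic BGR image

# Split the color image into three 8-bit planes for 8-bit processing.
blue, green, red = cv2.split(color)

# The same idea with a hue/luminance/saturation representation.
hls = cv2.cvtColor(color, cv2.COLOR_BGR2HLS)
hue, luminance, saturation = cv2.split(hls)

# Process one plane with an 8-bit operation, then reassemble the color image.
luminance = cv2.equalizeHist(luminance)
recombined = cv2.cvtColor(cv2.merge([hue, luminance, saturation]), cv2.COLOR_HLS2BGR)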
Compare Colors
You can use the color matching capability of IMAQ Vision to compare or
evaluate the color content of an image or regions in an image. Complete the
following steps to compare colors using color matching:
1. Select an image containing the color information that you want to use
as a reference. The color information can consist of multiple colors.
2. Use the entire image or regions in the image to learn the color
information using the IMAQ ColorLearn VI (Image Processing»
Color Processing), which outputs a color spectrum that contains a
compact description of the color information that you learned. Refer to
Chapter 14, Color Inspection, of the IMAQ Vision Concepts Manual
for more information. Use the color spectrum to represent the learned
color information for all subsequent matching operations.
3. Define an image or multiple regions in an image as the inspection or
comparison area.
4. Use the IMAQ ColorMatch VI (Image Processing»Color
Processing) to compare the learned color information to the color
information in the inspection regions. This VI returns a score that
indicates the closeness of match. You can specify a Minimum Match
Score, which indicates whether there is a match between the input
color information and the color in each specified region in the image.
5. Use the color matching score as a measure of similarity between the
reference color information and the color information in the image
regions being compared.
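A compact color description and a match score can be prototyped with hue/saturation histograms. The sketch below is only loosely analogous to IMAQ ColorLearn and IMAQ ColorMatch: the histogram size, the correlation metric, and the scaling to a 0-1000 score are all assumptions made for illustration.

import numpy as np
import cv2

def learn_color_spectrum(bgr_region, bins=(16, 16)):
    # Compact hue/saturation histogram standing in for a learned color spectrum.
    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(hist, None).flatten()

def color_match_score(reference_spectrum, bgr_region):
    # Similarity between the learned spectrum and an inspection region, scaled to 0-1000.
    candidate = learn_color_spectrum(bgr_region)
    similarity = cv2.compareHist(reference_spectrum.astype(np.float32),
                                 candidate.astype(np.float32),
                                 cv2.HISTCMP_CORREL)        # 1.0 means identical
    return max(similarity, 0.0) * 1000

template = np.zeros((50, 50, 3), dtype=np.uint8)
template[:] = (30, 90, 200)                                  # reference color patch (BGR)
reference = learn_color_spectrum(template)
score = color_match_score(reference, template)               # close to 1000 for itself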
Figure 3-8 shows how light reflects differently off of the 3D surfaces of the
fuses, resulting in slightly different colors for identical fuses. Compare the
3-amp fuse in the upper row with the 3-amp fuse in the lower row.
The difference in light reflection results in different color spectrums for
identical fuses.
If you learn the color spectrum by drawing a region of interest inside the
3-amp fuse in the upper row and then do a color matching for the 3-amp
fuse in the upper row, you get a very high match score (close to 1000).
However, the match score for the 3-amp fuse in the lower row is quite low
(around 500). This problem could cause a mismatch for the color matching
in a fuse box inspection process.
fuses much better and results in high match scores (around 800) for both
3-amp fuses. Use as many samples as you want in an image to learn the
representative color spectrum for a specified template.
Chapter 4
Particle Analysis
This chapter describes how to perform particle analysis on your images.
Use particle analysis to find statistical information—such as the area,
number, location, and presence of particles. With this information, you can
perform many machine vision inspection tasks, such as detecting flaws on
silicon wafers or detecting soldering defects on electronic boards.
Examples of how particle analysis can help you perform web inspection
tasks include locating structural defects on wood planks or detecting cracks
on plastic sheets.
You can use different techniques to threshold your image. If all the
objects of interest in your grayscale image fall within a continuous range
of intensities and you can specify this threshold range manually, use the
IMAQ Threshold VI (Image Processing»Processing) to threshold your
image. If all the objects in your grayscale image are either brighter or
darker than your background, you can use one of the automatic
thresholding techniques in IMAQ Vision. Complete the following steps to
use one of the automatic thresholding techniques:
1. Use the IMAQ AutoBThreshold VI (Image Processing»Processing)
to select the thresholding technique that automatically determines the
optimal threshold range.
2. Connect the Threshold Data output to the IMAQ MultiThreshold VI
(Image Processing»Processing), or use the Lookup Table output to
apply a lookup table to the image using the IMAQ UserLookup VI
(Image Processing»Processing).
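For comparison outside LabVIEW, a manual threshold range and one automatic technique (Otsu's method, used here simply as an example of histogram-based automatic thresholding rather than as the IMAQ algorithm) look like this in NumPy/OpenCV:

import numpy as np
import cv2

gray = np.random.randint(0, 256, (240, 320), dtype=np.uint8)

# Manual threshold range: pixels inside [80, 160] become particle pixels.
in_range = ((gray >= 80) & (gray <= 160)).astype(np.uint8) * 255

# Automatic threshold: Otsu's method picks the split between objects and
# background from the histogram (one example of an automatic technique).
level, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)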
The advanced morphology functions require that you specify the type of
connectivity to use. Connectivity specifies how IMAQ Vision determines
whether two adjacent pixels belong to the same particle. Use connectivity-4
when you want IMAQ Vision to consider pixels to be part of the same
particle only when the pixels touch along an adjacent edge. Use
connectivity-8 when you want IMAQ Vision to consider pixels to be part
of the same particle even if the pixels touch only at a corner. Refer to
Chapter 9, Binary Morphology, of the IMAQ Vision Concepts Manual for
more information about connectivity.
If you know enough about the shape features of the particles you want to
keep, use the IMAQ Particle Filter 2 VI (Image Processing»Morphology)
to filter out particles that do not interest you.
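Connectivity and a simple size-based particle filter can be illustrated with SciPy labeling; the 20-pixel area cutoff is an arbitrary placeholder, and this stands in for, rather than reproduces, the IMAQ morphology VIs.

import numpy as np
from scipy import ndimage

binary = np.random.rand(200, 200) > 0.7                    # synthetic binary image

# Connectivity-4: pixels belong to the same particle only if they share an edge.
four = ndimage.generate_binary_structure(2, 1)
labels4, count4 = ndimage.label(binary, structure=four)

# Connectivity-8: pixels that touch only at a corner also belong to the same particle.
eight = ndimage.generate_binary_structure(2, 2)
labels8, count8 = ndimage.label(binary, structure=eight)

# Keep only particles with at least 20 pixels of area (placeholder shape criterion).
areas = ndimage.sum(binary, labels8, index=np.arange(1, count8 + 1))
kept = np.isin(labels8, np.flatnonzero(areas >= 20) + 1)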
Pattern matching algorithms use edges and patterns. Pattern matching can
locate with very high accuracy the position of fiducials or characteristic
features of the part under inspection. You can combine those locations to
compute lengths, angles, and other object measurements.
Note Diagram items enclosed with dashed lines are optional steps.
If the object under inspection is always at the same location and orientation
in the images you need to process, defining ROIs is simple. Refer to the Set
Search Areas section of this chapter for information about selecting
an ROI.
Often, the object under inspection appears shifted or rotated in the image
you need to process with respect to the reference image in which you
located the object. When this occurs, the ROIs need to shift and rotate with
the parts of the object in which you are interested. For the ROIs to move
with the object, you need to define a reference coordinate system relative
to the object in the reference image. During the measurement process, the
coordinate system moves with the object when the object appears shifted
and rotated in the image you need to process. This coordinate system is
referred to as the measurement coordinate system. The measurement VIs
automatically move the ROIs to the correct position using the position of
Note To use these techniques, the object cannot rotate more than ±65° in the image.
(Figure, panels a and b: locating the coordinate system. Legend: 1 = Search Area for the Coordinate System, 2 = Object Edges, 3 = Origin of the Coordinate System, 4 = Measurement Area.)
(The following figure, panels a and b, uses the same numbering.)
Figure 5-3. Locating Coordinate System Axes with Two Search Areas
2. Choose the parameters you need to locate the edges on the object.
3. Choose the coordinate system axis direction.
4. Choose the results that you want to overlay onto the image.
5. Choose the mode for the VI. To build a coordinate transformation for
the first time, set mode to Find Reference. To update the coordinate
transformation in subsequent images, set this mode to Update
Reference.
Note The object can rotate 360° in the image using this technique if you use
rotation-invariant pattern matching.
1. Define a template that represents the part of the object that you want
to use as a reference feature. Refer to the Find Measurement Points
section for information about defining a template.
2. Define a rectangular search area in which you expect to find the
template.
3. Choose the Match Mode. Select Rotation Invariant when you expect
your template to appear rotated in the inspection images. Otherwise,
select Shift Invariant.
4. Choose the results that you want to overlay onto the image.
5. Choose the mode for the VI. To build a coordinate transformation for
the first time, set mode to Find Reference. To update the coordinate
transformation in subsequent images, set this mode to Update
Reference.
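The effect of a measurement coordinate system can be sketched in plain NumPy: given the reference position and angle of the feature, and the position and angle found in a new image, shift and rotate the ROI corner points accordingly. This only illustrates the geometry; the IMAQ coordinate-system VIs handle it for you, and all coordinates below are made up.

import numpy as np

def move_roi(roi_points, ref_origin, ref_angle_deg, new_origin, new_angle_deg):
    # Shift and rotate ROI corner points from the reference image into a new image.
    theta = np.deg2rad(new_angle_deg - ref_angle_deg)
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    points = np.asarray(roi_points, dtype=float) - ref_origin
    return points @ rotation.T + new_origin

# Rectangular ROI defined around a feature in the reference image (x, y corners).
roi = [(120, 80), (220, 80), (220, 140), (120, 140)]
moved = move_roi(roi, ref_origin=(100, 100), ref_angle_deg=0.0,
                 new_origin=(140, 90), new_angle_deg=12.5)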
(Figure: decision flowchart for choosing a method to build the coordinate transformation, starting from whether the object positioning accuracy is better than ±65 degrees.)
The following list pairs each ROI with the measurement VIs that use it:

Rotated rectangle:
    IMAQ Find Pattern (Machine Vision»Find Patterns)
    IMAQ Clamp Horizontal Max (Machine Vision»Measure Distances)
    IMAQ Clamp Horizontal Min (Machine Vision»Measure Distances)
    IMAQ Clamp Vertical Max (Machine Vision»Measure Distances)
    IMAQ Clamp Vertical Min (Machine Vision»Measure Distances)
    IMAQ Find Horizontal Edge (Machine Vision»Locate Edges)
    IMAQ Find Vertical Edge (Machine Vision»Locate Edges)

Annulus:
    IMAQ Find Circular Edge (Machine Vision»Locate Edges)
    IMAQ Find Concentric Edge (Machine Vision»Locate Edges)
If you want to find points along a circular edge and find the circle that best
fits the edge, as shown in Figure 5-6, use the IMAQ Find Circular Edge VI
(Machine Vision»Locate Edges).
IMAQ Find Vertical Edge, IMAQ Find Horizontal Edge, and IMAQ Find
Concentric Edge locate the intersection points between a set of search
lines within the search region and the edge of an object. Specify the
separation between the lines that the VIs use to detect edges. The VIs
determine the intersection points based on their contrast, width, and
steepness. The software calculates a best-fit line with outliers rejected or a
best-fit circle through the points it found. The VIs return the coordinates of
the edges found.
IMAQ Simple Edge and IMAQ Edge Tool require you to input the
coordinates of the points along the search contour. Use the
IMAQ ROIProfile VI (Image Processing»Analysis) to obtain the
coordinates from the ROI descriptor of the contour. If you have a straight
line, use the IMAQ GetPointsOnLine VI (Machine Vision»Analytic
Geometry) to obtain the points along the line instead of using an ROI
descriptor.
IMAQ Rake works on a rectangular search region. The search lines are
drawn parallel to the orientation of the rectangle. Control the number of
search lines in the region by specifying the distance, in pixels, between
each line. Specify the search direction as left to right or right to left for
a horizontally oriented rectangle. Specify the search direction as top to
bottom or bottom to top for a vertically oriented rectangle.
IMAQ Spoke works on an annular search region, scanning the search lines
that are drawn from the center of the region to the outer boundary and that
fall within the search area. Control the number of lines in the region by
specifying the angle, in degrees, between each line. Specify the search
direction as either going from the center outward or from the outer
boundary to the center.
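A single search line of the kind these VIs scan can be emulated by sampling the image along the line and flagging large intensity jumps. The edge-strength threshold below is an arbitrary placeholder, and the IMAQ VIs additionally score contrast, width, and steepness; this is a conceptual NumPy sketch only.

import numpy as np

def edges_along_line(gray, p0, p1, samples=200, strength=30):
    # Sample pixel values along one search line and report points where the
    # intensity changes by more than `strength` between neighboring samples.
    xs = np.linspace(p0[0], p1[0], samples)
    ys = np.linspace(p0[1], p1[1], samples)
    profile = gray[ys.astype(int), xs.astype(int)].astype(float)
    hits = np.flatnonzero(np.abs(np.diff(profile)) > strength)
    return [(xs[i], ys[i]) for i in hits]

gray = np.zeros((200, 300), dtype=np.uint8)
gray[:, 150:] = 255                                          # vertical edge at x = 150
points = edges_along_line(gray, (10, 100), (290, 100))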
Symmetry
A rotationally symmetric template is less sensitive to changes in rotation
than one that is rotationally asymmetric. A rotationally symmetric template
provides good positioning information but no orientation information.
(Figure: examples of a rotationally symmetric template and a rotationally asymmetric template.)
Feature Detail
A template with relatively coarse features is less sensitive to variations in
size and rotation than a model with fine features. However, the model must
contain enough detail to identify the feature.
(Figure: examples of good feature detail and ambiguous feature detail.)
Positional Information
A template with strong edges in both the x and y directions is easier to
locate.
Background Information
Unique background information in a template improves search
performance and accuracy.
If an image contains multiple instances of a pattern and only one of them is required
for the inspection task, the presence of additional instances of the pattern can produce
incorrect results. To avoid this, reduce the search area so that only the
desired pattern lies within the search area.
Match Mode
Set the match mode to control how the pattern matching algorithm treats
the template at different orientations. If you expect the orientation of valid
matches to vary less than ±5° from the template, set the Match Mode
control to Shift Invariant. Otherwise, set Match Mode to Rotation
Invariant. Shift-invariant matching is faster than rotation-invariant
matching.
Minimum Contrast
The pattern matching algorithm ignores all image regions in which contrast
values fall below a set minimum contrast value. Contrast is the difference
between the smallest and largest pixel values in a region. Set the Minimum
Contrast control to slightly below the contrast value of the search area with
the lowest contrast.
You can set the minimum contrast to potentially increase the speed of the
pattern matching algorithm. If the search image has high contrast overall
but contains some low contrast regions, set a high minimum contrast value
to exclude all areas of the image with low contrast. Excluding these areas
significantly reduces the area in which the pattern matching algorithm must
search. However, if the search image has low contrast throughout, set a low
minimum contrast to ensure that the pattern matching algorithm looks for
the template in all regions of the image.
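Contrast here is just the difference between the largest and smallest pixel values in a region, so the test can be sketched in a few lines of Python. This is only an illustration of the idea, not the IMAQ implementation.

import numpy as np

def region_contrast(region):
    # Contrast of a region: largest pixel value minus smallest pixel value.
    region = np.asarray(region)
    return int(region.max()) - int(region.min())

minimum_contrast = 30
region = np.array([[120, 128], [125, 132]], dtype=np.uint8)
if region_contrast(region) < minimum_contrast:
    print("region ignored: contrast", region_contrast(region), "is below", minimum_contrast)
else:
    print("region searched")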
When grayscale information alone cannot reliably distinguish an object from the
background or other objects in the scene, color provides the machine vision software
with the additional information to locate the object.
Color pattern matching returns the location of the center of the template and
the template orientation. Complete the following general steps to find
features in an image using color pattern matching:
1. Define a reference or fiducial pattern in the form of a template image.
2. Use the reference pattern to train the color pattern matching algorithm
with IMAQ Setup Learn Color Pattern.
3. Define an image or an area of an image as the search area. A small
search area can reduce the time to find the features.
4. Set the Feature Mode control to Color and Shape.
5. Set the tolerances and parameters to specify how the algorithm
operates at run time using IMAQ Setup Match Color Pattern.
6. Test the search tool on test images using IMAQ Match Color Pattern.
7. Verify the results using a ranking method.
Color Information
A template with colors that are unique to the pattern provides better results
than a template that contains many colors, especially colors found in the
background or other objects in the image.
Symmetry
A rotationally symmetric template in the luminance plane is less sensitive
to changes in rotation than one that is rotationally asymmetric.
Feature Detail
A template with relatively coarse features is less sensitive to variations in
size and rotation than a model with fine features. However, the model must
contain enough detail to identify it.
Positional Information
A color template whose luminance plane contains strong edges in both the
x and y directions is easier to locate.
Background Information
Unique background information in a template improves search
performance and accuracy during the grayscale pattern matching phase.
This requirement could conflict with the color information requirement of
color pattern matching because background colors may not be desirable
during the color location phase. Avoid this problem by choosing a template
with sufficient background information for grayscale pattern matching
while specifying the exclusion of the background color during the color
location phase. Refer to the Training the Color Pattern Matching
Algorithm section of this chapter for more information on how to ignore
colors.
Exclude colors in the template that you are not interested in using during
the search phase. Ignore colors that make your template difficult to locate.
When a template differs from several regions of the search image by only
its primary color or colors, consider ignoring the predominant common
color to improve search performance. Typically, the predominant color is
the background color of the template.
Use the IMAQ Setup Learn Color Pattern VI to ignore colors. You can
ignore certain predefined colors by using Ignore Black and White. To
ignore other colors, first learn the colors to ignore using IMAQ ColorLearn.
Then set the Ignore Color Spectra control of the IMAQ Setup Learn Color
Pattern VI to the resulting color spectrum.
(Figure: 1 Search Area for 20 Amp Fuses, 2 Search Area for 25 Amp Fuses)
Color Sensitivity
Use the color sensitivity to control the granularity of the color information
in the template image. If the background and objects in the image contain
colors that are very close to colors in the template image, use a higher color
sensitivity setting. A higher sensitivity setting distinguishes colors with
very close hue values. Three color sensitivity settings are available in
IMAQ Vision: low, medium, and high. Use the default low setting if the
colors in the template are very different from the colors in the background
or other objects that you are not interested in. Increase the color sensitivity
settings as the color differences decrease. Use the Color Sensitivity control
of the IMAQ Setup Match Color Pattern VI to set the color sensitivity.
Refer to Chapter 14, Color Inspection, of the IMAQ Vision Concepts
Manual for more information on color sensitivity.
Search Strategy
Use the search strategy to optimize the speed of the color pattern matching
algorithm. The search strategy controls the step size, subsampling factor,
and percentage of color information used from the template.
Note Use the conservative strategy if you have multiple targets located very close to each
other in the image.
Minimum Contrast
Use the minimum contrast to increase the speed of the color pattern
matching algorithm. The color pattern matching algorithm ignores all
image regions where grayscale contrast values fall beneath a set minimum
contrast value. Use the Minimum Contrast control to set the minimum
contrast. Refer to the Setting Matching Parameters and Tolerances section
of this chapter for more information about minimum contrast.
You can save the template image using the IMAQ Write Image and Vision
Info VI (Machine Vision»Searching and Matching).
Make Measurements
You can make different types of measurements either directly from the
image or from points that you detect in the image.
Distance Measurements
Use clamp VIs (Machine Vision»Measure Distances) to measure the
separation between two edges in a rectangular search region. Specify the
parameters for edge detection and the separation between the search lines
that you want to use within the search region to find the edges.
Note Use the IMAQ Select Rectangle VI (Machine Vision»Select Region of Interest) to
generate a valid input search region for the clamp VIs.
First the VIs use the rake function to detect points along two edges of the
object under inspection. Then the VIs compute the distance between the
points detected on the edges along each search line of the rake. The VIs
return the largest or smallest distance in either the horizontal or vertical
direction, and they output the coordinates of all the edge points that they
find.
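As a rough illustration of that computation (not the clamp VIs' internal code), the following Python sketch takes one detected point per search line on each edge, computes the per-line separations, and reports the largest and smallest distances.

import numpy as np

def clamp_distances(first_edge_points, second_edge_points):
    # Distance between the two detected edge points along each rake search line.
    p1 = np.asarray(first_edge_points, dtype=float)
    p2 = np.asarray(second_edge_points, dtype=float)
    distances = np.hypot(p2[:, 0] - p1[:, 0], p2[:, 1] - p1[:, 1])
    return distances, distances.max(), distances.min()

left_edge  = [(10, 0), (11, 10), (10, 20)]   # hypothetical edge points
right_edge = [(52, 0), (50, 10), (53, 20)]
dists, largest, smallest = clamp_distances(left_edge, right_edge)
print(dists, largest, smallest)              # [42. 39. 43.] 43.0 39.0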
Display Results
You can overlay the results obtained at various stages of your inspection
process on the inspection image. IMAQ Vision attaches the information
that you want to overlay to the image, but it does not modify the image.
The overlay appears every time you display the image.
To use the overlay VIs, pass in the image on which you want to overlay
information and the information that you want to overlay.
Tip You can select the color of the overlays using these VIs.
You can configure the following processing VIs to overlay different types
of information on the inspection image:
• IMAQ Find Vertical Edge (Machine Vision»Locate Edges)
The following list contains the kinds of information you can overlay using the
VIs listed above:
• Search area input into the VI
• Search lines used for edge detection
• Edges detected along the search lines
• Bounding rectangle of particles
• Center of particles
• Result of the VI
Chapter 6 Calibration
After you set up your imaging system, you may want to calibrate your
system. If your imaging setup is such that the camera axis is perpendicular
or nearly perpendicular to the object under inspection and your lens has no
distortion, use simple calibration. With simple calibration, you do not need
to learn a template. Instead, you define the distance between pixels in the
horizontal and vertical directions using real-world units.
If your camera axis is not perpendicular to the object under inspection, use
perspective calibration to calibrate your system. If your lens is distorted,
use nonlinear distortion calibration.
After you calibrate your imaging system, you can attach the calibration
information to an image. Refer to the Attach Calibration Information
section of this chapter for more information. Then, depending on your
needs, you can do one of the following:
• Use the real-world measurements options on the Particle Analysis and
Particle Analysis Reports VIs to get real-world particle shape
parameters without correcting the image.
(Figure: 1 Origin of a Calibration Grid in the Real World, 2 Origin of the Same Calibration Grid in an Image)
Note If you specify a list of points instead of a grid for the calibration process,
the software defines a default coordinate system, as follows:
1. The origin is placed at the point in the list with the lowest x-coordinate value and
then the lowest y-coordinate value.
2. The angle is set to 0°.
3. The axis direction is set to Indirect.
Note If you want to specify a list of points instead of a grid, use the Reference Points
control of IMAQ Learn Calibration Template to specify the pixel to real-world mapping.
Scaling factors are the real-world distances between the dots in the calibration grid
in the x and y directions and the units in which the distances are measured. Use the
X Step and Y Step elements of the Grid Descriptor control to specify the scaling factors.
Note The user-defined learning ROI represents the area in which you are interested.
Do not confuse the learning ROI with the calibration ROI generated by the calibration
algorithm. Refer to Figure 6-6 for an illustration of calibration ROIs.
If the learning process returns a learning score below 600, try the following:
1. Make sure your grid complies with the guidelines listed in the
Define a Calibration Template section.
2. Check the lighting conditions. If you have too much or too little
lighting, the software may estimate the center of the dots incorrectly.
Also, adjust your Threshold Range to distinguish the dots from the
background.
3. Select another learning algorithm. When nonlinear lens distortion is
present, using perspective projection sometimes results in a low
learning score.
Note A high score does not reflect the accuracy of your system.
Calibration Invalidation
Any image processing operation that changes the image size or orientation
voids the calibration information in a calibrated image. Examples of VIs
that void calibration information include IMAQ Resample, IMAQ Extract,
IMAQ ArrayToImage, and IMAQ Unwrap.
Simple Calibration
When the axis of your camera is perpendicular to the image plane and lens
distortion is negligible, use a simple calibration. In a simple calibration, a
pixel coordinate is transformed to a real-world coordinate through scaling
in the horizontal and vertical directions.
(Figure: simple calibration scaling, with horizontal spacing dx and vertical spacing dy measured from the origin)
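Because the transformation is a per-axis scale, it can be sketched directly. The dx and dy values below are assumed for illustration; in an application they come from the simple calibration you define, and the IMAQ VIs perform the conversion for you.

dx = 0.05   # assumed horizontal scaling, real-world units per pixel
dy = 0.05   # assumed vertical scaling, real-world units per pixel

def pixel_to_real_world(px, py, origin=(0, 0)):
    # Scale a pixel coordinate into real-world units relative to the origin.
    return ((px - origin[0]) * dx, (py - origin[1]) * dy)

print(pixel_to_real_world(240, 120))   # (12.0, 6.0)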
Using the calibration information attached to the image, you can accurately
convert pixel coordinates to real-world coordinates to make any of the
analytic geometry measurements with IMAQ Convert Pixel to Real World
(Vision Utilities»Calibration). If your application requires that you make
shape measurements, you can use the calibrated measurements from the
IMAQ Particle Analysis or IMAQ Particle Analysis Report VIs (Image
Processing»Analysis). You also can correct the image by removing
distortion with IMAQ Correct Calibrated Image.
Appendix A Vision for LabVIEW Real-Time
Develop your vision application with NI-IMAQ and IMAQ Vision for
LabVIEW. Then download your code to run on a real-time, embedded
target. You also can add National Instruments DAQ, Motion Control, CAN,
and serial instruments to your LabVIEW RT system to create a complete,
integrated, embedded system.
System Components
Your Vision for LabVIEW RT system consists of a development system
and one or more deployed RT targets.
Development System
The Vision for LabVIEW Real-Time (Vision for LabVIEW RT)
development system is made up of the following major components:
• Host—Pentium-based machine running a Windows operating system.
Use this component to configure your PXI controller as an RT target
and to develop your application.
• RT target—RT Series hardware that runs VIs downloaded from and
built in LabVIEW. Examples of RT targets include National Instruments
PXI-8175/76 Series controllers and the NI 1450 Series Compact Vision System.
Note You need a network connection between your host machine and RT target during
development to configure the RT target and to download software and code from your host
machine onto the RT target. This network connection is optional at runtime.
Deployed System
When you have configured your host development system, you can set up
and configure additional LabVIEW RT targets for deployment. These
deployed systems use the same hardware and software as your development
LabVIEW RT target, but they do not require Windows for configuration.
Instead of using Windows for configuration, copy your configuration
information from your development RT target.
Use MAX to install IMAQ Vision for LabVIEW and any other necessary
LabVIEW RT components from your host machine onto your RT target
system. Refer to the Measurement & Automation Explorer Remote Systems
Help for specific information (within MAX, go to Help»Help Topics»
Remote Systems).
When your RT target is set up, you can write and execute IMAQ Vision
code just as you would on a Windows-based system.
Remote Display
Remote Display allows you to acquire images on your remote system and
view them on your host machine. Remote Display is automatically enabled
when you use the LabVIEW Image Display control (available with
LabVIEW 7.0 or later) or any of the IMAQ Vision display VIs (Vision
Utilities»External Display)—such as IMAQ WindDraw, IMAQ
WindToolsShow, and IMAQ ConstructROI.
Remote Display is useful for monitoring and debugging your Vision for
LabVIEW RT applications. Familiarize yourself with how Remote Display
works before using this feature.
The following details will help you prepare your application for use with
Remote Display:
• Remote Display is a front-panel feature. Therefore, your LabVIEW
front panel must be open for you to see images displayed using
Remote Display.
• Remote Display performs best when combined with IMAQ Remote
Compression (Vision Utilities»IMAQ RT). When you display
images on your remote machine, LabVIEW must send those images
over your network. This process can take up a large amount of your
network bandwidth, especially when transferring large images.
IMAQ Remote Compression allows you to specify compression
settings for those images to reduce the network bandwidth used by the
display process. In addition, compressing images may increase your
display rates on slower networks.
• IMAQ Remote Compression uses two types of compression
algorithms. Use the lossy JPEG compression algorithm on grayscale
and color images. Use the lossless Packed Binary compression
algorithm on binary images. Refer to the IMAQ Vision for LabVIEW
Help for more information on the IMAQ Remote Compression VI.
Note JPEG Compression may result in data degradation of the displayed image. There is
no degradation of the image during processing. Test various compression settings to find
the right balance for your application.
Tip Refer to the Remote Display Errors section of this appendix for more information.
RT Video Out
RT Video Out allows you to display images on a monitor that is connected
to your RT target. In IMAQ Vision, IMAQ WindDraw and IMAQ
WindClose (Vision Utilities»External Display) provide support for
RT Video Out. To display images on a monitor connected to your RT target,
input 15 for the Window Number control.
Note This feature is only available on controllers that feature the i815 chipset, such as the
National Instruments PXI-8175/76 Series controllers and the NI 1450 Series Compact
Vision System.
For certain vision algorithms, the execution time has relatively small jitter
if the input sets are similar. For instance, pattern matching produces results
in roughly the same time when searching for the same pattern in images
with common content. Therefore, many vision applications already contain
components that have consistent execution times. Running the application
on LabVIEW RT enhances the time reliability. Unfortunately, this
execution behavior is dependent on the commonality of the input sets.
In many applications, the input sets are common enough that you can safely
predict the execution time by running the application over a large,
representative set of example images. In some cases, however, getting a
Time-Bounded Execution
As with determinism, time-bounded behavior is controlled by both the
algorithm and the environment in which it executes. For this reason, some
vision algorithms are not candidates for time bounding. For example,
algorithms whose execution time varies widely between similar images are
not productive under time constraints. This includes operations, such as
skeleton, separation, and segmentation, whose execution time can vary
dramatically with slight changes in the input image. This makes choosing
a time limit for such operations difficult. However, many vision algorithms
are adaptable to time limits when the appropriate timed environment is
established.
The resources you reserve at initialization are not used until the timing
mechanism is started. These resources are intended for use in internal
processing that is not exposed in the LabVIEW environment. For objects
that are exposed in LabVIEW, always preallocate resources before entering
the time-bounded portion of your code. For example, preallocate
destination images using IMAQ Create (Vision Utilities»Image
Management) and IMAQ SetImageSize (Vision Utilities»Image
Management) before entering time-bounded code.
Preparing Resources
Allocate any resource whose exact size you know before the time limit is
started. This encourages optimal use of the reserved resources and provides
maximum flexibility.
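The same idea can be sketched in a text language: allocate every buffer whose size you already know before the timed section begins so that no allocation happens inside it. The snippet below is only an analogy for the LabVIEW preallocation described above, with an assumed image size and time budget.

import time
import numpy as np

height, width = 480, 640
# Preallocate buffers before the time-bounded section, the textual analog of
# calling IMAQ Create and IMAQ SetImageSize ahead of time.
frame = np.random.randint(0, 256, size=(height, width), dtype=np.uint8)
destination = np.empty((height, width), dtype=np.uint8)

deadline = time.perf_counter() + 0.010        # assumed 10 ms time budget
np.subtract(255, frame, out=destination)      # processing reuses the preallocated buffer
if time.perf_counter() > deadline:
    print("time budget exceeded")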
Image Files
Many applications require you to use external files, such as the template
files used by pattern matching and spatial calibration functions. Before
running your application on an RT target, you must use FTP to transfer any
external image files from your host machine to your remote target. You can
use MAX 3.0 to FTP images to your RT target. Refer to the LabVIEW
Real-Time Module User Manual for more information about using FTP.
Deployment
When you have finished developing your Vision for LabVIEW RT
application, you may want to deploy that application to a number of remote
systems. In order to achieve consistent results from your Vision for
LabVIEW RT application, you must configure these deployed systems with
the same settings you used for your development system.
Note Each deployed system must have its own RT Series controller and software.
Visit ni.com for ordering information.
Note You must purchase a Vision for LabVIEW RT run-time license and a LabVIEW
Real-Time Module run-time license for each deployed Vision for LabVIEW RT system.
Visit ni.com for more information about purchasing run-time licenses.
Troubleshooting
This section describes solutions and suggestions for common errors in
Vision for LabVIEW RT.
Programming Errors
Why won’t my LabVIEW VI run on my RT target?
Your IMAQ Vision VI may not be supported by LabVIEW RT.
The following VIs are among those not supported:
• IMAQ Browser Delete (Vision Utilities»External Display»Browser)
Note If you are using a monitor that does not support high refresh frequencies, your
images may not display correctly. Consult your monitor documentation for information on
supported refresh frequencies.
If you searched ni.com and could not find the answers you need, contact
your local office or NI corporate headquarters. Phone numbers for our
worldwide offices are listed at the front of this manual. You also can visit
the Worldwide Offices section of ni.com/niglobal to access the branch
office Web sites, which provide up-to-date contact information, support
phone numbers, email addresses, and current events.
Glossary
Numbers/Symbols
1D One-dimensional.
2D Two-dimensional.
3D Three-dimensional.
A
AIPD National Instruments internal image file format used for saving complex
images and calibration information associated with an image
(extension APD).
alignment The process by which a machine vision application determines the location,
orientation, and scale of a part being inspected.
alpha channel Channel used to code extra information, such as gamma correction, about
a color image. The alpha channel is stored as the first byte in the four-byte
representation of an RGB pixel.
arithmetic operators The image operations multiply, divide, add, subtract, and modulo.
auto-median function A function that uses dual combinations of opening and closing operations
to smooth the boundaries of objects.
B
b Bit. One binary digit, either 0 or 1.
B Byte. Eight related bits of data, an eight-bit binary number. Also denotes
the amount of memory required to store one byte of data.
barycenter The grayscale value representing the centroid of the range of an image's
grayscale values in the image histogram.
binary image An image in which the objects usually have a pixel intensity of 1 (or 255)
and the background has a pixel intensity of 0.
binary threshold Separation of an image into objects of interest (assigned a pixel value of 1)
and background (assigned pixel values of 0) based on the intensities of the
image pixels.
bit depth The number of bits (n) used to encode the value of a pixel. For a given n,
a pixel can take 2^n different values. For example, if n equals 8 bits, a pixel
can take 256 different values ranging from 0 to 255. If n equals 16 bits,
a pixel can take 65,536 different values ranging from 0 to 65,535 or
–32,768 to 32,767.
BMP Bitmap. Image file format commonly used for 8-bit and color images
(extension BMP).
border function Removes objects (or particles) in a binary image that touch the image
border.
brightness (1) A constant added to the red, green, and blue components of a color pixel
during the color decoding process.
(2) The perception by which white objects are distinguished from gray and
light objects from dark objects.
C
caliper (1) A function in the NI Vision Assistant and in NI Vision Builder for
Automated Inspection that calculates distances, angles, circular fits, and the
center of mass based on positions given by edge detection, particle analysis,
centroid, and search functions.
(2) A measurement function that finds edge pairs along a specified path in
the image. This function performs an edge extraction and then finds edge
pairs based on specified criteria such as the distance between the leading
and trailing edges, edge contrasts, and so forth.
center of mass The point on an object where all the mass of the object could be
concentrated without changing the first moment of the object about any
axis.
closing A dilation followed by an erosion. A closing fills small holes in objects and
smooths the boundaries of objects.
clustering Technique where the image is sorted within a discrete number of classes
corresponding to the number of phases perceived in an image. The gray
values and a barycenter are determined for each class. This process is
repeated until a value is obtained that represents the center of mass for each
phase or class.
CLUT Color lookup table. Table for converting the value of a pixel in an image
into a red, green, and blue (RGB) intensity.
color images Images containing color information, usually encoded in the RGB form.
color space The mathematical representation for a color. For example, color can be
described in terms of red, green, and blue; hue, saturation, and luminance;
or hue, saturation, and intensity.
complex image Stores information obtained from the FFT of an image. The complex
numbers that compose the FFT plane are encoded in 64-bit floating-point
values: 32 bits for the real part and 32 bits for the imaginary part.
connectivity Defines which of the surrounding pixels of a given pixel constitute its
neighborhood.
connectivity-4 Only pixels adjacent in the horizontal and vertical directions are considered
neighbors.
convex hull function Computes the convex hull of objects in a binary image.
convex hull The smallest convex polygon that can encapsulate a particle.
convolution kernel 2D matrices (or templates) used to represent the filter in the filtering
process. The contents of these kernels are a discrete two-dimensional
representation of the impulse response of the filter that they represent.
D
Danielsson function Similar to the distance functions, but with more accurate results.
digital image An image f (x, y) that has been converted into a discrete number of pixels.
Both spatial coordinates and brightness are specified.
dilation Increases the size of an object along its boundary and removes tiny holes in
the object.
E
edge Defined by a sharp change (transition) in the pixel intensities in an image
or along an array of pixels.
edge contrast The difference between the average pixel intensity before and the average
pixel intensity after the edge.
edge detection Any of several techniques to identify the edges of objects in an image.
edge steepness The number of pixels that corresponds to the slope or transition area of
an edge.
energy center The center of mass of a grayscale image. See center of mass.
erosion Reduces the size of an object along its boundary and eliminates isolated
points in the image.
exponential and gamma corrections Expand the high gray-level information in an image
while suppressing low gray-level information.
exponential function Decreases brightness and increases contrast in bright regions of an image,
and decreases contrast in dark regions of an image.
F
FFT Fast Fourier Transform. A method used to compute the Fourier transform
of an image.
fiducial A reference pattern on a part that helps a machine vision application find
the part's location and orientation in an image.
Fourier transform Transforms an image from the spatial domain to the frequency domain.
frequency filters Counterparts of spatial filters in the frequency domain. For images,
frequency information is in the form of spatial frequency.
ft Feet.
function A set of software instructions executed by a single line of code that may
have input and/or output parameters and returns a value when executed.
G
gamma The nonlinear change in the difference between the video signal's
brightness level and the voltage level needed to produce that brightness.
gradient filter Extracts the contours (edge detection) in gray-level values. Gradient filters
include the Prewitt and Sobel filters.
gray-level dilation Increases the brightness of pixels in an image that are surrounded by other
pixels with a higher intensity.
gray-level erosion Reduces the brightness of pixels in an image that are surrounded by other
pixels with a lower intensity.
H
h Hour.
highpass FFT filter Removes or attenuates low frequencies present in the FFT domain of an
image.
highpass filter Emphasizes the intensity variations in an image, detects edges (or object
boundaries), and enhances fine details in an image.
highpass frequency filter Attenuates or removes (truncates) low frequencies present in the
frequency domain of the image. A highpass frequency filter suppresses information
related to slow variations of light intensities in the spatial image.
histogram equalization Transforms the gray-level values of the pixels of an image to occupy the
entire range (0 to 255 in an 8-bit image) of the histogram, increasing the
contrast of the image.
histogram inversion Finds the photometric negative of an image. The histogram of a reversed
image is equal to the original histogram flipped horizontally around the
center of the histogram.
hit-miss function Locates objects in the image similar to the pattern defined in the structuring
element.
HSL Color encoding scheme using Hue, Saturation, and Luminance information
where each pixel in the image is encoded using 32 bits: 8 bits for hue, 8 bits
for saturation, 8 bits for luminance, and 8 unused bits.
hue Represents the dominant color of a pixel. The hue function is a continuous
function that covers all the possible colors generated using the R, G, and
B primaries. See also RGB.
I
I/O Input/output. The transfer of data to/from a computer system involving
communications channels, operator interface devices, and/or data
acquisition and control interfaces.
image definition The number of values a pixel can take on, which is the number of colors or
shades that you can see in the image.
image enhancement The process of improving the quality of an image that you acquire from
a sensor in terms of signal-to-noise ratio, image contrast, edge definition,
and so on.
image file A file containing pixel data and additional information about the image.
image format Defines how an image is stored in a file. Usually composed of a header
followed by the pixel data.
image mask A binary image that isolates parts of a source image for further processing.
A pixel in the source image is processed if its corresponding mask pixel has
a non-zero value. A source pixel whose corresponding mask pixel has a
value of 0 is left unchanged.
image palette The gradation of colors used to display an image on screen, usually defined
by a CLUT.
image processing Encompasses various processes and analysis functions that you can apply
to an image.
imaging Any process of acquiring and displaying images and analyzing image data.
inspection The process by which parts are tested for simple defects such as missing
parts or cracks on part surfaces.
inspection function Analyzes groups of pixels within an image and returns information about
the size, shape, position, and pixel connectivity. Typical applications
include quality of parts, analyzing defects, locating objects, and sorting
objects.
instrument driver A set of high-level software functions, such as NI-IMAQ, that control
specific plug-in computer boards. Instrument drivers are available in
several forms, ranging from a function callable from a programming
language to a virtual instrument (VI) in LabVIEW.
intensity The sum of the Red, Green, and Blue primary colors divided by three,
(Red + Green + Blue)/3.
intensity profile The gray-level distribution of the pixels along an ROI in an image.
intensity threshold Characterizes an object based on the range of gray-level values in the
object. If the intensity range of the object falls within the user-specified
range, it is considered an object. Otherwise it is considered part of the
background.
J
jitter Maximum amount of time that the execution of an algorithm varies from
one execution to the next.
JPEG Joint Photographic Experts Group. Image file format for storing 8-bit and
color images with lossy compression (extension JPG).
K
kernel Structure that represents a pixel and its relationship to its neighbors.
The relationship is specified by weighted coefficients of each neighbor.
L
labeling The process by which each object in a binary image is assigned a unique
value. This process is useful for identifying the number of objects in the
image and giving each object a unique identity.
line gauge Measures the distance between selected edges with high-precision subpixel
accuracy along a line in an image. For example, this function can be used
to measure distances between points and edges. This function also can step
and repeat its measurements across the image.
line profile Represents the gray-level distribution along a line of pixels in an image.
linear filter A special algorithm that calculates the value of a pixel based on its own
pixel value as well as the pixel values of its neighbors. The sum of this
calculation is divided by the sum of the elements in the matrix to obtain a
new pixel value.
logarithmic function Increases the brightness and contrast in dark regions of an image and
decreases the contrast in bright regions of the image.
logic operators The image operations AND, NAND, OR, XOR, NOR, XNOR, difference,
mask, mean, max, and min.
lossless compression Compression in which the decompressed image is identical to the original
image.
lossy compression Compression in which the decompressed image is visually similar but not
identical to the original image.
lowpass FFT filter Removes or attenuates high frequencies present in the FFT domain of an
image.
lowpass filter Attenuates intensity variations in an image. You can use these filters to
smooth an image by eliminating fine details and blurring edges.
lowpass frequency filter Attenuates high frequencies present in the frequency domain of the image.
A lowpass frequency filter suppresses information related to fast variations
of light intensities in the spatial image.
luma The brightness information in the video picture. The luma signal amplitude
varies in proportion to the brightness of the video signal and corresponds
exactly to the monochrome picture.
LUT Lookup table. Table containing values used to transform the gray-level
values of an image. For each gray-level value in the image, the
corresponding new value is obtained from the lookup table.
M
M (1) Mega, the standard metric prefix for 1 million or 10^6, when used with
units of measure such as volts and hertz.
(2) Mega, the prefix for 1,048,576, or 2^20, when used with B to quantify
data or computer memory.
machine vision An automated application that performs a set of visual inspection tasks.
mask FFT filter Removes frequencies contained in a mask (range) specified by the user.
match score A number ranging from 0 to 1000 that indicates how closely an acquired
image matches the template image. A match score of 1000 indicates a
perfect match. A match score of 0 indicates no match.
MB Megabyte of memory.
median filter A lowpass filter that assigns to each pixel the median value of its neighbors.
This filter effectively removes isolated pixels without blurring the contours
of objects.
morphological transformations Extract and alter the structure of objects in an image. You can
use these transformations for expanding (dilating) or reducing (eroding) objects,
filling holes, closing inclusions, or smoothing borders. They are used
primarily to delineate objects and prepare them for quantitative inspection
analysis.
N
neighbor A pixel whose value affects the value of a nearby pixel when an image is
processed. The neighbors of a pixel are usually defined by a kernel or a
structuring element.
neighborhood operations Operations on a point in an image that take into consideration
the values of the pixels neighboring that point.
nonlinear filter Replaces each pixel value with a nonlinear function of its surrounding
pixels.
nonlinear gradient filter A highpass edge-extraction filter that favors vertical edges.
Nth order filter Filters an image using a nonlinear filter. This filter orders (or classifies)
the pixel values surrounding the pixel being processed. The pixel being
processed is set to the Nth pixel value, where N is the order of the filter.
number of planes (in an image) The number of arrays of pixels that compose the image.
A gray-level or pseudo-color image is composed of one plane, while an RGB image is
composed of three planes (one for the red component, one for the blue, and
one for the green).
O
OCR Optical Character Recognition. The ability of a machine to read
human-readable text.
offset The coordinate position in an image where you want to place the origin of
another image. Setting an offset is useful when performing mask
operations.
operators Allow masking, combination, and comparison of images. You can use
arithmetic and logic operators in IMAQ Vision.
optical representation Contains the low-frequency information at the center and the high-
frequency information at the corners of an FFT-transformed image.
P
palette The gradation of colors used to display an image on screen, usually defined
by a CLUT.
particle analysis A series of processing operations and analysis functions that produce some
information about the particles in an image.
pattern matching The technique used to quickly locate a grayscale template within a
grayscale image.
pixel Picture element. The smallest division that makes up the video scan line.
For display on a computer monitor, a pixel's optimum dimension is square
(aspect ratio of 1:1, or the width equal to the height).
pixel aspect ratio The ratio between the physical horizontal size and the vertical size of the
region covered by the pixel. An acquired pixel should optimally be square,
thus the optimal value is 1.0, but typically it falls between 0.95 and 1.05,
depending on camera quality.
pixel depth The number of bits used to represent the gray level of a pixel.
PNG Portable Network Graphic. Image file format for storing 8-bit, 16-bit,
and color images with lossless compression (extension PNG).
Prewitt filter Extracts the contours (edge detection) in gray-level values using a
3 × 3 filter kernel.
proper-closing A finite combination of successive closing and opening operations that you
can use to fill small holes and smooth the boundaries of objects.
proper-opening A finite combination of successive opening and closing operations that you
can use to remove small particles and smooth the boundaries of objects.
Q
quantitative analysis Obtaining various measurements of objects in an image.
R
real time A property of an event or system in which data is processed as it is acquired
instead of being accumulated and processed at a later time.
resolution The number of rows and columns of pixels. An image composed of m rows
and n columns has a resolution of m × n.
reverse function Inverts the pixel values in an image, producing a photometric negative of
the image.
RGB Color encoding scheme using red, green, and blue (RGB) color information
where each pixel in the color image is encoded using 32 bits: 8 bits for red,
8 bits for green, 8 bits for blue, and 8 bits for the alpha value (unused).
Roberts filter Extracts the contours (edge detection) in gray level, favoring diagonal
edges.
ROI tools Collection of tools that enable you to select a region of interest from an
image. These tools let you select points, lines, annuli, polygons, rectangles,
rotated rectangles, ovals, and freehand open and closed contours.
rotational shift The amount by which one image is rotated with respect to a reference
image. This rotation is computed with respect to the center of the image.
rotation-invariant matching A pattern matching technique in which the reference pattern can be
located at any orientation in the test image as well as rotated at any degree.
S
saturation The amount of white added to a pure color. Saturation relates to the richness
of a color. A saturation of zero corresponds to a pure color with no white
added. Pink is a red with low saturation.
scale-invariant matching A pattern matching technique in which the reference pattern can be
any size in the test image.
segmentation function Fully partitions a labeled binary image into non-overlapping segments,
with each segment containing a unique object.
separation function Separates objects that touch each other by narrow isthmuses.
shift-invariant matching A pattern matching technique in which the reference pattern can be
located anywhere in the test image but cannot be rotated or scaled.
skeleton function Applies a succession of thinning operations to an object until its width
becomes one pixel.
Sobel filter Extracts the contours (edge detection) in gray-level values using a
3 × 3 filter kernel.
spatial filters Alter the intensity of a pixel with respect to variations in intensities of its
neighboring pixels. You can use these filters for edge detection, image
enhancement, noise reduction, smoothing, and so forth.
spatial resolution The number of pixels in an image, in terms of the number of rows and
columns in the image.
standard representation Contains the low-frequency information at the corners and high-frequency
information at the center of an FFT-transformed image.
subpixel analysis Finds the location of the edge coordinates in terms of fractions of a pixel.
T
template Color, shape, or pattern that you are trying to match in an image using the
color matching, shape matching, or pattern matching functions. A template
can be a region selected from an image or it can be an entire image.
threshold Separates objects from the background by assigning all pixels with
intensities within a specified range to the object and the rest of the pixels to
the background. In the resulting binary image, objects are represented with
a pixel intensity of 255 and the background is set to 0.
threshold interval Two parameters, the lower threshold gray-level value and the upper
threshold gray-level value.
TIFF Tagged Image File Format. Image format commonly used for encoding
8-bit, 16-bit, and color images (extension TIF).
time-bounded Term that describes algorithms that are designed to support a lower and
upper bound on execution time.
tools palette Collection of tools that enable you to select regions of interest, zoom in and
out, and change the image palette.
V
value The grayscale intensity of a color pixel computed as the average of the
maximum and minimum red, green, and blue values of that pixel.
VI Virtual Instrument.
(1) A combination of hardware and/or software elements, typically used
with a PC, that has the functionality of a classic stand-alone instrument.
(2) A LabVIEW software module (VI), which consists of a front panel user
interface and a block diagram program.
Index
control palette
  Image Display control, 1-1, 2-9
  IMAQ Vision control, 1-1
  Machine Vision control, 1-2
conventions used in the manual, ix
convolution filters, 2-14
coordinate reference
  building for machine vision
    choosing method (figure), 5-7
    edge detection, 5-3
    pattern matching, 5-6
  defining for calibration, 6-3
coordinates, converting pixel to real-world coordinates, 5-25
correction table, for calibration, 6-8
creating applications. See application development
creating images. See images
customer
  education, B-1
  professional services, B-1
  technical support, B-1

D
deployment, application, xi, A-9
diagnostic resources, B-1
displaying
  images, 2-8
  Remote Display, A-3
  results of inspection process, 5-28
distance measurements, 5-26
distortion, correcting. See calibration
documentation
  conventions used in manual, ix
  online library, B-1
  related documentation, x
drivers
  instrument, B-1
  NI-IMAQ, xi, 1-3, 2-2, A-1, A-2
  software, B-1

E
edge detection
  building coordinate reference, 5-3
  finding measurement points
    along multiple search contours, 5-12
    along one search contour, 5-11
    lines or circles, 5-9
error map, for calibration, 6-7
example code, B-1
external window, displaying images, 2-8

F
Fast Fourier Transform (FFT), 2-15
filters
  convolution, 2-14
  highpass, 2-14
  highpass frequency filters, 2-16
  improving images, 2-13
  lowpass, 2-14
  lowpass frequency filters, 2-16
  Nth order, 2-14
finding measurement points. See measurement points, finding
frequency domain, 2-16
function palettes
  Image Processing, 1-4
  Machine Vision, 1-5
  Vision Utilities, 1-2

G
geometrical measurements, 5-27
grayscale and color measurements
  color statistics
    color comparison, 3-11
    learning color information, 3-11
    primary components of color image (figure), 3-10

O
online technical support, B-1

P
particle analysis
  connectivity, 4-3
  creating binary image, 4-1
  improving binary image
    improving particle shapes, 4-4
    removing unwanted particles, 4-3
    separating touching particles, 4-4
  particle measurements, 4-4
  steps (figure), 4-1
particle measurements, 4-4
pattern matching
  See also color pattern matching
  building coordinate reference, 5-6
  finding measurement points
    defining and creating template images, 5-13
    defining search area, 5-15
    general steps, 5-13
    learning the template, 5-15
    setting matching parameters and tolerances, 5-17
    testing search tool on test images, 5-18
    verifying results with ranking method, 5-18
perspective errors, calibrating. See calibration
phone technical support, B-1
pixel coordinates, converting to real-world coordinates, 5-25
points, finding. See measurement points, finding
professional services, B-1
programming examples, B-1

R
ranking method for verifying pattern matching, 5-18
reading images, 2-6
regions of interest, defining
  for calibration, 6-6
  interactively
    displaying tools palette in separate window, 3-4
    for machine vision inspection, 5-8
    ROI constructor window, 3-4
    tools palette functions (table), 3-1
    tools palette tools and information (figure), 3-6
  programmatically
    for machine vision inspection, 5-9
    specifying ROI elements and parameters, 3-7
    using VIs, 3-7
  using masks, 3-8
related documentation, x
Remote Display, A-3
resource management, A-5
ROI. See regions of interest, defining
RT Video Out, A-2, A-4, A-12

S
saving calibration information, 6-10
scaling mode, for calibration, 6-8
search contour, finding points along edge, 5-11
software drivers, B-1
support, technical, B-1
system integration services, B-1

T
technical support, B-1
telephone technical support, B-1
template for calibration, defining, 6-2
template images
  defining
    color pattern matching, 5-19
    pattern matching, 5-13
  learning
    color pattern matching, 5-20
    pattern matching, 5-15
time-bounded execution, A-4, A-5, A-6
tools palette functions (table), 3-1
training, customer, B-1
troubleshooting resources, B-1
truncation
  highpass, 2-16
  lowpass, 2-16

V
validating calibration, 6-8
verifying pattern matching, 5-18
Vision Utilities function palettes, 1-2

W
Web
  professional services, B-1
  technical support, B-1
worldwide technical support, B-1