Tutorial VisionAssistant
Support
Worldwide Technical Support and Product Information: ni.com
National Instruments Corporate Headquarters: 11500 North Mopac Expressway, Austin, Texas 78759-3504 USA, Tel: 512 683 0100
Worldwide Offices: Australia 1800 300 800, Austria 43 (0) 662 45 79 90 0, Belgium 32 (0) 2 757 00 20, Brazil 55 11 3262 3599, Canada 800 433 3488, China 86 21 6555 7838, Czech Republic 420 224 235 774, Denmark 45 45 76 26 00, Finland 358 (0) 9 725 725 11, France 33 (0) 1 48 14 24 24, Germany 49 (0) 89 741 31 30, India 91 80 51190000, Israel 972 (0) 3 6393737, Italy 39 02 413091, Japan 81 3 5472 2970, Korea 82 02 3451 3400, Lebanon 961 (0) 1 33 28 28, Malaysia 1800 887710, Mexico 01 800 010 0793, Netherlands 31 (0) 348 433 466, New Zealand 0800 553 322, Norway 47 (0) 66 90 76 60, Poland 48 22 3390150, Portugal 351 210 311 210, Russia 7 095 783 68 51, Singapore 1800 226 5886, Slovenia 386 3 425 4200, South Africa 27 (0) 11 805 8197, Spain 34 91 640 0085, Sweden 46 (0) 8 587 895 00, Switzerland 41 56 200 51 51, Taiwan 886 02 2377 2222, Thailand 662 278 6777, United Kingdom 44 (0) 1635 523545
For further support information, refer to the Technical Support and Professional Services appendix. To comment on National Instruments documentation, refer to the National Instruments Web site at ni.com/info and enter the info code feedback.
© 2000–2005 National Instruments Corporation. All rights reserved.
Important Information
Warranty
The media is warranted against defects in materials and workmanship for a period of 90 days from the date of shipment, as evidenced by receipts or other documentation. National Instruments will, at its option, repair or replace equipment that proves to be defective during the warranty period. This warranty includes parts and labor.

The media on which you receive National Instruments software are warranted not to fail to execute programming instructions, due to defects in materials and workmanship, for a period of 90 days from date of shipment, as evidenced by receipts or other documentation. National Instruments will, at its option, repair or replace software media that do not execute programming instructions if National Instruments receives notice of such defects during the warranty period. National Instruments does not warrant that the operation of the software shall be uninterrupted or error free.

A Return Material Authorization (RMA) number must be obtained from the factory and clearly marked on the outside of the package before any equipment will be accepted for warranty work. National Instruments will pay the shipping costs of returning to the owner parts which are covered by warranty.

National Instruments believes that the information in this document is accurate. The document has been carefully reviewed for technical accuracy. In the event that technical or typographical errors exist, National Instruments reserves the right to make changes to subsequent editions of this document without prior notice to holders of this edition. The reader should consult National Instruments if errors are suspected. In no event shall National Instruments be liable for any damages arising out of or related to this document or the information contained in it.

EXCEPT AS SPECIFIED HEREIN, NATIONAL INSTRUMENTS MAKES NO WARRANTIES, EXPRESS OR IMPLIED, AND SPECIFICALLY DISCLAIMS ANY WARRANTY OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. CUSTOMER'S RIGHT TO RECOVER DAMAGES CAUSED BY FAULT OR NEGLIGENCE ON THE PART OF NATIONAL INSTRUMENTS SHALL BE LIMITED TO THE AMOUNT THERETOFORE PAID BY THE CUSTOMER. NATIONAL INSTRUMENTS WILL NOT BE LIABLE FOR DAMAGES RESULTING FROM LOSS OF DATA, PROFITS, USE OF PRODUCTS, OR INCIDENTAL OR CONSEQUENTIAL DAMAGES, EVEN IF ADVISED OF THE POSSIBILITY THEREOF. This limitation of the liability of National Instruments will apply regardless of the form of action, whether in contract or tort, including negligence. Any action against National Instruments must be brought within one year after the cause of action accrues. National Instruments shall not be liable for any delay in performance due to causes beyond its reasonable control. The warranty provided herein does not cover damages, defects, malfunctions, or service failures caused by owner's failure to follow the National Instruments installation, operation, or maintenance instructions; owner's modification of the product; owner's abuse, misuse, or negligent acts; and power failure or surges, fire, flood, accident, actions of third parties, or other events outside reasonable control.
Copyright
Under the copyright laws, this publication may not be reproduced or transmitted in any form, electronic or mechanical, including photocopying, recording, storing in an information retrieval system, or translating, in whole or in part, without the prior written consent of National Instruments Corporation.
Trademarks
National Instruments, NI, ni.com, and LabVIEW are trademarks of National Instruments Corporation. Refer to the Terms of Use section on ni.com/legal for more information about National Instruments trademarks. Other product and company names mentioned herein are trademarks or trade names of their respective companies. Members of the National Instruments Alliance Partner Program are business entities independent from National Instruments and have no agency, partnership, or joint-venture relationship with National Instruments.
Patents
For patents covering National Instruments products, refer to the appropriate location: Help»Patents in your software, the patents.txt file on your CD, or ni.com/patents.
Contents
About This Manual
Conventions
Related Documentation
    NI Vision
    NI Vision Assistant
    NI Vision Builder for Automated Inspection
    Other Documentation
Improve an Image
    Lookup Tables
    Filters
        Convolution Filter
        Nth Order Filter
    Grayscale Morphology
    FFT
        Advanced Operations
Reading Barcodes
    Reading 1D Barcodes
    Reading Data Matrix Barcodes
    Reading PDF417 Barcodes
Inspect Image for Defects
    Compare to Golden Template
    Verify Characters
Display Results
Time-Bounded Execution
    Initializing the Timed Environment
    Preparing Resources
    Performing Time-Bounded Vision Operations
    Closing the Timed Environment
Image Files
Deployment
Troubleshooting
    Remote Display Errors
    Programming Errors
    RT Video Out Errors
Conventions
The following conventions appear in this manual:

»  The » symbol leads you through nested menu items and dialog box options to a final action. The sequence File»Page Setup»Options directs you to pull down the File menu, select the Page Setup item, and select Options from the last dialog box.

This icon denotes a tip, which alerts you to advisory information.

This icon denotes a note, which alerts you to important information.

bold  Bold text denotes items that you must select or click in the software, such as menu items and dialog box options. Bold text also denotes parameter names.

italic  Italic text denotes variables, emphasis, a cross reference, or an introduction to a key concept. Italic text also denotes text that is a placeholder for a word or value that you must supply.

monospace  Text in this font denotes text or characters that you should enter from the keyboard, sections of code, programming examples, and syntax examples. This font is also used for the proper names of disk drives, paths, directories, programs, subprograms, subroutines, device names, functions, operations, variables, filenames, and extensions.

monospace bold  Bold text in this font denotes the messages and responses that the computer automatically prints to the screen. This font also emphasizes lines of code that are different from the other examples.
Related Documentation
In addition to this manual, the following documentation resources are available to help you create your vision application.
NI Vision
NI Vision Concepts Manual – Describes the basic concepts of image analysis, image processing, and machine vision. This document also contains in-depth discussions about imaging functions for advanced users.
NI Vision for LabVIEW VI Reference Help – Contains reference information about NI Vision for LabVIEW palettes and VIs.
NI Vision Assistant
NI Vision Assistant Tutorial – Describes the NI Vision Assistant software interface and guides you through creating example image processing and machine vision applications.
NI Vision Assistant Help – Contains descriptions of the NI Vision Assistant features and functions and provides instructions for using them.
Other Documentation
National Instruments image acquisition device user manuals – Contain installation instructions and device-specific information.
Getting Started With NI-IMAQ – Contains information about new functionality, minimum system requirements, and installation instructions for NI-IMAQ driver software.
NI Vision Hardware Help – Contains fundamental programming concepts for NI-IMAQ driver software and terminology for using NI image acquisition devices.
NI-IMAQ VI Reference Help – Contains reference information about the LabVIEW palettes and VIs for NI-IMAQ driver software.
National Instruments PXI controller user manual – Contains information about how to set up your PXI controller device in your PXI chassis if you are using the LabVIEW Real-Time Module to develop your vision application.
LabVIEW Help – Enter NI-IMAQ I/O in the Search tab of the LabVIEW Help for information about configuring the NI CVS-1450 Series compact vision system digital I/O and shutdown components, as well as parameter reference information for each of the components.
Example programs – Illustrate common applications you can create with NI Vision for LabVIEW. You can find these examples by launching the NI Example Finder from Help»Find Examples within LabVIEW 7.0 and later.
Application Notes – Contain information about advanced NI Vision concepts and applications. Application Notes are located on the National Instruments Web site at ni.com/appnotes.nsf.
NI Developer Zone (NIDZ) – Contains example programs, tutorials, technical presentations, the Instrument Driver Network, a measurement glossary, an online magazine, a product advisor, and a community area where you can share ideas, questions, and source code with developers around the world. The NI Developer Zone is located on the National Instruments Web site at ni.com/zone.
Introduction to NI Vision
This chapter describes the NI Vision for LabVIEW software, outlines the NI Vision palette organization, and lists the steps for making a machine vision application.
Note  Refer to the NI Vision Development Module Release Notes that came with your software for information about the system requirements and installation procedures for NI Vision for LabVIEW.
About NI Vision
NI Vision for LabVIEW, a part of the NI Vision Development Module, is a library of LabVIEW VIs that you can use to develop machine vision and scientific imaging applications. The NI Vision Development Module also includes the same imaging functions for LabWindows/CVI and other C development environments, as well as ActiveX controls for Visual Basic. Vision Assistant, another Vision Development Module software product, enables you to prototype your application strategy quickly without having to do any programming. Additionally, NI offers Vision Builder AI, configurable machine vision software that you can use to prototype, benchmark, and deploy applications.
NI Vision controls – Use these controls to get the functionality of corresponding NI Vision VI controls directly into your own VIs.
Machine Vision controls – Use these controls to get the functionality of corresponding Machine Vision VI controls directly into your own VIs.
This document references many VIs from the NI Vision function palette. If you have difficulty finding a VI, use the search capability of the LabVIEW VI browser.
Vision Utilities
Vision Utilities functions allow you to manipulate and display images in NI Vision.

Image Management – A group of VIs that manage images. Use these VIs to create and dispose images, set and read attributes of an image such as its size and offset, and copy one image to another. You also can use some of the advanced VIs to define the border region of an image and access the pointer to the image data.

Files – A group of VIs that read images from files, write images to files in different file formats, and get information about the image contained in a file.

External Display – A group of VIs that control the display of images in external image windows. Use these VIs to complete the following tasks:
- Get and set window attributes, such as size, position, and zoom factor
- Assign color palettes to image windows
- Set up and use image browsers
- Set up and use different drawing tools to interactively select ROIs on image windows
- Detect draw events
- Retrieve information about ROIs drawn on the image window

Note  If you have LabVIEW 7.0 or later, you also can use the Image Display control available from the Vision control palette.

Region of Interest – A group of VIs that manage ROIs. Use these VIs to programmatically define ROIs and convert ROIs to and from image masks.

Note  If you have LabVIEW 7.0 or later, you can use the property node and invoke node of the Image Display control to perform many of these ROI tasks.

Image Manipulation – A group of VIs that modify the spatial content of images. Use these VIs to resample an image, extract parts of an image, and rotate, shift, and unwrap images. This subpalette also contains VIs that copy images to and from the clipboard.

Pixel Manipulation – A group of VIs that read and modify individual pixels in an image. Use these VIs to read and set pixel values in an image or along a row or column in an image, fill the pixels in an image with a particular value, and convert an image to and from a 2D LabVIEW array.

Overlay – A group of VIs that overlay graphics on an image display environment without altering the pixel values of the image. Use these VIs to overlay the results of your inspection application onto the images you inspect.

Calibration – A group of VIs that spatially calibrate an image to take accurate, real-world measurements regardless of camera perspective or lens distortion. Use these VIs to set a simple calibration or to let NI Vision automatically learn the calibration data from a grid image. Then use the VIs to convert pixel coordinates to real-world coordinates for simple measurements.

Color Utilities – A group of VIs that access data from color images. Use these VIs to extract different color planes from an image, replace the planes of a color image with new data, convert a color image to and from a 2D array, read and set pixel values in a color image, and convert pixel values from one color space to another.

Vision RT – A group of VIs that provide functionality for using NI-IMAQ and NI Vision with the LabVIEW Real-Time (RT) Module. Use these VIs to display images to Video Out on your RT system, to control the compression setting for sending images over the network, and to time bound your processing VIs on your RT system.
Image Processing
Use the Image Processing functions to analyze, filter, and process images in NI Vision.

Processing – A group of VIs that process grayscale and binary images. Use these VIs to convert a grayscale image into a binary image using different thresholding techniques. You also can use these VIs to transform images using predefined or custom lookup tables, apply a watershed transform, change the contrast information in the image, and invert the values in an image.

Filters – A group of VIs that filter an image to enhance the information in the image. Use these VIs to smooth an image, remove noise, and highlight or enhance edges in the image. You can use a predefined convolution kernel or create custom convolution kernels.

Morphology – A group of VIs that perform morphological operations on an image. Some of these VIs perform basic morphological operations, such as dilation and erosion, on grayscale and binary images. Other VIs improve the quality of binary images by filling holes in particles, removing particles that touch the image border, removing small particles, and removing unwanted particles based on different shape characteristics of the particle. Another set of VIs in this subpalette separates touching particles, finds the skeleton of particles, and detects circular particles.

Analysis – A group of VIs that analyze the content of grayscale and binary images. Use these VIs to compute the histogram information and grayscale statistics of an image, retrieve pixel information and statistics along any one-dimensional profile in an image, and detect and measure particles in binary images.

Color Processing – A group of VIs that analyze and process color images. Use these VIs to compute the histogram of color images; apply lookup tables to color images; change the brightness, contrast, and gamma information associated with a color image; and threshold a color image. Some of these VIs also compare the color information in different images or different regions in an image using a color matching process.

Operators – A group of VIs that perform basic arithmetic and logical operations on images. Use some of these VIs to add, subtract, multiply, and divide an image with other images or constants. Use other VIs in this subpalette to apply logical operations, such as AND/NAND, OR/NOR, and XOR/XNOR, and make pixel comparisons between an image and other images or a constant. In addition, one VI in this subpalette allows you to select regions in an image to process using a masking operation.

Frequency Domain – A group of VIs that analyze and process images in the frequency domain. Use these VIs to convert an image from the spatial domain to the frequency domain using a two-dimensional Fast Fourier Transform (FFT) and convert from the frequency domain to the spatial domain using the inverse FFT. These VIs also extract the magnitude, phase, real, and imaginary planes of the complex image. In addition, these VIs allow you to convert complex images into complex 2D arrays and back. Also in this subpalette are VIs that perform basic arithmetic operations, such as addition, subtraction, multiplication, and division, between a complex image and other images or a constant. Lastly, some of these VIs allow you to filter images in the frequency domain.
Machine Vision
The NI Machine Vision VIs are high-level VIs that simplify common machine vision tasks.

Select Region of Interest – A group of VIs that allow you to select an ROI tool, draw specific ROIs in the image window, and return information about ROIs with very little programming.

Coordinate System – A group of VIs that find a coordinate system associated with an object in an image. Use these VIs to find the coordinate system using either edge detection or pattern matching. You can then use this coordinate system to take measurements from other Machine Vision VIs.

Count and Measure Objects – A VI that thresholds an image to isolate objects from the background and then finds and measures characteristics of the objects. This VI also can ignore unwanted objects in the image when making measurements.

Measure Intensities – A group of VIs that measure the intensity of a pixel at a point or the statistics of pixel intensities along a line or rectangular region in an image.

Measure Distances – A group of VIs that measure distances, such as the minimum and maximum horizontal distance between two vertically oriented edges or the minimum and maximum vertical distance between two horizontally oriented edges.

Locate Edges – A group of VIs that locate vertical, horizontal, and circular edges.

Find Patterns – A VI that learns and searches for a pattern in an image.

Searching and Matching – A group of VIs that create and search for patterns in grayscale and color images. This subpalette also contains a VI to search for objects with predefined shapes in binary images.

Caliper – A group of VIs that find edges along different profiles in the image. Use these VIs to find edges along a line, a set of parallel lines defined inside a rectangular region (rake), a set of parallel concentric lines defined inside an annular region (concentric rake), or a set of radial lines defined inside an annular region (spoke). You also can use these VIs to find pairs of edges in the image that satisfy certain criteria.

Analytic Geometry – A group of VIs that perform analytic geometry computations on a set of points in an image. Use these VIs to fit lines, circles, and ellipses to a set of points in the image; compute the area of a polygon represented by a set of points; measure distances between points; and find angles between lines represented by points. VIs in this subpalette also perform computations, such as finding the intersection point of two lines and finding the line bisecting the angle formed by two lines.

OCR – A group of VIs that perform optical character recognition in a region of the image.

Classification – A group of VIs that classify binary objects according to their shape or any user-defined feature vector.

Instrument Readers – A group of VIs that accelerate the development of applications that require reading from seven-segment displays, meters, gauges, 1D barcodes, or 2D barcodes.

Inspection – A group of VIs that compare images to a golden template image.
[Figure: Typical steps of an NI Vision application – Create an Image, Analyze an Image, Improve an Image, Make Measurements, Display Results.]
This chapter describes how to set up your imaging system, acquire and display an image, analyze the image, and prepare the image for additional processing.
2. Position your camera so that it is perpendicular to the object under inspection. If your camera acquires images of the object from an angle, perspective errors occur. Even though you can compensate for these errors with software, NI recommends that you use a perpendicular inspection angle to obtain the most accurate results.
3. Select an image acquisition device that meets your needs. National Instruments offers several image acquisition devices, such as analog color and monochrome devices as well as digital devices. Visit ni.com/vision for more information about NI image acquisition devices.
4. Configure the driver software for your image acquisition device. If you have an NI image acquisition device, configure your NI-IMAQ driver software through MAX. Open MAX by double-clicking the Measurement & Automation Explorer icon on your desktop. Refer to the NI Vision Hardware Help and the NI-IMAQ Help for more information.
Create an Image
Use the IMAQ Create VI (Vision Utilities»Image Management) to create an image reference. When you create an image, specify one of the following image data types:
- Grayscale (U8, default) – 8-bit unsigned
- Grayscale (I16) – 16-bit signed
- Grayscale (SGL) – Floating point
- Complex (CSG)
You can create multiple images by executing IMAQ Create as many times as you want, but each image you create requires a unique name. Determine the number of required images through an analysis of your intended application. Base your decision on different processing phases and whether you need to keep the original image after each processing phase.
Note  If you plan to use filtering or particle analysis VIs on the image, refer to their help topics in the NI Vision for LabVIEW VI Reference Help for information about the appropriate border size for the image. The default border size is three pixels.

When you create an image, NI Vision creates an internal image structure to hold properties of the image, such as its name and border size. However, no memory is allocated to store the image pixels at this time. NI Vision VIs automatically allocate the appropriate amount of memory when the image size is modified. For example, VIs that acquire or resample an image alter the image size, so they allocate the appropriate memory space for the image pixels. The output of IMAQ Create is a reference to the image structure. Supply this reference as an input to all subsequent NI Vision functions.

During development, you may want to examine the contents of your image at run time. With LabVIEW 7.0 or later, you can use a LabVIEW image probe to view the contents of your image during execution. To create a probe, right-click the image wire and select Probe.

Most VIs belonging to the NI Vision library require an input of one or more image references. The number of image references a VI takes depends on the image processing function and the type of image you want to use. NI Vision VIs that analyze the image but do not modify the contents require the input of only one image reference. VIs that process the contents of images may require a reference to the source image(s) and to a destination image, or the VIs may have an optional destination image. If you do not provide a destination image, the VI modifies the source image.

At the end of your application, dispose of each image that you created using the IMAQ Dispose VI (Vision Utilities»Image Management).
Image Analysis
The following connector pane applies only to VIs that analyze an image and therefore do not modify either the size or contents of the image. Examples of these types of operations include particle analysis and histogram calculations.
Image Masks
The following connector pane introduces an Image Mask.
The presence of an Image Mask input indicates that the processing or analysis is dependent on the contents of another image: the Image Mask. The only pixels in Image that are processed are those whose corresponding pixels in Image Mask are non-zero. If an Image Mask pixel is 0, the corresponding Image pixel is not changed. Image Mask must be an 8-bit image. If you want to apply a processing or analysis function to the entire image, do not connect the Image Mask input. Connecting the same image to both inputs Image and Image Mask also gives the same effect as leaving the input Image Mask unconnected, except in this case the Image must be an 8-bit image.
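The masking rule is easy to state in code. The following plain C sketch is an illustration only, not an NI Vision call, and its function name is hypothetical; it inverts only the pixels whose corresponding mask pixel is non-zero:

    #include <stdint.h>
    #include <stddef.h>

    /* Process (here: invert) only the pixels of image whose corresponding
       mask pixel is non-zero; pixels where the mask is 0 stay unchanged. */
    void invert_masked(uint8_t* image, const uint8_t* mask, size_t numPixels)
    {
        for (size_t i = 0; i < numPixels; i++) {
            if (mask[i] != 0) {              /* mask must be 8-bit */
                image[i] = 255 - image[i];   /* process this pixel */
            }                                /* else: leave untouched */
        }
    }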
Image Filling
The following connector pane applies to VIs performing an operation that fills an image.
Examples of this type of operation include reading a file, acquiring an image from an NI image acquisition device, or transforming a 2D array into an image. This type of VI can modify the size of an image.
Image Processing
The following connector pane applies to VIs that process an image.
This connector is the most common type in NI Vision. The Image Src input receives the image to process. The Image Dst input can receive either another image or the original, depending on your goals. If two different images are connected to the two inputs, the original Image Src image is not modified. As shown in the following diagrams, if the Image Dst and Image Src inputs receive the same image, or if nothing is connected to Image Dst, the processed image is placed into the original image, and the original image data is lost.
The Image Dst input is the image that receives the processing results. Depending on the functionality of the VI, this image can be either the same or a different image type as that of the source image. The VI descriptions in the NI Vision for LabVIEW VI Reference Help include the type of image that can be connected to the Image inputs. The image connected to Image Dst is resized to the source image size.
Some VIs operate on two source images and one destination image. You can perform an operation between two images, A and B, and then store the result either in another image, Image Dst, or in one of the two source images, A or B. In the latter case, you can consider the original data to be unnecessary after the processing has occurred. The following combinations are possible in this pane.
In the pane on the left, the three images are all different. Image Src A and Image Src B are intact after processing and the results from this operation are stored in Image Dst. In the center pane, Image Src A also is connected to the Image Dst, which therefore receives the results from the operation. In this operation, the source data for Image Src A is overwritten. In the pane on the right, Image Src B receives the results from the operation and its source data is overwritten. Most operations between two images require that the images have the same type and size. However, arithmetic operations can work between two different types of images.
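In the NI Vision C API, the same three combinations appear as different argument choices to one function. The sketch below assumes imaqAdd(dest, sourceA, sourceB), the C counterpart of the Add operation, with all three images created and sized elsewhere:

    #include <nivision.h>

    void add_examples(Image* a, Image* b, Image* dst)
    {
        imaqAdd(dst, a, b);  /* left pane: A and B intact, result in dst */
        imaqAdd(a, a, b);    /* center pane: result overwrites source A  */
        imaqAdd(b, a, b);    /* right pane: result overwrites source B   */
    }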
VIs that acquire images automatically allocate the memory space required to accommodate the image data. Use one of the following methods to acquire images with a National Instruments image acquisition device:
- Acquire a single image using the IMAQ Snap VI (Image Acquisition). When you call this VI, it initializes the image acquisition device and acquires the next incoming video frame. Use this VI for single-capture applications where ease of programming is essential.
- Acquire images continually through a grab acquisition. Grab functions perform an acquisition that loops continually on one buffer. Use the grab functions for high-speed image acquisition. Use the IMAQ Grab Setup VI (Image Acquisition) to start the acquisition. Use the IMAQ Grab Acquire VI (Image Acquisition) to return a copy of the current image. Use the IMAQ Stop VI (Image Acquisition»Low-Level Acquisition) to stop the acquisition.
- Acquire a fixed number of images using the IMAQ Sequence VI (Image Acquisition). IMAQ Sequence acquires one image after another until it has acquired the number of images you requested. If you want to acquire only certain images, supply IMAQ Sequence with a table describing the number of frames to skip after each acquired frame.
Note  When you are finished with the image acquisition, you must use the IMAQ Close VI (Image Acquisition) to release resources associated with the image acquisition device.

Use the IMAQ ReadFile VI (Vision Utilities»Files) to open and read data from a file stored on your computer into the image reference. You can read from image files stored in a standard format, such as BMP, TIFF, JPEG, JPEG2000, PNG, and AIPD, or a nonstandard format you specify. In all cases, the software automatically converts the pixels it reads into the type of image you pass in.

Use the IMAQ Read Image and Vision Info VI (Vision Utilities»Files) to open an image file containing additional information, such as calibration information, template information for pattern matching, or overlay information. Refer to Chapter 5, Performing Machine Vision Tasks, for information about pattern matching templates and overlays.

You also can use the IMAQ GetFileInfo VI (Vision Utilities»Files) to retrieve image properties, including image size, pixel depth, recommended image type, and calibration units, without actually reading all the image data.
Use IMAQ AVI Open and IMAQ AVI Read Frame to open and read data from an AVI file stored on your computer into the image reference. NI Vision automatically converts the pixels it reads into the type of image you pass in.
Note  When you are finished with the AVI file, you must use IMAQ AVI Close to release resources associated with the AVI file.

Use the IMAQ ArrayToImage VI (Vision Utilities»Pixel Manipulation) to convert a 2D array to an image. You also can use the IMAQ ImageToArray VI (Vision Utilities»Pixel Manipulation) to convert an image to a LabVIEW 2D array.
Display an Image
You can display images in LabVIEW using two methods. You can display an image in an external window using the external display VIs on the External Display function palette. You can also display an image directly on the front panel using the Image Display control on the Vision control palette.
External image windows are not LabVIEW panels. They are managed directly by NI Vision.
You can display grayscale images meaningfully by applying a color palette to the window. Use the IMAQ GetPalette VI (Vision Utilities»Display) to obtain predefined color palettes. For example, if you need to display a binary image, an image containing particle regions with pixel values of 1 and a background region with pixel values of 0, apply the predefined binary palette. Refer to Chapter 2, Display, of the NI Vision Concepts Manual for more information about color palettes.
Note  At the end of your application, you must close all open external windows using the IMAQ WindClose VI (Vision Utilities»Display).
If your Palette View is set to Express, you can access the Image Display control by right-clicking the front panel and selecting All Controls»Vision.
To display an image, wire the image output of an NI Vision VI into the image display terminal on the block diagram, as shown in Figure 2-2.
Figure 2-2. An Image Wired into the Image Display Control Terminal
The Image Display control contains the following elements:
- Display area – Displays an image.
- Image information indicator – Displays information about your image and the ROI that you are currently drawing.
- ROI tools palette – Contains tools for drawing ROIs, panning, and zooming. Unlike external display windows, each Image Display control uses its own set of tools.
- Scrollbars – Allow you to position the image in the display area.
During design time, you can customize the appearance of the control by rearranging the control elements, configuring properties through the popup menu, or selecting the control and clicking Edit»Customize Control. During run time, you can customize many parts of the control using property nodes.
Note  Not all functionality available during design time is available at run time.

To create a property node, right-click the control and select Create»Property Node. Click the Property Node once to see the properties you can configure. Properties specific to the Image Display control appear at the end of the list. The following list describes a subset of the properties available for the Image Display control:
- Snapshot Mode – Determines if the control makes a copy of the image or has a reference to the image. When you enable Snapshot Mode, if the inspection image changes later in your application, the Image Display control continues to display the image as it was when the image was wired into the control. Enabling Snapshot Mode may reduce the speed of your application because the control makes a copy of the image. Enable this property when you want to display a snapshot of the image in time. Disable this property when you need to display results quickly, such as during a grab acquisition. The property is disabled by default.

Note  To cause the Image Display control to refresh the image immediately, use the Refresh Image method. To create a method, right-click the control and select Create»Invoke Node. Click the Invoke Node once to see the available methods. Methods specific to the Image Display control appear at the end of the popup menu.

- Palette – Determines which color palette the Image Display control uses to display images. You can configure the control to use a predefined color palette or a custom color palette. Define a custom color palette with the User Palette property node. You also can change the color palette of the control or an image probe at run time by right-clicking the Image Display control.
- Maximum Contour Count – Sets the maximum number of ROI contours a user can draw on an image display.

The Image Display control also includes the following methods:
- Get Last Event – Returns the last user event, resulting from mouse movements and clicks, on the Image Display control. This method has the same behavior as IMAQ WindLastEvent does for external display windows.
- Clear ROI – Removes any ROIs on the Image Display control.
- Refresh Image – Refreshes the display to show the latest image. This method is useful if the snapshot control is disabled, but you want the Image Display control to show the latest changes to the image.
Analyze an Image
When you acquire and display an image, you may want to analyze the contents of the image for the following reasons:
- To determine if the image quality is sufficient for your inspection task
- To obtain the values of parameters that you want to use in processing functions during the inspection process
The histogram and line profile tools can help you analyze the quality of your images.
Use the IMAQ Histograph and IMAQ Histogram VIs (Image Processing»Analysis) to analyze the overall grayscale distribution in the image. Use the histogram of the image to analyze two important criteria that define the quality of an image: saturation and contrast. If your image is underexposed because it was acquired in an environment without sufficient light, the majority of your pixels will have low intensity values, which appear as a concentration of peaks on the left side of your histogram. If your image is overexposed because it was acquired in an environment with too much light, the majority of the pixels will have high intensity values, which appear as a concentration of peaks on the right side of your histogram. If your image has an appropriate amount of contrast, your histogram will have distinct regions of pixel concentrations.

Use the histogram information to decide if the image quality is sufficient to separate objects of interest from the background. If the image quality meets your needs, use the histogram to determine the range of pixel values that correspond to objects in the image. You can use this range in processing functions, such as determining a threshold range during particle analysis.

If the image quality does not meet your needs, try to improve the imaging conditions to get the necessary image quality. You may need to re-evaluate and modify each component of your imaging setup: lighting equipment and setup, lens tuning, camera operation mode, and acquisition device parameters. If you reach the best possible conditions with your setup but the image quality still does not meet your needs, try to improve the image quality using the image processing techniques described in the Improve an Image section of this chapter.

Use the IMAQ LineProfile VI (Image Processing»Analysis) to get the pixel distribution along a line in the image, or use the IMAQ ROIProfile VI (Image Processing»Analysis) to get the pixel distribution along a one-dimensional path in the image. To use a line profile to analyze your image, draw or specify a line across the boundary of an object in your image. Use IMAQ LineProfile to examine the pixel values along the line. By looking at the pixel distribution along this line, you can determine if the image quality is good enough to provide you with sharp edges at object boundaries. Also, you can determine if the image is noisy, and identify the characteristics of the noise.

If the image quality meets your needs, use the pixel distribution information to determine some parameters of the inspection functions you want to use. For example, use the information from the line profile to determine the strength of the edge at the boundary of an object. You can input this information into the IMAQ Edge Tool VI (Machine Vision»Caliper) to find the edges of objects along the line.
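The underexposure and overexposure criteria above can be checked mechanically. The following plain C sketch is a conceptual illustration, not the IMAQ Histogram implementation; the half-the-pixels threshold and the 16-level tails are arbitrary choices for the example:

    #include <stdint.h>
    #include <stddef.h>
    #include <stdio.h>

    void check_exposure(const uint8_t* pixels, size_t n)
    {
        size_t hist[256] = {0};
        for (size_t i = 0; i < n; i++)
            hist[pixels[i]]++;

        /* Count pixels in the darkest and brightest 16 gray levels. */
        size_t dark = 0, bright = 0;
        for (int v = 0; v < 16; v++) {
            dark   += hist[v];
            bright += hist[255 - v];
        }

        if (dark > n / 2)
            printf("mostly low intensities: image may be underexposed\n");
        else if (bright > n / 2)
            printf("mostly high intensities: image may be overexposed\n");
        else
            printf("intensity distribution looks usable\n");
    }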
Improve an Image
Using the information you gathered from analyzing your image, you may want to improve the quality of your image for inspection. You can improve your image with lookup tables, filters, grayscale morphology, and Fast Fourier transforms.
Lookup Tables
Apply lookup table (LUT) transformations to highlight image details in areas containing significant information at the expense of other areas. A LUT transformation converts input grayscale values in the source image into other grayscale values in the transformed image. NI Vision provides four VIs that directly or indirectly apply lookup tables to images:
- IMAQ MathLookup (Image Processing»Processing) – Converts the pixel values of an image by replacing them with values from a predefined lookup table. NI Vision has seven predefined lookup tables based on mathematical transformations. Refer to Chapter 5, Image Processing, of the NI Vision Concepts Manual for more information about these lookup tables.
- IMAQ UserLookup (Image Processing»Processing) – Converts the pixel values of an image by replacing them with values from a user-defined lookup table.
- IMAQ Equalize (Image Processing»Processing) – Distributes the grayscale values evenly within a given grayscale range. Use IMAQ Equalize to increase the contrast in images containing few grayscale values.
- IMAQ Inverse (Image Processing»Processing) – Inverts the pixel intensities of an image to compute the negative of the image. For example, use IMAQ Inverse before applying an automatic threshold to your image if the background pixels are brighter than the object pixels.
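Conceptually, every LUT VI reduces to one table-indexing pass over the image. The plain C sketch below shows that pass for an 8-bit buffer and builds the inverse table that reproduces the effect described for IMAQ Inverse; both function names are hypothetical:

    #include <stdint.h>
    #include <stddef.h>

    /* Replace every pixel value with the table entry it indexes. */
    void apply_lut(uint8_t* pixels, size_t n, const uint8_t lut[256])
    {
        for (size_t i = 0; i < n; i++)
            pixels[i] = lut[pixels[i]];
    }

    /* Build an inverse LUT and apply it to compute the image negative. */
    void invert_image(uint8_t* pixels, size_t n)
    {
        uint8_t inverse[256];
        for (int v = 0; v < 256; v++)
            inverse[v] = (uint8_t)(255 - v);
        apply_lut(pixels, n, inverse);
    }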
Filters
Filter your image when you need to improve the sharpness of transitions in the image or increase the overall signal-to-noise ratio of the image. You can choose either a lowpass or highpass filter depending on your needs.

Lowpass filters remove insignificant details by smoothing the image, removing sharp details, and smoothing the edges between the objects and the background. You can use the IMAQ LowPass VI (Image Processing»Filters) or define your own lowpass filter with the IMAQ Convolute VI or IMAQ NthOrder VI.

Highpass filters emphasize details, such as edges, object boundaries, or cracks. These details represent sharp transitions in intensity value. You can define your own highpass filter with IMAQ Convolute or IMAQ NthOrder or use the IMAQ EdgeDetection VI or IMAQ CannyEdgeDetection VI (Image Processing»Filters). IMAQ EdgeDetection allows you to find edges in an image using predefined edge detection kernels, such as the Sobel, Prewitt, and Roberts kernels.
Convolution Filter
IMAQ Convolute (Image Processing»Filters) allows you to use a predefined set of lowpass and highpass filters. Each filter is defined by a kernel of coefficients. Use the IMAQ GetKernel VI (Image Processing»Filters) to retrieve predefined kernels. If the predefined kernels do not meet your needs, define your own custom filter using a LabVIEW 2D array of floating point numbers.
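For reference, this is what a 3 x 3 convolution does conceptually for a plain 8-bit buffer. It is not the IMAQ Convolute implementation, and it skips border pixels that NI Vision handles through the image border region:

    #include <stdint.h>

    void convolve3x3(const uint8_t* src, uint8_t* dst, int width, int height,
                     const float kernel[3][3])
    {
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                /* Each output pixel is the weighted sum of its 3x3
                   neighborhood, using the kernel of coefficients. */
                float sum = 0.0f;
                for (int ky = -1; ky <= 1; ky++)
                    for (int kx = -1; kx <= 1; kx++)
                        sum += kernel[ky + 1][kx + 1] *
                               src[(y + ky) * width + (x + kx)];
                if (sum < 0.0f)   sum = 0.0f;    /* clamp to 8-bit range */
                if (sum > 255.0f) sum = 255.0f;
                dst[y * width + x] = (uint8_t)sum;
            }
        }
    }

    /* Example lowpass kernel: a normalized 3x3 average. */
    static const float smooth[3][3] = {
        { 1.0f/9, 1.0f/9, 1.0f/9 },
        { 1.0f/9, 1.0f/9, 1.0f/9 },
        { 1.0f/9, 1.0f/9, 1.0f/9 },
    };

A kernel whose coefficients sum to 1, like the averaging kernel above, smooths the image (lowpass); kernels mixing positive and negative coefficients, such as the Sobel kernel, emphasize edges (highpass).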
Grayscale Morphology
Perform grayscale morphology when you want to filter grayscale features of an image. Grayscale morphology helps you remove or enhance isolated features, such as bright pixels on a dark background. Use these transformations on a grayscale image to enhance non-distinct features before thresholding the image in preparation for particle analysis.
Grayscale morphological transformations compare a pixel to those pixels surrounding it. The transformation keeps the smallest pixel values when performing an erosion or keeps the largest pixel values when performing a dilation. Refer to Chapter 5, Image Processing, of the NI Vision Concepts Manual for more information about grayscale morphology transformations.

Use the IMAQ GrayMorphology VI (Image Processing»Morphology) to perform one of the following seven transformations:
- Erosion – Reduces the brightness of pixels that are surrounded by neighbors with a lower intensity. Define the neighborhood with a structuring element. Refer to Chapter 9, Binary Morphology, of the NI Vision Concepts Manual for more information about structuring elements.
- Dilation – Increases the brightness of pixels surrounded by neighbors with a higher intensity. A dilation has the opposite effect of an erosion.
- Opening – Removes bright pixels isolated in dark regions and smooths boundaries.
- Closing – Removes dark spots isolated in bright regions and smooths boundaries.
- Proper-opening – Removes bright pixels isolated in dark regions and smooths the boundaries of regions.
- Proper-closing – Removes dark pixels isolated in bright regions and smooths the boundaries of regions.
- Auto-median – Generates simpler particles that have fewer details.
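As a conceptual sketch, not the IMAQ GrayMorphology implementation, grayscale erosion with a 3 x 3 structuring element replaces each pixel with the minimum of its neighborhood; dilation is the same loop with a maximum. Borders are skipped for brevity:

    #include <stdint.h>

    void erode3x3(const uint8_t* src, uint8_t* dst, int width, int height)
    {
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                uint8_t min = 255;
                for (int ky = -1; ky <= 1; ky++)
                    for (int kx = -1; kx <= 1; kx++) {
                        uint8_t v = src[(y + ky) * width + (x + kx)];
                        if (v < min) min = v;   /* keep smallest value */
                    }
                dst[y * width + x] = min;       /* dims isolated bright pixels */
            }
        }
    }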
FFT
Use FFT to convert an image into its frequency domain. In an image, details and sharp edges are associated with mid to high spatial frequencies because they introduce significant gray-level variations over short distances. Gradually varying patterns are associated with low spatial frequencies. An image can have extraneous noise, such as periodic stripes, introduced during the digitization process. In the frequency domain, the periodic pattern is reduced to a limited set of high spatial frequencies. Also, the imaging setup may produce non-uniform lighting of the field of view, which produces an image with a light drift superimposed on the information you want to analyze. In the frequency domain, the light drift appears as a limited set of low frequencies around the average intensity of the image (DC component).
You can use algorithms working in the frequency domain to isolate and remove these unwanted frequencies from your image. Complete the following steps to obtain an image in which the unwanted pattern has disappeared but the overall features remain:
1. Use the IMAQ FFT VI (Image Processing»Frequency Domain) to convert an image from the spatial domain to the frequency domain. This VI computes the FFT of the image and results in a complex image representing the frequency information of your image.
2. Improve your image in the frequency domain with a lowpass or highpass frequency filter. Specify which type of filter to use with the IMAQ ComplexAttenuate VI (Image Processing»Frequency Domain) or the IMAQ ComplexTruncate VI (Image Processing»Frequency Domain). Lowpass filters smooth noise, details, textures, and sharp edges in an image. Highpass filters emphasize details, textures, and sharp edges in images, but they also emphasize noise.
   - Lowpass attenuation – The amount of attenuation is directly proportional to the frequency information. At low frequencies, there is little attenuation. As the frequencies increase, the attenuation increases. This operation preserves all of the zero frequency information. Zero frequency information corresponds to the DC component of the image or the average intensity of the image in the spatial domain.
   - Highpass attenuation – The amount of attenuation is inversely proportional to the frequency information. At high frequencies, there is little attenuation. As the frequencies decrease, the attenuation increases. The zero frequency component is removed entirely.
   - Lowpass truncation – Frequency components above the ideal cutoff frequency are removed, and the frequencies below it remain unaltered.
   - Highpass truncation – Frequency components above the ideal cutoff frequency remain unaltered, and the frequencies below it are removed.
3. To transform your image back to the spatial domain, use the IMAQ InverseFFT VI (Image Processing»Frequency Domain).
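The following sketch illustrates the lowpass attenuation behavior described in step 2: a gain that falls with distance from the DC component while leaving the DC component itself intact. It mimics the described behavior only; the exact IMAQ ComplexAttenuate formula may differ:

    #include <math.h>

    /* Gain for bin (u, v) in a width x height spectrum with DC at (0, 0). */
    double lowpass_gain(int u, int v, int width, int height)
    {
        double maxDist = sqrt((double)(width * width + height * height)) / 2.0;
        double dist    = sqrt((double)(u * u + v * v));
        if (dist == 0.0)
            return 1.0;                 /* keep the DC component intact */
        return 1.0 - dist / maxDist;    /* more attenuation at higher freq */
    }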
Advanced Operations
The IMAQ ImageToComplexPlane VI (Image Processing»Frequency Domain) and IMAQ ComplexPlaneToImage VI (Image Processing»Frequency Domain) allow you to access, process, and update independently the magnitude, phase, real, and imaginary planes of a complex image. You also can convert a complex image to an array and back with the IMAQ ComplexImageToArray VI (Image Processing»Frequency Domain) and IMAQ ArrayToComplexImage VI (Image Processing»Frequency Domain).
This chapter describes how to take measurements from grayscale and color images. You can make inspection decisions based on image statistics, such as the mean intensity level in a region. Based on the image statistics, you can perform many machine vision inspection tasks on grayscale or color images, such as detecting the presence or absence of components, detecting flaws in parts, and comparing a color component with a reference. Figure 3-1 illustrates the basic steps involved in making grayscale and color measurements.
The tools palette provides the following ROI tools (tool – function and action):
- Selection Tool – Select an ROI in the image and adjust the position of its control points and contours. Action: Click ROI or control points.
- Point – Select a pixel in the image. Action: Click the position of the pixel.
- Line – Draw a line in the image. Action: Click the initial position and click again at the final position.
- Rectangle – Draw a rectangle or square in the image. Action: Click one corner and drag to the opposite corner.
- Oval – Draw an oval or circle in the image. Action: Click the center position and drag to the required size.
- Polygon – Draw a polygon in the image. Action: Click to place a new vertex and double-click to complete the ROI element.
- Freehand Region – Draw a freehand region in the image. Action: Click the initial position, drag to the required shape, and release the mouse button to complete the shape.
- Annulus – Draw an annulus in the image. Action: Click the center position and drag to the required size. Adjust the inner and outer radii, and adjust the start and end angle.
- Zoom – Zoom in or zoom out in an image. Action: Click the image to zoom in. Hold down the <Shift> key and click to zoom out.
- Pan – Pan around an image. Action: Click an initial position, drag to the required position, and release the mouse button to complete the pan.
- Broken Line – Draw a broken line in the image. Action: Click to place a new vertex and double-click to complete the ROI element.
- Freehand Line – Draw a freehand line in the image. Action: Click the initial position, drag to the required shape, and release the mouse button to complete the shape.
- Rotated Rectangle – Draw a rotated rectangle in the image. Action: Click one corner and drag to the opposite corner to create the rectangle. Then click the lines inside the rectangle and drag to adjust the rotation angle.
Hold down the <Shift> key when drawing an ROI if you want to constrain the ROI to the horizontal, vertical, or diagonal axes, when possible. Use the selection tool to position an ROI by its control points or vertices. ROIs are context sensitive, meaning that the cursor actions differ depending on the ROI with which you interact. For example, if you move your cursor over the side of a rectangle, the cursor changes to indicate that you can click and drag the side to resize the rectangle. If you want to draw more than one ROI in an image display environment, hold down the <Ctrl> key while drawing additional ROIs.
You can configure which ROI tools are present on the control. Complete the following steps to configure the ROI tools palette during design time:
1. Right-click the ROI tools palette and select Visible Items»ROI Tool Button Visibility.
2. Deselect the tools you do not want to appear in the ROI tools palette. If you do not want any of the tools to appear, click All Hidden.
3. Click OK to implement the changes.
To get or set ROIs programmatically, use the property node for the Image Display control.
Note  If you want to draw an ROI without displaying the tools palette in an external window, use the IMAQ WindToolsSelect VI (Vision Utilities»Region of Interest). This VI allows you to select a contour from the tools palette without opening the palette.
Complete the following steps to invoke an ROI constructor and define an ROI from within the ROI constructor window:
1. Use the IMAQ ConstructROI VI (Vision Utilities»Region of Interest) to display an image and the tools palette in an ROI constructor window, as shown in Figure 3-2.
2. Select an ROI tool from the tools palette.
3. Draw an ROI on your image. Resize and reposition the ROI until it designates the area you want to inspect.
4. Click OK to output a descriptor of the region you selected.

You can input the ROI descriptor into many analysis and processing functions. You also can convert the ROI descriptor into an image mask, which you can use to process selected regions in the image. Use the IMAQ ROIToMask VI (Vision Utilities»Region of Interest) to convert the ROI descriptor into an image mask.
You also can use the IMAQ Select Point VI, IMAQ Select Line VI, and IMAQ Select Rectangle VI to define regions of interest. These three VIs appear in the Machine Vision»Select Region of Interest palette. Complete the following steps to use these VIs:
1. Call the VI to display an image in an ROI Constructor window. Only the tools specific to that VI are available for you to use.
2. Select an ROI tool from the tools palette.
3. Draw an ROI on your image. Resize or reposition the ROI until it covers the area you want to process.
4. Click OK to output a simple description of the ROI.

You can use this description as an input for the VIs on the Machine Vision»Measure Intensities palette that measure grayscale intensity:
- IMAQ Light Meter (Point) – Uses the output of IMAQ Select Point
- IMAQ Light Meter (Line) – Uses the output of IMAQ Select Line
- IMAQ Light Meter (Rect) – Uses the output of IMAQ Select Rectangle
[Figure legend: 1 – Pixel Intensity; 2 – Image Type Indicator (8-bit, 16-bit, Float, Complex, 32-bit RGB, HSL, 64-bit RGB); 3 – Coordinates of the mouse on the active image window; 4 – Anchoring coordinates of an ROI; 5 – Size of an active ROI; 6 – Length and horizontal angle of a line region]
Specify regions by providing basic parameters that describe the region you want to define. For example, define a point by providing the x-coordinate and y-coordinate. Define a line by specifying the start and end coordinates. Define a rectangle by specifying the coordinates of the top/left point, bottom/right point, and the rotation angle in thousandths of degrees.
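In the NI Vision C API, the same simple region parameters are packaged with small helper constructors. The sketch below assumes the helpers imaqMakePoint and imaqMakeRect with the argument orders shown, and the coordinate values are arbitrary examples:

    #include <nivision.h>

    void make_regions(void)
    {
        Point p = imaqMakePoint(120, 45);         /* x- and y-coordinates      */
        Rect  r = imaqMakeRect(10, 20, 100, 150); /* top, left, height, width  */
        /* Pass p or r to measurement functions that accept simple regions. */
        (void)p; (void)r;
    }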
The Vision UtilitiesRegion of InterestRegion of Interest Conversion palette provides VIs to convert simple data typessuch as points, lines, rectangles, and annulusesinto an ROI descriptor. Use the following VIs to convert an ROI contour encoded by a simple description to an ROI descriptor for that contour: IMAQ Convert Point to ROIConverts a point specified by its x and y coordinates. IMAQ Convert Line to ROIConverts a line specified by its start and end points. IMAQ Convert Rectangle to ROIConverts a rectangle specified by its top-left and bottom-right points and rotation angle. IMAQ Convert Annulus to ROIConverts an annulus specified by its center point, inner and outer radii, and start and end angles.
IMAQ Convert Rectangle to ROI (Polygon): Converts a rectangle specified by its top-left and bottom-right points and rotation angle to an ROI descriptor that uses a polygon to represent the rectangle.
Use the following VIs to convert an ROI contour encoded by an ROI descriptor to a simple description for the contour:
IMAQ Convert ROI to Point: The output point is specified by its x and y coordinates.
IMAQ Convert ROI to Line: The output line is specified by its start and end points.
IMAQ Convert ROI to Rectangle: The output rectangle is specified by its top-left point, bottom-right point, and rotation angle.
IMAQ Convert ROI to Annulus: The output annulus is specified by its center point, inner and outer radii, and start and end angles.
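As a hedged illustration of what such a conversion involves, the following Python sketch computes the four polygon corners of a rotated rectangle, in the spirit of IMAQ Convert Rectangle to ROI (Polygon). The parameter layout is an assumption for illustration; the actual VI operates on NI Vision descriptors.

```python
# Sketch: rotated rectangle (top-left, bottom-right, angle) -> polygon corners.
import math

def rotated_rect_to_polygon(left, top, right, bottom, angle_deg):
    """Rotate the rectangle's corners about its center; returns 4 (x, y) tuples."""
    cx, cy = (left + right) / 2.0, (top + bottom) / 2.0
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    corners = [(left, top), (right, top), (right, bottom), (left, bottom)]
    return [(cx + (x - cx) * cos_a - (y - cy) * sin_a,
             cy + (x - cx) * sin_a + (y - cy) * cos_a) for x, y in corners]

print(rotated_rect_to_polygon(100, 50, 300, 150, angle_deg=15))
```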
Figure 3-5 illustrates how 32-bit and 64-bit color images break down into their three primary components.
[Figure 3-5: A 32-bit color image breaks down into three 8-bit planes: Red, Green, and Blue; Hue, Saturation, and Intensity; Hue, Saturation, and Luminance; or Hue, Saturation, and Value. A 64-bit color image breaks down into 16-bit Red, Green, and Blue planes.]
Use the IMAQ ExtractSingleColorPlane VI or the IMAQ ExtractColorPlanes VI (Vision Utilities»Color Utilities) to extract the red, green, blue, hue, saturation, intensity, luminance, or value plane of a color image into an 8-bit image.
A color pixel encoded as an unsigned 32-bit integer control can be decomposed into its individual components using the IMAQ IntegerToColorValue VI. You can convert a pixel value represented by its RGB components into the equivalent components for another color model using the IMAQ RGBToColor 2 VI. You can convert components in any other color mode to RGB by using the IMAQ ColorToRGB VI. These three VIs appear in the Vision Utilities»Color Utilities palette.
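For readers who want to experiment with the same decompositions outside LabVIEW, here is a minimal Python/OpenCV sketch. It is an analogy, not the NI Vision API: OpenCV stores color images in BGR order and calls the HSL model HLS.

```python
# Sketch: extracting 8-bit color planes and converting between color models,
# loosely analogous to IMAQ ExtractColorPlanes and IMAQ RGBToColor 2.
import cv2
import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder for an acquired image
blue, green, red = cv2.split(image)               # three 8-bit planes

# Convert the whole image to an HSL-like model (OpenCV calls it HLS):
hls = cv2.cvtColor(image, cv2.COLOR_BGR2HLS)
hue, luminance, saturation = cv2.split(hls)
```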
Comparing Colors
You can use the color matching capability of NI Vision to compare or evaluate the color content of an image or regions in an image. Complete the following steps to compare colors using color matching:
1. Select an image containing the color information that you want to use as a reference. The color information can consist of multiple colors.
2. Use the entire image or regions in the image to learn the color information using the IMAQ ColorLearn VI (Image Processing»Color Processing), which outputs a color spectrum that contains a compact description of the color information that you learned. Refer to Chapter 15, Color Inspection, of the NI Vision Concepts Manual for more information. Use the color spectrum to represent the learned color information for all subsequent matching operations.
3. Define an image or multiple regions in an image as the inspection or comparison area.
4. Use the IMAQ ColorMatch VI (Image Processing»Color Processing) to compare the learned color information to the color information in the inspection regions. This VI returns a score that indicates the closeness of match. You can specify a Minimum Match Score, which indicates if there is a match between the input color information and the color in each specified region in the image.
5. Use the color matching score as a measure of similarity between the reference color information and the color information in the image regions being compared.
A rough sketch of the learn and match steps appears below.
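The sketch below illustrates the learn/match idea in Python with OpenCV hue-saturation histograms. NI Vision's color spectrum is its own representation, so this is only an analogy; the histogram bin counts and the 0.8 minimum match score are arbitrary example values.

```python
# Sketch: learn a reference color description, then score a test region.
import cv2
import numpy as np

def learn_color(reference_bgr):
    """Return a normalized hue-saturation histogram as the 'learned' description."""
    hsv = cv2.cvtColor(reference_bgr, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def match_color(spectrum, region_bgr, minimum_match_score=0.8):
    """Compare the learned description to a region; return (score, matched?)."""
    score = cv2.compareHist(spectrum.astype(np.float32),
                            learn_color(region_bgr).astype(np.float32),
                            cv2.HISTCMP_CORREL)
    return score, score >= minimum_match_score
```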
The following sections explain when to learn the color information associated with an entire image, a region in an image, or multiple regions in an image.
3-amp fuses much better and results in high match scores (near 800) for both 3-amp fuses. Use as many samples as you want in an image to learn the representative color spectrum for a specified template.
Performing Particle Analysis
This chapter describes how to perform particle analysis on your images. Use particle analysis to find statistical information, such as the area, number, location, and presence of particles. With this information, you can perform many machine vision inspection tasks, such as detecting flaws on silicon wafers or detecting soldering defects on electronic boards. Examples of how particle analysis can help you perform web inspection tasks include locating structural defects on wood planks or detecting cracks on plastic sheets. Figure 4-1 illustrates the steps involved in performing particle analysis.
darker than your background, you can use one of the automatic thresholding techniques in NI Vision. Complete the following steps to use one of the automatic thresholding techniques:
1. Use the IMAQ AutoBThreshold VI (Image Processing»Processing) to select the thresholding technique that automatically determines the optimal threshold range.
2. Connect the Threshold Data output to the IMAQ MultiThreshold VI (Image Processing»Processing), or use the Lookup Table output to apply a lookup table to the image using the IMAQ UserLookup VI (Image Processing»Processing).
If your grayscale image contains objects that have multiple discontinuous grayscale values, use the IMAQ MultiThreshold VI (Image Processing»Processing). If your grayscale image contains objects whose grayscale values vary within the image due to effects such as light drift, use the IMAQ LocalThreshold VI (Image Processing»Processing). If you need to threshold a grayscale image that has nonuniform lighting conditions, such as those resulting from a strong illumination gradient or shadows, use the IMAQ LocalThreshold VI (Image Processing»Processing). You need to define a pixel window that specifies which neighboring pixels are considered in the statistical calculation. The default window size is 32 × 32. However, the window size should be approximately the size of the smallest object you want to separate from the background. You also need to specify the local thresholding algorithm to use. The local thresholding algorithm options are the Niblack algorithm and the background correction algorithm. Refer to Chapter 8, Image Segmentation, of the NI Vision Concepts Manual for more information about local thresholding.
Automatic thresholding techniques offer more flexibility than simple thresholds based on fixed ranges. Because automatic thresholding techniques determine the threshold level according to the image histogram, the operation is less affected by changes in the overall brightness and contrast of the image than a fixed threshold. Because these techniques are more resistant to changes in lighting, they are well suited for automated inspection tasks.
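The following Python/OpenCV sketch shows the two ideas side by side: a global threshold chosen automatically from the histogram (Otsu's method, one of the classic automatic techniques) and a local mean-based threshold for nonuniform lighting. OpenCV's adaptive threshold is not the Niblack algorithm itself, and the window size and offset below are example values.

```python
# Sketch: global automatic (Otsu) and local mean-based thresholding.
import cv2
import numpy as np

gray = np.zeros((480, 640), dtype=np.uint8)   # placeholder grayscale image

# Global threshold chosen automatically from the histogram:
otsu_level, binary = cv2.threshold(gray, 0, 255,
                                   cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Local threshold for nonuniform lighting; blockSize=33 approximates the
# 32 x 32 default window mentioned above (OpenCV requires an odd size):
local = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                              cv2.THRESH_BINARY, blockSize=33, C=5)
```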
If you need to threshold a color image, use the IMAQ ColorThreshold VI (Image Processing»Color Processing). You must specify threshold ranges for each of the color planes, either Red, Green, and Blue or Hue, Saturation, and Luminance. The binary image resulting from a color threshold is an 8-bit binary image.
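As an illustration of per-plane color thresholding, the following OpenCV sketch thresholds in HSV with a range for each plane. The ranges are arbitrary example values, and OpenCV is standing in for NI Vision here.

```python
# Sketch: thresholding a color image by specifying a range for each plane.
import cv2
import numpy as np

image = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder color image
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

lower = np.array([20, 100, 100])   # per-plane minimums (H, S, V)
upper = np.array([35, 255, 255])   # per-plane maximums
binary = cv2.inRange(hsv, lower, upper)   # 8-bit image: 255 in range, 0 outside
```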
Use the hit-miss function of the IMAQ Morphology VI to locate particular configurations of pixels, which you define with a structuring element. Depending on the configuration of the structuring element, the hit-miss function can locate single isolated pixels, cross-shape or longitudinal patterns, right angles along the edges of particles, and other user-specified shapes. Refer to Chapter 9, Binary Morphology, of the NI Vision Concepts Manual for more information about structuring elements. If you know enough about the shape features of the particles you want to keep or remove, use the IMAQ Particle Filter 2 VI (Image Processing»Morphology) to filter out particles that do not interest you.
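The sketch below shows both operations in OpenCV terms: a hit-miss search for isolated single pixels, and an area-based particle filter built on connected components. The structuring element and area bounds are example values; IMAQ Particle Filter 2 supports many more shape features than area.

```python
# Sketch: hit-miss for lone pixels, plus an area-based particle filter.
import cv2
import numpy as np

binary = np.zeros((480, 640), dtype=np.uint8)   # placeholder binary image

# Structuring element: 1 = must be foreground, -1 = must be background.
# This one matches a single foreground pixel surrounded by background.
kernel = np.array([[-1, -1, -1],
                   [-1,  1, -1],
                   [-1, -1, -1]], dtype="int")
isolated = cv2.morphologyEx(binary, cv2.MORPH_HITMISS, kernel)

# Keep only particles whose area falls within [50, 5000):
n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
keep = np.zeros_like(binary)
for i in range(1, n):                            # label 0 is the background
    if 50 <= stats[i, cv2.CC_STAT_AREA] < 5000:
        keep[labels == i] = 255
```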
Performing Machine Vision Tasks
This chapter describes how to perform many common machine vision inspection tasks. The most common inspection tasks are detecting the presence or absence of parts in an image and measuring the dimensions of parts to determine if they meet specifications. Measurements are based on characteristic features of the object represented in the image. Image processing algorithms traditionally classify the type of information contained in an image as edges, surfaces and textures, or patterns. Different types of machine vision algorithms leverage and extract one or more types of information.
Edge detectors and derivative techniques, such as rakes, concentric rakes, and spokes, use edges represented in the image. They locate, with high accuracy, the position of the edge of an object. You can use edge detection to make such measurements as the width of the part, which is a technique called clamping. You also can combine multiple edge locations to compute intersection points, projections, circles, or ellipse fits. Pattern matching algorithms use edges and patterns. Pattern matching can locate, with very high accuracy, the position of fiducials or characteristic features of the part under inspection. You can combine those locations to compute lengths, angles, and other object measurements.
The robustness of your measurement relies on the stability of your image acquisition conditions. Sensor resolution, lighting, optics, vibration control, part fixture, and general environment are key components of the imaging setup. All elements of the image acquisition chain directly affect the accuracy of the measurements.
Figure 5-1 illustrates the basic steps involved in performing machine vision.
[Figure 5-1: flowchart of the basic machine vision steps, ending with Make Measurements and Display Results.]
Often, the object under inspection appears shifted or rotated in the image you need to process, relative to the reference image in which you located the object. When this occurs, the ROIs need to shift and rotate with the parts of the object in which you are interested. For the ROIs to move with the object, you need to define a reference coordinate system relative to the object in the reference image. During the measurement process, the coordinate system moves with the object when the object appears shifted and rotated in the image you need to process. This coordinate system is referred to as the measurement coordinate system. The measurement VIs automatically move the ROIs to the correct position using the position of the measurement coordinate system relative to the reference coordinate system. Refer to Chapter 14, Dimensional Measurements, of the NI Vision Concepts Manual for information about coordinate systems. You can build a coordinate transformation using edge detection or pattern matching. The output of the edge detection and pattern matching VIs that build a coordinate transformation are the origin, angle, and axes direction of the coordinate system. Some machine vision VIs take this output and adjust the regions of inspection automatically. You also can use these outputs to move the regions of inspection relative to the object programmatically.
To use these techniques, the object cannot rotate more than ±65° in the image.
1. Specify one or two rectangular regions.
a. If you use IMAQ Find CoordSys (Rect), specify one rectangular ROI that includes part of two straight, nonparallel boundaries of the object, as shown in Figure 5-2. This rectangular region must be large enough to include these boundaries in all the images you want to inspect.
[Figure 5-2: locating coordinate system axes with one search area (panels a and b); callouts mark the search area, the origin of the coordinate system, and the measurement area.]
b. If you use IMAQ Find CoordSys (2 Rects), specify two rectangular ROIs, each containing one separate, straight boundary of the object, as shown in Figure 5-3. The boundaries cannot be parallel. The regions must be large enough to include the boundaries in all the images you want to inspect.
[Figure 5-3. Locating Coordinate System Axes with Two Search Areas (panels a and b). Legend: 1. Primary Search Area; 2. Secondary Search Area; 3. Origin of the Coordinate System; 4. Measurement Area.]
2. Choose the parameters you need to locate the edges on the object.
3. Choose the coordinate system axis direction.
4. Choose the results that you want to overlay onto the image.
5. Choose the mode for the VI. To build a coordinate transformation for the first time, set mode to Find Reference. To update the coordinate transformation in subsequent images, set this mode to Update Reference.
The object can rotate 360° in the image using this technique if you use rotation-invariant pattern matching.
1. Define a template that represents the part of the object that you want to use as a reference feature. Refer to the Find Measurement Points section for information about defining a template.
2. Define a rectangular search area in which you expect to find the template.
3. Choose the Match Mode. Select Rotation Invariant when you expect your template to appear rotated in the inspection images. Otherwise, select Shift Invariant.
4. Choose the results that you want to overlay onto the image.
5. Choose the mode for the VI. To build a coordinate transformation for the first time, set mode to Find Reference. To update the coordinate transformation in subsequent images, set this mode to Update Reference.
[Figure 5-4 (decision flowchart for choosing a coordinate transformation method): If the object under inspection has a straight, distinct edge (main axis) and a second distinct edge not parallel to the main axis in the same search area, build a coordinate transformation based on edge detection using a single search area. If the second nonparallel edge lies in a separate search area, build a coordinate transformation based on edge detection using two distinct search areas. Otherwise, if the object positioning accuracy is better than 5 degrees, build a coordinate transformation based on the shift-invariant pattern matching strategy.]
Refer to Chapter 4, Performing Particle Analysis, for more information about defining ROIs.
[Figure 5-5: callouts: 1. Search Region; 2. Search Lines.]
If you want to find points along a circular edge and find the circle that best fits the edge, as shown in Figure 5-6, use the IMAQ Find Circular Edge VI (Machine Vision»Locate Edges).
IMAQ Find Vertical Edge, IMAQ Find Horizontal Edge, and IMAQ Find Concentric Edge locate the intersection points between a set of search lines within the search region and the edge of an object. Specify the separation between the lines that the VIs use to detect edges. The VIs determine the intersection points based on their contrast, width, and steepness. The software calculates a best-fit line with outliers rejected or a best-fit circle through the points it found. The VIs return the coordinates of the edges found.
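A rough NumPy sketch of the rake idea follows: sample each horizontal search line, record the first strong transition as an edge point, and fit a line through the points. The contrast threshold is an example value, and the simple least-squares fit below does not include the outlier rejection mentioned above.

```python
# Sketch: rake-style edge detection along horizontal search lines.
import numpy as np

def rake_edges(gray, rows, threshold=30):
    """Return (x, y) edge points, one per search line where an edge is found."""
    points = []
    for r in rows:                          # each search line is an image row
        profile = gray[r, :].astype(np.int32)
        grad = np.diff(profile)             # pixel-to-pixel contrast
        hits = np.nonzero(np.abs(grad) >= threshold)[0]
        if hits.size:
            points.append((hits[0], r))     # first edge along this line
    return points

def fit_line(points):
    """Least-squares best-fit line through the points (needs at least two)."""
    xs, ys = zip(*points)
    slope, intercept = np.polyfit(xs, ys, 1)
    return slope, intercept
```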
5-11
Chapter 5
You also can use the IMAQ GetPointsOnLine VI (Machine Vision»Analytic Geometry) to obtain the points along the line instead of using an ROI descriptor. IMAQ ROIProfile and IMAQ GetPointsOnLine determine the edge points based on their contrast and slope. You can specify if you want to find the edge points using subpixel accuracy.
Symmetry
A rotationally symmetric template, as shown in Figure 5-7a, is less sensitive to changes in rotation than one that is rotationally asymmetric. A rotationally symmetric template provides good positioning information but no orientation information.
Feature Detail
A template with relatively coarse features is less sensitive to variations in size and rotation than a model with fine features. However, the model must contain enough detail to identify the feature.
Positional Information
A template with strong edges in both the x and y directions is easier to locate.
Background Information
Unique background information in a template improves search performance and accuracy.
task, the presence of additional instances of the pattern can produce incorrect results. To avoid this, reduce the search area so that only the required pattern lies within the search area. The time required to locate a pattern in an image depends on both the template size and the search area. By reducing the search area, you can reduce the required search time. Increasing the template size can improve the search time, but doing so reduces match accuracy if the larger template includes an excess of background information.
In many inspection applications, you have general information about the location of the fiducial. Use this information to define a search area. For example, in a typical component placement application, each printed circuit board (PCB) being tested may not be placed in the same location with the same orientation. The location of the PCB in various images can move and rotate within a known range of values, as illustrated in Figure 5-11. Figure 5-11a shows the template used to locate the PCB in the image. Figure 5-11b shows an image containing a PCB with a fiducial you want to locate. Notice the search area around the fiducial. If you know before the matching process begins that the PCB can shift or rotate in the image within a fixed range, as shown in Figure 5-11c and Figure 5-11d, respectively, you can limit the search for the fiducial to a small region of the image.
Match Mode
Set the match mode to control how the pattern matching algorithm treats the template at different orientations. If you expect the orientation of valid matches to vary less than ±5° from the template, set the Match Mode control to Shift Invariant. Otherwise, set Match Mode to Rotation Invariant. Shift-invariant matching is faster than rotation-invariant matching.
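For a feel of shift-invariant matching, here is a minimal OpenCV sketch using normalized cross-correlation; rotation-invariant matching would additionally search over rotated versions of the template. The 0.8 acceptance score is an arbitrary example, and OpenCV is standing in for NI Vision.

```python
# Sketch: shift-invariant template matching via normalized cross-correlation.
import cv2
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(480, 640), dtype=np.uint8)  # stand-in image
template = image[100:140, 200:260].copy()                      # known sub-patch

scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_loc = cv2.minMaxLoc(scores)
if best_score >= 0.8:
    x, y = best_loc   # top-left corner of the best match; here (200, 100)
```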
Minimum Contrast
The pattern matching algorithm ignores all image regions in which contrast values fall below a set minimum contrast value. Contrast is the difference between the smallest and largest pixel values in a region. Set the Minimum Contrast control to slightly below the contrast value of the search area with the lowest contrast. You can set the minimum contrast to potentially increase the speed of the pattern matching algorithm. If the search image has high contrast overall but contains some low contrast regions, set a high minimum contrast value to exclude all areas of the image with low contrast. Excluding these areas significantly reduces the area in which the pattern matching algorithm must search. However, if the search image has low contrast throughout, set a low minimum contrast to ensure that the pattern matching algorithm looks for the template in all regions of the image.
Refer to the Pattern Matching chapter of the NI Vision Concepts Manual for information about pattern matching.
Complete the following generalized steps to find features in an image using geometric matching:
1. Define a reference or fiducial pattern to use as a template image.
2. Use the reference pattern to train the geometric matching algorithm with the NI Vision Template Editor. Go to Start»All Programs»National Instruments»Vision»Template Editor to launch the NI Vision Template Editor.
3. Define an image or an area of an image as the search area. A small search area can reduce the time to find the features.
4. Set the tolerances and parameters to specify how the algorithm operates at run time using IMAQ Setup Match Geometric Pattern.
5. Test the search algorithm on test images using IMAQ Match Geometric Pattern.
Geometric matching is not suited for template images that are predominantly defined by grayscale or texture information. Figure 5-13 shows examples of template images that do not have good geometric information. The template image in Figure 5-13a is characterized by the grayscale or texture information in the image. Figure 5-13b contains too many edges and will dramatically increase the time geometric matching takes to locate the template in an image.
Figure 5-13. Examples of Objects for which Geometric Matching is Not Suited
Once the template image is trained, you can save the template to a file from within the NI Vision Template Editor. Use the IMAQ Read Image and Vision Info VI (Machine Vision»Searching and Matching) to read the template file within your application. Refer to the NI Vision Template Editor Help for more information about training the Geometric Matching algorithm.
Match Mode
Set the match mode to control the conditions under which the geometric matching algorithm finds the template matches. If you expect the orientation of the valid matches to vary more than ±5° from the template, enable the Rotation option within Match Mode. If you expect the size of the valid matches to change more than 5%, enable the Scale option within Match Mode. If you expect the valid matches to be partially covered or missing, enable the Occlusion option in Match Mode. Disabling any of these options or limiting their ranges decreases the search time.
Occlusion Ranges
When the Occlusion option in Match Mode is enabled, geometric matching searches for occurrences of the template in the image, allowing for a specified percentage of the template to be occluded. The default occlusion range is 0% to 25%. If you know that the occlusion range of the valid matches is restricted to a certain range, for example, between 0% and 10%, provide this restriction to the geometric matching algorithm by setting the Occlusion (%) option within the Range Settings control. Refer to Chapter 13, Geometric Matching, of the NI Vision Concepts Manual for more information about geometric matching.
the current template, or select a better template until both training and testing are successful.
2. If you want to perform the learning process offline, use the IMAQ Write Multiple Geometric Template VI (Machine Vision»Searching and Matching) to save the multiple geometric templates created by this step to file. Use the IMAQ Read Multiple Geometric Template VI (Machine Vision»Searching and Matching) to read the multiple geometric template file within your application during the matching process.
3. Use the methods described in Setting Matching Parameters and Tolerances to define how each template matches the target image.
Note
Even though the parameters that you create in this step are used only during the match phase, you can pass these parameters to the IMAQ Learn Multiple Geometric Patterns VI during the learn phase. Doing so stores these parameters in the multiple geometric template, and they are saved to file when you save the multiple geometric template. When you read the multiple geometric template file within your application, the parameters are part of the multiple geometric template and are used during the matching process. If you wish to override some of these parameters during the matching process, you can do so by passing the overriding parameters to the IMAQ Match Multiple Geometric Patterns VI.
4. Test the multiple template search algorithm on test images using the IMAQ Match Multiple Geometric Patterns VI (Machine Vision»Searching and Matching) to match all of the templates. This step is important for the same reason explained in the Testing the Search Algorithm on Test Images section.
Color pattern matching returns the location of the center of the template and the template orientation. Complete the following general steps to find features in an image using color pattern matching:
1. Define a reference or fiducial pattern in the form of a template image.
2. Use the reference pattern to train the color pattern matching algorithm with IMAQ Setup Learn Color Pattern.
3. Define an image or an area of an image as the search area. A small search area can reduce the time to find the features.
4. Set the Feature Mode control to Color and Shape.
5. Set the tolerances and parameters to specify how the algorithm operates at run time using IMAQ Setup Match Color Pattern.
6. Test the search tool on test images using IMAQ Match Color Pattern.
7. Verify the results using a ranking method.
Color Information
A template with colors that are unique to the pattern provides better results than a template that contains many colors, especially colors found in the background or other objects in the image.
Symmetry
A rotationally symmetric template in the luminance plane is less sensitive to changes in rotation than one that is rotationally asymmetric.
Feature Detail
A template with relatively coarse features is less sensitive to variations in size and rotation than a model with fine features. However, the model must contain enough detail to identify it.
Positional Information
A color template whose luminance plane contains strong edges in both the x and y directions is easier to locate.
Background Information
Unique background information in a template improves search performance and accuracy during the grayscale pattern matching phase. This requirement could conflict with the color information requirement of color pattern matching because background colors may interfere with the color location phase. Avoid this problem by choosing a template with sufficient background information for grayscale pattern matching while specifying the exclusion of the background color during the color location phase. Refer to the Training the Color Pattern Matching Algorithm section of this chapter for more information about how to ignore colors.
Use the IMAQ Setup Learn Color Pattern VI (Machine Vision»Searching and Matching) to specify which type of learning mode to use. Then use the IMAQ Learn Color Pattern VI (Machine Vision»Searching and Matching) to learn the template. Exclude colors in the template that you are not interested in using during the search phase. Ignore colors that make your template difficult to locate. When a template differs from several regions of the search image by only its primary color or colors, consider ignoring the predominant common color to improve search performance. Typically, the predominant color is the background color of the template. Use the IMAQ Setup Learn Color Pattern VI to ignore colors. You can ignore certain predefined colors by using Ignore Black and White. To ignore other colors, first learn the colors to ignore using IMAQ ColorLearn. Then set the Ignore Color Spectra control of the IMAQ Setup Learn Color Pattern VI to the resulting color spectrum. The learning process is time-intensive because the algorithm attempts to find unique features of the template that allow for fast, accurate matching. However, you can train the pattern matching algorithm offline, and save the template image using the IMAQ Write Image and Vision Info VI (Machine Vision»Searching and Matching).
The time required to locate a pattern in an image depends on both the template size and the search area. By reducing the search area, you can reduce the required search time. Increasing the template size can improve the search time, but doing so reduces match accuracy if the larger template includes an excess of background information.
Use the IMAQ Setup Match Color Pattern VI (Machine Vision»Searching and Matching) to set the following parameters that influence color pattern matching: color sensitivity, search strategy, color score weight, ignore colors, minimum contrast, and rotation angle ranges.
Color Sensitivity
Use the color sensitivity to control the granularity of the color information in the template image. If the background and objects in the image contain colors that are very close to colors in the template image, use a higher color sensitivity setting. A higher sensitivity setting distinguishes colors with very close hue values. Three color sensitivity settings are available in NI Vision: low, medium, and high. Use the default low setting if the colors in the template are very different from the colors in the background or other objects that you are not interested in. Increase the color sensitivity settings as the color differences decrease. Use the Color Sensitivity control of the IMAQ Setup Match Color Pattern VI to set the color sensitivity. Refer to Chapter 15, Color Inspection, of the NI Vision Concepts Manual for more information about color sensitivity.
Search Strategy
Use the search strategy to optimize the speed of the color pattern matching algorithm. The search strategy controls the step size, subsampling factor, and percentage of color information used from the template. Choose from these strategies:
Conservative: Uses a very small step size, the least amount of subsampling, and all the color information present in the template. The conservative strategy is the most reliable method to look for a template in any image at potentially reduced speed.
Note
Use the conservative strategy if you have multiple targets located very close to each other in the image.
Balanced: Uses values in between the aggressive and conservative strategies.
Aggressive: Uses a large step size, a lot of subsampling, and all the color spectrum information from the template.
Very Aggressive: Uses the largest step size, the most subsampling, and only the dominant color from the template to search for the template. Use this strategy when the color in the template is almost uniform, the template is well contrasted from the background, and there is a good amount of separation between different occurrences of the template in the image. This strategy is the fastest way to find templates in an image.
Decide on the best strategy by experimenting with the different options. Use the Search Strategy control to select a search strategy.
Minimum Contrast
Use the minimum contrast to increase the speed of the color pattern matching algorithm. The color pattern matching algorithm ignores all image regions where grayscale contrast values fall beneath a minimum contrast value. Use the Minimum Contrast control to set the minimum contrast. Refer to the Setting Matching Parameters and Tolerances section of this chapter for more information about minimum contrast.
Complete the following general steps to find features in an image using color location:
1. Define a reference pattern in the form of a template image.
2. Use the reference pattern to train the color location algorithm with IMAQ Learn Color Pattern.
3. Define an image or an area of an image as the search area. A small search area can reduce the time to find the features.
4. Set the Feature Mode control of IMAQ Setup Learn Color Pattern to Color.
5. Set the tolerances and parameters to specify how the algorithm operates at run time using IMAQ Setup Match Color Pattern.
6. Test the color location algorithm on test images using IMAQ Match Color Pattern.
7. Verify the results using a ranking method.
You can save the template image using the IMAQ Write Image and Vision Info VI (Machine Vision»Searching and Matching).
Make Measurements
You can make different types of measurements either directly from the image or from points that you detect in the image.
Use the IMAQ Select Rectangle VI (Machine Vision»Select Region of Interest) to generate a valid input search region for the clamp VIs. First the VIs use the rake function to detect points along two edges of the object under inspection. Then the VIs compute the distance between the points detected on the edges along each search line of the rake. The VIs return the largest or smallest distance in either the horizontal or vertical direction, and they output the coordinates of all the edge points that they find. The following list describes the available clamp VIs; a sketch of the clamp idea follows the list:
IMAQ Clamp Horizontal Max: Measures the largest horizontal separation between two edges in a rectangular search region.
IMAQ Clamp Horizontal Min: Finds the smallest horizontal separation between two vertically oriented edges.
IMAQ Clamp Vertical Max: Finds the largest vertical separation between two horizontally oriented edges.
IMAQ Clamp Vertical Min: Finds the smallest vertical separation between two horizontally oriented edges.
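The following NumPy sketch mimics a clamp in the horizontal direction: for each search line in a rectangular region it finds the outermost edge points and reports the largest separation. It is a simplification of IMAQ Clamp Horizontal Max, and the contrast threshold is an example value.

```python
# Sketch: largest horizontal separation between two edges in a search region.
import numpy as np

def clamp_horizontal_max(gray, top, bottom, left, right, threshold=30):
    best = 0
    for r in range(top, bottom):                       # each search line
        profile = gray[r, left:right].astype(np.int32)
        hits = np.nonzero(np.abs(np.diff(profile)) >= threshold)[0]
        if hits.size >= 2:
            best = max(best, int(hits[-1] - hits[0]))  # outermost edge pair
    return best   # in pixels; apply calibration for real-world units
```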
Use the IMAQ Point Distances VI (Machine Vision»Analytic Geometry) to compute the distances between consecutive pairs of points in an array of points. You can obtain these points from the image using any one of the feature detection methods described in the Find Measurement Points section of this chapter.
Use the IMAQ Read Meter VI (Machine Vision»Instrument Readers) to read the position of the needle using the base of the needle and the array of points on the arc traced by the tip of the needle.
Use the IMAQ Get LCD ROI VI (Machine Vision»Instrument Readers) to calculate the ROI around each digit in an LCD or LED. To find the area of each digit, all the segments of the indicator must be activated. Use the IMAQ Read Single Digit VI (Machine Vision»Instrument Readers) to read one digit of an LCD or LED. Use the IMAQ Read LCD VI (Machine Vision»Instrument Readers) to read multiple digits of an LCD or LED.
Classifying Samples
Use classification to identify an unknown object by comparing a set of its significant features to a set of features that conceptually represent classes of known objects. Typical applications involving classification include the following:
Sorting: Sorts objects of varied shapes. For example, sorting different mechanical parts on a conveyor belt into different bins.
Inspection: Inspects objects by assigning each object an identification score and then rejecting objects that do not closely match members of the training set.
Before you classify objects, you must train the Classifier Session with samples of the objects using the NI Classification Training Interface. Go to Start»All Programs»National Instruments»Vision»Classification Training to launch the NI Classification Training Interface. After you have trained samples of the objects you want to classify, use the following VIs to classify the objects:
1. In the initialization part of the code, use IMAQ Read Classifier File (Machine Vision»Classification) to read in a Classifier that you created using the NI Classification Training Interface.
2. Use IMAQ Classify (Machine Vision»Classification) to classify the object inside the ROI of the image under inspection into one of the classes you created using the NI Classification Training Interface.
3. Use IMAQ Dispose Classifier (Machine Vision»Classification) to free the resources that the Classifier Session used.
[Figure: a typical classification application. Legend: 1. Read a Classifier File; 2. Acquire and Preprocess the Image; 3. Locate the Sample to Classify; 4. Classify the Sample; 5. Dispose of the Classifier Session; 6. Path to the Trained Classifier File; 7. Class; 8. Stop.]
Reading Characters
Use OCR to read text and/or characters in an image. Typical uses for OCR in an inspection application include identifying or classifying components. Before you read text and/or characters in an image, you must train the OCR Session with samples of the characters using the NI OCR Training Interface. Go to Start»All Programs»National Instruments»Vision»OCR Training to launch the NI OCR Training Interface. After you have trained samples of the characters you want to read, use the following VIs to read the characters:
1. In the initialization part of the code, use IMAQ OCR Read Character Set File (Machine Vision»OCR) to read in a session that you created using the NI OCR Training Interface.
2. Use IMAQ OCR Read Text (Machine Vision»OCR) to read the characters inside the ROI of the image under inspection.
3. Use IMAQ OCR Dispose Session (Machine Vision»OCR) to free the resources that the OCR Session used.
Reading Barcodes
Use barcode reading VIs to read values encoded into 1D barcodes, Data Matrix barcodes, and PDF417 barcodes.
Reading 1D Barcodes
Use IMAQ Read Barcode (Machine Vision»Instrument Readers) to read values encoded in a 1D barcode. Locate the barcode in the image using one of the techniques described in the Locate Objects to Inspect and Set Search Areas section of this chapter. Then pass the ROI Descriptor of the location into IMAQ Read Barcode. Specify the type of 1D barcode in the application using the Barcode Type control. NI Vision supports the following 1D barcode types: Codabar, Code 39, Code 93, Code 128, EAN 8, EAN 13, Interleaved 2 of 5, MSI, and UPC A.
Verify Characters
Use character verification functions to verify characters in your image. Before you verify text, you must train the OCR Session with samples of the characters using the NI OCR Training Interface. Go to Start»All Programs»National Instruments»Vision»OCR Training to launch the NI OCR Training Interface. You must then designate reference characters for each of the character classes in your character set. The characters you want to verify are compared against these reference characters and are assigned a score based on this comparison. After you have trained the samples of the characters and assigned the reference characters, use the following functions to verify the characters:
1. In the initialization part of the code, use IMAQ OCR Read Character Set File (Machine Vision»OCR) to read in a session that you created using the NI OCR Training Interface.
2. Use IMAQ OCR Verify Text (Machine Vision»OCR) to verify the characters inside the ROI(s) of the image under inspection.
3. Use IMAQ OCR Dispose Session (Machine Vision»OCR) to free the resources that the OCR Session used.
Display Results
You can overlay the results obtained at various stages of your inspection process on the inspection image. NI Vision attaches the information that you want to overlay to the image, but it does not modify the image. The overlay appears every time you display the image. Use the following VIs (Vision Utilities»Overlay) to overlay search regions, inspection results, and other information, such as text and bitmaps:
IMAQ Overlay Points: Overlays points on an image. Specify a point by its x-coordinate and y-coordinate.
IMAQ Overlay Line: Overlays a line on an image. Specify a line by its start and end points.
IMAQ Overlay Multiple Lines: Overlays multiple lines on an image.
IMAQ Overlay Rectangle: Overlays a rectangle on an image.
IMAQ Overlay Oval: Overlays an oval or a circle on the image.
IMAQ Overlay Arc: Overlays an arc on the image.
IMAQ Overlay Bitmap: Overlays a bitmap on the image.
IMAQ Overlay Text: Overlays text on an image.
IMAQ Overlay ROI: Overlays an ROI described by the ROI Descriptor on an image.
To use these VIs, pass in the image on which you want to overlay information and the information that you want to overlay.
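The nondestructive-overlay idea itself is easy to sketch outside LabVIEW: keep the overlay primitives alongside the image and draw them only on a display copy, so the stored pixels never change. The class and primitive format below are illustrative assumptions, not the NI Vision implementation.

```python
# Sketch: nondestructive overlays kept separate from the pixel data.
import cv2
import numpy as np

class OverlayedImage:
    def __init__(self, pixels):
        self.pixels = pixels      # never modified by overlays
        self.overlays = []        # e.g., ("line", (x1, y1), (x2, y2), color)

    def overlay_line(self, p1, p2, color=(0, 0, 255)):
        self.overlays.append(("line", p1, p2, color))

    def render(self):
        view = self.pixels.copy()     # overlays drawn on a copy for display
        for kind, p1, p2, color in self.overlays:
            if kind == "line":
                cv2.line(view, p1, p2, color, thickness=1)
        return view

img = np.zeros((480, 640, 3), dtype=np.uint8)
ovl = OverlayedImage(img)
ovl.overlay_line((0, 0), (639, 479))
display = ovl.render()        # img itself is untouched
```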
Tip
You can select the color of the overlays using the previous VIs. You can configure the following processing VIs to overlay different types of information about the inspection image:
IMAQ Find Vertical Edge (Machine Vision»Locate Edges)
IMAQ Find Horizontal Edge (Machine Vision»Locate Edges)
IMAQ Find Circular Edge (Machine Vision»Locate Edges)
IMAQ Find Concentric Edge (Machine Vision»Locate Edges)
IMAQ Clamp Horizontal Max (Machine Vision»Measure Distances)
IMAQ Clamp Horizontal Min (Machine Vision»Measure Distances)
IMAQ Clamp Vertical Max (Machine Vision»Measure Distances)
IMAQ Clamp Vertical Min (Machine Vision»Measure Distances)
IMAQ Find Pattern (Machine Vision»Find Patterns)
IMAQ Count Object (Machine Vision»Count and Measure Objects)
IMAQ Find CoordSys (Rect) (Machine Vision»Coordinate Systems)
IMAQ Find CoordSys (2 Rects) (Machine Vision»Coordinate Systems)
IMAQ Find CoordSys (Pattern) (Machine Vision»Coordinate Systems)
The following list contains the kinds of information you can overlay with the VIs listed above:
Search area input into the VI
Search lines used for edge detection
Edges detected along the search lines
Bounding rectangle of particles
Center of particles
Result of the VI
Select the information you want to overlay by enabling the corresponding Boolean control of the VI. Use the IMAQ Clear Overlay VI (Vision Utilities»Overlay) to clear any previous overlay information from the image. Use the IMAQ Write Image and Vision Info VI (Vision Utilities»Overlay) to save an image with its overlay information to a file. You can read the information from the
file into an image using the IMAQ Read Image and Vision Info VI (Vision Utilities»Overlay).
Note
As with calibration information, overlay information is removed from an image when the image size or orientation changes.
Calibrating Images
This chapter describes how to calibrate your imaging system, save calibration information, and attach calibration information to an image. After you set up your imaging system, you may want to calibrate your system. If your imaging setup is such that the camera axis is perpendicular or nearly perpendicular to the object under inspection, and your lens has no distortion, use simple calibration. With simple calibration, you do not need to learn a template. Instead, you define the distance between pixels in the horizontal and vertical directions using real-world units. If your camera axis is not perpendicular to the object under inspection, use perspective calibration to calibrate your system. If your lens is distorted, use nonlinear distortion calibration.
After you calibrate your imaging system, you can attach the calibration information to an image. Refer to the Attach Calibration Information section of this chapter for more information. Then, depending on your needs, you can perform one of the following steps: Use the real-world measurements options on the Particle Analysis and Particle Analysis Reports VIs to get real-world particle shape parameters without correcting the image.
Use the calibration information to convert pixel coordinates to real-world coordinates without correcting the image. Create a distortion-free image by correcting the image for perspective errors and lens aberrations.
Refer to Chapter 6, Calibrating Images, for more information about applying calibration information before making measurements.
[Figure 6-1: a calibration grid of dots; dx and dy are the center-to-center distances between dots in the x and y directions.]
Note
Click Start»All Programs»National Instruments»Vision»Documentation»Calibration Grid to use the calibration grid installed with NI Vision. The dots have radii of 2 mm and center-to-center distances of 1 cm. Depending on your printer, these measurements may change by a fraction of a millimeter. You can purchase highly accurate calibration grids from optics suppliers, such as Edmund Industrial Optics.
If you do not specify a coordinate system, the calibration process defines a default coordinate system. If you specify a grid for the calibration process, the software defines the following default coordinate system, as shown in Figure 6-3:
The origin is placed at the center of the leftmost, topmost dot in the calibration grid.
The angle is set to 0°. This aligns the x-axis with the first row of dots in the grid, as shown in Figure 6-3b.
The axis direction is set to Indirect. This aligns the y-axis to the first column of the dots in the grid, as shown in Figure 6-3b.
If you specify a list of points instead of a grid for the calibration process, the software defines a default coordinate system, as follows:
1. The origin is placed at the point in the list with the lowest x-coordinate value and then the lowest y-coordinate value.
2. The angle is set to 0°.
3. The axis direction is set to Indirect.
If you define a coordinate system yourself, carefully consider the needs of your application. Express the origin in pixels. Always choose an origin location that lies within the calibration grid so that you can convert the location to real-world units. Specify the angle as the angle between the x-axis of the new coordinate system (x') and the top row of dots (x), as shown in Figure 6-4. If your imaging system exhibits nonlinear distortion, you cannot visualize the angle as you can in Figure 6-4 because the dots do not appear in straight lines.
[Figure 6-4: a user-defined origin; the angle is measured between the x-axis of the new coordinate system (x') and the top row of dots (x).]
If you want to specify a list of points instead of a grid, use the Reference Points control of IMAQ Learn Calibration Template to specify the pixel to real-world mapping.
Choose the perspective projection algorithm when your system exhibits perspective errors only. A perspective projection calibration has an accurate transformation even in areas not covered by the calibration grid, as shown in Figure 6-6. Set the Distortion element of the Calibration Learn Setup control to Perspective to choose the perspective calibration algorithm. Learning and applying perspective projection is less computationally intensive than the nonlinear method. However, perspective projection is not designed to handle highly nonlinear distortions. If your imaging setup exhibits nonlinear distortion, use the nonlinear method. The nonlinear method guarantees accurate results only in the area that the calibration grid covers, as shown in Figure 6-6. If your system exhibits both perspective and nonlinear distortion, use the nonlinear method to correct for both. Set the Distortion element of the Calibration Learn Setup control to Nonlinear to choose the nonlinear calibration algorithm.
If the learning process returns a learning score below 600, try the following:
1. Make sure your grid complies with the guidelines listed in the Defining a Calibration Template section of this chapter.
2. Check the lighting conditions. If you have too much or too little lighting, the software may estimate the center of the dots incorrectly. Also, adjust your Threshold Range to distinguish the dots from the background.
3. Select another learning algorithm. When nonlinear lens distortion is present, using perspective projection sometimes results in a low learning score.
Calibration Invalidation
Any image processing operation that changes the image size or orientation voids the calibration information in a calibrated image. Examples of VIs that void calibration information include IMAQ Resample, IMAQ Extract, IMAQ ArrayToImage, and IMAQ Unwrap.
Simple Calibration
When the axis of your camera is perpendicular to the image plane and lens distortion is negligible, use a simple calibration. In a simple calibration, a pixel coordinate is transformed to a real-world coordinate through scaling in the horizontal and vertical directions. Use simple calibration to map pixel coordinates to real-world coordinates directly without a calibration grid. The software rotates and scales a pixel coordinate according to predefined coordinate reference and scaling factors. You can assign the calibration to an arbitrary image using the IMAQ Set Simple Calibration VI (Vision Utilities»Calibration).
To perform a simple calibration, set a coordinate reference, composed of an angle, center, and axis direction, and scaling factors on the defined axes, as shown in Figure 6-7. Express the angle between the x-axis and the horizontal axis of the image in degrees. Express the center as the position, in pixels, where you want the coordinate reference origin. Set the axis direction to direct or indirect. Simple calibration also offers a correction table option and a scaling mode option. Simple calibration is performed using IMAQ Set Simple Calibration. Use the Calibration Axis Info control to define the coordinate system. Use the X Step and Y Step elements of the Grid Descriptor control to specify the scaling factors. Use the Corrected Image Scaling control to set the scaling method. Set the Learn Correction Table? control to True to learn the correction table.
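Conceptually, a simple calibration is a rotation about the coordinate reference center followed by per-axis scaling. The Python sketch below shows that mapping; the parameter names are illustrative, and the axis-direction option is omitted for brevity.

```python
# Sketch: mapping a pixel coordinate to real-world units with a simple
# calibration (angle + center + per-axis scale factors).
import math

def pixel_to_real_world(px, py, center, angle_deg, x_step, y_step):
    a = math.radians(angle_deg)
    dx, dy = px - center[0], py - center[1]
    # Rotate the offset into the calibration axes, then scale each axis:
    rx = dx * math.cos(a) + dy * math.sin(a)
    ry = -dx * math.sin(a) + dy * math.cos(a)
    return rx * x_step, ry * y_step

# 0.1 real-world units per pixel in each direction, origin at (320, 240):
print(pixel_to_real_world(350, 240, center=(320, 240), angle_deg=0,
                          x_step=0.1, y_step=0.1))   # -> (3.0, 0.0)
```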
[Figure 6-7: a simple calibration coordinate reference; callout 1 marks the origin, and dx and dy are the scaling factors along the X and Y axes.]
The source image and destination image must be the same size.
Using the calibration information attached to the image, you can accurately convert pixel coordinates to real-world coordinates to make any of the analytic geometry measurements with IMAQ Convert Pixel to Real World (Vision Utilities»Calibration). If your application requires that you make shape measurements, you can use the calibrated measurements from the IMAQ Particle Analysis or IMAQ Particle Analysis Report VIs (Image Processing»Analysis). You also can correct the image by removing distortion with IMAQ Correct Calibrated Image.
Note
Correcting images is a time-intensive operation. A calibrated image is not the same as a corrected image. Because calibration information is part of the image, it is propagated throughout the processing and analysis of the image. Functions that modify the image size, such as an image rotation function, void the calibration information. Use IMAQ Write Image and Vision Info (Vision Utilities»Calibration) to save the image and all of the attached calibration information to a file.
Using NI Vision with the LabVIEW Real-Time Module
This appendix introduces the real-time capabilities of NI Vision and describes how you can use NI Vision with the LabVIEW Real-Time Module to create a machine vision application for a real-time, deterministic, embedded target.
Overview
With the LabVIEW Real-Time Module, NI-IMAQ, and NI Vision, you have all the tools necessary to develop a complete machine vision application on a reliable, embedded platform. The LabVIEW Real-Time Module provides real-time programming and execution capabilities, NI-IMAQ provides the acquisition components, and NI Vision provides the image manipulation and analysis functions. Develop your vision application with NI-IMAQ and NI Vision. Then download your code to run on a real-time, embedded target. You also can add National Instruments data acquisition (DAQ), motion control, controller area network (CAN), and serial instruments to your LabVIEW Real-Time Module system to create a complete, integrated, embedded system.
System Components
If you are using NI Vision with the LabVIEW Real-Time Module, your system consists of a development system and one or more deployed RT targets.
Development System
The development system for NI Vision and the LabVIEW Real-Time Module is made up of the following major components:
Host: Pentium-based machine running a Windows operating system. Use this component to configure your NI PXI controller or NI CVS-1450 Series device as an RT target and to develop your application.
RT target: RT Series hardware that runs VIs downloaded from and built in LabVIEW. Examples of RT targets include a PXI chassis housing a PXI controller or a CVS-1450 Series device.
Refer to the NI Vision Hardware Help for details about how to set up each machine and how they interact with each other.
Note
You need a network connection between your host machine and RT target during development to configure the RT target and to download software and code from your host machine onto the RT target. This network connection is optional at runtime.
Deployed System
When you have configured your host development system, you can set up and configure additional RT targets for deployment. These deployed systems use the same hardware and software as your development RT target, but they do not require Windows for configuration. Instead of using Windows for configuration, copy your configuration information from your development RT target to the deployment system.
Software Installation
Set up your RT target by installing the LabVIEW Real-Time Module and NI-IMAQ. Refer to the NI Vision Hardware Help for detailed instructions. Use MAX to install NI Vision and any other necessary LabVIEW Real-Time Module components from your host machine onto your RT target system. Refer to the Measurement & Automation Explorer Remote Systems Help for specific information (within MAX, go to Help»Help Topics»Remote Systems). When your RT target is set up, you can write and execute NI Vision code just as you would on a Windows-based system.
Image Display
Using NI Vision with the LabVIEW Real-Time Module gives you two options for displaying images: Remote Display and RT Video Out. Use Remote Display during development and debugging to view your images from your host machine just as you would view the LabVIEW front panels of the VIs running on your LabVIEW Real-Time Module system. Use RT Video Out to display your images on a monitor connected to your remote LabVIEW Real-Time Module system.
Remote Display
Remote Display allows you to acquire images on your remote system and view them on your host machine. Remote Display is automatically enabled when you use the LabVIEW Image Display control (available with LabVIEW 7.0 or later) or any of the NI Vision display VIs (Vision Utilities»External Display), such as IMAQ WindDraw, IMAQ WindToolsShow, and IMAQ ConstructROI. Remote Display is useful for monitoring and debugging your applications that use NI Vision with the LabVIEW Real-Time Module. Familiarize yourself with how Remote Display works before using this feature. The following details will help you prepare your application for use with Remote Display:
Remote Display is a front-panel feature. Therefore, your LabVIEW front panel must be open for you to see images displayed using Remote Display.
Remote Display performs best when combined with IMAQ Remote Display Options (Vision Utilities»Vision RT). When you display images on your remote machine, LabVIEW must send those images over your network. This process can take up a large amount of your network bandwidth, especially when transferring large images. IMAQ Remote Display Options allows you to specify compression settings for those images to reduce the network bandwidth used by the display process. In addition, compressing images may increase your display rates on slower networks.
Note
IMAQ Remote Display Options will not affect the remote display options in versions of LabVIEW prior to LabVIEW 8.0. Use the IMAQ Flatten Image Options VI instead. IMAQ Remote Display Options uses two types of compression algorithms. Use the lossy JPEG compression algorithm on grayscale and color images. Use the lossless Packed Binary compression algorithm on binary images. Refer to the NI Vision for LabVIEW VI Reference Help for more information about the IMAQ Remote Display Options VI.
A-3
Appendix A
Note
JPEG compression may result in data degradation of the displayed image. There is no degradation of the image during processing. Test various compression settings to find the right balance for your application.
Using Remote Display can affect the timing performance of your NI Vision VIs. Do not use Remote Display if your program contains a time-critical loop.
Disconnecting your remote system from your host machine disables Remote Display. Disabled Remote Display VIs do not affect the performance of your application. When you reconnect your remote system and host machine, Remote Display is automatically re-enabled.
Use RT Video Out instead of Remote Display on deployed systems. Remote Display requires a LabVIEW front panel, and deployed systems do not have a front panel. Refer to the LabVIEW Help for recommended program architecture.
Tip
Refer to the Remote Display Errors section of this appendix for more information.
RT Video Out
RT Video Out allows you to display images on a monitor that is connected to your RT target. In NI Vision, IMAQ WindDraw and IMAQ WindClose (Vision Utilities»External Display) provide support for RT Video Out. To display images on a monitor connected to your RT target, input 15 for the Window Number control. Alternately, you can use IMAQ RT Video Out (Vision Utilities»Vision RT) to display images on a monitor connected to your RT target.
Note
This feature is available only on controllers that feature the i815 chipset, such as NI PXI-8175/76 Series controllers and NI CVS-1450 Series compact vision systems. RT Video Out supports overlay functionality. However, the overlay text is limited to one size and one font. This display option is not a time-bounded operation. Refer to the Determinism in Real-Time Applications section of this appendix for more information about time-bounded operations. To programmatically configure your system to use RT Video Out for displaying system images, use the IMAQ Video Out Display Mode VI (Vision Utilities»Vision RT). This VI allows you to set parameters for screen area, color depth, and refresh frequency.
Resource contention is just one example of how to destroy determinism. Determinism also can be destroyed by the nature of some algorithms, by adding file I/O, or by networking. Refer to the LabVIEW Help for a discussion on determinism and programming deterministic applications.
on the LabVIEW Real-Time Module enhances the time reliability. Unfortunately, this execution behavior is dependent on the commonality of the input sets. In many applications, the input sets are common enough that you can safely predict the execution time by running the application over a large, representative set of example images. In some cases, however, getting a representative set of example images may be difficult, or a bounded execution time may be particularly important. Herein lies the need for time-bounded algorithms.
Time-Bounded Execution
As with determinism, time-bounded behavior is controlled by both the algorithm and the environment in which it executes. For this reason, some vision algorithms are not candidates for time bounding. For example, algorithms whose execution time varies widely between similar images are not productive under time constraints. This includes operations, such as skeleton, separation, and segmentation, whose execution time can vary dramatically with slight changes in the input image. This makes choosing a time limit for such operations difficult. However, many vision algorithms are adaptable to time limits when the appropriate timed environment is established. When using NI Vision with the LabVIEW Real-Time Module, the timed environment is best described in terms of the following process flow:
1. Initialize the timed environment.
2. Prepare resources.
3. Perform time-bounded vision operations.
4. Close the timed environment.
Initializing the Timed Environment
Use IMAQ Initialize Timed Execution to initialize the environment. Because resource requirements differ among vision applications, you can change the amount of memory reserved using the Reserved Memory Size control. If the reserved resources are exhausted during execution, a special out-of-memory error message is generated. If you receive this error, increase the amount of resource memory to meet the needs of your processing; one empirical sizing approach is sketched below. The resources you reserve at initialization are not used until the timing mechanism is started. These resources are intended for internal processing that is not exposed in the LabVIEW environment. For objects that are exposed in LabVIEW, always preallocate resources before entering the time-bounded portion of your code. For example, preallocate destination images using IMAQ Create (Vision Utilities»Image Management) and IMAQ SetImageSize (Vision Utilities»Image Management) before entering time-bounded code.
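One hedged way to size the reservation empirically is to rerun a representative workload, growing the reservation until the out-of-memory error no longer occurs. In the sketch below, OutOfReservedMemory and run_workload_with_reservation are illustrative stand-ins, not part of the NI API.

    # Hypothetical sketch: grow the reserved pool until a representative
    # workload fits. OutOfReservedMemory and run_workload_with_reservation
    # are illustrative stand-ins, not the real IMAQ API.

    class OutOfReservedMemory(Exception):
        """Raised when the reserved resources are exhausted (illustrative)."""

    def run_workload_with_reservation(reserved_bytes: int) -> None:
        """Initialize with the given reservation, run the timed section once, clean up."""

    def find_sufficient_reservation(start_bytes: int, limit_bytes: int) -> int:
        size = start_bytes
        while size <= limit_bytes:
            try:
                run_workload_with_reservation(size)
                return size                # the workload fit in the reserved pool
            except OutOfReservedMemory:
                size *= 2                  # reserve more memory and try again
        raise RuntimeError("reservation requirement exceeds the allowed limit")

    print(find_sufficient_reservation(1 << 20, 1 << 28))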
Preparing Resources
Allocate any resource whose exact size you know before the time limit is started. This encourages optimal use of the reserved resources and provides maximum flexibility. System resources allocated before timed execution are available at any time. Reserved resources allocated inside the time-bounded portion of your code are not guaranteed to be available outside the timed environment. Therefore, preallocate as much as possible before entering the time-bounded portion of your code. After time-bounded execution begins, changes to system resources by NI Vision algorithms, such as resizing an existing image, generate an error. Images can be created only in system resources. In addition, special image changes performed by learning a pattern, calibrating an image, or adding an overlay also require system resources, primarily because these objects must exist outside the timed environment. These operations are therefore prohibited during time-bounded execution.
Performing Time-Bounded Vision Operations
For vision algorithms working with images, serialized processing is crucial because an image may be shared among multiple vision routines running in parallel. When changes to an image are required, the image is blocked from access until the updates are complete. This is another form of resource contention that invalidates time constraints. If the error cluster is passed sequentially between VIs, this type of conflict is avoided. Use the error cluster to sequence your VIs, and use a single loop in the code, to avoid time-constraint conflicts. Non-vision processing during time-bounded execution is not constrained by the timing mechanism and increases the jitter of the overall execution. Consequently, limit non-vision processing during time-bounded execution. In particular, eliminate any operation in the LabVIEW Real-Time Module that requests resources, because these operations nullify the time limit. For example, do not build or resize arrays to sizes determined at run time while a time limit is active. However, you can perform operations such as reading elements from an array.
Other operations, such as file I/O, serial I/O, and networking, are inherently non-deterministic and should be avoided in the time-critical portion of your application. Refer to the LabVIEW Real-Time Module documentation to determine which routines can execute deterministically. Because some non-vision processing may be required during timed execution, use IMAQ Check Timed Execution (Vision Utilities»Vision RT) periodically to see whether time has expired. Determine how frequently to check for expired time based on the complexity of the non-vision process and the required jitter; the sketch below illustrates the pattern.
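In this hedged sketch, check_timed_execution() is a hypothetical stand-in for IMAQ Check Timed Execution, and the check interval of 64 iterations is an arbitrary example you would tune against your jitter requirement.

    # Hypothetical sketch of periodic expiration checks during non-vision work.
    # check_timed_execution() stands in for IMAQ Check Timed Execution.

    def check_timed_execution() -> bool:
        """Return True while the time limit has not expired (stand-in)."""
        return True

    def tally_results(measurements: list) -> int:
        passed = 0
        for i, value in enumerate(measurements):
            # Check for expired time every 64 iterations; more frequent checks
            # lower jitter at the cost of overhead.
            if i % 64 == 0 and not check_timed_execution():
                break                      # time expired: abandon non-critical work
            if value > 0:
                passed += 1
        return passed

    print(tally_results([1, -2, 3, 4]))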
Image Files
Many applications require external files, such as the template files used by pattern matching and spatial calibration functions. Before running your application on an RT target, you must use FTP to transfer any external image files from your host machine to the remote target. You can use MAX 3.0 or later to FTP images to your RT target. Refer to the LabVIEW Help for more information about using FTP. A host-side sketch follows.
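For scripted transfers from the host side, Python's standard ftplib library illustrates the idea. This is a minimal sketch: the target address, credentials, and paths are placeholders for your own system, and it is an alternative to transferring files with MAX, not an NI tool.

    from ftplib import FTP

    # Host-side sketch: copy a template image to the RT target before running
    # the application. Address and paths below are example placeholders.
    RT_TARGET = "10.0.0.42"            # IP address of the RT target (example)
    LOCAL_TEMPLATE = "template.png"    # template file on the host machine
    REMOTE_DIR = "ni-rt/startup"       # destination directory on the target (example)

    with FTP(RT_TARGET) as ftp:
        ftp.login()                    # anonymous login; supply credentials if required
        ftp.cwd(REMOTE_DIR)
        with open(LOCAL_TEMPLATE, "rb") as f:
            ftp.storbinary(f"STOR {LOCAL_TEMPLATE}", f)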
Deployment
When you have finished developing your application using NI Vision for LabVIEW and the LabVIEW Real-Time Module, you may want to deploy that application to a number of remote systems. To achieve consistent results from your application, you must configure these deployed systems with the same settings you used for your development system.
Note
Each deployed system must have its own embedded real-time controller and software. Visit ni.com for ordering information.
Note
You must purchase a run-time license for NI Vision and a run-time license for the LabVIEW Real-Time Module for each deployed system using NI Vision and the LabVIEW Real-Time Module. Visit ni.com for more information about purchasing run-time licenses.
Troubleshooting
This section describes solutions and suggestions for common errors when using NI Vision with the LabVIEW Real-Time Module.
How do I select an ROI in an image? If you are using the LabVIEW Real-Time Module 7.0 or later, you can use an Image Display control to select an ROI. Wire your inspection image to the Image Display control on your LabVIEW block diagram to display the image on your LabVIEW front panel. While your application is running on the RT target, the user can use the ROI tools associated with the Image Display control to select an ROI on the host computer. Use the ROI property node in your application to get the user-selected ROI. If you are using the LabVIEW Real-Time Module 7.1 or later, you can use the Get Last Event invoke node in your application to determine when a user has drawn a new ROI and to get basic properties of the new ROI. You can also use the Clear ROI invoke node to ensure that the Image Display control is not displaying an ROI when your application begins execution.
Refer to Extract Example.vi, located in examples\vision\2. Functions\Image Management, to learn more about using these invoke nodes in an application. For all versions of LabVIEW RT, you can use the IMAQ Construct ROI VI (Vision Utilities»External Display) to allow the user to select an ROI. LabVIEW displays the ROI selection window on the host computer and outputs the selected ROI to the VI running on the RT target. Similarly, you can use the IMAQ WindDraw VI (Vision Utilities»External Display) to display the inspection image on the host computer. You can then use the IMAQ WindToolsShow VI (Vision Utilities»External Display) to display the tools dialog on the host computer. The user can then use the ROI tools to select an ROI in the image display window. Use the IMAQ WindGetROI VI (Vision Utilities»External Display) to get the user-selected ROI for processing on the RT target.
Programming Errors
Why won't my LabVIEW VI run on my RT target? Your NI Vision VI may not be supported by the LabVIEW Real-Time Module. The following VIs are among those not supported:
IMAQ Browser Delete (Vision Utilities»External Display»Browser)
IMAQ Browser Focus (Vision Utilities»External Display»Browser)
IMAQ Browser Focus Setup (Vision Utilities»External Display»Browser)
IMAQ Browser Insert (Vision Utilities»External Display»Browser)
IMAQ Browser Replace (Vision Utilities»External Display»Browser)
IMAQ Browser Setup (Vision Utilities»External Display»Browser)
IMAQ Draw (Vision Utilities»Pixel Manipulation)
IMAQ Draw Text (Vision Utilities»Pixel Manipulation)
IMAQ WindGrid (Vision Utilities»External Display)
All IMAQ AVI VIs (Vision Utilities»Files»AVI)
Other obsolete VIs. If your program contains a VI that has been updated or replaced to support new functionality, the icon of the obsolete VI contains a small black X.
How can I make my NI Vision application work on my LabVIEW Real-Time Module system if it contains IMAQ Draw or IMAQ Draw Text? The IMAQ Draw and IMAQ Draw Text VIs are not supported under the LabVIEW Real-Time Module. However, you can achieve similar functionality by using the IMAQ Overlay VIs (Vision Utilities»Overlays), such as IMAQ Overlay Rectangle and IMAQ Overlay Text. You also can use IMAQ Merge Overlay to merge your overlay data into your image, as sketched in the example at the end of this section. Refer to the NI Vision for LabVIEW VI Reference Help for information about merging overlays.
How can I make my NI Vision application work on my LabVIEW Real-Time Module system if the application contains IMAQ Browser VIs? If your application uses any of the IMAQ Browser VIs, use IMAQ ImageToImage (Vision Utilities»Image Management) to embed multiple images within a single image.
Why do I get a File Not Found error from my LabVIEW VI when I run it on the LabVIEW Real-Time Module system? When you run your LabVIEW Real-Time Module application, your VI is downloaded to your RT target, but your support files, such as images and templates, are not. The file I/O routines in the LabVIEW Real-Time Module and NI Vision always refer to files on the target machine, which in this case is the remote RT target. Use FTP to move your image files to the RT target. Refer to your LabVIEW Real-Time Module documentation for more information about transferring files to your RT target using FTP. If you created a VI in NI Vision Assistant, selecting Image File for your Image Source setting causes the VI to return an error in the LabVIEW Real-Time Module, because the File Dialog function is not supported there. To avoid this error, select Image Control or Image Acquisition Board as the Image Source.
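To illustrate the overlay-then-merge workaround from the first question above, the following sketch uses Pillow as an assumed stand-in for the IMAQ Overlay and IMAQ Merge Overlay VIs. It only shows the idea of drawing annotations on a separate overlay plane and then burning them into the image pixels; it is not the NI API.

    from PIL import Image, ImageDraw

    # Sketch of the overlay-then-merge idea, with Pillow standing in for
    # IMAQ Overlay Rectangle, IMAQ Overlay Text, and IMAQ Merge Overlay.
    image = Image.new("L", (640, 480), color=64)       # placeholder inspection image
    overlay = Image.new("L", image.size, color=0)      # separate overlay plane

    draw = ImageDraw.Draw(overlay)
    draw.rectangle([100, 100, 300, 200], outline=255)  # like IMAQ Overlay Rectangle
    draw.text((110, 110), "PASS", fill=255)            # like IMAQ Overlay Text

    # "Merge" the overlay into the image: wherever the overlay was drawn,
    # replace the image pixels, like IMAQ Merge Overlay.
    mask = overlay.point(lambda p: 255 if p else 0)
    merged = Image.composite(overlay, image, mask)
    merged.save("inspection_with_overlay.png")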
Technical Support and Professional Services
Visit the following sections of the National Instruments Web site at ni.com for technical support and professional services:
Support: Online technical support resources at ni.com/support include the following:
Self-Help Resources: For answers and solutions, visit the award-winning National Instruments Web site for software drivers and updates, a searchable KnowledgeBase, product manuals, step-by-step troubleshooting wizards, thousands of example programs, tutorials, application notes, instrument drivers, and so on.
Free Technical Support: All registered users receive free Basic Service, which includes access to hundreds of Application Engineers worldwide in the NI Developer Exchange at ni.com/exchange. National Instruments Application Engineers make sure every question receives an answer. For information about other technical support options in your area, visit ni.com/services or contact your local office at ni.com/contact.
Training and Certification: Visit ni.com/training for self-paced training, eLearning virtual classrooms, interactive CDs, and Certification program information. You also can register for instructor-led, hands-on courses at locations around the world.
System Integration: If you have time constraints, limited in-house technical resources, or other project challenges, National Instruments Alliance Partner members can help. To learn more, call your local NI office or visit ni.com/alliance.
If you searched ni.com and could not find the answers you need, contact your local office or NI corporate headquarters. Phone numbers for our worldwide offices are listed at the front of this manual. You also can visit the Worldwide Offices section of ni.com/niglobal to access the branch office Web sites, which provide up-to-date contact information, support phone numbers, email addresses, and current events.
Glossary
Numbers
1D: One-dimensional.
2D: Two-dimensional.
3D: Three-dimensional.
A
AIPD: The National Instruments internal image file format used for saving complex images and calibration information associated with an image. AIPD images have the file extension APD.
alignment: The process by which a machine vision application determines the location, orientation, and scale of a part being inspected.
alpha channel: The channel used to code extra information, such as gamma correction, about a color image. The alpha channel is stored as the first byte in the four-byte representation of an RGB pixel.
area: (1) A rectangular portion of an acquisition window or frame that is controlled and defined by software. (2) The size of an object in pixels or user-defined units.
arithmetic operators: The image operations multiply, divide, add, subtract, and modulo.
array: An ordered, indexed set of data elements of the same type.
auto-median function: A function that uses dual combinations of opening and closing operations to smooth the boundaries of objects.
B
b: Bit. One binary digit, either 0 or 1.
B: Byte. Eight related bits of data, an 8-bit binary number. Also denotes the amount of memory required to store one byte of data.
barycenter: The grayscale value representing the centroid of the range of an image's grayscale values in the image histogram.
binary image: An image in which the objects usually have a pixel intensity of 1 (or 255) and the background has a pixel intensity of 0.
binary morphology: Functions that perform morphological operations on a binary image.
binary threshold: The separation of an image into objects of interest (assigned pixel values of 1) and background (assigned pixel values of 0) based on the intensities of the image pixels.
bit depth: The number of bits (n) used to encode the value of a pixel. For a given n, a pixel can take 2^n different values. For example, if n equals 8, a pixel can take 256 different values ranging from 0 to 255. If n equals 16, a pixel can take 65,536 different values ranging from 0 to 65,535 or from -32,768 to 32,767.
blurring: Reduces the amount of detail in an image. Blurring can occur when the camera lens is out of focus or when an object moves rapidly in the field of view. You can blur an image intentionally by applying a lowpass frequency filter.
BMP: Bitmap. An image file format commonly used for 8-bit and color images. BMP images have the file extension BMP.
border function: Removes objects (or particles) in a binary image that touch the image border.
brightness: (1) A constant added to the red, green, and blue components of a color pixel during the color decoding process. (2) The perception by which white objects are distinguished from gray, and light objects from dark objects.
buffer: Temporary storage for acquired data.
C
caliper: A measurement function that finds edge pairs along a specified path in the image. This function performs an edge extraction and then finds edge pairs based on specified criteria, such as the distance between the leading and trailing edges, edge contrasts, and so forth.
CAN: Controller Area Network. A serial bus finding increasing use as a device-level network for industrial automation. CAN was developed by Bosch to address the needs of in-vehicle automotive communications.
center of mass: The point on an object where all the mass of the object could be concentrated without changing the first moment of the object about any axis.
chroma: The color information in a video signal. The combination of hue and saturation. The relationship between chromaticity and brightness characterizes a color.
closing: A dilation followed by an erosion. A closing fills small holes in objects and smooths the boundaries of objects.
clustering: A technique in which the image is sorted within a discrete number of classes corresponding to the number of phases perceived in an image. The gray values and a barycenter are determined for each class. This process is repeated until a value is obtained that represents the center of mass for each phase or class.
CLUT: Color lookup table. A table for converting the value of a pixel in an image into a red, green, and blue (RGB) intensity.
color image: An image containing color information, usually encoded in the RGB form.
color space: The mathematical representation for a color. For example, color can be described in terms of red, green, and blue; hue, saturation, and luminance; or hue, saturation, and intensity.
complex image: Stores information obtained from the FFT of an image. The complex numbers that compose the FFT plane are encoded in 64-bit floating-point values: 32 bits for the real part and 32 bits for the imaginary part.
connectivity: Defines which of the surrounding pixels of a given pixel constitute its neighborhood.
connectivity-4: Only pixels adjacent in the horizontal and vertical directions are considered neighbors.
connectivity-8: All adjacent pixels are considered neighbors.
contrast: A constant multiplication factor applied to the luma and chroma components of a color pixel in the color decoding process.
convex hull: The smallest convex polygon that can encapsulate a particle.
convex hull function: Computes the convex hull of objects in a binary image.
convolution: See linear filter.
convolution kernel: 2D matrices, or templates, used to represent the filter in the filtering process. The contents of these kernels are a discrete 2D representation of the impulse response of the filter that they represent.
curve extraction: The process of finding curves, or connected edge points, in a grayscale image. Curves usually represent the boundaries of objects in the image.
D
Danielsson function: Similar to the distance functions, but with more accurate results.
DAQ: Data acquisition. The process of collecting and measuring electrical signals from sensors, transducers, and test probes or fixtures and inputting them to a computer for processing.
determinism: A characteristic of a system that describes how consistently it can respond to external events or perform operations within a given time limit.
digital image: An image f(x, y) that has been converted into a discrete number of pixels. Both spatial coordinates and brightness are specified.
dilation: Increases the size of an object along its boundary and removes tiny holes in the object.
driver: Software that controls a specific hardware device, such as an NI image acquisition or DAQ device.
E
edge: Defined by a sharp transition in the pixel intensities in an image or along an array of pixels.
edge contrast: The difference between the average pixel intensity before the edge and the average pixel intensity after the edge.
edge detection: Any of several techniques that identify the edges of objects in an image.
edge steepness: The number of pixels that corresponds to the slope or transition area of an edge.
energy center: The center of mass of a grayscale image. See center of mass.
equalize function: See histogram equalization.
erosion: Reduces the size of an object along its boundary and eliminates isolated points in the image.
exponential and gamma corrections: Expand the high gray-level information in an image while suppressing low gray-level information.
exponential function: Decreases brightness and increases contrast in bright regions of an image, and decreases contrast in dark regions of an image.
F
FFT: Fast Fourier Transform. A method used to compute the Fourier transform of an image.
fiducial: A reference pattern on a part that helps a machine vision application find the part's location and orientation in an image.
Fourier transform: Transforms an image from the spatial domain to the frequency domain.
frequency filters: The counterparts of spatial filters in the frequency domain. For images, frequency information is in the form of spatial frequency.
function: A set of software instructions executed by a single line of code that may have input and/or output parameters and returns a value when executed.
G
gamma: The nonlinear change in the difference between the video signal's brightness level and the voltage level needed to produce that brightness.
geometric features: Information extracted from a grayscale template that is used to locate the template in the target image. Geometric features in an image range from low-level features, such as edges or curves detected in the image, to higher-level features, such as the geometric shapes made by the curves in the image.
geometric matching: A technique used to quickly locate a grayscale template that is characterized by distinct geometric or shape information within a grayscale image.
Golden Template comparison: A comparison of the pixel intensities of an image under inspection to a golden template. A golden template is an image containing an ideal representation of an object under inspection.
gradient convolution filter: See gradient filter.
gradient filter: An edge detection algorithm that extracts the contours in gray-level values. Gradient filters include the Prewitt and Sobel filters.
gray level: The brightness of a pixel in an image.
gray-level dilation: Increases the brightness of pixels in an image that are surrounded by other pixels with a higher intensity.
gray-level erosion: Reduces the brightness of pixels in an image that are surrounded by other pixels with a lower intensity.
grayscale image: An image with monochrome information.
grayscale morphology: Functions that perform morphological operations on a gray-level image.
H
highpass attenuation: The inverse of lowpass attenuation.
highpass filter: Emphasizes the intensity variations in an image, detects edges or object boundaries, and enhances fine details in an image.
highpass frequency filter: Removes or attenuates low frequencies present in the frequency domain of the image. A highpass frequency filter suppresses information related to slow variations of light intensities in the spatial image.
highpass truncation: The inverse of lowpass truncation.
histogram: Indicates the quantitative distribution of the pixels of an image per gray-level value.
histogram equalization: Transforms the gray-level values of the pixels of an image to occupy the entire range of the histogram, thus increasing the contrast of the image. The histogram range in an 8-bit image is 0 to 255.
histogram inversion: Finds the photometric negative of an image. The histogram of a reversed image is equal to the original histogram flipped horizontally around the center of the histogram.
histograph: In LabVIEW, a histogram that can be wired directly into a graph.
hit-miss function: Locates objects in the image similar to the pattern defined in the structuring element.
HSI: A color encoding scheme in hue, saturation, and intensity.
HSL: A color encoding scheme using hue, saturation, and luminance information, where each pixel in the image is encoded using 32 bits: eight bits for hue, eight bits for saturation, eight bits for luminance, and eight unused bits.
HSV: A color encoding scheme in hue, saturation, and value.
hue: Represents the dominant color of a pixel. The hue function is a continuous function that covers all the possible colors generated using the R, G, and B primaries. See also RGB.
Hz: Hertz. Frequency in units of 1/second.
I
I/O: Input/output. The transfer of data to/from a computer system involving communications channels, operator interface devices, and/or data acquisition and control interfaces.
image: A 2D light intensity function f(x, y), where x and y denote spatial coordinates and the value f at any point (x, y) is proportional to the brightness at that point.
image border: A user-defined region of pixels surrounding an image. Functions that process pixels based on the values of the pixel's neighbors require image borders.
Image Browser: An image that contains thumbnails of images to analyze or process in a vision application.
image buffer: A memory location used to store images.
image definition: The number of values a pixel can take on, which is the number of colors or shades that you can see in the image.
image display environment: A window or control that displays an image.
image enhancement: The process of improving the quality of an image that you acquire from a sensor, in terms of signal-to-noise ratio, image contrast, edge definition, and so on.
image file: A file containing pixel data and additional information about the image.
image format: Defines how an image is stored in a file. Usually composed of a header followed by the pixel data.
image mask: A binary image that isolates parts of a source image for further processing. A pixel in the source image is processed if its corresponding mask pixel has a nonzero value. A source pixel whose corresponding mask pixel has a value of 0 is left unchanged.
image palette: The gradation of colors used to display an image on screen, usually defined by a CLUT.
image processing: Encompasses various processes and analysis functions that you can apply to an image.
image source: The original input image.
imaging: Any process of acquiring and displaying images and analyzing image data.
inner gradient: Finds the inner boundary of objects.
inspection: The process by which parts are tested for simple defects, such as missing parts or cracks on part surfaces.
inspection functions: Analyze groups of pixels within an image and return information about the size, shape, position, and pixel connectivity. Typical applications include testing the quality of parts, analyzing defects, locating objects, and sorting objects.
instrument driver: A set of high-level software functions, such as NI-IMAQ, that control specific plug-in computer boards. Instrument drivers are available in several forms, ranging from a function callable in a programming language to a VI in LabVIEW.
intensity: The sum of the red, green, and blue primary colors divided by three: (Red + Green + Blue)/3.
intensity calibration: Assigns user-defined quantities, such as optical densities or concentrations, to the gray-level values in an image.
intensity profile: The gray-level distribution of the pixels along an ROI in an image.
intensity range: Defines the range of gray-level values in an object of an image.
intensity threshold: Characterizes an object based on the range of gray-level values in the object. If the intensity range of the object falls within the user-specified range, it is considered an object. Otherwise, it is considered part of the background.
J
jitter: The maximum amount of time that the execution of an algorithm varies from one execution to the next.
JPEG: Joint Photographic Experts Group. An image file format for storing 8-bit and color images with lossy compression. JPEG images have the file extension JPG.
JPEG2000: An image file format for storing 8-bit, 16-bit, or color images with either lossy or lossless compression. JPEG2000 images have the file extension JP2.
K
kernel: A structure that represents a pixel and its relationship to its neighbors. The relationship is specified by the weighted coefficients of each neighbor.
L
labeling: A morphology operation that identifies each object in a binary image and assigns a unique pixel value to all the pixels in an object. This process is useful for identifying the number of objects in the image and giving each object a unique pixel intensity.
LabVIEW: Laboratory Virtual Instrument Engineering Workbench. A program development environment based on the graphical programming language G, commonly used for test and measurement applications.
line gauge: Measures the distance between selected edges with high-precision subpixel accuracy along a line in an image. For example, this function can be used to measure distances between points and edges. This function also can step and repeat its measurements across the image.
line profile: Represents the gray-level distribution along a line of pixels in an image.
linear filter: A special algorithm that calculates the value of a pixel based on its own pixel value as well as the pixel values of its neighbors. The sum of this calculation is divided by the sum of the elements in the matrix to obtain a new pixel value.
local threshold: Creates a binary image by segmenting a grayscale image into a particle region and a background region.
logarithmic function: Increases the brightness and contrast in dark regions of an image and decreases the contrast in bright regions of the image.
logic operators: The image operations AND, NAND, OR, XOR, NOR, XNOR, difference, mask, mean, max, and min.
lossless compression: Compression in which the decompressed image is identical to the original image.
lossy compression: Compression in which the decompressed image is visually similar but not identical to the original image.
lowpass attenuation: Applies a linear attenuation to the frequencies in an image, with no attenuation at the lowest frequency and full attenuation at the highest frequency.
lowpass FFT filter: Removes or attenuates high frequencies present in the FFT domain of an image.
lowpass filter: Attenuates intensity variations in an image. You can use these filters to smooth an image by eliminating fine details and blurring edges.
lowpass frequency filter: Attenuates high frequencies present in the frequency domain of the image. A lowpass frequency filter suppresses information related to fast variations of light intensities in the spatial image.
lowpass truncation: Removes all frequency information above a certain frequency.
L-skeleton function: Uses an L-shaped structuring element in the skeleton function.
luma: The brightness information in the video picture. The luma signal amplitude varies in proportion to the brightness of the video signal and corresponds exactly to the monochrome picture.
luminance: See luma.
LUT: Lookup table. A table containing values used to transform the gray-level values of an image. For each gray-level value in the image, the corresponding new value is obtained from the lookup table.
M
M: (1) Mega, the standard metric prefix for 1 million, or 10^6, when used with units of measure such as volts and hertz. (2) Mega, the prefix for 1,048,576, or 2^20, when used with B to quantify data or computer memory.
machine vision: An automated application that performs a set of visual inspection tasks.
mask FFT filter: Removes frequencies contained in a mask (range) specified by the user.
match score: A number ranging from 0 to 1000 that indicates how closely an acquired image matches the template image. A match score of 1000 indicates a perfect match. A match score of 0 indicates no match.
MB: Megabyte of memory.
median filter: A lowpass filter that assigns to each pixel the median value of its neighbors. This filter effectively removes isolated pixels without blurring the contours of objects.
memory buffer: See buffer.
MMX: Multimedia Extensions. An Intel chip-based technology that allows parallel operations on integers, which results in accelerated processing of 8-bit images.
morphological transformations: Extract and alter the structure of objects in an image. You can use these transformations for expanding (dilating) or reducing (eroding) objects, filling holes, closing inclusions, or smoothing borders. They are used primarily to delineate objects and prepare them for quantitative inspection analysis.
M-skeleton function: Uses an M-shaped structuring element in the skeleton function.
multiple geometric matching: The technique used to simultaneously locate multiple grayscale templates within a grayscale image.
N
neighbor: A pixel whose value affects the value of a nearby pixel when an image is processed. The neighbors of a pixel are usually defined by a kernel or a structuring element.
neighborhood operations: Operations on a point in an image that take into consideration the values of the pixels neighboring that point.
NI-IMAQ: The driver software for National Instruments image acquisition hardware.
nonlinear filter: Replaces each pixel value with a nonlinear function of its surrounding pixels.
nonlinear gradient: A highpass edge-extraction filter that favors vertical edges.
nonlinear Prewitt filter: A highpass, edge-extraction filter based on 2D gradient information.
nonlinear Sobel filter: A highpass, edge-extraction filter based on 2D gradient information. The filter has a smoothing effect that reduces noise enhancements caused by gradient operators.
Nth order filter: Filters an image using a nonlinear filter. This filter orders (or classifies) the pixel values surrounding the pixel being processed. The pixel being processed is set to the Nth pixel value, where N is the order of the filter.
number of planes: The number of arrays of pixels that compose the image. A gray-level or pseudo-color image is composed of one plane. An RGB image is composed of three planes: one for the red component, one for the blue component, and one for the green component.
O
occlusion invariant matching: A geometric matching technique in which the reference pattern can be partially obscured in the target image.
OCR: Optical Character Recognition. The process of analyzing an image to detect and recognize characters/text in the image.
OCV: Optical Character Verification. A machine vision application that inspects the quality of printed characters.
offset: The coordinate position in an image where you want to place the origin of another image. Setting an offset is useful when performing mask operations.
opening: An erosion followed by a dilation. An opening removes small objects and smooths boundaries of objects in the image.
operators: Allow masking, combination, and comparison of images. You can use arithmetic and logic operators in NI Vision.
optical representation: Contains the low-frequency information at the center and the high-frequency information at the corners of an FFT-transformed image.
outer gradient: Finds the outer boundary of objects.
P
palette: The gradation of colors used to display an image on screen, usually defined by a CLUT.
particle: A connected region or grouping of pixels in an image in which all pixels have the same intensity level.
particle analysis: A series of processing operations and analysis functions that produce information about the particles in an image.
pattern matching: The technique used to quickly locate a grayscale template within a grayscale image.
picture element: An element of a digital image. Also called pixel.
pixel: Picture element. The smallest division that makes up the video scan line. For display on a computer monitor, a pixel's optimum dimension is square (aspect ratio of 1:1, or the width equal to the height).
pixel aspect ratio: The ratio between the physical horizontal and vertical sizes of the region covered by the pixel. An acquired pixel should optimally be square, thus the optimal value is 1.0, but typically it falls between 0.95 and 1.05, depending on camera quality.
pixel calibration: Directly calibrates the physical dimensions of a pixel in an image.
pixel depth: The number of bits used to represent the gray level of a pixel.
PNG: Portable Network Graphic. An image file format for storing 8-bit, 16-bit, and color images with lossless compression. PNG images have the file extension PNG.
Prewitt filter: An edge detection algorithm that extracts the contours in gray-level values using a 3 x 3 filter kernel.
proper-closing: A finite combination of successive closing and opening operations that you can use to fill small holes and smooth the boundaries of objects.
proper-opening: A finite combination of successive opening and closing operations that you can use to remove small particles and smooth the boundaries of objects.
Q
quantitative analysis: Obtaining various measurements of objects in an image.
R
real time: A property of an event or system in which data is processed as it is acquired instead of being accumulated and processed at a later time.
resolution: The number of rows and columns of pixels. An image composed of m rows and n columns has a resolution of m x n.
reverse function: Inverts the pixel values in an image, producing a photometric negative of the image.
RGB: A color encoding scheme using red, green, and blue (RGB) color information, where each pixel in the color image is encoded using 32 bits: eight bits for red, eight bits for green, eight bits for blue, and eight bits for the alpha value (unused).
RGB U64: A color encoding scheme using red, green, and blue (RGB) color information, where each pixel in the color image is encoded using 64 bits: 16 bits for red, 16 bits for green, 16 bits for blue, and 16 bits for the alpha value (unused).
Roberts filter: An edge detection algorithm that extracts the contours in gray level, favoring diagonal edges.
ROI: Region of interest. (1) An area of the image that is graphically selected from a window displaying the image. This area can be used to focus further processing. (2) A hardware-programmable rectangular portion of the acquisition window.
ROI tools: A collection of tools that enable you to select a region of interest from an image. These tools let you select points, lines, annuli, polygons, rectangles, rotated rectangles, ovals, and freehand open and closed contours.
rotation angle: The amount by which one image is rotated relative to a reference image. This rotation is computed relative to the center of the image.
rotation-invariant matching: A pattern matching technique in which the reference pattern can be located at any orientation in the test image as well as rotated at any degree.
S
sample: An object in an image that you want to classify.
saturation: The amount of white added to a pure color. Saturation relates to the richness of a color. A saturation of zero corresponds to a pure color with no white added. Pink is a red with low saturation.
scale-invariant matching: A pattern matching technique in which the reference pattern can be any size in the test image.
segmentation function: Fully partitions a labeled binary image into non-overlapping segments, with each segment containing a unique particle.
separation function: Separates particles that touch each other by narrow isthmuses.
shape detection: Detects rectangles, lines, ellipses, and circles within images.
shift-invariant matching: A pattern matching technique in which the reference pattern can be located anywhere in the test image but cannot be rotated or scaled.
skeleton function: Applies a succession of thinning operations to an object until its width becomes one pixel.
smoothing filter: Blurs an image by attenuating variations of light intensity in the neighborhood of a pixel.
Sobel filter: An edge detection algorithm that extracts the contours in gray-level values using a 3 x 3 filter kernel.
spatial calibration: Assigns physical dimensions to the area of a pixel in an image.
spatial filters: Alter the intensity of a pixel relative to variations in intensities of its neighboring pixels. You can use these filters for edge detection, image enhancement, noise reduction, smoothing, and so forth.
spatial resolution: The number of pixels in an image, in terms of the number of rows and columns in the image.
square function: See exponential function.
square root function: See logarithmic function.
standard representation: Contains the low-frequency information at the corners and the high-frequency information at the center of an FFT-transformed image.
structuring element: A binary mask used in most morphological operations. A structuring element is used to determine which neighboring pixels contribute to the operation.
subpixel analysis: Finds the location of the edge coordinates in terms of fractions of a pixel.
T
template: A color or pattern that you are trying to match in an image using the color matching or pattern matching functions. A template can be a region selected from an image, or it can be an entire image.
threshold: Separates particles from the background by assigning all pixels with intensities within a specified range to the particle and the rest of the pixels to the background. In the resulting binary image, particles are represented with a pixel intensity of 255 and the background is set to 0.
threshold interval: Two parameters: the lower threshold gray-level value and the upper threshold gray-level value.
TIFF: Tagged Image File Format. An image format commonly used for encoding 8-bit, 16-bit, and color images. TIFF images have the file extension TIF.
time-bounded: Describes algorithms that are designed to support a lower and upper bound on execution time.
tools palette: A collection of tools that enable you to select regions of interest, zoom in and out, and change the image palette.
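As an aside not found in the original glossary, the threshold definition above maps directly to a few lines of NumPy; the bounds 50 and 200 are arbitrary example values.

    import numpy as np

    # Pixels inside [lower, upper] become particles (255); the rest become
    # background (0), producing a binary image.
    image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)
    lower, upper = 50, 200
    binary = np.where((image >= lower) & (image <= upper), 255, 0).astype(np.uint8)
    print(binary)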
V
value: The grayscale intensity of a color pixel, computed as the average of the maximum and minimum red, green, and blue values of that pixel.
VI: Virtual Instrument. (1) A combination of hardware and/or software elements, typically used with a PC, that has the functionality of a classic stand-alone instrument. (2) A LabVIEW software module, which consists of a front panel user interface and a block diagram program.
W
watershed transform: Partitions an image based on the topographic surface of the image. The image is separated into non-overlapping segments, with each segment containing a unique particle.