ACE User's Guide
Version 4
User’s Manual
I633-E-05
Copyright Notice
The information contained herein is the property of Omron Robotics and Safety Technologies Inc. and shall not be reproduced in whole or in part without prior written approval of Omron Robotics and Safety Technologies Inc. The information herein is subject to change without notice and should not be construed as a commitment by Omron Robotics and Safety Technologies Inc. The documentation is periodically reviewed and revised.
Copyright 2020 by Omron Robotics and Safety Technologies Inc. All rights reserved.
Chapter 1: Introduction 11
1.1 Intended Audience 11
1.2 Related Manuals 11
1.3 Software Overview 12
Configuration Wizards 12
Licenses 16
1.4 Software Features 17
Emulation Mode 17
Application Samples 21
1.5 Robot Concepts 24
Coordinate Systems 24
Calibrations 32
Basic Robot Motion 37
Understanding Belts (Conveyors) 41
IO Watcher 126
V+ File Browser 127
Virtual Front Panel 129
Profiler 131
Application Manager Control 133
Control Buttons 133
5.10 Vision Window 133
Zoom Level 134
5.11 V+ Watch 136
Adding Variables to the V+ Watch Window 136
5.12 Search and Replace 137
Search Options 138
Button Functions 138
5.13 User Management 139
Passwords 141
5.14 Project Options 141
Color Theme 142
Project Settings 143
Window 143
V+ Programs 144
Uninterruptible Power Supply (UPS) 145
Appendix A 597
A.1 Configuring Basler Cameras 597
Camera Connections 597
Power and Communication Connections 599
Configure Network and Camera Settings 599
Add a Basler Camera to the ACE Project 603
Position Latch Wiring 604
Latch Signal Test 606
A.2 Configuring Sentech Cameras 607
Camera Connections 607
Sentech Power and Communication Connections 608
Configure Network and Camera Settings 609
Add a Sentech Camera to the ACE Project 615
Position Latch Wiring 616
Latch Signal Test 618
A.3 Pack Manager Packaging Application Sample Exercise 619
Create a Basic Pack Manager Application Sample 619
Jar Packing Application Summary 629
Modify the Pack Manager Application Sample Project 630
Commissioning on Physical Hardware 660
A.4 Version Control Introduction 662
A.5 Git Software Installation 662
Installing Git 662
Installing TortoiseGit 665
A.6 Version Control Configuration 668
A.7 Creating a Shared Folder and Remote Repository 668
Create Shared Folder 668
Create a Remote Repository 669
A.8 Multiple User Configuration 669
Shared Computer Repository 670
Dedicated Git Server 670
Internet Git Server Service 671
A.9 Exporting ACE Projects 672
Save in Shared Directory 672
Import a Shared Project 672
This manual contains information that is necessary to use Automation Control Environment (ACE) software. Please read this manual and make sure you understand the ACE software features, functions, and concepts before attempting to use it.
Use the information in this section to understand the related manuals listed below.
Manual: Description
eV+3 Keyword Reference Manual (Cat. No. I652): Provides references to eV+ Keyword use and functionality.
eV+ Language Reference Guide (Cat. No. I605): Provides references to eV+ language and functionality.
Robot Safety Guide (Cat. No. I590): Contains safety information for OMRON industrial robots.
S8BA-series Uninterruptible Power Supply (UPS) User's Manual (Cat. No. U726): Installation and operating instructions for the S8BA UPS.
Sysmac Studio for Robot Integrated System Building Function with Robot Integrated CPU Unit Operation Manual (Cat. No. W595): Contains information that is necessary to use the robot control function of the NJ-series CPU Unit.
Sysmac Studio for Project Version Control Function Operation Manual (Cat. No. W589): Contains version control information to properly save, import, export, and manage projects in multi-user environments.
T20 Pendant User's Manual (Cat. No. I601): Contains information for the setup and use of the T20 Pendant.
• Queue managing instances that have been passed to the controller for processing. This includes notifying the PC concerning the status of parts being processed and not processed.
• Robot control
Use the descriptions provided below to gain a general understanding of the ACE software terminology, concepts, and other functionality needed to create applications. More details for these items are provided in separate sections of this document.
Configuration Wizards
Many of the ACE software components are configured using wizards. These wizards provide a series of screens that guide you through a detailed configuration process.
Selections and steps in the wizards vary depending on the application; because of this, each wizard is not fully detailed in this document. Use the information provided in the wizard steps to understand the selections that are required.
An example robot-to-belt calibration wizard is shown below.
Wizard Elements
Many of the wizards share common elements, such as buttons and fields. The following information describes common wizard interface items.
Navigation
Item Description
The Next button will not be available until the current screen is completed.
Dialog-access Controls
Item Description
Approach: Moves the robot to the approach position (the taught position plus the approach height).
Depart: Moves the robot to the depart position (the taught position plus the depart height).
End Effector: Displays the selected end effector (gripper) for the robot.
Gripper: Activates / deactivates the gripper (end effector). Click the Signal button ( / ) to toggle the state.
Here: Records the current position of the robot. The recorded position is displayed in the Taught Position field.
Monitor Speed: Adjusts the monitor speed (percent of full speed) for the robot movements.
Move: Moves the robot to the recorded (taught) position using the speed specified in Monitor Speed.
Item Description
Fast / Slow: Selects fast or slow speed. Click the Signal button ( / ) to toggle the signal state.
On / Off: Starts and stops the conveyor belt. Click the Signal button ( / ) to toggle the signal state.
Reverse / Forward: Selects forward or reverse direction. Click the Signal button ( / ) to toggle the signal state.
Item Description
Edit: Opens the Vision Tools Properties window that is used to edit various parameters of the vision tool.
Stop: Stops the currently-running vision tool or process (this is only active in Live mode).
When Emulation Mode is enabled, some of the ACE software wizards contain differences from
their operation in standard mode. This section describes those differences.
When performing a belt calibration or sensor calibration in Emulation Mode, those wizards
include special interactive 3D Visualizer windows that allow you to interactively position the
elements being calibrated. This feature allows you to see what is being changed and how the
change affects the calibration. An example is shown in the following figure.
When multiple robots in the workspace access the same belt, any belt that has not been taught is not displayed in the 3D teach processes.
NOTE: For Emulation Mode calibrations, the belt controls in the Calibration wizards will allow you to operate the belt even when the Active Control option of the Belt object is not enabled.
For these wizard pages, there are two ways to change the settings.
1. Use the interactive 3D windows to drag the elements to the desired positions. After positioning the elements, you can see the changes to the values in the fields below the 3D windows.
2. Use the fields below the interactive 3D windows to enter the values. After entering the values, you can see the changes in the 3D windows.
Licenses
To enable full functionality, the ACE software requires the V+ controller licenses and PC licenses supplied on the USB hardware key (dongle) as described below. For details on obtaining these items, please contact your local Omron representative.
To view the licenses installed on the dongle, access the Help menu item and then select About... This will open the About ACE Dialog Box. Choose the PC Licenses tab to view the installed licenses.
PC Licenses
The following licenses are available for the PC running ACE software. The PC licenses are supplied on the USB hardware key (dongle, sold separately). Contact your local Omron representative for more information.
When licensing is not activated, you will have full functionality for two hours while running in Emulation Mode. After the two hours expire, you must restart the ACE software to continue.
• Emulation Mode
• Application Samples
Emulation Mode
The ACE software contains an operating mode called Emulation Mode. This mode provides a virtual environment that emulates the physical hardware you have in your application. This allows you to program and operate your application with no connection to physical hardware.
Although Emulation Mode is an optional operating mode, it behaves as though you are working in the standard operating mode of the ACE software. Once you have enabled Emulation Mode, you can create and program an ACE application in the same manner that you would when connected to physical hardware. This provides a seamless user experience that is nearly identical to running with real, physical hardware.
Emulation Mode can run multiple simultaneous instances of controllers and robots on the same PC, including the handling of network ports and multiple file systems. This feature allows you to design, program, and operate a real multi-controller / multi-robot application.
This section details the startup, features, and limitations of Emulation Mode.
Emulation Mode Features
• Program offline
You can open and edit existing ACE projects. You can also edit V+ programs and C# programs.
Because the Emulation Mode application is created with virtual hardware, you can experiment with different robot cell designs and layouts before purchasing the physical hardware.
Emulation Mode has the following differences when compared to operating while connected to physical hardware.
• The Belt object control, speed, and latch settings are different.
When Emulation Mode is enabled, wizards allow use of belt control signals even if Active Control is disabled, fast and slow speed settings are used, and the Latch Period generates a new latch at each distance interval of belt travel. Refer to Belt Object on page 342 for more information.
Emulation Mode can be enabled or disabled by clicking the Enable Emulation Mode icon ( ) or the Disable Emulation Mode icon ( ), respectively. Any unsaved data must be saved before doing this. Note that the icons are disabled if they do not apply; for example, if Emulation Mode is already enabled, the Enable Emulation Mode icon cannot be clicked.
Alternatively, Emulation Mode can be enabled when opening a project with the following procedure:
3. Make other selections for opening a new or existing project, or connecting to an emulated device (refer to Creating a Project on page 49 for more information). After these selections are made and you proceed, the ACE project will open and guide you through any additional steps if necessary. Emulation Mode will be indicated in the Status bar at the bottom of the ACE software, as shown in the following figure. The Enable and Disable Emulation icons are indicated on the ACE 4.2 Toolbar.
4. After the ACE project is open and Emulation Mode is active, the procedure is complete. The same procedure can be done while deselecting the Open in Emulation Mode option to disable Emulation Mode.
Application Samples
The ACE software provides Application Samples to assist in the development of robotic applications. When an Application Sample is created, a wizard is launched to collect preliminary information about the application. When the wizard is completed, an ACE project is created with the basic objects and configurations for the application. This new project can be used as a starting point for many types of robotic applications.
There are two types of Application Samples that are currently offered with the
ACE software: Robot Vision and Pack Manager. Refer to Robot Vision Application Sample on
page 22 and Pack Manager Application Sample on page 22 for more information.
The Application Sample wizard follows the basic steps listed below. These may vary based on
the Application Sample type and selections made during the wizard, but generally follow this
sequence.
Robot Vision Application Samples can be used to create example V+ programs and Robot
Vision objects for integrating Robot Vision with robots, belts, feeders, and more. When using
Robot Vision with V+ programs you are responsible for writing the V+ programs that drive
robot motion and other activity in the robot cell.
NOTE: Robot Vision sample wizards can be used for example application structure, but are not intended to provide all V+ program code required for the application.
Pack Manager Application Samples can be used to create single-robot packaging application projects with Pack Manager. These single-robot samples can later be expanded for multi-robot applications. The ACE software provides a point-and-click interface to develop many packaging applications without writing V+ programs. If the default behavior does not meet the needs of the application, V+ programs can be customized. Pack Manager uses a Process Manager to manage run-time control of the application, including allocation of part and target instances in multi-robot packaging lines, visualization of resources, and statistics tracking.
Additional Information: Refer to Process Manager Object on page 354 and the
ACE Reference Guide for more information.
There are two methods used to create an Application Sample as described below.
Use the following procedure to create an Application Sample from the Start Page.
3. If you are creating the application while connected to a physical controller, make the
appropriate Connect to Device settings. Refer to Online Connections to SmartControllers
on page 75 for more information. If you are not connected to a physical controller, select
the check box for Open in Emulation Mode.
4. Select Create Application Sample and then choose Robot Vision Manager Sample or
Pack Manager Sample. Then, click the Connect button. This will create a new
ACE project with the selected Application Sample and launch the Application Sample
wizard.
5. Select robots to install on the controller (if running in Emulation Mode) and finish all
wizard steps to complete the procedure. A new ACE project will be created according to
the collected wizard information.
To create an Application Sample from an ACE project, select Insert from the menu bar, select
Application Sample, and then click Robot Vision Manager Sample or Pack Manager Sample. An
Application Sample wizard will appear. Finish all wizard steps and then the Application
Sample will be added to the ACE project.
NOTE: You must select the SmartController device to access the Application
Sample item from the menu bar. If the Application Manager device is selected,
these menu items will not be available.
Coordinate Systems
The ACE software uses multiple coordinate frames to define the locations of elements. These are often positioned in reference to other objects or origins. Each coordinate system is briefly described in the following table.
The coordinates in each system are measured in terms of (X, Y, Z, Yaw, Pitch, Roll) unless otherwise specified.
Robot - World: Each robot has a world coordinate system. The X-Y plane of this coordinate system is the robot mounting surface. The Z-axis and origin are defined for each robot model and can be viewed by enabling the Edit Workspace Position button in the 3D Visualizer.
Robot - Joint: Each robot has a joint coordinate system based on the orientation of each individual joint. Each element of a coordinate is the angular position of the joint.
Robot - Tool: This is the coordinate system based on the robot tool. The origin is positioned at the tool flange with the Z-axis oriented away from the tool flange when a null tool offset is applied.
Belt: This is the coordinate system describing the direction and orientation of a conveyor belt.
Workspace Coordinates
The workspace coordinate system is a global reference frame for all object positions in the 3D Visualizer. The workspace origin is not visible, but it is positioned at the center of the tile grid, as shown in Figure 1-8 below.
Workspace coordinates are primarily used for positioning robots and other features in the workspace. Allocation of belt-relative Part and Part Target instances during run time depends on the relative positions of robots along a process belt object; therefore, robot positions cannot be changed while a Process Manager is active.
Robot World Coordinates
The robot world coordinate system is a frame of reference for all transformations recorded by a specific robot. It is primarily used to define points with respect to the robot itself. Each robot model has a unique base frame, but the X-Y plane is typically located at the robot mounting surface. For example, the position markers of the robots shown in Figure 1-9 and Figure 1-10 are also the robot origin in each robot world coordinate system.
This coordinate system is used when a program defines a transformation-type location. Whenever a position is taught or a motion is executed, it is usually done with respect to this coordinate system.
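For example, a transformation-type location can be defined and used in a V+ program as follows. This is a minimal sketch; the location name and coordinate values are illustrative only.

    SET pick.loc = TRANS(550, 0, 317, 0, 180, 180)  ; X, Y, Z, Yaw, Pitch, Roll
    MOVE pick.loc                                   ; motion is relative to the robot world frame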
Joint Coordinates
The joint coordinate system is used to define the position of each individual joint. Each coordinate has as many elements as there are joints in the robot. For example, a Cobra would have four elements in a coordinate while a Viper would have six elements.
Joint coordinates become useful when defining a point that can be reached in multiple orientations. For example, the two configurations shown in Figure 1-11 have the gripper in the same position (550, 0, 317, 0, 180, 180) as defined by robot world coordinates. However, the robot arm can be oriented in two different ways to reach this position. The top configuration in the figure shows joint coordinates (-43, 93.5, 77, 50.5) and the bottom configuration shows joint coordinates (43, -93.5, 77, -50.5).
NOTE: The size of the tool is exaggerated in the figures to clearly demonstrate
the orientation of J4.
A location based on joint coordinates instead of world coordinates is called a precision point. These are useful in cases where one orientation would cause the robot to strike an obstacle. A precision point guarantees that a robot will always move to the correct orientation. This is most commonly used with Cobra and Viper robots, since locations can be reached from multiple joint positions. Precision points can be defined for parallel robots such as the Hornet and Quattro, but because each location can only be reached while the joints are in one position, joint coordinates and precision points usually are not used with these robots. Joint coordinate jogging is also not allowed for parallel arm robots.
NOTE: The orientation of the servo is important when considering joint coordinates. For example, in Figure 1-11 above, the J4 orientation convention is in the opposite direction of the other two rotational joints. This is because the joint 4 servo internal mounting position is inverted.
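In V+ programs, precision point variables are written with a leading # character. The following minimal sketch records the upper configuration from Figure 1-11 as a precision point; the variable name is illustrative only.

    SET #pick.upper = #PPOINT(-43, 93.5, 77, 50.5)  ; joint angles J1 to J4 of a four-axis robot
    MOVE #pick.upper                                ; always reproduces this arm configuration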
Tool Coordinates
The tool coordinate system is used to define the position of tool tips. Its frame of reference is positioned at the tool flange itself. The tool Z-axis points opposite the other frames because the main purpose of this system is to define the offset of tool tips. For example, a tool tip with coordinates (0, 0, 100, 0, 0, 0) is an offset of 100 mm in the negative Z-axis of the workspace and robot world coordinate systems.
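In a V+ program, such a tool tip offset can be applied with the TOOL instruction. A minimal sketch for the 100 mm tool tip described above:

    TOOL TRANS(0, 0, 100, 0, 0, 0)  ; subsequent motions position the tool tip instead of the flange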
Belt Coordinates
The belt coordinate system is used to define positions on a belt window. Its frame of reference is at one of the upstream corners of the belt. The axes are oriented so the positive X-axis is in the direction of the belt vector and the Y-axis is along the belt width. The belt is typically positioned so that the Z-axis of the belt frame aligns with the tool Z-axis, but it can be reversed if necessary.
This coordinate system is primarily used to provide part locations on a belt to a robot and to
verify that a belt-relative location is within the belt window before commanding a motion to
the location. When an instance is located, the identified belt coordinate is converted to a robot
world coordinate. This means that a belt-to-robot calibration must be done before any belt
coordinates are recorded.
The belt coordinate system is also used to set the various allocation limits in the belt window for a Process Manager. The various limits are set using X-coordinates and, for the Downstream Process Limit Line, an angle. In this case, the angle translates to both X and Y-coordinates to determine when an instance crosses that line. The various coordinates can be seen in Figure 1-13 based on the numbers shown in the Belt Calibration Editor in Figure 1-14 below. Refer to Belt Calibration on page 257 for more information.
NOTE: Belt coordinates do not apply to a Belt object created in the Process area of the Multiview Explorer (that is separate from a belt window). Belt objects are used to record information about the belt itself, such as encoders and signals, and provide a representation of a belt in the 3D Visualizer. Their location in the workspace is set by their Workspace Location parameter, which uses workspace coordinates. Conversely, belt windows regard the positioning of the robot gripper with respect to the belt and use belt coordinates to determine instance locations.
Camera Coordinates
The camera coordinate system is used to define positions relative to a camera. Vision tools return positional data on detected instances or points in camera coordinates. 2D vision only requires the X, Y, and Roll components. Since the positions are still returned and used as 6-element transformations, the resulting locations are in the form of (X, Y, 0, 0, 0, Roll).
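In a V+ program, a 2D vision result can be composed into such a transformation. A minimal sketch, assuming x, y, and roll hold the returned components:

    SET vis.loc = TRANS(x, y, 0, 0, 0, roll)  ; 2D result expressed as a 6-element transformation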
Camera coordinates must be interpreted into a different coordinate system before they can be
practically used in an application. A robot-to-camera calibration is required to translate vision
results to locations a robot can move to.
Calibrations
Calibrations are used to define relationships between coordinate frames. The calibration method may differ depending on whether the application uses Robot Vision Manager or a Process Manager, but the function of calibration is the same. In applications using Process Manager, the calibrations can be found in their respective sections in the Process Manager edit pane. The Process Manager will show the calibrations required for defined processes.
When Robot Vision Manager is used, the calibrations are found by right-clicking Robot Vision
in the Multiview Explorer, clicking Add, and selecting the appropriate calibration object.
NOTE: Verify that any necessary tool tip offsets have been defined before beginning calibration.
Calibration Order
There are two types of hardware calibrations used in the ACE software. Most applications will use at least one, but if more than one is necessary, the calibrations should always be performed in the following order:
• Robot-to-Belt Calibration
• Robot-to-Sensor Calibration (these include a wide range of different calibrations, including Robot-to-Camera and Robot-to-Latch calibrations)
This is important because calibrations are dependent on previously defined locations. For example, a robot-to-camera calibration in an application with a belt utilizes a belt vector to define the locations of instances detected by the camera. If the camera is calibrated first, the camera location will not be recorded properly and will need to be recalibrated once the belt has been defined.
NOTE: This assumes robot hardware calibrations were performed before doing
the calibrations shown above. If robot hardware calibration changes, the other
calibrations may need to be performed again.
Robot-to-Belt Calibration
This calibration translates positional information from the belt coordinate system to the robot world coordinate system. This is required whenever a belt is used in an application. One calibration needs to be performed for each encoder input associated with a robot.
Robot-to-belt calibration will require three points to be defined on the surface of the belt, shown
in order in Figure 1-16 below. Use the following procedure to execute a robot-to-belt calibration.
1. Place a calibration target on the belt at the farthest point upstream that the robot can
reach. Verify that the robot can reach that belt position across the entire width of the
belt.
2. Position the robot at the calibration target and record the position. This saves the robot
location and the belt encoder position.
3. Lift the robot and advance the belt to move the calibration target to the farthest downstream position that the robot can reach. Again, verify the robot can reach across the entire width of the belt to ensure that the entire belt window remains within the work envelope.
NOTE: It is important to ensure the calibration target does not move relative to the belt while advancing the belt.
4. Reposition the robot at the calibration target and record the position. The combination of recorded robot world coordinates and the belt encoder positions of these two points defines the belt vector, which is the X-axis of the belt transformation, and the millimeter-per-count scale factor (mm/count).
5. Remove the calibration target from the belt and reposition it on the opposite side of the belt at the farthest downstream position where the robot should pick a part. Record its position in the same way as the other two points. This defines the belt pitch, or Y-axis, of the belt transformation. The Z-axis of the belt transformation is defined based on the right-hand rule. After completing this step, the robot-to-belt calibration procedure is finished.
NOTE: The three points in this calibration also define other values, such as the upstream and downstream allocation limits, also shown in Figure 1-16 below. However, these do not directly affect the calibration of the belt and can later be changed to fit the needs of the application. For ACE Sight and V+ programs, the pick limits will be defined in a V+ program. For applications with a Process Manager, refer to Belt Calibrations on page 372 for more information.
Item Description
1 Upstream Limit
2 Downstream Limit
3 Downstream Pick Limit
4 Belt Direction
5 Robot Side
6 Far Side
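As a worked illustration of the millimeter-per-count scale factor from step 4, suppose (hypothetical numbers) the belt advanced 80,000 encoder counts between the two recorded points and the points are 400 mm apart along the belt vector. In a V+ program this could be computed as:

    scale = 400/80000  ; belt scale factor = 0.005 mm per encoder count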
Robot-to-Camera Calibration
This calibration orients a camera frame of reference relative to a robot world coordinate system. It is used to translate positional information of vision results from the camera coordinate system to the robot world coordinate system. One of these is required for each association between a robot and a camera.
To perform this calibration, a grid calibration must be active in the Virtual Camera. If it is not,
perform a grid calibration before proceeding. Refer to Grid Calibration Method on page 282 for
more information.
The process of calibrating the camera is dependent on the type of pick application in which it is involved. Generally, there are three categories with which the application could be associated:
• Fixed-mounted
• Arm-mounted
• Belt-relative
In all cases, the robot tool tip will be used to record various positions on the pick surface to associate it with the robot world coordinates. At least four points are required to generate the calibration, but a 3x3 grid is recommended. The accuracy of the calibration increases with the distribution of the targets across the field of view (refer to Figure 1-17 below). The configuration on the left would result in an accurate calibration, while the configuration on the right could yield inaccurate results.
When the camera is fixed-mounted, a calibration target is recorded as a Locator Model and the target is placed in a defined region of interest of a Locator included in the calibration wizard. The target must then be repositioned at several different points on the pick surface. For each one, the camera detects the instance and records the position in camera coordinates, and then the position is taught to the robot by moving the gripper to the instance. The combination of the recorded data teaches the robot where the pick surface and the camera are relative to the robot, thus translating camera data to robot world coordinates.
If the application has a belt, the calibration is effectively the same, but it must be executed in
two phases since the robot cannot reach the camera field of view. In the first phase, the targets
are placed on the belt underneath the camera and their positions are recorded with respect to
the camera. Then, the belt is advanced to where the robot can touch the targets and record
their position in the robot world coordinate system. These locations and the associated belt
encoder positions are used to define the location of the camera with respect to the robot world
coordinate system.
Robot-to-Latch Calibration
This calibration positions a latch sensor relative to a belt coordinate system. It is used to translate latch detection results to belt coordinate positions. One of these is required for each association between a robot and a belt with a latch sensor.
The robot-to-latch calibration is similar to the robot-to-camera calibration when a belt is present. However, instead of using a camera to detect the location of the target, the calibration is used to determine a part detection point relative to a sensor signal.
The target and the associated object are placed upstream of the latch sensor. When the belt is advanced past the sensor, the belt encoder position is recorded. Then, the belt is advanced to where the robot can touch the part. The recorded location combined with the belt encoder position indicates where the part will be detected by the sensor relative to the latched belt encoder position. Figure 1-18 below shows an example using a pallet with slots for six parts.
In the following figure, the blue field represents the belt with the arrows indicating the direction of belt travel. The numbered sections represent the different steps of the calibration, explained as follows.
1. The pallet is positioned upstream of the latch sensor and the belt is not in motion.
2. The belt is advanced and the pallet is detected by the latch sensor. The belt encoder position at this point is recorded.
3. The belt is advanced to a position where the pallet is within the robot range of motion.
4. The robot tool tip (marked by a black circle) is positioned where the first part will be placed. The current belt encoder position is recorded and compared to the latched belt encoder position. This difference and the position of the robot along the belt vector are used to position the upstream part detection point for the latch sensor.
NOTE: When calibrating multiple robots to a single sensor, ensure the initial position of the calibration object is identical for each robot calibration to avoid a large deviation in part placement relative to a latched position. There should not be a large deviation in sensor position for a single detection source, as shown in Figure 1-19 below. Instead, the sensors should be close together, as shown in Figure 1-20 below. It is normal for there to be a small deviation due to differences between physical assemblies and ideal positions in 3D visualization.
Speed, Acceleration, and Deceleration
Robot speed is usually specified as a percentage of normal speed, not as an absolute velocity. The speed for a single robot motion is set in the Speed parameter of the Pick Motion Parameters or Place Motion Parameters dialogs for each Part or Part Target location. The result obtained by the speed value depends on the operating mode of the robot (joint-interpolated versus straight-line). Refer to Joint-Interpolated Motion vs. Straight-Line Motion on page 40 for more information.
Whether in joint-interpolated mode or straight-line mode, the maximum speed is restricted by
the slowest moving joint during the motion, since all the joints are required to start and stop at
the same time. For example, if a given motion requires that the tool tip is rotated on a SCARA
robot (Joint 4), that joint could limit the maximum speed achieved by the other joints since
Joint 4 is the slowest moving joint in the mechanism. Using the same example, if Joint 4 was
not rotated, the motion could be faster without any change to the speed value.
NOTE: The motion speed specified in the Pick Motion Parameters or Place
Motion Parameters dialogs must always be greater than zero for a regular robot
motion. Otherwise, an error will be returned.
You can use the acceleration parameter to control the rate at which the robot reaches its designated speed and stops. Like speed, the acceleration / deceleration rate is specified as a percentage of the normal acceleration / deceleration rate. To make the robot start or stop smoothly with a less-abrupt motion, set the acceleration parameter to a lower value. To make the robot start or stop quickly with a more abrupt motion, set the acceleration parameter to a higher value.
The speed and acceleration parameters are commonly modified for cycle time optimization
and process constraints. For instance, abrupt stops with a vacuum gripper may cause the part
being held to shift on the gripper. This problem could be solved by lowering the robot speed.
However, the overall cycle time would then be increased. An alternative is to lower the
acceleration / deceleration rate so the part does not shift on the gripper during motion start or
stop. The robot can still move at the maximum designated speed for other movements.
Another case would be a relatively high payload and inertia coupled with tight positioning tolerances. A high deceleration rate may cause overshoot and increase settling time.
NOTE: Higher acceleration / deceleration rates and higher speeds do not always
result in faster cycle times due to positioning overshoot that may occur.
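In a V+ program, the corresponding settings are made with the SPEED and ACCEL instructions. A minimal sketch with assumed values and location name:

    SPEED 80 ALWAYS  ; run subsequent motions at 80% of normal speed
    ACCEL 40, 40     ; lower acceleration and deceleration to 40% so a vacuum-held part does not shift
    MOVE place.loc   ; this motion uses the reduced rates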
Approach and Depart Heights
Approach and depart heights are used to make sure that the robot approaches and departs from a location without running into any other objects or obstructions in the robot envelope. Approaches and departs are always parallel to the Z-axis of the tool coordinate system. Approach and depart heights are typically specified for pick and place locations. The approach segment parameters are shown in the following figure.
When approach and depart heights are specified, the robot moves in three distinct motions. In the first motion (Approach segment), the robot moves to a location directly above the specified location. For the second motion, the robot moves to the actual location and the gripper is activated. In the third motion (Depart segment), the robot moves to a point directly above the location.
Notice that all the motion parameters that apply to a motion to a location can also be applied to approach and depart motions. This allows you to move at optimum speed to the approach height above a location, then move more slowly when actually acquiring or placing the part, and finally depart quickly if the application requires this.
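In a V+ program, this three-segment motion corresponds to the APPRO, MOVES, and DEPART instructions. A minimal sketch with assumed heights, speeds, and location name:

    APPRO pick.loc, 50  ; approach: stop 50 mm above the location along the tool Z-axis
    SPEED 25
    MOVES pick.loc      ; move slowly in a straight line to the location
    CLOSEI              ; close the gripper and wait for it to actuate
    DEPART 50           ; depart: retract 50 mm along the tool Z-axis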
Arm Configuration
Another motion characteristic that you can control is the configuration of the robot arm when
moving to a location. However, configuration options apply only to specific types of robots.
Location Precision
When a robot moves to a location, it actually makes several moves, each of which is a closer
approximation of the exact location. You can control the precision with which the robot moves
to a location using the Motion End parameter (Settle Fine / Settle Coarse). If Settle Coarse is
selected, the robot will spend less time attempting to reach the exact location. In many cases,
this setting will be adequate and will improve robot cycle times.
Making smooth transitions between motion segments without stopping the robot motion is called continuous path operation. When a single motion instruction is processed, the robot begins moving toward the location by accelerating smoothly to the commanded speed. Sometime later, when the robot is close to the destination location, the robot decelerates smoothly to a stop at the location. This motion is referred to as a single motion segment, because it is produced by a single motion instruction.
When a continuous-path series of two motion instructions is executed, the robot begins moving toward the first location by accelerating smoothly to the commanded speed just as before. However, the robot does not decelerate to a stop when it gets close to the first location. Instead, it smoothly changes its direction and begins moving toward the second location. Finally, when the robot is close to the second location, it decelerates smoothly to a stop at that location. This motion consists of two motion segments since it is generated by two motion instructions.
If desired, the robot can be operated in a non-continuous-path mode, which is also known as breaking-continuous-path operation. When continuous-path operation is not used, the robot decelerates and stops at the end of each motion segment before beginning to move to the next location. The stops at intermediate locations are referred to as breaks in continuous-path operation. This method is useful when the robot must be stopped while some operation is performed (for example, closing the gripper or applying a dot of adhesive). The continuous or non-continuous path motion is set using the Wait Until Motion Done parameter and Motion End parameter in the Pick Motion Parameters or Place Motion Parameters dialogs. To enable continuous-path operation, you must set both parameters as follows.
• Wait Until Motion Done = False
Continuous-path transitions can occur between any combination of straight-line and joint-interpolated motions. Refer to Joint-Interpolated Motion vs. Straight-Line Motion on page 40 for more information.
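In a V+ program, a break in continuous-path operation can be forced with the BREAK instruction. A minimal sketch with assumed location and signal numbers:

    MOVE loc.a
    MOVE loc.b  ; blends smoothly with the previous segment (continuous path)
    BREAK       ; decelerate to a stop before the next operation
    SIGNAL 1    ; for example, turn on an output to apply a dot of adhesive while stopped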
Joint-Interpolated Motion vs. Straight-Line Motion
The path a robot takes when moving from one location to another can be either a joint-interpolated motion or a straight-line motion. A joint-interpolated motion moves each joint at a constant speed except during the acceleration / deceleration phases (refer to Speed, Acceleration, and Deceleration on page 37 for more information).
With a rotationally-jointed robot, the robot tool tip typically moves along a curved path during
a joint-interpolated motion. Although such motions can be performed at maximum speed, the
nature of the path can be undesirable. Straight-line motions ensure that the robot tool tip traces
a straight line. That is useful for cutting a straight line, or laying a bead of sealant, or any other
situation where a totally predictable path is desired.
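In V+ programs, the two motion types correspond to the MOVE and MOVES instructions. A minimal sketch with assumed location names:

    MOVE via.loc    ; joint-interpolated: fast, but the tool tip may follow a curved path
    MOVES seam.end  ; straight-line: the tool tip traces a predictable straight line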
Performance Considerations
Things that may impact performance in most applications include robot mounting, cell layout,
part handling, and programming approaches.
The mounting surface should be smooth, flat and rigid. Vibration and flexing of the mounting
surface will degrade performance. Therefore, it is recommended that you carefully follow the
robot-mounting procedures described in your robot user's guide.
When positioning a robot in the workcell, take advantage of moving multiple joints for faster
motions. On a SCARA robot, the “Z” and “theta” axes are the slowest, and motion of these
joints should be minimized whenever possible. This can be accomplished by positioning the
robot and setting conveyor heights and pick-and-place locations to minimize Z-axis motion.
Regarding cell layout and jointed arms, the same point-to-point distance can result in different cycle times: moving multiple joints combines the joint speeds for faster motion.
For part handling, settling time while trying to achieve a position can be minimized by centering the payload mass in the gripper. A mass that is offset from the tool rotation point will result in excess inertia that will take longer to settle. In addition, minimizing gripper mass and tooling weight will improve settling time. This could include using lighter materials and removing material that is not needed on tooling.
Programming Considerations
Understanding Belts (Conveyors)
There are two basic types of conveyor systems: indexing and tracking. In an indexing conveyor system, also referred to as a noncontinuous conveyor system, you specify either control signals or a time interval for the belt to move between stops. When the conveyor stops, the robot removes parts from the belt, and the conveyor is then signaled to move again. The conveyor must be equipped with a device that can use a digital output to turn the conveyor ON and OFF.
Indexing Conveyors
Indexing conveyor systems are configured as either non-vision or vision. With a non-vision indexing system, the part must be in the same location each time the belt stops. In a vision-equipped indexing system, a fixed-mount camera takes a picture when the belt stops and the robot accesses any objects found.
Tracking Conveyors
In a tracking conveyor system, the belt moves continuously and the robot tracks parts until the
speed and location of the robot gripper match those of a part on the belt. The robot then
accesses the part.
Tracking conveyors must be equipped with an encoder that reports the movement of the belt
and distance moved to the ACE software. Tracking conveyor systems are configured as either
non-vision or vision.
With a non-vision tracking conveyor, a sensor signals that a part has passed a known location. The ACE software tracks the progress of the belt and accesses the part when it comes into the robot working area (belt window). Parts must always be in the same location with respect to the center line of the belt.
With a vision-equipped tracking conveyor, the vision system detects parts that are randomly positioned and oriented on the belt. A fixed-mount camera takes pictures of the moving parts and, based on the distance the conveyor travels, returns the locations of parts. These part locations are queued and accessed by the robot.
This section describes installation and uninstallation details for the ACE software.
System Requirements
ACE software has the following system requirements.
Once the installation media is loaded, you can access the installation media contents from a Windows Explorer window and double-click the file setup.exe.
The installation is performed in two distinct phases. The first phase checks for prerequisites on your PC. The second phase installs the ACE software on the computer.
In the first phase, the installer checks for the following prerequisites:
• TortoiseGit
• OPC Redistributables
If these required packages are not on the computer, they will be installed. If the Microsoft .NET Framework 4.6.1 is missing, the installer will attempt to download the files from the internet. If the computer does not have internet connectivity, you will need to manually download and install the Microsoft .NET Framework 4.6.1 from the Microsoft download site.
If any packages are already on the computer, they will be grayed out and not selectable from the installation GUI, and they will not be installed.
The ACE 4.2 Setup wizard provides options depending on the required use of ACE. Selecting the “ACE Files” option will install the standard version of ACE. Selecting the “ACE Application Manager Files” option will install a version of ACE that is designed to act as a server instance. Refer to Remote Application Manager for more information. Additionally, “Git Files” and “Tortoise Files” can be selected to install the latest versions of Git and TortoiseGit, respectively.
NOTE: If the installer detects that a version of Git or TortoiseGit is installed, the
respective option in the window will be unchecked and disabled.
Figure 2-2 ACE Installation Options
When the installation completes, the directory will be similar to that shown in the following figure. There will be two executables within the installation folder.
Usage Considerations
The following network ports are used by the ACE software.
PC network ports:
A Basler Pylon software suite is installed for support and configuration of Basler cameras.
Git and Tortoise repository resources are installed for use with project version control.
OPC Test Client
Offline Visualizer
Software used for playback of 3D Visualizer recording files (.awp3d). Refer to 3D Visualizer on
page 107 for more information.
Find an uninterruptible power supply script file example in the ACE software installation directory in the UPS folder. Refer to Uninterruptible Power Supply (UPS) on page 145 for more information.
Two PDF files are included to assist in vision system grid calibrations. Refer to Custom
Devices on page 290 for more information.
ACE Software Language Variations
When installing ACE software, the following language variations are included. Access the different language variants with the Windows Start Menu group or in the following default installation directory.
C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Omron\ACE 4.2\Localized Shortcuts
Language Variants
• French
• German
1. Open the Control Panel from the Windows Start Menu and then select Programs and
Features (Apps & features). The Uninstall or change a program Dialog Box is displayed.
2. Select ACE 4.2 and then click the Uninstall button.
3. Proceed with any confirmation messages to uninstall the software.
4. Click the Finish button to complete the uninstallation of the ACE software.
This section describes how to start and exit the ACE software, create and save projects, and perform other basic operations.
• Exit all applications that are not necessary to use the ACE software. If the ACE software startup or operations are affected by virus checking or other software, exclude the ACE software from their scope.
• If any hard disks or printers that are connected to the computer are shared with other computers on a network, isolate them so that they are no longer shared.
• With some notebook computers, the default settings do not supply power to the USB port or Ethernet port to save energy. There are energy-saving settings in Windows, and also sometimes in utilities or the BIOS of the computer. Refer to the user documentation for your computer and disable all energy-saving features.
Starting the ACE Software
The installation includes the ACE Application Manager and ACE, where the Application Manager is the server instance that can integrate with clients. Use one of the following methods to start the ACE software.
• Double-click the ACE software shortcut icon on your desktop that was created during the installation process.
• From the Windows Start Menu, select Omron - ACE 4.2 in the desired language.
The ACE software starts and the following window is displayed. This window is called the
Start page. The language displayed on this page will be the one selected at start up.
Right-clicking the desktop icon and selecting Properties shows the path to the installed application, as shown in the following figure.
NOTE: You may need to disable Windows Login to allow automatic launch of
the ACE project. Contact your system administrator for more information.
Use the following procedure to configure automatic project launch on system boot up.
1. Identify the preferred language variant to use with the ACE project. Take note of the language code. This will be used later in this procedure.
Language Code
German de-DE
English en-US
Spanish es-ES
French fr-FR
Italian it-IT
Japanese ja-JP
Korean ko-KR
2. Locate the ACE software executable file. The default software installation directory
where the ACE software executable file is normally located can be found here:
C:\Program Files\Omron\ACE 4.2\bin\Ace.AppMgr.exe
3. Create a shortcut to the Ace.AppMgr.exe file in a new location that can be accessed in the next step (on the desktop, for example).
4. Access the new shortcut to the Ace.AppMgr.exe file. Open the shortcut properties by
right-clicking the shortcut and selecting Properties. Then, view the shortcut tab of the
Properties Dialog Box.
5. Modify the file name at the end of the Target path by replacing \Ace.AppMgr.exe" with
the following.
\Ace.AppMgr.exe" start --culture=xx-XX --project="Project Name"
• Substitute "xx-XX" with the preferred language code from Table 3-1 above.
• Substitute "Project Name" with the name of the project to automatically open.
An example Target path would appear as shown below for the Ace.AppMgr.exe shortcut that was created. The example below uses the preferred language of English and the project name of Part Place 1.
"C:\Program Files\Omron\ACE 4.2\bin\Ace.AppMgr.exe" start --culture=en-US --project="Part Place 1"
6. Confirm the function of the modified shortcut Target path by double-clicking the shortcut. The project should open in the ACE software to confirm the correct target command was used.
7. Access the Windows Startup Folder, found in C:\ProgramData\Microsoft\Windows\Start
Menu\Programs\Startup. The Windows Startup folder will open.
8. Move the modified shortcut into the Windows Startup folder. This will cause the ACE project to automatically open upon boot up, as shown in Startup Folder on page 54.
9. To complete this procedure, reboot the PC and confirm that the ACE project automatically launches.
When Yes is clicked, the @autostart project is deleted from the server instance.
• Click the Close button in the title bar of the Start page.
Additional Information: A project file can also be created with the Connect to
Device method when a SmartController is present. Refer to Creating a Project
from an Online Connection on page 61 for more information.
2. Enter the project name, author, and comment in the Project Properties Screen, select the device category and the device to use, and then click the Create button. (Only the project name is required.)
Property Description
If this is the first time you are creating a project after installing the ACE software, the user name that you entered when you logged onto Windows is displayed (see note below).
• Standard Project
• Library Project
Refer to Device List and Device Icon on page 93 for more information.
Device Specify the device type based on the category selection (required).
Version Specify the device version based on the category selection (required).
NOTE: You can change the author name that is displayed when you create a project in the option settings. Refer to Project Settings on page 143 for more information.
4. After the new project properties are complete, click the Create button. A project file is created and the following window is displayed with the specified device inserted.
NOTE: This directory applies when the default software installation path is
used.
All project files that have been saved will be displayed in a list when the Open Project selection is made. Refer to Opening a Project File on page 64 for more information.
IMPORTANT: Do not modify data internal to the default save location. Data
can be corrupted and project files can be lost as a result.
To save an existing project file, select Save from the File Menu (or press the Ctrl + S keys).
To save a new project, select Save As. Enter a specific project name; do not accept the default, as this may create multiple projects with the same name but different serial numbers.
NOTE: Projects with the same name may cause runtime issues when opened
with AutoStart. Confirm each project has a unique name.
The new project file is saved. To use a project file on a different computer, export the project
file as described in Exporting a Project File on page 68.
IMPORTANT: Refer to Pull V+ Memory on page 158 for information about saving controller memory contents to the project file.
For further information about version control, refer to the Sysmac Studio for Project Version Control Function Operation Manual (Cat. No. W589).
Change the project file name and any other project property details, and then click the
Save button.
1. Click Connect to Device from the Start page. This will open the Connect to Device Dialog Box.
2. Make the appropriate connection settings in the Connect to Device Dialog Box and click the Connect button (refer to Online Connections to SmartControllers on page 75 for more information).
3. After the connection is established, the Application window will be shown. A new project has been created from an online connection.
NOTE: This new project gets a default project name. Select Save as... from
the File Menu to save this project with a different name and adjust project
properties if necessary.
2. A dialog box is displayed to ask if you need to save the project. Click the Yes button or
No button for the project saving choice.
1. Click Open Project on the Start page. This will display the Project Screen.
2. Find a project by searching for its name or selecting it from the project list and click the
Open button. This will open the project.
Item Description
Project name The project names that were entered when the projects were created
are displayed.
Comment The comment that was entered when the project was created.
Last Modified The last date that the project was modified.
Created The date and time that the project was created.
Author The name that was entered when the project was created.
The Project Properties panel allows the project to be set with password security. See "Properties Password Protection".
To enable password protection, click the check box, then enter and confirm the password, and then click Save.
Opening the ACE project file will automatically launch the ACE software, import a new project
file, and open the project.
IMPORTANT: Repeated opening of a particular .awp2 ACE project file will result in duplicate copies of the project. This method should be used only for projects not previously imported.
Select the project you want and click Open. The project opens in the Multiview Explorer.
2. Select a project to export from the list of project names and click Export. The Export Project Dialog Box will be displayed.
When Git is installed onto the computer, or the ACE application is connected to a remote repository, clicking Export opens the repository view. See "Repository View".
1. Click Open Project on the Start page. Then, click the Properties button or click the Edit
Properties icon ( ). This will display the Properties Dialog Box.
2. Select the Enable password protection for the project file Check Box.
3. Enter the password, confirm the password (these must match), and then click the Save button. The file is saved and password protection is set for this project.
The Edit Properties icon displayed in the project file list will indicate password protection.
1. Click Open Project on the Start page. Then, click the Properties button or click the Edit
Properties icon ( ). This will display the password entry prompt.
2. Enter the correct password for the project and click the OK button. This will display the
Project Properties Dialog Box.
3. Uncheck the Enable password protection for the project file check box and then click the Save button. This will remove password protection for the project.
Use Save As New Number... from the File Menu to assign an incrementing number to an open project.
NOTE: Once a new project number is created, the number cannot be changed.
NOTE: ACE software version 3.8 (and older) cannot connect to a controller that ACE software version 4.2 (or higher) has previously connected to until a zero memory or reboot has been executed.
If an online connection has been established, you can perform operations such as:
NOTE: You can simultaneously go online with more than one SmartController in a project from the ACE software. The operations that are described in this section apply to the currently selected SmartController in the Multiview Explorer. If there is more than one SmartController registered in the project, select the SmartController in the Multiview Explorer before attempting to go online.
The main areas of the Connect to Device Dialog Box are described below.
2 Connection Type: Use this area to select a connection method when going online.
3 Connection Settings: Use this area to detect and select controller IP addresses used for online connection.
4 Operation after Connection: Use this area to create an ACE Sight or Pack Manager application sample after a connection is established.
6 Detected Controller IP Addresses: After a controller has been detected, it will appear in this area for selection.
NOTE: A controller configured to use a gateway will not appear in this area. If the network configuration is correct, the controller should be able to connect by manually entering the IP address of the desired controller. Refer to Detect and Configure Controller on page 78 for more information.
7 Monitor Used for opening the Monitor Window for the selected controller.
Window Refer to Monitor Window on page 206 for more information.
9 Detect and Opens the Controller IP Address Detection and Configuration Dia-
Configure log Box. Refer to Detect and Configure Controller on page 78 for
Controller more information.
If the Controller appears in the Detected Controller IP Address area, select it and click the
Connect button. This will initiate the online connection process. If the Controller does not
appear, refer to Detect and Configure Controller on page 78.
The project workspace appears and a yellow bar is present under the toolbar when an online
connection is present.
l The detected controller's IP address and/or subnet is changed for compatibility with
the PC's LAN connection. Change the Desired Properties Address, Subnet, and Name
accordingly in the area shown below.
Figure 4-4 Change the Controller Network Configuration, Name and Autostart Properties
1. Use the Online icon ( ) in the toolbar to go online with the controller selected in
the Multiview Explorer.
2. Select Online from the Controller Menu to go online with the controller selected in the
Multiview Explorer.
3. Click the Online icon ( ) in the Task Status Control area to go online with the asso-
ciated controller. Refer to Task Status Control on page 122 for more information about
Task Status Control.
Use the Online and Offline buttons to control the online connections to each controller.
These methods rely on a correct IP assignment for the controller. Refer to Controller Settings on
page 185 for more information.
l Click the Offline icon in the Task Status Control area (refer to Task Status Control on
page 122 for more information).
You have the option of using the controller memory, the ACE project V+ memory, or merging
the two together.
Controller Memory
Clicking the Controller button ( ) will overwrite the ACE project V+ memory with the
contents of the Controller.
NOTE: If you overwrite the ACE project V+ memory with the contents of the con-
troller, there is no way to recover previous project data unless a backup copy has
been created. Consider selecting Save As... from the File Menu to save a new
backup copy of the project before using the Controller memory.
Workspace Memory
Clicking the Workspace button ( ) will overwrite the Controller memory with the con-
tents of the ACE project.
Additional Information: All user programs will be stopped and all program
tasks will be killed when choosing this option.
Merging Memory
When a difference between the memory of a Controller and the ACE project is detected, mer-
ging can unify the differences between them based on your preference. Clicking the Merge but-
ton ( ) will display the Merge Contents of V+ Memory Dialog Box as shown below.
Additional Information: All user programs will be stopped and all program
tasks will be killed when merging the project and controller memory.
The Merge Contents of V+ Memory Dialog Box will guide you through a selection process to
choose what memory source is preferred. This dialog box provides the following functions to
assist in selecting what memory will be overwritten and used in the ACE project.
This section provides the names, functions, and basic arrangement of the ACE software user
interface.
Item Description
1 Menu bar
2 Toolbar
3 Multiview Explorer
4 Filter Pane
6 Status Bar
7 Edit Pane
8 Multifunction Tabs
9 Multipurpose Tools
Contents Description
Close Clears all objects and project data and returns to the ACE start screen.
Save As... Save the currently opened project with options to adjust project properties.
Contents Description
Build Tab Page Display the Build Tab in the multifunction tabs area.
Event Log Display the Event Log in the multifunction tabs area.
Contents Description
Closed Windows History: Refer to Recently Closed Windows on page 101 for more information.
Contents Description
The Controller menu item is not available when the Application Manager device is
selected.
Contents Description
New Horizontal Tab Group: Divide the Edit Pane into upper and lower sections. Only available when the Edit Pane has more than one tab open.
New Vertical Tab Group: Divide the Edit Pane into left and right sections. Only available when the Edit Pane has more than one tab open.
Move to Next Tab Group: Move the selected tab page to the tab group below or to the right of the current tab group. Only available when there is a tab group below or to the right of the current tab group.
Move to Previous Tab Group: Move the selected tab page to the tab group above or to the left of the current tab group. Only available when there is a tab group above or to the left of the current tab group.
Close All But This: Close all tab pages and floating Edit Panes, except for the selected tab page.
Close All Except Active Device: Close all tab pages and floating Edit Panes for other devices, leaving those for the selected device.
Close All: Close all tab pages and floating Edit Panes, including the selected tab page.
Open Next Tab: Open the next Edit Pane tab. Only available when the Edit Pane has more than one tab open.
Open Previous Tab: Open the previous Edit Pane tab. Only available when the Edit Pane has more than one tab open.
Contents Description
eV+ Guides: Access the eV+ Language Reference Guide (Cat. No. I605) or eV+3 Keyword Reference Manual (Cat. No. I652).
NOTE: Float, Dock and Close functions are also accessible with the Options
menu.
Additional Information: You can hide or show the menu bar, toolbar, and status
bar. Right-click the title bar in the main window and select or deselect the item to
show or hide.
Multiview Explorer
The Multiview Explorer pane provides access to all ACE project data. It is separated into two
sections: Configurations and Setup, and Programming.
Click the icons ( or ) in front of each item in the tree to expand or collapse the tree.
An ACE project can include SmartController devices or Application Manager devices (or both).
The Device List contains all devices registered in the project. Use the Device List to select the
active device and access the objects associated with that device. The active device and asso-
ciated toolbar and menu items can automatically change based on the Edit Pane object that is
currently selected.
Right-click the icon ( or ) to add, rename, and delete a device. You can also select Switch
Device Control Display to switch from a dropdown view (default) to a list view.
Color Codes
You can display marks in five colors on the categories and members of the Multiview
Explorer. These colors can be used as filter conditions in the Filter Pane that is described later
in this section.
You can define how you want to use these marks. For example, you may want to indicate asso-
ciated objects within a device. The following example uses color codes to indicate which vis-
ion tools are used in each Vision Sequence.
Error Icons
Error icons in the Multiview Explorer indicate an object that is improperly configured or a pro-
gram that is not executable.
Filter Pane
The Filter Pane allows you to search for color codes and for items with an error icon. The res-
ults are displayed in a list. Click the Filter Pane Bar to display the Filter Pane. The Filter Pane
is hidden automatically if the mouse is not placed in it for more than five seconds at a time.
Automatically hiding the Filter Pane can be canceled by clicking the pin icon ( ).
You can search for only items with a specific color code or items with error icons to display a
list with only items that meet the search condition. Click any item in the search result to dis-
play the item in the Edit Pane.
Add Shortcut of Selected Item: Add a shortcut of the item currently selected in the Multiview Explorer.
Status Bar
The status bar provides additional information about the project state and the user access
level.
Edit Pane
The Edit Pane is used to display and edit the data for any of the items. Double-click an item in
the Multiview Explorer to display details of the selected item in the Edit Pane.
The Edit Pane is displayed as tab pages, and you can switch the tab page to display in the
Edit Pane by clicking the corresponding tab. You can undock the Edit Pane from the main win-
dow.
You can use an Option setting to change the number of tab pages in the Edit Pane that can be
displayed at a time. The default setting is 10 and the maximum display number is 20. Refer to
Window on page 143 for details on the settings.
You can close the Edit Pane from the pop-up menu that appears when you right-click one of
the tabs. The following options are available.
Close: Select this command to close the currently selected edit window.
Close All But This: Select this command to close all tab pages and floating edit windows, except for the selected tab page.
Close All Except Active Device: Select this command to close all tab pages and floating edit windows for other devices, leaving those for the active device open.
Close All: Select this command to close all tab pages and floating edit windows, including the selected tab page.
Tab Groups
Tab groups can be created to organize similar items in the Edit Pane. By creating tab groups,
you can manage more than one tab page within the Edit Pane.
To create a tab group, right-click any of the tabs to display the pop-up menu.
New Horizontal Tab Group: Select this command to divide the Edit Pane into upper and lower sections and move the selected tab page to the newly created tab group. You can display the pop-up menu when the current tab group has more than one tab page.
New Vertical Tab Group: Select this command to divide the Edit Pane into left and right sections and move the selected tab page to the newly created tab group. You can display the pop-up menu when the current tab group has more than one tab page.
Move to Next Tab Group: Select this command to move the selected tab page to the tab group below or to the right of the current tab group. You can display the pop-up menu when there is a tab group below or to the right of the current tab group.
Move to Previous Tab Group: Select this command to move the selected tab page to the tab group above or to the left of the current tab group. You can display the pop-up menu when there is a tab group above or to the left of the current tab group.
Additional Information: You can also move tab pages between tab groups by
dragging the mouse.
Multifunction Tabs
The Multifunction Tabs area contains several objects that are described in this chapter.
Special Tools
l Toolbox (refer to Toolbox on page 106 for more information)
l 3D Visualizer (refer to 3D Visualizer on page 107 for more information)
l V+ Jog Control (refer to V+ Jog Control on page 117 for more information)
l Task Status Control (refer to Task Status Control on page 122 for more information)
l Vision Window (refer to Vision Window on page 133 for more information)
Multiple objects can be open in the Special Tools area. Use the tab selections to switch between
active objects. You can also dock these objects in other areas of the software by dragging and
dropping them to different locations.
For the device selected in the Multiview Explorer, you can display a thumbnail index of the
windows that were previously displayed in the Edit Pane, and you can select a window from
the thumbnail index to display it again.
Select Recently Closed Windows from the View Menu. The windows previously displayed in
the Edit Pane are displayed in the thumbnail index of recently closed windows. Double-click
a window to display it again.
The thumbnails of up to the 20 most recently closed windows are displayed in the order they
were closed (the upper left is the most recent). Select the window to display with the mouse or
the Arrow Keys. You can select more than one window if you click the windows while you
hold down the Shift or Ctrl Key or if you press the Arrow Keys while you hold down the Shift
or Ctrl Key.
You can delete the history of recently closed windows from a project. Select Clear Recently
Closed Windows History from the View Menu, and then click the Yes button in the con-
firmation dialog box that is displayed. All of the histories are deleted.
You can restore the window layout to the ACE software default layout. Select Reset Window
Layout from the View Menu.
System Monitor
The System Monitor can be used to perform real-time monitoring of robot hardware per-
formance and Process Manager objects (when present in the Application Manager device). The
data can be used to identify inefficient part allocation, for example.
It is important to recognize that Process Manager statistics reflect the defined processes and
represent exactly what is happening in the Process Manager. Although it may appear that the
statistics are not accurate, the data needs to be interpreted appropriately for each system.
Consider a system containing one belt camera and three robots. If a part identified by the cam-
era is allocated to robot one, not picked, then allocated to robot two, not picked, then allocated
to robot three and picked, there would be two parts not processed because the instance was
allocated and not picked by the first two robots. To understand the parts not processed count
for this system, you should examine the Parts Not Processed for Robot 3 (the most down-
stream robot).
If a system requires customized statistics processing, this can be achieved using a C# program
and the API provided in the ACE 4.0 Reference Guide for StatisticsBins.
NOTE: You must be connected to a physical controller when using the System
Monitor (Emulation Mode is not supported).
Parameter Name: Description
Amplifier bus voltage (V): The current amplifier bus voltage for the robot. This should operate within the specified min/max warning limits (yellow bars), and never reach the min/max error limits (red bars). If the value drops below the range minimum, the motion is too hard or the AC input voltage is too low; if the value exceeds the range maximum, the motion is too hard or the AC input voltage is too high. Lowering the motion speed (more than the acceleration) can help correct these issues.
AC input (V): The current AC input voltage (220 VAC) for the robot. This should operate within the specified min/max warning limits (yellow bars), and never reach the min/max error limits (red bars). Running outside or close to the limits may create envelope errors.
DC input (V): The current DC input voltage (24 VDC) for the robot. This should operate within the specified min/max warning limits (yellow bars), and never reach the min/max error limits (red bars).
Base board temperature (°C): The current temperature (°C) for the amp-in-base processor board. This should operate within the specified min/max warning limits (yellow bars), and never reach the min/max error limits (red bars).
Encoder temperature (°C): The current encoder temperature (°C) for the selected motor. This should operate within the specified min/max warning limits (yellow bars), and never reach the min/max error limits (red bars).
Amplifier temperature (°C): The current temperature (°C) for the motor amplifier. This should operate within the specified min/max warning limits (yellow bars), and never reach the min/max error limits (red bars).
Duty cycle (% limit): The current duty cycle value, as a percentage, for the selected motor. This should operate below the specified max warning limit (yellow bar), and never reach the max error limit (red bar).
Harmonic Drive usage (%): The current usage of the Harmonic Drive, as a percentage of design life, for the selected motor. This should operate below the specified max warning limit (yellow bar), and never reach the max error limit (red bar). If the value is less than 100%, the maximum life of the Harmonic Drive will be extended; if the value exceeds 100%, the maximum life of the Harmonic Drive will be diminished.
Peak torque (% max torque): The peak torque, as a percentage of maximum torque, for the selected motor. If this is frequently exceeded, consider reducing acceleration, deceleration, or speed parameters, or changing the s-curve profile, to reduce the peak torque required for the motion. This should operate below the specified max warning limit (yellow bar), and never reach the max error limit (red bar).
Peak velocity (RPM): The peak velocity, in rotations per minute, for the selected motor. This should operate below the specified max warning limit (yellow bar), and never reach the max error limit (red bar).
Peak position error (% soft envelope error): The peak position error, as a percentage of the soft envelope error, for the selected motor. This should operate below the specified max warning limit (yellow bar), and never reach the max error limit (red bar).
Parameter Description
Instance Count: The number of instances that have been available since the last restart or reset.
Instantaneous Instances: The instantaneous instances since the last restart or reset. This is calculated over the update period selected in the System Diagnostics settings. If the graph is updated every 500 ms, it will show the instantaneous instances per minute.
Parameter Description
Latch Faults The number of latch faults since the last restart or reset.
Parameter Description
Idle Time (%): The average idle time percentage of the total run time since the last restart or reset. ("Idle" is when the Process Manager is waiting on part or part target instances to process.)
Processing Time (%): The average processing time percentage of the total run time since the last restart or reset. ("Processing" is when the Process Manager is actively processing part or part target instances.)
Average Total Time (ms): For the Process Manager group only. The average total time for all robots. Other fields, such as Parts Processed/Not Processed and Targets Processed/Not Processed, show a summation for all robots.
Parts Per Minute: The average number of parts per minute. When viewing the Process Manager group, this is a summation for all robots.
Targets Per Minute: The number of targets per minute. When viewing the Process Manager group, this is a summation for all robots.
Parts Not Processed: The number of parts not processed since the last restart or reset. When viewing the Process Manager group, this is a summation for all robots.
Targets Not Processed: The number of targets not processed since the last restart or reset. When viewing the Process Manager group, this is a summation for all robots.
Parts Processed: The number of parts processed since the last restart or reset. When viewing the Process Manager group, this is a summation for all robots.
Targets Processed: The number of targets processed since the last restart or reset. When viewing the Process Manager group, this is a summation for all robots.
Parameter Description
Average Allocation Time (ms): The average time it takes to run the allocation algorithm for allocating all parts and part targets.
You can clear all Process Manager General Statistics. To clear statistics, right-click the Process
Manager item and select Clear Statistics.
The Smart Project Search allows you to quickly find items in the Multiview Explorer. For
example, if there are a large number of programs or sections present in the project, you can
quickly find the desired program or section with the Smart Project Search. The search is per-
formed only within the project active device.
The following procedure describes the use of the Smart Project Search function.
1. Select Smart Project Search from the View Menu. The Smart Project Search Dialog Box
is displayed.
2. Enter part of the name of the item in the Search Box. The items in the Multiview
Explorer or menus that contain the entered text string are displayed on the left side of
the search result list.
The search results are displayed from the top in the order they are displayed in the Mul-
tiview Explorer. On the right side of the search result list, the level that is one higher
than the item that resulted from the search is displayed.
3. Double-click the desired item in the search result list or press the Enter key after making
a selection. The Search Dialog Box is closed and the selected item is displayed in the
Edit Pane. To close the Search Dialog Box without selecting an item, click in a pane
other than the Search Dialog Box or press the Esc key.
Additional Information: You can enter English characters into the Search Box to
search for item names that contain words that start with the entered characters
in capitals. Example: If you enter is0 in the Search Box, an item named Ini-
tialSetting0 is displayed in the search result list.
5.2 Toolbox
The Toolbox shows the program instructions (Keywords) that you can use to edit and create
V+ programs. When a V+ program is open in the Edit Pane, the Toolbox will display a list of
available V+ keywords. If you drag a keyword into the V+ program Edit Pane and drop it into
the program, syntax will auto-generate for the chosen V+ keyword, as shown in the figure
below.
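For example, dragging the MOVE keyword into a program might produce a template line like the following (a sketch; the exact text the software generates may differ):

    MOVE location    ; auto-generated template; replace "location" with a real location variable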
A search tool within the Toolbox can be used to find items with a text string.
You can expand or collapse all categories by right-clicking an item and selecting Expand All or
Collapse All from the pop-up menu.
5.3 3D Visualizer
The 3D Visualizer allows you to see simulated and real 3D motion for robots and other
Process Manager items such as belts and parts. The 3D Visualizer window displays the fol-
lowing information.
l Graphical representation of all belts, cameras, robots, fixed objects, and obstacles in the
workspace
l Robot belt windows and allocation limits
l Robot work envelopes
l Teach points and other part locations
l Belt lane widths
l Location of latch sensors
l Field of view for cameras
l Process Manager objects and Part / Part Target instances
Creating a 3D Workspace
To use the 3D Visualizer, an accurate 3D workspace must be created to represent all objects of
concern in the application. Use the following procedure to create a 3D workspace.
1. Configure robots by selecting robot types and offset positions. Refer to Robot Objects on
page 208 for more information.
2. Add static objects such as boxes, cylinders, and imported CAD shapes if necessary.
3. Add cameras if necessary.
4. Add feeders if necessary.
5. Add Process Manager items if necessary (part buffer, part target, part, belt, pallet, etc.).
3D Visualizer Window
Access the 3D Visualizer Window with the main toolbar icon ( ) or by selecting 3D Visu-
alizer from the View Menu.
The 3D Visualizer Window has the following control icons.
NOTE: The use of the term "camera" in this section refers to the perspective at
which the 3D Visualizer is viewed and not a camera configured in the Applic-
ation Manager (unless specified).
The rectangles marked above (2, refer to Table 5-19) and (3, refer to Table 5-20) show where
the control icons appear. The square marked (1) shows the display cube, which can be used to
quickly change the user perspective of the visualization area to a different orientation by
clicking on its various faces.
Item Description
Edit: Opens the Edition Control Manager. Refer to Edition Control Manager and Figure 5-23 for more information.
Displays a coordinate icon for relocating the X, Y, and Z location of the selected object. Hover over an axis of the coordinate icon to see the cursor change, and then left-click and drag the object to a new position. Hovering the cursor in the white circle portion of the coordinate icon will allow free movement of the object when clicked and dragged. The new position will be reflected in the Offset from the Parent value of that object. Note that a different number of axis rings will be available depending on the selected object.
Jog mode: Click to toggle the jog mode between world, tool, and joint. Jog icons will appear. Use the jog icons to manually control the selected robot's position.
Show Obstacles: Toggles visibility of obstacles that are present. Refer to Robot Objects for more information. This is only available for robot objects.
Toggles visibility of the selected robot work envelope. This is only available for robot objects.
Teach Point: Adds a new location variable at the robot's current position. This is only available for robot objects.
Toggles visibility of mount points on the selected object. This is only available for Box, Cylinder, and CAD Data objects.
Delete: Delete a selected position from the variables list. This is only available when a point is selected.
Jog To: Jog to the selected position. This is only available when a point is selected.
Approach Height: Toggle the approach height for the Jog To command between 0, 10, and 25 mm for the selected position. This is only available when a point is selected.
Jog Speed: Toggle the jog speed between 100%, 50%, 25%, and 5% for the selected position. This is only available when a point is selected.
Split Window: Click this icon to open a new dialog window that allows splitting the visualizer window into multiple views. This allows the user to view the workspace from multiple positions at the same time.
Selection:
Translate (pan): Move the camera position without rotation. Click with the third mouse button and drag as an alternative.
Rotate: Rotate the camera position without translation. Click the right mouse button and drag as an alternative. Click the arrow beneath this icon to choose between Tumbler and Turntable rotation.
Zoom: Move the camera position closer or farther from the workspace. Use the mouse scroll wheel as an alternative.
Scene Graph: The Visibility tab allows the user to set the visibility of each object shown in the Visualizer. The Collision Filter tab configures collision sources. Refer to Graphical Collision Detection for more information.
Measurement Ruler:
Record: Begin a recording for playback in ACE Offline Visualizer.
The Edition Control Manager is opened by clicking the Edit icon. It allows you to set certain
editing parameters for the 3D Visualizer Window. The parameters can be set in millimeters or
degrees.
Icon Description
Changes the fields to X, Y, and Z, as shown by item (1) of the above figure. Changing the values of these fields will translate the object in the Visualizer.
Changes the fields to Yaw, Pitch, and Roll, as shown by item (2) of the above figure. Changing the values of these fields will rotate the object in the Visualizer.
Edit Size: Changes the fields to DX, DY, and DZ for a selected Box object or changes them to Radius and Height for a selected Cylinder object, as shown by item (3) of the above figure. Changing these fields will change the object size. This is only available for Box and Cylinder objects.
Toggles between the Object Coordinates and World Coordinates modes. When Object Coordinates is selected, the icon will appear as it does to the left, and any adjustments will be with respect to the object's parent. When World Coordinates is selected, the icon will change ( ) and any adjustments will ignore parent constraints. This icon is hidden when Edit Size is selected.
NOTE: This should not be confused with obstacles that are configured in robot
objects. Refer to Obstacles on page 226 for more information.
Use the Scene Graph icon ( ) in the 3D Visualizer to configure collisions. This will open the
Scene Graph dialog box. Refer to 3D Visualizer on page 107 for more information.
Use the Collisions tab in the Scene Graph to add or remove object sources for collision defin-
ition(s). The Add button ( ) will create a new collision definition between two objects (Source
1 and Source 2). Use the Delete button ( ) to remove a selected collision definition. You can
also enable or disable a collision definition with the Enable check box.
If a collision is detected between two objects, they will become shaded in the 3D Visualizer as
shown below.
NOTE: Past information that is displayed in the logs may not reflect the current
status of the controller or programs.
Click the message type buttons to display messages by type. Click the Error button ( ) to
display all error message types. Click the Warning button ( ) to display all warning
message types. Click the Information button ( ) to display all information message
types.
Sorting Messages
Click the Time Stamp column heading to sort messages by the event occurrence time. Click the
Message column heading to sort messages by the message.
Additional Information: Many items logged to the Event Log are also logged to
the Windows Application Event Log. Use the Windows Application Event Log to
retrieve past ACE events as an alternative.
NOTE: The V+ Jog Control works for both emulated and physical robots.
Many jog commands and settings are disabled while a robot is under program
control. Refer to Current Position Section on page 118 for more information.
Robot Section
The robot section provides the following functionality.
Robot
Select a robot in the ACE project to control with the V+ Jog Control. All robots in the project
will be accessible in the dropdown selection area.
Robot Power
The Power button toggles the robot high power ON and OFF and calibrates the selected robot.
Robot power must be ON to allow jog control.
Additional Information: Turning the robot high power ON for the first time after
system power up executes the CALIBRATE() keyword to load joint calibration off-
sets into memory. This does not perform a full robot hardware calibration.
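For reference, the equivalent operations can be entered manually in the Monitor Window. This is a minimal sketch; refer to the eV+ manuals for exact syntax and behavior:

    ENABLE POWER    ; request robot high power
    CALIBRATE       ; load the joint calibration offsets into memory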
Align
The Align button aligns the robot tool Z-axis with the nearest World axis (six-axis robots only).
Tool
This displays the current tool transformation applied to the robot. The dropdown can be used
to clear the tool transformation or choose a tool transformation provided by an IO EndEffector
tip.
NOTE: Jogging is only possible when Ready is displayed in the status area.
After all jog control settings are made, use the move axis buttons to move the selected axis in
the positive ( ) or negative ( ) direction.
l World - Enables the jog control to move the robot in the selected direction: X, Y, or Z
axes of the world frame of reference or rotated around the RX, RY, or theta axes in the
world coordinate system.
l Joint - Enables the jog control to move the selected robot joint.
l Tool - Enables the jog control to move the robot in the selected direction: X, Y, or Z axes
of the tool frame of reference or rotated around the RX, RY, or theta axes in the tool
coordinate system.
Location Section
The Location Section is used to view, teach, remove, and jog to robot locations. Refer to V+
Variables on page 166 for more information.
NOTE: A robot location must exist and be selected to use this function.
Click and hold the Jog Appro button ( ) to make the robot jog to the specified location
at the specified approach height.
IMPORTANT: Using the Jog Appro button will cause straight-line motion
to occur. Monitor the robot during this movement to avoid collisions with
obstacles between the starting location and the destination location.
Before teaching a location, move the robot to the desired location (either by jogging or power-
ing OFF and physically moving the robot) and then click the Here button ( ). Clicking the
Here button will put the robot's current axis positions into the display field for use in the fol-
lowing teach procedure.
Additional Information: Refer to V+ Variable Editor on page 167 for other robot
position teach functions.
1. Click the Plus button ( ). This opens the Add a new variable Dialog Box.
2. Select a variable type (location or precision point), provide a new name, and verify the
value. If the robot dropdown selection is changed, click the Record button ( ) to
update the value for that robot accordingly.
3. Choose the display mode, category, and provide an optional description.
4. Make a selection for array, starting and ending index when applicable.
5. Click the Accept button to create the new robot location variable. A brief usage sketch follows below.
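Once created, the location can be referenced by name in a V+ program. The following minimal sketch assumes a location variable named pick.loc was taught with the procedure above:

.PROGRAM robot.pick()
    ; approach 25 mm above the taught location, then move straight to it
    APPRO pick.loc, 25
    MOVES pick.loc
    ; retract 25 mm along the tool Z axis
    DEPART 25
.END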
To remove an existing robot location, select the location from the dropdown menu and then
click the Delete button ( ). A confirmation dialog box will appear. Click Yes to remove the
location.
l Online / Offline
l Robot High Power Control
l Monitor Speed Setting
l Open the Monitor Window
l Task Manager
l IO Watcher
l V+ File Browser
l Virtual Front Panel
l Profiler
l Application Manager Control (when applicable)
Online/Offline
Robot Power Control
Use the ON/OFF button ( / ) to control the robot power state. This button is only available
while online with the controller. The Robot Power button in the Virtual Front Panel has the
same function.
Use the Monitor Window button ( ) to open the Monitor Window in the Edit Pane. Refer to
Monitor Window on page 206 for more information.
Task Manager
The Task Manager displays and controls activity on user tasks 0 to 27. The ACE software uses
two tasks plus one task per robot, counting down from 27. The remaining tasks (0 to 21, or
more if fewer than four robots) are available for the execution of user-created V+ programs.
This includes programs started by a Process Manager as shown below.
NOTE: If a program is paused, the task can be expanded to view the current
program stack.
Pause Task: The selected task execution is paused at the next command.
Retry Step: If the selected task was paused or stopped due to an error, this button attempts to re-execute the current step and continue execution.
Proceed Task: If the selected task was paused or stopped due to an error, this button attempts to proceed execution of the task. This button is dimmed if there is no program for the given task or no task selected.
Copy Stack to Windows Clipboard: Copies the contents of the selected task stack to the Windows clipboard.
The flag icon next to each task in the list area represents the task state. Use the following table
to understand different task states.
Task is executing.
A program's task flag icon will be yellow if you drag it onto a task to
prime it.
Other Functions
Right-clicking a task in the task list will open a menu with other functions not described
above. Use the following descriptions to understand the other functions.
Execute Using...: Prompts for the name of the program to execute on the selected task.
Debug Using...: Prompts for the name of a program to debug, primes the specified program, and opens the V+ program in the Edit Pane.
Reset and Debug: Resets the program and opens the V+ program in the Edit Pane for the selected task.
Kill All Tasks: Clears all tasks that do not have running programs.
IO Watcher
Select IO Watcher to display an interface for monitoring the state of digital I/O signals (inputs,
outputs, soft signals, and robot signals) on the connected controller. Digital output signals and
soft signals can be turned ON and OFF manually by clicking on the signal button ( / ).
NOTE: When Emulation Mode is enabled, digital input signals can be manip-
ulated.
V+ File Browser
The V+ File Browser allows you to browse files and folders on the controller. This is only pos-
sible while online with the controller.
The V+ File Browser works with the Windows clipboard to enable easy transferring of files to
and from controllers.
Use the icons in the V+ File Browser toolbar to perform common file browser functions such as
navigating, creating new folders, renaming, deleting, cutting, copying, and pasting. Right-clicking
a file or folder will also display a menu with common file browser functions and other items
described below.
View File
Selecting View File will open the file in a quick-view window without the need for transferring
the file to a PC. This is available for program, variable, and text files.
Load
Selecting Load will transfer the contents of the selected file from disk to system memory.
Mode Selection
Switches between Manual ( ) and Automatic ( ) mode. In Automatic mode, executing pro-
grams control the robot, and the robot can run at full speed. In Manual mode, the system lim-
its robot speed and torque so that an operator can safely work in the cell. Manual mode
initiates software restrictions on robot speed, commanding no more than 250 mm/sec. There is
no high speed mode in manual mode. Refer to the robot's user's guide for more information.
Robot Power
The Robot Power button enables power to the robot motors. This button has an indicator to dis-
play the following robot power states.
E-Stop
E-Stop behavior can be tested and monitored with the E-Stop button on the Virtual Front
Panel. Use the ESTOP Channel area to simulate various E-Stop system functions.
NOTE: Refer to the eV+ Language Reference Guide (Cat. No. I605) or eV+3 Keyword
Reference Manual (Cat. No. I652) and the robot user's guide for more information
about E-Stop channel functions.
Profiler
The Profiler is available for each controller in the project. It is used to provide a graphical dis-
play of controller processor usage for diagnostic purposes. There are two tabs in the Profiler
view as described below.
The Current Values tab shows a list of tasks and their respective processor usage. Use the Dis-
play and Timing menu items to adjust the listed items and the update rate.
NOTE: Selecting All User Tasks displays all the user tasks available to your sys-
tem. If All User Tasks is not selected, only tasks with a program on the execution
stack are displayed.
History Tab
Viewing the History tab displays a line plot history of CPU load over time for each task.
Control Buttons
Selecting a particular Application Manager object in the list enables or disables buttons on the
display according to the allowed recovery for the current state.
The buttons on the Task Status Control have the following functions.
Zoom Level
Use the icons on the left of the window to adjust the zoom level. These icons are described
below.
Fit to Screen
Click the fit to screen icon ( ) to change the zoom level so the entire image fits in the Vision
Window.
Zoom to 100%
Click the zoom to 100% icon ( ) to change the zoom level to the default size of the acquired
image.
Zoom in/out
Click the zoom in ( ) and zoom out ( ) icons to manually adjust the zoom level.
Calibration Scale
The left and top axis values represent the calibration scale setting in mm/pixel that is present
in the Virtual Camera settings. Refer to Virtual Camera Calibration on page 281 for more
information.
Execution Time
The Vision Sequence execution time is displayed in the lower left area of the Vision Window.
Use the dropdown arrow to select from all available Image Sources defined in the system.
Cursor Information
Moving the cursor in the field of view portion of the Vision Window will reveal additional
information about the inspection results.
The X-Y coordinates are displayed at the bottom of the Vision Window for the current cursor
position. Color / gray scale values are also displayed when applicable.
Hover over the coordinate icon in the field of view to display inspection results as shown
below.
5.11 V+ Watch
V+ Watch is used to monitor specified variables while developing or debugging V+ programs.
Variables can be added to the V+ Watch window in several different ways as described below.
The V+ Watch window contents will be saved with the project. Refer to V+ Variables on page
166 for more information.
NOTE: You can search and replace text strings in modules and programs.
If more than one device is registered in the project, the target of a search and
replace is the currently active device only. Be sure to check the active device in
the device list for the project before you perform a search and replace.
Search what: Use this field to enter a string for a search. You can also select from previous search strings with the drop-down arrow.
Replace with: Enter a string to replace the search string with. You can also select from previous replace-with strings with the drop-down arrow.
Look in: Specify a range to search. You can select from the following.
l Programming: all of the programs of the project's currently active device.
l Checked elements: the items selected in the Select search and replace scope Dialog Box are searched.
l Current view: the active V+ Editor tab is searched.
When selecting Checked elements, use the more options button ( ) to display the Select search and replace scope Dialog Box.
Use: Specify if you want to use wildcard characters. If you choose to use wildcard characters, you can click the ( ) button.
Search Options
Item Description
Match whole word: When selected, only exact string matches are returned.
Button Functions
Use the following table to understand the functions of the buttons in the Search and Replace
tool.
Item Description
Search All: Searches all items and displays the results in the Search and Replace Results Tab Page displayed in the Multifunction Tabs area.
Replace All: Replaces all items and displays the results in the Search and Replace Results Tab Page.
NOTE: Passwords are not mandatory for new and existing users. Refer to Pass-
words on page 141 for more information.
Default Access
Designate the default access level when opening a project or signing out.
Name
This is the User Name when signing in. The Name field can be edited for each user. Default
names of Operator, Technician, and Engineer are provided for all new ACE projects.
Access
Designate the access level for each Name in the User Manager list (Operator, Technician, or
Engineer).
Add Button
Add a new name with a specific access level to the user list. A new user will be added and
sign in will be possible with the User Name and Password (if specified).
Remove Button
Remove a selected name from the User Manager list. The user will be deleted and sign in will
not be possible with the previously stored credentials.
Change Password Button
Create a unique password for each user listed. Clicking this button will open the
Change Password Dialog Box. Refer to Passwords on page 141 for more information.
User access levels allow software feature and function access control for designated users for
an ACE project. Signing in to each user level can be password protected (optional). This allows
you to create a list of users and assign a specific access level to each one.
All features and functions in the ACE software are accessible to a user with the Engineer
access level.
NOTE: A user signed in with Technician or Operator access cannot edit users
or access the User Manager. Only a user with Engineer access can edit users
with the User Manager.
Technician Accessible Items
The following features and functions are accessible to a user with the Technician access level.
The Technician access level cannot add any new items to the ACE project and can only view
existing items.
Operator Accessible Items
The notes function is editable for a user with the Operator access level only when a note was
already created by another user with Technician or Engineer access. All other features and
functions are either read-only or inaccessible.
Passwords
Passwords for each User Name can be specified, but are not mandatory. Passwords can
include symbols, letters, and numbers and are case sensitive.
If a password has not been specified for a user, the following password omissions are present.
l The Old Password field can be left blank when changing a password.
l The Password field can be left blank when signing in as this user.
5.14 Project Options
Project options can be accessed in the Tools menu item. The following project option settings are
detailed below.
l Color Theme
l Project Settings
l Window
l V+ programs
l UPS
To access the project options, click the Tools menu bar item and then select Option... The
Option window will open.
Color Theme
To access the color theme setting, select Color Theme from Option Dialog Box. The color theme
settings will be displayed.
Choose Gray (default) or White as the color theme for the ACE software. Changing this setting
will require a software restart to see the result. Both color themes are shown below.
Project Settings
The default project author name can be specified in the Project Settings area. The following
settings are available.
Selecting this will use the name specified when saving or creating a new ACE project. A restart
of the ACE software is required to see this change.
If this is not selected, the author field for saving or creating a new ACE project will be blank.
Clicking the Reset to default settings button will set the project's author name to the Windows
user name. A restart of the ACE software is required to see this change.
Window
To access the Window settings, select Window from Option Dialog Box. The Window settings
will be displayed.
Enter the maximum number of tab pages to view in the Edit Pane. With this setting, you can
set the number of tab pages in the Edit Pane and floating edit windows that can be displayed
at a time.
If the When the maximum number of tab pages in the Edit Pane is exceeded, close the tab
page that has not become active for the longest time check box is selected, the Edit Pane tab
that was opened first will automatically close when you attempt to open a new tab that
exceeds the set amount. If this is not selected, a warning is displayed and new Edit Pane tabs
cannot be opened until old tabs are closed.
V+ Programs
To access the V+ programs settings, select V+ Programs from Option Dialog Box. The
V+ program settings will be displayed.
Text can be specified as the header for all new V+ programs in an ACE project when this is
enabled. If no text is specified, a new V+ program will consist of only .PROGRAM and
.END statements.
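For illustration, if header text such as the comment block below were specified (the content shown is hypothetical), every new V+ program would be created with it between the .PROGRAM and .END statements:

.PROGRAM example()
    ; ==========================================
    ; Project:     (project name)
    ; Author:      (author name)
    ; Description: (what this program does)
    ; ==========================================
.END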
Other Parameters
If the Allow automatic inteliprompt pop-ups in the V+ editor option is selected, typing com-
mands in the V+ editor will trigger suggestions with matching text, as shown below.
If the Use Custom Font option is selected, you can specify a system font for the V+ program
editor. Click the Select button and choose from the Font Setting Dialog Box and then click OK.
The new font setting will be used in the V+ program editor.
l Save the ACE project.
l Disable high power on all connected controllers.
l Close the ACE software.
l Shut down the operating system.
The PowerAct Pro software is included with the AC UPS. The Slave Unit needs to be installed
on the computer running the ACE software.
Hardware Configuration
The basic hardware needed to use the UPS within a robot installation is: an Omron AC UPS
BU5002RWL, an Omron Web Card SC20G2, an Omron S8VKS DC power supply, an Omron
NY Industrial PC, and an industrial robot.
3 Omron Industrial PC
4 Industrial Robot
5 24VDC
6 Ethernet Cable
7 240VAC
Number Description
2 Optional slot
3 RS-232C Port
4 Cooling fan
8 200-240VAC output terminal
9 AC input cable
Use the following steps to make the equipment connections for the full system.
UPS Software Configuration
When the SC20G2 is installed in the UPS, it operates as the "Master Unit" in the PowerAct Pro
connection. The connected devices operate as "Slave Units" to the UPS. The Slave Units are
then configured to initialize .exe or .bat files on the Industrial PC. When these files are executed,
the UPS is able to initiate a safe stop to the program upon low battery.
Use the following steps to set up PowerAct Pro.
Install PowerAct Pro onto the Industrial PC
Connect to the AC UPS at the IP address 10.151.24.79 in a browser to open the UPS Monitor.
ACE reacts to the UPS shutdown through a .bat file within the Slave Unit Client. You need to
create the .bat file and save it with a unique name in the directory C:\Program Files\Omron\ACE
4.2\UPS. Copy and paste the following line, including the quotation marks, into the .bat file:
    "C:\Program Files\Omron\ACE 4.2\UPS\SignalUpsEvent\SignalUpsEvent.exe"
After the .bat file is created and saved, open the PowerAct Pro on the Industrial PC and go to
the Configuration Setting, shown below.
To access the UPS settings in the ACE software, open ACE and select Tools > Options from the
toolbar.
When the window opens, select UPS from Option Dialog Box. The UPS settings will be dis-
played. Select Enable UPS monitor and respond to UPS events, Save Project File and Disable
High Power on all connected controllers. Click OK. The definition of the individual
UPS options follows the figure.
This option enables or disables the UPS functionality in the ACE software. The functions are
described below.
Shutdown Options
The shutdown options area provides selections for the UPS event functions described below.
When selected, ACE will respond to the Shutdown command from the UPS by saving the pro-
ject file before closing.
ACE will trigger a soft signal on all connected controllers. This allows you to set up a REACTI
reaction to create a specific error response for a power loss, as shown in the sketch below.
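The following minimal V+ sketch shows the idea. The soft signal number (2010) and the program names are assumptions for illustration only; use the signal configured for your system and refer to the eV+ manuals for the exact REACTI syntax:

.PROGRAM ups_watch()
    ; run ups_stop if the assumed soft signal 2010 turns ON,
    ; interrupting robot motion immediately
    REACTI 2010, ups_stop
    ; ... normal application code runs here ...
.END

.PROGRAM ups_stop()
    ; reaction program: stop the current robot motion
    BRAKE
.END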
Triggers the C# program named “UPS Shutdown”. This allows you to have a C# script that
will execute on a power event. The C# program must be named exactly as “UPS Shutdown”.
If selected, this specifies the amount of time in seconds that ACE will wait before saving and
closing. You should verify that this time is long enough for a REACTI or C# program to com-
plete before ACE closes.
If this option is enabled, it disables High Power on all controllers connected to the instance of
ACE. Omron recommends using a C# or REACTI program to bring the robot to a safe stop instead.
This will stop the robots as quickly as possible immediately after the shutdown signal is
received.
UPS Shutdown Sequence (Typical)
When a UPS is connected and configured, a UPS event can trigger the following sequence to
execute a controlled shutdown of the system.
S8BA-Series UPS Configuration
The script command function is used to specify an executable to run when a UPS event occurs.
The SignalUpsEvent.exe executable runs when a UPS event occurs, and this will trigger the
actions configured in the ACE software.
There are two methods of programming ACE applications. The method you choose depends
on the type of application you want to develop. The available methods are summarized below
and described in more detail in this chapter.
The V+ Editor described in this chapter is the main tool used to develop programs for the
SmartController.
The C# Editor described in this chapter is the main tool used to develop C# programs for the
Application Manager.
Each V+ Module needs to have a designated Module Program. A Module Program is the
primary V+ program that ACE software uses for Module naming and other internal functions.
When adding a new V+ Module, a new V+ Module Program will be inserted with the default
name of program0. To change the name of the V+ Module, edit the name of the V+ Module
Program. To designate a different V+ program as the Module Program, right-click a selected
program and click Set as Module Program.
All V+ programs are created inside V+ Modules. V+ programs will be displayed under a V+
Module as shown below.
Item Description
1 V+ Modules
2 V+ programs
3 V+ Module Programs
Rename V+ programs by right-clicking the program and selecting Rename. Renaming the V+
Module Program will also rename the parent V+ Module. The V+ Module name will always
become the name of the V+ program that is designated as the Module Program.
Use the following rules when naming a V+ Module or Program.
l Names must begin with a letter and can be followed by any sequence of letters, num-
bers, periods, and underscores.
all CALL commands used throughout the V+ Module's programs. Clicking on an item in the
list will display its use in the program.
Pull V+ Memory
Use the Pull V+ memory function to ensure the ACE software has the full contents of a con-
troller available to the user interface. This function will upload V+ Modules, V+ programs, and
Variables from the controller to the ACE project.
To upload V+ programs and Modules, right-click any V+ Module and select Pull V+ Memory.
This will upload all controller settings, V+ Modules, and Programs to the ACE project.
V+ Editor
The ACE V+ Editor is an online, interactive editor. The editor performs syntax checking and
formatting while you are programming.
V+ Editor Functions
Copy/Paste
Allows you to copy and paste ASCII text from other programs or other sources.
Inteliprompt Popups
When you type the significant letters of a V+ keyword and then press the Enter key, the editor
attempts to complete the keyword.
Tool Tip Syntax
If you hover the mouse cursor over a keyword, the syntax and a short description for that
keyword are displayed in a tool tip.
As each line of a program is entered, it is processed by the ACE software V+ program parser.
This processing performs the formatting and checking, reports back the resulting format, and
the editor is then updated to reflect this. If there is a problem with the entry, the text is underlined
in red and the Build Tab Page displays the current error. You can hover over the text to
display a status message or click the Show List button in the Status Bar at the bottom of the
V+ Editor to access the build errors.
Drag and Drop
The editor supports drag and drop of ASCII text from another Windows program onto an open
area of the editor. You can also use this feature to move lines of code within the editor. To
move text, highlight the text and then drag it to the new position.
Colorization
The code lines and text are colored to help visually distinguish between actual code, com-
ments, and errors.
Code lines can have the following colors.
Right-clicking in the V+ Editor opens a menu with several functions. In addition to basic editing
commands (cut, copy, paste, delete, and select all), the following functions are also available.
Step Into/Over
Use the following Step Into and Step Over functions during troubleshooting and diagnostics.
l Step Into: Single step operation that will enter a program call and single-step through
the program. After the last line of the program has been executed, it returns to the step
following the program call.
l Step Over: Single step operation that skips a program call. When the execution pointer
is positioned at a CALL or CALLS keyword, pressing F10 will cause the entire subroutine
to be executed and execution pauses at the step following the program call.
Comments
Add comments to a program for explanation and annotation. Commented program lines are
not executed.
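In V+, a comment begins with a semicolon and continues to the end of the line. For example:

    ; initialize the counter used by the main loop
    count = 0 ; an inline comment after a statement is also allowed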
Add bookmarks to a program for quick access to specific program lines. Bookmarks do not
affect program execution. A bookmarked program line will be indicated with the following
symbol.
The V+ Editor can be split into two screens horizontally to allow viewing of multiple sections
of code in the same editor window.
To split the editor window view, drag the handle down as shown below.
Status Bar
The area below the V+ Editor window has the following functions.
1 Show List Button Open the Build Tab Page and display any errors that are
present for any programs in the project.
2 Error List Displays the total count of errors present in the program.
Toolbar Items
Step into (V+ program debugging): Single step operation that will enter a program call and single-step through the program. After the last line of the program has been executed, it returns to the step following the program call.
Step over (V+ program debugging): Single step operation that skips a program call. When the execution pointer is positioned at a CALL or CALLS keyword, pressing F10 will cause the entire program to be executed and execution pauses at the step following the program call.
Jump to current line and step (V+ program debugging): Jump to the currently selected line and then single-step through the program.
Proceed execution (V+ program debugging): Continues execution of the task until the next breakpoint or the program terminates.
Display an object member list: Based on the cursor location, this displays a list of the available object members.
Display parameter info: While the cursor is on a command, this will display that command's parameter info.
Display quick info: Displays the tooltip info for the cursor location (the same as the tooltip when the cursor hovers).
V+ Program Debugging
The V+ Editor provides debugging tools when an online connection to the controller is present.
This allows interactive program stepping while simultaneously displaying code variables and
states. If a program in one module steps into a program in another module, the V+ Editor will
automatically step you into that program. Breakpoints in the code can be added or removed
while debugging. You can have as many active debugging sessions as there are tasks.
NOTE: The ACE project must be online to use the V+ Editor program debugging
functions.
Use one of the following methods to access the V+ Editor debugging functions for a program.
l Right-click the program in the Multiview Explorer and select Debug on Task. Select a
task and the program will run with the V+ debugging functions activated.
l Right-click a stopped task in the Task Manager and select Reset and Debug. The pro-
gram will reset and run with the V+ debugging functions activated.
A green arrow ( ) indicates the current program line where the stepping function occurs.
6.2 V+ Variables
V+ Variables are values used for storing various types of data. They can be created and edited
with the V+ Variable Editor for use in V+ programming. Use the information below to under-
stand how to create and use V+ Variables with the ACE software.
NOTE: C# Variable objects can also be created and edited, but these are used
with C# programs only. Refer to Application Manager Programming on page
173 for more information.
V+ Variable Types
The different V+ Variable types are described below.
NOTE: All V+ Variables that are created with the V+ Variable Editor become
global variable types accessible in all V+ programs. Refer to the eV+ Language
Reference Guide (Cat. No. I605) or eV+3 Keyword Reference Manual (Cat. No I652) for
more information about Auto and Local variables.
Real
A real V+ Variable type is a floating-point data type used to represent a real number.
String
A string V+ Variable type is used to store a sequence of characters (text).
Precision Point
A precision point variable type allows you to define a location by specifying a value for each
robot joint. These joint values are absolute and cannot be made relative to other locations or
coordinate frames. Precision point locations are useful for jointed-arm applications and with
applications that require full rotation of robot arms with joint 4. They are also helpful where
joint orientation is critical to the application or when you want to move an individual joint.
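As a brief illustration (the variable name is hypothetical), a precision point can be recorded from the current robot position and reused later:

    ; Record the current joint values as a precision point, then return to them
    HERE #park.pos
    MOVE #park.pos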
Location
A location variable type specifies the position and orientation of the robot tool tip in three-
dimensional space. You can define robot locations using Cartesian coordinates (trans-
formations).
A transformation is a set of six components that identifies a location in Cartesian space and the orientation of the robot tool flange (X, Y, Z, Yaw, Pitch, Roll). A transformation can also represent the location of an arbitrary local reference frame (also known as a frame of reference).
Refer to Defining a Location on page 167 for more information.
The coordinate offsets are from the origin of the World coordinate system which is located at
the base of the robot by default.
Defining a Location
V+ Variable location values can be manually entered or acquired from the current robot pos-
ition. Refer to V+ Jog Control on page 117 for more information.
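As a hedged sketch (names and coordinate values are hypothetical), a location can also be assigned in a V+ program with the TRANS function and then used as a motion destination:

    ; Define a transformation: X, Y, Z (mm), then Yaw, Pitch, Roll (degrees)
    SET pick.loc = TRANS(300, 150, 25, 0, 180, 90)
    MOVE pick.loc     ; joint-interpolated motion to the location
    MOVES pick.loc    ; straight-line motion to the same location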
V+ Variable Names
V+ Variables must have a unique name. Use the name to reference V+ Variables in
V+ programs.
Use the following rules when naming the different V+ Variable types (examples follow the list).
• Real and Location variables must begin with a letter and can be followed by any sequence of letters, numbers, periods, and underscores.
• String variables must begin with the $ symbol and can be followed by any sequence of letters, numbers, periods, and underscores.
• Precision Point variables must begin with the # symbol and can be followed by any sequence of letters, numbers, periods, and underscores.
• There is a 15 character limit for variable names.
• Variable names are not case sensitive and always default to lower-case letters.
• Because ACE automatically creates default system variable names, avoid creating variable names that begin with two or three letters followed by a period to prevent coincidental variable name duplications. For example, sv.error, tsk.idx, and tp.pos1 are variable names that should be avoided. This restriction applies when creating variables in the V+ Editor and within V+ programs.
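The following hypothetical names conform to these rules:

    count = 0                 ; Real variable
    $part.name = "JAR"        ; String variable (begins with $)
    SET place.loc = NULL      ; Location variable
    HERE #safe.pos            ; Precision Point variable (begins with #)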
V+ Variable Properties
Variables contain properties that define the behavior and use within the ACE software. Use the
V+ Variable property types described below as new variables are created and edited.
Name
The name property defines the unique name used to reference the variable in V+ programs.
Type
The type property defines what type of data is stored in the variable.
Initial Value
The initial value property is used to set the variable's default value before any program or other function alters it.
Category
A category can be defined for each variable to help classify and organize variables.
V+ Variables can be saved by category as well. Refer to Save Configuration on page 204 for
more information.
Description
A description can be added to each variable for annotation and clarification purposes.
Robot
A robot in the ACE project can be assigned to a location or precision point variable. This property does not apply to other variable types. When assigning a robot's current position to the location or precision point variable, a robot must be selected. The robot must be assigned for display purposes and for inclusion in location lists provided by the V+ Jog Control.
Display Mode
The following Display Mode options affect how a location or precision point variable appears
in the 3D Visualizer. This property does not apply to other variable types.
Arrays
A variable can be created as a 1D, 2D, or 3D array. Use the array selection options in the Add
a new variable Dialog Box to establish the dimensions of the array. Refer to Creating a New
V+ Variable on page 170 for more information.
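As a minimal sketch (the array name and values are hypothetical), a 1D Real array can be filled element by element in a V+ program:

    ; Fill a ten-element Real array with Z heights spaced 10 mm apart
    FOR idx = 0 TO 9
        pallet.z[idx] = 25+idx*10
    END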
NOTE: V+ Variables can also be created from a V+ program. Refer to the eV+
Language Reference Guide (Cat. No. I605) or eV+3 Keyword Reference Manual (Cat. No
I652) for more information.
New Precision Point and Location variable types can also be created from the
V+ Jog Control. Refer to V+ Jog Control on page 117 for more information.
Additional Information: Once a variable is created, the type and array size can-
not be changed.
Creating a New V+ Variable
1. Click the Add button ( ). This will open the Add a new variable Dialog Box.
To delete an existing V+ Variable, either select the variable row and click the Delete button (
) or right-click the variable row and select Delete. A Confirmation Dialog Box will be shown.
NOTE: Multiple variables can be selected by using the Shift or Control buttons
and selecting multiple rows.
Variables can be cut, copied, pasted, and deleted with these options. Pasting a variable will
append "_0" to the name of the new variable.
Show References
Selecting Show References will display a list of program references where the variable is used.
If a variable is used in programs, the program name and line numbers are provided to locate
exactly where the variable is used. Clicking a line number will display the program reference
as shown below.
Add to Watch
Selecting Add to Watch will place the variable in the V+ Watch window. Refer to V+ Watch on
page 136 for more information.
Record Here
Selecting Record Here will acquire the robot's current position values and place them in the ini-
tial value field. This option is only available if a robot has been assigned in the variable. This
option is only available for precision point and location variable types.
Select in Virtual Pendant
Choosing Select in Virtual Pendant will open the V+ Jog Control with the variable pre-selected in the Location area. This is a convenient method for jogging and teaching a robot position for a variable. This is only available if a robot has been assigned in the variable, and only for precision point and location variable types. Refer to V+ Jog Control on page 117 for more information.
Focus in 3D Visualization
Selecting Focus in 3D Visualization will open the 3D Visualizer and center the view at the
coordinates. This option is only available for precision point and location variable types with
an assigned robot.
C# Program Names
Rename C# programs by right-clicking the program and selecting Rename. Program names can
include letters, numbers, and special characters.
C# Program Editor
The C# program editor can be used to create and edit programs for performing various tasks
and automation within the ACE project.
To access the C# Editor, double-click a program in the Multiview Explorer. The C# Editor will
open in the Edit Pane.
C# Editor Details
Application Manager objects can be referenced in the C# program with a drag and drop action.
For example, drag and drop a virtual camera object to place a reference to that object in the
C# program.
Drag and Drop Controller Settings into a C# program to create a reference to a specific
SmartController device. You can access V+ variables, digital I/O, and more from this reference.
Refer to the ACE Reference Guide for more information.
Copy/Paste
Allows you to copy and paste ASCII text from other programs or other sources.
Auto-complete
When you type the significant letters of a statement and then press the Enter key, the editor
attempts to complete the keyword.
If you hover the mouse cursor over a statement, the syntax and a short description for that statement are displayed in a tool tip.
As each line of a program is entered, it is processed by ACE. This processing performs formatting and checking, reports back the resulting format, and updates the editor to reflect this. If there is a problem with the entry, an error appears in the Error List tab below the Edit Pane. Refer to Error List on page 177 for more information.
The editor supports drag and drop of ASCII text from another Windows program onto an open
area of the editor. You can also use this feature to move lines of code within the editor. To
move text, highlight the text and then drag it to the new position.
Colorization
The code lines and text are colored to help visually distinguish between actual code, com-
ments, and errors. Code lines can have the following colors.
Right-clicking in the C# Editor opens a menu with several functions. In addition to basic editing commands (cut, copy, paste, delete, and select all), the following functions are also available.
Comments
Add comments to a program for explanation and annotation. Commented text is not executed.
Bookmarks
Add bookmarks to a program for quick access to specific program lines. Bookmarks do not
affect program execution. A bookmarked program line will be indicated with the following
symbol.
Error List
Program compile errors are shown in the Error List Tab below the C# program Edit Pane. When errors are present in the list, execution is not possible. Use the Error List Line, Char (character), and Description information to resolve program compile errors.
Trace Messages
Trace Messages are shown in the Trace Message Tab below the C# program Edit Pane. This
displays messages created from any Trace.WriteLine() call in the program and can be useful
when troubleshooting or debugging a program.
Toolbar Items
Display an object member list: Based on the cursor location, this displays a list of the available object members.
Display parameter info: While the cursor is on a command, this will display that command's parameter info.
Display quick info: Displays the tooltip info for the cursor location (the same as the tooltip when the cursor hovers).
Display Word Completion: Based on the cursor location, this displays a list of the available object members.
NOTE: V+ Variables objects can also be created and edited, but these are used
with V+ programs only. Refer to SmartController Programming on page 155 for
more information.
C# Variable objects can be cut, copied, pasted, and deleted with these options. Pasting will pre-
pend "Copy_1_of_" to the name of the new C# Variable object.
Add to Watch
Selecting Add to Watch will place the C# Variable object in the V+ Watch window. Refer to V+
Watch on page 136 for more information.
The following items are available for the configuration and setup of a SmartController.
Use the Multiview Explorer to access these items as shown below.
NOTE: These items do not apply when viewing an application manager device
in the ACE project.
• Motor Tuning
NOTE: Many Controller Settings items are not available while offline. See the fol-
lowing sections for more information.
Configuration
The Configuration area is found in the main view of the Controller Settings editor area. It con-
tains the following read-only items for informational purposes.
Software Revision
The Software Revision field displays the V+ software revision number installed on the
SmartController. This is displayed only while online with the controller.
System Options
The System Options field displays the configuration of the V+ system in a two-word, hexa-
decimal format. Refer to the eV+ Language Reference Guide (Cat. No. I605) or eV+3 Keyword Refer-
ence Manual (Cat. No I652) for more information. This is displayed only while online with the
controller.
Host Address
The Host Address field displays the IP address of the connected PC (host). This is displayed
only while online with the controller.
Parameters
The Parameters area is found in the main view of the Controller Settings editor area. It con-
tains the following items that are described below.
Automatically Set Time
Select Automatically Set Time to automatically synchronize the controller to the PC time.
Dry Run
Select Dry Run to test a project without robot motion. This will cause robots to ignore motion
commands. This is selectable only while online with the controller.
Enabled Encoder Count
Enabled Encoder Count displays the number of external encoders used by the system. The number displayed is related to the number of Belt Encoder Channels configured (refer to Configure Belt Encoder Latches on page 195 for more information).
IP Address
The IP Address field displays the current IP address of the controller. When offline, it is pos-
sible to change or select a controller's IP address by clicking the More button as shown below.
Clicking this button will display the Select a Controller Address Dialog Box.
NOTE: If the desired controller is not available or the changed IP address will
not allow an online connection, refer to Detect and Configure Controller on page
78 for more information.
Control
The Control Menu displays controller-specific setting items described below. These are select-
able only while online with the controller.
Set Time
In order to use error log time stamps for troubleshooting, the correct time needs to be set in the
controller(s).
Selecting Set Time will manually synchronize the controller time to match the time detected on
the connected PC. The following confirmation dialog box will be displayed.
NOTE: The Set Time function will not set the time for a robot node. Use the
FireWire configuration to set robot node times. Refer to Configure FireWire Nodes
on page 194 for more information.
Reboot V+
Selecting Reboot V+ will reboot the controller. The following confirmation dialog box will be
displayed.
Servo Reset
Selecting Servo Reset will reset communication with the robot servo nodes.
Zero Memory
Selecting Zero Memory will clear all user programs and variables from the workspace and the controller. The following confirmation dialog box will be displayed.
Save Startup Specifications
Selecting Save Startup Specifications will save all robot and motor specifications to the V+ boot disk.
NOTE: This is the same function that is present in the Robot Object - Configure menu. Refer to Configure on page 222 for more information.
Save Memory to a File
Selecting Save Memory to a File will save all V+ programs to a file on the PC. A Save As window will be displayed.
View eV+ Log
Selecting View eV+ Log will display a list of eV+ event messages for the controller in the View eV+ Log window. Refer to eV+ Log on page 592 for more information.
Upgrade
Use the Upgrade function to upgrade V+ or the FireWire firmware. Clicking the Upgrade but-
ton displays the Upgrade Options Dialog Box. This is selectable only while online with the con-
troller.
When updates are required, it is recommended to upgrade both V+ and the FireWire Node
Firmware to ensure the entire system has the latest firmware version(s). Use the following pro-
cedure to upgrade V+ and the FireWire firmware.
1. Select Upgrade V+ in the Upgrade Options Dialog Box and then click the Finish button.
2. Provide the V+ directory where the new V+ distribution is located. Provide a file path to
the \SYSTEM folder located in the distribution file root directory.
3. Select a backup directory to use during the upgrade process.
4. Select the Upgrade Controller FPGA option.
5. After the V+ directory, backup directory, and FPGA controller upgrade items fields and
selections are made, click the Go button to proceed with the upgrade process. The pro-
cess can take several minutes to complete. A progress bar is displayed during the
upgrade procedure. After the V+ upgrade process is complete, proceed to the next step.
6. Select the Upgrade FireWire Firmware option from the Upgrade Options Dialog Box.
7. Select all detected robot amplifier nodes. This process is used to check and update servo
code firmware on robot nodes. If a newer version is not provided, an upgrade will not
occur. Alternatively, select only the node you wish to upgrade / downgrade with a spe-
cific file from the PC.
8. Select the Update FPGA and Update Servo options.
9. After all fields and selections are made, click the Go button to proceed with the update process. The process can take several minutes to complete. A progress bar is displayed during the upgrade procedure.
10. After updating the servo / FPGA, you will be prompted to cycle 24 VDC power to the updated nodes. The update process is complete when the power is cycled.
Upgrade V+
The Upgrade V+ function is used to upgrade the V+ operating system and optionally, to
upgrade an external controller FPGA (firmware). Selecting Upgrade V+ from the Upgrade
Options Dialog Box and then clicking the Finish button will display the following window.
Refer to Basic Upgrade Procedure on page 190 for detailed steps.
Upgrade FireWire Firmware
The Upgrade FireWire Firmware function is used to upgrade a FireWire node's firmware. Selecting Upgrade FireWire Firmware from the Upgrade Options Dialog Box and then clicking the Finish button will display the following window. Refer to Basic Upgrade Procedure on page 190 for detailed steps.
NOTE: The firmware upgrade process will require several minutes for each
node in the distributed network.
3 Update Servo: Enable to update the servo binary image stored in the selected node's flash memory. Not performing this relies on a successful dynamic download from the controller to the robot during boot up (see note below). Performing the servo update eliminates any potential risk of dynamic download failure during boot up.
4 Firmware Directory: Select the PC directory where the new firmware files are located. Provide a file path to the \SYSTEM folder of the distribution file directory.
5 Copy FIRMWARE files from Controller: Use firmware files in the controller for FireWire node updates.
Configure
Clicking the Configure button displays the Configure Options Dialog Box. Several con-
figuration items are selectable as described below. The Configure button is available only
while online with the controller.
Configure Licenses
Selecting Configure Licenses allows you to view, install, or remove V+ licenses. Each license is
uniquely paired to a corresponding Security ID and every device has a unique Security ID
number associated with the memory card.
To install or remove a license, first select the device from the left side list, enter the license key in the Password field, and then click either the Install or Remove button.
Configure Robots
Select Configure Robots to add or remove robots from the system. Selecting Configure Robots
and clicking the Finish button will display the Configure Robots Dialog Box. Refer to
Configure Robots on page 211 for more information.
Configure FireWire Nodes
Selecting Configure FireWire Nodes allows you to create a valid FireWire network configuration to enable communication with devices such as robots, belt encoders, digital I/O, and force sensors. Selecting Configure FireWire Nodes and clicking the Finish button will open the dialog box shown below.
Right-click an item in the list to display information and settings for each FireWire node. The
following information and settings are available.
Configure I/O
Selecting Configure I/O allows you to map I/O to numerical signal numbers for use in pro-
gramming. The Configure I/O function provides DeviceNet scanning, configuration, and map-
ping.
Configure Belt Encoder Latches
Selecting Configure Belt Encoder Latches allows you to view and change latch signals for each encoder channel of the controller. A Belt Encoder Latch refers to the capture of a conveyor belt encoder position value when an input signal (latch) changes state. Once configured, the system will monitor and record the latch signal number and corresponding encoder position (one encoder value) in a first-in-first-out buffer for later access and use in a program. This area can also be used to check current position and velocity for each encoder channel.
NOTE: Belt encoder latches are not required for conveyor belt tracking but are
recommended for positional accuracy. Pack Manager requires use of latch sig-
nals for all belt tracking applications for positional accuracy, as well as cross-con-
troller latch handling.
Encoders should be uniquely numbered in the FireWire configuration before con-
figuring encoder channels/latches.
Selecting Configure Belt Encoder Latches and clicking the Finish button will display the dialog
box shown below.
2 Position (ct): Shows the current position in counts of the corresponding belt encoder.
3 Velocity (ct/s): Shows the current velocity in counts per second for the corresponding belt encoder.
4 Latch Signals: Shows the latch signal assignments for the corresponding belt encoder. One or more signals can be configured as described below.
ON:
OFF:
Configure Robot Position Latches
Selecting Configure Robot Position Latches allows you to view and change signals associated with capturing robot position latches. A Robot Position Latch refers to the capture of a robot's position when an input signal (latch) changes state. A robot's position is captured as a precision point, which is a data structure containing one value for each joint of the robot. Once configured, the system will monitor and record the latch signal number and corresponding robot joint positions in a first-in-first-out buffer for later access and use in a program. Robot position latches are most frequently used for applications that require vision-guided position refinement without stopping robot motion.
Selecting Configure Robot Position Latches and clicking the Finish button will display the dia-
log box shown below.
2 Add Latches Button: Click the Latches button to display a list of available input signals to trigger the Robot Position Latch.
3 Negative Edge Selection: Select to invert the signal. Selecting this will make the latch occur when the input signal changes from ON to OFF. If this is not selected, the latch occurs when the input signal changes from OFF to ON.
Configure System Settings
Selecting Configure System Settings and clicking the Finish button will display the dialog box
shown below.
Trajectory Period
The trajectory period defines the interval between set points calculated during robot motion trajectory generation. The default setting of 16 ms is adequate for most applications because the servo loop receives set points and controls motor position based on micro-interpolation at 8 kHz (125 μs), or 128 servo updates per 16 ms trajectory point. In some situations it is helpful to decrease the trajectory period, which results in more frequent set point generation and decreases path following error. However, reducing this value can have a noticeable impact on the quantity of calculations and processor usage, especially for embedded controllers.
IP Address
View and change the IP address of the connected controller.
Subnet Mask
View and change the subnet mask of the connected controller.
Gateway
View and change the default gateway of the connected controller.
Controller Name
View and change the name of the connected controller.
Safety Timeout
View and change the Safety Timeout setting for the connected controller. This controls the behavior of the robot high power request sent by a connected PC, executing program, Front Panel High Power Enable button, or optional pendant. If a safety timeout is enabled, the robot power button on the front panel will blink for a specified number of seconds after a high power request is made. If the robot power button is not pressed within the specified time, a safety timeout occurs and robot power is not applied.
IMPORTANT: Ensure adequate safety measures are taken if the safety timeout is
disabled.
The default setting for the Safety Timeout is 10 seconds. Use the following settings to adjust the
Safety Timeout.
• 0 seconds: disables the high power request secondary action and robot power is applied immediately.
• 1 to 60 seconds: enables the high power request secondary action and robot power is applied if the Robot Power button is pressed on the front panel within the specified amount of time.
Auto Start
Enable or disable an Auto Start program. If Auto Start is enabled, as the controller completes the boot process it will load and execute D:\AUTO.V2. An AUTO.V2 program has no user-defined error handling and should be kept simple: load modules or variable files and execute a program that handles system startup. Refer to Save Configuration on page 204 for more information.
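As a hedged example of that recommended simplicity (the file and program names are hypothetical), an AUTO.V2 monitor-command file might only load saved modules and variables and then start one task:

    ; AUTO.V2: executed by the controller at boot when Auto Start is enabled
    LOAD D:\MAIN.PG          ; load the saved program module
    LOAD D:\GLOBALS.VAR      ; load the saved global variables
    EXECUTE 1 main.startup() ; run the startup program on task 1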
NOTE: Although the Program System Startup function and the Auto Start func-
tion share some similarities, the startup file for the Program System Startup func-
tion is stored on the PC whereas the Auto Start function files are stored on the
controller. Refer to Program System Startup on page 303 for more information.
ePLC Task Number
View and change the Task Number used for an ePLC configuration. This defines the first task that ePLC programs will start executing on when Auto Start is enabled (eplc_autostart). ePLC task numbers must be between 7 and 15 and will occupy at least 5 tasks when Auto Start is enabled.
ePLC Input Mapping
View and change the Input Mapping used for an ePLC configuration. This defines the first input signal in the range of signals mapped by ePLC programs.
ePLC Output Mapping
View and change the Output Mapping for an ePLC configuration. This defines the first output signal in the range of signals mapped by ePLC programs.
ePLC Auto Start
Select the Auto Start behavior when using the ePLC function. When Auto Start is enabled, D:\ADEPT\UTIL\ePLC3.v2 will be loaded and executed on the designated task number and initialized with the Input and Output Mapping signal numbers.
Backup/Restore
Use the Backup/Restore function to backup, restore, or compare the V+ operating system files
and directories. Clicking the Backup/Restore button displays the Backup/Restore Options Dia-
log Box. This is selectable only while online with the controller.
Backup V+
The Backup V+ function allows you to back up the V+ operating system files and directories to
the connected PC. Selecting Backup V+ and then clicking the Finish button will display the fol-
lowing window.
Choose a PC directory for the V+ operating system files and directories to be stored and then
click the Backup button to proceed.
NOTE: The backup process takes several minutes. If there are files present in the
selected PC directory, you will be prompted to remove them before the backup
process can begin.
Restore V+
The Restore V+ function allows you to restore the V+ operating system files and directories
from a directory on the connected PC. Selecting Restore V+ and then clicking the Finish button
will display the following window.
Choose a PC directory where the desired V+ system zip file is located and then click
the Restore button to proceed.
Compare V+ to Backup
The Compare V+ to Backup function allows you to compare V+ operating system files and dir-
ectories stored on the connected PC with those in the connected controller. Selecting Compare
V+ to Backup and then clicking the Finish button will display the following window.
Select a PC directory where the V+ system zip file for comparison is located and then click the
Compare button to proceed.
Encoders
Use the Encoders function to configure and check operating status of devices connected to the
controller's encoder channels. Clicking the Encoders button displays the Encoders Dialog Box.
The Encoders Dialog Box displays all encoder channels that have been configured. The Con-
figure button is reserved for future use.
If an encoder channel that is present does not appear, check the FireWire Configuration and
Belt Encoder Latch configuration.
NOTE: To view and configure all encoder channels, ensure that all encoder
channels are added in the Configure Belt Encoder Latches Dialog Box even if
latches are unnecessary. Refer to Configure Belt Encoder Latches on page 195 for
more information.
NOTE: When Emulation Mode is enabled, saving data to the virtual controller
may result in a loss of data.
This area also provides the capability to generate an AUTO.V2 program that can load mod-
ules, variable file contents, and execute a specific program on a specific task. The Program to
Execute and Task Number fields are used when generating the contents of the AUTO.V2 file
stored at D:\ on the controller.
If the Save Programs and Variables on Controller option is enabled, the generated AUTO.V2 will also include program instructions for loading the saved modules. In order to execute this program at bootup, you must enable Auto Start (refer to Auto Start on page 200 for more information).
The Save to Controller and Generate Auto Module buttons function only when an online con-
nection is present.
Use the information below to understand the functions of the Save Configuration area.
NOTE: Many Save Configuration items are not available while offline.
• All variables will be saved in a single file named GLOBALS.VAR if Save Variables by Category is not selected. If Save Variables by Category is selected, refer to the following section.
• Modules will be saved in a file called {module name}.pg.
Use the Select button ( ) to choose an alternate storage location for the Variable and Module files on the controller.
If the Save Variables by Category option is selected, individual files will be saved based on the
category name. Variables without a category designation will be saved to a file named
OTHERS.VAR. Refer to Category on page 169 for more information about variable categories.
NOTE: The selection for Save Programs and Variables on Controller must be
enabled to save belt calibrations to the controller.
NOTE: Moving the scroll bar away from the bottom will pause auto-scroll.
Return the scroll bar to the bottom to resume auto-scroll.
Up and down arrow keys can recall recent Monitor Commands for convenient
re-execution. Closing the Monitor Window will clear the recent Monitor Com-
mand history.
Spaces before and after parameter separators are optional. Monitor Command Keyword para-
meters can be optional or required. If a parameter is required, a value must be entered on the
command line or the Monitor Command will not execute correctly. If a parameter is optional,
its value can be omitted and the system will substitute a default value. For example, the com-
mand STATUS has one optional parameter. If the command STATUS is entered, status inform-
ation for all the used system tasks will be displayed. If the command STATUS 1 is entered,
status information will be displayed only for system task number 1.
If one or more parameters follow an omitted parameter, the parameter separator(s) must be
typed. If all the parameters following an omitted parameter are optional, and those parameters
are omitted, the separators do not need to be typed.
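To illustrate the separator rules with a hypothetical Monitor Command CMD that takes three optional parameters:

    CMD , 5, 10    ; first parameter omitted: its separator must still be typed
    CMD 1          ; trailing optional parameters omitted: no separators needed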
Additional Information: You cannot abort any ACE tasks from the Monitor Window in an ACE project. You can abort an ACE task from the Monitor Window that is provided at the Connect to Device Dialog Box. Refer to Going Online with a SmartController on page 75 for more information.
NOTE: Many Robot Object configuration and control settings are not available
while offline. To access all available items, open the Robot Object while online.
Refer to the following section for more details.
When Emulation Mode is enabled, certain Robot object settings are not available.
Robot Objects must be added to a new ACE project while online with a controller. Right-click-
ing the Robots item in the Multiview Explorer will display the option to Configure Robots.
Selecting this option will display the Configure Robots Dialog Box shown below. Use this
method to add a new robot to an ACE project.
NOTE: When using Emulation Mode, you must manually add robots to the pro-
ject. When connecting to a physical controller, robots will be present in the pro-
ject after the connection is established.
Depending upon the configuration and robot workspace, a configuration singularity may develop. A configuration singularity is a location in the robot workspace where two or more joints no longer independently control the position and orientation of the tool. As a robot executes a straight-line motion that passes close to a configuration singularity, the robot joint speeds necessary to achieve that motion become excessive. These joint speeds can be limited through use of the Singularity Deviation setting.
When a robot such as a Viper is selected, the Singularity Deviation can be set to prevent the condition from occurring, thereby preventing excessive joint motor speeds. The addition and testing of the Singularity Deviation offset should be done in JOG mode.
After the robot is selected, click Set Singularity Deviation to enable that option. You can then set the value for the deviation between 0 (no deviation) and 100. The deviation is not linear and applies to the relative joint positions and motor speeds as the robot moves through the workspace.
Configure Robots
Use the Configure Robots Dialog Box to select robots to install on the controller. In the Con-
figure Robots Dialog Box, you can select specific robots to manually install on the controller or
you can select Auto Configuration. After selecting robots to install on the controller, platform
selection (if applicable) and robot positioning are required to complete the Configure Robots
process.
The default robot configuration uses Auto Configuration for identifying the physically con-
nected robot type(s). The identification process occurs when V+ accesses a connected robot's
RSC (Robot Signature Card) data during boot up.
Auto Configuration is convenient when a controller is used with a varying number of con-
nected robots (such as demonstrations or other applications needing interchangeable robots).
The V+ log may contain extra errors or warnings if there are fewer robots connected than
present in the controller configuration.
If more robots are physically connected than listed in the controller configuration, they will
exist on the FireWire network but need configuration before ACE and V+ programs can fully
support them.
IO EndEffectors
IO EndEffectors, also called tools, end effectors, and grippers can be used in picking, placing,
dispensing, or other functions common to robotic applications. End effectors are often driven
by digital outputs to grip, extend, retract, or dispense. Inputs are commonly used on end effect-
ors to detect parts presence and extend / retract status.
Most robots have a single end effector, but some may have multiple end effectors to pick and
place multiple objects at the same time. ACE supports these variations with the IO EndEffector
settings described below.
By default, one end effector object is automatically added with a Robot Object
(IO EndEffector0). This is an I/O driven gripper with single or multiple end effector tips. It uses
digital input and output signals to control each tip. Additional IO EndEffector objects may be
added as needed for your application, such as calibration pointers used to perform calibration
for systems with multi-tip process grippers.
NOTE: IO EndEffector objects represent grippers that are wired to and controlled
by the SmartController device. IO EndEffectors are defined in the SmartController
device because they are associated with specific robots, but full functionality of
the IO EndEffector is utilized by a Process Manager configured in an Application
Manager device.
To add additional IO EndEffector objects, right-click a Robot Object in the Multiview Explorer,
select Add and then click IO EndEffector.
IO EndEffector Settings
ACE software provides an interface for setting various gripper-related parameters, such as end
effector tips, minimum grip time, and maximum grip time. To open the IO EndEffector settings
area, double-click the IO EndEffector object in the Multiview Explorer.
IO EndEffector Settings are described below.
IMPORTANT: IO EndEffector settings are not saved on the controller with the
Save Configuration function.
1 Add/Delete Buttons: Used to add a new gripper tip or to delete an existing gripper tip.
2 Outputs / Inputs: Defines the Open / Close or Extend / Retract activation signals and Opened / Closed or Extended / Retracted status signals for the selected tip. You can define multiple signals by entering the signal numbers separated by a space (for example: 97 98). If the output signals are not valid signals, they are ignored.
ON
OFF
multiple signals (not all signals are ON or OFF)
Input signals are ignored in Emulation Mode, but soft signals will be monitored.
Grip Dwell Time / Release Dwell Time: The time (in seconds) to wait when gripping or releasing before continuing operation. This should be the actuation time for the gripper.
Open/Close Tab: Click this tab to access the Open/Close signal settings for the selected tip.
Extend/Retract Tab: Click this tab to access the Extend/Retract signal settings for the selected tip.
Open Tip/Close Tip buttons: Use the Open Tip and Close Tip buttons to send an open tip ( ) or close tip ( ) signal to the selected tip.
Extend Tip/Retract Tip buttons: Use the Extend Tip and Retract Tip buttons to send an extend tip ( ) or retract tip ( ) signal to the selected tip and wait the specified time to dwell. After the specified dwell time, the input signals are checked.
Tip Offset: Shows the current offset for the selected tip in the 3D Visualizer. To change the offset, click the Teach Tool Tip Wizard button ( ), which starts the Tool Offset Wizard.
Collision Program: Select a program that is invoked when the 3D Visualizer detects a collision. Refer to Graphical Collision Detection on page 114 for more information.
Tip Radius: The radius of the tip when drawing the IO EndEffector in the 3D Visualizer. Refer to 3D Visualization on page 243 for more information.
4 Payload: The weight of the gripper plus the weight of the heaviest part the robot will carry (in kg). Setting the correct payload will lower tracking error and settling time, making the robot motion more precise and faster at the end of the motion.
Max Grip Time: Maximum time allowed for the grip to be verified and part presence to be satisfied, in seconds.
5 Use Tip Selection: When enabled, the tip selection program will be called when a tip is selected.
Tip Selection Task: The task used when executing the tip selection program.
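For background, the Outputs / Inputs above are ordinary V+ digital I/O signals. A hedged sketch of what a tip actuation amounts to at the signal level (the signal numbers are hypothetical):

    ; Close the gripper tip: assert output 97, clear output 98
    SIGNAL 97, -98
    ; Wait for the "closed" status input before continuing
    WAIT SIG(1097)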
3D Visualization
The 3D Visualization setting area is found in the main view of the Robot Object editor area. It
is used to adjust 3D visualization settings for the Robot Object. It contains the following items.
Visible
The Visible check box indicates whether the Robot Object should be rendered in
3D visualization.
Collision Program
The Collision Program field allows selection of a C# program that is invoked when the 3D
Visualizer detects a collision between two objects. Use the Selection button ( ) to select a pro-
gram.
Configuration
The Configuration setting area is found in the main view of the Robot Object editor area. It is
used to adjust robot configuration settings for the Robot Object. It contains the following items.
NOTE: Configuration setting changes take effect immediately, but will not be retained after a power cycle or reboot unless Save Startup Specifications is selected from the Configure Menu. Refer to Save Startup Specifications on page 222 for more information.
Control Configuration
The Control Configuration item displays the license status of the Robot Object.
Enabled
The Enabled selection enables or disables control of this robot. This is typically used during
debugging and troubleshooting.
Motion Specifications
The Motion Specifications item can be expanded to display several settings for robot speed
and acceleration as described below.
Item Description
Cartesian Rotation Speed: Specifies the Cartesian rotation speed at SPEED 100 (deg/sec).
Max Percent Speed: Specifies the maximum allowable speed from a SPEED command.
SCALE.ACCEL Upper Limit: Specifies the program speed above which accelerations are saturated when SCALE.ACCEL is enabled.
Timing Specifications
Reserved for future use. Timing Specifications are only available with Expert Access.
Robot Number
The Robot Number field specifies the robot's designated number for the FireWire configuration.
Use the Configure Options - Configure FireWire Nodes to change this value. Refer to Configure
on page 193 for more information.
l If only one robot is present in the system, it must be configured as Robot Number 1.
l If multiple robots are present in the system, they must have unique Robot Numbers.
Joints
The Joints item can be expanded to display settings for range of motion limit, full speed limit,
and full acceleration limit of each robot joint.
Motors
The Motors item can be expanded to display settings for motor gain and nulling tolerances for
each servo motor in the robot.
Item Description
Motor Gains: Only available with Expert Access enabled. Contact your local Omron representative for more information.
Fine Nulling Tolerance: Specifies the tolerance for the number of servo motor encoder feedback counts to consider a move complete when a move is made after specifying FINE tolerance (refer to the eV+ Language Reference Guide (Cat. No. I605) or eV+3 Keyword Reference Manual (Cat. No I652) for more information).
Coarse Nulling Tolerance: Specifies the tolerance for the number of servo motor encoder feedback counts to consider a move complete when a move is made after specifying COARSE tolerance (refer to the eV+ Language Reference Guide (Cat. No. I605) or eV+3 Keyword Reference Manual (Cat. No I652) for more information).
Default Hand Open Signal
The Default Hand Open Signal field specifies the output for the V+ keywords OPEN, OPENI, CLOSE, and CLOSEI. Refer to the eV+ Language Reference Guide (Cat. No. I605) or eV+3 Keyword Reference Manual (Cat. No I652) for more information.
Default Hand Close Signal
The Default Hand Close Signal field specifies the output for the V+ keywords OPEN, OPENI, CLOSE, and CLOSEI. Refer to the eV+ Language Reference Guide (Cat. No. I605) or eV+3 Keyword Reference Manual (Cat. No I652) for more information.
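As a brief, hypothetical usage sketch, these default hand signals are what the hand-control keywords drive:

    MOVE pick.loc
    CLOSEI          ; close the hand immediately and wait before continuing
    MOVE place.loc
    OPENI           ; open the hand immediately at the place position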
Drive Power Indicator Enable
The Drive Power Indicator Enable selection is used to enable or disable the signal to indicate robot power. When enabled, an external indicator can be used to signal when robot power is ON. Refer to the appropriate robot user's guide for more information.
Enable Brake Release Input
The Enable Brake Release Input selection is used to turn ON or OFF the brake release input signal. Selecting this item will allow an external signal to be used for robot brake control.
End-Effector
The End-Effector setting is used to select an IO EndEffector for the Robot Object. This allows a
Process Manager to reference the number of tips available for that robot when defining multi-
pick process, and to control the gripper signals for a specific robot when the Process Manager
is active.
Location
The Location setting area is used to set the workspace coordinates of the Robot Object in
3D Visualization. It contains the following items.
Offset From Parent
The Offset From Parent field specifies a coordinate offset for the Robot Object relative to a par-
ent item for 3D Visualization. This allows relative positioning of objects in workspace coordin-
ates. A robot is typically a parent to other objects. The values are specified as X, Y, Z, yaw,
pitch, and roll.
Parent
The Parent selection specifies the object this robot will be relative to (using the Offset From Par-
ent parameter). Refer to Adding Shapes on page 244 for more information.
Object
The Object Menu displays the Expert Access options described below.
Expert Access
Expert Access grants access to all available parameters and settings. Contact your local Omron
representative for more information.
Configure
The Configure Menu displays the configuration items described below.
Save Startup Specifications
Selecting Save Startup Specifications will save all robot and motor specifications to the V+ boot disk.
NOTE: This is the same function that is present in the Controller Settings - Control menu. Refer to Control on page 187 for more information.
Load Spec File
A Spec file can be used to restore robot and motor specifications from a saved file. Selecting Load Spec File will open the Load Spec File Dialog Box. Choose a location where the saved Spec File is stored and then click the Next button to proceed.
Save Spec File
A Spec file can be saved to store robot and motor specifications. Selecting Save Spec File will
open the Save Spec File Dialog Box. Choose a location on the PC to save the Spec File and then
click the Next button to proceed.
Axes, Options, and Kinematic Parameters
Some robots have a variable number of joints and option bits that control the presence of special features and kinematic parameters used in position calculations. The Axes, Options, and Kinematic Parameters Dialog Box allows you to edit these parameters.
IMPORTANT: Improper editing of robot joints, option bits, and kinematic para-
meters can cause the robot to malfunction or become inoperable. Therefore, edit-
ing must be performed by qualified personnel.
Enabled Axes
The Enabled Axes area is used to enable / disable the joints (axes) of the robot. If the robot does
not have joints that can be enabled / disabled, the Enabled Axes check boxes will be disabled.
Robot Options
The Robot Options area is used to select the robot option bits for your robot. See your robot kin-
ematic module documentation for the robot option bits that apply to your robot. See the table
below for some common option bits.
Item Description
Free mode power OFF: Robot power is turned OFF rather than disabling the individual amplifier.
Execute CALIBRATE command at boot: Calibrate the robot after the V+ operating system boots. This is set by default on all Viper and Cobra s350 robots. This only works if the robot can calibrate with power OFF. It does not work on Cobra robots because they must move joint 4 during calibration.
Check joint interpolated collisions: While moving, check for obstacle collisions even for joint-interpolated moves. This causes slightly more CPU usage if set, because it requires the robot to perform a kinematic solution that is not part of the normal operation.
Z-up during J4 calibration: On Cobra robots, J4 must rotate slightly during calibration. This causes J3 to retract before moving J4.
J6 multi-turn: This bit allows infinite rotation of J6. Note that individual moves must be no more than 360 degrees.
Software motor limits: In robot models with multiple motors coupled to move a single joint, the standard joint motion limits may not be adequate to prevent the motors from hitting physical limits. In such cases, you may use software motor limits to restrict motor motion.
Kinematics
The Kinematics area is used to display the kinematic parameters for your robot.
Obstacles
Select Obstacles to add / edit the location, type, and size of workcell obstacles. This will open
the Edit Obstacles Dialog Box. It contains the following items.
Item Description
Protected Obstacles: Predefined system obstacles that cannot be edited by the user.
Obstacle Type: This drop-down list is used to select the type of obstacle: box, cylinder, sphere, or frustum. These types are also offered as containment obstacles for applications where you want to keep the robot working within a defined area.
Obstacle Center: This text box is used to enter the coordinates of the center of the obstacle.
Obstacle Dimensions: Dimensions such as diameter and height are required depending on the obstacle type chosen.
S-Curve Profiles
The S-Curve Profile Configuration Dialog Box is used to configure the four time values
required to create a new s-curve profile for use in robot motion (also called a trapezoidal accel-
eration curve). Selecting S-Curve Profiles will display the S-Curve Profiles Dialog Box.
Refer to the eV+ Language Reference Guide (Cat. No. I605) or eV+3 Keyword Reference Manual (Cat.
No I652) for more information about using S-Curve Profiles.
S-Curve Profile Considerations
Each of the four time values can be individually specified, and a set of the four values defines a specific acceleration "profile" for use in programming robot motion routines.
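As a hedged sketch (the profile number and percentages are hypothetical; see the eV+ references above for the exact syntax), a configured profile is selected in a V+ program with the ACCEL instruction:

    ; Use S-curve profile 1 at 80% acceleration and deceleration
    ACCEL (1) 80, 80
    MOVES place.loc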
Safety Settings
Safety Settings are used to restore the E-Stop hardware delay and the teach-mode restricted
speed to the factory settings. Selecting Safety Settings will display the dialog box shown below.
It contains the following items.
NOTE: Safety Settings are not available in Emulation mode. This menu item is
only available for robots using eAIB or eMB-40R/60R amplifiers.
Configure Teach Restrict
Selecting Configure Teach Restrict and clicking the Next button will step through the pro-
cedure for setting predetermined speed limits for each robot motor.
The objective of the Teach Restrict feature is to comply with safety regulations which require
the speed to be limited while the robot is in manual mode. This is hardware-based safety func-
tionality to prevent rapid robot motion in manual mode even in the unexpected event of soft-
ware error attempting to drive a robot faster than allowed. While jogging the robot in manual
mode, if any joint exceeds its configured speed limit the system will disable high power.
Verify Teach Restrict Auto
Selecting Verify Teach Restrict Auto and clicking the Next button will step through a procedure to verify that Teach Restrict is operating properly with a series of automatic commanded motions.
Verify Teach Restrict Manual
Selecting Verify Teach Restrict Manual and clicking the Next button will step through a procedure to verify Teach Restrict is operating properly with a series of jogging operations performed by the user with a T20 pendant. This may also be useful for troubleshooting or testing individual joints when the Teach Restrict commissioning process or automatic verification fails.
Configure ESTOP Hardware Delay
Selecting Configure ESTOP Hardware Delay and clicking the Next button will step through the procedure for configuring the delay on the ESTOP timer circuit. The objective of the ESTOP hardware delay feature is to comply with safety regulations which require the robot to have the capability of disabling high power without software intervention in an emergency stop scenario.
Verify ESTOP Hardware Delay
Selecting Verify ESTOP Hardware Delay and clicking the Next button will step through the pro-
cedure to verify that robot high power is disabled without software intervention when an
ESTOP is triggered.
Control
The Control Menu displays Hardware Diagnostics, Data Collection and Motor Tuning items
described below. These items are not available in Emulation Mode or while offline.
Hardware Diagnostics
Hardware Diagnostics are used to check robot motor status. For example, when a robot's seg-
mented display shows encoder error "E2", this means encoder error on Motor 2. Hardware Dia-
gnostics can be used to determine what Encoder Alarm Bit on Motor 2 is triggering the
encoder error.
Selecting Hardware Diagnostics will display the Hardware Diagnostics Dialog Box.
Item Description
Amp Enable: Enables / disables the amplifier for the selected motor.
Brake Release: Enables / disables the brake release for the selected motor.
Output Level: Specifies a commanded torque, which is used to test the operation of the selected motor. The range is from -32767 to 32767, or the range specified by the Max Output Level parameter for that motor in the Robot Editor (restricted Expert Access parameter).
Pos Error: Displays the position error in encoder counts of the selected motor.
Index Delta: Displays the change in encoder counts between the previous latched zero index and the most recent latched zero index of the selected motor. Note that this is only useful with incremental encoders to verify zero index spacing and proper encoder readings.
Error: Displays the following error codes for the selected motor.
• P - Positive overtravel
• N - Negative overtravel
• D - Duty cycle error
• A - Amp fault
• R - RSC (Robot Signature Card) power failure
• E - Encoder fault
• H - Hard envelope error
• S - Soft envelope error
• M - Motor stalled
Status: Displays the following status codes for the selected motor.
• P - High power on
• T - In tolerance
• C - Calibrated
• H - Home sensor active
• V - V+ control
• I - Independent control
• Q - Current mode
• P - Position mode
• W - Square wave active
• S - Servo trajectory active
Power: Toggles the high power (the status field displays the current power state).
Output + / Output -: Click the Output + button to increase the DAC output to the selected motor. Click the Output - button to decrease the DAC output to the selected motor.
Data Collection
Data Collection can be used to view, store, and plot various robot system data while online with a controller. A maximum of 8 data items can be examined at up to an 8 kHz sampling rate, up to the memory limit of the controller's data buffer.
Selecting Data Collection will display the following window. Data Collection is not available
in Emulation Mode.
Item Description
Collect Time (sec): Specifies the data collection time in seconds. The default value is 1.
Samples/Sec: Specifies the data collection rate in samples per second. The default value is 1000.
Add / Remove: Clicking the Add button will display the Add Items to Collect Dialog Box. Clicking the Remove button removes the selected item from the collection list.
Live: Displays a window that shows the real-time data being collected.
Start: Click to start the data collection. The data collection will continue until either the Stop button is clicked or the specified collect time is reached.
Stop: Click to stop the data collection. If the specified collect time has already expired, this button is disabled.
Plot: Click to plot the collected data. A progress bar displays while the data is processed. After the processing has completed, the data is plotted on the graph located at the lower portion of the Data Collection Dialog Box.
Dump to Screen: Displays the collected data in a Data Dump window in text-file format.
Dump to File...: Displays a Save As Dialog Box used for saving the collected data to a text file, which can be viewed or processed at a later time.
Motor Tuning
Motor Tuning is used to send a square wave positioning command to the specified motor and
observe the response for servo motor tuning purposes. Observing the response has the same
functionality as Data Collection (refer to Data Collection on page 231 for more information).
Item Description
Motor: Specifies the motor that will receive the square wave positioning command.
Amplitude (cts): Specifies the amplitude of the square wave in servo counts.
This section describes the functions, configuration, and setup of Application Manager items
and objects.
To transfer a project from the remote PC to the IPC, an exception must first be made for the
port in the Windows firewall. Continuing without the exception will cause an error message,
shown below:
A firewall exception for all TCP and UDP ports should be automatically created during install-
ation, but if additional security is needed, an exception for the default port can be created with
the following steps:
1. Open Windows Firewall with Advanced Security. The name of this program may vary
depending on the software version and computer model.
2. Right-click on Inbound Rules in the navigator on the left and then click on New Rule….
3. Select the bubble next to Port in the Rule Type step. Click the Next button.
Double-clicking on the rule will show its properties. Refer to Figure 8-5 (Firewall Inbound Rule - General, Protocols and Ports Tabs) for an example of what this should look like. Double-check the areas outlined in red and ensure they match the image.
Figure 8-5 Firewall Inbound Rule - General, Protocols and Ports Tabs
ACE Server Instance
An ACE server instance is used as a recipient for projects from a remote computer. While it
has similar functionality to a standard instance of ACE, when synchronized from a client
instance, the active project will become the one from that client instance. A server instance is
identified with a tag in the bottom-left corner, located in the same place as the Emulation
Mode label. This tag also shows the name of the computer it is operating on as well as the
port number. An example of this is outlined in red in the following figure.
Creating a Server
During the installation of ACE, selecting the Application Manager option allows ACE to open as a server, and installs a desktop shortcut for this purpose. Opening an existing installation of ACE as a server requires the use of a keyword. ACE uses the following keywords in the Command Prompt to open ACE for different purposes:
• “startclient” is used to start a client session, which is the same as opening a standard instance of ACE.
• “startserver” is used to open a server instance.
• “start” is used to open a server instance with specified defaults.
• “help” displays information about the command and offers options.
• “version” displays information about the currently installed version of ACE.
In general, the only one needed for the architecture illustrated above is “startserver.” A client session can be created by opening ACE normally or using the “startclient” keyword. However, “startserver” is required to create a server instance using the Windows Command Prompt, as shown in the steps below:
A server instance can also be opened by creating a shortcut specifically for it. This is done by
opening the properties of an ACE shortcut and adding the “startserver” keyword after the
quotation marks in the “Target” field, see "ACE Server Properties Shortcut".
NOTE: If the server instance must use a different port number than the default,
add "--tcpport=[port number]" after the keyword "startserver". The target port
number should be indicated by the tag, see "ACE Server in Application Manager".
If a server instance exists on a computer with the defined IP address, the client instance can
connect to it. This is done using the Online ( ) icon in the toolbar. If the connection cannot
be established, an error similar to the one shown in "Windows Connection Exception Message"
will appear. Otherwise, the client will go online with the server instance.
Synchronization
The Synchronize button is used to transfer Application Managers between clients and servers.
Clicking this opens the Synchronization window, shown in the following figure.
The main portion of the window shows the objects in the client and the server Application
Managers in comparison to one another. The "Computer: Data Name" column shows the items
in the client and the "Target: Data Name" column shows those in the server. The text in each
row is color-coded depending on the results of the comparison:
• White: The items are synchronized between the two computers. No differences are detected.
• Red: There are detected differences between the two items. They must be synchronized
for the project to properly function. This is also marked by a red Error marking on the
left side.
• Green: The item only exists on one computer. It must be transferred to the other. This is
also marked by a yellow Caution symbol on the left side.
• Gray: The item has not been checked. This will usually only occur if there is an error
outside of the Synchronization window.
The space at the bottom displays any necessary messages from the synchronization process.
The four buttons along the bottom of this window are used to synchronize the two ACE
instances.
Transfer To Target synchronizes all checked object data from the client to the server. Transfer
From Target synchronizes all checked object data from the server to the client. In either case,
the instances of Locator0 in each computer would match whichever one was transferred.
In the example shown by Application Manager Synchronization, clicking Transfer To Target
would do the following:
• Locator0 from the client instance would overwrite differences on the IPC.
• Block Model from the client instance would be created on the IPC.
Clicking Transfer From Target would do the following:
• Locator0 from the server instance would overwrite differences in the client instance
project data.
• Block Model would be removed from the client instance project data.
The Recompare button will search for differences between the two Application Managers. This
happens when the Synchronization window is opened, but the button serves as a way to
verify that the items were properly synchronized.
Finally, the Close button closes the window.
Successfully transferring an Application Manager from a client instance to a server instance
will create an ACE project on the IPC with the name “@autostart!”. Transferring a different
Application Manager to the server instance will replace an existing “@autostart!” project with
the new data.
The server instance can further be protected by adding additional users with various access
levels and setting passwords for them. These can be created on the server instance directly or
they can be created on a client instance and then transferred to the server instance using Syn-
chronization.
Refer to User Management on page 139 for more information.
8.2 3D Visualization
In addition to robots, several other objects that are configured in the Application Manager are
represented in the 3D Visualizer. The location, orientation and dimensions of these items are
defined during the initial configuration of each item and can be adjusted to precisely simulate
the application. The following items will appear in the 3D Visualizer after they are configured.
Adding Shapes
Boxes and cylinders can be added to the 3D Visualizer to represent objects in the robot work-
space. Use the following procedure to add cylinders and boxes to the 3D Visualizer.
1. Right-click 3D Visualization in the Multiview Explorer and select Add and then choose
Box or Cylinder. A new object will be added to the Multiview Explorer under
3D Visualization and it will appear in the 3D Visualizer Window.
2. Access the properties for this object by double-clicking it or right-click and select Edit.
This opens the properties editor in the Edit Pane.
Object Description
3D Visualization
Location
Offset From Parent Set an offset distance from a parent object (X, Y, Z, Yaw,
Pitch, Roll).
Select “CAD Library”, as shown in the following figure, and then click the Next button to
access the CAD Library section of the wizard. The displayed objects appear in one of the fol-
lowing categories:
Select “Open my own CAD file” in the first step of the Import CAD File wizard to access the
Import File step. Then click the selection icon next to the File Name field. A browser
window will open to select the CAD file. The supported CAD file format is STEP. Selecting a
file and clicking Open imports the file into the wizard.
Once imported, the main window of this step shows the CAD Data file as it will be saved in
the project. Some modifications can be made to it here before it is fully integrated. First, the +90
Yaw and +90 Pitch buttons rotate the object to control its standard orientation in the 3D Visualizer.
This is particularly useful if the file orientation is interpreted differently than ideal. For
example, custom frames, as in the above figure, should be positioned so their feet are flat
against the base plane. If they are shown in a different orientation in this window, the buttons
can rotate them into the correct position.
The navigator on the right side shows all of the components in the CAD file. The checkboxes
define which of the parts are imported. By default, all parts will be selected. Deselecting these
will omit them from the resulting CAD Data object in the project.
When the necessary adjustments are made, click Next to close the wizard and import the file.
Configuration of CAD Objects
Object Description
3D Visualization
Location
Rotation Point Set the offset of the object's center of rotation from the
origin.
Others
Category Defines the category of the CAD Data file in the CAD Library.
By default, this field is blank for imported files.
Update 3D Shapes
Boxes and Cylinders used to represent objects in the robot workspace may eventually need to
be replaced by custom CAD Data. In addition, CAD Data objects may be revised outside of
ACE, requiring the object in the workspace to be updated. Both of these tasks can be accom-
plished using Update All 3D Shapes.
To update 3D Shapes, right-click in 3D Visualization in the Multiview Explorer and select
Update All 3D Shapes. This opens a wizard similar to importing a custom CAD Data object,
refer to Adding and Configuring CAD Data. The first step of the updating process requires you
to select a CAD file saved on the local drive. Once selected, the wizard will allow replacement
of the existing 3D Visualization objects as shown in 3D Shapes Wizard. The left side shows
the hierarchy of the selected CAD file, including all parts and sub-assemblies. The right side of
this is a list showing all the existing 3D Visualization objects. The far right shows the imported
file in the 3D Visualizer.
The 3D Visualization objects can be replaced by the imported file or any part or sub-assembly
contained within it. To do this, first set the specifications of the imported file by unchecking
any parts and sub-assemblies that should not be imported. Also, set the orientation using the
+90 Yaw and +90 Pitch buttons above the visualizer window. Then click the appropriate entry
in the hierarchy and drag it to the respective visualization object. The text beneath the object
name will change from “(no selection)” to the name of the CAD Data. Once the necessary selec-
tions are made, click Finish to close the wizard and replace the objects.
3D Shapes Wizard shows an example of this process. When you click Finish, Box0 will be
replaced by the Quattro Frame October 2019 – Three Crossbeams assembly and CAD Data1
will be replaced by the Plate Holder part. None of the other objects will be updated.
Connection Points
Each object in the 3D Visualizer can have associated connection points. These allow the user
to easily create connections between objects.
There are two types of connection points: Links, where the associated object is the parent item,
and Mounts, where the connected item is the parent. For example, if a connection is made
where a robot is attached to a table, the connection point for the table will be a link and the
connection point for the robot will be a mount. In this way, the table becomes the parent of the
robot. Moving the robot will not affect the table, but moving the table will also move the robot.
The links and mounts for an object are accessed in the lower part of 3D Visualization object
editors, as shown in "Connection Points Editor". Clicking the tabs above the displayed
items will toggle between the two connection types. Connection points have the following prop-
erties:
Object Definition
Name User-defined name of the connection. This has no functional effect and is
primarily used so the user can easily show the purpose of the connection.
Type Name Defines the type of connection for which the point is designed. Links and
mounts will only be able to connect if they have the same Type Name.
Offset Set the offset of the connection point from the object origin.
The connections will be displayed in the editor 3D Visualizer window as green dots.
To create a new connection point, open the correct tab in the editor and click the plus button. A
new entry will appear in the editor. It is recommended that the name be set to something
distinct to differentiate it from others. Click the Type Name drop-down menu and set the type
from the following options:
Mounts can be connected to links in two ways. The first is when a CAD Data object is created
from the library. In this case, the last step of the wizard is the Connections step, where you can
select any connections to make upon creation of the object.
The rows on the left side of the Connections pane show the links and mounts associated with
the CAD Library part. Clicking the selection icon on the right allows you to choose an existing
3D Visualizer object to link to the new object. Only objects with connection points that match
the highlighted one will appear in the menu. For example, the option selected in "Import
CAD File Connections" connects to SCARA Robot Mount Center, so only robots of that type
will be available to select.
The second method to make a connection is to open the 3D Visualizer and use the Snap fea-
ture in the 3D window.
Snap to Edge Snaps the origin of the selected object or a selected mount
point to either an endpoint or the midpoint of the edge of
another object.
Snap to Face Snaps the origin of the selected object or a selected mount
point to the centroid of a face of another object.
Snap to Link Snaps a mount to a link of the same Type Name. Only links
with the same Type Name as the selected mount will be visible in the Visualizer.
Selecting a Box, Cylinder, or CAD Data object in the Visualizer will allow selection of either
Snap To Edge or Snap To Face. However, Snap To Link can only be activated by first select-
ing an existing mount. To do this, select an object with a mount and click the Show/Hide
Mount Points icon at the bottom of the window ( ). Then select a mount point. The link
points are shown by hovering the cursor over a linked object, at which point one can be selec-
ted. This will snap the object with the mount to the link.
NOTE: Snapping cannot change the relationship of one of the objects to be the
parent of the other. If the two objects need to be moved as a group, the parents
need to be set manually using the editors.
8.3 ACE Sight
ACE Sight facilitates the integration of vision and V+ programs. The ACE software provides a
collection of ACE Sight objects for the purposes of creating and storing calibrations as well as
communicating vision results to V+ programs. These objects provide the following functions.
• Belt Latch Calibration: calibrating a robot to a latch signal. Refer to Belt Latch
Calibration on page 254 for more information.
• Belt Calibration: calibrating a robot to a conveyor belt. Refer to Belt Calibration on page
257 for more information.
• Camera Calibration: calibrating a robot to a camera. Refer to Camera Calibration on
page 259 for more information.
• Gripper Offset Table: defining the offset on a part to be picked from the actual pick point
to the part origin. Refer to Gripper Offset Table on page 260 for more information.
• Vision Sequence: displays the order and dependency of vision tools while providing
program access to the results. Refer to Vision Sequence on page 262 for more information.
• Overlap Tool: define a method to prevent double processing of belt-relative vision
results located in more than one image acquired by a camera. Refer to Overlap Tool on
page 270 for more information.
• Communication Tool: this tool is added to a Vision Sequence and communicates belt-relative
vision results to a controller queue for processing by a robot. Refer to Communication Tool
on page 271 for more information.
• Saving calibration data to a file and loading calibration data from a file. Refer to Saving
and Loading Calibration Data on page 276 for more information.
IMPORTANT: Only one instance of ACE software can run with ACE Sight
functions. If an additional instance of the ACE software is started on the PC,
[ACE Sight Offline] will be displayed in the status bar. In this state, any ACE
Sight keywords may return an error.
Many ACE Sight objects are dependent on other ACE software objects. When configuring a
new ACE Sight object, the editor will provide information about other objects that may need to
be configured or defined. When configuring a new ACE Sight object, the Edit Pane will indicate
missing dependencies as shown below (as an example).
Requirements
• The PC running the ACE software must be connected to the controller for the robot.
• A belt calibration must be completed.
• The robot, controller, and belt encoder must be properly connected and functioning.
• The belt encoder position latch signal must be configured in the SmartController. Refer to
Configure Belt Encoder Latches on page 195 for more information.
NOTE: A latch signal number is not required while in Emulation Mode.
1. Right-click ACE Sight in the Multiview Explorer, select Add, and then click Belt
Latch Calibration. The ACE Sight Robot-to-Belt Latch Calibration Wizard will open.
2. Follow the Calibration Wizard steps to select the robot, end effector, and belt calibration.
Clicking the Finish button will create the Belt Latch Calibration object in the Multiview
Explorer.
NOTE: After the Belt Latch Calibration object is created, you can rename
the new Belt Latch Calibration object by right-clicking the item and select-
ing Rename.
3. Open the new Belt Latch Calibration editor by right-clicking the object and selecting Edit
or double-clicking the object. The Belt Latch Calibration editor will open in the Edit
Pane.
4. Open the Calibration Wizard by clicking the Calibration Wizard button. The Robot-to-
Belt Latch Calibration Sequence will open.
5. Make a selection for the end effector and set the latch sensor offset position. After com-
pleting these steps, click the Finish button. This will close the Robot-to-Belt Latch Cal-
ibration Sequence.
NOTE: The latch sensor is depicted in the wizard's virtual display as
shown below.
6. Review the Belt Latch Calibration object properties in the Belt Latch Calibration editor to
confirm the configuration. You can also use the Robot-to-Belt Latch Calibration Test
Sequence by clicking the Test Calibration button ( ) to confirm the configuration
is correct. If the configuration is correct, the calibration procedure is complete.
Belt Calibration
Belt Calibration calibrates a robot to a conveyor belt. Configuring this object will establish a
relationship between the belt, its encoder, and the robot. This calibration is necessary when the
robot will handle parts that are moving on a conveyor belt. Refer to Robot-to-Belt Calibration
on page 33 for more information.
Requirements
• The robot, controller, and belt must be correctly connected and functioning.
• The PC running the ACE software must be connected to the controller for the robot and
belt.
• The robot and gripper must be defined in the ACE software.
1. Right-click ACE Sight in the Multiview Explorer, select Add, and then click Belt Cal-
ibration. The ACE Sight Robot-to-Belt Calibration Wizard will open.
2. Follow the Calibration Wizard steps to select the robot, end effector, and encoder. Click-
ing the Finish button will create the Belt Calibration object in the Multiview Explorer.
NOTE: After the Belt Calibration object is created, you can rename the
new Belt Calibration object by right-clicking the item and selecting
Rename.
3. Open the new Belt Calibration editor by right-clicking the object and selecting Edit or
double-clicking the object. The Belt Calibration editor will open in the Edit Pane.
4. Open the Calibration Wizard by clicking the Calibration Wizard button. The Robot-to-
Belt Calibration Sequence will open.
5. Make a selection for the end effector, test the encoder operation, teach the belt window,
and test the Belt Calibration. Refer to Belt Calibration Results on page 258 for more
information on the Virtual Teach step. After completing these steps, click the Finish but-
ton. This will close the Robot-to-Belt Calibration Sequence.
6. Review the Belt Calibration object properties in the Belt Calibration editor to confirm the
configuration. You can also use the Robot-to-Belt Calibration Test Sequence by clicking
the Test Calibration button ( ) to confirm the configuration is correct. If the
configuration is correct, the calibration procedure is complete.
Additional Information: Level Along and Level Lateral are optional but-
tons found in the Robot-to-Belt Test Sequence. These buttons level the Belt
Transformation along the length or width of the belt.
Belt Calibration Results
The belt calibration results are used to define the belt area for the robot to access. When using
these values in V+ programs, they may need to be adjusted based on your application and will
vary based on factors like robot travel times, part flow rates, belt speed, and other timing con-
ditions.
The values of the items below are set during the belt calibration process.
The Belt Transform is the frame that is generated from the downstream and upstream trans-
form values to define the orientation of the belt window.
Downstream Transform
The Downstream Transform is used to define the downstream belt window limit in a V+ pro-
gram. This sets the downstream threshold where the robot is allowed to access an object. If the
robot is tracking a part when it reaches this point, a Belt Window Violation will occur.
Nearside Transform
The Nearside Transform is the third point taught in the calibration and is used to define the
width of the belt window.
Upstream Transform
The Upstream Transform is used to define the upstream belt window limit in a V+ program.
Scale Factor
The belt encoder Scale Factor sets the number of millimeters per encoder count for the belt's
encoder.
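For example, with an illustrative Scale Factor of 0.02 mm per count, a belt displacement of
5,000 encoder counts corresponds to 0.02 × 5,000 = 100 mm of belt travel.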
Camera Calibration
Camera Calibration calibrates a robot to a camera. This calibration is necessary if you will be
using a vision system with a robot.
Configuring this object will establish a relationship between the following objects (where
applicable).
• Camera
• Belt
• Robot
• Robot end effector (robot tool)
Requirements
• The robot, controller, belt (if used), and camera must be correctly connected and
functioning.
• The Virtual Camera calibration (mm/pixel) must be complete.
• The PC running the ACE software must be connected to the controller for the robot (and
belt).
• The Belt Calibration Wizard must have completed successfully if a conveyor belt is
used.
1. Right-click ACE Sight in the Multiview Explorer, select Add, and then click Camera Cal-
ibration. The ACE Sight Camera Calibration Wizard will open.
2. Follow the Calibration Wizard steps to select the robot, end effector, camera, scenario,
camera link, and belt calibration. Clicking the Finish button will create the Camera Cal-
ibration object in the Multiview Explorer.
NOTE: After the Camera Calibration object is created, you can rename the
new Camera Calibration object by right-clicking the item and selecting
Rename.
3. Open the new Camera Calibration editor by right-clicking the object and selecting Edit
or double-clicking the object. The Camera Calibration editor will open in the Edit Pane.
4. Open the Calibration Wizard by clicking the Calibration Wizard button. The Calibration
Sequence will open.
Additional Information: The Calibration Sequence will vary depending
on the scenario selections made during the Calibration Wizard.
5. Make selections for all steps of the Calibration Sequence. After completing these steps,
click the Finish button. This will close the Calibration Sequence.
6. Review the Camera Calibration object properties in the Camera Calibration editor to con-
firm the configuration. You can also use the Test Calibration button ( ) to
confirm the configuration is correct. If the configuration is correct, the calibration pro-
cedure is complete.
Additional Information: The Gripper Offset Table can be useful when a robot
must pick a part in different poses / orientations located by different models. It
may be necessary to create a pick point in a different orientation than the one in which
the part was detected.
2. The offset(s) from the actual pick point to the part origin, which indicate where the
robot must pick the part in relation to the origin of the part. This is defined in the
Gripper Offset Table and is assigned to a specific robot.
Requirements
• The robot and controller must be correctly connected and functioning.
• The PC running ACE software must be connected to the controller.
• All associated objects such as belts, cameras, and vision tools must be defined and
configured if used.
1. Right-click ACE Sight in the Multiview Explorer, select Add, and then click Gripper Off-
set Table. The Gripper Offset Table Creation window will open.
2. Follow the Gripper Offset Table Wizard steps to select the robot. Clicking the Finish but-
ton will create the Gripper Offset Table object in the Multiview Explorer.
NOTE: After the Gripper Offset Table object is created, you can rename
the new Gripper Offset Table object by right-clicking the item and selecting
Rename.
3. Open the new Gripper Offset Table editor by right-clicking the object and selecting Edit
or double-clicking the object. The Gripper Offset Table editor will open in the Edit Pane.
4. Use the Add button ( ) to create a new Gripper Offset index item. There are two
methods for editing the Offset values:
Direct Value Entry
Change the values directly for the Gripper Offset index. The following items can be
entered directly.
Teach Button
The Teach button ( ) opens a Gripper Offset Teach Wizard that guides you
through several steps to teach an Offset value while taking into account other objects
that may exist in the project such as:
Vision Sequence
A Vision Sequence lets you see the order and dependency of vision tools that will be executed
while giving V+ programs a means for retrieving results from vision tools. The Vision
Sequence object shows the list of tools that will be executed as part of the sequence, the order
in which they will be executed, and the Index associated with each one. The Index is the exe-
cution order of each tool.
The sequence cannot be modified from the Sequence Display Window. It shows the order in
which the tools will be executed, based on the parent tool specified in each tool. The actual
order of a sequence is determined when you specify the Relative To parameter for each of the
tools to be included in the sequence.
When you add a Vision Sequence object to the project, the Vision Tool parameter determines
the Top-Level tool, and all the tools you specified as the Relative To parameter in the chain
under that will automatically show up as members of the sequence, in the order you set.
In a sequence, you specify a robot-to-camera calibration. The calibration is applied to any res-
ult accessed by a VLOCATION transformation function.
NOTE: V+ programs can access the results of intermediate (not only top-level)
tools when a sequence is executed because each tool has an index that can be
accessed.
The Default Calibration is applied to all results, even if they are not the top-level
tool.
Requirements
• The robot and controller must be correctly connected and functioning.
• At least one vision tool must be configured to include in the sequence.
• A camera must be defined in the ACE software.
• A Camera Calibration must be completed if a VLOCATION command is used in a
V+ program.
1. Right-click ACE Sight in the Multiview Explorer, select Add, and then click
Vision Sequence. A Vision Sequence object will be added to the Multiview Explorer.
NOTE: After the Vision Sequence object is created, you can rename the
new Vision Sequence object by right-clicking the item and selecting
Rename.
2. Open the new Vision Sequence editor by right-clicking the object and selecting Edit or
double-clicking the object. The Vision Sequence editor will open in the Edit Pane.
Item Description
1 Configuration Status
The configuration status will update after the Run button is clicked.
Hover over the red flag for information about the incomplete configuration.
3 Properties
• Vision Tool: Top-level vision tool this sequence references. All tools
required to operate the tool selected here will be included in the
sequence.
4 VLOCATION Properties
5 Sequence Layout
3. Make the necessary configuration settings in the Edit Pane to complete the procedure for
adding a Vision Sequence object.
The Vision Sequence editor shows the list of tools that will be executed as part of the sequence,
the order in which they will be executed, and the associated index number of each. The tools
are executed in order of ascending Index value.
The main property of a Vision Sequence is the Vision Tool parameter that defines what tool
marks the end of the sequence. Once it is selected, the Sequence Layout will be populated by
the top-level tool’s dependencies down to the initial image source. For example, the Vision
Sequence shown in Figure 8-29 is based on a Gripper Clearance tool that has Relative To set to
a Locator tool. These are included by starting with the top-level tool and its Image Source and
Relative To properties and working through the same properties of subsequent tools. This is
laid out in the following figure that shows how the tools are associated and what data is
passed between them.
(Figure: sequence data flow. Virtual Camera0 provides images from a Basler or Emulation
Camera. The Image passes to Locator0, which detects model shapes using Model0 and
provides location data. Gripper Clearance0 applies histograms around the Locator points on
the original image.)
Another example is shown in the following figure, where the image is modified for instance
locating before the barcodes are actually read. Refer to Figure 8-32 to see this sequence.
(Figure: sequence data flow. Virtual Camera0 provides images from a Basler or Emulation
Camera. Advanced Filter0 erodes the image to generic shapes for instance detection and
outputs an edited image. Locator0 detects the generic shapes using Model0 and provides
barcode location data. Barcode0 is set Relative To Locator0 on the original image to read
barcodes at the correct locations.)
NOTE: The sequence itself cannot be changed in the Vision Sequence editor. The
Relative To properties in the tools themselves must be used to do this.
For some applications, it may be beneficial to have multiple sequences where
one sequence is a subset of another.
Vision Sequences also provide a means for V+ programs to obtain information from vision
tools using a sequence_id parameter. In addition, the robot-to-camera calibration in the Default
Calibration property can be accessed using a VLOCATION transformation function. The
returned values are provided in the following table.
Value Description
Index Tool number within the sequence. The tools are run in ascending
order.
Show Results Defines if the tool results will be shown in the Vision Window.
Tool Execution Time (ms) Time (in milliseconds) taken for the tool to run in the most
recent execution of the sequence.
The Frame / Group result defines the number of the frames referenced in the calculation of the
tool. If the tool is not set relative to any other tool, the results for this column will all be
returned as 1. However, when the tool is set relative to a tool with more than one returned
instance, the Frame / Group value reflects the result instance of the previous tool.
For example, Figure 8-33 and Figure 8-34 show a sequence that is designed to find the stars
within the defined shapes. The Locator tool in Figure 8-33 disambiguates between the shapes
with five stars and those with six. Then, the Shape Search 3 tool shown in Figure 8-34 locates
the stars within the shape.
The Results section shows that the Frame / Group results directly correlate with the Instance
result from the Locator in Figure 8-33 above. Instances 1 through 6 in Figure 8-34 are in Frame
/ Group 1 since they are located with respect to Instance 1 in Figure 8-33. Instances 7
through 11 are in Frame / Group 2 for the same reason. The other regions correlate in the same
way.
The Frame / Group can also be used as an argument for the VLOCATION transformation func-
tion in V+ program code to limit the returned location to a particular instance in another tool.
The syntax of the VLOCATION transformation is provided below.
VLOCATION ($ip, sequence_id, tool_id, instance_id, result_id, index_id, frame_id)
If there is an argument for frame_id, then instance_id will be evaluated based on that frame.
Otherwise, it will be evaluated with reference to all instances. For example, the following code
returns the location (21.388, 8.330, 0, 0, 0, -1.752) based on Instance 2 in the results of Figure 8-
34 above.
VLOCATION ($ip, 1, 3, 2, 1311, 1)
However, when the frame_id is added as shown below, the line returns (-135.913, 52.251, 0, 0,
0, -31.829) since that is Instance 2 of Frame / Group 2.
VLOCATION ($ip, 1, 3, 2, 1311, 1, 2)
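As a minimal sketch, a V+ program could run the sequence and then move to the returned
location. The VRUN argument order, the SET assignment, and the variable name part.loc are
illustrative assumptions; the VLOCATION arguments simply reuse the example values above:

VRUN $ip, 1 ; execute Vision Sequence 1 (assumed argument order)
SET part.loc = VLOCATION($ip, 1, 3, 2, 1311, 1, 2)
MOVE part.loc ; move the robot to the returned location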
Overlap Tool
The Overlap Tool ensures that parts moving on a conveyor belt are processed only once if loc-
ated in multiple images. A part found by the Locator Vision Tool (or other input tools) may be
present in multiple images acquired by the camera and this tool ensures that the robot is not
instructed to pick up or process the same part more than once. The input required by the Over-
lap Tool can be provided by any tool that returns a transform instance. This tool is typically
used in conveyor tracking applications.
If an instance in the image is a new instance (Pass result) it is passed on to the next tool in the
sequence. If an instance is already known, it is rejected (Fail result), and is not sent to the next
tool in the sequence. This avoids double-picking or double-processing of the object.
The Overlap Tool should be placed near the beginning of a sequence, just after the input tool
and before any inspection tools in the sequence. This ensures that the same instance is not pro-
cessed multiple times by the inspection tools. Refer to Vision Sequence on page 262 for more
information.
Requirements
l The camera, robot, and conveyor belt are calibrated, connected, and correctly func-
tioning.
l The PC running the ACE software must be connected to the controller for the robot and
belt.
l The tool is receiving latched values from the input tool. The belt latch must be wired to
the controller and properly configured.
l A vision input tool is defined and configured.
l The conveyor belt and the controller have been correctly assigned to a camera.
1. Right-click ACE Sight in the Multiview Explorer, select Add, and then click Overlap
Tool. An Overlap Tool object will be added to the Multiview Explorer.
NOTE: After the Overlap Tool object is created, you can rename the new
Overlap Tool object by right-clicking the item and selecting Rename.
2. Open the new Overlap Tool editor by right-clicking the object and selecting Edit or
double-clicking the object. The Overlap Tool editor will open in the Edit Pane.
3. Make all necessary settings in the Overlap Tool editor. When all settings are completed,
the Overlap Tool object configuration procedure is complete.
This specifies how far an instance must be from the expected location of a known instance in a
different image for it to be considered a new instance. Distance is specified in mm. It should be
as small as possible without causing double-picks.
NOTE: Rotation is ignored by the Overlap Tool. Only the difference in X and Y
is considered.
Communication Tool
The Communication Tool is a tool for conveyor tracking applications. The purpose of the Com-
munication Tool is to transfer belt-relative vision results into a controller queue for processing
by a robot.
The Communication Tool typically receives instances from an Overlap Tool, which prevents
different images of the same instance from being interpreted as different instances. The input
to the Communication Tool can also be provided by other tools that output instances, such as
an Inspection or a Locator Tool. The Communication Tool processes the input instances by
applying region-of-interest parameters.
The Communication Tool acts as a filter in the following way.
• Instances that are passed by the tool are sent to the controller queue.
• Instances that are not output to the controller because they are outside the region of
interest or because the queue is full, are rejected. These instances are passed to the next
tool in the sequence, such as another communication tool.
In many applications, it may be useful to use two or more Communication Tools. Examples
when multiple Communication Tools are necessary are provided below.
NOTE: Each tool must have its "Relative To" property set to the preceding tool,
so any parts not queued by one tool are passed to the next tool.
• Use two Communication Tools for managing either side of a conveyor belt. Each
Communication Tool sends instances to a robot that picks parts on one side of the belt only.
• Use two (or more) Communication Tools so that the subsequent tools can process
instances that were rejected by the preceding tools because the queue was full. Each tool
will send its passed parts to a different queue, so any parts missed by a robot because
its queue is full will be picked by a subsequent robot.
• Use multiple Communication Tools to send instances to multiple robots positioned near
a single conveyor belt with a single camera.
Requirements
• The camera, robot, and conveyor belt are calibrated, connected, and correctly
functioning.
• The conveyor belt and the controller have been correctly assigned to a camera.
1. Right-click ACE Sight in the Multiview Explorer, select Add, and then click
Communication Tool. A Communication Tool will be added to the Multiview Explorer.
NOTE: After the Communication Tool object is created, you can rename
the new Communication Tool object by right-clicking the item and select-
ing Rename.
2. Open the new Communication Tool editor by right-clicking the object and selecting Edit
or double-clicking the object. The Communication Tool editor will open in the Edit Pane.
3. Make all necessary settings in the Communication Tool editor. When all settings are
completed, the Communication Tool object configuration procedure is complete.
Search Area
The Search Area defines the width and height of the region of interest. Modifying the region
of interest is useful for applications in which two or more robots pick or handle objects on
different sides of the belt.
For example, an application could use one Communication Tool configured to output objects
on the right side of the belt to Robot A and a second Communication Tool configured to output
instances on the left side of the belt to Robot B. The region of interest can be the entire image or
a portion of the input image. It can be set in one of the following ways.
• Enter or select values for the Offset and Search Area parameters: Position X, Position Y,
Angle, Width, and Height.
• Resize the bounding box directly in the display. The rectangle represents the tool region
of interest. Drag the mouse to select the portion of the image that should be included in
the region of interest.
Robot
The Robot parameter selects the robot that will handle or pick the instances output by the Com-
munication tool.
NOTE: Ensure the selected robot and the robot of the selected camera calibration
are the same. If not, the transformation may be invalid.
Queue Parameters
The Communication Tool sends instances that pass its criteria to its queue, which is con-
figured with the following parameters.
Queue Index
The Queue Index identifies the queue to which instances will be sent. Two different Com-
munication Tools cannot write to the same queue on a controller. If there are multiple Com-
munication Tools, either on the same or different PCs, each tool must be assigned a unique
queue index. Choose a value from 1 to 100.
In a V+ program, this queue index must be used to access the instances sent to the controller
by the communication tool. For example, in the ACE Sight application sample for belt camera
pick to static place, the "rob.pick" program will use "pick.queue" variable to store the queue
index used when obtaining instances. This occurs with the following V+ program call.
CALL getinstance(pick.queue, peek, inst.loc, model.idx, encoder.idx, vision.x,
vision.y, vision.rot)
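As a minimal sketch, a hypothetical V+ pick loop could poll that queue as follows. The
getinstance program and its parameters come from the application sample named above; the
loop structure and the idle WAIT are illustrative assumptions:

WHILE TRUE DO
; assumed: getinstance returns the next queued instance, if one is available
CALL getinstance(pick.queue, peek, inst.loc, model.idx, encoder.idx, vision.x, vision.y, vision.rot)
WAIT ; assumed: yield until the next system cycle before polling again
END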
Queue Size
Queue Size specifies the number of instances that can be written to the queue. The ideal queue
size varies greatly and may require some testing to optimize this value for a specific applic-
ation and environment. Choose a value from 1 to 100.
Queue Update
Queue Update specifies how often the Communication Tool will write new instance data to the
queue on the controller. The recommended setting is After Every Instance. The Queue Update
options are described below.
• After Every Instance: The After Every Instance setting sends each instance to the queue
separately as it becomes available. This minimizes the time until the first instance is
available for use by the V+ program. If a large number of instances are located, then it
can take longer to push all of the data to the controller.
• After Last Instance: The After Last Instance setting sends multiple instances to the
queue at once. This mode minimizes the total data transfer time, but can increase the
time until the first instance is available for use since the robot is inactive during the
time that the PC is writing to the queue.
NOTE: Both Queue Update settings have the same function when only one
instance is found.
Soft Signal
This sets the value of the Soft Signal to use when Use Soft Signal is enabled. The signal can be
used by a V+ program to synchronize the controller and the PC. This signal instructs the con-
troller that all instances detected by the input tool have been sent to the controller.
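As a minimal sketch, a V+ program could synchronize on this signal as follows; the soft signal
number 2010 is an illustrative assumption, not a documented default:

WAIT SIG(2010) ; wait until all located instances have been sent to the controller
SIGNAL -2010 ; turn the soft signal back off for the next cycle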
Gripper Offset Configuration
This specifies the method and details needed for determining the offset index of the gripper.
Refer to Gripper Offset Table on page 260 for more information.
Use the item descriptions below to make the appropriate configuration settings.
Selection Mode
Default Offset Index
This specifies the index in the Gripper Offset Table to apply as the gripper offset. This setting
can be from 1400 to 1499.
Model Offset Index
Use this area to create associations between Model IDs from a Locator Model and
Gripper Offset Table index numbers. This refers to the Custom Model Identifier property of the
Locator Model.
Use the Add button ( ) and Remove button ( ) to create necessary associations.
Selecting this option indicates that locations should be returned relative to the robot tip pos-
ition when the picture was taken. This is only used if the selected camera calibration was per-
formed with the calibration object attached to the robot tool (an upward looking camera for
example).
8.4 Cameras
In the Multiview Explorer, right-click Cameras, select Add and then choose a camera type. A
new camera object will be added to the Multiview Explorer.
Virtual Cameras
The Virtual Camera object provides an interface between a Vision Sequence and the object
used to acquire an image. The Virtual Camera object is typically used as the Image Source ref-
erence for vision tools (except when Image Processing tools are used).
The Virtual Camera object editor provides access to pixel-to-millimeter calibration data, acquis-
ition settings, image logging, and references for the camera object to use for acquiring an
image. The Default Device setting for the Virtual Camera designates the object used to acquire
an image. When configured properly, this can provide a seamless transition between a phys-
ical camera and an Emulation Camera without changing vision tool settings.
To add a Virtual Camera, right-click Cameras, select Add, and then click Virtual Camera. A
new Virtual Camera will be added to the Multiview Explorer.
NOTE: The option to add a Virtual Camera is present when adding an Emu-
lation Camera, Basler Camera, Sentech Camera or a Custom Device. This is typ-
ically how a Virtual Camera is added to the ACE project.
You can rename a Virtual Camera after it has been added to the Multiview
Explorer by right-clicking the object and then selecting Rename.
To access the Virtual Camera configuration, right-click the object in the Multiview Explorer
and then select Edit, or double-click the object. This will open the Virtual Camera editor in the
Edit Pane.
Item Description
• The Run button will acquire the image once from the camera device specified.
• The Stop button will stop continuous Live image acquisition.
• The Live button will start continuous image acquisition from the camera
device specified.
Use these buttons to adjust the image view area. You can also use the mouse scroll to
zoom in and out.
This area shows the most recent image acquired by the Virtual Camera.
This area is used to configure the Virtual Camera. Refer to Virtual Camera
Configuration Items on page 279 for more information.
This area is used to calibrate the Virtual Camera. Refer to Virtual Camera Calibration
on page 281 for more information.
This area is used to adjust image acquisition settings for the camera. Refer to Acquis-
ition Settings on page 284 for more information.
Default Device
Select a default camera device used by the Virtual Camera (when not in Emulation Mode).
Image Logging
Make the following settings for saving images when the Virtual Camera acquires an image.
Item Description
Enabled Enable or disable the image logging function with this selection
Image Count Enter the number of images to store. Up to 1000 images can be stored.
If you are logging images from a physical, belt-relative camera for use in Emulation Mode,
record the average belt velocity and picture interval in effect while logging images. These two
pieces of information are necessary for the picture spacing on an emulated belt to be similar. If
using a Process Manager, the belt velocity can be recorded from the System Monitor and the
picture interval can be recorded from Process Manager in the Control Sources area. If using a
V+ program and ACE Sight, the belt velocity can be recorded from the Controller Settings -
Encoders area and the picture interval can be recorded from the Robot Vision Manager
Sequence, Continuous Run Delay area.
Without this information, the replayed images may overlap and part flow may be different
than you expect. When using the logged images in emulation, be sure to apply these values in
the appropriate fields for the flow of images to match the physical system.
Emulation Configuration
When Emulation Mode is enabled, the Virtual Camera object editor uses an Emulation Con-
figuration parameter that specifies one of the following modes for the image source.
This setting will use the camera device specified in the Default Device field.
Random Instances
This setting allows generation of a random quantity of randomly oriented vision results. You
specify the range of possible instance count with minimum and maximum values, but you do
not have control over the random X, Y, and Roll values of the results generated. If this level of
control is required, consider using a Custom Vision tool with user-defined logic.
Additional Information: When using this mode, vision tools will display the
error "There is no image in the buffer."
NOTE: When the Basler Pylon Device is used and Random Instances is selec-
ted, the Fixed Pixel calibration will automatically load as the calibration type in
the Basler Pylon Device Virtual Camera object.
This setting allows pictures to be obtained from another vision device that is configured in the
project (typically an Emulation camera object). Select the alternate vision device with this set-
ting if required.
Images Replay
This setting allows a set of images to be displayed from a specified directory (.hig files only).
Virtual camera calibration is required before vision tools are created to ensure that accurate
measurements and locations are acquired. This is a spatial adjustment that corrects for per-
spective distortion, lens distortion, and defines the relationship between the camera pixels and
real-world dimensions in millimeters. There are two methods available for virtual camera cal-
ibration as described below.
Additional Information: You should calibrate the camera before you create any
vision tools.
NOTE: The offset of the camera from the robot or other equipment is not part of
this calibration. That information is obtained during the robot-to-camera cal-
ibration. Refer to Camera Calibration on page 259 for more information.
Access the camera calibration in the Virtual Camera's editor that is associated with the camera
device. An example is provided below.
• Click the Add button to begin the calibration procedure. Use the Delete button to
remove a previous calibration.
• Click the Load button to load a previously saved calibration. Click the Save button to
save an existing calibration to the PC.
• Click the Calibrate button to adjust a selected calibration.
You can use a grid with known spacing to calibrate the camera. Sample dot target files are
provided with the ACE software installation. Find these in the default installation directory
with the following file names:
• DotPitchOthers_CalibrationTarget.pdf
• DotPitch10_CalibrationTarget.pdf
NOTE: The sample target is intended for teaching purposes only. It is not a
genuine, accurate vision target.
IMPORTANT: Because any error introduced into the vision system at this cal-
ibration will be carried into all following calibrations, the target should be as
accurate as possible. Commercially available targets can be acquired through
third parties such as Edmund Optics or Applied Image Inc.
Creating a Dot Target
Dot targets are commercially available, but you can also create your own targets by following
the guidelines provided below. The quality and precision of a grid of dots target have a direct
impact on the overall precision of your application.
Item Description
1 Dot Radius
2 Dot Pitch
Additional Information: The dot grid rows and columns must be aligned
with the field of view. Ideally after executing the Calibrate function, there
should be a uniform distribution of yellow and blue alternating dots. A
region without blue dots indicates the calibration is not sufficient in that
region to predict the location of the validation dots.
With the above concepts in mind, you can use the following steps to create specific dot
targets.
Measure the width or height of your FOV by using a ruler inside the camera image in live
mode to obtain the length of the FOV. Using the camera manual, take the matching pixel
resolution, then calculate the resulting mm-to-pixel ratio. With this information you can
identify the optimal dot pitch and dot radius for your target. As an example, with a
400 mm FOV in the horizontal direction and a camera with 1600 pixels in the horizontal
direction, the calculation for that direction would be:
400 mm ÷ 1600 pixels = 0.25 mm per pixel
A standard dot grid with a pitch of 10 mm would still be within the scope for accuracy.
Fixed-pixel calibration allows you to specify what physical distance is represented by each
camera pixel. All camera pixels will be given the same dimension, which is not necessarily the
case with a grid of dots. This method of camera calibration will not correct for lens distortion
or perspective.
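For example, with an illustrative fixed-pixel value of 0.25 mm per pixel, a feature spanning
40 pixels in the image corresponds to 40 × 0.25 = 10 mm.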
Acquisition Settings
Acquisition settings are used to view information about the camera and make other image
adjustments for vision devices used by the Virtual Camera.
When configuring a Virtual Camera that uses an Emulation Camera, the settings in this area
are limited to only gray scale conversion and image selection.
When using a Virtual Camera that uses a vision device such as a Basler camera, you can
make several adjustments to the image such as shutter, gain, and exposure along with other
camera related settings as described below.
NOTE: The settings in this area will vary depending on the vision device asso-
ciated with the Virtual Camera.
The Information tab displays the Model, Vendor, and Serial Number of the attached camera.
These fields are read-only.
Stream Format
The Stream Format tab lets you set the Pixel Format and Timeout value for the data being sent
from the camera.
The available pixel formats will be displayed in the drop-down box when you click the down-
arrow (the default selection is recommended).
The Timeout value sets a time limit in milliseconds, after which the vision tool terminates the
processing of an image. If the vision tool has not finished processing an image within the allot-
ted time, the tool returns all the instances it has located up to the timeout. Although Timeout
can be disabled, it is recommended that you use a Timeout value. This is useful for time-crit-
ical applications in which fast operation is more important than the occasional occurrence of
undetected object instances. This value is only approximate and the actual processing time
may be slightly greater.
Video Format
The Video Format tab lets you set Exposure, Gain, Black Level, and color balance.
Each line displays the minimum allowable value for that property, a bar indicating the current
value, the maximum allowable value, and the numeric value of the current level.
Some of the minimum and maximum values, particularly for Gain, will differ depending on
the camera being used.
Exposure Adjustment Considerations
The Exposure time setting determines the time interval during which the sensor is exposed to
light. Choose an exposure time setting that takes into account whether you want to acquire
images of still or moving objects. Adjust Exposure, Gain, and Black Level (in that order) to
improve the quality of acquired images with the following considerations.
• If the object is not moving, you can choose a high exposure time setting (i.e., a long
exposure interval).
• High exposure time settings may reduce the camera’s maximum allowed acquisition
frame rate and may cause artifacts to appear in the image.
• If the object is moving, choose a low exposure time setting to prevent motion blur. As a
general rule, choose a short enough exposure time to make sure that the image of the
object does not move by more than one pixel during exposure. A shorter exposure time
setting may require a higher illumination level.
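As an illustrative application of the one-pixel rule (the numbers are assumptions, not
recommendations): with a calibration of 0.25 mm per pixel and a belt moving at 250 mm/s,
the object crosses one pixel in 0.25 ÷ 250 = 0.001 s, so the exposure time should be no longer
than about 1 ms.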
NOTE: Acquisition parameters are validated before being sent to the camera. If
you enter an exposure time that your camera does not support, the time will be
adjusted to be valid. Rather than typing a value, you can also use the left and
right arrows to step through valid times.
Gain Adjustment Considerations
Gain is the amplification of the signal being sent from the camera. The readout from each pixel
is amplified by the Gain, so both signal and noise are amplified. This means that it is not pos-
sible to improve the signal-to-noise ratio by increasing gain. You can increase the contrast in
the image by increasing the camera’s gain setting.
Unless your application requires extreme contrast, make sure that detail remains visible in the
brightest portions of the image when increasing gain. Noise is increased by increasing gain.
Increasing gain will increase the image brightness. Set the gain only as high as is necessary.
Black Level Adjustment Considerations
Black Level is an offset, which is used to establish which parts of an image should appear
black. High black level settings will prevent high contrast. Make fine adjustments to the Black
Level to ensure that detail is still visible in the darkest parts of the acquired images.
Balance Red, Balance Green, and Balance Blue are only available if you have a color camera
connected. On some Basler color cameras, such as the A601fc-2, the green balance is a fixed
value that cannot be adjusted. In such cases, only the balance for blue and red will be enabled
in this window (Balance Green will be grayed out).
Trigger
The Trigger tab lets you enable an external trigger for taking a picture and set parameters that
pertain to that trigger.
Most applications will not use trigger mode and the image is taken when requested by the PC,
but some applications need to reduce latency in communication. In this type of situation, a trig-
ger signal would be wired directly to a camera input and trigger mode is enabled and con-
figured in the Virtual Camera. A V+ program would execute a VRUN command to execute a
Vision Sequence but instead of acquiring an image, it will create the image buffer and wait to
receive the image from the camera when it is triggered. A camera exposure active signal could
still be used for position latching if necessary.
Add an Emulation Camera
To add an Emulation Camera, right-click Cameras, select Add, and then click Emulation Cam-
era. The Create Emulator Device window will open.
Provide a name for the Emulation Camera and make a selection for creating a Virtual Camera
associated with this device. Then, click the Next button and use the Add button to load images
to be used with this device. Click the Finish button when all images are added and then the
Emulation Camera (and Virtual Camera) object will be added to the ACE project.
NOTE: After the Emulation Camera object is created, you can rename the new
Emulation Camera object by right-clicking the item and selecting Rename.
To access the Emulation Camera configuration, right-click the object in the Multiview Explorer
and then select Edit, or double-click the object. This will open the Emulation Camera editor in
the Edit Pane.
The Emulation Camera Configuration allows you to manage all images used for this device.
The order of the added images will be used in the associated Virtual Camera during sub-
sequent image acquisition.
Use the table below to understand the Emulation Camera configuration items.
NOTE: When adding images, image files must be of the same type (color vs.
monochrome) and the same size.
Item Description
A camera object stores information necessary for communicating with a physical camera, such
as the Device Friendly Name, Model, Vendor, and Device Full Name for identifying devices
when communicating through the drivers.
Adding a Camera
To add a camera object, right-click Cameras in the Multiview Explorer, select Add, and then
click either Basler Camera or Sentech Camera. The Create Device wizard will appear to step
through the addition procedure.
NOTE: After the Camera object is created, you can rename the new Camera
object by right-clicking the item and selecting Rename.
The camera configuration will vary depending on the camera type, Basler or Sentech, added to
the ACE project, but configuration follows the steps below.
1. Create a New Camera Object: This is where the user selects the camera that will be
linked to the camera object. As shown in Camera Object Configuration Editor (Basler
shown) on page 290, the wizard shows all accessible cameras to which the created
object can connect. Clicking the Preview button will display the current camera image.
The object name can also be adjusted. It is recommended to check the box next to
“Create a virtual camera”; otherwise, this will have to be done later. The box is checked
by default.
2. Ask Calibration: The wizard will ask the user to choose to calibrate now or later. Choos-
ing to calibrate later and clicking the Next button will skip the last three steps and close
the wizard.
3. Grid Instructions: This step displays the instructions for grid calibration. Refer to Grid
Calibration Method on page 283 for more information.
4. Camera Properties: The image from the camera is displayed in this step. The user can
click the Edit button to modify the camera settings.
5. Grid Calibration: The image is calibrated with the grid. Set the appropriate values in the
settings and click the Finish button to close the wizard.
Once the camera object has been created, it can be opened and edited like any other object in
the Multiview Explorer.
Custom Devices
The Custom Device object is a C# program that executes at the level of a camera object and can
be used to acquire an image from any camera device or external vision system. It can also be
used to manipulate image data from one Virtual Camera’s image buffer before input to a
second Virtual Camera that is linked to a Vision Sequence.
A C# program template will be created when the Custom Device is added to the ACE Project.
To add a Custom device, right-click Cameras, select Add, and then click Custom Device. A new
Custom Device will be added to the Multiview Explorer. Use this object to access the
C# program associated with the device.
NOTE: After the Custom device object is created, you can rename the new Cus-
tom device object by right-clicking the item and selecting Rename.
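Because the generated template follows the ACE Reference Guide API, the following is only a
standalone C# sketch of the kind of image-buffer manipulation such a program can perform;
the class and method names are illustrative and are not part of the ACE template.

    using System;

    // Standalone sketch only: not the generated ACE Custom Device template.
    // It models the kind of image-buffer manipulation a Custom Device can
    // perform between two Virtual Cameras: inverting a grayscale buffer.
    class ImageBufferSketch
    {
        static byte[] Invert(byte[] pixels)
        {
            var output = new byte[pixels.Length];
            for (int i = 0; i < pixels.Length; i++)
                output[i] = (byte)(255 - pixels[i]);   // invert each pixel
            return output;
        }

        static void Main()
        {
            byte[] acquired = { 0, 128, 255 };          // stand-in for an acquired image
            Console.WriteLine(string.Join(",", Invert(acquired))); // prints 255,127,0
        }
    }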
8.5 Configuration
Various configuration objects can be added to the ACE project for different control, setup, and
other functions. Use the information in this section to understand the functions of
the Configuration objects.
Controller Connection Startup
NOTE: Although the Controller Connection Startup function and the Auto Start
function share some similarities, the startup file for the Controller Connection
Startup function is stored on the PC whereas the Auto Start function files are
stored on the controller. This allows the Auto Start function to provide a head-
less (no PC required) operation of the application. Refer to Auto Start on page
200 for more information.
To add a Controller Connection Startup object, right-click Configuration, select Add, and then
click Controller Connection Startup. A new Controller Connection Startup object will be added
to the Multiview Explorer.
To access the Controller Connection Startup configuration, right-click the object in the Mul-
tiview Explorer and then select Edit, or double-click the object. This will open the Controller
Connection Startup editor in the Edit Pane.
Use the table below to understand the Controller Connection Startup configuration items.
Data Mapper
The Data Mapper provides a method to associate different data items within the ACE project.
For example, you can trigger a Process Manager object to run when a digital input signal turns
on. Any data items that are associated in the Data Mapper will be continuously checked while
the ACE project is open.
To add a Data Mapper object, right-click Configuration, select Add, and then click Data
Mapper. A new Data Mapper object will be added to the Multiview Explorer.
NOTE: After the Data Mapper object is created, you can rename the new Data
Mapper object by right-clicking the item and selecting Rename.
To access the Data Mapper configuration, right-click the object in the Multiview Explorer and
then select Edit, or double-click the object. This will open the Data Mapper editor in the Edit
Pane.
Use the Data Mapper Editor buttons to add, delete, and pause/run Data Mapper items.
Pausing a selected Data Mapper item will prevent it from executing. Click the Run button to
resume a paused Data Mapper item.
Data Mapping
When the Add button is clicked in the Data Mapper editor, an Edit Data Map Dialog Box will
open. This is used to create and edit Data Mapping items. The Data Mapping configuration is
described below.
To edit an existing Data Mapper item, double-click the item in the Data Mapper list to access
the Edit Data Map Dialog Box.
Additional Information: The Data Mapping input and output items that are
available depend on the objects present in the ACE project.
Evaluate as Conditional
When Evaluate as Conditional is selected, the Data Mapper interprets each input item as a
Boolean condition. If the value of the input item is 0, the condition is considered OFF. If the
value is non-zero, the condition is considered ON. If all items in the input list are ON, the
output condition is asserted. If any item in the input list is OFF, the output condition is not
asserted.
Additionally, when Evaluate as Conditional is selected, you can invert the expected value of
an input item. In that case, if the value is 0, the condition is considered to be ON.
Evaluate as Value
When Evaluate as Value is selected, the values of all input conditions are added together and
written to the output value.
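The following standalone C# sketch models the two evaluation modes described above; it
illustrates the documented behavior and is not ACE code.

    using System;
    using System.Linq;

    // Standalone sketch of the two Data Mapper evaluation modes.
    class DataMapEvaluationSketch
    {
        // Evaluate as Conditional: each input is treated as Boolean
        // (0 = OFF, non-zero = ON, optionally inverted); the output is
        // asserted only if every input is ON.
        static bool EvaluateAsConditional(double[] inputs, bool[] invert) =>
            inputs.Select((value, i) => invert[i] ? value == 0 : value != 0)
                  .All(on => on);

        // Evaluate as Value: the input values are summed into the output.
        static double EvaluateAsValue(double[] inputs) => inputs.Sum();

        static void Main()
        {
            double[] inputs = { 1.0, 5.0 };
            bool[] invert = { false, false };
            Console.WriteLine(EvaluateAsConditional(inputs, invert)); // True
            Console.WriteLine(EvaluateAsValue(inputs));               // 6
        }
    }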
Note Object
The Note object provides a means for creating documentation within an ACE project. Use this
object to create detailed notes for future reference.
To add a Note object, right-click Configuration, select Add, and then click Note. A new Note
object will be added to the Multiview Explorer.
NOTE: After the Note object is created, you can rename the new Note object by
right-clicking the item and selecting Rename.
Note Editing
To access the Note object for editing, right-click the object in the Multiview Explorer and then
select Edit, or double-click the object. This will open the Note editor in the Edit Pane.
OPC Container
The purpose of an OPC container is to provide a standardized infrastructure for the exchange
of process control data that accommodates different data sources, connections, and operating
systems.
OPC stands for Object Linking and Embedding (OLE) for Process Control. It uses Microsoft’s
Component Object Model (COM) and Distributed Component Object Model (DCOM) tech-
nology to enable applications to exchange data on one or more computers using a client/server
architecture.
OPC defines a common set of interfaces which allows various applications to retrieve data in
exactly the same format regardless of whether the data source is a PLC, DCS, gauge, analyzer,
software application, or anything else with OPC support. The data can be available through
different connections such as serial, Ethernet, or radio transmissions for example. Different
operating systems such as Windows, UNIX, DOS, and VMS are also used by many process
control applications.
The OPC protocol consists of many separate specifications. OPC Data Access (DA) provides
access to real-time process data. Using OPC DA you can ask an OPC server for the most recent
values of anything that is being measured, such as flows, pressures, levels, temperatures, dens-
ities, and more. OPC support in ACE software is limited to the DA specification.
For more information on OPC, please see the OPC Foundation website at the following URL:
https://www.opcfoundation.org
An OPC container can be configured for the following functions.
OPC Test Client
An ACE software installation provides an OPC test client. This is useful for testing the func-
tionality of the OPC Container configuration.
The OPC test client can be started by running the SOClient.exe file found in the following
default installation directory.
C:\Program Files\OMRON\ACE 4.X\OPC Test Client
Adding an OPC Container
To add an OPC Container object, right-click Configuration, select Add, and then click
OPC Container. A new OPC Container object will be added to the Multiview Explorer.
NOTE: After the OPC Container object is created, you can rename the new
OPC Container object by right-clicking the item and selecting Rename.
2. Click the Add button to select a Data Item. The Data Item Selection Dialog Box will
appear.
3. Select an item from the list and then click the Select button. The item will be added to
the OPC Container publish list in the Edit Pane.
4. Make a selection for the Read-Only option. If checked, an external OPC client cannot
write to the item. If the Read-Only option is not checked, the OPC client has access to
read and write the item's value.
5. Once all items have been added to the OPC Container publish list, the OPC client can
be configured as shown in the next steps. Run the SOClient.exe file found in the default
ACE installation directory (C:\Program Files\OMRON\ACE 4.X\OPC Test Client).
6. Select the OPC Server Tab and then expand the Local item to expose all Data Access
items.
7. Expand a Data Access item to show the Adept OPC Server item and then double-click it
to add it in the left window pane. You can also right-click the Adept OPC Server item
and then select Add Server.
8. Select the DA Browse tab and then right-click the server on the right window and select
Add Items for all Tags. This will add all associated tags to the server in the left window.
9. Select the DA Items tab to see the item's value and other related information.
The Quality column indicates that the communication with the OPC DA server worked
normally ("GOOD" indicates successful communications).
The TimeStamp column indicates the last update time of the tag.
10. Tags that were configured with the Read-Only selection unchecked can be modified
with the OPC client. To change the value from the OPC client, right-click the tag and
select Write. The dialog box below will appear. Enter a new value and then click
the Write button to change the value in the ACE application.
11. Values that have been updated from the ACE application can be verified using the read
function in the OPC client. Right-click the tag and select Read to update values in
the OPC client.
If the values are updating correctly, the configuration procedure is complete.
Use the Add button to add a new item to publish on OPC DA. Use the Delete button to
remove a selected item from the list.
Program System Startup
NOTE: Although the Program System Startup function and the Auto Start func-
tion share some similarities, the startup file for the Program System Startup func-
tion is stored on the PC whereas the Auto Start function files are stored on the
controller. This allows the Auto Start function to provide a headless (no PC
required) operation of the application. Refer to Auto Start on page 200 for more
information.
To add a Program System Startup object, right-click Configuration, select Add, and then click
Program System Startup. A new Program System Startup object will be added to the Multiview
Explorer.
NOTE: After the Program System Startup object is created, you can rename the
new Program System Startup object by right-clicking the item and selecting
Rename.
To access the Program System Startup configuration, right-click the object in the Multiview
Explorer and then select Edit, or double-click the object. This will open the Program System
Startup editor in the Edit Pane.
Recipe Overview
Manufacturing processes often require frequent product changeover resulting in the need for
changes to variable values, vision models, vision tool parameters, pallet layouts, motion para-
meters, process definitions, motion offsets, and more. The ACE software provides a Recipe
Manager that simplifies the complex process of saving and restoring large amounts of data to
minimize downtime during frequent product changeover.
There are three steps for recipe management in the ACE software, as described below.
Recipe Definition
Recipe definition involves selecting which objects will be the Sources of recipe data. Sources
are similar to ingredients of a traditional cooking recipe. A recipe will contain a copy of the
data for each Source. Recipes can only store data of objects that are defined as Sources in the
Recipe Manager edit pane. All other objects will have common parameters for all recipes.
When a recipe is created, it will contain a copy of the data that is currently present in the
Source objects. This can significantly reduce the number of objects that must be created and
maintained.
For example, consider a situation where a camera is used to locate a product to be packaged.
In this example, the system can process five different types of products, but only one product
type at a time. Rather than creating five Locator Models and five Locators, you would create
one Locator and one Locator Model, add each as a source, and create five recipes containing
the Model data and Locator parameters optimized for each product type. Alternatively, if two
types of product must be recognized by the same Locator, you could have two Locator Model
objects and include both as sources.
After the recipe Sources have been defined in the Recipe Manager Edit Pane, recipes can be cre-
ated in the Recipe Manager section of Task Status Control.
Task Status Control provides a Recipe Editor that can be used to edit the parameters of all
Source types that are commonly modified by operators. When a recipe is selected, the entire
ACE software interface becomes the editor for the active recipe. The Recipe Editor does not
provide an editor window for all Sources. For example, if a Process Manager is a source for a
recipe, it will not be visible in the Recipe Editor; however, the Process Manager edit pane can
be used to make modifications to all Process Manager parameters for the selected recipe.
Refer to Recipe Editor on page 310 for more information regarding creating, modifying, and
deleting recipes.
Recipe Selection
Recipe Selection is a single item selection process for applying the parameters stored in the
recipe to the Source objects.
Once all recipes have been defined and optimized, you may want to automate the recipe selec-
tion process so that it does not need to be performed from Task Status Control. This can be
achieved using a V+ program and ACE Sight, or a C# program.
When a Recipe is selected, the parameters saved in the Recipe are applied to the ACE project.
All V+ variables will be set to the corresponding values. All vision tool and feeder properties
will be copied into the appropriate sources in the ACE project.
Use the information in the following sections to understand the configuration of recipe man-
agement.
Recipe Manager
The Recipe Manager is used to define all sources that will be used when creating individual
recipes. You must add sources to the Recipe Manager before creating a Recipe.
The following objects in an ACE project can be used as sources for the Recipe Manager.
Item Description
V+ Variables Specify V+ Variables to be included in a recipe. You can identify how the vari-
able is displayed to the user and what access level a user will have.
Vision Tools All vision tools can be accessed with the Recipe Manager, except the following
tool types.
l Calculation Tools
l Image Process Tools
l Custom Tools
To add a Recipe Manager object, right-click Configuration, select Add, and then click Recipe
Manager. A new Recipe Manager object will be added to the Multiview Explorer.
NOTE: After the Recipe Manager object is created, you can rename the new
Recipe Manager object by right-clicking the item and selecting Rename.
Additional Information: You can drag and drop a Recipe Manager object into a
C# program. Refer to RecipeManager Topics in the ACE Reference Guide for more
information.
To access the Recipe Manager configuration, right-click the object in the Multiview Explorer
and then select Edit, or double-click the object. This will open the Recipe Manager editor in the
Edit Pane.
Sources
The Sources area displays a list of all active sources available for Recipe creation and editing.
Use the Add button ( ) and Remove button ( ) to build a list of sources for use in Recipe cre-
ation. The Up and Down buttons ( ) change the order of the items in the list and the order of
these items in the tabs of the Recipe Editor. Place frequently used items near the top of the list.
When a data source is added to the Sources list, you can select it to display the settings in the
configuration window. Configuration window options will vary based on the Source type selec-
ted. All Source types include settings for the following items.
V+ Variable Sources
When configuring a V+ Variable Source in the Recipe Manager, you must add individual vari-
ables in the configuration area. Use the Add button ( ) and Remove button ( ) to make a
list of V+ Variables that will be used in individual Recipes.
Each type of variable contains different properties that affect how the variable is presented to
you in the Recipe Editor. For example, you can define unique display names and access
levels. To access the V+ Variable Recipe attributes, make a Variable selection and then adjust
its properties as shown below.
V+ Variable values must be edited in the recipe component directly with the Recipe Editor or a
C# program. These values are used to initialize V+ Variables when the recipe is selected, but
the V+ variable values may change while the recipe is active (without it being stored in the
recipe).
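The following standalone C# sketch (not the ACE API) models the behavior described above:
selecting a recipe initializes the live V+ variables from the stored copies, after which the live
values may change without being written back to the recipe.

    using System;
    using System.Collections.Generic;

    // Standalone sketch of recipe selection semantics for V+ variables.
    class RecipeSelectionSketch
    {
        static void Main()
        {
            var liveVariables = new Dictionary<string, double> { ["speed"] = 50 };
            var recipeValues = new Dictionary<string, double> { ["speed"] = 80 };

            // Recipe selection: copy each stored value onto the live variable.
            foreach (var entry in recipeValues)
                liveVariables[entry.Key] = entry.Value;

            Console.WriteLine(liveVariables["speed"]); // 80

            // The live value can drift while the recipe is active...
            liveVariables["speed"] = 65;

            // ...but the stored recipe value is unchanged.
            Console.WriteLine(recipeValues["speed"]);  // 80
        }
    }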
Vision Tool Sources
Finder, Inspection, and Reader vision tools can be added to a Recipe configuration. For each
Recipe you create, a copy of the vision tool is saved with that Recipe.
When a Recipe that includes a vision tool is selected, the Recipe's copy is linked with the cor-
responding vision tool object in the ACE project. When a vision tool included in the Recipe
configuration is modified in the ACE project object, the selected recipe copy of the vision tool
is automatically updated. Likewise, when the vision tool is modified in the Recipe Editor, the
ACE project vision tool object is automatically updated. Because of this linking between the
Recipe and the ACE project object, you can configure a vision tool object and it will be saved
with the active Recipe.
The Recipe Editor will vary depending on the vision tool object in use. Typically, the Recipe
Editor exposes only a small subset of the vision tool object's properties. Refer to Recipe Editor
on page 310 for more information.
Recipe Script Selection
Select a Recipe Script object created with the Recipe Manager Script editor. Refer to Recipe Man-
ager Script on page 315 for more information.
ACE Sight Index Setting
The ACE Sight index setting defines the index used as the sequence_id when accessing the
recipe manager object from a V+ program. Refer to RecipeManager Properties in the ACE
Reference Guide for more information.
Recipe Editor
After the Recipe Manager object has been configured and all sources are defined, individual
Recipes can be created with the Recipe Editor.
The Recipe Editor can be accessed from the Task Status Control area. Refer to Task Status Control
on page 122 for more information. The Recipe Editor is described below.
Available Recipes Makes the highlighted Recipe in the Available Recipes list the active
Recipe.
1. Add a new recipe with the Add button ( ). A new Recipe will appear in the Available
Recipes list.
2. Select the recipe and then click the Edit button ( ). This will open the Recipe Editor win-
dow.
3. Select the General item in the Sources list and then input the general information and
settings about the Recipe.
Item Description
Index If the Use Custom Index option is selected, you can set a unique
index number. This is the index of the Recipe used when accessing
the Recipe through ACE Sight or with a C# program. Refer to the
4. Make any adjustments to other data source items for the currently selected Recipe and
then click the Apply button. When all data source items have been adjusted for that
Recipe, click the Close button. The Recipe creation procedure is complete.
V+ Variable Source
Each selected variable is displayed in a list. The display is changed as each variable is
selected based on the settings in the Recipe configuration.
You can see the currently trained locator model and can edit or retrain the locator
model.
The acquisition properties are displayed in a list. You can modify, add, or remove
acquisition settings as needed.
AnyFeeder Sources
Recipe Manager Script
NOTE: Most applications that use a Recipe Manager do not require a Recipe
Manager Script.
void BeforeEdit(Recipe recipe): If a Recipe can be edited, this method is called before the
editor is displayed.
void AfterEdit(Recipe recipe): This method is called after the Recipe Editor is closed.
void BeforeSelection(Recipe recipe): If a Recipe can be selected, this method is called before
the Recipe is selected.
To add a Recipe Manager Script object, right-click Configuration, select Add, and then click
Recipe Manager Script. A new Recipe Manager Script object will be added to the Multiview
Explorer.
NOTE: After the Recipe Manager Script object is created, you can rename the
new Recipe Manager Script object by right-clicking the item and selecting
Rename.
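As a minimal sketch using the method signatures listed above: the enclosing class scaffolding
is generated by the ACE software when the script is created, and the method bodies here are
illustrative placeholders only.

    // Minimal sketch of Recipe Manager Script methods. The signatures
    // match the table above; the bodies are illustrative placeholders.
    void BeforeEdit(Recipe recipe)
    {
        // Called before the Recipe Editor is displayed, for example to
        // check preconditions for the recipe about to be edited.
    }

    void AfterEdit(Recipe recipe)
    {
        // Called after the Recipe Editor is closed, for example to
        // validate or record the edited values.
    }

    void BeforeSelection(Recipe recipe)
    {
        // Called before the recipe is selected, for example to confirm
        // the cell is in a safe state for a product changeover.
    }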
l AnyFeeder - a feeder object that uses serial communications with the PC running ACE
software for sequence control and feedback purposes. This is typically controlled using
ACE Sight from a V+ program or from a Custom Vision tool.
l IO Feeder - a feeder object that uses discrete signals for control and feedback purposes.
This is typically used with a Process Manager object to indicate Part and Part Target
availability and can be associated in Control Sources for static Part and Part Target
sources.
Both types of feeder objects can be configured in the Application Manager device of an
ACE project as described in the following sections.
AnyFeeder Object
AnyFeeder objects represent an integrated parts feeding system optimized to work together
with vision, motion, and robots. AnyFeeder objects can be added to provide control and con-
figuration of the parts feeder in the ACE project.
NOTE: When Emulation Mode is enabled, all Feeder Function durations are
emulated. Durations for error reset, initialization, operation abort, or firmware
restart are not emulated because these operations are not intended to be reques-
ted during a feed cycle.
To add an AnyFeeder object, right-click Feeders, select Add, and then click AnyFeeder. The
Create New AnyFeeder wizard will open.
Make selections for the model type, position in the workspace, and motion sequences in the
Create New AnyFeeder wizard. Click the Finish button after completing all steps and then the
AnyFeeder object will be added to the Multiview Explorer. Refer to the sections below for more
information about wizard configuration items.
NOTE: After the AnyFeeder object is created, you can rename the new
AnyFeeder object by right-clicking the item and selecting Rename.
AnyFeeder Configuration
To access the AnyFeeder configuration, right-click the object in the Multiview Explorer and
then select Edit, or double-click the object. This will open the AnyFeeder editor in the Edit
Pane.
Use the information below to understand the AnyFeeder configuration items.
Configuration Items
The Configuration tab contains the following items used for general configuration settings.
3D Display Model Type Select the AnyFeeder Model type from the
dropdown selection menu.
The Standard Controls tab contains the following items used for manually controlling
the AnyFeeder device. The buttons are used to control the AnyFeeder device as described
below.
The Motion Sequences tab shows a listing of high-level motion sequences associated with the
AnyFeeder device. You can define a sequence as a collection of individual feeder functions.
When a sequence is executed, all the operations are performed in the order defined in this area.
Motion sequences can be triggered through the AnyFeeder user interface, a C# program, or
with a V+ program. Motion sequences are stored as Command Index numbers between 1000
and 10000. The motion sequence is referenced with this number in C# and V+ programs.
Sequences and sequence steps can be removed using the Delete buttons ( ).
1. Add a new sequence with the Add button ( ). A new sequence will be placed in the
sequence list.
4. Select the action from the Selected Operation list and then make any necessary adjust-
ments to Iterations and Speed.
5. Repeat steps 4 through 8 to add more actions to the sequence as needed.
6. Click the Play button ( ) to execute the sequence as a test. This will cause the con-
nected AnyFeeder to move and execute the sequence. If the sequence executes correctly,
the procedure is complete.
Log Items
The log page shows a history of the communications between the AnyFeeder device and the
PC.
IO Feeder Object
IO Feeder objects represent generic feeder devices that are controlled with input and output sig-
nals from a connected SmartController.
To add an IO Feeder object, right-click Feeders, select Add, and then click IO Feeder. A
new IO Feeder object will be added to the Multiview Explorer.
NOTE: After the IO Feeder object is created, you can rename the new IO Feeder
object by right-clicking the item and selecting Rename.
To access the IO Feeder configuration, right-click the object in the Multiview Explorer and then
select Edit, or double-click the object. This will open the IO Feeder editor in the Edit Pane.
Use the table below to understand the IO Feeder configuration items.
Item Description
Start Button Used to perform one test cycle of the feeder. The operation stops
when the cycle has completed or if the Stop button is clicked before
the end of the cycle. To repeat or restart the cycle, click the Start
button again.
When clicked, this button dims until the Stop button is clicked
(feeder test cycle has been interrupted) or the cycle has completed.
Stop Button Stops (interrupts) the test cycle. The test cycle can be restarted by
clicking the Start button.
This icon is dimmed until the Start button is clicked (feeder test
cycle has started).
Status The Status item provides operation and error information about the
IO Feeder object.
Controller Specifies the controller that will process the feeder signals. Click the
Select icon ( ) to display the list of available controllers and then
select a reference controller from the list.
Feeder Ready Input Specifies the input signal that indicates the feeder is ready and avail-
able to present a part instance.
When Emulation Mode is enabled, this signal is ignored, but soft sig-
nals can be substituted for testing purposes.
Part Processed Output Specifies the output signal that indicates the instance has been pro-
cessed (acquired) by the robot. The feeder should cycle and present a
new part instance.
When Emulation Mode is enabled, this signal is ignored, but soft sig-
nals can be substituted for testing purposes.
Use Handshake Input If enabled, the feeder will assert a signal indicating it has acknow-
ledged the part processed signal.
When Emulation Mode is enabled, this signal is ignored, but soft sig-
nals can be substituted for testing purposes.
Use Custom Program The feeder interface code runs as a V+ program on the specified con-
troller. This program can be overwritten if some custom logic needs
to be applied.
If you enable this option, you must use the Select icon ( ) to
select the custom program. A custom V+ program selection wizard
will appear to step through the procedure.
Part Processed Output Dwell Specifies the dwell time (time to wait) in milliseconds after the
Part Processed output signal is turned ON before turning it OFF.
Debounce Time Specifies the amount of time (in milliseconds) that a signal must be
detected in the ON state before it is considered logically ON.
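The debounce rule can be modeled with the following standalone C# sketch (not ACE code):
the raw signal must remain ON continuously for the debounce time before the logical state is
considered ON.

    using System;

    // Standalone sketch of the debounce rule described above.
    class DebounceSketch
    {
        static DateTime? onSince;

        static bool Sample(bool rawOn, DateTime now, TimeSpan debounce)
        {
            if (!rawOn)
            {
                onSince = null;          // any OFF sample resets the timer
                return false;
            }
            onSince ??= now;             // remember when the signal went ON
            return now - onSince >= debounce;
        }

        static void Main()
        {
            var t0 = DateTime.Now;
            var debounce = TimeSpan.FromMilliseconds(50);
            Console.WriteLine(Sample(true, t0, debounce));                     // False: just turned ON
            Console.WriteLine(Sample(true, t0.AddMilliseconds(60), debounce)); // True: held ON past 50 ms
        }
    }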
Process objects listed below are described in detail in the following sections.
Part Buffer
The Part Buffer object defines an overflow area where parts can be temporarily
stored when part targets are unavailable to accept more parts.
Refer to Part Buffer Object on page 328 for more information.
Part Target
The Part Target object defines a possible destination for a part.
Refer to Part Target Object on page 332 for more information.
Part
The Part object defines an object to be picked and processed to an available Part Target.
Refer to Part Object on page 337 for more information.
Belt
The Belt object defines a physical conveyor belt used by the system. The Belt object maintains a
list of encoders that are associated with the physical conveyor.
Refer to Belt Object on page 342 for more information.
Process Manager
Process Managers are the central control point for developing packaging applications. A Pro-
cess Manager allows you to create complex applications without having to write any pro-
gramming code. It provides access to automatically generated V+ and C# programs that allow
you to customize the default behavior to meet the requirements of your application, if neces-
sary.
The Process Manager run-time handler is the supervisory control for the entire packaging sys-
tem, managing allocation and queuing for multiple controllers, robots, conveyors, parts, and
targets.
Refer to Process Manager Object on page 354 for more information.
Allocation Script
The Allocation Script object is used to create and edit custom part-allocation programs for use
with the Process Manager.
Refer to Allocation Script Object on page 408 for more information.
Pallet
The Pallet object defines the layout of a pallet, which can be used to pick parts from or place
parts to.
Refer to Pallet Object on page 409 for more information.
Vision Refinement Station
The Vision Refinement Station object defines an object that is used to refine the part-to-gripper
orientation for improved placement accuracy.
Adding a Part Buffer Object
To add a Part Buffer object, right-click Process, select Add, and then click Part Buffer. A new
Part Buffer object will be added to the Multiview Explorer.
NOTE: After the Part Buffer object is created, you can rename the new Part
Buffer object by right-clicking the item and selecting Rename.
To access the Part Buffer configuration, right-click the object in the Multiview Explorer and
then select Edit, or double-click the object. This will open the Part Buffer editor in the Edit
Pane.
The Configuration drop-down list box is used to specify how the Part Buffer is used by the sys-
tem. Static: Fixed Position is the only option for this item. This means parts are placed at a
static location.
NOTE: This object does not support dynamic Part Buffers by default and oper-
ates under the assumption that parts placed in the buffer will be available in the
same position when accessed later. Be sure to consider the physical state of parts
in the buffer when a Process Manager is stopped and restarted.
Pallet Properties
Use the Pallet Properties area if you need to specify a pallet that is used to hold the part(s).
Make reference to a Pallet object by checking Pallet and then use the Select button to specify
that object as shown below.
NOTE: A Pallet object must already exist in the project. Refer to Pallet Object on
page 409 for more information.
Use the Shape Display to specify a shape to represent the Pallet in the 3D Visualizer when the
Process Manager runs. Select the Shape check box and then use the Select button to specify a
shape.
Shape Display (Part)
Use the Shape Display to specify a shape to represent the Part in the 3D Visualizer when the
Process Manager runs. Select the Shape check box and then use the Select button to specify a
shape.
Use the following procedure to add a Part Buffer to a Process after creating and configuring
the Part Buffer object.
NOTE: The default Process Strategy can choose the appropriate process based
on part / target availability.
1. Create a Process that includes a Robot, a Part as the Pick Configuration, and a Part Tar-
get as the Place Configuration.
2. Create a second Process that includes the original Robot and Part, and select the Part
Buffer as the Place Configuration.
3. Create a third Process that includes the original Robot and Part Target, and select the
Part Buffer as the Pick Configuration.
4. Use the Up and Down buttons in the Process Manager Processes area to arrange the
processes by priority. The process at the top of the list has the highest priority.
5. Set the Part Buffer access order. After the Part Buffer access order is set, the procedure is
complete.
When a pallet is used for the part buffer, you need to specify how the parts will be accessed as
the buffer is being emptied. You can choose between the following options.
l First In, First Out (FIFO): The first part placed into the part buffer will be the first part
removed.
l Last In, First Out (LIFO): The last part placed into the part buffer will be the first part
removed.
NOTE: When parts are stacked (more than one layer is specified for the pallet),
the access order must be set as LIFO.
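The difference between the two access orders can be sketched in standalone C# (not ACE code)
as follows.

    using System;
    using System.Collections.Generic;

    // Standalone sketch of the two Part Buffer access orders.
    class PartBufferAccessOrderSketch
    {
        static void Main()
        {
            // FIFO: the first part placed into the buffer is removed first.
            var fifo = new Queue<string>();
            fifo.Enqueue("part1");
            fifo.Enqueue("part2");
            Console.WriteLine(fifo.Dequeue()); // part1

            // LIFO: the last part placed into the buffer is removed first.
            // Required when parts are stacked in layers, because only the
            // top part of a stack is reachable.
            var lifo = new Stack<string>();
            lifo.Push("part1");
            lifo.Push("part2");
            Console.WriteLine(lifo.Pop());     // part2
        }
    }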
Use the following procedure to set the Part Buffer access order. Refer to Control Sources on
page 390 for more information.
1. In the Control Sources setting area, select the Static Sources For Part Buffer item from
the Sources list (if the Part Buffer object has been renamed, select the corresponding
item).
2. If necessary, select the desired Buffer Initialization Mode to indicate the state of the part
buffer when it is initialized. The default state is Empty, which means the buffer is
empty when initialized.
3. Select the required Access Order from the list and then the Part Buffer access order set-
ting procedure is complete.
Part Target Object
To add a Part Target object, right-click Process, select Add, and then click Part Target. A new
Part Target object will be added to the Multiview Explorer.
NOTE: After the Part Target object is created, you can rename the new Part Tar-
get object by right-clicking the item and selecting Rename.
To access the Part Target configuration, right-click the object in the Multiview Explorer and
then select Edit, or double-click the object. This will open the Part Target editor in the Edit
Pane.
The Configuration drop-down list box is used to specify how the target is input to the system.
The options are described below.
l Belt: Latching, belt camera, or spacing interval - targets / instances are located on a con-
veyor belt using latching or fixed-spacing. Vision and / or a pallet may be included in
the part delivery system. Refer to Belt: Latching, Belt Camera, or Spacing Interval Con-
figuration on page 335 for more information.
l Static: Fixed position - targets / instances are acquired from a static location such as a
part feeder or a pallet. Refer to Static: Fixed Position Configuration on page 336 for more
information.
l Vision: Fixed camera not relative to a belt - locations are acquired through a camera
that is not located over a belt. Refer to Vision: Fixed camera not relative to belt on page
336 for more information.
NOTE: If the part is supplied on a belt with a camera, the Belt: Latching, belt
camera, or spacing interval option must be selected.
Pallet Properties
Use the Pallet Properties area if you need to specify a pallet that is used to hold the part(s).
Make reference to a Pallet object by checking Pallet and then use the Select button to specify
that object as shown below.
NOTE: A Pallet object must already exist in the project. Refer to Pallet Object on
page 409 for more information.
Use the Shape Display to specify a shape to represent the Pallet in the 3D Visualizer when the
Process Manager runs. Select the Shape check box and then use the Select button to specify a
shape.
Shape Display
Use the Shape Display to specify a shape to represent the Part in the 3D Visualizer when the
Process Manager runs. Select the Shape check box and then use the Select button to specify a
shape.
Belt: Latching, Belt Camera, or Spacing Interval Configuration
When Belt is selected for the Part Target configuration, the operation mode can be Vision,
Latch, or Spacing.
Belt Properties
This area is used to select the mode of the belt that is used to handle the part. You also specify
other details related to the belt mode selection in this area. Use the information below to make
appropriate selections.
Item Description
Belt / Encoder Select the encoder from a list of available Process Belt Encoders.
This will populate the associated Belt object automatically.
Vision Tool (Vision Mode) Select the vision tool used to detect the part on the belt.
Spacing (Spacing Mode) Specify the spacing in millimeters between targets / instances
on the conveyor belt.
Static: Fixed Position Configuration
When Static is selected for the Part Target configuration, the Part Target is in a fixed position.
There are no additional settings to configure with this selection.
Vision: Fixed Camera Not Relative to Belt
When Vision is selected for the Part Target configuration, a vision tool must be specified in the
Vision Properties area.
Vision Properties
This area is used to select the vision tool and optionally, the named instance that is used to
acquire the part position.
You can optionally specify a named instance and then select a Model or enter a custom result
name.
Part Object
The Part object defines a physical object that is input to the application for processing. The Part
has a configuration property that specifies how the target / instance is input to the application.
Adding a Part Object
To add a Part object, right-click Process, select Add, and then click Part. A new Part object will
be added to the Multiview Explorer.
NOTE: After the Part object is created, you can rename the new Part object by
right-clicking the item and selecting Rename.
Part Configuration
To access the Part configuration, right-click the object in the Multiview Explorer and then
select Edit, or double-click the object. This will open the Part editor in the Edit Pane.
The Configuration drop-down list box is used to specify how the part is input to the system.
The options are described below.
Pallet Properties
Use the Pallet Properties area if you need to specify a pallet that is used to hold the part(s).
Make reference to a Pallet object by checking Pallet and then use the Select button to specify
that object as shown below.
NOTE: A Pallet object must already exist in the project. Refer to Pallet Object on
page 409 for more information.
Use the Shape Display to specify a shape to represent the Pallet in the 3D Visualizer when the
Process Manager runs. Select the Shape check box and then use the Select button to specify a
shape.
Shape Display
Use the Shape Display to specify a shape to represent the Part in the 3D Visualizer when the
Process Manager runs. Select the Shape check box and then use the Select button to specify a
shape.
Belt: Latching, Belt Camera, or Spacing Interval Configuration
When Belt is selected for the Part configuration, the operation mode can be Vision, Latch, or
Spacing.
Belt Properties
This area is used to select the mode of the belt that is used to handle the part. You also specify
other details related to the belt mode selection in this area. Use the information below to make
appropriate selections.
Item Description
Belt / Encoder Select the encoder from a list of available Process Belt Encoders.
This will populate the associated Belt object automatically.
Vision Tool (Vision Mode) Select the vision tool used to detect the part on the belt.
Spacing (Spacing Mode) Specify the spacing in millimeters between parts on the con-
veyor belt.
Static: Fixed Position Configuration
When Static is selected for the Part configuration, the Part is in a fixed position. There are no
additional settings to configure with this selection.
Vision: Fixed Camera Not Relative to Belt
When Vision is selected for the Part configuration, a vision tool must be specified in the Vision
Properties area.
Vision Properties
This area is used to select the vision tool and optionally, the named instance that is used to
acquire the part position.
You can optionally specify a named instance and then select a Model or enter a custom result
name.
Belt Object
A Belt object represents a physical conveyor belt in the workcell or packaging line. A belt may
be tracked by multiple robots that may be controlled by a single or multiple controllers. Belts
may also be related to multiple part or part target objects.
The Belt object provides settings for Active Control, Emulation Mode behavior, workspace pos-
itioning for 3D visualization and multi-robot allocation order, and a list of associated belt
encoder inputs for related controllers. This section will describe how these settings are used.
To add a Belt object, right-click Process, select Add, and then click Belt. The new Belt object wiz-
ard will open.
NOTE: Controllers to be associated with the new Belt object must be online
before adding the Belt object.
NOTE: After the Belt object is created, you can rename the new Belt object by
right-clicking the item and selecting Rename.
Belt Configuration
To access the Belt configuration, double-click the object in the Multiview Explorer or right-click
it and select Edit. This will open the Belt editor in the Edit Pane.
The Belt Control Menu items provide the following functions that can be used as necessary
while configuring a Belt object.
Selecting this menu item will display a wizard used to view the operation of the belt encoder.
Select a belt encoder from the list and then activate the belt and observe the values. These val-
ues should change when the belt moves. You can use the belt controls in the wizard if Active
Control is enabled or you are in Emulation mode.
Test Latch Signal
Selecting this menu item will display a wizard used to view the operation of the encoder latch
signal. Select an encoder latch you want to test, click the Next button, and then use the Pulse
button to ensure the Latch Count increments correctly.
NOTE: A Belt Encoder Latch must be configured and the latch must be enabled
in the Encoders section to make the latch signal appear in the Test Latch Signal
wizard. Refer to Configure Belt Encoder Latches on page 195 for more inform-
ation.
Active Control
Active Control can be configured to control the belt during the calibration and teaching pro-
cess. It can also optionally be set to control the belt during run time based on part or part tar-
get instance location.
If the conveyor belt can be controlled by SmartController digital I/O signals, enable Active Con-
trol, select the controller, and enter the appropriate signal numbers in the Drive Output fields.
Typically, if the Process Manager can stop the belt without affecting upstream or downstream
processes, then the controller of the robot positioned farthest downstream is selected to control
the belt. This robot is usually selected to provide all robots the opportunity to process the part
or part target instance, and if an instance is not processed by any robot, the belt can be stopped
to ensure all of them are processed. Refer to Process Strategy Belt Control Parameters on page
400 for more information.
Physical Parameters
The Physical Parameters are generally set when creating the Belt object, but they can be mod-
ified as needed when the Process Manager is not executing. The Workspace Location, Width,
and Length of the Belt object should be configured to closely approximate the position of the
physical belt relative to robots and other workcell hardware. The accuracy of robot belt track-
ing behavior is dependent on the belt calibration and scale factor, not on the location of the
Belt object. However, the Belt object shown in the 3D Visualizer provides a graphical rep-
resentation of the conveyor and is used to understand the relative position of multiple robots
along the belt for the purposes of instance allocation.
NOTE: In Emulation Mode it is common to set the Belt object position before per-
forming calibrations so that it can be used as a visual reference. On physical sys-
tems it is common to refine the position of the Belt object after performing belt
calibrations.
The Belt object displays three red arrows in the 3D Visualizer at one end of the belt that
represent the direction of belt travel. These red arrows must align with the white arrows that
indicate the direction of belt travel for each belt calibration.
Figure 8-118 shows examples of correct and incorrect belt direction of travel. Notice the lower
belt has three red arrows on the right oriented in the opposite direction of the white arrows in
the calibrations, which is incorrect. As a result, the parts within the red box are not allocated
to the upstream robot, because the Belt object indicates parts should be allocated to the
downstream robot first. The downstream robot then processes parts while the upstream robots
do not pick available instances.
Incorrect belt direction of travel can be corrected by rotating the roll angle of the Belt object
180 degrees to align the direction of belt travel between the Belt object and the associated belt
calibrations, and by adjusting the X and Y position to accurately represent the physical belt in
the system. Once corrected, both robots can process instances as expected: any instances not
processed by the upstream robot can be processed by the downstream robot, as shown in
Figure 8-119.
The Belt object provides an interface for defining Encoders and Encoder Associations. A single
Belt object may be used as a belt source for multiple part and part target types, and also may
be associated with belt encoder inputs for multiple controllers.
Belt object Encoders are effectively virtual encoders for the purpose of constructing tracking
structures and allocation limits independently for each part or part target object type.
Encoder Associations are used to understand the physical belt encoder and latch signal inputs
that are wired to each robot controller that tracks the belt. Depending on the system con-
figuration, you may need to configure Encoders, Encoder Associations, or both. This section
will describe the different situations that are supported.
Consider the system configuration shown in the following figure with two robots and two con-
trollers (one controller per robot). The lower belt is the part picking belt with two belt-relative
cameras locating two different part types (camera one locates Part1 and camera two locates
Part2). The upper belt is the part placing belt with two latch sensors used for generating two
different part target type instances. A belt encoder input for each belt is required for each
controller.
(Figure: two-belt layout with SmartController0 and SmartController1, each controlling one
robot (Robot 1 and Robot 2); the numbered callouts are identified in the table below.)
Item Description
1 Place Belt
2 Pick Belt
5 Latch Signal 1002
7 Belt Encoder Inputs
8 Camera 2
9 Camera 1
With the configuration shown in the figure above, Belt Encoder Latches are configured as
shown below, assuming a rising edge for each part and part target detection.
In the Belt object, a virtual encoder is needed for each part object (Part1 and Part2). Each
virtual encoder will have an encoder association with the corresponding Belt object Encoder
Channel of each controller, and the latch signal of the corresponding camera detecting that
part object type, as shown in Figure 8-122 and Figure 8-123 below.
Part1 and Part2 configurations include references to their respective virtual encoders and vis-
ion tool for location as shown in the figure below.
The separate virtual encoders are necessary to support independent tracking structures for the
different part and part target types, including individual belt calibrations and allocation limits.
The separate encoder associations and latch signals allow Pack Manager applications to man-
age capturing and storing latch signals and encoder positions of every instance for each con-
troller. When an instance is not processed by the upstream robot and is reallocated to the
downstream robot, the latch position reference is automatically changed from the SmartCon-
troller0 encoder latch position to the SmartController1 encoder latch position.
Alternatively, the ACE software supports configurations that require multiple cameras to be rel-
ative to a single robot-to-belt calibration and set of allocation limits as shown in Figure 8-128
and Figure 8-129 below, where Part1 and Part2 are configured to use a single virtual encoder.
This is achieved by defining multiple latch signals for specific encoder associations and con-
figuring the corresponding robot-to-sensor belt camera calibrations to use specific latch signal
numbers. This is only valid for belt cameras; for example, multiple latch signals are not
allowed for a belt-latch configuration, such as locating multiple pallets with different latch
signals. For latch and spacing calibrations, use multiple virtual encoders.
Use the following procedure to configure multiple cameras relative to a single robot-to-belt cal-
ibration.
1. Open the Belt object and verify that you have associated a controller with the belt
encoder.
2. Specify the latch signals used by each camera in the Signal Number field. Multiple sig-
nals must be separated by a space, as shown in the following figure.
3. Locate the Robot-to-Sensor belt camera calibrations in the Process Manager. Select a
Sensor Calibration and click the Edit button.
4. Expand the Belt Latch Configuration parameter group, uncheck Use Default, and then
enter the desired latch signal number associated with that camera.
5. Repeat steps 3 and 4 for the other belt camera calibrations and corresponding latch sig-
nals. Once all belt camera calibrations and latch signals are accounted for, the pro-
cedure is complete.
Process Components
This section describes the Process components, which are accessed from the Process Manager
object. The other application components, such as robots, grippers (end effectors), controllers,
and vision tools are described in other sections of this documentation.
Process Pallets
The Process Pallet object is used to define the layout and location of a pallet. The pallet can be
in a static position or it can be located on a conveyor belt. The pallet can use a traditional row
and column part layout or use a radial part layout.
The Belt object defines a conveyor belt used by the system. The Belt object maintains a list of
encoders that are associated with the conveyor. The Belt Encoder defines the mm/count ratio of
the encoder. The Belt Encoder Controller Connection maintains a list of controllers that the
encoder is electrically connected to. The controller connection can also specify controller
A Part object defines a part that is input for processing. The Part object has a Configuration
drop-down list box that is used to specify how the part is input to the system.
A Part Target object defines a possible destination for a part. The possible configurations for a
Part Target object are the same as for a Part object. Depending on the selected configuration,
additional information can be defined when configuring the Part / Part Target as described
below.
The Part and Part Target configuration defines the part pick or place requirements. The fol-
lowing options are available.
Part and Part Target Configuration: Belt
Description: The part is picked from or placed onto a conveyor belt. It may use latching, a
camera, or a spacing interval to determine the position. A pallet is optional.
Details: A belt and encoder must be specified for use with the Part / Part Target. Then, a Belt
Mode is defined that describes how the part is related to the belt. Additional information is
required based on the selected Belt Mode.

Part and Part Target Configuration: Vision
Description: The part pick or place process requires a fixed-location camera. There is no belt
used. A pallet is optional.
Details: A vision tool is specified that is used to locate the part. For example, this could be an
inspection tool that filters instances based on some criteria. Additionally, the vision properties
can be configured to filter the vision results based on a part name. This will most likely be
associated with a named part returned from a locator model.
Pallets
Pallet is an optional parameter that specifies that the parts are acquired from a pallet or
placed into a pallet. This optional parameter can be used in conjunction with a Belt, Static, or
Vision configuration. It is important to note that when used with a Vision or Belt configuration,
the vision or belt is configured to locate the origin of the pallet, not the parts in the pallet.
Part Process
A Part Process identifies a robot that can pick a Part or collection of Parts and place it at a Part
Target or collection of Part Targets. The Process Manager is responsible for processing Parts
input to the system and routing them to a Part Target. To do so, it maintains a Part Process
list. The Process Manager examines the list of Part and Part Targets associated with the Part
Processes defined by the user. It will generate a list of Calibration objects, which are displayed
to the user, as follows.
l Robot-to-Belt Calibration
l Robot-to-Belt-Camera Calibration
l Robot-to-Belt-Latch-Sensor Calibration
l Robot-to-Fixed-Camera Calibration
l Robot-to-Belt-Spacing-Reference Calibration
Each calibration object relates the robot to another hardware element in the application.
The Part Process defines a possible processing scenario for handling a Part. The Process
Strategy is responsible for deciding which robots will process which parts. This is done using
the list of Part Process objects as a guide to the valid combinations.
If a Part or Part Target is a pallet, then the Part Process object allows for a Pallet Exclusion Con-
figuration to be defined. The user can limit the pallet positions that can be accessed by the
robot in this configuration.
Motion Information
After a collection of Part Processes is defined, the Process Manager scans the collection to
determine what additional configuration data is needed to properly drive the process. Some
examples are listed below.
Sources
A Source is an object that interacts with the hardware and discovers Part Instances and Part
Target Instances. The Process Manager analyzes the configuration of Part Processes in order to
determine what Sources are needed to control the hardware. A Source is allocated for each of
the following conditions.
Part Instance and Part Target Instance objects get allocated to a controller for processing by a
robot. The Process Manager uses the Process Strategy to identify that allocation.
NOTE: The Process Manager knows if a Part / Part Target instance was pro-
cessed, not processed, or if an error happened during processing, such as a grip
error. If a grip error occurs, that instance will not be transferred to the next robot
and will be counted as not processed in the statistics.
Process Handler
When the Process Manager executes a process, it relies on a series of internal objects to man-
age the interaction with the hardware. The Process Manager is responsible for organizing and
containing the information that is going to be processed. The Process Handler is responsible
for using that information and managing the execution of the process.
In general, the run-time operation of the Process Manager will use the Part Process information
to locate Part Instances and Part Target Instances. Internally, the Process Handler maintains a
collection of internal objects that are responsible for interacting with individual elements of the
hardware. These objects fall into two categories: objects that generate Part / Part Target
Instances and objects that can process the instances.
Controller Queue
The Controller Queue represents a controller with associated robots that can pick from Part
Instances and place to Part Target Instances.
The Controller Queue communicates with the Queue Manager task that manages the collection
of parts to be processed by the robots on a given controller. The Controller Queue receives noti-
fication as the controller processes the instance information. The Controller Queue also mon-
itors for functionality or capacity issues with the robots connected to the controller. It notifies
the Process Manager through an event in the case that the controller is unable to process the
items in its queue within a given time-frame.
The time-frame is based on the belt speed and location of the parts on the belt given the
upstream / downstream limits of the individual robots. The Controller Queue maintains state
information regarding its ability to accept parts. This information is used by the Process
Strategy when determining how to allocate parts.
The Controller Queue also maintains statistics that are captured for a certain number of cycles,
such as idle time, processing time, and parts / targets processed per minute. This information
is available to you and may be used in the allocation of Part Instances.
Line Balancing
A Process Strategy is invoked to determine how to allocate the Part / Part Target Instances iden-
tified by the Process Manager. It uses the list of Part Processes to allocate the instances to spe-
cific robots. The output of this process is passed to the Controller Queue object by the Process
Manager.
Each Process Strategy operates under certain assumptions based on the process being mon-
itored. Those assumptions determine which algorithms are used to perform the allocation.
The default process strategy is predicated on certain assumptions about how robots will handle Parts and Part Targets. For part processing, the overflow from an upstream robot is passed to the next downstream robot on the same controller. In other words, the first robot along the conveyor picks all parts it is capable of picking. Any part it cannot pick is picked by the next robot in the line. This pattern repeats for all robots in the line.
For the processing of Part Targets, any targets configured as a latched Pallet will be passed
from robot to robot, allowing each one to fill the slots with parts as defined by the Process
Strategy.
There is no logic function that tries to optimize the allocations of parts or targets. The Process
Strategy simply requests that each robot process as many Parts and Part Targets as possible,
and remaining parts are passed to the next robot.
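The overflow pattern can be pictured as a single pass down the line. The following sketch is illustrative only (hypothetical types, not the ACE Process Strategy API): each robot fills its queue from the remaining parts and hands the rest downstream.

```csharp
using System.Collections.Generic;

// Hypothetical stand-in for a robot and its part queue.
public sealed class RobotQueue
{
    public string Name { get; }
    public int Capacity { get; }
    public List<int> Queue { get; } = new List<int>();
    public RobotQueue(string name, int capacity) { Name = name; Capacity = capacity; }
}

public static class OverflowAllocator
{
    // Walks the line upstream-to-downstream; each robot takes what fits in
    // its queue, and whatever remains passes to the next robot. Parts left
    // over at the end of the line are counted as not processed.
    public static List<int> Allocate(IEnumerable<int> partIds, IReadOnlyList<RobotQueue> line)
    {
        var remaining = new Queue<int>(partIds);
        foreach (var robot in line)
            while (remaining.Count > 0 && robot.Queue.Count < robot.Capacity)
                robot.Queue.Add(remaining.Dequeue());
        return new List<int>(remaining);
    }
}
```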
There are user-defined parameters that control this Process Strategy, as described below.
l Robot Parameters: used to specify the queue size for the robot.
l Belt Window Parameters: used to set part-processing filters, which help to optimize
cycle time.
l Belt Control Parameters: used to set conveyor belt on / off and speed controls, which
can dynamically adjust the part flow to the robot.
These parameters are available in the Process Strategy editor.
Custom Process Strategy
If required, the system allows you to define your own Process Strategies using C# within the
application.
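The exact C# interface is defined by the ACE software and is not reproduced here. Purely as a hedged illustration, a custom strategy boils down to a function from instance and robot state to an allocation decision; all names below are hypothetical.

```csharp
using System.Collections.Generic;

// Hypothetical shapes for the data a strategy would consult.
public sealed record PartInstance(int Id, double BeltPositionMm);
public sealed record RobotState(int Index, int QueueCount, int QueueCapacity, bool Accepting);

public interface IAllocationStrategy
{
    // Return the index of the robot to allocate the part to, or -1 for none.
    int Allocate(PartInstance part, IReadOnlyList<RobotState> robots);
}

// One possible policy: first robot (upstream to downstream) with queue room.
public sealed class FirstAvailableStrategy : IAllocationStrategy
{
    public int Allocate(PartInstance part, IReadOnlyList<RobotState> robots)
    {
        foreach (var r in robots)            // ordered upstream to downstream
            if (r.Accepting && r.QueueCount < r.QueueCapacity)
                return r.Index;
        return -1;                           // no robot can take the part
    }
}
```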
Controller Software
The controller software consists of the following V+ components:
1. A series of V+ programs that are responsible for picking and placing an instance.
2. A series of V+ programs responsible for managing the queue of parts and communicating with the PC.
The V+ program code is designed to run without any PC interaction. It is triggered by items
arriving in the queue. Motion parameters are defined on the PC and then downloaded to vari-
ables on the controller. Multiple instances of this program are run (one for each robot in the
configuration).
Use the following information to understand the Process Manager Editor area.
4 Belt Calibrations Editor: Lists all Robot-to-Belt calibrations required for the defined processes. Also provides access to edit belt allocation limits.
Process Manager Configuration Errors
If there is a configuration error, an alert icon displays on the corresponding item. If you hover the mouse cursor over the icon, a message displays that describes the error, as shown below.
To add a Process Manager object, right-click Process, select Add, and then click
Process Manager. A new Process Manager object will be added to the Multiview Explorer.
This section describes the Processes Editor section of the Process Manager Object. This area
defines the elements (robot, pick location, place location) for a particular process that will be
controlled by the Process Manager.
Item Description
Up/Down Buttons: Sets the priority for the selected process. The process at the top of the list receives the highest priority. Refer to Changing Process Priority on page 368 for more information.
Part: The Part(s) specified for the process. You can double-click this item or select the process and then click the Edit button to change it.
Target: The Part Target(s) specified for the process. You can double-click this item or select the process and then click the Edit button to change it.
Delete Button: Removes the selected process from the Processes list.
Alert Icon: Indicates the process needs to be taught or there is some other problem. Hover the mouse pointer over the icon to view the alert message for a description of the reason(s) for the alert.
Part Process Editor
The Part Process Editor is used to specify the items used in the process. Access the Part
Process Editor by double-clicking an existing process, or by clicking the Add button for a new
process.
Use the Pallet Slot Selection button to select the pallet slots the robot can access for pick
or place configurations. This button is only available when a Pallet object is configured for the
selected part / target.
Robot Reference
The Robot reference is used to specify the robot that will be used for the pick-and-place pro-
cess. Use the Select button to open the Select a Reference Dialog Box, which allows you to
create or specify the robot that will be used in the pick-and-place process.
Index
This item displays the index number of the process, which can be referenced in V+ programs
and C# programs.
Properties Tab: Pick Configuration Group
The Pick Configuration group is used to specify the single or multi-pick Part items for the pick-and-place process, as follows.
Single Pick/Place and Multiple Pick/Place can be used together. For example, if you want to
pick multiple parts individually and then place them all at the same time, you would use Mul-
tiple Pick and Single Place.
Single Pick
Select this item for a single-pick application where only one pick motion is performed. Use the
Select icon to browse for the part that will be picked.
Multiple Pick
Select this item for a multiple-pick application where multiple parts will be picked before place-
ment at a target. When multi-pick is enabled, the available tip indexes will be provided for the
selected robot IO EndEffector. Use the Select icon to browse for the part that will be picked by
each tip.
Use the Up/Down buttons to change the order of the tip processing.
Properties Tab: Place Configuration Group
The Place Configuration group is used to specify the single or multi-place Part Target items for
the pick-and-place process as follows.
Single Place
Select this item for a single-place application where only one place-motion is performed. Use
the Select icon to browse for the part target where the part will be placed.
Multiple Place
Select this item for a multiple-place application where multiple place motions are performed. Use the Select icon to browse for the part target where each part will be placed.
Use the Up/Down buttons to change the order of the tip processing.
The Enable Refinement option is used to perform a position refinement operation over an
upward-facing camera to improve part placement accuracy. This is used when placement
error needs to be smaller than error introduced during the pick operation.
When this option is selected, the part is moved to a predefined vision refinement station for
additional processing before being moved to the place (Part Target) location. Refer to Vision
Refinement Station Object on page 415 for more information.
Single Refinement
Select this item for a single-vision-refinement application under the following conditions.
Multi Refinement
Select this item for multiple-tip-gripper applications. Each tip will be moved individually to
the specified vision refinement station.
Use the Select icon to specify the tool tip and corresponding vision refinement.
Use the Up/Down buttons to change the order of the tip processing.
The Select process only if items are in range check box tells the system to only select this pro-
cess if all parts and targets are in range of the robot. This option is typically disabled, but may
be useful in multi-process applications where you want to select this Process only if the
required Part(s) and Part Target(s) are within range.
This may not be sufficient in parallel-flow configurations with many Parts and Part Targets. In
this case you may need to consider part and target filtering to reduce the number of instances
the robot has to process through when checking position of all instances.
This is typically not necessary, but can be helpful in multi-process configurations. For
example, in a situation where Process A has higher priority than Process B, Process A Part is
further upstream relative to Process B Part, but Process B Target is further downstream than
Process A Target, you may want the system to select Process B when both part and target are
already within range rather than selecting process A by priority and waiting for Part A to
come into range. During this waiting time, Process B Target may move downstream and out of
range.
Changing Process Priority
The process priority is used in situations where multiple processes are defined and a given
robot is capable of doing several of the potential processes. The process at the top of the Pro-
cess list receives highest priority.
You can change the priority for a process by using the Up / Down buttons on the Process list
editor, shown in the following figure.
NOTE: In addition to the arrows, you can also affect process priority through
the Process Selection Mode setting of the Robot Parameters in the Process
Strategy Editor area. Refer to Process Strategy Robot Parameters on page 396 for
more information.
For example, you have three processes defined in the Process Manager, as follows.
Teaching a Process
The last step in creating the Process is the teach function. This must be performed after all cal-
ibrations are complete. Ideally, all calibrations would be performed as accurately as possible
with a calibration pointer and disk. After calibrations are complete, the process end effector
can be installed to teach the process. Every step of the process is performed one at a time to
capture any necessary motion offsets. Select a Process and then click the Teach button to open the Teaching Process wizard.
NOTE: The following process illustrates the steps for a basic pick-and-place
application. The steps in your wizard will vary depending on the application
and types of Parts and Part Targets you have defined for your application.
Additional Information: If a Pallet object is used for a Part or Part Target, you
will see additional screens in the wizard for teaching the pallet frame (the ori-
entation of the pallet in the workspace) and the first position in the pallet. Each
of these steps contains an image of the pallet item being taught, which provides a
visual reference. Refer to Pallet Object on page 409 for more information.
Use the Teaching a Process wizard to make selections for pick, place, and idle robot positions.
The ACE project can share robot-to-hardware calibration information between multiple Process
Manager objects. If you use the same robot and hardware for each process, you can create mul-
tiple Process Manager objects in the ACE project without having to repeat the calibrations.
For example, assume you are setting up an ACE project to handle the packaging of various
fruits. You have three fruits that you want to pack: apples, oranges, and peaches. All fruits will
use the same robot, sensor, and infeed belt. To create the packaging processes for each fruit,
use the following procedure.
14. Repeat steps 9 to 13 for "Peaches Packing". After all Process Manager objects are added,
the procedure is complete.
Belt Calibrations
This section describes the Belt Calibrations list in the Process Manager. The Belt Calibrations
area defines the robot-to-belt calibrations required by the defined Processes in the Process list.
Refer to Processes in the Process Manager Object on page 362 for more information.
Item Description
Robot Specifies the robot for the belt calibration. Double-click this item or
click the Edit button to display the Belt Calibration Editor.
Belt [Encoder] Specifies the belt and encoder for the belt calibration. Double-click
this item or click the Edit button to display the Belt Calibration
Editor.
When a belt calibration is required, the Process Manager displays the Belt object name with an Alert icon in the Belt Calibrations list. The belt is calibrated using the Belt Calibration wizard, which is accessed from the Calibrate button in the Belt Calibrations area. After the belt is calibrated using the wizard, the stored calibration values can be manually edited.
Belt Calibration Wizard
The Robot-to-Belt Calibration wizard provides an interactive series of steps that guide you
through the calibration process. Each step of the wizard contains graphical aids and text that
describe the particular step being performed. As each step of the wizard is completed, the
information from that step is collected and used to populate the fields in the Belt Calibration
Editor.
Click the Calibrate button to begin the Belt Calibration wizard.
After the belt has been calibrated using the Belt Calibration wizard, you can manually edit the
stored belt calibration parameters and allocation limits, such as upstream limit, downstream
limit, and downstream process limit. These parameters are edited using the Belt Calibration
Editor. The following figure illustrates several of the Belt Calibration Editor items in a typical
workcell.
NOTE: The belt must be calibrated using the wizard before the values can be
manually edited.
Item Description
1 Downstream
2 Downstream Limit
3 Process Limit
4 Belt Stop Line
6 Upstream Limit
8 Upstream
To access the Belt Calibration editor, click the Edit button in the Belt Calibrations group. The
Belt Calibration editor will open.
Adjust the parameter values or use the graphical representation to reposition the lines accord-
ingly.
Item Description
Belt Window Belt Transformation: The transformation describing the location of the belt window relative to the origin of the robot. This location also defines the upstream tracking limit of the belt window.
Downstream Limit: Downstream window limit (mm from the belt frame origin along the belt vector).
Dynamic Wait Offset: Distance along the belt vector (mm from the belt transformation origin) where the robot will wait for a part or target that is currently upstream of the upstream limit.
Lane Width: The width of the belt window starting from the belt transform, pointing in the positive Y-direction.
Upstream Limit: Upstream pick limit (mm from the belt transformation along the belt vector).
Near Edge Percent / Far Edge Percent: These filters are expressed relative to the width of the belt window. You can force different robots to pick in different horizontal regions (lanes) of the belt. For example, if you think of the conveyor belt as a three-lane highway (as shown in the previous figure), you may have robot one filtered to pick from the near one-third of the belt window, robot two filtered to pick from the middle one-third of the belt window, and robot three filtered to pick from the far one-third of the belt window. A sketch of this lane arithmetic is shown below.
Far Edge Percent: The distance from the far edge of the conveyor where the robot cannot process.
Near Edge Percent: The distance from the near edge of the conveyor where the robot cannot process.
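The lane arithmetic referenced above can be sketched as follows, assuming the edge filters are given as percentages of the belt window width (hypothetical helper, not the ACE implementation).

```csharp
// Hypothetical helper: splits the usable belt window width into equal lanes
// after removing the near/far edge exclusions.
public static class LaneFilter
{
    // Returns the [MinY, MaxY) range (mm from the belt transform) that the
    // robot at laneIndex may pick from, for laneCount robots sharing the belt.
    public static (double MinY, double MaxY) LaneRange(
        double laneWidthMm, int laneCount, int laneIndex,
        double nearEdgePercent, double farEdgePercent)
    {
        double usableMin = laneWidthMm * nearEdgePercent / 100.0;         // skip near edge
        double usableMax = laneWidthMm * (1.0 - farEdgePercent / 100.0);  // skip far edge
        double lane = (usableMax - usableMin) / laneCount;
        return (usableMin + laneIndex * lane, usableMin + (laneIndex + 1) * lane);
    }
}
```

For three robots on a 300 mm window with no edge exclusions, this yields the near, middle, and far one-third regions described above.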
Testing the Belt Calibration
The Test Belt Calibration page allows you to test the current robot-to-belt calibration. Click
the Test Calibration button to begin the Belt Calibration test.
3. Position the robot tool tip so that it is just above the center of the part.
5. Start the conveyor so the belt is moving. The robot should track the part location until the part reaches the downstream limit of the belt window.
Additional Information: The distance between the robot tool tip and the part on the belt should remain constant while tracking. If not, the calibration procedure should be executed again.
After a calibration has been completed, the data can be saved by selecting File > Save To on the
calibration editor menu. You can load a previously-saved calibration file by selecting File >
Load From on the calibration editor menu.
Sensor Calibrations
The Sensor Calibrations area defines the robot-to-sensor calibrations for the selected workcell
process. Refer to Processes in the Process Manager Object on page 362 for more information.
These should be performed after Robot-to-Belt calibrations.
Item Description
Robot: The robot specified for the sensor calibration. Double-click this item or click the Edit button to display the Sensor Calibration Editor.
Sensor: The sensor specified for the calibration, which can be one of the following types. Double-click this item or click the Edit button to display the Sensor Calibration Editor.
l Belt camera.
l Fixed camera (downward-looking camera).
l Latch sensor.
l Spacing reference.
l Refinement camera (upward-looking camera).
When a sensor calibration is required, the Process Manager displays the Sensor object name
with an alert icon ( ) in the Sensor Calibrations Group.
The sensor is calibrated using the Sensor Calibration wizard, which is accessed from the Calibrate button in the Sensor Calibrations group. After the sensor is calibrated using the wizard, the stored calibration values can be manually edited.
For details on the Vision Windows and image-editing controls in the wizards, refer to the ACE
Reference Guide.
After the sensor is calibrated through the Sensor Calibration wizard, you can manually edit the stored sensor calibration parameters, such as the robot-to-sensor offset. These parameters are edited using the Sensor Calibration Editor. To access the Sensor Calibration Editor, select a sensor and then click the Edit button in the Sensor Calibrations group. The Sensor Calibration Editor opens.
The Sensor Calibration Editor contains the sensor properties configuration parameters. These
are used to configure various settings of the selected sensor.
The following figure shows the Vision Calibration Editor, which contains a calibration offset
along with additional parameters for controlling the robot motion during the picture-taking
and part-pick operations of the automated hardware calibration. These are not used during
run time when the robot is performing the process (run time motion parameters will be found
in configuration items).
After a calibration has been completed, the data can be saved by selecting File and then Save
To on the calibration editor menu.
You can load a previously-saved calibration file by selecting File and then Load From on the
calibration editor menu.
The calibrations can be performed using either the automatic calibration (preferred method) or
the manual calibration procedure. In the automatic calibration procedure, you teach the initial
locations and then the wizard automatically performs the robot movements to acquire enough
data points to calibrate the system. In the manual procedure, you have to move the robot
through each step of the process until enough data points have been acquired. The manual
method is provided for cases where obstructions in the workcell do not allow for automated
movement of the robot during the calibration process.
NOTE: It is recommended that you use the calibration wizard to obtain optimum performance from your system.
The manual calibration procedure is available for the Fixed Camera and Refinement Camera calibrations.
Some calibrations operate differently in Emulation Mode. Refer to Emulation Mode Differences on page 18 for more information.
The Sensor Calibration wizard provides an interactive series of steps that guide you through
the calibration process. Each step of the wizard contains graphical aids and text that describe
the particular step being performed. As each step of the wizard is completed, the information
from that step is collected and used to populate the fields in the Sensor Calibration Editor,
which is described in the previous section.
Calibration Types
The Sensor Calibrations area defines all of the calibration types that are used in the project.
These calibration types are described below.
The Fixed Camera Calibration wizard configures the positioning of a camera with respect to a
robot when both the camera and the surface in the field of view are stationary. The wizard will
show the 3D visualization of the camera vision window at various steps in the process.
Depending on the application, the wizard will end with manual or automatic configuration.
Automatic calibration assumes that the pick surface is parallel to the tool plane. If the pick sur-
face is not parallel to the tool plane, the parameters should be adjusted so manual calibration
is performed instead.
Refer to Robot-to-Camera Calibration on page 34 for a more detailed explanation of the cal-
ibration process.
The Belt Camera Calibration wizard configures the positioning of a camera with respect to a
robot when a belt object is present. It includes controls for moving the belt, indicators to show
if the belt is ON, and fields for speed, position, and direction. Depending on the step in the cal-
ibration wizard, it also shows the 3D visualization of the vision window for the associated
camera.
Refer to Robot-to-Camera Calibration on page 34 for a more detailed explanation of the cal-
ibration process.
The Belt Latch Calibration wizard configures the positioning of a latch sensor with respect to a
belt. Similar to the Belt Camera Calibration wizard, it includes belt controls, indicators, and 3D
visualization. However, instead of a vision window, this wizard has an additional indicator
showing that the latch has been triggered.
Refer to Robot-to-Latch Calibration on page 35 for a more detailed explanation of the calibration process.
The Refinement Camera Calibration wizard is functionally similar to the Fixed Camera Cal-
ibration. Refinement Camera Calibrations require the robot to be able to pick up a part. Cal-
ibration pointers will not be helpful in this scenario.
The Spacing Reference Calibration wizard configures the positioning of parts along a belt at
defined intervals. This process is performed by setting a stationary point along the belt from
which the instances will be generated. This point should be calibrated outside of the belt win-
dow to avoid difficulties in allocation. If creating spacing instances for multiple robots, the spa-
cing calibrations must reference the same upstream position.
Configuration Items
The Configuration Items area defines the workcell items and the relationships between those
items that are associated with a particular workcell configuration. The Configuration Items
area also allows quick access to the robot position (location) editors for a particular item, such
as the idle, part pick, or part place location.
The Configuration Items are created automatically as the workcell process is defined through
the Part Process editor. As items are added/deleted in the Part Process editor, they are
added/deleted in the Configuration Items area. For example, a basic pick and place application
would look like this in the Part Process editor:
The Configuration Items are arranged in a tree structure to show the relationships between the
workcell items. The Configuration Items group contains the following features.
l Expand or collapse a tree branch by clicking the arrow icons next to an item name.
l Double-click any of the Position objects (idle or robot) or select the Position object and
click the Edit button to open the location editor that can be used to manually enter the
object location. Refer to the following section for more information.
l Click the Grid button and use the Motion Sequence Grid Editor to edit the motion parameters and offset locations (a robot must be selected in the list to enable this button).
Location Editors
There are two types of location editors: a simple editor, which allows you to enter location information, and an enhanced position editor, which contains additional sections such as Move Parameters, Move Configuration, Approach/Depart Parameters, etc.
For example, the Idle Position editor shown in the following figure is an enhanced position
editor, which contains additional properties for Move Parameters and Move Configuration.
Refer to Enhanced Location Editor Parameters on page 384 for more information.
NOTE: The Location Editor title bar indicates the type of parameters being
edited.
The Robot Position editor shown below is a simple position editor. This allows you to enter or
teach the location information for a static fixed position frame that does not require a robot-to-
belt or robot-to-sensor calibration.
Use the editor's parameter input fields to adjust the Move Parameters and Move Configuration
for the approach, motion, and depart segments. Use the following examples of various
enhanced location editor parameter grids to understand the editor functions.
l Move Configuration Area
These parameters control the configuration of the robot at the selected location. For
example, if your workcell contains a SCARA robot and you want it to be in a lefty con-
figuration, you would set the Righty-Lefty parameter to Lefty.
IMPORTANT: When the Absolute option is selected, you must ensure the
approach/depart heights are set correctly. Otherwise, the robot could move
to an unintended location or out of range, which may damage the robot or
other equipment in the workcell.
l I/O Timing Parameter
An I/O Timing parameter is included which controls the open/close timing of the grip-
per during each part of the motion segment. The I/O Timing (Gripper On) can use either
a percent value, a distance value, or a time value as shown in the following figures.
For example, if you set the value to 25 mm, the gripper will activate at 25 mm from the
pick position. If you set it at 25%, the gripper will activate at 25% of the distance from
the approach start to the pick position. The time value allows you to set the gripper tim-
ing (in milliseconds).
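The following sketch shows the arithmetic behind the distance and percent modes (a hypothetical helper, not the ACE implementation).

```csharp
// Hypothetical helper: computes how far from the pick position the gripper
// signal should fire, for the distance and percent I/O timing modes.
public static class GripperTiming
{
    // approachLengthMm: distance from the approach start to the pick position.
    public static double TriggerDistanceFromPickMm(
        double approachLengthMm, double value, bool valueIsPercent)
    {
        // 25 mm -> fires 25 mm before the pick position.
        // 25 %  -> fires once 25% of the approach is covered, i.e. with
        //          75% of the approach distance still remaining.
        return valueIsPercent ? approachLengthMm * (1.0 - value / 100.0) : value;
    }
}
```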
The Vision Refinement Motion Parameters specify how the robot moves to and from the
Vision Refinement Station.
l Move to camera
This is a static refinement where the robot pauses at the Vision Refinement Station.
The Offset tab allows you to edit the gripper (tool) offset.
NOTE: The Vision on the fly mode can provide faster throughput, but
may require more lighting and shorter Exposure Time compared to the
Move to camera mode. A robot position latch should be used for Vision
on the fly mode.
Idle Positions
Idle Positions are staging positions between picking and placing operations. They are initially
defined by the teach process to be the same location, or can be manually taught to be different
locations. If the Wait Mode is set to Move to idle position, then the following descriptions
apply. If not, the Idle Position - Parts and Idle Position - Part Targets locations are not used.
Additional Information: The Wait Mode setting can be found in the Process
Strategy area. Refer to Process Strategy on page 394 for more information.
Idle Position
This is the location the robot uses when no Process is selected or when the Process Manager is
aborted or stopped.
This is the location the robot uses when it is waiting to pick a part, typically when no parts are
available.
The Idle Position - Parts location is not associated with a specific process. If you have multiple
Part sources in different areas of the work envelope, consider setting the Idle Position - Parts
location in an area between the Part source locations and not near one specific Part source.
Idle Position - Part Targets
This is the location the robot uses when it has picked a part and is waiting for a target to
become available.
The Idle Position - Part Targets location is not associated with a specific process. If you have
multiple Part Target sources in different areas of the work envelope, consider setting the Idle
Position - Part Targets location in an area between the Part Target source locations and not
near one specific Part Target source.
To access the Idle Position editor, double-click the Idle Position Configuration Item or select the
item and click the Edit button. The Idle Position editor will open.
The Cycle-Stop Position is a location that is used when the process Cycle-Stop is requested.
The Cycle-Stop can be requested with one of the following methods.
l OPC
l Data Mapper
l C# program
l Clicking the Cycle Stop button in the Task Status Control area
NOTE: Using this method will result in the robot finishing the current
process and then waiting for the Cycle-Stop request to be released.
Robot Frames
Robot frames (also known as reference frames) are useful because they allow you to teach loc-
ations relative to the frame. If the location of the frame changes in the workcell, you simply
have to update the frame information to reflect its new location. Then you can use any loc-
ations created relative to that frame without further modifications.
A process pallet is typically taught relative to a reference frame. This avoids the problem of
teaching many individual pallet positions and then having to reteach all of those positions if
the pallet moves for some reason. Instead, the pallet is taught relative to a frame. If the pallet
moves in the workcell, the frame position is re-taught and the part positions relative to that
frame remain intact.
The Robot Frame editor is used to teach a reference frame, such as a pallet frame. To access
the Robot Frame editor, double-click the Robot Frame Configuration Item or select the item and
click the Edit button.
Additional Information: Robot Frames are only available when a part or target
object is configured as Static: Fixed Position and a Pallet object is defined in that
object.
Robot Frames will typically be defined during the teach process wizard, but can be manually
defined as well. Use the following procedure to manually define a Robot Frame.
2. Teach the +X Location.
Use the V+ Jog Control to position the robot tool tip at a point on the +X axis. In the case
of a rectangular pallet, this can be any pocket position along the +X axis. Optimum res-
ults will be obtained by using a point as far away from the origin as possible. Then,
click the Here button to record the position.
4. Click the Calculate button to calculate the position of the robot frame relative to the
robot.
5. Click the OK button to close the Robot Frame editor and complete this procedure.
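The Calculate step amounts to building orthonormal frame axes from the taught points. A minimal sketch using System.Numerics follows; it illustrates the math only and is not the ACE implementation.

```csharp
using System.Numerics;

// Hypothetical helper: builds a frame's unit axes from the taught origin,
// +X point, and +Y point. The taught origin is the frame origin.
public static class FrameMath
{
    public static (Vector3 X, Vector3 Y, Vector3 Z) AxesFrom(
        Vector3 origin, Vector3 plusXPoint, Vector3 plusYPoint)
    {
        var x = Vector3.Normalize(plusXPoint - origin);
        // Z is perpendicular to the plane of the three taught points.
        var z = Vector3.Normalize(Vector3.Cross(x, plusYPoint - origin));
        var y = Vector3.Cross(z, x);   // re-orthogonalized Y axis
        return (x, y, z);
    }
}
```

This is also why taught points far from the origin give optimum results: small teaching errors produce smaller angular errors in the computed axes.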
The Motion Sequence Grid Editor provides a grid / table interface that allows you to access and individually edit the common motion parameters used to optimize cycle time. You can also change multiple speed / acceleration / deceleration parameters at the same time: click one of the fields and drag the cursor across the others (refer to Multiple Parameter Selection), then enter a new value to change all of the selected fields.
To access the grid editor, select the robot object in the Configuration Items group and then click the Grid button. The Motion Sequence Grid Editor opens.
The left pane is used to select the items you wish to display for editing. The right pane con-
tains the editing parameters by group.
Control Sources
The Control Sources editor provides access to parameters that affect Part and Part Target
sources for the defined processes. Sources are responsible for providing instances to a process.
These are automatically created based on the Part and Part Target object configuration prop-
erty. There are three types of Sources that are described in the following sections.
To access the Control Sources Editor, click the Control Sources button.
Belt Control Sources
This section describes the Control Sources Editor when a Belt source is selected.
Cameras
The Cameras list will display the Virtual Cameras that have been associated with this belt
through the Part and Part Target object configurations. You can change the Vision Properties
for the selected camera in this area.
Camera Mode
There are three camera modes available: Distance, Trigger, and Time, which are described below.
When Distance is selected, the Field of View Picture interval control is enabled. This control is
used to adjust the picture interval relative to belt travel. The setting is displayed as a per-
centage of the field of view, and in millimeters (mm) as calculated from the calibration in the
selected virtual camera.
When Trigger is selected, the Trigger Signal control is enabled. This specifies the signal number to use for the vision trigger. When the specified trigger signal number is activated, a new vision image will be acquired. For example, this can be used in an application where an image only needs to be acquired when an object activates a sensor as it passes below the camera. In this case, the trigger signal is wired to the robot controller; this should not be confused with applications that require triggering the camera directly. Triggering a camera directly is configured in the Virtual Camera object Acquisition Settings.
NOTE: In Emulation Mode, Trigger mode will use the Trigger Period in Emu-
lation Mode distance value specified. This is used to simulate the trigger occur-
ring based on the specified distance of belt travel.
When Time is selected, an image will be requested on the specified time interval.
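For Distance mode, the picture interval converts directly from the percentage setting to millimeters of belt travel, as the sketch below illustrates (hypothetical helper, not the ACE implementation).

```csharp
// Hypothetical helper: converts a Distance-mode setting, given as a
// percentage of the field of view, into millimeters of belt travel
// between images.
public static class CameraSpacing
{
    // e.g. an 80% setting on a 300 mm field of view requests a new image
    // every 240 mm of travel, leaving 20% overlap between images.
    public static double PictureIntervalMm(double fieldOfViewLengthMm, double intervalPercent)
        => fieldOfViewLengthMm * intervalPercent / 100.0;
}
```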
Overlap Configuration
When a part is located in an image associated with a conveyor, the position of the object is
compared with the position of objects located in previous images that have already been
added to the queue of instances to process.
If Disable Overlap Check is selected, all overlap checking is disabled and the remaining Overlap Configuration items are not available. If a part is located in multiple images, the robot will attempt to pick at the same belt-relative position multiple times. If duplicate picks occur while overlap checking is enabled, consider increasing the Overlap Distance.
If the newly-located part is within the specified Overlap Distance of a previously located part
(accounting for belt travel), it is assumed to be the same part and will not be added as a duplic-
ate new instance.
If Perform overlap check with instances of different types is selected, the overlap calculation
will check for overlap of any parts, rather than just parts of the same type.
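The check can be sketched as follows: each located part is reduced to a belt-relative position that stays fixed as the belt moves, and a new part within the Overlap Distance of a queued part is treated as a duplicate. The types here are hypothetical, not the ACE API.

```csharp
using System.Collections.Generic;

// Hypothetical shape for a part located on a belt: its X position in the
// image, its Y position, and the encoder reading when the image was taken.
public sealed record BeltPart(double XImageMm, double YMm, double EncoderAtImageMm);

public static class OverlapCheck
{
    public static bool IsDuplicate(
        BeltPart candidate, IEnumerable<BeltPart> queued, double overlapDistanceMm)
    {
        // Subtracting the encoder reading at image time gives a coordinate
        // that is invariant to belt travel, so parts from different images
        // become directly comparable.
        double cx = candidate.XImageMm - candidate.EncoderAtImageMm;
        foreach (var p in queued)
        {
            double dx = cx - (p.XImageMm - p.EncoderAtImageMm);
            double dy = candidate.YMm - p.YMm;
            if (dx * dx + dy * dy <= overlapDistanceMm * overlapDistanceMm)
                return true;   // same physical part seen in two images
        }
        return false;
    }
}
```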
The default belt program (pm.belt.control) is optimized for performance and flexible functionality across all default Process Manager configurations. The Controllers list displays all controllers associated with the selected Belt object. Each controller executes a V+ program for monitoring and updating encoder position, belt velocity, image requests, latches, and instance information for all instances allocated to that controller.
Occasionally, applications require customization of this program. For example, you may need
to sacrifice available controller processing time to achieve more frequent latch reporting or
image requests. In these cases, select Use Custom Program and then edit the default program
accordingly. You may need to make the same modifications to the belt program on each con-
troller depending on application requirements.
This section describes the Control Sources Editor when a Static Control Source is selected.
Static Sources are used for Part and Part Targets that are not related to belts or cameras.
If an IOFeeder is not enabled, then the PC generates instances for these sources at the Robot
Frames defined in configuration items. Each time the controller has emptied the queue of
instances, the PC will generate another set of instances and pass them to the controller. The
quantity of instances generated is set by the Number to Queue property. The default value is
two instances to overcome any disturbances in the communication flow between the PC and
controller.
Alternatively, select Use A Feeder and choose an IOFeeder object that controls when parts are
generated. When this is selected, another V+ program is executed for monitoring feeder activ-
ity. This can be used for individual parts or pallet configurations. For example, associate an
IOFeeder with a target pallet source to use an input signal from a sensor to indicate when a
box is present to be filled.
Use the Feeder selection to specify a feeder object. Click the Select button to select the feeder object.
Use the Start Delay selection to specify the delay in milliseconds before the feeder is activated.
This delay can be used to ensure the robot has moved out of the pick / image area before the
feeder is activated.
This section describes the Control Sources Editor when a Vision Source is selected.
Vision Sources are used for fixed-mounted cameras. They can be associated with IOFeeder
objects, similar to Static Sources.
If a Feeder is not enabled, Vision Sources will trigger a new image to be taken when the last
instance of the previous image has been processed, delayed by the Picture Delay (in mil-
liseconds). This delay can be used to ensure the robot has moved out of the pick / image area
before a new image is requested because the last part instance is considered processed once the
pick operation has completed without error.
Process Strategy
The Process Manager invokes a Process Strategy to determine how to allocate the Parts and
Part Targets identified by the Process Manager. It uses the list of Part Processes to allocate the
Parts and Part Targets to specific robots. The output of this process is passed to the controller
queue by the Process Manager. Each Process Strategy operates under certain assumptions
based on the process being monitored. Those assumptions determine which algorithms are
used to perform the allocation.
The Process Strategy Editor provides access to the following parameters editors.
To access the Process Strategy editor, click the Process Strategy button. The appropriate editor is shown based on the object selected in the left pane of the Process Strategy Editor.
Process Strategy Controller Parameters
The Controller Parameters are displayed when the controller is selected in the Process Strategy
Editor. The Controller Parameters group is used to specify custom V+ programs for the selected
controller.
Item Description
Use Custom Monitoring Program: The default process monitoring program has the following functions.
l Checks for updates to process strategies.
l Handles belt monitoring.
l Monitors parts and part targets.
You can copy the default V+ monitoring program for editing, or select an existing program.
Use Custom Initialization Program: The default initialization program executes before the control programs (robot, belt, process strategy, etc.) are started. This can be used to initialize system switches, parameters, and variables or execute V+ programs that need to be started in parallel to the Process Manager. You can copy the default V+ initialization program for editing, or select an existing program.
Use Custom Belt Program: The default belt program monitors the speed / ON / OFF status of all belts. You can copy the default V+ belt program for editing, or select an existing program.
Use Custom Error Program: The default error program handles the processing and reporting of errors during the execution of a process. All Process Manager V+ program error handling leads to this program. Use this program to automate handling of errors that are reported to the PC by default. This program will check if any user-defined error responses exist in the Process Strategy - Robot parameters. You can copy the default V+ error program for editing, or select an existing program.
Use Custom Stop Program: The custom stop program can be used to perform certain operations after the application has stopped. You can copy the default V+ stop program for editing, or select an existing program.
Process Strategy Robot Parameters
The Robot Parameters are displayed when the robot is selected in the Process Strategy Editor. There are four tabs of robot parameters: the General tab, the Allocation tab, the Wait Mode tab, and the Error Responses tab, which are described below.
General Tab
Use Custom Robot Program: Allows you to specify a custom V+ main robot-control program.
Process Selection Mode: Specifies the process selection mode used by the robot.
Allocation Tab
Part or Target Filtering Mode: Specifies how instances are identified for processing by this robot.
Queue Size: Specifies the queue size for the robot. Each robot has a queue size, which represents the maximum number of instances that can be allocated to the robot at one time.
Allocation Distance: Specifies the distance upstream from the belt window that a part instance must be within before the system is allowed to allocate that part instance to the robot.
Allocation Limit: Specifies the distance upstream from the Process Limit Line that a part instance must be to allocate that part instance to the robot. This can be considered the minimum distance upstream that an instance must be for allocation.
Wait Mode Tab
Stay at current position: Causes the robot to remain at the current position while waiting for a part or target to process.
Move to Idle Position: Causes the robot to move to the idle position after the specified After waiting for time (in milliseconds) while waiting for a part or target to process.
Use Custom Wait Program: Allows you to specify a custom V+ wait program. The program would be called when the robot does not have a part or target available. The program could check to see if the robot needs to be moved to the idle location or if it should stay at the current location.
Use Signal at Cycle Stop: When a cycle stop is issued and this option is enabled, the specified I/O signal will be turned ON when the robot has reached the cycle stop state. When the cycle is resumed (cycle stop state is canceled), it will turn the specified signal OFF and will attempt to enable power, if high power was disabled.
Error Responses Tab
Output Signal - On Error: Defines a digital signal to assert when an error is detected for the selected robot.
l ... range.
l Belt Window Access Error: Belt window violations.
l Robot Power Errors: Problems with power being enabled or enabling power.
l Gripper Errors: All gripper actuations.
l All Errors: All other errors.
The Belt Control Parameters are displayed when the belt is selected in the Process Strategy
Editor.
NOTE: Belt Control Parameters are only available when the following items are
configured.
l Active Belt Control is enabled in the Belt object configuration.
l A controller is selected in the Belt object configuration.
l A defined process includes a Part or Part Target that references the Belt
Object.
The Belt Control Parameters group as shown in the following figure is used to set the belt con-
trol parameters for the selected conveyor belt. These parameters can be set to determine when a
conveyor belt is turned ON or OFF. An optional speed control parameter is also provided. The
decision point for the belt I/O control is based on the selected robot. If objects on the belt in the
selected robot queue reach the specified thresholds, the belt will be turned OFF or the belt
speed will be adjusted.
Item Description
On/Off Control: Specifies the ON / OFF control of the belt. There are three selections available:
l Do not control the belt: Use this option if there are output signals that can control the belt, but you do not want the Process Manager to control the belt during run time. If the belt control is provided by a PLC and output signals are not able to control the belt, disable Active Control in the Belt object.
l Leave the belt always ON: The belt is turned ON when the process starts and OFF when the Process Manager is stopped.
l Control the belt: (Default) The belt is controlled based on the thresholds described below. The belt is automatically turned OFF when the Process Manager stops the application.
Speed Control: If this is selected, you can use the Slow Threshold control to adjust the conveyor speed threshold based on how full the robot queue is. Otherwise, the conveyor belt operates at a constant speed.
Robot: Specifies a robot for queue monitoring (the queue size for the robot is set in the Robot Parameters group). The selected robot will typically be the most downstream robot if multiple robots service the same belt. If parts get to the last robot, it needs to slow / stop the conveyor to ensure all Parts or Part Targets are processed.
Slow, Off, and On Thresholds: These thresholds are used to control the belt based on instance position. This is useful for preventing the belt from feeding the robot faster than the robot can pick the parts, or preventing unprocessed Parts or Part Targets from passing the most downstream robot.
Slow Threshold: Specifies the point in the belt window for slowing the conveyor if parts reach that point. For example, 50% means that if a part reaches the midpoint of the belt window, the conveyor will be slowed.
Off Threshold: If an instance reaches this threshold, the belt will be turned OFF. This is set as a percentage from the upstream belt window limit to the downstream process limit and is visualized as a black line in the 3D Visualizer (in the belt window for the selected robot).
On Threshold: When a belt is turned OFF by the Off Threshold, the belt will remain OFF until all instances are removed between the Off Threshold point and the On Threshold point. This can be used to minimize the number of times the belt is started and stopped. This is set as a percentage from the upstream belt window limit to the downstream process limit and is visualized as a green line in the 3D Visualizer (in the belt window for the selected robot).
Product Flow: Shows the product flow (belt window) direction of travel in relation to the Slow and Off Threshold slide controls. It is a reference for the thresholds. The bottom of the arrow represents the start of the belt window. The top of the arrow represents the Downstream Process Limit.
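As a rough illustration of how the thresholds interact, the sketch below reduces the decision to the furthest-downstream instance in the monitored robot's belt window (hypothetical helper, not the actual V+ belt program).

```csharp
using System.Collections.Generic;
using System.Linq;

// Instance positions are percentages measured from the upstream belt window
// limit (0%) to the downstream process limit (100%).
public enum BeltCommand { Run, Slow, Stop }

public static class BeltThresholds
{
    public static BeltCommand Decide(
        IEnumerable<double> instancePercents, bool beltIsStopped,
        double slowPercent, double offPercent, double onPercent)
    {
        double furthest = instancePercents.DefaultIfEmpty(0.0).Max();
        if (furthest >= offPercent) return BeltCommand.Stop;
        // Once stopped, stay stopped until the Off-to-On band is cleared.
        if (beltIsStopped && furthest >= onPercent) return BeltCommand.Stop;
        if (furthest >= slowPercent) return BeltCommand.Slow;
        return BeltCommand.Run;
    }
}
```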
Process Manager Control
The Process Manager Control is used to start and stop a process-managed application, such as
a Pack Manager packaging application.
The Task Status Control interface is used to monitor and control Process Manager objects in
the ACE project. A Process Manager item in the Task Status Control area is used to select the
Process Manager object, start and stop the selected application, and view status and instances
on the application while it is operating.
Process Manager control items are added to the Task Status Control area as shown in the fol-
lowing figure. Select a Process Manager control item to view the Hardware and Application
information areas. Refer to Application Manager Control on page 133 for more information.
Process Manager Tasks
Process Manager tasks are displayed under the Application Manager group in a tree view.
Tasks are grouped by type (C# program, Process Manager, etc.). When you select a task, task control functions become available.
Hardware Information Area
The Hardware information area displays the hardware items and their status for the selected Process Manager task. Use the information below to understand the functions of the Hardware information area.
When a robot is waiting (for example, waiting for Parts or Part Targets to arrive or because of a
cycle stop request), a yellow warning condition is displayed on the Process Manager control.
Selecting the item in the Hardware Information area will display additional information in the
status and instance tabs below.
NOTE: Some items on the Hardware list are in Error and Warning states until
the Process Manager establishes communications with and initializes those
items.
Clear All Instances Button: Clears all Part and Part Target instances from the system.
Application Information Area
The Application information area displays feedback on the operation of the item selected in the
Hardware area. The Application information area has a Status tab and an Instances tab which
are described below.
Status Tab
This tab displays information on the status of the control components driving the process. It
shows the hardware in the system and the status of the selected item.
Refer to Chapter 9: Troubleshooting on page 591 for more information about status codes and messages.
The Status tab includes the following information.
Instances Tab
The Instances tab displays information on the parts and part targets that are associated with
each control source. The Clear button removes all instances from the selected source. To
remove all instances from all sources, use the Clear All Instances button in the Hardware sec-
tion of the Process Manager Control area.
To add an Allocation Script object, right-click Process, select Add, and then click
Allocation Script. A new Allocation Script object will be added to the Multiview Explorer.
To access the Allocation Script configuration, right-click the object in the Multiview Explorer
and then select Edit, or double-click the object. This will open the Allocation Script editor in the
Edit Pane.
Pallet Object
The Pallet object defines the layout of a pallet which can be used to pick parts from or place
parts to. The Pallet object defines the dimensional information only (three-dimensional pallets
are supported). When linked to a frame, it will position the pallet in Cartesian space.
NOTE: When used with a camera or belt, the camera or belt will be configured
to locate the origin of the pallet, not the parts in the pallet.
When defining a pallet layout, you are teaching points for the pallet, such as the pallet origin,
a point along the pallet X-axis, and a point along the pallet Y-axis. See the following figure for
an example.
NOTE: The points labeled in the figures are only for example. You could define
the pallet using any corner part as the origin, and using any row or column ori-
entation. That is, the pallet rows do not need to be parallel to the robot World
axes as shown in the example.
Item Description
A Pallet Origin
For example, assuming a 40 mm part spacing, the 3 x 3 pallet in the previous figure would be
defined as follows.
The pallet definition table lists the X, Y, Z, Yaw, Pitch, and Roll location components for each of the Pallet Properties.
You can also define the following for each Pallet object as described in this section.
l Access order
l Number of parts and part spacing on the X-axis
l Number of parts and part spacing on the Y-axis
l Number of parts and part spacing on the Z-axis
When teaching the pallet using the ACE software wizard, the system automatically computes
the orientation and origin offset of the pallet. Then, the system has all of the information it
needs to pick or place parts from or to positions in the pallet.
The initial pallet teaching process occurs in the Process Manager object configuration during
calibration or process teaching (depending on the application needs). You can change the val-
ues obtained during the teaching process. Refer to Process Manager Object on page 354 for
more information.
To add a Pallet object, right-click Process, select Add, and then click Pallet. A new Pallet object
will be added to the Multiview Explorer.
NOTE: After the Pallet object is created, you can rename the new Pallet object by
right-clicking the item and selecting Rename.
Pallet Configuration
To access the Pallet configuration, right-click the object in the Multiview Explorer and then
select Edit, or double-click the object. This will open the Pallet editor in the Edit Pane.
The Pallet editor provides an interface for setting various pallet-related parameters, such as the pallet configuration, location, and properties. This allows you to define the individual X, Y, and angular positions of each slot. You can define circular pallets or pallets with offset rows. The ACE software calculates the individual positions based on the input data and defines the positions in the Pallet object.
The Configuration drop-down list box is used to specify the type of pallet being used.
The Offset to First Slot setting defines the origin of the pallet to reference all slot positions.
Properties Area
When a rectangular pallet configuration is selected, use the Properties area to specify the
access order, part count and part spacing for X, Y, and Z.
When a custom pallet configuration is selected, this area changes to a table that contains
information collected from the Add Pattern Dialog Box. Refer to Custom Pallet Configuration
Settings on page 413 for more information.
Use the information below to make the settings for a rectangular pallet configuration.
The Access Order property defines how the robot will access the pallet. For example, if an access order of Yxz is selected, the robot will begin accessing the pallet positions with the first Y-row. After the Y-row is finished, it will move to the next row in the X-direction. After all X-direction rows are accessed, it will move in the Z-direction to access the next layer. A minimal sketch of this enumeration is shown below.
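The sketch assumes a rectangular pallet and the Yxz order (hypothetical helper, not the ACE Pallet object): Y varies fastest, then X, then Z. For a 3 x 3 pallet with 40 mm spacing, it yields (0, 0, 0), (0, 40, 0), (0, 80, 0), (40, 0, 0), and so on.

```csharp
using System.Collections.Generic;

// Hypothetical helper: enumerates slot offsets (mm, relative to the pallet
// origin) for a rectangular pallet in Yxz access order.
public static class PalletSlots
{
    public static IEnumerable<(double X, double Y, double Z)> EnumerateYxz(
        int xCount, int yCount, int zCount,
        double xSpacing, double ySpacing, double zSpacing)
    {
        // Yxz: innermost loop on Y, then X, then Z.
        for (int z = 0; z < zCount; z++)
            for (int x = 0; x < xCount; x++)
                for (int y = 0; y < yCount; y++)
                    yield return (x * xSpacing, y * ySpacing, z * zSpacing);
    }
}
```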
X, Y, Z Count
X, Y, Z Spacing
A custom Pallet is typically used for irregular slot arrangements. The custom Pallet con-
figuration allows you to define each slot position. For example, if your pallet is 3 x 3 x 2, you
will have 18 slot position items defined in the Properties area of the custom Pallet object as
shown below.
You can define individual slot positions manually using the Add button or automatically
using the Pattern button as described below.
Additional Information: When the Pallet has no pattern, use the Add button to
define individual slot positions.
Use the Add, Delete, Up and Down buttons to create and arrange each Pallet slot location.
Click the Pattern button to define the custom Pallet using the Add Pattern Dialog Box and then choose Rectangular or Radial.
l Rectangular Properties: Set the X, Y, and Z offset, spacing, and count for the entire Pattern. When the Rectangular Properties are set, click the Accept button to add the calculated slot positions to the Properties table.
Pallet Visualization
You can select a shape to represent the pallet in the 3D Visualizer. The shape is specified on the Part or Part Target object editor and can be selected from the available shapes, such as a box or cylinder. Refer to Part Object on page 337 and Part Target Object on page 332 for more information.
Vision Refinement Station Object
NOTE: The following information assumes you have already installed a phys-
ical camera, created a virtual camera, calibrated the camera, and created a vision
tool and model.
To add a Vision Refinement Station object, right-click Process, select Add, and then click Vision
Refinement Station. A new Vision Refinement Station object will be added to the Multiview
Explorer.
NOTE: After the Vision Refinement Station object is created, you can rename the
new Vision Refinement Station object by right-clicking the item and selecting
Rename.
To access the Vision Refinement Station configuration, right-click the object in the Multiview
Explorer and then select Edit, or double-click the object. This will open the Vision Refinement
Station editor in the Edit Pane.
Vision Properties
The Vision Refinement Station only has a single configuration item. Use the Vision Properties
drop-down to specify the vision tool that will be used to locate the part in the gripper.
As an option, select Use Named Instance (select Model or enter custom result name) and then use the Select button to reference an existing Locator Model, or use the Add button to add a custom result name. For applications where a custom vision tool is used, this item would be used to specify custom names that have been associated with the different results returned from that tool.
After you create the Vision Refinement Station, it must be added to a pick-place process. This
is done using the Advanced tab of the Part Process Editor, shown in the following figure. Refer
to Part Process Editor on page 363 for more information.
After you add the Vision Refinement Station to the pick-place process, you can optionally edit
the motion parameters for the station. This is done using Vision Refinement Motion Para-
meters that are accessed from the Configuration Items group. Refer to Vision Refinement
Motion Parameters on page 385 for more information.
l Finder Tools
Finder Tools create a vectorized description of objects or object features and typically
return coordinate results. These are used to identify features in image sources and
provide locations of objects to be picked.
l Inspection Tools
Inspection Tools are often used in conjunction with Finder Tools to inspect objects that
have been located. They rely on the analysis of pixel information and are designed to
check various aspects of a detected object or feature, such as color deviation, defects, or
product density.
l Reader Tools
Reader Tools are used to return character text string data from codes and text in an
image.
l Calculation Tools
Calculation Tools allow the creation of new entities in an image that can be user-
defined or based on existing entities.
l Image Process Tools
Image Process Tools provide various operations and functions for the analysis and pro-
cessing of images.
l Custom Tools
Custom Tools allow the user to more directly control the way an image or tool is pro-
cessed.
The following table shows all vision tools provided in the ACE software, their respective cat-
egories, and a brief summary of their functions. The tools are described in detail in the fol-
lowing sections.
Finder Tools
Arc Finder: Identifies circular features on objects and returns the coordinates of the arc center, the angle of separation between the two ends, and the radius.
Inspection Tools
Arc Caliper: Identifies one or more edge pairs in an arc-shaped or circular area and measures the distance between the edges of each pair.
Arc Edge Locator: Identifies an edge or set of edges that meet user-defined criteria within an arc-shaped or circular area.
Color Data: Finds the average color within a region and analyzes its color variation and deviation from a specified reference color.
Reader Tools
Barcode: Reads barcodes in an image and acquires text string data.
Data Matrix: Reads data matrices in an image and acquires text string data.
OCR Dictionary: Stores dictionary data that OCR can use to identify text characters.
Image Process Tools
Advanced Filter: Filters or alters an image using one of a variety of filter libraries, such as Background Suppression, Erosion / Dilation, and Color Gray Filter.
Image Processing: Filters or alters a gray scale image using one of a variety of filters, including logical and arithmetic calculations.
Vision Tool Editor
Each vision tool is configured using its corresponding object editor in the Edit Pane. In general,
most of the tool editors have a similar configuration that can be split into five sections as
described below.
1. Execution Buttons: This area provides direct control of the tool with the following buttons.
l Run: Run the tool once. This is available with the Barcode, Color Data, Data Matrix, OCR,
and QR Code tools.
2. Image Display: This area displays the current image from the camera, together with any
required graphics or controls. For example, the Locator tool shown in Figure 8-190 displays
markers for each identified instance within the green region of interest, which can be modified
as needed.
5. Additional Information Pane: This area provides any additional information about the tool.
This is only found in some of the tools and generally shows something specific to only that
tool.
Region of Interest
Most tools use regions of interest to define the location where the tool will be executed. Some
tools allow multiple regions based around a single reference point, but most use a single rect-
angular region in which to execute the operation. In both cases, the region or regions are out-
lined in green in the tool Vision Window.
Regions of interest can be modified in two ways:
1. Clicking and dragging the green outline or its nodes. Dragging the nodes will resize the
region while dragging the outline itself will translate it. In some tools, the regions have
a node for rotation.
2. Adjusting the parameters in the properties. All tools with regions of interest have a
Region Of Interest section in the properties that governs the size, location, and, in some
cases, behavior of the region. The location and size of the region are typically governed
by the Offset and Search Region properties, but the property names may vary.
NOTE: The region orientation of some tools can only be controlled with
the Offset property.
Tools with multiple regions of interest can be manipulated to allow different behavior in indi-
vidual regions. This is achieved by modifying two properties as described below.
l Overlap
This defines the region as either inclusive or exclusive by setting it to OR or NOT,
respectively. An inclusive region returns all detected contours or instances within its
boundaries. An exclusive region hides all detected contours or instances within its
boundaries. Figure 8-192 shows how individual regions are combined. The original
regions are displayed on the left and the resulting processed region is displayed on the
right, where the green area shows what parts are read. For example, in the second set of
regions, the NOT region eliminates a section of the full region, resulting in a rectangular
section that is not read.
l Z-Order
This defines the order in which regions are processed. Each region has its own property
that defines its order in the Z-direction of an image. Since the image is two-dimensional,
this value is used to determine which regions are processed first. Regions are processed
in ascending order, as shown by the regions in Figure 8-193 and sketched in the example
after this list. In the first example, the results of all three regions can be seen, since no
region is entirely blocked. However, in the second example, the smallest region is hidden
because the red region has a higher Z-Order value.
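To make the Overlap and Z-Order interaction concrete, the following sketch models regions as
boolean pixel masks and applies them in ascending Z-Order, with OR regions adding readable
area and NOT regions removing it. The function and tuple layout are illustrative only and are
not part of the ACE API.

```python
import numpy as np

def compose_regions(shape, regions):
    """Combine regions of interest into one processed read mask.

    regions: list of (mask, overlap, z_order) tuples. mask is a boolean
    array of the given shape, overlap is "OR" (inclusive) or "NOT"
    (exclusive), and regions are applied in ascending z_order, so a
    higher region overrides lower ones where they overlap.
    """
    result = np.zeros(shape, dtype=bool)
    for mask, overlap, _ in sorted(regions, key=lambda r: r[2]):
        if overlap == "OR":
            result |= mask        # inclusive: read everything inside
        else:                     # "NOT"
            result &= ~mask       # exclusive: hide everything inside
    return result

# Example: a NOT region with a higher Z-Order punches a hole in a full
# OR region, leaving a rectangular section that is not read.
full = np.zeros((100, 100), dtype=bool); full[10:90, 10:90] = True
hole = np.zeros((100, 100), dtype=bool); hole[30:60, 30:60] = True
readable = compose_regions((100, 100), [(full, "OR", 0), (hole, "NOT", 1)])
```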
Relative To Parameter Details
Most of the tools with regions of interest can be set relative to another tool. A vision tool is
ordinarily executed with respect to the origin of the image, but when it is set relative to another
tool, it will instead execute with respect to the result values of that tool. This will cause the sec-
ondary tool to execute once for every input value, unless it is otherwise constrained.
To create this relationship, set the primary tool as the Relative To property of the secondary
tool in the editor of the secondary tool. In this way, the output values of the primary tool
become the input values of the secondary tool. In the following figure, the Gripper Clearance
tool is set relative to a Locator tool and is able to position histograms around all of the objects
by creating new reference points with respect to the Locator results. The input locations are
shown under the Current Values section in the properties.
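The Relative To behavior can be thought of as re-running the secondary tool once per primary
result, with the secondary region transformed into each result's frame. The following sketch
illustrates this under the assumption that results are (x, y, angle) frames; the names are
hypothetical, not ACE identifiers.

```python
import math

def run_relative(primary_results, secondary_tool, offset):
    """Run a secondary tool once per primary result, as the Relative To
    property does. primary_results holds (x, y, theta_deg) frames from
    the primary tool; offset is the secondary region's (dx, dy)
    relative to each frame origin. All names are illustrative."""
    outputs = []
    for x, y, theta in primary_results:
        t = math.radians(theta)
        # Rotate the offset into the primary result's frame so the
        # secondary region follows each located object's orientation.
        rx = x + offset[0] * math.cos(t) - offset[1] * math.sin(t)
        ry = y + offset[0] * math.sin(t) + offset[1] * math.cos(t)
        outputs.append(secondary_tool(rx, ry))
    return outputs
```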
NOTE: Some tools display all instances in the image display. Some only dis-
play the region created with respect to the first instance and return the remainder
in the Results section. The additional instances can be viewed in the ACE Vision
Window as long as the property Show Results Graphics is enabled. Refer to the
Vision Window in Figure 8-194 (which is based on the tool in Figure 8-195
below).
A tool set relative to another tool can be used to create a Robot Vision Manager Sequence.
Refer to Vision Sequence on page 262 for more information.
Color Spaces
The term color space refers to numeric values (or percentages) of the visible color spectrum,
specifically organized to allow both digital and analog representations of color. A color space
is a spectrum or range of colors with specific values.
This section describes color spaces, color values, and how to define colors by those values.
The HSB color space arranges colors of each hue radially around a central axis of basic colors,
from white at the top to black at the bottom. Hue values are set in degrees from 0 to 360. Sat-
uration and brightness are set in percentages from 0 to 100%.
Hue is the quality of color perceived as the color itself. The hue is determined by the perceived
dominant wavelength, or the central tendency of the combined wavelengths, within the visible
spectrum.
Saturation is the purity of the color, or the amount of gray in a color. For example, a high
saturation value produces a very pure, intense color. Reducing the saturation value adds gray
to the color.
Brightness is the amount of white contained in the color. As the value increases, the color
becomes lighter and more white. As the value decreases, the color becomes darker and more
black.
NOTE: HSB is also referenced as HSL (Hue, Saturation, Luminance) and HSV
(Hue, Saturation, Value) in the ACE software.
The RGB color space uses combinations of red, green, and blue to create all colors. Red, green,
and blue values are expressed with a range of 0 to 255.
NOTE: ACE software also accepts the hexadecimal color value in the color
input field.
Settings for items such as color filters and reference colors can be adjusted with HSB, RGB, or
hexadecimal values. The following example shows these values for a common color.
Black: HSB 0, 0, 0 / RGB 0, 0, 0 / Hex #000000
Color Tolerances
Color tolerances can be applied to allow for slight color variations. Color tolerances can only
be set with HSB values.
A color tolerance value is distributed equally above and below the color value to which it
applies. For example, if the color hue value is 50 and the hue tolerance value is 20, the filter
will accept colors within a range of hue values from 30 to 70.
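Using the example above (hue 50 with a tolerance of 20 accepting hues from 30 to 70), a
tolerance check can be sketched as below. The wraparound handling reflects the circular
nature of Hue described later in the Labeling section; this is an illustration, not ACE code.

```python
def hue_in_tolerance(hue, target, tol):
    """Check whether a hue (0-360 degrees) lies within +/- tol of the
    target, accounting for wraparound (350 is only 20 away from 10)."""
    diff = abs(hue - target) % 360
    return min(diff, 360 - diff) <= tol

# The example from the text: target 50 with tolerance 20 accepts 30-70.
assert hue_in_tolerance(65, 50, 20)
assert not hue_in_tolerance(75, 50, 20)
```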
Finder Tools
Finder tools are used to identify objects and create detection points for location guidance.
The following Finder tools are described in this section.
Arc Finder
This tool identifies circular features on objects and returns the coordinates of the center of the
arc, the angle between the two ends, and the radius.
Arc Finder is most commonly used to return information about round edges or refine the ori-
entation of a located object. For example, the tool in Figure 8-198 identifies the arc created by
trapezoidal features on the chip. In this way, it can be used to locate circular patterns or
shapes within an object. If the arc to be located should only be in a certain position, the
guideline position and properties such as Search Mode can be adjusted to specify the desired
range of the location.
To create an Arc Finder tool, right-click Vision Tools in the Multiview Explorer and select Add
Finder and then Arc Finder. An Arc Finder tool will be added to the Vision Tools list.
The following figure identifies the specific segments of the Arc Finder tool as shown in the
editor. Refer to Arc Finder Segments on page 430.
Indicator Description
A Center
B Rotation
C Opening
D Radius
H Guideline
Use the table below to understand the Arc Finder tool configuration items.
Tool Links
  Image Source: Defines the image source used for processing by this vision tool.
Region of Interest
  Relative To: The tool relative to which this tool executes. The output values of the selected
  tool are the input values of this one.
  Search Region: Defines the location and size of the region (X, Y, radius, thickness,
  mid-angle position, arc angle degrees).
Properties
  Show Results Graphics: Specifies if the graphics are drawn in the Vision Window.
  Fit Mode: Select how the tool will calculate and return a valid arc from hypotheses.
Advanced Properties
  Arc Must Be Totally Enclosed: Specifies if the detected arc can exist outside of the region of
  interest. When enabled, the start and end points of the arc must be located on the angle
  boundary lines. Otherwise, the arc can enter or exit the region at any point.
  Conformity Tolerance: Set the maximum local deviation between the expected arc contours
  and the arc contours actually detected in the input image.
  Positioning Level: Set the configurable effort level of the instance positioning process. A
  value of 10 provides coarser positioning and lower execution time; a value of 100 provides
  high-accuracy positioning of arc entities. The setting range is 10 to 100.
  Subsampling Level: Set the subsampling level used to detect edges that are used by the tool
  to generate hypotheses. High values provide a coarser search and lower execution time than
  lower values. The setting range is 1 to 8.
Use the table below to understand the results of the Arc Finder tool.
Frame/Group: Index of the related result. It is associated with the tool that this tool is set
Relative To.
Radius: Radius of the arc, measured from the center determined by Arc X and Arc Y.
Opening: Angle (in degrees) measured between the two arc endpoints.
Rotation: Rotation of the region of interest, measured from the positive X-axis.
Average Contrast: Average contrast between light and dark pixels on either side of the arc,
expressed in gray level values.
Fit Quality: Normalized average error between the calculated arc contours and the actual
contours detected in the input image. Fit quality ranges from 0 to 100 where the best quality
is 100. A value of 100 means that the average error is 0. A value of 0 means that the average
matched error is equal to the Conformity Tolerance property.
Match Quality: Amount of matched arc contours for the selected instance expressed as a
percentage. Match quality ranges from 0 to 100 where the best quality is 100. A value of 100
means that 100% of the arc contours were successfully matched to the actual contours
detected in the input area.
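The descriptions of Fit Quality suggest a normalization in which 100 corresponds to zero
average error and 0 corresponds to an average error equal to the Conformity Tolerance.
Assuming that scale is linear (the manual does not state the exact formula), it could be
sketched as follows.

```python
def fit_quality(avg_error, conformity_tolerance):
    """Fit Quality as the table describes it: 100 when the average
    error is 0, and 0 when the average matched error equals the
    Conformity Tolerance. A linear scale in between is assumed."""
    q = 100.0 * (1.0 - avg_error / conformity_tolerance)
    return max(0.0, min(100.0, q))

# Example: an average error of half the tolerance gives a quality of 50.
assert fit_quality(0.5, 1.0) == 50.0
```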
Blob Analyzer
This tool uses pixel information within the region of interest to apply image segmentation
algorithms for blob detection. A blob is any region within a gray scale image with a range of
gray level values that differs from the adjoining areas of the region. The Blob Analyzer tool is
primarily used to find irregularly-shaped objects.
To create a Blob Analyzer tool, right-click Vision Tools in the Multiview Explorer, select Add
Finder and then Blob Analyzer. A Blob Analyzer tool will be added to the Vision Tools list.
Use the table below to understand the Blob Analyzer tool configuration items.
Tool Links
  Image Source: Defines the image source used for processing by this vision tool.
Region Of Interest
  Relative To: The tool relative to which this tool executes. The output values of the selected
  tool are the input values of this one.
Properties
  Show Results Graphics: Specifies if the graphics are drawn in the Vision Window.
  Allow Clipped Blobs: Enables the inclusion of blobs that have been cut off by the edge of the
  region of interest.
  Maximum Blob Count: Set the maximum number of blobs that the tool is able to return.
  Results Display Mode: Defines how results are rendered in the image display.
  Segmentation Parameters: Properties used by the tool to locate the blob. Refer to Blob
  Analyzer Segmentation Mode Editor on page 435 for more information.
  Blob Sorting: Select the order in which the blobs are processed and output. Most sorting
  options use the values in a specific result column. This is disabled by default.
  Calculate Blob Angle: Enables the calculation of each blob angle. This is enabled by default.
  The angle is calculated by collecting four additional properties if they are not already
  calculated, including Inertia Minimum and Inertia Maximum.
  Hole Filling Enabled: Enables all background pixels within the perimeter of a given blob to
  be included in the blob. Note that this affects the tool window but not the main Vision
  Window.
The Segmentation Mode editor is accessed by clicking the ellipsis next to the Segmentation
Parameters property. It controls the parameters that dictate which pixels are selected as blobs.
Blob Constraints determine the minimum and maximum area of a blob in calibrated units.
This is useful for filtering out individual or small groups of pixels that are returned as their
own blobs, but are not desired result instances.
Image Segmentation defines the method used to locate blobs. The following options can be
selected.
l Light: Creates blobs from all pixels brighter than the set gray level boundaries.
l Dark: Creates blobs from all pixels darker than the set gray level boundaries.
l Inside: Creates blobs from all pixels inside the set gray level boundaries.
l Outside: Creates blobs from all pixels outside the set gray level boundaries.
l Dynamic Light: Creates blobs from the pixels on the brighter side of a percentage
marker set by the user.
l Dynamic Dark: Creates blobs from the pixels on the darker side of a percentage marker
set by the user.
l Dynamic Inside: Creates blobs from the pixels between two percentage bounds set by
the user.
l Dynamic Outside: Creates blobs from the pixels outside of two percentage bounds set by
the user.
l HSL Inside: Creates blobs from all the pixels that fall within an HSL tolerance.
l HSL Outside: Creates blobs from all the pixels that fall outside an HSL tolerance.
Image Segmentation settings are made using the slider(s) shown at the bottom of the editor
(except for HSL Inside/Outside). The green area indicates which pixels in the histogram will be
included in the blobs and the white area indicates which ones will not be included as shown
in the following figure.
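As an illustration of how these modes select pixels, the following sketch applies fixed and
dynamic boundaries to a gray scale image held in a NumPy array. The interpretation of the
dynamic percentage marker as a fraction of the image's own gray level range is an assumption
made for illustration; the function is not part of ACE.

```python
import numpy as np

def segment(gray, mode, lo=None, hi=None, pct=None):
    """Select blob pixels from a gray scale image (0-255 array) in the
    spirit of the fixed and dynamic modes above. HSL modes are omitted.
    Dynamic modes are interpreted here as placing the boundary at a
    fraction (pct, 0-1) of the image's own gray level range."""
    if mode == "Light":
        return gray > hi
    if mode == "Dark":
        return gray < lo
    if mode == "Inside":
        return (gray >= lo) & (gray <= hi)
    if mode == "Outside":
        return (gray < lo) | (gray > hi)
    if mode in ("Dynamic Light", "Dynamic Dark"):
        cut = gray.min() + (gray.max() - gray.min()) * pct
        return gray > cut if mode == "Dynamic Light" else gray < cut
    raise ValueError(mode)
```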
HSL Inside and HSL Outside use a different viewer in the editor since they are based on HSL
color instead of brightness. Therefore, the histogram has no impact. Instead, the editor appears
as shown below and the histogram is replaced by the following properties. Refer to Color
Spaces on page 426 for more information about color definition.
To increase processing speed, Blob Analyzer provides adjustments to identify which specific
results to calculate and collect. Only by selecting the necessary options in Data Collection can
those results be properly calculated and displayed. For details on the results themselves, refer
to the Blob Analyzer Results on page 437. The list below details the options. Any number of
options can be selected.
l Chain Code Results: Refers to the sequence of direction codes that describes the bound-
ary of a blob. Unlike the other options in this list, selecting Chain Code Results does not
affect any results columns. It can be disabled for most applications.
l Extrinsic Inertial Results: Returns moments of inertia results. A moment of inertia meas-
ures the inertial resistance of the blob to rotation about a given axis. Extrinsic moments
of inertia measure the moment of inertia about the X-Y axes of the camera coordinate
system.
l Gray Level Results: Returns information about the gray level distribution within the
blob.
l Intrinsic Box Results: Returns information about the intrinsic bounding box, which is a
bounding box that has been rotated to fit the edges of the blob as tightly as possible.
l Perimeter Results: Returns data regarding the perimeter.
l Topological Results: Returns the Hole Count result.
Use the tables below to understand the results of the Blob Analyzer tool.
Frame/Group: Index of the related result. It is associated with the tool that this tool is set
Relative To.
Position Angle: Calculated angle of the blob origin with respect to the X-axis. Only available
when Calculate Blob Angle is enabled.
Elongation: The degree of dispersion of all pixels belonging to the blob around its center of
mass. This is calculated as the square root of the ratio of the moment of inertia about the
minor axis (Inertia Maximum) to the moment of inertia about the major axis (Inertia
Minimum). Only available when Calculate Blob Angle is enabled.
Bounding Box Center X: X-coordinate of the center of the bounding box with respect to the
camera coordinate system. Only available when Calculate Blob Angle is enabled.
Bounding Box Center Y: Y-coordinate of the center of the bounding box with respect to the
camera coordinate system. Only available when Calculate Blob Angle is enabled.
Bounding Box Height: Height of the bounding box with respect to the Y-axis of the coordinate
system. Only available when the angle is calculated.
Bounding Box Width: Width of the bounding box with respect to the X-axis of the coordinate
system. Only available when the angle is calculated.
Bounding Box Left: X-coordinate of the left side of the bounding box with respect to the
camera coordinate system. Only available when the angle is calculated.
Bounding Box Right: X-coordinate of the right side of the bounding box with respect to the
camera coordinate system. Only available when the angle is calculated.
Bounding Box Top: Y-coordinate of the top side of the bounding box with respect to the
camera coordinate system. Only available when the angle is calculated.
Bounding Box Bottom: Y-coordinate of the bottom side of the bounding box with respect to
the camera coordinate system. Only available when the angle is calculated.
Bounding Box Rotation: Rotation of the bounding box with respect to the X-axis of the camera
coordinate system.
Extent Left: Distance along the X-axis between the blob center of mass and the left side of the
bounding box. Only available when Calculate Blob Angle is enabled.
Extent Right: Distance along the X-axis between the blob center of mass and the right side of
the bounding box. Only available when Calculate Blob Angle is enabled.
Extent Top: Distance along the Y-axis between the blob center of mass and the top side of the
bounding box. Only available when Calculate Blob Angle is enabled.
Extent Bottom: Distance along the Y-axis between the blob center of mass and the bottom
side of the bounding box. Only available when Calculate Blob Angle is enabled.
Inertia Minimum: Moment of inertia about the major axis, which corresponds to the lowest
moment of inertia. Only available when Calculate Blob Angle is enabled.
Inertia Maximum: Moment of inertia about the minor axis, which corresponds to the highest
moment of inertia. Only available when Calculate Blob Angle is enabled.
Inertia X Axis: Moment of inertia about the X-axis of the camera coordinate system.
Inertia Y Axis: Moment of inertia about the Y-axis of the camera coordinate system.
Gray Level Mean: Average gray level of the pixels belonging to the blob.
Gray Level Range: Calculated difference between the highest and lowest gray levels found in
the blob.
Gray Std Dev: Standard deviation of gray levels for the pixels in the blob.
Intrinsic Bounding Box Center X: X-coordinate of the center of the bounding box with respect
to the X-axis (major axis) of the principal axes.
Intrinsic Bounding Box Center Y: Y-coordinate of the center of the bounding box with respect
to the Y-axis (minor axis) of the principal axes.
Intrinsic Bounding Box Height: Height of the bounding box with respect to the Y-axis (minor
axis) of the principal axes.
Intrinsic Bounding Box Width: Width of the bounding box with respect to the X-axis (major
axis) of the principal axes.
Intrinsic Bounding Box Left: Leftmost coordinate of the bounding box with respect to the
X-axis (major axis) of the principal axes.
Intrinsic Bounding Box Right: Rightmost coordinate of the bounding box with respect to the
X-axis (major axis) of the principal axes.
Intrinsic Bounding Box Top: Topmost coordinate of the bounding box with respect to the
Y-axis (minor axis) of the principal axes.
Intrinsic Bounding Box Bottom: Bottommost coordinate of the bounding box with respect to
the Y-axis (minor axis) of the principal axes.
Intrinsic Bounding Box Rotation: Rotation of the intrinsic bounding box corresponding to the
counterclockwise angle between the X-axis of the bounding box (major axis) and the X-axis of
the camera coordinate system. Only available when Calculate Blob Angle is enabled.
Intrinsic Extent Left: Distance along the major axis between the blob center of mass and the
left side of the intrinsic bounding box.
Intrinsic Extent Right: Distance along the major axis between the blob center of mass and the
right side of the intrinsic bounding box.
Intrinsic Extent Top: Distance along the minor axis between the blob center of mass and the
top side of the intrinsic bounding box.
Intrinsic Extent Bottom: Distance along the minor axis between the blob center of mass and
the bottom side of the intrinsic bounding box.
Convex Perimeter: Perimeter calculated based on projections made at four different
orientations: 0°, 45°, 90°, and 180°. The average diameter calculated from these projections
is multiplied by pi to obtain this result.
Raw Perimeter: Sum of the pixel edge lengths on the contour of the blob. This result is
sensitive to the blob's orientation with respect to the pixel grid, so results may vary greatly.
Unless blobs are non-convex, Convex Perimeter results provide greater accuracy.
Gripper Clearance
This tool uses histogram analysis to determine which parts can be picked without interference.
It is configured as a series of rectangular histograms positioned around a part. The histograms
are often set relative to a finder tool, such as Locator or Shape Search 3, so that they are posi-
tioned according to part locations.
In the Gripper Clearance properties, you can define parameters that determine if the area
around a part has clearance necessary for the gripper. These parameters are applied to the his-
tograms to filter the parts. Instances passed through the filter can be picked by the gripper.
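Conceptually, the filtering works like the sketch below: each candidate part passes only if
every histogram region around it looks empty. Here "empty" is approximated as a dark
background whose mean gray level stays under a threshold; the function, threshold, and
rectangle format are illustrative assumptions, not ACE behavior.

```python
import numpy as np

def clearance_ok(gray, regions, max_mean=40):
    """Pass a part only if every histogram region around it is 'empty'.
    Here 'empty' is approximated as a dark background whose mean gray
    level stays below max_mean; regions are (x, y, w, h) rectangles in
    pixel coordinates. Threshold and layout are illustrative only."""
    for x, y, w, h in regions:
        patch = gray[y:y + h, x:x + w]
        if patch.mean() > max_mean:
            return False  # something bright (e.g. another part) intrudes
    return True
```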
To create a Gripper Clearance tool, right-click Vision Tools in the Multiview Explorer, select
Add Finder and then Gripper Clearance. A Gripper Clearance tool will be added to the Vision
Tools list.
Use the table below to understand the Gripper Clearance tool configuration items.
Tool Links
  Image Source: Defines the image source used for processing by this vision tool.
Region Of Interest
  Relative To: The tool relative to which this tool executes. The output values of the selected
  tool are the input values of this one.
Properties
  Show Results Graphics: Specifies if the graphics are drawn in the Vision Window.
  Show Result Image Histogram Regions: Specifies if the histogram regions are drawn in the
  ACE Vision Window. Show Results Graphics must be enabled for this to work.
  Tail Black Gray Level Value: Percentage of pixels to ignore at the dark end of the gray level
  distribution. This is calculated after the pixels affected by the Threshold Black property
  have been removed.
  Tail White Gray Level Value: Percentage of pixels to ignore at the light end of the gray level
  distribution. This is calculated after the pixels affected by the Threshold White property
  have been removed.
The histograms are measured using the Histogram Pane located beneath the properties area.
The Add button ( ) and Delete button ( ) are used to create and remove his-
tograms from the tool. The Histogram Pane displays the properties for the currently selected
histogram tab. The properties are described in the following table.
Offset: Defines the center coordinates of the histogram region with respect to the reference
point defined by Offset in the main tool properties.
Region Name: User-defined name of the histogram. This is displayed in the lower left corner
of the histogram region in the Vision Window.
Search Region: Defines the size (width, height) of the histogram region.
Gripper Clearance returns each instance that has passed all of the histogram range checks.
For these instances, the histogram analysis is available in additional columns. These are not
shown by default and must be added using the Results Column Editor. The histogram numbers
vary and are denoted in the table below as <number> instead of actual values.
NOTE: All results are calculated after applying tails and thresholds.
Use the table below to understand the results of the Gripper Clearance tool.
Frame/Group: Index of the related result. It is associated with the tool that this tool is set
Relative To.
Gray Level Mean <number>: Average gray level within the region of this histogram.
Histogram Pixel Count <number>: Number of pixels within the region of this histogram.
Variance <number>: Variance of the gray level values of the pixels within the region of this
histogram.
Labeling
This tool analyzes the image with a specified color and returns color masses as labels. It is
primarily used to locate instances of an object that can vary in shape but are always similar in
color.
To create a Labeling tool, right-click Vision Tools in the Multiview Explorer, select Add Finder
and then Labeling. A Labeling tool will be added to the Vision Tools list.
Use the table below to understand the Labeling tool configuration items.
Tool Links
  Image Source: Defines the image source used for processing by this vision tool.
Region Of Interest
  Relative To: The tool relative to which this tool executes. The output values of the selected
  tool are the input values of this one.
  Z-Order: Available in all region types. Sets the order for overlapping regions of interest.
  Higher numbers will be in front and resolved first.
  Search Region (Width, Height): Sets the height and width of the rectangular region. Only
  available in Rectangles.
  Radius X/Y: Defines the distance from the center to the exterior along the X- and Y-axes,
  respectively. Only available in Ellipses.
  Start/End Angle: Defines the start and end angles of the Wide Arc bounds. Angles are
  measured in degrees counterclockwise from the positive X-axis. The arc is created clockwise
  starting from the Start Angle and ending at the End Angle. Only available in WideArcs.
Properties
  (Label Condition) Outside Condition: Defines the behavior of the area outside the search
  area. When enabled, the entirety of the area outside the regions of interest is returned as
  the extracted color, connecting all detected masses that touch the edges of the regions.
  (Label Condition) Sort Condition: Select the order in which the returned masses are
  processed and output. The order is descending (largest to smallest) by default; checking the
  Ascending box sorts them smallest to largest.
  Show Results Graphics: Specifies if the graphics are drawn in the Vision Window.
  Hole Plug Color: Defines which color fills in detected holes in the masses. This is disabled
  by default.
Advanced Properties
  Additional Data Set: Allows additional data values to be added to the output. Select the
  types of results based on what is needed.
  Image Type: Set the color scheme of the image to display. All Colors shows all extracted
  colors, Binary outputs all extracted colors as white and everything else as black, and the
  remaining options extract only the masses that match one defined color.
  Max Result Display Count: Set the number of results to display. The tool will not output
  more instances than this value.
Color Region Pane
This pane shows the color ranges to be extracted from the image. This is essential when
using a color image. The color range(s) can be chosen by specifying the minimum and
maximum for hue, saturation, and value. Alternatively, the color range(s) can be selected in
the image: right-click on a pixel to select a specific color, or right-click and drag to select a
range. The detected colors will be displayed in the Color Setting region and the user can
define the limits based on these data points.
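For intuition, the same extract-by-color-then-group-into-masses idea can be sketched with
OpenCV. This is not what ACE uses internally, but it follows the same steps: threshold an HSV
range, then treat each connected component as a mass. Note that OpenCV scales hue to 0-179
rather than 0-360; the file name and range values below are placeholders.

```python
import cv2
import numpy as np

# Extract color masses: threshold the image by an HSV range, then treat
# each connected region of matching pixels as one labeled mass.
img = cv2.imread("parts.png")                      # placeholder image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
mask = cv2.inRange(hsv, np.array([40, 80, 80]), np.array([80, 255, 255]))
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

# Sort masses largest-to-smallest, matching the default Sort Condition.
order = sorted(range(1, n), key=lambda i: stats[i, cv2.CC_STAT_AREA],
               reverse=True)
for i in order:
    print("mass", i, "area", stats[i, cv2.CC_STAT_AREA],
          "center", centroids[i])
```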
The Color Region Pane is split into the following three sections.
1. Color Selection
This shows the colors that have been selected. Up to eight different colors can be extrac-
ted each time the tool is run. The user can determine which colors are extracted from
among the selected ones by checking or unchecking the boxes next to them. For
example, deselecting Colors 2 and 3 in Figure 8-208 will cause only the green regions to
be extracted.
2. Color Setting
This highlights the regions on the color spectrum that are used by the selected colors.
They are also defined by the Hue, Saturation, and Value fields below. The highlighted
regions can be modified by dragging them in the color spectrums or by adjusting the
values in the fields below. The selected color range is shown as a rectangle with a black
and white border. Other defined ranges will have a black border. The colors of the pixels
in the most recently selected range appear as gray markers on the rectangular field and
as red marks in the Value slider on the right.
NOTE: The X-axis of the rectangular color region ranges from 0 to 360,
represented in the number fields below it. Because the Hue range is circular,
it is possible for a red color range to begin close to the right side of the
region and end on the left side.
Checking the box next to Exclude this color prevents the color from being extracted.
Checking the box next to Auto fit allows the right-click shortcut to be used to define this
color range. This is enabled by default.
Advanced Settings
This provides two additional properties that affect the resulting image. Background Color
defines the color that will be set everywhere that is not an extracted mass. Color Inv. returns
all masses as the background color and everywhere else as a white field.
Labeling Results
To reduce execution time, Labeling only returns result data for properties specified in the
extraction conditions. All other result data is not evaluated and is returned as 0.
Use the table below to understand the results of the Labeling tool.
Frame/Group: Index of the related result. It is associated with the tool that this tool is set
Relative To.
Elliptic axis angle: Only calculated if Elliptic Results is selected in Additional Data Set. Angle
of the mass's calculated elliptic major axis with respect to the positive X-axis.
Elliptic major axis: Only calculated if Elliptic Results is selected in Additional Data Set.
Length of the mass's calculated elliptic major axis.
Elliptic minor axis: Only calculated if Elliptic Results is selected in Additional Data Set.
Length of the mass's calculated elliptic minor axis.
Elliptic ratio: Only calculated if Elliptic Results is selected in Additional Data Set. Ratio of the
minor axis to the major axis.
Fit rect center X: Only calculated if Rotated Rectangle Results is selected in Additional Data
Set. X-coordinate of the rotated rectangle created to best match the mass.
Fit rect center Y: Only calculated if Rotated Rectangle Results is selected in Additional Data
Set. Y-coordinate of the rotated rectangle created to best match the mass.
Fit rect major axis: Only calculated if Rotated Rectangle Results is selected in Additional Data
Set. Major axis of the rotated rectangle.
Fit rect minor axis: Only calculated if Rotated Rectangle Results is selected in Additional Data
Set. Minor axis of the rotated rectangle.
Fit rect ratio: Only calculated if Rotated Rectangle Results is selected in Additional Data Set.
Ratio between the two axes.
Inscribed circle X: Only calculated if Inner Circle Results is selected in Additional Data Set.
X-coordinate of the mass's inscribed circle.
Inscribed circle Y: Only calculated if Inner Circle Results is selected in Additional Data Set.
Y-coordinate of the mass's inscribed circle.
Inscribed circle R: Only calculated if Inner Circle Results is selected in Additional Data Set.
Radius of the mass's inscribed circle.
Circum. circle X: Only calculated if Outer Circle Results is selected in Additional Data Set.
X-coordinate of the mass's circumscribed circle.
Circum. circle Y: Only calculated if Outer Circle Results is selected in Additional Data Set.
Y-coordinate of the mass's circumscribed circle.
Circum. circle R: Only calculated if Outer Circle Results is selected in Additional Data Set.
Radius of the mass's circumscribed circle.
Number of holes: Only calculated if Holes Number Results is selected in Additional Data Set.
Line Finder
This tool identifies linear features on objects and returns the angle of inclination and the
endpoint coordinates of the detected line.
Line Finder is most commonly used to return information about straight edges. For example,
the tool in Figure 8-210 is used to locate the line created by the left edge of the wire square.
In this way, it can be used to locate and measure straight features within an object. Multiple
detected lines can be used to calculate intersection points and refine a pick position based on
part object geometries. If the line to be located should only be in a certain position, the
guideline position and properties such as Search Mode can be used to decrease the detection
range.
To create a Line Finder tool, right-click Vision Tools in the Multiview Explorer, select Add
Finder and then Line Finder. A Line Finder tool will be added to the Vision Tools list.
Use the table below to understand the Line Finder tool configuration items.
Tool Links
  Image Source: Defines the image source used for processing by this vision tool.
Region Of Interest
  Relative To: The tool relative to which this tool executes. The output values of the selected
  tool are the input values of this one.
  Search Region: Defines the location and size of the region.
Properties
  Show Results Graphics: Specifies if the graphics are drawn in the Vision Window.
Advanced Properties
  Conformity Tolerance: Set the maximum local deviation between the expected line contours
  and the line contours actually detected in the input image.
  Positioning Level: Set the configurable effort level of the instance positioning process.
  Subsampling Level: Set the subsampling level used to detect edges that are used by the tool
  to generate hypotheses.
Line Finder Results
Use the table below to understand the results of the Line Finder tool.
Frame/Group: Index of the related result. It is associated with the tool that this tool is set
Relative To.
Angle: Angle (in degrees) of the detected line, measured from the positive X-axis of the
camera frame.
Average Contrast: Average contrast between light and dark pixels on either side of the found
line, expressed in gray level values.
Fit Quality: Normalized average error between the calculated line contours and the actual
contours detected in the input image. Fit quality ranges from 0 to 100 where the best quality
is 100. A value of 100 means that the average error is 0. A value of 0 means that the average
matched error is equal to the Conformity Tolerance property.
Match Quality: Amount of matched line contours for the selected instance expressed as a
percentage. Match quality ranges from 0 to 100 where the best quality is 100. A value of 100
means that 100% of the line contours were successfully matched to the actual contours
detected in the input area.
Locator
This tool identifies objects in an image based on geometries defined in one or more Locator
Models. Because of its speed, accuracy, and robustness, the Locator is the ideal frame provider
for ACE Sight inspection tools.
The Locator tool functions by detecting edges in the input images and then using them to gen-
erate a vectorized description of the image. The contours are generated on two coarseness
levels: Outline and Detail. Outline is used to generate hypotheses of potential instances while
Detail is used to confirm the hypotheses and refine the location. The detected contours are then
compared to the model(s) to identify instances of the model(s) within the image.
A Locator can be set relative to other tools, such as another Locator or a Shape Search 3. This
allows the Locator tool to be used to locate features, sub-features, or sub-parts on a parent
object.
NOTE: The Locator tool will not work until a Locator Model has been created.
Refer to Locator Model on page 469 for more information.
Locator provides disambiguation between multiple similar models, but it typ-
ically has longer execution times than Shape Search 3. Shape Search 3 is often
used for simple models while Locator is better when handling multiple models
or situations where the model training process requires user control.
To create a Locator tool, right-click Vision Tools in the Multiview Explorer, select Add
Finder and then Locator. A Locator tool will be added to the Vision Tools list.
Use the table below to understand the Locator tool configuration items.
Tool Links
  Image Source: Defines the image source used for processing by this vision tool.
Region Of Interest
  Relative To: The tool relative to which this tool executes. The output values of the selected
  tool are the input values of this one.
Properties
  Show Results Graphics: Specifies if the graphics are drawn in the Vision Window.
  Results Display Mode: Defines how the results are rendered in the image display. Marker
  shows only the origin marker of the model, Model shows the outline of the detected model,
  and Marker and Model shows both.
Advanced Properties
  Conformity Tolerance: Set the maximum local deviation between the expected model
  contours of an instance and the contours actually detected in the input image.
  Contour Detection: Set how the contour detection parameters are configured.
  Instance Ordering: Select the order in which the instances are processed and output.
  Nominal Rotation: Set the required rotation range for valid instances.
  Nominal Scale Factor: Set the required scale factor for an object instance to be recognized.
  Search Based On Outline Level Only: Set to cause the tool to only use the Outline Level
  contours of the model to detect instances.
  Show Model Name: Set whether the model name will be displayed in the Vision Window.
Locator Models Pane
The Models pane is used to edit the models used in this tool. To add a model, click and drag
the Locator Model object from the Multiview Explorer to the Models Pane. Models can be
deleted from this pane by selecting them and clicking the Delete button.
NOTE: The order in which models are added defines the Model Index result
value. Models cannot be reordered once they have been added to the Models
Pane.
The properties in this section modify the quality and quantity of contours that are generated
from the input image.
Contour Detection
This property sets how the contour detection parameters are configured. For most applications,
Contour Detection should be set to All Models, where the contour detection parameters are
optimized by analyzing the parameters that were used to build all the currently active models.
Custom contour detection should only be used when the default values do not work correctly.
Setting this to Custom allows the user to specify Outline Level, Detail Level, and Tracking
Inertia as described below (a short sketch of the two-level idea follows this list).
l The Outline Level contours are used to rapidly identify potential instances of the object.
Coarseness values range from 1 to 16, where 1 is full resolution and higher values
subsample the image by that factor. For example, at level 8, the resolution is 8 times lower
than an image at full resolution. Lower values of Outline Level lead to higher execution times.
l The Detail Level contours are used to confirm recognition and refine the position of
valid instances. For images that are not in perfect focus, better results will be obtained
with a higher value of Detail Level. To obtain high-accuracy object location, use images
with sharp edges and set Detail Level to the lowest coarseness value. Detail Level and
Outline Level have the same range, but Detail Level must have a lower value. Lower
values of Detail Level lead to higher execution times.
l Tracking Inertia defines the longest gap that can be closed to connect two edge elements
when building the source contours. It is set on a scale of 0 to 1. Higher values can help
close small gaps and connect contours that would otherwise be broken into smaller sec-
tions.
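A rough sketch of the two-level idea, assuming simple subsampling and gradient-based edges
(the actual contour extraction in ACE is more sophisticated than this):

```python
import numpy as np

def edges_at_level(gray, level):
    """Subsample the image by the coarseness factor and return an edge
    magnitude map. Level 1 is full resolution; level 8 inspects an
    image 8 times smaller, which is faster but coarser."""
    coarse = gray[::level, ::level].astype(float)
    gy, gx = np.gradient(coarse)
    return np.hypot(gx, gy)

# Outline Level (coarse) proposes instances; Detail Level (finer, and
# always a lower value than Outline Level) confirms and refines them.
gray = np.random.default_rng(0).integers(0, 256, (480, 640))
outline_edges = edges_at_level(gray, 8)
detail_edges = edges_at_level(gray, 2)
```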
Contrast Polarity
This property defines the polarity change in gray level values between an object and its back-
ground, which can be dark to light, light to dark, or a combination of both. The reference polar-
ity for an object is defined by its Model with respect to the polarity in the image on which the
Model was created. It can be set to the following options.
l Normal searches only for instances that match the polarity of the Model. For example, if
the Model is a dark object on a light background, the Locator searches only for dark
objects on a light background (refer to the middle image in the following figure).
l Reverse searches only for instances that have opposite polarity from the Model. For
example, if the Model is a dark object on a light background, the Locator searches only
for light objects on a dark background (refer to the right image in the following figure).
l Normal And Reverse searches for both of the above. This will not include cases where
polarity is reversed at various locations along the edges of an object.
l Do Not Care indicates that polarity should not be taken into account when searching for
instances. This is useful when a model has multiple local polarity changes, such as in
the checkerboard background of the figure below, where Do Not Care must be selected
in order for the object to be detected.
Contrast Threshold
This property sets the minimum contrast needed for an edge to be detected in the input image.
The threshold value is interpreted as the change in gray level value required to detect con-
tours.
Contrast Threshold can be set to Adaptive Low Sensitivity, Adaptive Normal Sensitivity,
Adaptive High Sensitivity, or Fixed Value. Higher values reduce sensitivity to contrast, redu-
cing noise and lowering the amount of low-contrast edges. Conversely, lower values increase
sensitivity and add a greater amount of edges to the contours at the expense of adding more
noise, which can generate false detections and/or increase execution time .
Adaptive thresholds set a sensitivity level based on image content. This provides flexibility to
variations in image lighting conditions and variations in contrast during the Search process,
and can generally be used for most applications. The Fixed Value option allows the user to set
the desired value on a scale from 1 to 255, corresponding to the minimum step in gray level
values required to detect contours. This is primarily used when there is little variance in light-
ing conditions.
Search Based On Outline Level Only
This property restricts the search to using only the Outline Level source contours to search,
recognize, and position object instances. Detail Level contours are ignored completely.
Enabling this can increase speed with possible loss of accuracy and detection of false
instances.
An Outline-based search is mainly used for time critical applications that do not require a
high-positioning accuracy or that only need to check for presence/absence of objects. To be
effective, this type of search requires clean run time images that provide high-contrast contours
with little or no noise or clutter.
The properties in this section are constraints that restrict the Locator’s search process.
Conformity Tolerance
This property defines the maximum allowable local deviation of instance contours from the
expected model contours. Its value corresponds to the maximum distance by which a matched
contour can deviate from either side of its expected position in the model. Portions of the
instance hypothesis that are not within the Conformity Tolerance range are not considered to
be valid. Only the contours within Conformity Tolerance are recognized and calculated for the
Minimum Model Recognition search constraint. See the following figure for an example.
l Use Default: Enabling this causes the Locator tool to calculate a default value by
analyzing the calibration, contour detection parameters, and search parameters. This box
must be unchecked if either of the other options is going to be used.
l Range Enabled: Enables the use of the manually-set tolerance value. This is set by
Tolerance, which defines the maximum difference in calibrated units by which a matched
contour can deviate from either side of its expected position. It can be set on a range of 1
to 100.
Nominal Rotation
This property constrains the rotation range within which Locator can detect instances. Any
possible instance must satisfy this property in order to be recognized. By default, the range is
set from -180 to 180 degrees. This can be changed depending on the needs of the
application. The rotation range spans counterclockwise from the minimum to the maximum
angle.
Alternatively, Use Nominal can be enabled. This applies the value in the Nominal field to the
Locator, which searches for instances within a tolerance of that angle. Any instances found
within the tolerance will automatically be set to the angle in the Nominal field. If the actual
angle of rotation is required, it is recommended to leave the Use Nominal box disabled and
instead enter a small range, such as 89 to 91 degrees.
NOTE: If the trained Locator Model has rotational symmetry, it is possible that
Nominal Rotation will cause an instance to be detected that is not actually
within the target angle range. In this case, raising the Minimum Model Per-
centage can be used to prevent the symmetric instance from being detected.
Nominal Scale Factor
This property sets the required scale factor for an object to be recognized. Similar to Nominal
Rotation, it can constrain the range of scale factors for which the Locator will search by either
setting a minimum and maximum value or a fixed nominal value. By default, the range is set
from 0.9 to 1.1. Any possible instance must lie within this range in order to be output as an
identified instance. Note that the scale factor parameter has one of the most significant
impacts on search speed, and using a large scale factor range can significantly slow down the
process. The range should be configured to include only the scale factors that are expected
from the application.
Alternatively, Use Nominal can be enabled to make the Locator search only for the value in
the Nominal field. However, if objects have a slight variation in scale, the objects may be
recognized and positioned with reduced quality because their true scale factor will not be
measured. In such a case, it is recommended to configure a narrow scale range instead of a
nominal value.
The Minimum, Maximum, and Nominal values can all be set within a range of 0.1 to 10.
Positioning Level
This property modifies the positioning accuracy. This can be set on a range from 0 to 10,
although the default setting of 5 is the optimized and recommended setting for typical applic-
ations. Lower values provide coarser positioning with faster execution times while higher val-
ues provide high-accuracy positioning at the cost of speed.
Positioning Level does not have a large impact on execution speed. However, in applications
where accuracy is not critical, the value can be decreased to lower the processing time.
Recognition Level
This property slightly modifies the level of recognition effort. This can be set on a range from
0 to 10, but the default setting of 5 is the optimized and recommended setting for typical
applications. Lower values provide faster searches that may miss partially blocked instances
while higher values more accurately recognize all objects in cluttered or noisy images at the
cost of speed.
When adjusting the Recognition Level, it is important to repeatedly test the application to find
the optimum speed at which the process will still find all necessary objects. If the recognition
level is too low, some instances may be ignored, but if it is too high, the application may run
too slowly.
Recognition Level does not affect positioning accuracy.
The properties in this section constrain how the tool interacts with the model.
Minimum Required Features
This property defines the percentage of required features that need to be recognized for the
Locator to accept a valid instance of an object. In most applications, a feature in the Locator Model
will be set as Required if it needs to be present in every single instance, but this property
allows some flexibility. Minimum Required Features is set as a percentage of all required fea-
tures on a range from 0.1 to 100. Note that this parameter is expressed in terms of a percentage
of the number of required features in a model and does not consider the amount of contour
each required feature represents.
Refer to Locator Model Feature Selection Pane on page 471 for information on how to set a fea-
ture as required.
This property determines if disambiguation is applied to the detected instances to resolve ambi-
guity between similar models by analyzing distinguishing features. This is enabled by default
and should remain enabled in most applications. Disabling this will significantly reduce the
time needed to learn or relearn models, but it prevents the Locator from differentiating between
similar models. This should only be done in applications that require many different models.
This property determines if the models can be optimized interactively by building a model
from multiple instances of an object. When enabled, the user can build an optimized model by
creating an initial Locator Model and then running the Locator. Each new instance of the
object found by the Locator is analyzed and compiled into the current optimized model. Strong
features that recur frequently in the analyzed instances are retained in the optimized model.
Once the model is considered satisfactory, the optimized model can be saved.
By default, this property is disabled and can remain disabled for most applications. It may be
useful for applications where the objects have a variable shape.
This property defines points used to create an optimization model. It is set as a percentage of
the points on a model contour actually used to locate instances. For example, when it is set to
the default value of 50%, one out of every two points is used. This can be set on a range from
0.1 to 100. Higher values can increase the accuracy of the optimized model but incur longer
optimization time while lower values lower the accuracy while improving speed.
Show Model Name
This property determines if the model name is displayed in the Vision Window. Enabling this
will display the name of the respective Locator Model tool name(s) in the tool window and the
Vision Window. This is only evaluated if Show Results Graphics is enabled.
The properties in this section constrain how the tool outputs instances.
Instance Ordering
This property sets the order in which object instances are output. In general, it is adjusted by
opening the dropdown menu and changing the method. Reference values may also be
required. Several of the orderings are sketched in the example after this list.
l Directional: Four of the options are Left to Right, Right to Left, Top to Bottom, and
Bottom to Top. This refers to the position of the instance within the image and is useful for
pick-and-place applications in which parts that are farther down a conveyor must be
picked first.
l Quality: The instances are ordered according to their Match Quality. If two instances
have the same Match Quality, then they are sorted by their Fit Quality. Note that this set-
ting can significantly increase the search time because the tool cannot output instance
results until it has found and compared all instances to determine their order. The time
required to output the first instance corresponds to the total time needed to search the
image and analyze all potential instances. The time for additional instances is then zero
because the search process is already complete.
l Distance: Two of the options are Distance Image and Distance World. In both, the
instances are ordered according to their proximity to a user-defined point in the camera
coordinate system, as defined by the fields Reference X and Reference Y. In Distance
Image, these fields are in terms of pixels while in Distance World they are expressed in
calibrated length units.
l Shading Consistency: The instances are ordered according to the custom shading area
created in the model. If no Custom Shading Area is defined in the model, the locator
uses the entire model area for shading analysis. This mode is useful when the shading
information can assist in discriminating between similar hypotheses. This is a require-
ment for color processing of models and also often used for ball grid array applications,
as illustrated in the figures below.
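Several of these orderings reduce to simple sort keys. The sketch below illustrates the
directional, quality, and distance modes under the assumption that each instance exposes x,
y, and quality values; the dictionary layout is hypothetical, not the ACE result format.

```python
import math

def order_instances(instances, mode, ref=None):
    """Sketch of several Instance Ordering modes. Each instance is a
    dict with 'x', 'y', 'match_quality', and 'fit_quality' keys
    (a hypothetical layout, not the ACE result format)."""
    if mode == "Left to Right":
        return sorted(instances, key=lambda i: i["x"])
    if mode == "Right to Left":
        return sorted(instances, key=lambda i: i["x"], reverse=True)
    if mode == "Top to Bottom":
        return sorted(instances, key=lambda i: i["y"])
    if mode == "Quality":
        # Match Quality first; Fit Quality breaks ties. Note the text's
        # caveat: all instances must be found before the first is output.
        return sorted(instances,
                      key=lambda i: (i["match_quality"], i["fit_quality"]),
                      reverse=True)
    if mode in ("Distance Image", "Distance World"):
        # Order by proximity to the user-defined reference point
        # (pixels for Distance Image, calibrated units for Distance World).
        return sorted(instances,
                      key=lambda i: math.hypot(i["x"] - ref[0],
                                               i["y"] - ref[1]))
    raise ValueError(mode)
```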
Minimum Clearance
This property sets the minimum percentage of the model bounding box area that must be free
of other instances to consider an object instance as valid. It is disabled by default. When
enabled, the Locator tool scans the bounding boxes of all instances for obstacles, such as other
instances. If the amount of obstacle-free space is less than the bounding box percentage listed
in the Minimum Clearance field, then the instance is not returned.
Enabling this property may significantly increase the search time. It is primarily intended for
pick-and-place applications to confirm that objects have the necessary clearance to be picked.
Minimum Clearance also activates the computation of the Clear Quality result for each
instance.
Output Symmetric Instances
This property defines how the Locator will handle symmetric (or nearly symmetric) objects. It
is disabled by default, causing the search process to output only the best quality instance of a
symmetric object. If enabled, all possible symmetries of a symmetric object will be output. This
can significantly increase execution time when there are many possible symmetries of an
object, such as when the object is circular.
Overlap Configuration
This property determines if the results of the tool will be checked for overlaps. If any instances
overlap, they will be excluded from the results. Note that instances will qualify as overlapping
if their bounding boxes overlap, even if the models themselves do not. If overlapping bound-
ing boxes are acceptable to the application or the objects have unique shapes that cause super-
fluous overlap, it is recommended to instead use Minimum Clearance to perform this check.
Timeout
This property controls the elapsed time after which the Locator aborts its search process. This
period does not include the model learning phase. When this value (in milliseconds) is
reached, the instances already recognized are output by the Locator and the search is aborted.
Timeout can be set on a range from 1 to 60,000. It can also be disabled by deselecting the
Enable box.
Locator Results
Use the table below to understand the results of the Locator tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To. This will only be different if the Locator is set relative to another
tool.
Model Index Index of the model located for each instance. If only one model is used, this
will be the same for every instance.
Model Name Name of the model located for each instance. Each Model Name is identical to
the associated Locator Model tool name.
Fit Quality Normalized average error between the matched model contours and the
actual contours detected in the input image. Fit quality ranges from 0 to 100
where the best quality is 100. A value of 100 means that the average error is
0. A value of 0 means that the average matched error is equal to the Con-
formity Tolerance.
Match Quality Amount of matched model contours for the selected object instance
expressed as a percentage. Match quality ranges from 0 to 100 where the
best quality is 100. A value of 100 means that 100% of the model contours
were successfully matched to the actual contours detected in the input
image.
Clear Quality Measurement of the clear area surrounding the specified object instance.
Clear quality ranges from 0 to 100 where the best quality is 100. A value of
100 means that the instance is completely free of obstacles. If Minimum Clearance is disabled, this value is 100.
Scale Factor Relative size of the instance with respect to its associated model.
Symmetry Index of the instance of which this instance is a symmetry. Output Sym-
metric Instances must be enabled.
Time Time in milliseconds needed to recognize and locate the object instance.
NOTE: The result columns Fit Quality, Match Quality, and Clear Quality do not
directly correlate with the Minimum Model Percentage property. This is because
Minimum Model Percentage is compared to the initial coarse outline while these results originate from the refined detail search.
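Additional Information: Read literally, the Fit Quality definition maps the average contour error linearly onto the 0 to 100 range against the Conformity Tolerance. The following Python sketch shows that interpretation; it is an assumption for illustration, not a documented formula.

def fit_quality(average_error, conformity_tolerance):
    # 100 when the average error is 0; 0 when it equals the tolerance.
    ratio = min(max(average_error / conformity_tolerance, 0.0), 1.0)
    return 100.0 * (1.0 - ratio)

print(fit_quality(0.0, 2.0))  # 100.0
print(fit_quality(0.5, 2.0))  # 75.0
print(fit_quality(2.0, 2.0))  # 0.0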
Locator Model
This model tool describes the geometry of an object to be found by the Locator tool.
To create a Locator Model, right-click Vision Tools in the Multiview Explorer, select Add Finder
and then Locator Model. A Locator Model will be added to the Vision Tools list.
NOTE: The Virtual Camera should be calibrated before any Locator Models are created. If a model is created before the camera is calibrated, applying a calibration afterward will cause the model geometries to scale incorrectly.
Use the table below to understand the Locator Model configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Training Contrast Threshold Set the minimum contrast needed for an edge to be
detected in the input image and used for arc com-
putation. This threshold is expressed in terms of a
step in gray level value.
Minimum Feature Size Set the minimum length of a feature (in mil-
limeters) required for it to be selected. This has no
impact on feature detection, only feature selection.
The region of interest defines where Locator Model will look for the contours defined by con-
trast changes. Any contours located outside the region will not be detected or considered to be
part of the model. Therefore, it is best to set the region as close to the part as possible so it can
be identified without cropping any part edges.
The Locator Model region of interest can be set by dragging the corners of the green box in the
display window or by modifying the numbers in the Offset and Search Region properties. In
the property fields, the numbers can either be entered manually or changed using the up /
down arrows for each value. Note that only the property fields can be used to rotate the region.
The model origin, depicted as a yellow frame, defines the position and orientation of the pick
location. It can be set in the following ways.
• Manually drag the yellow origin indicator to the desired location. Adjust the orientation by clicking and dragging the rotation symbol. This is the best method for irregularly-shaped parts with off-center masses. The coordinates in the property fields will automatically adjust to the new position.
• Manually enter a desired location in the Origin Offset field. This is typically used to make small adjustments on the origin position.
• Click the Center button in the Feature Selection pane without a feature selected. The origin will be centered in the model's region of interest. This may not be optimal for irregularly-shaped parts and depends entirely on the size of the region of interest. This does not affect the origin orientation. If a feature is selected when Center is clicked, the origin will be centered on the feature.
• Select a feature in the Feature Selection pane and click the Center button. The origin will be centered on that feature. This does not affect the origin orientation.
The Custom Model Identifier can be any integer from 0 to 10000. It is defined in the property’s
dropdown menu, as shown in the following figure.
NOTE: The Enable box must be checked before a value can be entered for the
identifier. If this box is not checked, the Locator tool will assign numbers auto-
matically.
The Feature Selection Pane provides more direct control over the model selection process. This pane lists all features that have been detected in the model within the following columns.
• Included: Select if the feature is included in the trained model.
• Location: Select if the feature's location should be used to define the location of the part. This is enabled by default and should be disabled for unstable features with variable location. In this way, the feature is used to recognize the part but not to refine its location.
• Required: Select if an instance requires this feature to be a viable candidate. The Minimum Required Features property in the Locator tool also governs these. If a feature is not Included, it cannot be Required.
• Length (mm): The length of the feature in millimeters.
The pane also has five buttons that help to define the model as described below.
• Center: Centers the origin of the model on the region of interest or a specific feature, depending on whether a feature is selected.
• Train: Trains the model based on the current parameters and region of interest location. This is always enabled, but note that it will discard all feature selection changes.
• Crop: Cuts everything from the image except for the area within the region of interest. This is only enabled when the model is trained.
• Edit: Opens the Edge Editor window so you can edit features directly. Refer to Locator Model Edge Editor on page 473 for more information.
• Update: Updates the trained model with the new feature selections in place. This is only enabled if feature selection changes have been made.
The features in the tool Vision Window are color-coded based on current selections and options as described below.
• Red: Feature was included and is now excluded, but an update is required for the change to take effect.
Locator Model Edge Editor
The Edge Editor allows the user to split a feature into multiple segments and choose which segments to include or exclude from a model. This can be useful when improper lighting results in a feature including both part object and shadow outline geometries.
The main part of this editor is the Vision Window, which is controlled in the same way as the
tool Vision Window. The only feature that appears in this window is the one currently being
edited. The right side of the editor shows the segments of the feature and the check boxes that
control if they are included.
Features can be split into multiple segments by clicking somewhere on the feature and then
clicking the Split button below the image. This adds a new segment on the right side and
allows you to determine which segments are included in the overall feature. Only segments
with checked boxes are included in the feature.
The feature is color-coded in this Vision Window with the following designations.
• Orange: Currently selected segment.
• Purple: Ends of segments. A purple line that connects to lines at both ends represents a split between two segments. If it connects at only one end, it is an endpoint of the feature.
• Green: Segment is included in the overall feature.
NOTE: Changes accepted here will be reflected in the tool Vision Window.
See the bottom-right die circle in the figure below. Making any changes to the
Included property of the feature in the Feature Selection pane will revert all
changes made in the Edge Editor.
Locator Model has no data or image results. Its only function is to create a model to be used in
the Locator tool. Refer to Locator on page 455 for more information.
Configuration Process
The general steps required to configure a Locator Model are described below.
Training a Model
The region of interest defines where the Locator Model will look for contours identified by con-
trast changes. Any contours outside the region will not be detected or considered to be part of
the model. Therefore, it is best to set the region as close as possible to the part to be identified
without omitting any edges of the part.
Once the region has been defined, the model needs to be trained by clicking the Train button.
This is also required whenever the region of interest is adjusted or a new image is loaded. You
can then decide which features to include using the Feature Selection pane. A Locator tool will
search for features included in the model when detecting instances. Locator searches by com-
paring a potential match with the Minimum Model Percentage property of that tool, which
defines a percentage that an instance needs to match to be considered a valid instance. This
can be further controlled by marking some features as Required in this pane. If a Locator
detects a potential instance that meets the Minimum Model Percentage but lacks the required
features, the detected candidate will not be returned as a result.
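Additional Information: The acceptance rule described above combines the Minimum Model Percentage check with the Required feature check. A minimal Python sketch of that rule follows; the candidate structure and names are illustrative, not the ACE API.

minimum_model_percentage = 60.0  # percent of model contours to match

def is_valid(candidate, required_features):
    if candidate["match_percentage"] < minimum_model_percentage:
        return False
    # Every Required feature must be present in the matched set.
    return required_features.issubset(candidate["matched_features"])

candidate = {"match_percentage": 72.0, "matched_features": {"hole", "notch"}}
print(is_valid(candidate, {"hole"}))          # True
print(is_valid(candidate, {"hole", "slot"}))  # False: "slot" was not matched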
NOTE: Whenever any change is made in the Feature Selection pane, the Update
button must be clicked for the changes to take effect.
The training process is also controlled by the parameters in the Training section. Contrast
Threshold and Levels Configuration can usually be set to automatic parameters, but the other
two need to be defined.
Maximum Number Of Features defines the maximum number of edge contours that will be
detected to potentially be included in the model. Locator Model will return all detected edges if their count is less than or equal to this value. However, if there are more edges within the region of interest, only the largest ones will be returned, up to this value. For example, if Maximum Number Of Features is set to 20 and the tool detects 30 edges,
only the 20 largest will be shown in the Feature Selection pane. Conversely, Minimum Feature
Size has no effect on which edges are returned. Instead, it defines which features are initially
included in the model itself. When the Train button is clicked or a new image is loaded, Loc-
ator Model will automatically include all features in the model that have length greater than
or equal to this value.
All other features within the Maximum Number Of Features constraint will be returned and
shown in the Feature Selection pane, but they will not automatically be included. Figure 8-227
and Figure 8-228 show how this property is utilized during training. It can be seen in Figure 8-
228 that all the features included in Figure 8-227 are still returned, but only some of them are
actually included in the model.
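Additional Information: The interaction between Maximum Number Of Features and Minimum Feature Size can be summarized as a two-step selection: keep the largest edges up to the maximum count, then automatically include those at least as long as the minimum size. The sketch below illustrates this with hypothetical edge lengths.

max_features = 5         # Maximum Number Of Features
min_feature_size = 10.0  # Minimum Feature Size, in millimeters

detected_lengths = [3.2, 25.0, 11.4, 8.0, 40.1, 9.9, 15.7]

# Step 1: return the largest edges, up to the maximum count.
returned = sorted(detected_lengths, reverse=True)[:max_features]
# Step 2: automatically include those meeting the minimum length.
included = [length for length in returned if length >= min_feature_size]

print(returned)  # [40.1, 25.0, 15.7, 11.4, 9.9] -- shown in the pane
print(included)  # [40.1, 25.0, 15.7, 11.4] -- included in the model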
NOTE: If a single contour of the part is broken into multiple features, you may
need to adjust lighting or the Contrast Threshold and Levels Configuration.
Once the model has been fully trained, the model origin must be set. This point is depicted as
a yellow frame and defines the position and orientation of the pick location. When the model
is used in a Locator tool, this location will be used as the point of reference to define instance
positions. The origin can be set in five ways as described below.
• Manually drag the yellow origin indicator to the desired location. Adjust the orientation by clicking and dragging the rotation symbol or the extendable arrows. This is the best method for irregularly-shaped parts with off-center masses. The coordinates in the property fields will automatically adjust to the new position.
• Manually enter a desired location in the Origin Offset field. This is typically used to make small adjustments of the origin.
• Click the Center button in the Feature Selection pane without a feature selected. The origin will be centered in the region of interest. This may not be optimal for irregularly-shaped parts and depends entirely on the size of the region of interest. This does not affect the origin orientation. If a feature is selected when the Center button is clicked, the origin will be centered on the feature.
• Select a feature in the Feature Selection pane and click the Center button. The origin will be centered on that feature. This is useful when a particular feature is going to be used as the pick point. This does not affect the origin orientation.
In many situations the origin can be centered on one feature and the roll angle can be set by
aligning an extendable arrow with another part geometry that may or may not be included in
the model.
Cropping the model will automatically center the origin in the region of interest if the origin is
located outside of the region after cropping. This does not affect the origin orientation.
After testing that the Locator Model and Locator sufficiently locate the object, it is recom-
mended to crop the image using the Crop button. Figure 8-229 shows a model in the original
image on the left and the cropped version on the right.
Additional Information: Cropping Locator Model Images can reduce project file
size.
Figure 8-229 Locator Model Image (Left) vs. Cropped Image (Right)
Shape Search 3
This tool identifies objects in an image based on geometries defined in a Shape Search 3
Model. Because the model requires a small degree of training, Shape Search 3 can output
instances and correlation values based on similarity, measurement target position, and ori-
entation.
Unlike other search methods where color and texture information are used to detect objects,
Shape Search 3 uses edge information as features. This enables highly robust and fast detec-
tion despite environmental variations including shading, reflections, lighting, shape deform-
ations, pose, and noise.
NOTE: Shape Search 3 will not work until a Shape Search 3 Model has been cre-
ated. Refer to Shape Search 3 Model on page 484 for more information.
Shape Search 3 typically provides shorter execution times than Locator, but it
cannot provide disambiguation between multiple similar models. Shape Search
3 is useful for simple models while Locator is better for handling multiple mod-
els or situations where the model training process requires user control.
To create a Shape Search 3 tool, right-click Vision Tools in the Multiview Explorer, select Add
Finder and then Shape Search 3. A Shape Search 3 tool will be added to the Vision Tools list.
Use the table below to understand the Shape Search 3 configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Shape Search 3 Model Select the Shape Search 3 Model that will be
searched for in the image.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Z-Order Available in all region types. Sets the order for over-
lapping regions of interest. Higher numbers will be
in front and resolved first.
Search Region (Width, Only available in Rectangles. Sets the height and
Height) width of the rectangular region.
Properties Measurement Set the percentage of match required for the tool to
Condition - Candidate detect instances. Any instance that has a Cor-
Level relation Results value lower than this value will not
be recognized.
Rotation Angle Range Set the angle range within which the tool will
detect candidates.
Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Show Edge Image Shows only the detected edges of each model. All
edge pixels become white and all others are set to
black.
Shape Search 3 must reference a Shape Search 3 Model in the Shape Search 3 Model property.
This is done by clicking the ellipsis next to the field and selecting an appropriate tool. Shape
Search 3 will then compare the model to the image and search within the region(s) of interest
for contours that match the model.
Shape Search 3 always displays the model region(s) of interest and origin in purple around the
detected instances. It also allows you to view the detected instances in different ways. These
are controlled by the Show Corresponding Model and Show Edge Image properties. If both of
these are disabled, only the border of the model region(s) of interest will be displayed, as
shown in Figure 8-230 above. Enabling these provides a different view.
Show Corresponding Model shows all of the edges drawn in the Shape Search 3 Model in
green. It also applies a darkened mask to everything except the pixels within the model edges
(refer to the following figure).
Show Edge Image changes everything to black except for the edges detected in the image,
which become white (refer to the following figure).
Use the table below to understand the results of the Shape Search 3.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To. This will only be different if Shape Search 3 is set relative to
another tool.
Correlation Percentage match between the instance and the model. Any instance will be omitted if this value is lower than the defined Candidate Level property.
Shape Search 3 Model
This model describes the geometry of an object to be found by the Shape Search 3 tool. Shape
Search 3 Model is designed to detect specific edges in an image and register them.
Before running the tool, the region of interest must be verified in the correct location so the
model can be properly trained.
Use the table below to understand the Shape Search 3 Model configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Reference Point Define the model origin. This will be referenced as
Interest the instance center in the Shape Search 3 tool.
Type Sets the region shape: Rectangle, Ellipse, or WideArc.
Z-Order Available in all region types. Sets the order for over-
lapping regions of interest. Higher numbers will be
in front and resolved first.
Search Region (Width, Sets the height and width of the rectangular
Height) region. Only available in Rectangles.
Radius X/Y Defines the distance from the center to the exterior
along the X- and Y-axes, respectively. Only avail-
able in Ellipses.
Start/End Angle Defines the start and end angle of the Wide Arc
bounds. Angles are measured in degrees coun-
terclockwise from the positive X-axis. The arc is cre-
ated clockwise starting from the Start Angle and
ending at the End Angle. Only available in
WideArcs.
Model Parameter - Size Set the upper and lower limit of model size fluc-
Change tuation.
Edge Settings - Mask Size Select the neighborhood of pixels to use to detect
model edges. Higher selections will help detec-
tion when brightness varies among pixels.
Edge Settings - Set the lower limit of edge level for an edge to be
Edge Level recognized. Edges with a higher edge level than
this value will be included in the model. Higher
settings will result in fewer edges.
Edge Settings - Noise Set the upper limit of noise level to eliminate.
Removal Level Noise with a level below this value will be elim-
inated. Higher numbers will lead to more fea-
tures being removed from the model.
Edge Settings - Show Specifies if the Edge Model is drawn in the Vision
Edge Model Window.
Shape Search 3 Model automatically runs whenever any change is made to the parameters or
region(s) of interest. Because of this, the training process only requires you to position the ref-
erence point and the region(s) of interest in the necessary location(s). As many regions can be
added as necessary to create a custom model shape. The Overlap setting can further define a custom model.
When the trained model is used in Shape Search 3, the position result for each detected
instance will be returned as the position of the model Reference Point property. Therefore,
before continuing, the location of the reference point needs to be verified.
Shape Search 3 Model has fewer controlling parameters than Locator Model since the focus of
Shape Search 3 is speed. However, several properties provide control over this tool. In par-
ticular, the parameters in the Edge Settings section are useful to adjust the model. Mask Size
helps balance brightness issues, Edge Level sets the necessary deviation of edges in order to be
included, and Noise Removal Level removes unnecessary features.
Shape Search 3 Model has no data or image results. Its only function is to create a model to be
used in the Shape Search 3 tool. Refer to Shape Search 3 on page 478 for more information.
Inspection Tools
Inspection tools are typically used for part inspection purposes.
The following Inspection tools are described in this section.
Arc Caliper
This tool identifies and measures the gap between one or more edge pairs of arc-shaped
objects. Using pixel gray-level values within regions of interest, Arc Caliper is able to build pro-
jections needed for edge detection. Edges can be displayed in a radial or annular position.
After detecting potential edges, the tool determines which edge pairs are valid by applying the
user-defined constraints configured for each one. The valid pairs are then scored and meas-
ured.
To create an Arc Caliper tool, right-click Vision Tools in the Multiview Explorer, select Add
Inspection and then Arc Caliper. An Arc Caliper tool will be added to the Vision Tools list.
Use the table below to understand the Arc Caliper configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Search Region Defines the location and size of the region (X, Y,
radius, thickness, mid-angle position, arc angle
degrees).
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Custom Sampling Step Defines the sampling step used in calculation. This
is set to 1 by default. Enable this to adjust the set-
ting. Higher values decrease execution time and
sensitivity and can improve processing in high-res-
olution images.
Edge Filter Half Width Half width of the convolution filter used to compute
the edge magnitude curve, from which actual
edges are detected. The filter approximates the first
derivative of the projection curve. The half width of
the filter should be set in order to match the width
of the edge in the projection curve (the extent of
the gray scale transition, expressed in number of
pixels).
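Additional Information: The edge magnitude curve is the approximate first derivative of the projection curve. The Python sketch below approximates it with a simple difference over the configured half width; the tool's exact filter is not documented here, so treat this as a conceptual illustration only.

def edge_magnitude(projection, half_width):
    # Difference between gray levels half_width ahead and behind
    # approximates the first derivative of the projection curve.
    n = len(projection)
    return [projection[min(i + half_width, n - 1)] -
            projection[max(i - half_width, 0)]
            for i in range(n)]

projection = [10, 10, 12, 60, 200, 210, 208, 205]  # gray levels
print(edge_magnitude(projection, 2))  # the peak marks the edge position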
Clicking the ellipsis next to Pairs opens the Edge Constraint Editor. This allows the user to set
specific constraints on the detected pair(s) of edges.
Pairs Pane
The area highlighted in the figure below lists the pairs of edges to be detected. By default, Arc
Caliper will try to detect only one pair. The Add and Delete buttons at the bottom change the
number of pairs to be searched for when the tool is run. The name of each pair is adjusted by
selecting the associated label in this pane and then changing the Pair Name field to the right.
Edges
This section defines the constraints on the detected edges within a pair.
Region Image
The image beneath the edge constraints shows only the pixels within the region of interest in
the main image window. The image is altered from the region of interest to be rectangular
instead of circular. The positive X-axis in this image is oriented to be aligned with the coun-
terclockwise direction in the region of interest.
Use the table below to understand the results of the Arc Caliper tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To.
Pair X X-coordinate of the center point of the caliper measure at the midpoint of the
edge pair.
Pair Y Y-coordinate of the center point of the caliper measure at the midpoint of the
edge pair.
Pair Radius Radius of the detected edge. This is only properly measured when Projection
Mode is set to Annular.
Pair Score Score of the resultant pair, which is equal to the mean score of the two edges
in the pair.
Item Description
Edge 1 Magnitude Score Score between 0 and 1 of the first edge, calculated according to the Magnitude Constraint property.
Edge 1 Position Score Score between 0 and 1 of the first edge, calculated according to the Position Constraint property.
Edge 1 Rotation Angle of rotation of the first edge, measured from the positive X-axis. This returns valid results only when Projection Mode is set to Radial.
Edge 1 Score Score of the first edge, computed according to the constraints set by the Edge Constraints properties.
Edge 2 Rotation Angle of rotation of the second edge, measured from the positive X-axis. This returns valid results only when Projection Mode is set to Radial.
Edge 2 Score Score of the second edge, computed according to the constraints set by the Edge Constraints properties.
Arc Edge Locator
This tool identifies and measures the position of one or more edges on a circular object. Using
pixel gray-level values within regions of interest, Arc Edge Locator is able to build projections
needed for edge detection. Edges can be displayed in a radial or annular position. After detect-
ing potential edges, the tool determines which edges are valid by applying the user-defined
constraints on the edge candidates. The valid edges are then scored and measured.
Additional Information: While Arc Edge Locator can determine the position of
one or more edges, it cannot measure the length of lines detected in the region of
interest. To measure arcs and lines on an object, use the Arc Finder or Line
Finder tools. Refer to Arc Finder on page 429 and Line Finder on page 452 for
more information.
To create an Arc Edge Locator tool, right-click Vision Tools in the Multiview Explorer, select
Add Inspection and then Arc Edge Locator. An Arc Edge Locator tool will be added to the
Vision Tools list.
Use the table below to understand the Arc Edge Locator configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Search Region Defines the location and size of the region (X, Y,
radius, thickness, mid-angle position, arc angle
degrees).
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Custom Sampling Step Defines the sampling step used in calculation. This
is set to 1 by default. Enable this to adjust the set-
ting. Higher values decrease execution time and
sensitivity but can improve processing in high-res-
olution images.
Filter Half Width Half width of the convolution filter used to compute
the edge magnitude curve, from which actual
edges are detected. The filter approximates the first
derivative of the projection curve. The half width of
the filter should be set in order to match the width
of the edge in the projection curve (the extent of
the gray scale transition, expressed in number of
pixels).
Clicking the ellipsis next to Search Parameters as shown in the following figure opens the
Edge Constraint Editor. This allows you to set specific constraints on the detected edges.
The Editor has two different sections named Edges and Region Image as described below.
Edges
This section defines the constraints on the detected edges. Constraints affect all detected edges
with the following properties.
• Polarity: Defines the gray level deviation for which the tool searches. This is evaluated with respect to the clockwise direction of the region of interest. For example, in the following figure, Either is selected, and dark and light areas can be seen on different sides of multiple edges.
• Constraints: Enables constraining the edge by both Position and Magnitude. A slider bar appears below the image in both cases. The edge must be between the two position sliders and its magnitude must be higher than defined by the magnitude slider (refer to the following figure for an example).
• Score Threshold: Defines the minimum score (quality) an edge must have to be considered valid. The value is set between 0 and 1. A sketch of the combined validity test follows this list.
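Additional Information: The following Python sketch combines the three constraints above into a single validity test. The edge structure and threshold values are illustrative, not the ACE API.

def edge_is_valid(edge, position_range, min_magnitude, score_threshold):
    low, high = position_range  # the two position sliders
    return (low <= edge["position"] <= high
            and abs(edge["magnitude"]) >= min_magnitude
            and edge["score"] >= score_threshold)

edge = {"position": 42.0, "magnitude": 130.0, "score": 0.85}
print(edge_is_valid(edge, (30.0, 60.0), 50.0, 0.5))  # True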
Region Image
The image beneath the edge constraints shows only the pixels within the region of interest in
the main image window. The image is altered from the region of interest to be rectangular
instead of circular. When Projection Mode is set to Radial, the positive X-axis in this image is
oriented to be aligned with the clockwise direction of the region of interest. Otherwise, it is
aligned with a radius from the inside to the outside.
Use the table below to understand the results of the Arc Edge Locator tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To.
Radius Radius of the edge. This is only properly measured when Projection Mode is
set to Annular.
Edge Score Calculated score (quality) of the edge, computed according to the constraints
set by the Edge Constraints properties.
Magnitude Measurement of how well the region of interest arc or radii match the found arc or radii. This will be returned as negative if the found arc is a reflection of the region of interest arc.
Magnitude Score Score between 0 and 1 calculated according to the Magnitude Constraint property.
Position Score Score between 0 and 1 calculated according to the Position Constraint property.
Projection Magnitude Measurement of the deviation between the gray level of the projection pixels and the pixels surrounding them. This is returned on a range between -255 and 255. Positive and negative peaks in the value indicate potential edges. Sharp peaks indicate strong, well-defined edges while dull peaks may indicate noise or poorly-defined edges.
Projection Average Average gray level value for all projection paths within the physical area bounded by the region of interest. This minimizes variations in pixel values caused by non-edge features or noise.
Caliper
This tool identifies and measures the distance between one or more parallel edge pairs in an
image. It uses pixel gray-level values within the region of interest to build projections needed
for edge detection. After the potential edges are detected, Caliper determines which edge pairs
are valid by applying constraints that are manually configured for each edge pair.
To create a Caliper tool, right-click Vision Tools in the Multiview Explorer, select Add Inspec-
tion and then Caliper. A Caliper tool will be added to the Vision Tools list.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Custom Sampling Step Defines the sampling step used in calculation. This
is set to 1 by default. Enable this to adjust the set-
ting. Higher values decrease execution time and
sensitivity and can improve processing in high-res-
olution images.
Edge Filter Half Width Half width of the convolution filter used to compute
the edge magnitude curve, from which actual
edges are detected. The filter approximates the first
derivative of the projection curve. The half width of
the filter should be set in order to match the width
of the edge in the projection curve (the extent of
the gray scale transition, expressed in number of
pixels).
Clicking the ellipsis next to Pairs opens the Edge Constraint Editor. This allows you to set spe-
cific constraints on the detected pair(s).
The editor has several different sections that are described below.
Pairs Pane
The area highlighted in the figure below lists the pairs of edges to be detected. By default, Cal-
iper will try to detect only one. The Add and Delete buttons at the bottom change the number of pairs detected when the tool is run. The name of each pair is adjusted by selecting
the pair in this pane and then changing the Pair Name field.
Edges
This section defines the constraints on the detected edges within a pair. Each edge is adjusted
individually with the following properties.
• Polarity: Defines the gray level deviation for which the tool searches. This is evaluated moving from left to right across the region (shown in an image in this editor). For example, in the following figure, Either is selected, and dark and light areas can be seen on different sides of the two edges.
• Constraints: Enables constraining the edge by both Position and Magnitude. A slider bar appears below the image in both cases. The edge must be between the two position sliders and its magnitude must be higher than defined by the magnitude slider (refer to the following figure for an example).
• Score Threshold: Defines the minimum score (quality) an edge must have to pass. This value is set between 0 and 1.
Region Image
The image beneath the edge constraints shows only the pixels within the region of interest in
the main image window. Regardless of the region’s orientation in the main image, this will
always be shown with the region’s positive X-axis oriented to the right.
Caliper Results
Use the table below to understand the results of the Caliper tool.
Item Description
Frame/Group Index of the related result. It is associated with the results of the Relative To
tool.
Pair X X-coordinate of the center point of the caliper measure at the midpoint of the
edge pair.
Pair Y Y-coordinate of the center point of the caliper measure at the midpoint of the
edge pair.
Pair Score Score of the resultant pair, which is equal to the mean score of the two edges
in the pair.
Item Description
Edge 1 Score Score of the first edge, computed according to the constraints set by the Edge
Constraints Properties.
Edge 2 Score Score of the second edge, computed according to the constraints set by the
Edge Constraints Properties.
Color Data
This tool finds the average color within a region and performs statistical analysis using the
deviation from a user-defined reference color and color variation of the measurement range. It
is primarily used to obtain data that will be analyzed by an Inspection tool.
To create a Color Data tool, right-click Vision Tools in the Multiview Explorer, select Add
Inspection and then Color Data. A Color Data tool will be added to the Vision Tools list.
Use the table below to understand the Color Data configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Type Sets the region shape: Rectangle, Ellipse, or WideArc.
Z-Order Available in all region types. Sets the order for over-
lapping regions of interest. Higher numbers will be
in front and resolved first.
Search Region (Width, Sets the height and width of the rectangular
Height) region. Only available in Rectangles.
Radius X/Y Defines the distance from the center to the exter-
ior along the X- and Y-axes, respectively. Only avail-
able in Ellipses.
Start/End Angle Defines the start and end angle of the Wide Arc
bounds. Angles are measured in degrees coun-
terclockwise from the positive X-axis. The arc is cre-
ated clockwise starting from the Start Angle and
ending at the End Angle. Only available in
WideArcs.
Properties Reference Color (Measurement) Defines the color that will be compared to all pixels in the search area. The Auto Tuning button sets this property based on what is currently in the region of interest.
Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
The colors in a region are measured against a reference color, which can be set in two ways. If
there is a specific color against which the pixels need to be measured, it can be chosen manu-
ally using the Reference Color property. Alternatively, in color images, a particular color can be
highlighted within the search area. Clicking the Auto Tuning button will automatically set this
as the Reference Color. All results will then be measured against it.
The ResultDifference result illustrates the difference between the measured color range and the
Reference Color by using the following formula.
ResultDifference = √((AverageR − ReferenceR)² + (AverageG − ReferenceG)² + (AverageB − ReferenceB)²)
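Additional Information: The following Python sketch is a direct transcription of the ResultDifference formula above.

import math

def result_difference(average_rgb, reference_rgb):
    # Euclidean distance between the average and reference colors.
    return math.sqrt(sum((a - r) ** 2
                         for a, r in zip(average_rgb, reference_rgb)))

print(result_difference((200, 120, 40), (190, 130, 60)))  # approx. 24.49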
This tool is particularly useful when used in conjunction with the Inspection tool. When the Mode
Of Operation for an Inspection Filter is set to Color Data, the Inspection tool can determine
whether Color Data tool results fall within a specified range, allowing instances to be cat-
egorized by hue. Refer to Inspection on page 519 for more information.
NOTE: Color Data accuracy is dependent on the camera white balance settings,
which may need to be modified outside of the ACE software.
Use the table below to understand the results of the Color Data tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To.
Item Description
X If Color Data is set relative to another tool, this will be the X-coordinate of the associated instance. Otherwise, it will be the origin of the image field of view.
Y If Color Data is set relative to another tool, this will be the Y-coordinate of the associated instance. Otherwise, it will be the origin of the image field of view.
ResultDifference Color difference between the average color in the measurement area and
the Reference Color.
MonAve Only calculated in gray scale images. Average gray level value within the
search area.
MonDev Only calculated in gray scale images. Gray level deviation in the meas-
urement region.
Edge Locator
This tool identifies and measures the position of one or more straight edges on an object. Using
pixel gray-level values within regions of interest, Edge Locator is able to build projections
needed for edge detection. After detecting potential edges, the tool determines which edges are
valid by applying the user-defined constraints on the edge candidates. The valid edges are
then measured to determine score (quality), length, and position.
Additional Information: While Edge Locator can determine the position of one
or more edges, it cannot measure the length of lines detected in the region of
interest. To extrapolate and measure a line on an object, use the Line Finder tool.
Refer to Line Finder on page 452 for more information.
To create an Edge Locator tool, right-click Vision Tools in the Multiview Explorer, select Add
Inspection and then Edge Locator. An Edge Locator tool will be added to the Vision Tools list.
Use the table below to understand the Edge Locator configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Custom Sampling Step Defines the sampling step used in calculation. This
is set to 1 by default. Enable this to adjust the set-
ting.
Filter Half Width Half width of the convolution filter used to compute
the edge magnitude curve, from which actual
edges are detected. The filter approximates the first
derivative of the projection curve. The half width of
the filter should be set in order to match the width
of the edge in the projection curve (the extent of
the gray scale transition, expressed in number of
pixels).
Clicking the ellipsis next to Search Parameters opens the Edge Constraint Editor. This allows
you to set specific constraints on the detected edges.
The Editor has two different sections named Edges and Region Image as described below.
Edges
This section defines the constraints on the detected edges. Constraints affect all detected edges
with the following properties.
• Polarity: Defines the gray level deviation for which the tool searches. This is evaluated with respect to the clockwise direction of the region of interest. For example, in the following figure, Either is selected, and dark and light areas can be seen on different sides of multiple edges.
• Constraints: Enables constraining the edge by both Position and Magnitude. A slider bar appears below the image in both cases. The edge must be between the two position sliders and its magnitude must be higher than defined by the magnitude slider (refer to the following figure for an example).
• Score Threshold: Defines the minimum score (quality) an edge must have to be considered valid. The value is set between 0 and 1.
Region Image
The image beneath the edge constraints shows only the pixels within the region of interest in
the main image window. The image is altered from the region of interest so that it appears rect-
angular instead of circular. Regardless of the region’s orientation in the main image, this will
always be shown with the region’s positive X-axis oriented to the right.
Use the table below to understand the results of the Edge Locator tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To.
Angle Angle of the edge segment with respect to the image X-axis.
Edge Score Calculated score (quality) of the edge, computed according to the constraints set by the Edge Constraints properties.
Magnitude Peak value of the edge in the magnitude curve. Negative values indicate a light-to-dark transition while positive values indicate the opposite.
Magnitude Score Score between 0 and 1 calculated according to the Magnitude Constraint property.
Position Score Score between 0 and 1 calculated according to the Position Constraint property.
Projection Magnitude Measurement of the deviation between the gray level of the projection pixels and the pixels surrounding them. This is returned on a range between -255 and 255. Positive and negative peaks in the value indicate potential edges. Sharp peaks indicate strong, well-defined edges while dull peaks may indicate noise or poorly-defined edges.
Projection Average Average gray level value for all projection paths within the physical area bounded by the region of interest. This minimizes variations in pixel values caused by non-edge features or noise.
Feeder Histogram
This tool is used to calculate product density in user-defined regions of interest. It is designed
to function in conjunction with an AnyFeeder to identify the density of products within regions
associated with the dispense, pick, and front zones. Refer to AnyFeeder Object on page 317 for
more information.
To create a Feeder Histogram tool, right-click Vision Tools in the Multiview Explorer, select
Add Finder and then Feeder Histogram. The Feeder Histogram tool will be added to the Vision
Tools list.
Use the table below to understand the Feeder Histogram configuration items.
Item Description
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Link Name Select the property in the relative tool that will
provide the input values.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Show Result Image Histogram Regions Specifies if the histogram regions are drawn in the ACE Vision Window. Show Results Graphics must be enabled for this to work.
Tail Black Gray Level Value Percentage of pixels to ignore at the dark end of the gray level distribution. This is calculated after the pixels affected by the Threshold Black property have been removed.
Tail White Gray Level Value Percentage of pixels to ignore at the light end of the gray level distribution. This is calculated after the pixels affected by the Threshold White property have been removed.
Feeder Histogram is designed so you can create a series of histograms to measure product
density in multiple ranges in the image. These histograms are organized using the Histogram
Pane positioned beneath the properties, as shown in the figure below.
The Add and Delete buttons are used to create or remove histograms from the tool. Typically,
2 to 4 zones will be configured depending on the complexity of feeding logic required for the
application.
The pane also shows properties that apply only to the selected histogram. They are shown in
the table below.
Item Description
Offset Defines the center coordinates of the histogram region with respect to the ref-
erence point defined by Offset in the main tool properties.
Region Name User-defined name of the histogram. This is displayed in the lower left corner of the histogram region.
Search Region Defines the size (width, height) of the histogram region.
The product density is calculated for each histogram region using the following formula:
Product Density = Histogram Pixel Count / Image Pixel Count
where Histogram Pixel Count is the number of pixels that fall within the defined thresholds
and Image Pixel Count is the number of total pixels in the histogram region.
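Additional Information: The following Python sketch transcribes the Product Density formula, counting pixels that fall within the configured gray level thresholds. The pixel list and threshold values are illustrative.

def product_density(pixels, threshold_black, threshold_white):
    # Histogram Pixel Count: pixels within the defined thresholds.
    in_range = sum(1 for p in pixels if threshold_black <= p <= threshold_white)
    # Image Pixel Count: total pixels in the histogram region.
    return in_range / len(pixels)

region_pixels = [12, 40, 90, 200, 220, 35, 60, 250]
print(product_density(region_pixels, 20, 100))  # 0.5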
A Feeder Histogram tool will return one result containing the product densities for each his-
togram. These are not shown by default and must be added using the Results Column Editor.
The numbers of the histograms are variable and are denoted here as <number> instead of
actual values.
Image Histogram
This tool computes image statistics for all the pixels contained in the region of interest. This is
used primarily for identifying product or clutter density, verifying the camera or lighting
adjustment during setup of an application, and providing input for the Inspection tool.
To create an Image Histogram tool, right-click Vision Tools in the Multiview Explorer, select
Add Inspection and then Image Histogram. An Image Histogram tool will be added to the
Vision Tools list.
Use the table below to understand the Image Histogram configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Show Result Image Histogram Regions Specifies if the histogram regions are drawn in the ACE Vision Window. Show Results Graphics must be enabled for this to work.
Custom Sampling Step Defines the sampling step used in calculation. This
is set to 1 by default. Enable this to adjust the set-
ting. Higher values result in higher processing time.
Image Histogram is designed to return a range of statistics about the pixels within a region of
interest. The final calculation ignores pixels that have been excluded by thresholds or tails.
These properties provide a way for the user to ignore pixels in the histogram. Threshold Black
and Threshold White set a maximum and minimum gray level to consider when building the
histogram. Tail Black and Tail White remove a percentage of pixels from the ends of the spec-
trum within the thresholds. In this way, the user is able to eliminate results from areas that are
not required by the application.
For example, the tool shown in Figure 8-252 can be manipulated to only return results about
the pixels within the guitar picks by setting the Threshold White property to 200, as shown in
the figure below. This removes all pixels with a gray level value higher than 200 from ana-
lysis.
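Additional Information: The exclusion order described above (thresholds first, then tails) can be sketched as follows. The percentages, thresholds, and pixel values are illustrative.

def filtered_pixels(pixels, threshold_black, threshold_white,
                    tail_black_pct, tail_white_pct):
    # Apply the black and white thresholds first.
    kept = sorted(p for p in pixels
                  if threshold_black <= p <= threshold_white)
    # Then trim the tail percentages from each end of the distribution.
    low = int(len(kept) * tail_black_pct / 100.0)
    high = len(kept) - int(len(kept) * tail_white_pct / 100.0)
    return kept[low:high]

pixels = [5, 30, 60, 90, 120, 150, 180, 210, 240, 255]
kept = filtered_pixels(pixels, 20, 200, 20, 20)
print(kept)                   # [60, 90, 120, 150]
print(sum(kept) / len(kept))  # 105.0 -- mean of the remaining pixels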
Use the table below to understand the results of the Image Histogram tool.
NOTE: All results are calculated after applying white and black tails and
thresholds.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To.
Mode Gray level value that appears in the largest number of pixels compared to
other gray level values.
Mode Pixel Count Number of pixels that correspond to the Mode gray level value.
Mean Average gray level distribution of the pixels in the region of interest.
Median Median of the gray level distribution of the pixels in the region of interest.
Standard Deviation Standard deviation of the gray level distribution of the pixels in the region of interest.
Variance Variance of the gray level distribution of the pixels in the region of interest.
Gray Level Range Difference between the Minimum Gray Level and Maximum Gray Level values.
Tail Black Gray Level Darkest gray level value that remains in the histogram after the tail is removed.
Tail White Gray Level Brightest gray level value that remains in the histogram after the tail is removed.
Inspection
This tool organizes instances based on the results of other tools and inspection filters. Custom
categories and filters can be created to identify information about data or images from another
tool. In this way, returned instances from another vision tool can be sorted into groups.
To create an Inspection tool, right-click Vision Tools in the Multiview Explorer, select Add
Inspection and then Inspection. An Inspection tool will be added to the Vision Tools list.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
The filter pane is found below the Properties viewer on the right side of the tool editor. This
shows the current categories and filters used by the tool.
Item Description
Up/Down Move selected categories up or down to determine the order in which they
Arrows will be used to sort instances. Moving categories retains the order of filters
within that category. Filters cannot be moved outside of their category.
Filter Add a new filter at the bottom of the selected category list.
Delete Delete the selected filter or category. Deleting a category deletes all filters
within it.
Edit Edit the selected filter or category. This can also be done by double-clicking a
filter or category.
Inspection Categories
The purpose of categories is to group instances using defined filters. For each instance, cat-
egories are evaluated starting with the category listed at the top of the Filter pane. An instance
is put into the first category for which it passes the required number of filters as defined by the
operator.
Each category is assigned a name and an operator. The operator (AND or OR) is applied to all
filters within that category to evaluate the instances. The operator associated with a category is
displayed in the Description field of the Configuration section. If the category has an AND
operator, an instance must pass all filters in that category to qualify. Conversely, if the category
has an OR operator, an instance needs to pass only one of the filters to qualify.
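Additional Information: The category assignment rule described above can be sketched in a few lines of Python: categories are evaluated in order, AND requires every filter to pass, OR requires at least one, and the instance lands in the first qualifying category. The filters below are simple predicates standing in for the configured inspection filters.

def categorize(instance, categories):
    for name, operator, filters in categories:
        results = [f(instance) for f in filters]
        if (operator == "AND" and all(results)) or \
           (operator == "OR" and any(results)):
            return name  # first qualifying category wins
    return "Unassigned"

categories = [
    ("Good", "AND", [lambda i: i["score"] >= 80, lambda i: i["area"] < 500]),
    ("Rework", "OR", [lambda i: i["score"] >= 50]),
]
print(categorize({"score": 90, "area": 400}, categories))  # Good
print(categorize({"score": 60, "area": 700}, categories))  # Rework
print(categorize({"score": 20, "area": 700}, categories))  # Unassigned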
The name and operator of a category can be changed by editing the category in the filter pane.
Because instances are put into the first category for which they qualify, the order of the cat-
egory in the Filter pane is important. The Up/Down arrow buttons are used to adjust the cat-
egory order.
If the selected category is at the top or bottom of the list, the Up or Down Arrow button will be
disabled, respectively. These cannot be used to move filters between categories.
NOTE: The location results associated with each category are exposed to ACE
Sight in a separate frame index for each category. The frame number is equal to
the category’s position; the first is associated with Frame 1, the second with
Frame 2, and the pattern continues.
Filters
Filters define the actual inspection or comparison to be performed. Each filter has a name and
belongs to a category. A filter cannot be added if an existing category is not selected (the Add
Filter button will be disabled until then). The inspection performed by a filter is displayed in
the Description field of the Filter pane.
Editing a filter opens the Filter Editor. The filter setting items are described below.
Item Description
Mode Of Operation Defines the type of evaluation this filter performs. The options are:
• Measure Distance Between Two Points
• Measure The Shortest Distance Between A Point and a Line
• Measure The Angle Between Two Lines
• Test The Value Of A Vision Result Variable
• Vision Tool Transform Results
• Number of Results
• Number of Characters
• OCR Quality
• String
• Color Data
Vision Tools The appropriate vision tools and the necessary properties.
Limits If the results of an inspection fall between these values, the instance passes this filter. Like the Vision Tools field, this varies with the Mode Of Operation, but it is a Minimum and Maximum Value in most cases.
Results Sample results from evaluating this filter on the current image. This can be used to show whether certain instances pass or fail, allowing the user to tune the limits. In most cases, there are three result columns in this editor.
If the Mode Of Operation is set to Test The Value Of A Vision Result Variable, an additional Result from another Vision Tool will need to be selected. This requires its own window, as
described below.
Highlighting any of the sub-properties will display its description. When the appropriate sub-
property has been highlighted, click the Right Arrow button to select and test it.
The fields above the available sub-properties pane will be filled. Click the Accept button to con-
firm the selection and close the window.
Use the table below to understand the results of the Inspection tool.
NOTE: The Results columns refer to existing filters and thus change depending on the name of the filter. In the following table, such columns are denoted with <filter> instead of specific names.
Item Description
<filter> Pass Shows whether or not the instance passes a filter by displaying "True" if it
Status passes and "False" if it does not.
Category The category for which this instance qualifies. If this instance does not qualify
for any category, this will display "Unassigned".
X If Inspection is set relative to another tool, this will be the X-coordinate of the associated instance. Otherwise, it will be the origin of the image field of view.
Y If Inspection is set relative to another tool, this will be the Y-coordinate of the associated instance. Otherwise, it will be the origin of the image field of view.
Precise Defect Path
This tool performs differential processing on the image to detect defects and contamination on
the edges of plain measurement objects with high precision. This is done by using elements of
varying size and comparison intervals. By changing these parameters, fine customization of
speed and precision is possible. Precise Defect Path is primarily used for identifying 1-dimen-
sional defects or variations and can be utilized for part edge inspection, as shown in the fol-
lowing figure.
To create a Precise Defect Path tool, right-click Vision Tools in the Multiview Explorer, select
Add Inspection and then Precise Defect Path. A Precise Defect Path tool will be added to the
Vision Tools list.
Use the table below to understand the Precise Defect Path configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Name Available for all Path Type selections. Sets the user-
defined name of the region.
Search Region (Width, Sets the height and width of the rectangular
Height) region. Only available when Path Type is set to Rect-
angle.
Start/End Angle Defines the start and end angle of the Wide Arc
bounds. Angles are measured in degrees coun-
terclockwise from the positive X-axis. The arc is cre-
ated clockwise starting from the Start Angle and
ending at the End Angle. Only available when Path
Type is set to Wide Arc.
Properties Element Condition - Element Size Set the width and height in pixels of the defects / contamination to be detected. Higher values increase the computed degree of defect for larger defects.
This tool functions by creating elements in the search area and comparing them to determine
which elements deviate the most from their neighboring elements. The elements to be com-
pared are determined by all parameters in the Element Condition section. The size and spa-
cing of the created elements is also determined by the properties. This tool focuses primarily
on detecting defects along a path, making it useful for locating scratches or dents in a uniform
surface or edge, as shown in Figure 8-259 above.
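Additional Information: Conceptually, the differential processing compares each element's mean gray level to its neighbors at the configured comparison interval and reports the element with the largest deviation as the defect. The sketch below illustrates the idea in one dimension with hypothetical element values; it is not the tool's exact algorithm.

def largest_defect(element_means, comparison_interval):
    best_index, best_degree = 0, 0.0
    for i in range(len(element_means)):
        neighbors = [element_means[j]
                     for j in (i - comparison_interval,
                               i + comparison_interval)
                     if 0 <= j < len(element_means)]
        # Degree of defect: deviation from the mean of the neighbors.
        degree = abs(element_means[i] - sum(neighbors) / len(neighbors))
        if degree > best_degree:
            best_index, best_degree = i, degree
    return best_index, best_degree

means = [100, 101, 99, 140, 100, 102]  # element 3 deviates most
print(largest_defect(means, 1))        # (3, 40.5)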
When run, Precise Defect Path will return the position of the largest detected defect, shown in
the Vision Window. Only one result will be returned per input instance. For edge inspections,
multiple Precise Defect Path tools may be linked to an Inspection tool or used within a Custom
Vision Tool with logic to filter results as needed.
Use the table below to understand the results of the Precise Defect Path tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To.
Precise Defect Region
This tool performs differential processing on the image to detect defects and contamination within plain measurement objects with high precision. This is done by using elements of varying size and comparison intervals. By changing these parameters, speed and precision can be finely tuned. Precise Defect Region is primarily used for identifying 2-dimensional defects or variations and can be used for inspection of a part area.
To create a Precise Defect Region tool, right-click Vision Tools in the Multiview Explorer, select
Add Inspection and then Precise Defect Region. A Precise Defect Region tool will be added to
the Vision Tools list.
Use the table below to understand the Precise Defect Region configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Z-Order Available in all region types. Sets the order for over-
lapping regions of interest. Higher numbers will be
in front and resolved first.
Search Region (Width, Sets the height and width of the rectangular
Height) region. Only available in Rectangles.
Radius X/Y Defines the distance from the center to the exterior
along the X- and Y-axes, respectively. Only avail-
able in Ellipses.
Start/End Angle Defines the start and end angle of the Wide Arc
bounds. Angles are measured in degrees coun-
terclockwise from the positive X-axis. The arc is cre-
ated clockwise starting from the Start Angle and
ending at the End Angle. Only available in
WideArcs.
Properties (Element Condition) Element Size    Set the width and height in pixels of the defects / contamination to be detected. Larger values target larger defects.
This tool functions by creating elements in the search area and comparing them to determine which elements deviate the most from their neighboring elements. The elements to be compared are determined by the parameters in the Element Condition section, which also set the size and spacing of the created elements. To see an example of how the various parameters affect detection, refer to Figure 8-260 above.
Item Description
1 Comparing Element
2 Result Element
3 Defect Point
5 Element Size X
6 Element Size Y
When run, Precise Defect Region will return the position of the largest detected defect, shown
in the Vision Window. Only one result will be returned per input instance. For example, if Pre-
cise Defect Region is set relative to another tool, one result will be returned for each instance
created by that tool.
If Area Measurement is enabled, the result will include the number of defects on that part in
the Defect Num result. Otherwise, Defect Num will be 0.
Use the table below to understand the results of the Precise Defect Region tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To.
Min Area Pixel area of the smallest defect in the search area.
Max Area Pixel area of the largest defect in the search area.
Reader Tools
Reader tools are typically used for reading data from located objects and features.
The following Reader tools are described in this section.
Barcode
This tool reads a barcode in a region of interest and returns text string data.
To create a Barcode tool, right-click Vision Tools in the Multiview Explorer, select Add Reader
and then Barcode. A Barcode tool will be added to the Vision Tools list.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Properties (Measurement) Code Type    Select the appropriate code type from among the following options:
• JAN/EAN/UPC
• Code39
• Codabar
• ITF
• Code93
• Code128/GS1-128
• GS1 DataBar
• Pharmacode
(Measurement Detail) Enables the tool to read white code on a black back-
Code Color ground. This is disabled by default since standard
barcode color is black on a white background.
Show Result Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Barcode Reading
The Barcode tool is typically configured relative to a Finder tool such as Locator or Shape
Search 3 to locate and orient the region of interest on one barcode at a time. To configure, use
an image with only one barcode present and configure the region of interest around the code
with sufficient white space. Once the region has been set, the Auto Tuning button can be used
to attempt to automatically configure many of the properties in the property grid, particularly
Code Type, as shown in the previous figure. If Auto Tuning does not work properly, a small
adjustment of the region of interest may resolve this. Note that the orientation of the region rel-
ative to the barcode must be close to 0°.
After the parameters have been set, the Barcode tool can be run like any other vision tool. Click-
ing the Run button executes the tool and returns a text string for each region of interest created
using the Relative To property, as shown in the figure below.
Barcode Results
Use the table below to understand the results of the Barcode tool.
Item Description
ProcErrorCode ProcErrorCode
Data Matrix
This tool reads a data matrix in a region of interest and returns text string data.
To create a Data Matrix tool, right-click Vision Tools in the Multiview Explorer, select Add
Reader and then Data Matrix. A Data Matrix tool will be added to the Vision Tools list.
Use the table below to understand the Data Matrix configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Properties (Measurement) Set the time in milliseconds that can elapse before
Timeout the tool execution is aborted. Any instances recor-
ded before this time period is reached will be
returned.
(Measurement Detail) Select the color of the code. Select Auto to auto-
Code Color matically detect the color, Black to look for black
code on a white background, and White to look for
white code on a black background.
(Measurement Detail) Shape    Select the shape of the code. Select Auto to automatically detect the shape, Square to look for a square matrix, and Rectangle to look for a rectangular matrix.
Show Result Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
The Data Matrix tool is typically configured relative to a Finder tool such as Locator or Shape
Search 3 to decode all matrices in an image. To configure, use an image with only one matrix
present and configure the region of interest around the code with sufficient white space. Once
the region has been set, the Auto Tuning button can be used to attempt to automatically con-
figure many of the properties in the property grid. If Auto Tuning does not work properly,
increasing white space around the matrix or improving the alignment of the region of interest with the center of the matrix may resolve this.
After the parameters have been set, the Data Matrix tool can be run like any other vision tool.
Clicking the Run button executes the tool and returns a text string for each region of interest
created using the Relative To property.
Use the table below to understand the results of the Data Matrix tool.
Item Description
Detected Coordinates of the vertices that form a rectangle around the matrix.
Polygon
ErrCellNum ErrCellNum
ISOTR29158 Grid Nonuniformity
OCR
This tool detects text characters in images and compares them to an internal font property to output character strings. A custom user dictionary can also be prepared to recognize characters that the internal font data does not identify correctly.
NOTE: Typical OCR applications use strings of numbers and capital letters. If the application involves detection of lower-case letters, a user-defined OCR Dictionary is required.
To create an OCR tool, right-click Vision Tools in the Multiview Explorer, select Add Reader
and then OCR. An OCR tool will be added to the Vision Tools list.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
OCR Dictionary Defines which OCR Dictionary tool is used for ref-
erence and registering characters.
Region Of Interest    Relative To    The tool relative to which this tool executes. The output values of the selected tool are the input values of this one.
Properties (Auto Tuning String Format) Line No1/2/3/4    Enter the specific characters that are expected to be read in the region of interest. This defines what the tool will look for during the Auto Tuning process. If a field is left blank, it is not used. Generic character symbols can also be entered here. Refer to the Inspection String Format property for details.
Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Cut out image display Enables display to show all characters as cutouts.
Recognized characters will be isolated on a white
background. The gray region displayed in the
cutout image display is the region bounded by the
Dot Pitch X and Dot Pitch Y parameters.
Clicking the Auto Tuning button will attempt to automatically detect the types of characters in the region. These will be displayed in the Inspection String Format section, as shown in the previous figure. If the tool does not tune to the text correctly, the result can be adjusted using this section and the Auto Tuning String Format properties.
For example, if the corresponding lines in the Auto Tuning String Format section are set to the number symbol "#" and the character symbol "$", respectively, the tool will try to find the best numerical matches for all the characters read in Line 1 and the best letter matches for all the characters read in Line 2. In this way, characters that are read inaccurately can be corrected to their proper form; for instance, a letter "O" misread from a line of digits is corrected to the digit "0" when that line is set to "#". The specific expected characters can also be entered to further improve the results.
With some fonts and applications, OCR incorrectly identifies a character or fails to distinguish
one character from another. When this is the case, it is recommended to register these char-
acters to an OCR Dictionary tool. OCR tools that reference an OCR Dictionary will compare
detected characters to those saved in the dictionary before returning a text string.
To register detected characters to a dictionary, the OCR Dictionary property must be linked to
an OCR Dictionary Tool. Clicking the Register to OCR Dictionary button will then allow the
user to select what characters to save. Refer to the following figures for more information.
The registered characters do not need to match the appropriate character upon registration. For example, in Figure 8-268 the character registered under C is actually a 6. It can be reassigned now (as shown) or later in the OCR Dictionary tool. Refer to OCR Dictionary on page 548 for more information.
OCR Results
Use the table below to understand the results of the OCR tool.
Item Description
Frame/Group    Index of the related result. It is associated with the tool that this tool is set Relative To. This will only be different if the tool is set relative to another tool.
Match Quality Percentage that the detected character matches the recorded character in
the internal dictionary or OCR Dictionary tool.
Stability Measurement of how likely the character is to be the result that was iden-
tified.
Position X X-coordinate of the detected character with respect to the image origin.
Position Y Y-coordinate of the detected character with respect to the image origin.
OCR Dictionary
This tool is used only as a reference for OCR tools and can only be configured for and populated by an OCR tool. It is used to save characters that the OCR identifies incorrectly or fails to identify. An OCR tool will first compare detected characters to its internal dictionary data and then to a referenced OCR Dictionary. Internal dictionary data cannot be modified. Refer to OCR on page 542 for more information.
Because OCR Dictionary is only a reference, it has no properties or results. It only contains
data about registered characters.
NOTE: The following figure shows the OCR Dictionary with some registered
characters. The OCR Dictionary will appear blank if no characters have been
registered.
To create an OCR Dictionary tool, right-click Vision Tools in the Multiview Explorer, select Add
Reader and then OCR Dictionary. An OCR Dictionary tool will be added to the Vision Tools
list.
OCR Dictionary Configuration
Up to ten dictionary entries can be registered for each character. If more than ten entries are registered for a character, the tool saves only the first ten and discards the rest.
The characters can be changed or deleted at any time by right-clicking an entry. Selecting Delete removes the entry from the dictionary. Selecting Change Character allows the entry to be saved under any other character.
If the character to which the entry would be changed already contains ten entries, the entry
will not move. Figure 8-269 shows the OCR Dictionary after all necessary changes have been
made. The following figure shows one such change in progress.
QR Code
This tool reads QR Codes and Micro QR Codes in the image and returns text string data.
To create a QR Code tool, right-click Vision Tools in the Multiview Explorer, select Add Reader
and then QR Code. A QR Code tool will be added to the Vision Tools list.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Properties (Measurement) Set the method of code reading. Select Normal for
Reading Mode standard applications and DPM to read 2D code
where direct parts marking (DPM) is applied.
(Measurement Detail) Select the color of the code. Select Auto to auto-
Code Color matically detect the color, Black to look for black
code on a white background, and White to look for
white code on a black background.
(Measurement Detail) Set the reduction ratio for images when reading
Magnify Level code. This is automatically determined by the teach-
ing process. Disable Auto to set it manually.
(Measurement Detail) Select the direction in which the tool will read the
Mirror Image code. Select Auto for the tool to automatically
detect the direction during the teaching process,
Normal for the code to be read normally, or Mirror
for the code to be read in reverse.
(Measurement Detail) Only applicable if Code Type is set to QR. Select the
QR Size size of the QR code (in cells). Select Auto for the
size to be detected automatically.
(Measurement Detail) Only applicable if Code Type is set to QR. Set the QR
QR Model code model. Select Auto for the model to be detec-
ted automatically.
(Measurement Detail) Select the code error correction (ECC) level. Select
QR Ecc Level Auto to automatically adjust it as necessary.
Show Result Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
QR Code Reading
The QR Code tool is typically configured relative to a Finder tool such as Locator or Shape
Search 3 to decode all codes in an image. To configure, use an image with only one QR Code
present and configure the region of interest around it with sufficient white space.
Once the region has been set, the Auto Tuning button can be used to attempt to automatically
configure many of the properties in the property grid. To expedite this, most of the properties
are set to Auto by default so they automatically detect the necessary information. If Auto Tuning does not work properly, increasing white space around the code or improving the alignment of the region of interest with the center of the code may resolve this.
Auto Tuning will not function if the code around which the region of interest is aligned does
not match the Code Type property. For example, if Code Type is set to MicroQR and the code
in the image is a standard QR code, the data will not be output. Unlike most of the other prop-
erties, Code Type cannot be set automatically, so you must verify that it is accurate before run-
ning the tool.
After the parameters have been set, the QR Code can be run like any other vision tool. Clicking
the Run button executes the tool and returns a text string for each region of interest created
using the Relative To property.
QR Code Results
Use the table below to understand the results of the QR Code tool.
Item Description
Detected Coordinates of the vertices that form a rectangle around the code.
Polygon
Focus Number of false cell detections in the finder pattern, timing pattern, and
data region.
ErrCellNum ErrCellNum
Calculation Tools
Calculation tools are used for calculating or refining detection points. The following Cal-
culation tools are described in this section.
Calculated Arc
This tool is used to create a graphical circle enclosing an arc based on referenced elements. The
circle can be used for tasks such as better defining a circular part or creating a clearance his-
togram. It can also be used to identify the center of a part with corners, as shown in the fol-
lowing figure.
To create a Calculated Arc tool, right-click Vision Tools in the Multiview Explorer, select Add
Calculation and then Calculated Arc. A Calculated Arc tool will be added to the Vision Tools
list.
Use the table below to understand the Calculated Arc configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
First Arc Point    Select the tool that contains the first point on the desired arc.
Second Arc Point Select the tool that contains the appropriate point
on the desired arc. Only available when Mode is set
to Three Points On The Arc.
Third Arc Point Select the tool that contains the appropriate point
on the desired arc. Only available when Mode is set
to Three Points On The Arc.
Center Arc Point Select the tool that contains the center point for
the arc. Only available when Mode is set to Center
Point and 1 Point on Arc.
Mode
This property defines the way the arc is calculated. Depending on the selection, different tools and properties will need to be referenced. For example, the Center Point and 1 Point on Arc mode shown in Figure 8-272 uses the Center Arc Point and First Arc Point properties to find the circumscribed circle of a grid square. The links to use for those properties are defined by clicking the ellipsis next to each property and then selecting the correct source. Only tools that yield the required type of result are shown in the box.
The possible mode types are listed below.
• Three Points On The Arc: Requires First Arc Point, Second Arc Point, and Third Arc Point. The arc is created across the three points and the center is calculated from them.
• Center Point and 1 Point on Arc: Requires Center Arc Point and First Arc Point. The arc is created based on the center and the calculated radius.
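For reference, the center calculation in the Three Points On The Arc mode is the classical circumcenter of three points. The following is a minimal sketch of that geometry; the class and method names are illustrative, not part of the ACE API.

    using System;

    class ArcMath
    {
        // Returns the center and radius of the circle through three points.
        public static (double X, double Y, double R) CircleFrom3Points(
            double ax, double ay, double bx, double by, double cx, double cy)
        {
            double d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by));
            if (Math.Abs(d) < 1e-12)
                throw new ArgumentException("Points are collinear; no unique circle.");
            double a2 = ax * ax + ay * ay;
            double b2 = bx * bx + by * by;
            double c2 = cx * cx + cy * cy;
            double ux = (a2 * (by - cy) + b2 * (cy - ay) + c2 * (ay - by)) / d;
            double uy = (a2 * (cx - bx) + b2 * (ax - cx) + c2 * (bx - ax)) / d;
            double r = Math.Sqrt((ax - ux) * (ax - ux) + (ay - uy) * (ay - uy));
            return (ux, uy, r);
        }
    }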
Use the table below to understand the results of the Calculated Arc tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To. This will only be different if the tool is set relative to another tool.
Angle    Since this tool always generates full circles, this is invariably returned as 360.
Calculated Frame
This tool is used to create a vision frame from referenced elements. Frames allow placement of
vision tools on objects that are not always in the same location or orientation. When a new vis-
ion tool is created, it can be specified to be relative to a vision frame. If the object that defines
the vision frame moves, so will the frame and the tools that are relative to that frame.
Additional Information: A fixed frame can also be created using this tool.
To create a Calculated Frame tool, right-click Vision Tools in the Multiview Explorer, select
Add Calculation and then Calculated Frame. A Calculated Frame tool will be added to the
Vision Tools list.
Use the table below to understand the Calculated Frame configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
X Axis Line Select the tool that contains the appropriate line.
Only available when Mode is set to Two Lines or Ori-
gin Point Following Line Angle.
Y Axis Line Select the tool that contains the appropriate line.
Only available when Mode is set to Two Lines.
Origin Point Select the tool that contains the appropriate point.
Only available when Mode is set to Two Points, One
Point, or Origin Point Following Line Angle.
Positive X Point Select the tool that contains the appropriate point.
Only available when Mode is set to Two Points.
Origin Transform Select the tool that contains the appropriate point.
Only available when Mode is set to Frame Relative.
Mode
This property defines the way the frame is calculated. Depending on the selection, different tools and properties will need to be referenced. For example, the Two Lines mode shown in Figure 8-273 uses the X Axis Line and Y Axis Line properties. The links to use for those properties are selected by clicking the ellipsis next to each property and then selecting the correct source. Only tools that yield the required type of result are shown in the box. Points are generated by most tools, but lines are only generated by Calculated Line and Line Finder tools.
The possible mode types are listed below.
• Two Lines: Requires X Axis Line and Y Axis Line. The frame is positioned at the intersection of the two lines and oriented along X Axis Line.
• Two Points: Requires Origin Point and Positive X Point. The frame is positioned at Origin Point with its X-axis directed toward Positive X Point.
• One Point: Requires Origin Point. The frame is positioned and oriented to match Origin Point.
• Frame Relative: Requires Origin Transform. The frame is positioned and oriented to match Origin Transform.
• Origin Point Following Line Angle: Requires Origin Point and X Axis Line. The frame is positioned at Origin Point and oriented so that the X-axis is parallel to X Axis Line.
NOTE: All calculations are adjusted based on Offset unless otherwise specified.
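As an illustration of how a frame reduces to an origin and an angle, the following sketch computes the Two Points case: the frame sits at the Origin Point and its X-axis points toward the Positive X Point. The names are illustrative, not the ACE API.

    using System;

    class FrameMath
    {
        // Returns the frame origin and its angle in degrees, measured
        // counterclockwise from the camera X-axis (as in the Angle result).
        public static (double X, double Y, double AngleDeg) FrameFromTwoPoints(
            double originX, double originY, double posXx, double posXy)
        {
            double angle = Math.Atan2(posXy - originY, posXx - originX) * 180.0 / Math.PI;
            return (originX, originY, angle);
        }
    }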
Use the table below to understand the results of the Calculated Frame tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To. This will only be different if the tool is set relative to another tool.
Angle Angle of the instance with respect to the X-axis of the camera coordinate sys-
tem.
Calculated Line
This tool is used to create lines based on referenced elements. These lines are primarily used to create other graphical features, such as a Calculated Point or Calculated Frame.
To create a Calculated Line tool, right-click Vision Tools in the Multiview Explorer, select Add
Calculation and then Calculated Line. A Calculated Line tool will be added to the Vision Tools
list.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Mode
This property defines the way the line is calculated. Depending on the selection, different tools and properties will need to be referenced. For example, the Perpendicular Line mode shown in Figure 8-274 uses the Point 1 and Line 1 properties. The links to use for those properties are selected by clicking the ellipsis next to each property and then selecting the correct source. Only tools that yield the required type of result are shown in the box. Points are generated by most tools, but lines are only generated by Calculated Line and Line Finder tools.
The possible mode types are listed below.
• Two Points: Requires Point 1 and Point 2. The line is positioned between the two points.
• Perpendicular Line: Requires Point 1 and Line 1. The line is positioned with an endpoint at Point 1 and crosses Line 1 so that the two lines are perpendicular.
Use the table below to understand the results of the Calculated Line tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To. This will only be different if the tool is set relative to another tool.
Angle Calculated angle of the line with respect to the X-axis of the camera coordin-
ate system.
Calculated Point
This tool is used to create points based on referenced elements. These points can be used to cre-
ate other graphical features, such as Calculated Arc or Calculated Line, or to act as a reference
point from which other measurements can be made.
To create a Calculated Point tool, right-click Vision Tools in the Multiview Explorer, select Add
Calculation and then Calculated Point. A Calculated Point tool will be added to the Vision
Tools list.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Line 1 Select the tool that contains the first reference line.
Only available when Mode is set to Point On A Line
Closest To A Point, Line – Arc Intersection, or Inter-
section Of Two Lines.
First Arc Select the tool that contains the first reference arc.
Only available when Mode is set to Point On An Arc
Closest To A Point, Line – Arc Intersection, or Inter-
section of Two Arcs.
Second Arc Select the tool that contains the second reference
arc. Only available when Mode is set to Intersection
of Two Arcs.
Mode
This property defines the way the point is calculated. Depending on the selection, different tools and properties will need to be referenced. For example, the Intersection Of Two Lines mode shown in Figure 8-275 uses the Line 1 and Line 2 properties. The links to use for those properties are selected by clicking the ellipsis next to each property and then selecting the correct source. Only tools that yield the correct type of result are shown in the box.
The possible mode types include the following.
• Line – Arc Intersection: Requires First Arc and Line 1. The point is positioned at the intersection of the line and the arc.
• Intersection Of Two Lines: Requires Line 1 and Line 2. The point is positioned at the intersection of the two lines.
• Intersection of Two Arcs: Requires First Arc and Second Arc. The point is positioned at the intersection of the two arcs.
NOTE: All calculations are adjusted based on Offset unless otherwise specified.
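For reference, the Intersection Of Two Lines calculation is ordinary line-intersection geometry. The following is a minimal sketch, assuming each line is described by a point and an angle from the X-axis; the names are illustrative, not the ACE API.

    using System;

    class PointMath
    {
        // Each line: a point (px, py) and an angle in degrees from the X-axis.
        public static (double X, double Y) Intersect(
            double p1x, double p1y, double a1Deg,
            double p2x, double p2y, double a2Deg)
        {
            double d1x = Math.Cos(a1Deg * Math.PI / 180), d1y = Math.Sin(a1Deg * Math.PI / 180);
            double d2x = Math.Cos(a2Deg * Math.PI / 180), d2y = Math.Sin(a2Deg * Math.PI / 180);
            double cross = d1x * d2y - d1y * d2x;
            if (Math.Abs(cross) < 1e-12)
                throw new ArgumentException("Lines are parallel; no intersection.");
            // Solve p1 + t*d1 = p2 + s*d2 for t, then substitute back.
            double t = ((p2x - p1x) * d2y - (p2y - p1y) * d2x) / cross;
            return (p1x + t * d1x, p1y + t * d1y);
        }
    }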
Calculated Point Results
Use the table below to understand the results of the Calculated Point tool.
Item Description
Frame/Group Index of the related result. It is associated with the tool that this tool is set
Relative To. This will only be different if the tool is set relative to another tool.
Image Process Tools
Image Process tools are used to manipulate image data. The output image of an Image Process
tool can be used as an Image Source by another tool.
The following Image Process tools are described in this section.
Advanced Filter
This tool applies a filter or operation to the input image to better facilitate processing by other
tools, including additional Advanced Filters. It can be used to perform tasks such as Back-
ground Suppression, Color Gray Filter, Erosion/Dilation, and edge extraction. In this way, it
can prepare the image to be used by less versatile tools. For example, the Background Sup-
pression filter shown in the following figure can be used by a Locator or Shape Search 3 tool to
more easily detect the poker chips.
To create an Advanced Filter tool, right-click Vision Tools in the Multiview Explorer, select
Add Image Process and then Advanced Filter. An Advanced Filter tool will be added to the
Vision Tools list.
The following filters can be applied by selecting the appropriate option in Filter Contents Selec-
tion.
No Filter
This is the default setting and makes no change to the input image. This setting is typically
used to temporarily disable Advanced Filter during testing and creation of the application. It
has no practical use during run time.
Smoothing Weak/Strong
This filter blends each pixel value with the others in its neighborhood to blur the image, reducing detail so that consistent deviations become more distinct. In particular, it can be used to suppress pixel-to-pixel noise and thereby improve edge detection. The Smoothing Weak and Smoothing Strong filters both apply a Gaussian blur to distribute color / gray level within neighborhoods of pixels; Smoothing Strong does so to a higher degree than Smoothing Weak.
Dilation
This filter changes the value of each pixel to the brightest (highest gray level value) within its
neighborhood. This is used to artificially raise the brightness of an image. Dilation determines
the highest gray level value within a kernel and then changes all pixels within that kernel to
match it.
NOTE: Since large sections of pixels are changing each time the tool is run, the
quality of the image decreases with each iteration.
Erosion
This filter changes the value of each pixel to the darkest (lowest gray level value) within its
neighborhood. This is used to artificially lower the brightness of an image. Erosion determines
the lowest gray level value within a kernel and then changes all pixels within that kernel to
match it.
Similar to Dilation, the quality of the image decreases with each iteration.
Median
This filter changes the value of each pixel to the median gray level value of the pixels within its neighborhood. This is used to reduce details and defects while maintaining the overall shapes and edges within an image.
Similar to Dilation and Erosion, the quality of the image decreases with each iteration.
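Dilation, Erosion, and Median are all neighborhood (kernel) operations that differ only in which value from the neighborhood they keep. The following sketch illustrates the principle on a gray-level image; it is not the ACE implementation, and the border handling (clamping to the image edge) is an assumption.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class Morphology
    {
        // Applies a square kernel of the given size; pick decides which
        // neighborhood value becomes the output pixel.
        public static byte[,] Apply(byte[,] img, int kernel, Func<byte[], byte> pick)
        {
            int h = img.GetLength(0), w = img.GetLength(1), r = kernel / 2;
            var outImg = new byte[h, w];
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                {
                    var hood = new List<byte>();
                    for (int dy = -r; dy <= r; dy++)
                        for (int dx = -r; dx <= r; dx++)
                        {
                            int yy = Math.Clamp(y + dy, 0, h - 1);   // clamp at borders
                            int xx = Math.Clamp(x + dx, 0, w - 1);
                            hood.Add(img[yy, xx]);
                        }
                    outImg[y, x] = pick(hood.ToArray());
                }
            return outImg;
        }

        // Dilation keeps the brightest value, Erosion the darkest,
        // Median the middle value of the sorted neighborhood.
        public static byte[,] Dilate(byte[,] img, int k) => Apply(img, k, n => n.Max());
        public static byte[,] Erode(byte[,] img, int k) => Apply(img, k, n => n.Min());
        public static byte[,] Median(byte[,] img, int k) =>
            Apply(img, k, n => { Array.Sort(n); return n[n.Length / 2]; });
    }

This also shows why a larger Kernel Size changes the result more aggressively: each output pixel is drawn from a wider neighborhood.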
Edge Extraction
This filter changes all pixels to black except for those on detected edges. If there is a strong
deviation between pixels, then those pixels are highlighted. Otherwise, they are changed to
black to emphasize the edges.
Horizontal Edge Extraction
This filter performs Edge Extraction only on edges detected from deviations that would result in horizontal edges.
Vertical Edge Extraction
This filter performs Edge Extraction only on edges detected from deviations that would result in vertical edges.
Edge Enhance
This filter blends pixel values along detected deviations to increase edge visibility. This
emphasizes the deviation between touching dark and light regions. It is mainly used in the
case of blurry images to clearly show the offset between light and dark regions.
Color Gray Filter
This filter returns a gray scale image by setting a standard white color. The image can be processed based on RGB or HSV colors.
When RGB is selected, the option selected in the RGB Filter property defines the governing color scheme of the filter. For example, if Red is selected in the RGB Filter property, the gray level value of each resulting pixel equals the red value of the original pixel. HSV works similarly, except that everything outside of the defined tolerances becomes black in the resultant image. Refer to Color Spaces on page 426 for more information.
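In the RGB case with custom gains, the conversion amounts to a gain-weighted sum of the three channels. The following is a minimal sketch of that idea; the names are illustrative, not the ACE API.

    using System;

    class ColorGray
    {
        // Gain-weighted conversion corresponding to the Gain (Red/Green/Blue)
        // properties; with RGB Filter = Red this reduces to gains (1, 0, 0).
        public static byte ToGray(byte r, byte g, byte b,
                                  double gainR, double gainG, double gainB)
        {
            double v = r * gainR + g * gainG + b * gainB;
            return (byte)Math.Min(255, Math.Max(0, v));
        }
    }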
Background Suppression
This filter sets a threshold for gray level or individual / collective color level to filter backgrounds. It is designed to filter out the background of an image in order to emphasize the target objects.
Labeling
Similar to the Labeling tool, this filter isolates masses of pixels that fall within a certain color
range and meet different extraction conditions. You can set a single color by right-clicking in
the image or a color range by right-clicking and dragging to establish a region. Alternatively,
you can manually enter the colors using the Color Region section in the Properties.
Once one or more color thresholds have been established, the tool will filter everything out of
the image except for the pixels that fall within the ranges. Additional extraction conditions can
be set to further limit the identified regions. Unlike the Labeling tool, the regions are not
returned as data. Instead, Labeling Filter results in a new image.
Refer to Labeling on page 445 for more information.
Image Operation
The Image Operation filter uses mathematical or bit operations to alter the value of pixels using a constant operand value.
2 Image Operation
The 2 Image Operation filter uses mathematical or bit operations to alter the value of pixels by using a second image as an operand.
In a 2 Image Operation filter, the images must be the same size and type (monochrome or
color) and the operand value is the gray level or color value of corresponding pixels. The oper-
ation options are listed below.
• Arithmetic Operation: Perform a mathematical calculation between the two values.
    • Add: Add the operand value to the pixel gray level, to a maximum of 255.
    • Subtraction: Subtract the operand value from the pixel gray level, to a minimum of 0.
    • Subtraction (Absolute): Subtract the operand value from the pixel gray level with no minimum. The resulting gray level is the absolute value of the operation result.
    • Multiplication: Multiply the operand value by the pixel gray level, to a minimum of 0 and a maximum of 255.
    • Multiplication (Normalization): Performs a multiplication operation and then normalizes for brightness.
    • Average: Only available for 2 Image Operations. Returns the average of the corresponding pixel values.
    • Maximum: Only available for 2 Image Operations. Returns the maximum of the corresponding pixel values.
    • Minimum: Only available for 2 Image Operations. Returns the minimum of the corresponding pixel values.
• Bit Operation: Perform a logical operation on the pixels.
    • NOT: Reverses the RGB or gray level polarity of the image, regardless of the operand value.
    • AND: Performs an AND operation on the pixels by comparing the binary digits of the two values. It is generally used to compare two images. It can also be used in masking.
    • OR: Performs an OR operation on the pixels by comparing the binary digits of the two values. It is generally used to merge two images together.
    • XOR: Performs an XOR operation on the pixels by comparing the binary digits of the two values. It is generally used to create a binary image (black and white).
    • NAND: Performs a NAND operation on the pixels by comparing the binary digits of the two values. Like AND, this is used to compare images, but it returns the negative of what the AND result would be.
    • NOR: Performs a NOR operation on the pixels by comparing the binary digits of the two values. It is generally used to merge two images together and return the negative.
    • XNOR: Performs an XNOR operation on the pixels by comparing the binary digits of the two values. It is generally used to create a negative binary (black and white) image.
• Bit Shift: Shift the binary digits of each pixel value to the left or right. Vacated bit positions (the first digit in a right shift and the last digit in a left shift) become 0. This is only available in a single image operation and is used when Arithmetic Multiplication is too computationally taxing.
• Change Pixel Value: Assign a fixed value to all pixels that fall within a certain gray level range. Change Pixel Value defines the resulting gray level and the Bounds define the range to be changed or retained, depending on the Change Pixel Mode.
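The following sketch illustrates how several of these single-image operations behave on one gray level value. It is illustrative only, not the ACE implementation; the method names are invented for the example.

    using System;

    class PixelOps
    {
        public static byte Add(byte gray, byte operand) =>
            (byte)Math.Min(255, gray + operand);          // saturate at 255

        public static byte Subtract(byte gray, byte operand) =>
            (byte)Math.Max(0, gray - operand);            // saturate at 0

        public static byte SubtractAbsolute(byte gray, byte operand) =>
            (byte)Math.Abs(gray - operand);               // no minimum

        public static byte Not(byte gray) => (byte)~gray; // reverse polarity

        public static byte And(byte gray, byte operand) => (byte)(gray & operand);

        // A left shift by 1 doubles the value, which is why Bit Shift can
        // stand in for an expensive multiplication; vacated bits become 0
        // and bits shifted past the top of the byte are discarded.
        public static byte ShiftLeft(byte gray, int bits) => (byte)(gray << bits);
    }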
The properties of Advanced Filter change depending on what type of filter is selected in Filter
Contents Selection. The table below shows all properties.
The Filter Contents Selection column defines those filter(s) for which the property is available.
If it is blank, the property is available for all or most of them.
Group    Filter Contents Selection    Item    Description
Tool Links --- Image Source Defines the image source used
for processing by this vision tool.
Color Gray Filter Filter Kind Select whether the image will be
filtered in RGB or HSV.
Color Gray Filter Gain (Red/Green/Blue) These set the scale from 0-1 of
how the gray level value is
calculated. Only available when
Filter Kind is set to RGB and RGB
Filter is set to custom.
Color Gray Filter Standard Hue Defines the nominal hue color
that the filter will return on a
360-degree circle. Red is 0,
green is 120, and blue is 240.
Only available when Filter Kind is
set to HSV.
Color Gray Filter Hue Range Sets the tolerance for the Hue
value within which the gray level
value will be returned. Only
available when Filter Kind is set
to HSV.
Color Gray Filter Color Chroma Set the bounds for the filter’s
saturation range.
Labeling Filter Hole Plug Color When enabled, sets the color
that will fill all holes in the
detected masses.
Labeling Filter Extract Condition 1/2/3 Define the conditions this tool
will consider when extracting
masses from the image.
Conditions can be set by the
type (kind) and minimum and
maximum values.
Image Operation Arithmetic Value Set the operand value for the
arithmetic operation. Only
available when Operation Type is
set to Arithmetic Operation.
Image Operation Operation Value Set the operand value for the bit
operation. Only available when
Operation Type is set to Bit
Operation.
Image Operation Bit Shift Value Set the operand value for the bit
shift. Only available when
Operation Type is set to Bit
Shift.
Image Operation Change Pixel Value Set the gray level value to which
pixels will be changed. Only
available when Operation Type is
set to Change Pixel Value.
The Advanced Filter tool can execute several types of functions on an image, so it is important
that the appropriate choice is selected from the Filter Contents Selection menu before any other
changes are made. The Properties will adjust to match the type of filter.
There is no generic way to configure this type of tool because it is designed to perform a mul-
titude of different operations. However, in general, the tool is operated by first adjusting the
region of interest to the appropriate location and size. The exception to this is Color Gray Filter,
which does not have a region of interest. Any filters or operations performed will take place
only within the established region.
Most filter types use the Iteration and Kernel Size items. The Iteration value can be increased so that the filter runs multiple times whenever the tool is executed. This increases processing time, but it can be useful for removing detail from an image. Kernel Size, by contrast, sets the size of the pixel neighborhoods; larger kernels lead to fewer calculations and faster processing time. Adjusting both balances performance against image quality and can be tuned to yield optimum images.
Several filters use both RGB and HSV color schemes. Refer to Color Spaces on page 426 for
more information.
Advanced Filter Results
Advanced Filter returns a modified image that can be used by other vision tools. To do this,
set Advanced Filter as the Image Source property of the subsequent tool. The resultant image
can be viewed in the Result section of the object editor. Refer to Figure 8-277 for an example.
Color Matching
This tool searches and analyzes images to find areas of color that match user-defined filters. It
is typically used to analyze an area on an object for the purpose of verifying if the object meets
defined color criteria. It can also be used to filter unnecessary colors from the image before use
in later tools.
To create a Color Matching tool, right-click Vision Tools in the Multiview Explorer, select Add
Image Process and then Color Matching. A Color Matching tool will be added to the Vision
Tools list.
Use the table below to understand the Color Matching configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region Of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Custom Sampling Set Defines the sampling step used in calculation. This
is set to 1 by default. Enable this to adjust the set-
ting.
Output As Gray scale Specifies the color scheme of the output image.
Image When enabled, the resultant image is converted to
gray scale after color filters are applied.
The filters required for Color Matching are organized using the pane located below the prop-
erties. Color Matching will output the colors that are included in any filter listed in this pane.
All other pixels in the image will be output as black.
Any number of filters can be added to the pane. They are managed using the three buttons
described below.
• Add: Creates a new filter and adds it at the end of the current list of filters.
• Delete: Removes the currently selected filter. This is only available if a filter is highlighted.
• Edit: Opens the Color Finder editing window. This is only available if a filter is highlighted. Refer to the following section for more information.
Filter Name
The name of the filter as it appears in the Filters Pane. This is modified by typing in the text
box and causes no change to the operation of the tool. It is used to label filters and is useful
when there are multiple filters.
Color
Defines the color to be searched for by the filter and included in the resulting image. This is
defined in two different ways:
1. Right-click and drag in the image on the right to return the average color from the res-
ulting rectangular region. For example, the Color in Figure 8-279 was created by clicking
and dragging within one of the green-tinted bread slices of the corresponding image.
2. Click the arrow next to the Color field and choose a specific color from the color wheel
window.
Tolerances
Specifies the allowable deviation from the nominal color defined by the Color parameter. Any color whose Hue (H), Saturation (S), and Luminescence (L) values each differ from the nominal by less than the values defined in the H, S, and L fields is included in the resultant image. These are set to 25 by default, but using the right-click method to select a color automatically adjusts them so that the entire region defined in the image is output.
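Conceptually, the tolerance test compares each pixel's color to the nominal one channel by channel. The following is a minimal sketch of that test; it is not the ACE implementation, and the hue wrap-around at 360 degrees is an assumption.

    using System;

    class ColorFilter
    {
        // A pixel passes a filter when its H, S, and L values each differ
        // from the nominal Color by no more than the tolerance fields.
        public static bool Matches(
            double h, double s, double l,            // pixel color
            double nomH, double nomS, double nomL,   // nominal Color property
            double tolH, double tolS, double tolL)   // H, S, L tolerance fields
        {
            double dh = Math.Abs(h - nomH);
            dh = Math.Min(dh, 360 - dh);             // shortest way around the hue circle
            return dh <= tolH
                && Math.Abs(s - nomS) <= tolS
                && Math.Abs(l - nomL) <= tolL;
        }
    }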
Color Matching returns a modified image that can be used by other vision tools. This image
will only include the colors defined in the filters. All other pixels will be returned as black. To
use this image with another tool, set Color Matching as the Image Source property of the sub-
sequent tool.
The resultant image can be viewed in the Vision Window as shown in the following figure.
Image Processing
This tool applies a filter to gray scale images to better facilitate processing by other tools,
including additional Image Processing tools.
NOTE: Advanced Filter tool provides more capabilities and may offer faster exe-
cution times than the Image Processing tool. Refer to Advanced Filter on page
564 for more information.
To create an Image Processing tool, right-click Vision Tools in the Multiview Explorer, select
Add Image Process and then Image Processing. An Image Processing tool will be added to the
Vision Tools list.
The following filters can be applied by selecting the appropriate option in Mode Of Operation.
Arithmetic Addition Add the operand value to the pixel gray level to a maximum
of 255.
Arithmetic Subtraction Subtract the operand value from the pixel gray level to a
minimum of 0.
Arithmetic Multiplication Multiply the operand value by the pixel gray level to a min-
imum of 0 and a maximum of 255.
Arithmetic Division    Divide the pixel gray level by the operand value, to a minimum of 0 and a maximum of 255.
Arithmetic Lightest Returns the maximum of the pixel value and operand value.
Arithmetic Darkest Returns the minimum of the pixel value and operand value.
Assignment Initialization Assigns a defined gray-level value to all pixels in the image.
Assignment Copy Copies the input value of each pixel to the corresponding out-
put pixels. Virtually no change is made to the image.
Assignment Inversion Inverts the gray level value of each pixel and outputs it.
Filtering Average Changes the color of each pixel to the average gray level
value of the pixels within its neighborhood.
Filtering Laplacian Increases or decreases the gray level values of light and
dark pixels at high contrast areas, respectively, to enhance
edges. This filter is extremely sensitive to noise and should
only be performed after the image has been blurred or
smoothed.
Filtering Horizontal Sobel Uses the Sobel operator to brighten pixels at horizontal
edges and darken the others. This has slightly better noise
filtering than Filtering Horizontal Prewitt.
Filtering Vertical Sobel Uses the Sobel operator to brighten pixels at vertical edges
and darken the others. This has slightly better noise filtering
than Filtering Vertical Prewitt.
Filtering Sharpen Increases or decreases the gray level values of light and
dark pixels, respectively, to increase contrast.
Filtering Sharpen Low Increases or decreases the gray level values of light and
dark pixels, respectively, to increase contrast. This filter per-
forms this to a lesser degree than Filtering Sharpen.
Filtering Horizontal Prewitt Uses the Prewitt operator to brighten pixels at horizontal
edges and darken the others. This has slightly worse noise fil-
tering than Filtering Horizontal Sobel.
Filtering Vertical Prewitt Uses the Prewitt operator to brighten pixels at vertical edges
and darken the others. This has slightly worse noise filtering
than Filtering Vertical Sobel.
Filtering High Pass Increases or decreases the gray level values of light and
dark pixels at high contrast areas, respectively, to enhance
edges. This filter is extremely sensitive to noise and should
only be performed after the image has been blurred or
smoothed. It is functionally identical to Filtering Laplacian.
Filtering Median Changes the color of each pixel to the median gray level
value of the pixels within its neighborhood.
Morphological Dilate Changes the color of each pixel to the brightest (highest
gray level value) within its neighborhood.
Morphological Erode Changes the color of each pixel to the darkest (lowest gray
level value) within its neighborhood.
Morphological Close Removes small dark particles and holes within an image.
Morphological Open Removes peaks from an image, leaving only the image back-
ground.
Histogram Light Threshold    Changes each pixel value depending on whether it is less than or greater than the specified threshold. If an input pixel value is less than the threshold, the corresponding output pixel is set to the minimum representable value. Otherwise, it is set to the maximum representable value.
Histogram Dark Threshold    Changes each pixel value depending on whether it is less than or greater than the specified threshold. If an input pixel value is less than the threshold, the corresponding output pixel is set to the maximum representable value. Otherwise, it is set to the minimum representable value.
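The two threshold modes are mirror images of each other, as the following sketch shows. It is illustrative only, not the ACE implementation.

    class Thresholds
    {
        // Light Threshold: pixels below the threshold go to the minimum (0),
        // all others to the maximum (255).
        public static byte Light(byte gray, byte threshold) =>
            gray < threshold ? (byte)0 : (byte)255;

        // Dark Threshold: the inverse mapping.
        public static byte Dark(byte gray, byte threshold) =>
            gray < threshold ? (byte)255 : (byte)0;
    }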
Image Processing Configuration
The Image Processing tool is generally configured by selecting a Mode Of Operation and one
or two input images. All of the available Modes can be performed with one image and some
can be performed with two.
The properties in the bottom half of the tool editor will change depending on the selected
Mode Of Operation, as shown in the following table. Any mode with a Constant property can
be operated with two images. If an Operand Image is selected, the filter will use the values of
the corresponding pixels in that image. Otherwise, it will default to the Constant property.
The properties of Image Processing change depending on what type of filter is selected in
Mode Of Operation. General properties are described below.
Mode of Operation    Properties    Description
Arithmetic Operations    Clipping Mode    Select the method by which the calculation handles resultant values below 0. Normal converts all of them to 0, while Absolute returns the absolute value of any result below 0.
Filtering Operations    Clipping Mode    Select the method by which the calculation handles resultant values below 0. Normal converts all of them to 0, while Absolute returns the absolute value of any result below 0.
Histogram Operations    Threshold    Defines the gray level value threshold for the histogram. The gray level values of all pixels in the image change to either 0 or 255 depending on this value and the type of histogram (available for the Histogram Light / Dark Threshold Mode of Operation only).
Image Processing Results
There are no results in the tool editor. Image Processing outputs a modified version of the
input image and it does not return any data. The modified image can be viewed in the Vision
Window or by selecting Image Processing as another tool’s Image Source.
Image Sampling
This tool is used to extract a rectangular section of an image and output it as a separate image.
To create an Image Sampling tool, right-click Vision Tools in the Multiview Explorer, select
Add Image Process and then Image Sampling. An Image Sampling tool will be added to the
Vision Tools list.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Image Sampling Configuration
The Image Sampling tool is primarily used to limit image processing to a specific area of the
vision field of view. This means that subsequent tools will only need to view the resultant
region of this tool. For example, in Figure 8-284 the image is cropped so that only the region with guitar picks is returned.
To do this, the region of interest can be defined in the target location. Everything outside of it
will be removed from the resultant image, which can be used as an image source in other
tools.
Image Sampling Results
There are no results in the tool editor. Image Sampling returns a cropped image that can be
used by other vision tools. To do this, set Image Sampling as the Image Source property of the
subsequent tool. The resultant image can be viewed in the Vision Window. Refer to Figure 8-
285 above.
Position Compensation
This tool is used to execute a transformation on a region of an image so that the region is returned in a specific orientation. This contrasts with the Relative To function of most tools, which transforms the tool itself to an instance of an object without changing the image orientation. Position Compensation instead orients the image so that processing only needs to occur in one location. While Relative To is more convenient for processing during run time, Position Compensation can be used to make an operation more user-friendly while configuring the application. For example, Relative To can move any reader tool into an orientation and read the character string, but Position Compensation can be used during configuration so the character strings are always in a readable orientation.
To create a Position Compensation tool, right-click Vision Tools in the Multiview Explorer,
select Add Image Process and then Position Compensation. The Position Compensation tool
will be added to the Vision Tools list.
Use the table below to understand the Position Compensation configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Region of Relative To The tool relative to which this tool executes. The
Interest output values of the selected tool are the input val-
ues of this one.
Link Name Select the property in the relative tool that will
provide the input values.
Position Compensation Configuration
The Position Compensation tool is primarily used to focus an image on a specific area of the vision field of view. This is useful during configuration because it allows the user to develop an application while images are in an ideal orientation. For example, in Figure 8-286 the barcode is rotated and centered so that a Barcode tool only needs to focus on the center region of the image.
To do this, Position Compensation can be set in a known orientation to guarantee that the
instance is centered and returned in a target orientation. All pixels outside of the region of
interest will be returned as black. This may cause interference with reader tools if there is not
sufficient white space around the object to be read.
Position Compensation Results
There are no results in the tool editor. Position Compensation returns a modified image that
can be used by other vision tools. To do this, set Position Compensation as the Image Source
property of the subsequent tool. The resultant image can be viewed in the Result section of the
object editor. Refer to Figure 8-286 for an example.
Custom Tools
Custom tools allow you to specify a program that will be called when the tool is executed. The
following Custom Tools are described in this section.
Custom Vision Tool
This tool is a C# program that can be referenced and executed as a vision tool. Other vision
tools and objects can be referenced and used within the program. At the end of the program, a
collection of defined vision transform results will be returned.
To create a Custom Vision tool, right-click Vision Tools in the Multiview Explorer, select Add
Custom and then Custom Vision Tool. A Custom Vision tool will be added to the Vision Tools
list.
Use the following information to understand common uses for Custom Vision Tools.
Executing other vision tools: this can include the use of tools configured ahead of time or modifying tool parameters before executing them during run time. A Custom Vision Tool has access to vision tool parameters, frames, and results to allow complete flexibility in selecting which results are accessed and returned.
Emulating vision results: in many situations, the Random Instances feature does not provide adequate control over vision result generation for effectively emulating the application. A Custom Vision Tool can be used to program the logic of vision result generation to be as simple or complex as necessary. Any information can be stored in the tag property or directly in a controller variable structure to be used later in a process.
Use the table below to understand the Custom Vision tool configuration items.
Tool Links Image Source Defines the image source used for processing by
this vision tool.
Properties Show Results Graphics Specifies if the graphics are drawn in the Vision Win-
dow.
Custom Vision Tool Customization
Custom Vision Tools can be used to do almost anything with other vision tools in the work-
space. For example, the tool shown in Figure 8-287 is a simple example used to return
instances from a Locator. The instances are drawn in the Vision Window as shown in the fig-
ure below.
Like other tools, Custom Vision Tool must have a camera or tool selected in the Image Source
to operate.
Additional Information: Every Custom Vision Tool should end with the line
“return results.ToArray();”. This outputs the results to the Results section of the
tool editor.
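A minimal sketch of a Custom Vision Tool body in the spirit of the example in Figure 8-287 is shown below: it runs a Locator and hands its instances back as results. The way the Locator and its results are accessed (GetTool, Results, X/Y/Roll) is an illustrative assumption, not the documented ACE API; only the closing return line and the VisionTransform x/y/roll entries are taken from this manual.

    // Collect the results to be returned by this Custom Vision Tool.
    var results = new List<VisionTransform>();

    var locator = GetTool("Locator");   // hypothetical lookup of the Locator
    locator.Run();                      // execute the referenced tool

    foreach (var instance in locator.Results)   // assumed results collection
    {
        // x, y, and roll populate the Position X, Position Y, and Angle
        // columns of the tool editor.
        results.Add(new VisionTransform(instance.X, instance.Y, instance.Roll));
    }

    // Every Custom Vision Tool should end with this line.
    return results.ToArray();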
Use the table below to understand the results of the Custom Vision tool.
Item Description
Position X X-coordinate of the instance. This is returned using the “x” entry of the
VisionTransform list.
Position Y Y-coordinate of the instance. This is returned using the “y” entry of the
VisionTransform list.
Angle Angle of the instance. This is returned using the “roll” entry of the
VisionTransform list.
Remote Vision Tool
Some applications require Process Managers operating many processes for multiple robots and
multiple cameras. There may be other situations where applications are operating on a single
PC, where the PC hardware can limit applications that require large amounts of processing
power. An example is a PC operating with multiple robot processes and different Vision Tools.
Either situation can cause a lag in response and motion. In both situations, it may be useful to
offload vision operations to another computer to improve application speed and performance.
Improved performance is accomplished by referencing a vision tool in a separate Application
Manager and returning results from the other tool. To distribute vision operations, you can cre-
ate and configure another Application Manager, and use the “Move To” option to move the
Vision Tools to the second Application Manager. A part or target object vision tool con-
figuration can reference a Remote Vision Tool (RVT) which returns the VisionTransform res-
ults from a vision tool in a different Application Manager. The RVT is used in conjunction
with other Vision Tools to off load image processing to a second PC acting as a vision server.
To create a Remote Vision Tool, right-click Vision Tools in the Multiview Explorer, select Add Custom and then Remote Vision Tool. A Remote Vision Tool will be added to the Vision Tools list.
NOTE: This is not necessary for eV+ applications using ACE Sight objects
because eV+ can directly request results from multiple Application Managers.
Configuration
Object | Definition
Tool Links - Vision Tool | Selects the vision tool that the Remote Vision Tool will execute and from which it will acquire results.
Properties - Show Results Graphics | Specifies if the graphics are drawn in the Vision Window.
Properties - Time Out | Sets the maximum time period (in milliseconds) that the tool is able to run. This includes both execution of the tool and acquisition of results.
RVTs will likely utilize a camera that is connected to a separate computer. Therefore, all
hardware connections should be verified before configuration. The camera calibrations
should also be performed, either remotely or on the server computer. For more information,
refer to Remote Application Manager.
While the function of an RVT depends entirely on the Vision Tool to which it is linked, the
configuration is largely the same. First create and configure a separate Application Manager
to act as a vision server and create the necessary vision tool there. Only then should the
correct Vision Tool be selected in the RVT properties. Linking an RVT to a Vision Tool that
already has errors will cause the RVT to fail.
NOTE: Each RVT can link to only one Vision Tool, but RVTs in the same
Application Manager can link to Vision Tools in multiple other Application
Managers.
Runtime
During runtime, RVT will continuously update to show the latest values of its results, as
would any other Vision Tool. However, RVT does not return the images or graphics from
these values. Therefore, it is recommended to perform any image processing or editing in the
Application Manager containing the linked Vision Tool.
Results
Tool Compatibility
RVT is not able to connect to every type of Vision Tool. It is primarily designed to work with
Finder Tools so it can return the locations of instances. RVT supports the following Vision
Tools:
l Blob Analyzer
l Labeling
l Locator
l Shape Search 3
l Custom Vision Tool
For the first four Vision Tools listed, the RVT obtains its results by returning the values for
each instance that match the Results columns shown (see "Remote Vision Tools Editor"). Any
other Vision Tool can be supported through a Custom Vision Tool in the Application Manager
acting as a vision server. The RVT uses the VisionTransform object type to supply proper
values for Position X, Position Y, and Angle.
Event Log
This section describes the different event messages that are available for troubleshooting purposes.
You can sort the events by clicking the Type, Time Stamp, or Message column headings. Each
click will toggle the sort between ascending and descending order.
You can filter events by clicking the event type buttons in the Event Log area.
You can copy, clear, and select all Event Log entries by right-clicking the list and making a
selection.
Additional Information: Some errors (such as servo errors) may require access-
ing the FireWire event log for additional information. Refer to FireWire Event Log
on page 593 for more information.
Use the Types and Levels selections to filter events. The Copy button will copy all events in
the list to the clipboard.
FireWire Event Log
1. While online with a physical controller, access the Controller Settings area. Refer to
Controller Settings on page 185 for more information.
2. Click the Configure button to display the Configure Options Dialog Box.
3. Select Configure FireWire Nodes and then click the Finish button. The
Configure FireWire Nodes Dialog Box will be displayed.
4. Right-click a FireWire node and then select View Event Log to access the FireWire Event
Log. Select Clear Event Log to clear the FireWire Event Log. After the selection is made,
the procedure is complete.
FireWire Configuration Conflicts
When connecting to a controller, ACE software will check for FireWire configuration conflicts.
If a conflict is present, error messages will be displayed after the connection is established.
A.1 Configuring Basler Cameras
NOTE: The ACE software installation includes the Basler Pylon Suite that is
required for the camera configuration.
Camera Connections
The Basler cameras ship with power and data cables. The following identifies the Omron part
numbers.
Basler:
Power I/O cable p/n 09454-610
Cat6 Camera Cable p/n 18472-000
Use the following information to understand the Basler camera connections. Camera models
with GPIO are: acA60-300, acA800-200, acA1300-75, acA1920-40, acA1920-48, acA1920-50, and
acA2500-20. All other Basler camera models do not have GPIO capability. Use the following
two tables as guides for power connections.
[Figure: camera power I/O connector pin numbering diagrams]
GPIO camera models:
Pin | Wire Color | Function
2 | Pink | Opto IN 1
3 | Green | No Connection
NOTE: Wire colors indicated in the table above correspond to the power I/O
cable supplied with the camera.
Non-GPIO camera models:
Pin | Wire Color | Function
2 | Pink | Opto IN 1
3 | Green | No Connection
NOTE: Wire colors indicated in the table above correspond to the power I/O
cable supplied with the camera.
Use the following information to supply power to the camera. Refer to Camera Connections on
page 597 for more information.
NOTE: If using a GigE type camera, external power connections should not be
used if the camera is connected to a PoE port. Only the optocoupler needs to be
connected after the Ethernet PoE cable.
Use the following connections to supply external power to the camera if the model uses GPIO.
l Pin 1: + 24 VDC
l Pin 6: 0 VDC
Use the following connections to supply external power to the camera if the model does
NOT use GPIO.
l Pin 1: + 12 VDC
l Pin 6: 0 VDC
Camera Communication Connections
If using a GigE type camera, connect the RJ45 camera port to the PC or the local area network
using the supplied cable.
If using a USB camera, connect the camera to the PC using the supplied USB cable.
Configure Network and Camera Settings
NOTE: The Pylon tools referenced in this section can be found in the Windows
Start Menu under the Pylon program group. These tools are included with the
default ACE software installation.
Access the PC network adapter configuration and enable the Jumbo Packet property. Set the
value to 9014 Bytes.
Open the Pylon IP Configurator tool to view the camera communication status with the PC. If
the camera IP configuration is incorrect, this tool can be used to correct the settings. Click the
Save button after making any setting adjustments.
The figure below provides an example that shows one camera that is communicating properly
and one camera that is not reachable due to an incompatible subnet configuration.
The above figure shows two cameras connected to individual Ethernet connections, as
opposed to cameras sharing the same network channel as shown in the following figure. With
individual connections, two acA1600-60gm cameras can run in continuous mode, each on its
own network interface of the Omron IPC with the Basler GigE Vision Adapter properly
installed, and each camera can use more than 100 MByte/s for 1600 x 1200 pixels at 50+ fps.
When cameras must share a single channel, the maximum of 125 MByte/s that ideal hardware
can deliver per channel must be shared between the cameras, even with higher resolution
cameras of 1920 x 1200 pixels.
The only way to check whether the cameras are communicating properly with the Windows
PC is with the Pylon Viewer.
An operating issue is that different NICs and switches support different packet sizes, and
defining the correct packet size is system dependent. When packets are too small, the NIC
buffers can become overloaded and packets can be dropped. When packets are too large, the
NIC may not support that size and will drop the packet. In either case, when packets are
dropped, the ACE driver detects the drop and requests that the camera resend.
A typical packet size of ~500 bytes is too small when dealing with large images, which
naturally consist of lots of data, and not many devices support the full 16000-byte packets.
Starting at 1500 bytes is usually the fail-safe; if you notice your CPU load is higher than you
would like, try a larger packet size.
You can also enable the "Auto Packet Size" feature in the camera, and the camera will try to
negotiate a working packet size on its own.
The Pylon Viewer is used to adjust the camera image acquisition settings. Use this tool to
enable the Exposure Active signal and change any other necessary camera settings before
adding a camera to the ACE project.
NOTE: If any camera settings are changed with the Pylon Viewer tool after it
has been added to the ACE project, camera functions may be disrupted.
Open the Pylon Viewer tool and select a camera from the Devices list. Then, click the Open
Camera button to access all settings associated with the selected camera.
The Exposure Active signal is an output from the camera to the robot controller that indicates
the moment when the camera acquired an image. This signal is used by the controller for
position latching in applications that use vision-guided motion. The Exposure Active signal
must be enabled for all applications that require robot or belt encoder position latching.
Image Acquisition Check
It may be helpful to use the Pylon Viewer tool to confirm image acquisition before adding the
camera object to the ACE project. Use the Single Shot button to acquire an image and then
make any necessary adjustments such as exposure or white balance if required.
NOTE: The ACE software and Pylon Viewer tool cannot access a camera sim-
ultaneously. Always disconnect the Pylon Viewer tool before opening an ACE
project with a Basler camera object present.
Position Latch Wiring
If a camera is used in a vision-guided application with functions such as belt tracking or pos-
ition refinement, a position latch signal must be connected from the camera to a robot con-
troller input signal. This will allow the robot controller to capture the belt or robot position
when the latch condition is met.
Use the information in this section to wire and configure a typical latch position signal from a
connected Basler camera.
The latch signal pin connections are indicated as follows. Refer to Camera Connections on
page 597 for more information.
The following example applies when using the SmartController XDIO terminal block to receive
a rising edge latch on input 1001 (+1001 in the latch configuration). Refer to Configure on page
193 for more information.
[Figure: SmartController XDIO terminal block latch wiring. The camera latch signal is wired
across input 1001 (terminals +1 and -2); inputs 1002 through 1012 occupy terminals 3 through
24, +24 V (1 A) is available on terminals 41 through 47, and GND on terminals 48 through 50.]
The following example applies when using the e-Series controller XIO terminal block to receive
a rising edge latch on input 1001. Refer to Configure on page 193 for more information.
NOTE: Signal numbers may differ from what is shown in the following figure.
Refer to the robot User Guide for more information.
Ensure the appropriate XIO Termination Block bank switch is in the HI position.
[Figure: e-Series controller XIO terminal block latch wiring. The Basler camera latch signal is
wired to Input Bank 1 (terminal 4, I1, Signal 1001); terminals 5 through 9 carry I2 through I6
(Signals 1002 through 1006), the Bank 1 common is terminal 3, +24 V is on terminals 2 and 11
(HI), and GND is on terminals 1 and 10 (LO).]
Latch Signal Test
This is a simple test that can be performed for each camera in an application before
proceeding with calibrations.
1. Access the Monitor Window. Refer to Monitor Window on page 206 for more inform-
ation.
2. Clear the latch FIFO buffer using the program keyword CLEAR.LATCHES.
For belts, use "do@1 clear.latches(-n)" where "n" is the belt object number.
For robots, use "do@1 clear.latches(+n)" where "n" is the robot number.
3. Ensure the latch FIFO buffer is empty by entering "listr latched(-n)" or "listr latched(+n)",
where "n" is the same as in step 2. This should return "0" if the FIFO buffer is clear.
A.2 Configuring Sentech Cameras
ACE Sight vision tools support Sentech cameras supplied by Omron. These cameras need to
be configured before use with the ACE software.
Use the following steps to understand the general Sentech camera configuration procedure.
The steps are detailed in the following sections.
NOTE: The ACE software installation includes the Sentech ‘StViewer’ and
‘GigECameraIPConfig’ utilities that are required for the camera configuration.
Camera Connections
The Sentech cameras ship with power and data cables. The following identifies the Omron
part numbers.
Sentech:
Power I/O Cable p/n 21942-000
Cat5e Camera Cable p/n 21943-000
Pin | Wire Color | Function
1 | Blue | Power IN
6 | Black | GND
NOTE: Wire colors indicated in the table above correspond to the power I/O
cable (ORT P/N 21942-000) supplied with the camera kit. For more information
regarding pin voltages and camera specifications, please refer to the Sentech
camera documentation.
Use the following information to supply power to the camera. Refer to Camera Connections on
page 607 for more information.
NOTE: When using a GigE type camera, external power connections should not
be used if the camera is connected to a PoE port.
Use the following connections to supply external power to the camera if the model uses GPIO.
l Pin 1: + 24 VDC
l Pin 6: 0 VDC
Camera Communication Connections
If using a GigE type camera, connect the RJ45 camera port to the PC or the local area network
using the supplied Cat5e camera cable (ORT P/N 21943-000).
If using a USB camera, connect the camera to the PC using the supplied USB cable.
NOTE: The StViewer tools referenced in this section can be found in the Win-
dows Start Menu under the Sentech SDK program group. These tools are
included with the default ACE software installation.
PC Port Settings
Open the PC Control Panel and then open Network Connections. Right-click the port used for
the Sentech camera and select Properties. Select Internet Protocol Version 4 (TCP/IPv4) and
click Properties to open that panel, as shown in the following figure.
Use the Network Properties to set up a compatible IP address for the PC port communicating
with the Sentech camera. When done, click OK on the Properties panel. The Ethernet
Properties panel should remain open.
On the Ethernet Properties panel, click the Configure button to open the PC network adapter
configuration and enable the Jumbo Packet property. Click the Advanced tab, scroll down the
Property options, and select Jumbo Packet. On the Value side, use the drop-down to select
9014 Bytes, as shown in the following figure. Then click OK to close the panel.
In the Windows Start Menu, navigate to the Sentech SDK and open the GigECameraIPConfig
to view the camera communication status with the PC. If the camera IP configuration is incor-
rect, this tool can be used to correct the settings. Click the Apply button after making any set-
ting adjustments.
NOTE: If an active connection is present between the camera and the ACE soft-
ware, the Sentech StViewer settings may be inaccessible since only one software
may access the camera at any time.
NOTE: Click the DHCP and persistent IP check boxes, as shown in the following
figure, to ensure the IP settings persist even after the camera is rebooted.
The StViewer is used to adjust the camera image acquisition settings. Use this tool to enable
the Exposure Active signal and change any other necessary camera settings before adding a
camera to the ACE project.
NOTE: If any camera settings are changed with the GigECameraIPConfig or the
StViewer tool after it has been added to the ACE project, camera functions may
be disrupted.
The Exposure Active signal is an output from the camera to the robot controller that indicates
the moment when the camera acquired an image. This signal is used by the controller for pos-
ition latching in applications that use vision-guided motion. The Exposure Active signal must
be enabled for all applications that require robot or belt encoder position latching.
Expand the Digital I/O Controls item, as shown in the following figure, and make the fol-
lowing settings.
Sentech Camera Settings need to be saved to a User Set to ensure the settings are applied when
the camera loses power or the connected PC is restarted. Before loading or saving User Profiles,
the acquisition needs to be turned off by pressing the STOP Acquisition button in the top left
corner in the tool bar.
To save these settings, for example to User Set 0, “User Set Selector” should be changed to
“User Set 0”, then execute User Set Save. When “User Set Default” is set to “User Set 0”, the
camera will load the User Set 0 settings when power is applied. All changes made to the
camera’s settings that are not saved to a User Set are lost when the camera is power cycled.
Image Acquisition Check
It may be helpful to use the StViewer tool to confirm image acquisition before adding the
camera object to the ACE project. After selecting the appropriate camera, enable image
acquisition using the PLAY button in the top left corner and use the Trigger Software Execute
button under Remote Device_Acquisition Control to acquire an image. Customize the camera
exposure or white balance if required.
NOTE: The ACE software and StViewer tool cannot access a camera sim-
ultaneously. Always disconnect the StViewer tool before opening an ACE project
with a Sentech camera object present.
Position Latch Wiring
If a camera is used in a vision-guided application with functions such as belt tracking or pos-
ition refinement, a position latch signal must be connected from the camera to a robot con-
troller input signal. This will allow the robot controller to capture the belt or robot position
when the latch condition is met.
Use the information in this section to wire and configure a typical latch position signal from a
connected Sentech camera.
The latch signal pin connections are indicated as follows. Refer to Camera Connections on
page 607 for more information.
The following example applies when using the SmartController XDIO terminal block to receive
a rising edge latch on input 1001 (+1001 in the latch configuration). Refer to Configure on page
193 for more information.
[Figure: SmartController XDIO terminal block latch wiring. The camera latch signal is wired
across input 1001 (terminals +1 and -2); inputs 1002 through 1012 occupy terminals 3 through
24, +24 V (1 A) is available on terminals 41 through 47, and GND on terminals 48 through 50.]
The following example applies when using the e-Series controller XIO terminal block to receive
a rising edge latch on input 1001. Refer to Configure on page 193 for more information.
NOTE: Signal numbers may differ from what is shown in the following figure.
Refer to the robot User Guide for more information.
Ensure the appropriate XIO Termination Block bank switch is in the HI position.
[Figure: e-Series controller XIO terminal block latch wiring. The Sentech camera latch signal
is wired to Input Bank 1 (terminal 4, I1, Signal 1001); terminals 5 through 9 carry I2 through I6
(Signals 1002 through 1006), the Bank 1 common is terminal 3, +24 V is on terminals 2 and 11
(HI), and GND is on terminals 1 and 10 (LO).]
Latch Signal Test
This is a simple test that can be performed for each camera in an application before
proceeding with calibrations.
1. Access the Monitor Window. Refer to Monitor Window on page 206 for more inform-
ation.
2. Clear the latch FIFO buffer using the program keyword CLEAR.LATCHES.
For belts, use "do@1 clear.latches(-n)" where "n" is the belt object number.
For robots, use "do@1 clear.latches(+n)" where "n" is the robot number.
3. Ensure the latch FIFO buffer is empty by entering "listr latched(-n)" or "listr latched(+n)",
where "n" is the same as in step 2. This should return "0" if the FIFO buffer is clear.
A.3 Pack Manager Packaging Application Sample Exercise
NOTE: Canceling the application sample wizard before completion can lead to
an ACE project with partial or no functionality. Completing the wizard is
recommended.
3. Add a Cobra 800 Pro robot to the Installed Robots area and then click the Next button.
4. After adding a robot, the Pack Manager application sample wizard will begin and the
next phase will establish the pick and place configuration. Proceed with the wizard and
select the Pick Configuration of On a belt located with a camera because the robot will
be picking the jars from a belt and using a camera for location information.
9. Use the default robot position as the Safe Robot Position or move the robot to an altern-
ate position and then click the Here button. This step completes the robot and end
effector configuration phase. Proceed to the Pick Configuration phase described below.
10. Select Create an Emulation Camera in the Create a New Basler Camera step. This will
bypass all steps in this phase that are needed when a physical camera is used. This
phase of the sample wizard process creates the camera, virtual camera, and vision tool
objects in the project and associates them with the Part object.
11. Select Encoder Channel 0 on SmartController0 for the Select the Encoder step. This con-
figures the Belt object, virtual encoder, and encoder association and links it to the Part
object.
12. The Test Encoder Operation step does not require output signals in Emulation Mode.
Leave these settings set to 0. These settings can be modified later if necessary.
13. The Virtual Teach step simulates performing a belt calibration. This step will require
positioning the belt window while ensuring the entire belt window is within the robot
work envelope. Use the belt transformation values shown in the figure below to ensure
consistency with the rest of this procedure. These settings will be stored in
the Process Manager Robot-to-Belt calibration and Pick Belt object.
14. The Test Belt Calibration step allows you to test belt tracking by moving the robot into
the belt window and then tracking along the belt. Position the robot over the belt, click
the Start Tracking button, and move the conveyor with the belt control I/O signals to
verify the robot tracks the belt movement.
Additional Information: In Emulation Mode, testing the belt calibration is
simplified because tracking along the belt vector is based on an incre-
menting encoder count multiplied by the specified scale factor. If using
hardware, this allows you to check that the distance between the tool tip
and a jar on the belt remains relatively constant. Any deviation while
tracking typically indicates the belt calibration needs to be performed
again.
15. The Virtual Teach step will teach the camera location to simulate performing a robot-to-
sensor, belt-relative camera calibration which defines the position of the camera field of
view relative to the robot world coordinate system. Use the Sensor Information Offset
values shown in the figure below to ensure consistency with the rest of this procedure.
These settings will be stored in the Process Manager Sensor Calibration area. After this
is completed, proceed to the next phase that will define the pallet properties for the jar
placement.
16. The application sample only supports X-Y (2D) arrays, so the Z dimension of the Pallet
Properties will need to be modified later. Set the X-Y pallet properties as follows to com-
plete the Place Configuration phase.
l X Count: 4
l X Spacing: 54 mm
l Y Count: 4
l Y Spacing: 54 mm
17. The Teach Robot phase consists of completing the Teach Process wizard to step through
the sequence of operations the robot will perform for this process. Begin with the Teach
Idle Position step that is used when the robot is not picking or placing jars. Move the
robot to the idle position shown in the figure below and then click the Here button. Use
the values shown in the figure below to ensure consistency with the rest of this pro-
cedure.
IMPORTANT: The robot will move to this location without a tool offset
applied. This location should be positioned above all obstacles in the
work envelope. When the robot moves to this Idle Position, it will first
align in the Z direction before moving parallel to the X-Y plane. If this loc-
ation is taught incorrectly and below an obstacle in the work envelope, the
robot may crash into an obstacle.
18. In the Advance Belt step, you will advance the belt to move the part instance from the
center of the camera field of view to the belt window. Use the buttons in the Belt Control
area to move the belt until the part instance is between the Upstream limit (orange line)
and Process Limit (purple line) as shown below.
19. In the Teach Position step, you will teach the robot where and how to pick the jar. This
will compare the predicted position based on all calibrations to the actual taught pos-
ition. Any difference will be stored in the Pick Motion Parameters Tool Tip Offset, and is
often referred to as the pick offset. Be sure to consider the following.
l The height of the jar
l The height of any obstacles in the work envelope
l How the robot will approach and depart this location
The part instance location is on the belt surface with negligible height; however, the jars are
35 mm tall. To account for this, click the Move button to move the robot to the instance
location, add 35 mm to the Z coordinate (235 mm), and then click the Here button. The
Taught Position should be similar to the following figure.
20. After departing from the pick operation, the robot may move to the idle position if no
targets are available and the Process Strategy Robot Wait Mode is set to Move to idle
position. If you wish to see the motion from the pick depart position to the idle position,
click the Move button on the Move to Idle Position step. Otherwise, proceed without
moving to the Idle Position.
21. The Teach Origin, Teach +X, Teach +Y, and Teach Position steps are used to create the
pallet frame, and will be stored in the Place Part Target Robot Frame in the Process Man-
ager Configuration Items area. Because there are three different box sizes used in this
example, the pallet frame is defined in such a way that all three boxes can be aligned at
a common reference location. Then, recipe management of the Pallet object can be used
to manage the pallet layouts. The following figure depicts three box sizes aligned to a
common pallet frame that is in close proximity to the picking location. Proceed to teach
pallet frame points in the steps below while considering the largest pallet X-Y layout.
Click the Here button to teach the origin position of the frame using the values shown
in the Figure A-28 to ensure consistency with the rest of this procedure.
22. Click the Here button to teach the +X position of the frame using the values shown
below to ensure consistency with the rest of this procedure.
23. Click the Here button to teach the +Y position of the frame using the values shown
below to ensure consistency with the rest of this procedure.
24. Click the Here button to teach the robot the first pallet / slot position as a 35 mm X and
Y offset from the origin of the frame using the values shown below to ensure con-
sistency with the rest of this procedure.
25. Complete the wizard and then save the project with a new name. The basic Pack Man-
ager Application Sample is completed after this step.
Additional Information: Test the application by opening the 3D Visualizer,
accessing the Task Status Control, and starting the Process Manager.
Application Specifics
The following details are used for this Pack Manager application sample exercise.
l The robot turns a signal ON when the box is full, stops the belt, and waits for an input
signal indicating an empty box is in position before resuming the belt and pick / place
motion.
l There is only one jar size that is approximately 35 mm tall with a 25 mm radius. A spa-
cing of 4 mm between jars in the box is required.
l A pair of jars spaced 60 mm apart (center-to-center) will be positioned every 98 mm on
the belt as they travel towards the robot to the picking location.
l Jars can be packed into boxes containing 12 (3 x 2 x 2), 24 (4 x 2 x 3), or 48 (4 x 4 x 3)
jars each. The robot must place a cardboard divider between each layer of jars in the
box. These dividers are available at a static location for the robot to use as needed.
l The conveyor is 150 mm wide and moves at a rate of 42.5 mm/s.
l The jars are picked using a vacuum tip gripper that is 50 mm long with a suction cup
that is 15 mm in radius, requiring approximately 20 ms dwell time to grip and release
the jars.
l The end effector uses internal robot solenoids triggered with -3001 to open, 3001 to
close, and 3002 to release.
l An emulation camera is used for simulation purposes.
This exercise will explain how to create one possible cell layout for this application. This cell
layout is pictured below, where the box has been represented with only two sides and a bot-
tom to expose the target instances within.
To adjust the belt velocity specified as 42.5 mm/s, set the Pick Belt object Emulation Fast / Slow
Speed settings as shown below.
To adjust the IO EndEffector radius to 15 mm and dwell time to 20 ms, open the
IO EndEffector object and adjust the following settings.
The default setting for the Virtual Camera Emulation Configuration Behavior is Random
Instances. To change this so the jars are spaced 60 mm apart (center-to-center) every 98 mm on
the belt to accurately represent the application, you can load images into the Emulation Cam-
era and train the Locator Model and Locator tools, or you can use a Custom Vision Tool to gen-
erate vision results in specific locations.
The procedure below uses a Custom Vision Tool to generate vision results in the desired
locations to represent the application accurately when images are not available.
1. Create a new Custom Vision Tool for the pick camera that generates pairs of vision results
spaced 60 mm apart (center-to-center) every 98 mm of belt travel, per the application
summary (a sketch of such a tool is shown after this procedure).
2. Set the Pick Virtual Camera Emulation Configuration Behavior to Use Default Device.
3. Set the Pick Part object vision tool to reference the Custom Vision tool created in step
one above. After this step is finished, the procedure is complete and vision results will
be located according to the application summary.
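As a sketch of what the Custom Vision Tool in step 1 might contain, the following C# body
reports one pair of results 60 mm apart, centered in the field of view; belt travel then provides
the 98 mm pitch between successive pairs. The VisionTransform constructor arguments
(x, y, roll) are an assumption for illustration, and the tool editor generates the surrounding
method and imports.

    // Hypothetical sketch: report a pair of jar instances per image,
    // spaced 60 mm center-to-center, at the center of the field of view.
    var results = new List<VisionTransform>();
    double half = 30.0;  // half of the 60 mm pair spacing, in mm
    results.Add(new VisionTransform(0.0, half, 0.0));
    results.Add(new VisionTransform(0.0, -half, 0.0));
    return results.ToArray();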
Use the following procedure to add Cylinder objects that accurately represent the jar
dimensions of 35 mm tall with a 25 mm radius.
Additional Information: The color of the objects is changed from the default
gray to red to make them easier to see in the 3D Visualizer.
1. Add a new Cylinder object to the 3D Visualization with the following settings. Rename
this object "PartCAD" for reference later.
2. Set the Pick Part object Shape Display to use the PartCAD object.
3. Copy the PartCAD object created in step 1 and rename the copy to "TargetCAD". Change
the color to differentiate this from the PartCAD object (blue is used in this example).
4. Set the Place Part Target object Shape Display to use the TargetCAD object. After this
step is finished, the 3D Visualizer will display cylinders that accurately represent the jar
dimensions for the pick and place instances.
NOTE: Default Pack Manager part and part target instance display col-
oration based on allocation status (light / dark yellow for parts, light /
dark green for part targets) is not available when part and part target
object Shape Display is enabled. When Shape Display is enabled, parts
and part targets are displayed according to properties of the 3D Visu-
alization object.
Use the following procedure to create bottom, front, and side box surfaces to show a cutaway
view of the box that represents an obstacle that the robot must place the jars inside. These are
positioned using the pallet frame transformation from Process Manager Configuration Items
and half the length of the box sides applied in X, Y, and Z accordingly.
1. Add a new Box object to the 3D Visualization with the following settings. Rename this
object "BoxBottom" for reference later.
2. Add a new Box object to the 3D Visualization with the following settings. Rename this
object "BoxFront" for reference later.
3. Add a new Box object to the 3D Visualization with the following settings. Rename this
object "BoxSide" for reference later. After the three sides are created, shapes to represent
the placement box will be visible in the 3D Visualizer.
By default, the robot tool tip interferes with the target instance when it should instead move
to the top of the target instance. Account for the jar height at the place position, as was done
for the pick position; the result is an accurate tool tip position for each target instance.
Because the Pack Manager application sample only supports X-Y (2D) arrays, the Z dimension
needs to be added to the Place Pallet object. Access the Place Pallet object and make the fol-
lowing settings according to the box packaging details defined in the application summary.
To improve the speed of the Pack Manager application, make the following adjustments.
There are many methods to incorporate picking and placing of dividers between layers of
parts, but all of these methods require customization of the existing process. One of the
simplest methods is to customize a place operation to check if a divider is needed based on
how many target instances remain to be filled and if a divider is needed, call a program to
handle picking and placing a divider. This should be structured in a way that can be flexible
for the different box sizes and configurations.
The following procedure will provide an example method for picking and placing dividers
between layers of parts.
2. Select the Create a new program from the default option and click the Next button.
3. Create and select a new module where the new program will be created and then click
the Next button.
4. Name the new V+ program "cust_place" and then click the Finish button.
5. Create new V+ location variables that will be used for the divider pick point, divider
approach point, and each divider placement point. Use the figure below to create the
variables. Use the exact names and initial values to ensure consistency with the rest of
this procedure.
6. Create a new V+ program named "place_divider". This will be used to pick and place
the divider at a location passed in as an argument. Add the following code to the new
place_divider V+ program.
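The program code appears in a figure that is not reproduced here. A minimal V+ sketch of
such a program might look like the following; the approach height and the use of the pick.div
location and gripper signals are assumptions based on the application summary.

    .PROGRAM place_divider(place.loc)
        ; Pick a divider from the static pick location, then place
        ; it at the location passed in as an argument.
        APPRO pick.div, 50        ; approach 50 mm above the divider stack
        MOVES pick.div
        SIGNAL 3001               ; grip (per the application summary signals)
        DELAY 0.02                ; 20 ms dwell to grip
        DEPARTS 50
        APPROS place.loc, 50
        MOVES place.loc
        SIGNAL 3002               ; release
        DELAY 0.02                ; 20 ms dwell to release
        DEPARTS 50
    .END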
7. Edit the cust_place() program to check if a divider is needed. The following approach
monitors the target queue and checks how many instances are remaining.
There are two steps with this approach:
l Retrieve the index of the specific target type queue using program
pm.trg.get.idx().
l Retrieve how many targets are available in the queue using program
pm.trg.avail().
After each place operation, the program will check how many target instances are remaining
to determine when a layer is full. Edit the cust_place() program to check how many targets
remain. Remember to add any new variables to the AUTO variable declaration.
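The argument lists of the Pack Manager library programs are not shown in this manual, so
the following V+ fragment is only an assumed illustration of the two calls:

    ; Assumed illustration: retrieve the queue index for this target
    ; type, then read how many targets are still available in it.
    CALL pm.trg.get.idx($target.name, tgt.idx)
    CALL pm.trg.avail(tgt.idx, tgt.avail)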
8. The default part target instance queue size of 10 needs to be changed so the available
target count will decrement properly. The available target count incorrectly remains at 10,
which can be viewed in two places:
l Open the Monitor Window, clear all instances, and start the Process Manager. The
available target count returns to 10 after each place operation and does not decrement
as desired.
l Temporarily disable the Shape Display for the Place Part Target and then start the
Process Manager while viewing the 3D Visualizer. Notice there are only ten part targets
allocated (light green objects).
For the customization to work for this application, the entire box of part target instances
must be allocated to the robot. In Process Strategy Robot Allocation, increase Queue Size
to the total target count in the largest pallet, which is 48.
Clear all instances and start the Process Manager while viewing the 3D Visualizer
(temporarily disable the Shape Display for the Place Part Target). See that all 48 target
instances are allocated to the robot as shown in the figure below.
The Monitor Window will now correctly decrement after jars are placed in the box as
shown in the figure below.
9. The available target count is updated after a jar is placed but this custom program
should account for target instances currently being processed. The remaining target
count needs to decrement from 47 when starting with an empty box and a full pallet of
target instances. To accomplish this, edit the cust_place V+ program and add the lines
shown in the figure below.
10. Now that a counter decrements with each jar place operation, this can be used to check
if a divider is necessary by storing a global V+ Variable for the number of available tar-
get instances remaining in the pallet, at the end of each layer.
Edit the cust_place V+ program as shown in the figure below to compare the remaining
part target count to global V+ Variables "layer1" and "layer2". These are the number of
target instances remaining when each layer is complete.
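A hedged V+ sketch of the comparison the figure illustrates follows; the remaining-count
variable name and the divider location passed in are assumptions.

    ; When the remaining target count reaches the end of a layer,
    ; place a divider before continuing with the next layer.
    IF (remaining == layer1) OR (remaining == layer2) THEN
        CALL place_divider(place.div1)
    END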
12. To confirm correct functionality and complete this procedure, clear all instances and
start the Process Manager with the Monitor Window and 3D Visualizer open. You will see a
divider placed at the end of each layer.
To visualize the divider pick and place locations, use the following procedure to create box
objects and position them at the corresponding V+ location variables.
NOTE: This procedure requires that all previous steps for creating the Pack
Manager Packaging Application Sample are completed.
1. Create a new Box object in the 3D Visualization with the following settings. Rename
this object "Divider" for reference later. This will represent the divider on the gripper.
2. Create a new Box object in the 3D Visualization with the following settings. Rename
this object "Divider pick.div" for reference later. This will represent the divider pick loc-
ation.
3. Create a new Box object in the 3D Visualization with the following settings. Rename
this object "Divider pick.div1" for reference later. This will represent the lower divider in
the place location.
4. Create a new Box object in the 3D Visualization with the following settings. Rename
this object "Divider pick.div2" for reference later. This will represent the upper divider
in the place location.
5. To confirm correct functionality and complete this procedure, view the objects in the
3D Visualizer to ensure they are accurately represented.
IO Feeder Integration
This section demonstrates how to integrate an IO Feeder into the application to create target
pallet instances only if a box is present. The IO Feeder can use an input signal from a sensor to
the robot controller, a signal from a PLC, or a handshake with another V+ program with soft
signals to ensure the next pallet of target instances is not created until a full box is removed
and a new empty box is present.
For the purposes of demonstrating how the IO Feeder object is used in this application, we will
assume the following.
NOTE: This procedure requires that all previous steps for creating the Pack
Manager Packaging Application Sample are completed.
2. In Control Sources for the Static Source for Place Part Target, enable Use A Feeder and
select the IO Feeder object that was just created.
3. Create a V+ program to simulate IO Feeder signal operation. This is typically done dur-
ing development in Emulation Mode to simulate signals that would normally be
present during run time. This will be executed on another task when the Process Man-
ager starts by using a Process Strategy Custom Initialization program as described in
the following steps.
Create a new V+ program named "box_signal" with the code shown in the figure below.
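The figure is not reproduced here; a minimal V+ sketch of a feeder-simulation program might
look like the following, where the signal numbers are assumptions for illustration.

    .PROGRAM box_signal()
        ; Emulation-only simulation of a box exchange: when the
        ; box-full signal turns ON, wait 5 seconds and then pulse
        ; the feeder signal to indicate an empty box is in position.
        WHILE TRUE DO
            WAIT SIG(2033)        ; assumed box-full soft signal
            DELAY 5               ; simulated box exchange time
            SIGNAL 2034           ; assumed feeder "box present" signal
            DELAY 0.5
            SIGNAL -2034
        END
    .END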
4. In the Process Manager Process Strategy Editor, select Use Custom Initialization Pro-
gram and then click the Selection button ( ).
5. Select Create a new program from the default and then click the Next button.
6. Select the module where the new program will be created and then click the Next but-
ton.
7. Name the new V+ program "cust_init" and click the Finish button.
8. Edit the cust_init V+ Program to execute the previously created box_signal V+ program,
as shown in the figure below.
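A minimal V+ sketch of the initialization program might look like the following; the task
number is an assumption.

    .PROGRAM cust_init()
        ; Start the box-exchange simulation on a separate task when
        ; the Process Manager starts.
        EXECUTE 2 box_signal()
    .END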
9. To confirm correct functionality and complete this procedure, confirm the IO Feeder
integration and simulation by clearing all instances, starting the Process Manager, and
observing a 5 second robot pause after the box is filled.
Adding Recipes
This section demonstrates how to integrate recipes into the application to accommodate the dif-
ferent package sizes described in the application summary.
The following procedure will provide steps to integrate recipes in this application.
NOTE: This procedure requires that all previous steps for creating the Pack
Manager Packaging Application Sample are completed.
3. Select the Variables source and add the global variables to the recipe (in the Location
Variables area). The size of dividers for the boxes will change, therefore the locations
associated with the dividers should be included.
4. Since the number of target instances per layer may change, add layer1 and layer2 vari-
ables (in the Real Variables area).
5. Open the Recipe Manager in Task Status Control and add new recipes named "48-
pack", "24-pack", and "12-pack".
For the 24-pack recipe (4 x 2 x 3 jars), make the following settings:
l Pallet X-Count = 4
l Pallet Y-Count = 2
l Pallet Z-Count = 3
l Layer1 = 16 (4 x 2 x 2)
l Layer2 = 8 (4 x 2 x 1)
Adjust locations to center the divider in the smaller box size.
For the 12-pack recipe (3 x 2 x 2 jars), make the following settings:
l Pallet X-Count = 3
l Pallet Y-Count = 2
l Pallet Z-Count = 2
l Layer1 = 6 (2 x 3 x 1)
l Layer2 = 16 (only 2 layers, so divider 2 is unnecessary; leave the value larger than the
total count)
Adjust locations to center the divider in the smaller box size.
This section demonstrates how to visualize the part (jar) on the gripper in the 3D Visualizer
using a C# program.
The following procedure will provide steps to view the PartCAD object as a 3D Visualizer
object on the gripper tip when the gripper signal is turned ON.
NOTE: This procedure requires that all previous steps for creating the Pack
Manager Packaging Application Sample are completed.
1. Access the PartCAD object, set the Parent Offset the same as the gripper tip offset, and
select the robot as the Parent.
2. Create a new C# program named "GripperPartVis" that monitors the gripper signal and
shows or hides the PartCAD object accordingly (a sketch of this logic is shown after this
procedure).
3. To confirm correct functionality and complete this procedure, observe the part (jar) vis-
ibility in the 3D Visualizer by clearing all instances, starting the GripperPartVis C# pro-
gram, and starting the Process Manager.
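The GripperPartVis program code for step 2 appears in a figure that is not reproduced here.
The following heavily simplified C# sketch shows only the kind of polling loop such a
program runs; the object references (partCad, controller) and member names are hypothetical
placeholders rather than actual ACE API calls.

    // Hypothetical outline: poll the gripper signal and show or hide
    // the jar object on the gripper tip accordingly.
    while (true)
    {
        bool gripping = controller.SignalOn(3001);  // placeholder call
        partCad.Visible = gripping;                 // placeholder member
        System.Threading.Thread.Sleep(50);          // poll at ~20 Hz
    }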
This section demonstrates how to visualize the divider on the gripper in the 3D Visualizer by
editing the GripperPartVis C# program and the place_divider V+ program.
The following procedure will provide steps to view the Divider item as a 3D Visualizer object
on the gripper tip when the gripper signal is turned ON.
NOTE: This procedure requires that all previous steps for creating the Pack
Manager Packaging Application Sample are completed.
1. Open the GripperPartVis C# program and make the edits as shown in the figure below.
Additional Information: Lines 20 and 24 can be created by dragging and
dropping the Divider object and Controller Settings object into the C#
Editor. The example renames these items "divider" and "controller" respect-
ively, as shown in the figure below.
2. Open the place_divider V+ program and make the edits as shown in the figure below.
Additional Information: You can expand upon this idea and turn ON a
soft signal during the place_divider() V+ program, to enable visualization
of a divider on the tip during that program, and use a part for the other
pick operations. Refer to the following program edits, which now provide
visualization of a part on the gripper or a divider on the gripper, depend-
ing on what operation the robot is performing.
3. To confirm correct functionality and complete this procedure, confirm the divider vis-
ibility in the 3D Visualizer by clearing all instances, starting the GripperPartVis C# pro-
gram, and starting the Process Manager.
If the upstream process allows for the conveyor belt to be turned OFF, you can enable Active
Control in the Pick Belt object. This will allow the robot to turn OFF the belt to prevent any jars
crossing the downstream process limit while the robot is waiting for a new box (per the applic-
ation summary requirements).
Use the following procedure to enable Active Control for the belt.
1. Access the Pick Belt object, enable Active Control, and select the Controller Settings0
object as a reference. Set the Drive Output values as shown in the following figure.
2. Access the Process Strategy Editor and adjust the Pick Belt Control Threshold para-
meters as shown in the following figure.
3. To confirm correct functionality and complete this procedure, clear all instances, start
the GripperPartVis C# Program, and start the Process Manager with the 3D Visualizer
open. You will see the belt stop when the jars reach the downstream limit during the
simulated box replacement function.
A.4 Version Control Introduction
The version control function enables you to keep a change record of a project at any time.
You can return to a desired version by tracing back through the change record and comparing
the present version with past projects. The version control function also provides the
capability to check differences and merge changes when you apply them to the master
project.
When you use the version control function to manage projects, multiple developers can
effectively manage and develop programs together. This also facilitates the management of
developing derived machines.
Software
The following software is needed in addition to the ACE software. Download the latest
edition of each from its official web site.
Installing Git
Download the latest installer from the Git download site and install it as a user with
administrator rights. Depending on the operating system installed on the computer,
download the 32-bit or 64-bit edition of the installer. The 64-bit edition is preferred for
Windows 10.
Follow the instructions in the Git installer wizard. Although the wizard displays several pages
during the installation, the description below covers the steps requiring your input. The
pages on which you must select a specific item are noted; you can leave other pages at their
defaults.
4. Click Next and select Checkout as-is, commit as-is for line ending conversions.
5. Click Next and set the terminal emulator (see "Terminal Emulator, Windows Default").
6. Click Next to configure Extra Options. Select the top two options only.
Installing TortoiseGit
Download the latest installer from the TortoiseGit download site and install it as a user with
administrator rights. Depending on the operating system installed on the computer,
download the 32-bit or 64-bit edition of the installer. The 64-bit edition is preferred for
Windows 10.
Follow the instructions in the TortoiseGit installer wizard. Although the wizard displays
several pages during the installation, the description below covers all the steps. The pages on
which you must select a specific item are shown with figures. You can leave other pages at
their defaults.
Locate the downloaded installer and double-click it. When the Installer window opens, click
Next and follow the steps below.
1. Accept license
2. Accept the default install location
3. Install the application
4. When completed, run the start wizard
5. Select Language
6. Configure git path
7. Click Next to configure user. Enter your user name and e-mail
9. Click Finish
Both Git and TortoiseGit are now installed on the computer and ready for use with a single
repository.
The following figure shows the minimum configuration for a single user to access the ACE pro-
ject repository.
NOTE: You can control only one ACE project per repository. Create a directory
for each project that will use the version control function.
There are three typical ways to share a remote repository among multiple users:
1. Using a shared repository on one user's computer as the remote repository
2. Using a dedicated Git server to host the remote repository
3. Using a Git server service on the Internet to host the remote repository
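For example, a shared-folder remote repository (method 1 above) can be created with a
standard Git command; the path shown is illustrative:

    git init --bare "C:\SharedRepos\MyACEProject.git"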
For detailed information about using TortoiseGit, double-click the desktop icon and select
Help to open the user manual.
OMRON ROBOTICS AND SAFETY TECHNOLOGIES, INC.
4225 Hacienda Drive, Pleasanton, CA 94588 U.S.A.
Tel: (1) 925-245-3400 / Fax: (1) 925-960-0590

OMRON ASIA PACIFIC PTE. LTD.
No. 438A Alexandra Road #05-05/08 (Lobby 2), Alexandra Technopark, Singapore 119967
Tel: (65) 6835-3011 / Fax: (65) 6835-2711

OMRON (CHINA) CO., LTD.
Room 2211, Bank of China Tower, 200 Yin Cheng Zhong Road, PuDong New Area, Shanghai, 200120, China
Tel: (86) 21-5037-2222 / Fax: (86) 21-5037-2200

© OMRON Corporation 2016-2020 All Rights Reserved. In the interest of product improvement,
specifications are subject to change without notice.

Cat. No. I633-E-05 0920
24000-000 G