Basler racer GigE User’s Manual
Warranty note
Do not open the housing of the camera. The warranty becomes void if the housing is opened.
All material in this publication is subject to change without notice and is copyright
Basler AG.
Contacting Basler Support Worldwide
The Americas
Basler, Inc.
855 Springdale Drive, Suite 203
Exton, PA 19341
USA
Tel. +1 610 280 0171
Fax +1 610 280 7608
[email protected]
Asia-Pacific
Basler Asia Pte. Ltd.
35 Marsiling Industrial Estate Road 3
#05-06
Singapore 739257
Tel. +65 6367 1355
Fax +65 6367 1255
[email protected]
www.baslerweb.com
Table of Contents
1 Specifications, Requirements, and Precautions . . . . . . . . . . . . . . . . . . . . . . . .1
1.1 Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 General Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Accessories . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.4 Spectral Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.5 Mechanical Specifications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.1 Camera Dimensions and Mounting Points. . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5.2 Sensor Line Location . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.3 Lens Adapter Dimensions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.5.4 Selecting the Optimum Lens Adapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.5.5 Attaching a Lens Adapter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.6 Software Licensing Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.1 LWIP TCP/IP Licensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6.2 LZ4 Licensing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
1.7 Avoiding EMI and ESD Problems. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
1.8 Environmental Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.8.1 Temperature and Humidity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.8.2 Heat Dissipation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
1.8.3 Imaging Sensor Over Temperature Condition . . . . . . . . . . . . . . . . . . . . . . . . 21
1.9 Precautions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
8 Acquisition Control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.1 Defining a Frame . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
8.2 Controlling Acquisition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
8.2.1 Acquisition Start and Stop Commands and the Acquisition Mode . . . . . . . . . 77
8.2.2 Acquisition Start Triggering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
8.2.2.1 TriggerMode (Acquisition Start) = Off . . . . . . . . . . . . . . . . . . . . . 79
8.2.2.2 TriggerMode (Acquisition Start) = On . . . . . . . . . . . . . . . . . . . . . 79
8.2.2.3 AcquisitionFrameCount . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
8.2.2.4 Setting The Acquisition Start Trigger Mode and
Related Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
8.2.3 Frame Start Triggering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 242
1 Specifications, Requirements, and Precautions
This chapter lists the camera models covered by the manual. It provides the general specifications
for those models and the basic requirements for using them.
This chapter also includes specific precautions that you should keep in mind when using the
cameras. We strongly recommend that you read and follow the precautions.
1.1 Models
The current Basler racer GigE Vision camera models are listed in the top row of the specification
tables on the next pages of this manual. The camera models are differentiated by their resolution
and their maximum line rate at full resolution.
Unless otherwise noted, the material in this manual applies to all of the camera models listed in the
tables. Material that only applies to a particular camera model or to a subset of models will be so
designated.
Pixel Size: 7 µm x 7 µm
Min Line Rate: No minimum when an external line trigger signal is used; 100 Hz when an external line trigger signal is not used
Mono/Color: Mono
Data Output Type: Fast Ethernet (100 Mbit/s) or Gigabit Ethernet (1000 Mbit/s)
Camera Power Requirements: +12 VDC (- 10 %) to +24 VDC (+ 5 %), < 1 % ripple, supplied via the camera’s 6-pin connector
Lens Adapter: Universal camera front, suitable for lens mount adapters with the following lens mounts: C-mount (2k cameras), F-mount, M42x0.75-mount, M42x1.0-mount, M42x1.0-mount (FBD 45.56 mm), M58x0.75-mount. See Section 1.5.4 on page 16 for information about selecting a suitable lens adapter for your camera.
Specification: raL12288-8gm
Min Line Rate: No minimum when an external line trigger signal is used; 100 Hz when an external line trigger signal is not used
Mono/Color: Mono
Data Output Type: Fast Ethernet (100 Mbit/s) or Gigabit Ethernet (1000 Mbit/s)
Camera Power Requirements: +12 VDC (- 10 %) to +24 VDC (+ 5 %), < 1 % ripple, supplied via the camera’s 6-pin connector
Lens Adapter: Universal camera front, suitable for lens mount adapters with the following lens mounts: F-mount, M42x0.75-mount, M42x1.0-mount (FBD 16 mm), M42x1.0-mount (FBD 45.56 mm), M58x0.75-mount. See Section 1.5.4 on page 16 for information about selecting a suitable lens adapter for your camera.
1.3 Accessories
Basler’s cooperation with carefully selected suppliers means you get accessories you can trust,
which makes building a high-performance image processing system hassle-free.
Key Reasons for Choosing Lenses, Cables, and Other Accessories from Basler
Perfect match for Basler cameras
One-stop-shopping for your image processing system
Stable performance through highest quality standards
Easy integration into existing systems
Expert advice during selection process
See the Basler website for information about Basler’s extensive accessories portfolio (e.g. cables,
lenses, host adapter cards, switches): www.baslerweb.com
The quantum efficiency curve excludes lens characteristics and light source
characteristics.
[Graph: quantum efficiency (e-/Photon, 0.0 to 0.7) over the wavelength range 300 nm to 1100 nm.]
Fig. 2: Quantum Efficiency of the Monochrome Sensor in 12 Bit Depth Mode (Based on Sensor Vendor Information)
[Fig. 3: Camera Dimensions and Mounting Points; dimensions in mm; not to scale. The photosensitive surface of the sensor and the reference plane are indicated.]
[Drawing for Fig. 4: marker hole and sensor line locations; not to scale.]
Fig. 4: Mono Sensor Line Location with Approximate Starting Points (Pixel 1) for Pixel Numbering
[Figures: dimensional drawings of lens mount adapters on a racer GigE camera, including the drawing for Fig. 7 below; dimensions in mm; not to scale. The photosensitive surface of the sensor is indicated in each drawing.]
Fig. 7: M42 x 1.0 or M42 x 0.75 Mount Adapter on a racer GigE Camera; Dimensions in mm
Fig. 8: M42 x 1.0 FBD 45.56 mm Mount Adapter on a racer GigE Camera; Dimensions in mm
[Figure: M58 x 0.75 mount adapter (thread 5 mm deep) on a racer GigE camera; dimensions in mm; not to scale.]
Lens Mount             Recommendation
C-mount                -
F-mount                ✓
M42 x 1.0              ✓ 1)
M42 x 0.75             ✓ 1)
M42 x 1.0 FBD 45.56    ✓ 1)
M58                    ✓ 1)
Table 4: Recommended Lens Adapters Depending on Camera Model (✓ = recommended, - = not recommended)
1) To ensure coverage of the entire sensor, contact Basler technical support for assistance when choosing a lens.
NOTICE
Tightening with excessive torque can damage the camera, the lens adapter, or the setscrews.
When screwing in the supplied M2.5 setscrews, never exceed a torque of 0.4 Nm.
The Basler application note called Avoiding EMI and ESD in Basler Camera
Installations provides much more detail about avoiding EMI and ESD.
This application note can be obtained from the Downloads section of our website:
www.baslerweb.com
UL 60950-1 test conditions: no lens attached to the camera and without efficient heat
dissipation; ambient temperature kept at 50 °C (+122 °F).
The camera has imaging sensor over temperature protection. If the temperature of the camera’s
imaging sensor rises above 75 °C, an over temperature error condition will be reported (see also
Section 10.15 on page 209) and the circuitry for the imaging sensor will switch off. In this situation,
you will still be able to communicate with the camera, but the camera will no longer acquire images.
Provide the necessary cooling when this situation arises. After the imaging sensor circuitry has
sufficiently cooled, bring the camera back to normal operation by either of the following actions:
Carry out a camera restart by switching power off and on again, or
Carry out a camera reset as described in Section 12.1 on page 236.
1.9 Precautions
DANGER
Fire Hazard
Non-approved power supplies may cause fire and burns.
You must use a camera power supply which meets the Limited Power Source
(LPS) requirements.
NOTICE
Using a wrong pin assignment for the 12-pin receptacle can severely damage the camera.
Make sure the cable and plug you connect to the 12-pin receptacle follows the correct pin
assignment. In particular, do not use a pin assignment that would be correct for Basler area scan
cameras. The 12-pin receptacles of Basler line scan and area scan cameras are electrically
incompatible.
Warranty Precautions
Transport properly
Transport the camera in its original packaging only. Do not discard the packaging.
Clean properly
Avoid cleaning the surface of the camera’s sensor if possible. If you must clean it, use a soft, lint-free
cloth dampened with a small quantity of high-quality window cleaner. Because electrostatic
discharge can damage the sensor, you must use a cloth that will not generate static during cleaning
(cotton is a good choice).
To clean the surface of the camera housing, use a soft, dry cloth. To remove severe stains, use a
soft cloth dampened with a small quantity of neutral detergent, then wipe dry.
Do not use solvents or thinners to clean the housing; they can damage the surface finish.
The pylon Viewer is included in the Basler pylon Camera Software Suite. It is a standalone
application that lets you view and change most of the camera’s parameter settings via a GUI-based
interface. Using the pylon Viewer is a very convenient way to get your camera up and running
quickly during your initial camera evaluation or a camera design-in for a new project.
For more information about using the pylon Viewer, see the Installation and Setup Guide for
Cameras Used with Basler pylon for Windows (AW000611).
Three pylon SDKs are part of the Basler pylon Camera Software Suite:
You can access all of the camera’s parameters and control the camera’s full functionality from
within your application software by using the matching pylon API (C++, C, or .NET).
The sample programs illustrate how to use the pylon API to parameterize and operate the
camera.
For each environment (C++, C, and .NET), a Programmer's Guide and Reference
Documentation is available. The documentation gives an introduction to the pylon API and
provides information about all methods and objects of the API.
During the installation process you should have installed either the filter driver or
the performance driver.
For more information about compatible Intel chipsets and about installing the network drivers, see
the Installation and Setup Guide for Cameras Used with pylon for Windows (AW000611).
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The Basler performance driver uses a "receive window" to check the status of packets. The check
for missing packets is made as packets enter the receive window. If a packet arrives from higher in
the sequence of packets than expected, the preceding skipped packet or packets are detected as
missing. For example, suppose packet (n-1) has entered the receive window and is immediately
followed by packet (n+1). In this case, as soon as packet (n+1) enters the receive window, packet
n will be detected as missing.
General Parameters
The threshold resend request mechanism is illustrated in Fig. 10 where the following assumptions
are made:
Packets 997, 998, and 999 are missing from the stream of packets.
Packet 1002 is missing from the stream of packets.
Fig. 10: Example of a Receive Window with Resend Request Threshold & Resend Request Batching Threshold
(1) Front end of the receive window. Missing packets are detected here.
(2) Stream of packets. Gray indicates that the status was checked as the packet entered the
receive window. White indicates that the status has not yet been checked.
(3) Receive window of the performance driver.
(4) Threshold for sending resend requests (resend request threshold).
(5) A separate resend request is sent for each of packets 997, 998, and 999.
(6) Threshold for batching resend requests for consecutive missing packets (resend request
batching threshold). Only one resend request will be sent for the consecutive missing
packets.
ResendRequestBatching - This parameter determines the location of the resend request batching
threshold in the receive window (Fig. 10). The parameter value is a percentage of a span that starts
at the resend request threshold and ends at the front end of the receive window. The maximum
allowed parameter value is 100. In Fig. 10, the resend request batching threshold is set at 80% of
the span.
The resend request batching threshold relates to consecutive missing packets, i.e., to a continuous
sequence of missing packets. Resend request batching allows grouping of consecutive missing
packets for a single resend request rather than sending a sequence of resend requests where each
resend request relates to just one missing packet.
The location of the resend request batching threshold determines the maximum number of
consecutive missing packets that can be grouped together for a single resend request. The
maximum number corresponds to the number of packets that fit into the span between the resend
request threshold and the resend request batching threshold plus one.
If the Resend Request Batching parameter is set to 0, no batching will occur and a resend request
will be sent for each single missing packet. For other settings, consider an example: Suppose the
Resend Request Batching parameter is set to 80 referring to a span between the resend request
threshold and the front end of the receive window that can hold five packets (Fig. 10). In this case
4 packets (5 x 80%) will fit into the span between the resend request threshold and the resend
request batching threshold. Accordingly, the maximum number of consecutive missing packets that
can be batched is 5 (4 + 1).
The timeout resend mechanism is illustrated in Fig. 11 where the following assumptions are made:
The frame includes 3000 packets.
Packet 1002 is missing within the stream of packets and has not been recovered.
Packets 2999 and 3000 are missing at the end of the stream of packets (end of the frame).
The MaximumNumberResendRequests parameter is set to 3.
Fig. 11: Incomplete Stream of Packets and Part of the Resend Mechanism
(1) Stream of packets. Gray indicates that the status was checked as the packet entered the
receive window. White indicates that the status has not yet been checked.
(2) Receive window of the performance driver.
(3) As packet 1003 enters the receive window, packet 1002 is detected as missing.
(4) Interval defined by the ResendTimeout parameter.
(5) The Resend Timeout interval expires and the first resend request for packet 1002 is sent to
the camera. The camera does not respond with a resend.
(6) Interval defined by the ResendResponseTimeout parameter.
(7) The Resend Response Timeout interval expires and a second resend request for packet
1002 is sent to the camera. The camera does not respond with a resend.
(8) Interval defined by the ResendResponseTimeout parameter.
(9) The Resend Response Timeout interval expires and a third resend request for packet 1002 is
sent to the camera. The camera still does not respond with a resend.
(10) Interval defined by the ResendResponseTimeout parameter.
(11) Because the maximum number of resend requests has been sent and the last Resend
Response Timeout interval has expired, packet 1002 is now considered as lost.
(12) End of the frame.
(13) Missing packets at the end of the frame (2999 and 3000).
(14) Interval defined by the PacketTimeout parameter.
ResendTimeout - The ResendTimeout parameter defines how long (in milliseconds) the
performance driver will wait after detecting that a packet is missing before sending a resend request
to the camera. The parameter applies only once to each missing packet after the packet was
detected as missing.
PacketTimeout - The PacketTimeout parameter defines how long (in milliseconds) the
performance driver will wait for the next expected packet before it sends a resend request to the
camera. This parameter ensures that resend requests are sent for missing packets near to the end
of a frame. In the event of a major interruption in the stream of packets, the parameter will also
ensure that resend requests are sent for missing packets that were detected to be missing
immediately before the interruption. Make sure the PacketTimeout parameter is set to a longer time
interval than the time interval set for the inter-packet delay.
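If you need to set these two timeout parameters from within your application software, the following sketch shows one way to do it. It assumes that the ResendTimeout and PacketTimeout parameters appear under exactly these names in the performance driver’s stream grabber node map; the values are chosen for illustration only.
// Get the Stream Parameters object for the camera's stream grabber
Camera_t::StreamGrabber_t StreamGrabber(camera.GetStreamGrabber(0));
// Wait 2 ms after detecting a missing packet before sending a resend request
StreamGrabber.ResendTimeout.SetValue(2);
// Wait at most 40 ms for the next expected packet before requesting a resend
StreamGrabber.PacketTimeout.SetValue(40);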
Fig. 12 illustrates the combined action of the threshold and the timeout resend mechanisms where
the following assumptions are made:
All parameters set to default.
The frame includes 3000 packets.
Packet 1002 is missing within the stream of packets and has not been recovered.
Packets 2999 and 3000 are missing at the end of the stream of packets (end of the frame).
The default values for the performance driver parameters will cause the threshold resend
mechanism to become operative before the timeout resend mechanism. This ensures maximum
efficiency and that resend requests will be sent for all missing packets.
With the default parameter values, the resend request threshold is located very close to the front
end of the receive window. Accordingly, there will be only a minimum delay between detecting a
missing packet and sending a resend request for it. In this case, a delay according to the
ResendTimeout parameter will not occur (see Fig. 12). In addition, resend request batching will not
occur.
Fig. 12: Combination of Threshold Resend Mechanism and Timeout Resend Mechanism
(1) Stream of packets. Gray indicates that the status was checked as the packet entered the
receive window. White indicates that the status has not yet been checked.
(2) Receive window of the performance driver.
(3) Threshold for sending resend requests (resend request threshold). The first resend request
for packet 1002 is sent to the camera. The camera does not respond with a resend.
(4) Interval defined by the ResendResponseTimeout parameter.
(5) The Resend Response Timeout interval expires and the second resend request for packet
1002 is sent to the camera. The camera does not respond with a resend.
(6) Interval defined by the ResendResponseTimeout parameter.
(7) The Resend Response Timeout interval expires and the third resend request for packet 1002
is sent to the camera. The camera does not respond with a resend.
(8) Interval defined by the ResendResponseTimeout parameter.
(9) Because the maximum number of resend requests has been sent and the last Resend
Response Timeout interval has expired, packet 1002 is now considered as lost.
(10) End of the frame.
(11) Missing packets at the end of the frame (2999 and 3000).
(12) Interval defined by the PacketTimeout parameter.
You can set the performance driver parameter values from within your application software by using
the Basler pylon API. The following code snippet illustrates using the API to read and write the
parameter values:
// Get the Stream Parameters object
Camera_t::StreamGrabber_t StreamGrabber(camera.GetStreamGrabber(0));
// Read and write a parameter value, e.g. the resend request batching threshold (in percent)
StreamGrabber.ResendRequestBatching.SetValue(80);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
(The performance driver parameters will only appear in the viewer if the performance driver is
installed on the adapter to which your camera is connected.)
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Adapter Properties
When the Basler Performance driver is installed, it adds a set of "advanced" properties to the
network adapter. These properties include:
Max Packet Latency - A value in microseconds that defines how long the adapter will wait after it
receives a packet before it generates a packet received interrupt.
Max Receive Inter-Packet Delay - A value in microseconds that defines the maximum amount of
time allowed between incoming packets.
Maximum Interrupts per Second - Sets the maximum number of interrupts per second that the
adapter will generate.
Network Address - Allows the user to specify a MAC address that will override the default address
provided by the adapter.
Packet Buffer Size - Sets the size in bytes of the buffers used by the receive descriptors and the
transmit descriptors.
Receive Descriptors - Sets the number of descriptors to use in the adapter’s receiving ring.
Transmit Descriptors - Sets the number of descriptors to use in the adapter’s transmit ring.
You can set the driver related transport layer parameter values from within your application software
by using the Basler pylon API. The following code snippet illustrates using the API to read and write
the parameter values:
// Read/Write Timeout
Camera_t::TlParams_t TlParams(camera.GetTLNodeMap());
TlParams.ReadTimeout.SetValue(500); // 500 milliseconds
TlParams.WriteTimeout.SetValue(500); // 500 milliseconds
// Heartbeat Timeout
TlParams.HeartbeatTimeout.SetValue(5000); // 5 seconds
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Bandwidth Assigned = (X Packets/Frame x Y Bytes/Packet) / [(X Packets/Frame x Y Bytes/Packet x 8 ns/Byte) + (X Packets/Frame - 1) x IPD x 8 ns]
Where X Packets/Frame is the number of packets needed to transmit a frame, Y Bytes/Packet is the number of bytes in each packet, and IPD is the Inter-Packet Delay setting in ticks (one tick = 8 ns).
[Figure: multiple cameras connected via a GigE network switch to a single-port GigE adapter in the host PC; all cameras share a single network path.]
One way to manage the situation where multiple cameras are sharing a single network path is to
make sure that only one of the cameras is acquiring and transmitting frames at any given time. The
data output from a single camera is well within the bandwidth capacity of the single path and you
should have no problem with bandwidth in this case.
If you want to acquire and transmit frames from several cameras simultaneously, however, you
must determine the total data output rate for all the cameras that will be operating simultaneously
and you must make sure that this total does not exceed the bandwidth of the single path
(125 MByte/s).
An easy way to make a quick check of the total data output from the cameras that will operate
simultaneously is to read the value of the Bandwidth Assigned (GevSCBWA) parameter for each
camera. This parameter indicates the camera’s gross data output rate in bytes per second with its
current settings. If the sum of the bandwidth assigned values is less than 125 MByte/s, the cameras
should be able to operate simultaneously without problems. If it is greater, you must lower the data
output rate of one or more of the cameras.
You can lower the data output rate on a camera by using the Inter-Packet Delay (GevSCPD)
parameter. This parameter adds a delay between the transmission of each packet from the camera
and thus slows the data transmission rate of the camera. The higher the inter-packet delay
parameter is set, the greater the delay between the transmission of each packet will be and the
lower the data transmission rate will be. After you have adjusted the Inter-Packet Delay (GevSCPD)
parameter on each camera, you can check the sum of the Bandwidth Assigned parameter values
and see if the sum is now less than 125 MByte/s.
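The following sketch shows this check-and-adjust pattern with the pylon API; it follows the snippet style used elsewhere in this manual, and the delay increment is only an example value:
// Read the camera's current gross data output rate in bytes per second
int64_t bandwidthAssigned = camera.GevSCBWA.GetValue();
// Raise the inter-packet delay (in ticks) to lower the camera's data output rate
camera.GevSCPD.SetValue(camera.GevSCPD.GetValue() + 1000);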
Step 2 - Set the Packet Size (GevSCPSPacketSize) parameter on each camera as large as
possible.
Using the largest possible packet size has two advantages: it increases the efficiency of network
transmissions between the camera and the PC and it reduces the time required by the PC to
process incoming packets. The largest packet size setting that you can use with your camera is
determined by the largest packet size that can be handled by your network. The size of the packets
that can be handled by the network depends on the capabilities and settings of the network adapter
you are using and on capabilities of the network switch you are using.
Start by checking the documentation for your adapter to determine the maximum packet size
(sometimes called “frame” size) that the adapter can handle. Many adapters can handle what is
known as "jumbo packets" or "jumbo frames". These are packets with sizes of up to 16 kB. Once you have
determined the maximum size packets the adapter can handle, make sure that the adapter is set
to use the maximum packet size.
Next, check the documentation for your network switch and determine the maximum packet size
that it can handle. If there are any settings available for the switch, make sure that the switch is set
for the largest packet size possible.
Now that you have set the adapter and switch, you can determine the largest packet size the
network can handle. The device with the smallest maximum packet size determines the maximum
allowed packet size for the network. For example, if the adapter can handle 16 kB packets and the
switch can handle 8 kB packets, then the maximum for the network is 8 kB packets.
Once you have determined the maximum packet size for your network, set the value of the Packet
Size (GevSCPSPacketSize) parameter on each camera to this value.
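For example, if your network can handle 8 kB packets, you could set (a sketch; use the maximum size your own network supports):
// Set the packet size to the maximum the network can handle (here: 8 kB)
camera.GevSCPSPacketSize.SetValue(8192);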
Step 3 - Set the Bandwidth Reserve (GevSCBWR) parameter for each camera.
The Bandwidth Reserve (GevSCBWR) parameter setting for a camera determines how much of the
bandwidth assigned to that camera will be reserved for lost packet resends and for asynchronous
traffic such as commands sent to the camera. If you are operating the camera in a relatively EMI
free environment, you may find that a bandwidth reserve of 2% or 3% is adequate. If you are
operating in an extremely noisy environment, you may find that a reserve of 8% or 10% is more
appropriate.
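As a sketch, a camera operating in a noisy environment might be given a 10 % reserve (assuming, in line with the parameter description above, that the value is set in percent):
// Reserve 10 % of the assigned bandwidth for resends and asynchronous traffic
camera.GevSCBWR.SetValue(10);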
Bytes/Frame = (Payload Size / Packet Size, rounded up) x Packet Overhead + Payload Size + Leader Size + Trailer Size
Where:
Packet Overhead = 72 (for a GigE network)
                  78 (for a 100 Mbit/s network)
Leader Size = Packet Overhead + 36 (if chunk mode is not active)
              Packet Overhead + 12 (if chunk mode is active)
Trailer Size = Packet Overhead + 8
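As a worked example with assumed values: for a payload size of 204800 bytes (e.g., 2048 pixels x 100 lines at 8 bit depth), a packet size of 1500, chunk mode not active, and a GigE network, the payload is carried in 204800 / 1500 = 137 packets (rounded up), so Bytes/Frame = 137 x 72 + 204800 + (72 + 36) + (72 + 8) = 214852.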
Step 6 - For each camera, compare the data bandwidth needed with the data bandwidth
assigned.
For each camera, you should now compare the data bandwidth assigned to the camera (as
determined in step 4) with the bandwidth needed by the camera (as determined in step 3).
For bandwidth to be used most efficiently, the data bandwidth assigned to a camera should be equal
to or just slightly greater than the data bandwidth needed by the camera. If you find that this is the
situation for all of the cameras on the network, you can go on to step 7 now. If you find a camera
that has much more data bandwidth assigned than it needs, you should make an adjustment.
To lower the amount of data bandwidth assigned, you must adjust a parameter called the Inter-
Packet Delay (GevSCPD). If you increase the Inter-Packet Delay (GevSCPD) parameter value on
a camera, the data bandwidth assigned to the camera will decrease. So for any camera where you
find that the data bandwidth assigned is much greater than the data bandwidth needed, you should
do this:
1. Raise the setting for the Inter-Packet Delay (GevSCPD) parameter for the camera.
2. Recalculate the data bandwidth assigned to the camera.
3. Compare the new data bandwidth assigned to the data bandwidth needed.
4. Repeat 1, 2, and 3 until the data bandwidth assigned is equal to or just greater than the data
bandwidth needed.
If you increase the inter-packet delay to lower a camera’s data output rate, there is
something that you must keep in mind. When you lower the data output rate, you
increase the amount of time that the camera needs to transmit an acquired frame.
Increasing the frame transmission time can restrict the camera’s maximum
allowed acquisition line rate.
Step 7 - Check that the total bandwidth assigned is less than the network capacity.
1. For each camera, determine the current value of the Bandwidth Assigned (GevSCBWA)
parameter. The value is in Byte/s. (Make sure that you determine the value of the parameter
after you have made any adjustments described in the earlier steps.)
2. Find the sum of the current Bandwidth Assigned (GevSCBWA) parameter values for all of the
cameras.
If the sum of the Bandwidth Assigned values is less than 125 MByte/s for a GigE network or
12.5 MByte/s for a 100 Mbit/s network, the bandwidth management is OK.
If the sum of the Bandwidth Assigned values is greater than 125 MByte/s for a GigE network or
12.5 MByte/s for a 100 Mbit/s network, the cameras need more bandwidth than is available and you
must make adjustments. In essence, you must lower the data bandwidth needed by one or more of
the cameras and then adjust the data bandwidths assigned so that they reflect the lower bandwidth
needs.
You can lower the data bandwidth needed by a camera either by lowering its line rate or by
decreasing the size of the frame. Once you have adjusted the line rates and/or frame size on the
cameras, you should repeat steps 2 through 6.
For more information about the camera’s maximum allowed line rate, see Section 8.5 on page 131.
For more information about the frame size, see Section 8.1 on page 74.
6 Camera Functional Description
This chapter provides an overview of the camera’s functionality from a system perspective. The
overview will aid your understanding when you read the more detailed information included in the
later chapters of the user’s manual.
Each camera employs a single line CMOS sensor chip designed for monochrome imaging. For 2k
cameras, the sensor includes 2048 pixels with a pixel size of 7 µm x 7 µm. For 4k/6k cameras, the
sensor consists of two/three 2k sensor segments with a pixel size of 7 µm x 7 µm, resulting in a total
of 4096/6144 pixels. For 8k/12k cameras, the sensor consists of two/three 4k sensor segments with a
pixel size of 3.5 µm x 3.5 µm, resulting in a total of 8192/12288 pixels. See Fig. 14 on page 48 for
an overview of the different sensor architectures.
Acquisition start, frame start, and line start can be controlled via externally generated hardware
trigger signals. These signals facilitate periodic or non-periodic frame/line start. Modes are available
that allow the length of exposure time to be directly controlled by the external line start signal or to
be set for a pre-programmed period of time.
Acquisition start, frame start, and exposure time can also be controlled by parameters transmitted
to the camera via the Basler pylon API and the GigE interface.
Accumulated charges are read out of the sensor when exposure ends. At readout, accumulated
charges are moved from the sensor’s light-sensitive elements (pixels) into the analog processing
section of the sensor (Fig. 14 on page 48). As the charges move from the pixels to the analog
processing section, they are converted to voltages proportional to the size of each charge. The
voltages from the analog processing section are next passed to a bank of 12 Bit Analog-to-Digital
converters (ADCs).
Finally, the gray values pass through a section of the sensor where they receive additional digital
processing and then they are moved out of the sensor. As each gray value leaves the sensor, it
passes through an FPGA and into an image buffer (Fig. 15 on page 49). All shifting is clocked
according to the camera’s internal data rate. Shifting continues until all image data has been read
out of the sensor.
The gray values leave the image buffer and pass back through the FPGA to an Ethernet controller
where they are assembled into data packets. The packets are then transmitted via an Ethernet
network to a network adapter in the host PC. The Ethernet controller also handles transmission and
receipt of control data such as changes to the camera’s parameters.
The image buffer between the sensor and the Ethernet controller allows data to be read out of the
sensor at a rate that is independent of the data transmission rate between the camera and the host
computer. This ensures that the data transmission rate has no influence on image quality.
[Figure: block diagrams of the Type A and Type B CMOS sensors; each comprises pixels, an analog processing section, ADCs, and a digital processing section.]
Fig. 14: CMOS Sensor Architecture. Type A is a 2k Sensor or 2k Sensor Segment with a Pixel Size of 7 µm x 7 µm
and Type B is a 4k Sensor Segment with a Pixel Size of 3.5 µm x 3.5 µm.
[Fig. 15: Camera block diagram; pixel data and control data paths between the sensor, FPGA, image buffer, microcontroller, and Ethernet controller.]
7 Physical Interface
This chapter provides detailed information, such as pinouts and voltage requirements, for the
physical interface on the camera. This information will be especially useful during your initial
design-in process.
[Fig. 16: Camera connectors: 6-pin receptacle (power), 12-pin receptacle (I/O), 8-pin RJ-45 jack (Ethernet), and functional earth connection.]
Fig. 17: Pin Numbering for the 6-pin and 12-pin Receptacles
Pin     Designation
1, 2    +12 VDC (- 10 %) to +24 VDC (+ 5 %), < 1 % ripple, Camera Power (*)
3       Not Connected
4       Not Connected
5, 6    DC Ground (**)
Table 5: Pin Assignments for the 6-pin Receptacle
Pin     Designation
1       I/O Input 1 -
2       I/O Input 1 +
3       I/O Input 3 -
4       I/O Input 3 +
5       Gnd
6       I/O Output 1 -
7       I/O Output 1 +
8       I/O Input 2 -
9       I/O Input 2 +
10      Not connected
11      I/O Output 2 -
12      I/O Output 2 +
Table 6: Pin Assignments for the 12-pin Connector
NOTICE
Using a wrong pin assignment for the 12-pin connector can severely damage the camera.
Make sure the cable and plug you connect to the 12-pin connector follows the correct pin
assignment. In particular, do not use a pin assignment that would be correct for Basler area scan
cameras. The 12-pin connectors of Basler line scan and area scan cameras are electrically
incompatible.
For the I/O lines to work correctly, pin 5 must be connected to ground.
[Fig. 18: Power Cable. The DC power supply is wired to the camera’s 6-pin Hirose HR10A-7P-6S plug: pins 1 and 2 carry +12 VDC to +24 VDC, pins 3 and 4 are not connected, pins 5 and 6 are DC ground, and the cable shield is connected.]
[Drawing for Fig. 19: the I/O cable wired to the camera’s 12-pin Hirose HR10A-10P-12S plug with the pin assignments listed in Table 6.]
Fig. 19: I/O Cable
For more information about the 6-pin connector, see Section 7.2.1 on page 52 and Section 7.3.1 on
page 54.
For more information about the power cable, see Section 7.4.1 on page 55.
As shown in Fig. 20 and in the I/O schematic at the beginning of this section, each input is designed
to receive an RS-422 signal. For the camera’s I/O circuitry to operate properly, you must supply a
ground as shown in Fig. 20.
[Fig. 20: RS-422 input circuitry on the 12-pin connector, including the switchable 120 ohm termination resistor.]
The RS-422 standard allows devices to be used with a bus structure to form an interface circuit. So,
for example, input line 1 on several different cameras can be connected via an RS-422 bus as
shown in Fig. 21.
[Fig. 21: RS-422 bus with one driver (D) and receivers R1 through R4; a termination resistor (RT) terminates the bus at the last receiver.]
Connected to the bus would be one camera as the "master" transmitter (driver D; only one driver
allowed) and up to ten cameras (receivers R), with the "master" transmitter sending signals to the
"slave" inputs of the receivers. The inputs of the receivers would be connected in parallel to the
driver via the bus.
The separations between receivers and bus should be as small as possible. The bus must be
terminated by a 120 ohm termination resistor (RT). Each RS-422 input on the cameras includes a
switchable 120 ohm termination resistor as shown in Fig. 20. When a camera input of the last
receiver in the bus terminates the bus (R4 in Fig. 21), the termination resistor on that
input should be enabled. You should not use multiple termination resistors on a single bus. Using
multiple termination resistors will lower signalling reliability and has the potential for causing
damage to the RS-422 devices.
The inputs on the camera can accept RS-644 low voltage differential signals (LVDS).
If you are supplying an RS-644 LVDS signal to an input on the camera, the 120 ohm termination
resistor on that input must be enabled. The input will not reliably react to RS-644 signals if the
resistor is disabled.
For the camera’s I/O circuitry to operate properly, you must supply a ground as shown in Fig. 20.
A camera input line can accept a Low Voltage TTL signal when the signal is input into the camera
as shown in Fig. 22.
The following voltage requirements apply to the camera’s I/O input (pin 2 of the 12-pin connector):
Voltage               Significance
> +0.8 to +2.0 VDC    Region where the transition threshold occurs; the logical state is not defined in this region.
> +2.0 VDC            The voltage indicates a logical 1.
+6.0 VDC              Absolute maximum; the camera can be damaged when the absolute maximum is exceeded.
Table 7: Voltage Requirements for the I/O Input When Using LVTTL
When LVTTL signals are applied to an input, the 120 ohm termination resistor on that input must
be disabled. The input will not react to LVTTL signals if the resistor is enabled.
For the camera’s I/O circuitry to operate properly, you must supply a ground as shown in Fig. 22.
[Fig. 22: LVTTL signal (0 to +5 VDC, referenced to your ground) applied to input line 1 via the 12-pin connector; the 120 ohm termination resistor is disabled.]
You can select an input line and enable or disable the termination resistor on the line from within
your application software by using the pylon API. The following code snippet illustrates using the
API to set the parameter values:
// Select input line 1 and enable the 120 ohm termination resistor on the line
camera.LineSelector.SetValue(LineSelector_Line1);
camera.LineTermination.SetValue(true);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily enable or disable the resistors.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Each individual input line is equipped with a debouncer. The debouncer aids in discriminating
between valid and invalid input signals. The debouncer value specifies the minimum time that an
input signal must remain high or remain low in order to be considered a valid input signal.
We recommend setting the debouncer value so that it is slightly greater than the
longest expected duration of an invalid signal.
Setting the debouncer to a value that is too short will result in accepting invalid
signals. Setting the debouncer to a value that is too long will result in rejecting valid
signals.
To set a debouncer:
1. Use the LineSelector parameter to select the camera input line for which you want to set the
debouncer.
2. Set the value of the LineDebouncerTimeAbs parameter.
You can set the LineSelector and the value of the LineDebouncerTimeAbs parameter from within your
application software by using the pylon API. The following code snippet illustrates using the API to
set the parameter values:
// Select input line 1 and set the debouncer value to 100 microseconds
camera.LineSelector.SetValue(LineSelector_Line1);
camera.LineDebouncerTimeAbs.SetValue(100);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
You can set each individual input line to invert or not to invert the incoming electrical signal.
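A minimal sketch follows; it assumes that the invert function is controlled by a LineInverter parameter, a name that does not appear in this section and is therefore an assumption:
// Select input line 1 and invert the incoming signal (LineInverter is an assumed name)
camera.LineSelector.SetValue(LineSelector_Line1);
camera.LineInverter.SetValue(true);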
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
You can select an input line as the source signal for the following camera functions:
the Acquisition Start Trigger
the Frame Start Trigger
the Line Start Trigger
the Phase A input for the shaft encoder module
the Phase B input for the shaft encoder module
To use an input line as the source signal for a camera function, you must apply an electrical signal
to the input line that is appropriately timed for the function.
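As an example, the following sketch selects input line 1 as the source signal for the line start trigger; the trigger parameters themselves are described in Chapter 8, and the enumeration names follow the pattern of the other snippets in this manual:
// Use input line 1 as the source signal for the line start trigger
camera.TriggerSelector.SetValue(TriggerSelector_LineStart);
camera.TriggerSource.SetValue(TriggerSource_Line1);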
For detailed information about selecting an input line as the source signal for the camera’s
Acquisition Start Trigger function, see Section 8.2.2.2 on page 79.
For detailed information about selecting an input line as the source signal for the camera’s Frame
Start Trigger function, see Section 8.2.3.3 on page 84.
For detailed information about selecting an input line as the source signal for the camera’s Line Start
Trigger function, see Section 8.2.4.2 on page 87 and Section 8.2.4.3 on page 90.
For detailed information about selecting an input line as the source signal for the shaft encoder
model Phase A or Phase B input, see Section 8.6 on page 136.
By default:
Input Line 1 is selected as the source signal for the camera’s Line Start Trigger function.
Input Line 1 is also selected as the source signal for shaft encoder module Phase A input.
Input Line 2 is selected as the source signal for shaft encoder module Phase B input.
Input Line 3 is selected as the source signal for the camera’s Frame Start Trigger function.
As shown in Fig. 23 and in the I/O schematic at the beginning of this section, each output is
designed to transmit an RS-422 signal. For the camera’s I/O circuitry to operate properly, you must
supply a ground as shown in Fig. 23.
The RS-422 standard allows devices to be used with a bus structure to form an interface circuit. So,
for example, output line 1 on a camera can be connected to an RS-422 bus in parallel with the inputs
on several of your devices (receivers). The camera with output line 1 connected to the bus would
serve as a "master" transmitter to the "slave" inputs of the other connected devices. For more
information about an RS-422 interface circuit and a related figure, see the "Using the Inputs with
RS-422" section.
Be aware that the last receiver in an RS-422 bus must have a 120 Ohm termination resistor.
[Fig. 23: RS-422 output circuitry; the camera’s RS-422 transceiver drives output line 1 (I/O Out 1 - on pin 6, I/O Out 1 + on pin 7) via the 12-pin connector to your RS-422 input; a ground connection is required.]
You cannot directly use the RS-422 signal from a camera output line as an input to an RS-644 low
voltage differential signal (LVDS) receiver. However, if a resistor network is placed on the camera’s
output as shown in Fig. 24, you can use the signal from the camera’s output line as an input to an
RS-644 device.
For the camera’s I/O circuitry to operate properly, you must supply a ground as shown in Fig. 24.
[Drawing for Fig. 24: a resistor network (47 Ω, 22 Ω, and 47 Ω) placed on the camera’s output line for driving an RS-644 input.]
Fig. 24: RS-422 Output Signal Modified for Use with an RS-644 Input
You can use a camera output line as an input to a low voltage TTL receiver, but only if the camera’s
output signal is used as shown in Fig. 25. In this situation, a low will be indicated by a camera output
voltage near zero, and a high will be indicated by a camera output voltage of approximately 3.3
VDC. These voltages are within the typically specified levels for low voltage TTL devices.
For the camera’s I/O circuitry to operate properly, you must supply a ground as shown in Fig. 25.
Fig. 25: Output Line Wired for Use with an LVTTL Input
The camera allows you to select the output signals of the shaft encoder module or of the frequency
converter module and to assign them to one of the camera’s digital output lines. In this fashion,
input signals can be passed through the camera to trigger additional cameras.
In this case, setting a minimum output pulse width may be necessary to ensure output signal
detection.
For more information about selecting the source signal for an output line on the camera, see
Section 7.6.2.5 on page 68.
For more information about the electrical characteristics of the camera’s output lines, see
Section 7.6.2.1 on page 64.
For more information about the minimum output pulse width feature, see Section 7.6.2.3 on
page 67.
You can use the minimum output pulse width feature to ensure that even very narrow camera output
signals, e.g. signals originating from a shaft encoder, will reliably be detected by other devices. The
MinOutPulseWidthAbs parameter sets output signals for the selected output line to a minimum
width. The parameter is set in microseconds and can be set in a range from 0 to 100 µs.
You can set the LineSelector and the MinOutPulseWidthAbs parameter values from within your
application software by using the pylon API. The following code snippet illustrates using the API to
set the parameter values:
// Select the output line
camera.LineSelector.SetValue(LineSelector_Out1);
// Set the minimum output pulse width for the selected line (in microseconds)
camera.MinOutPulseWidthAbs.SetValue(10.0);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
For more information about selecting the source signal for an output line on the camera, see
Section 7.6.2.5 on page 68.
For more information about the electrical characteristics of the camera’s output lines, see
Section 7.6.2.1 on page 64.
You can set each individual output line to invert or not to invert the outgoing signal.
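A minimal sketch follows, analogous to the input line case; it again assumes a LineInverter parameter, which is not named in this section:
// Select output line 1 and invert the outgoing signal (LineInverter is an assumed name)
camera.LineSelector.SetValue(LineSelector_Out1);
camera.LineInverter.SetValue(true);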
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
To make a physical output line useful, you must select a source signal for the output line.
The camera has the following standard output signals available that can be selected as the source
signal for an output line:
the Exposure Active signal
the Acquisition Trigger Wait signal
the Frame Trigger Wait signal
the Line Trigger Wait signal
the Shaft Encoder Module Out signal
the Frequency Converter signal
You can also select one of the following as the source signal for an output:
the "User Output" signal (when you select "user output" as the source signal for an output line,
you can use the camera’s API to set the state of the line as you desire)
Off (when Off is selected as the source signal, the output is disabled.)
To select one of the standard output signals as the source signal for an output line:
// Select the shaft encoder module out signal for output line 1
camera.LineSelector.SetValue(LineSelector_Out1);
camera.LineSource.SetValue(LineSource_ShaftEncoderModuleOut);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about
the pylon API and the pylon Viewer, see Section 3.1 on page 26.
the Exposure Active signal, see Section 8.3.1 on page 120.
the Acquisition Trigger Wait signal, see Section 8.3.3.1 on page 122.
the Frame Trigger Wait signal, see Section 8.3.3.2 on page 124.
the Line Trigger Wait signal, see Section 8.3.3.3 on page 126.
working with outputs that have "user settable" as the signal source, see Section 7.6.2.6.
By default, the camera’s Exposure Active signal is selected as the source signal for Output Line 1,
and the camera’s Frame Trigger Wait signal is selected as the source signal for Output Line 2.
As mentioned in the previous section, you can select "user output" as the signal source for an output
line. For an output line that has "user output" as the signal source, you can use camera parameters
to set the state of the line.
//Set the state of output line 2 and then read the state
camera.UserOutputSelector.SetValue(UserOutputSelector_UserOutput2);
camera.UserOutputValue.SetValue(true);
bool currentUserOutput2State = camera.UserOutputValue.GetValue();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
If you have designated both of the camera’s output lines as user outputs, you can use the
UserOutputValueAll parameter to set the state of both outputs.
The UserOutputValueAll parameter is a 32 bit value. As shown in Fig. 26, the lowest two bits of the
parameter value will set the state of the user outputs. If a bit is 0, it will set the state of the associated
output to low. If a bit is 1, it will set the state of the associated output to high.
[Fig. 26: UserOutputValueAll bit layout; only the two least significant bits are used, one per user output; the remaining bits are not used.]
Use the UserOutputValueAll parameter to set the state of multiple user outputs.
You can set the UserOutputValueAll parameter from within your application software by using the
pylon API. The following code snippet illustrates using the API to set the parameter:
// Set the state of both output lines to 1 and read the state
camera.UserOutputValueAll.SetValue(0x3);
int64_t currentOutputState = camera.UserOutputValueAll.GetValue();
If you have the invert function enabled on an output line that is designated as a
user output, the user setting sets the state of the line before the inverter.
You can determine the current state of all input and output lines with a single operation.
The LineStatusAll parameter is a 32 bit value. As shown in Fig. 27, certain bits in the value are
associated with each line and the bits will indicate the state of the lines. If a bit is 0, it indicates that
the state of the associated line is currently low. If a bit is 1, it indicates that the state of the associated
line is currently high.
[Fig. 27: LineStatusAll bit layout; individual bits indicate the states of input lines 1, 2, and 3 and output lines 1 and 2.]
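A minimal sketch of such a read follows; the LineStatusAll parameter is the one described above:
// Read the current state of all input and output lines as a bit field
int64_t lineStatusAll = camera.LineStatusAll.GetValue();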
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
// Select the I/O line and read the line logic type
camera.LineSelector.SetValue(LineSelector_Line1);
LineLogicEnums lineLogicLine1 = camera.LineLogic.GetValue();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
As shown in the I/O schematic at the beginning of this section, the camera’s I/O circuitry will
incorporate Linear Technology LTC2855 transceivers or the equivalent. For more detailed
information about response characteristics, refer to the LTC2855 data sheet.
The response times for the output lines on your camera will fall into the ranges
specified above. The exact response time for your specific application will depend
on your circuit design.
8 Acquisition Control
The sample code included in this section represents "low level" code that is
actually used by the camera.
Many tasks, however, can be programmed more conveniently with fewer lines of
code when employing the Instant Camera classes, provided by the Basler pylon
C++ API.
For information about the Instant Camera classes, see the C++ Programmer's
Guide and Reference Documentation delivered with the Basler pylon Camera
Software Suite.
This section provides detailed information about controlling the acquisition of image information.
You will find details about triggering frame and line acquisition, about setting the exposure time for
acquired lines, about setting the camera’s line acquisition rate, and about how the camera’s
maximum allowed line acquisition rate can vary depending on the current camera settings.
As another example, assume that the OffsetX parameter is set to 10 and the Width parameter is set
to 25. With these settings, pixels 10 through 34 would be used for each line acquisition as shown
in Fig. 28.
[Fig. 28: OffsetX and Width applied to the sensor line.]
The Height parameter determines the number of lines that will be included in each frame. For
example, assume that the Height parameter is set to 100 and that the camera has just started to
acquire lines. In this case, the camera will accumulate acquired line data in an internal buffer until
100 lines have been accumulated. Once pixel data for 100 lines has accumulated in the buffer, the
camera will recognize this as a complete frame and it will begin to transmit the acquired frame to
your host PC via the GigE network connection. The camera has multiple frame buffers, so it can
begin to acquire lines for a new frame as it is transmitting data for the previously acquired frame.
The absolute maximum for the Height parameter value is 12288. Accordingly, a
single frame may include 12288 lines at most.
However, this maximum number of lines cannot be obtained under all conditions:
In the event of limitations due to the current camera parameter settings or due to
the transport layer, the camera will automatically decrease the Height parameter
to a suitable value. Each frame will then include fewer lines than originally set.
Given the current camera parameter settings, check the Height parameter to see
whether the desired number of lines per frame can actually be obtained.
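The following minimal sketch illustrates such a check with the pylon API (the read-back pattern follows the other snippets in this manual):
// Read the current Height parameter value to see how many lines
// per frame will actually be obtained with the current settings
int64_t actualHeight = camera.Height.GetValue();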
When setting the frame parameters, the following guidelines must be followed:
The sum of the OffsetX parameter plus the Width parameter must be less than or equal to the
total number of pixels in the camera’s sensor line. For example, if you are working with a
camera that has a line with 2048 pixels, the sum of the OffsetX setting plus the Width setting
must be less than or equal to 2048.
The Height parameter must be set to or below the maximum allowed value.
The maximum allowed value for the Height parameter setting will be at least 512, but will vary
depending on your camera model and on how the camera’s parameters are set. To determine
the maximum allowed Height value, see below.
The following code snippet illustrates using the pylon API to get the maximum allowed Height value:
int64_t heightMax = camera.Height.GetMax();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
You can set the OffsetX, Width, and Height parameter values from within your application software
by using the pylon API. The following code snippets illustrate using the API to get the maximum
allowed settings and the increments for the OffsetX, Width and Height parameters. They also
illustrate setting the OffsetX, Width, and Height parameter values.
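The snippets below are a representative sketch of these calls (the Get/Set pattern follows the pylon API usage shown elsewhere in this manual; the example values echo the examples above):
// Get the maximum allowed settings and the increments
int64_t offsetXMax = camera.OffsetX.GetMax();
int64_t offsetXInc = camera.OffsetX.GetInc();
int64_t widthMax = camera.Width.GetMax();
int64_t widthInc = camera.Width.GetInc();
int64_t heightMax = camera.Height.GetMax();
int64_t heightInc = camera.Height.GetInc();
// Set the OffsetX, Width, and Height parameter values
camera.OffsetX.SetValue(10);
camera.Width.SetValue(25);
camera.Height.SetValue(100);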
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The Width and Height parameters cannot be changed while the camera is in the
process of acquiring frames. If the camera receives commands to change the
Width or Height parameter values while it is in the process of acquiring frames:
If the camera is set for single frame mode, the parameters will not change
until the current frame is complete or you issue an acquisition stop command.
If the camera is set for continuous frame mode, the parameters will not
change until you issue an acquisition stop command.
You can set the AcquisitionMode parameter value and you can issue AcquisitionStart or
AcquisitionStop commands from within your application software by using the pylon API. The code
snippet below illustrates using the API to set the AcquisitionMode parameter value and to issue an
AcquisitionStart command. The snippet also illustrates setting several parameters regarding frame
and line triggering. These parameters are discussed later in this chapter.
// Set the acquisition mode to single frame
camera.AcquisitionMode.SetValue(AcquisitionMode_SingleFrame);
// Select the frame start trigger and set it to on with rising edge activation
camera.TriggerSelector.SetValue(TriggerSelector_FrameStart);
camera.TriggerMode.SetValue(TriggerMode_On);
camera.TriggerActivation.SetValue(TriggerActivation_RisingEdge);
// Select the line start trigger and set it to on with rising edge activation
camera.TriggerSelector.SetValue(TriggerSelector_LineStart);
camera.TriggerMode.SetValue(TriggerMode_On);
camera.TriggerActivation.SetValue(TriggerActivation_RisingEdge);
// Set the timed exposure mode with an exposure time of 55 µs
camera.ExposureMode.SetValue(ExposureMode_Timed);
camera.ExposureTimeAbs.SetValue(55);
// Issue an acquisition start command
camera.AcquisitionStart.Execute();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
When the camera's acquisition mode is set to single frame, the maximum possible
acquisition frame rate for a given AOI cannot be achieved. This is true because
the camera performs a complete internal setup cycle for each single frame.
When the TriggerMode parameter for the acquisition start trigger is set to Off, the camera will
generate all required acquisition start trigger signals internally, and you do not need to apply
acquisition start trigger signals to the camera.
When the TriggerMode parameter for the acquisition start trigger is set to On, you must apply an
acquisition start trigger to the camera in order to make the camera’s acquisition state valid. Once
an acquisition start trigger has been applied to the camera and the acquisition state has become
valid, the state will remain valid until the camera has acquired the number of frames specified by
the AcquisitionFrameCount parameter. At that point, the acquisition state will become invalid, and
you must apply a new acquisition start trigger to the camera before it can acquire any more frames.
When the TriggerMode parameter for the acquisition start trigger is set to On, you must select a
source signal to serve as the acquisition start trigger. The TriggerSource parameter specifies the
source signal. The available selections for the TriggerSource parameter are:
Software - When the acquisition start trigger source is set to software, the user applies an
acquisition start trigger to the camera by issuing an acquisition start TriggerSoftware command
to the camera from the host PC.
Line1, Line2 or Line3 - When the acquisition start trigger source is set to Line1, Line2 or Line3,
the user applies an acquisition start trigger to the camera by injecting an externally generated
acquisition start trigger signal (referred to as an ExASTrig signal) into physical input line 1, line
2 or line 3 on the camera.
If the TriggerSource parameter for the acquisition start trigger is set to Line1, Line2 or Line3, the
user must also set the TriggerActivation parameter. The available settings for the TriggerActivation
parameter are:
RisingEdge - specifies that a rising edge of the hardware trigger signal will act as the
acquisition start trigger.
FallingEdge - specifies that a falling edge of the hardware trigger signal will act as the
acquisition start trigger.
When the TriggerMode parameter for the acquisition start trigger is set to On, the
camera’s AcquisitionMode parameter must be set to continuous.
8.2.2.3 AcquisitionFrameCount
When the TriggerMode parameter for the acquisition start trigger is set to On, you must set the value
of the camera’s AcquisitionFrameCount parameter. The value of the AcquisitionFrameCount can
range from 1 to 65535.
With acquisition start triggering on, the camera will initially be in a "waiting for acquisition start
trigger" acquisition status. When in this acquisition status, the camera cannot react to frame start
trigger signals. If an acquisition start trigger signal is applied to the camera, the camera will exit the
"waiting for acquisition start trigger" acquisition status and will enter the "waiting for frame start
trigger" acquisition status. It can then react to frame start trigger signals. When the camera has
received a number of frame start trigger signals equal to the current AcquisitionFrameCount
parameter setting, it will return to the "waiting for acquisition start trigger" acquisition status. At that
point, you must apply a new acquisition start trigger signal to exit the camera from the "waiting for
acquisition start trigger" acquisition status.
You can set the TriggerMode and TriggerSource parameter values for the acquisition start trigger
and the AcquisitionFrameCount parameter value from within your application software by using the
pylon API.
The following code snippet illustrates using the API to set the acquisition start Trigger Mode to on,
the Trigger Source to software, and the AcquisitionFrameCount to 5:
// Select the acquisition start trigger
camera.TriggerSelector.SetValue(TriggerSelector_AcquisitionStart);
// Set the mode for the selected trigger
camera.TriggerMode.SetValue(TriggerMode_On);
// Set the source for the selected trigger
camera.TriggerSource.SetValue(TriggerSource_Software);
// Set the acquisition frame count
camera.AcquisitionFrameCount.SetValue(5);
The following code snippet illustrates using the API to set the Trigger Mode to on, the Trigger
Source to line 1, the Trigger Activation to rising edge, and the AcquisitionFrameCount to 5:
// Select the acquisition start trigger
camera.TriggerSelector.SetValue(TriggerSelector_AcquisitionStart);
// Set the mode for the selected trigger
camera.TriggerMode.SetValue(TriggerMode_On);
// Set the source for the selected trigger
camera.TriggerSource.SetValue(TriggerSource_Line1);
// Set the activation mode for the selected trigger to rising edge
camera.TriggerActivation.SetValue(TriggerActivation_RisingEdge);
// Set the acquisition frame count
camera.AcquisitionFrameCount.SetValue(5);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
When the Frame Start TriggerMode parameter is set to Off, selection of a source signal for the
frame start trigger is not required. With the mode set to Off, the camera operates the frame start
trigger automatically. How the camera will operate the frame start trigger depends on the setting of
the camera’s AcquisitionMode parameter:
If the AcquisitionMode parameter is set to single frame, the camera will automatically make the
frame start trigger valid when it receives an AcquisitionStart command. The trigger will remain
valid until enough lines have been acquired to constitute a complete frame and then will
become invalid.
If the AcquisitionMode parameter is set to continuous frame:
a. The camera will automatically make the frame start trigger valid when it receives an
AcquisitionStart command.
b. The frame start trigger will be held valid until enough lines have been acquired to constitute
a complete frame and then will become invalid.
c. As soon as acquisition of lines for a next frame can start, the frame start trigger will
automatically be made valid, will be held valid until enough lines have been acquired to
constitute a complete frame, and then will become invalid.
d. The behavior in step c will repeat until the camera receives an AcquisitionStop command.
When an AcquisitionStop command is received, the frame start trigger will become
continuously invalid.
When the Frame Start TriggerMode parameter is set to On, you must select a source signal for the
frame start trigger. The Frame Start Trigger Source parameter specifies the source of the signal.
The available selections for the Frame Start Trigger Source parameter are:
Software - When the frame start trigger source is set to software, the user triggers frame start
by issuing a TriggerSoftware command to the camera from the host PC. Each time a
TriggerSoftware command is received by the camera, the frame start trigger will become valid
and will remain valid until enough lines have been acquired to constitute a complete frame.
The frame start trigger will then become invalid.
Line 1 - When the frame start trigger source is set to line 1, the user triggers frame start by
applying an external electrical signal (referred to as an ExFSTrig signal) to physical input line 1
on the camera.
Line 2 - When the frame start trigger source is set to line 2, the user triggers frame start by
applying an ExFSTrig signal to physical input line 2 on the camera.
Line 3 - When the frame start trigger source is set to line 3, the user triggers frame start by
applying an ExFSTrig signal to physical input line 3 on the camera.
Shaft Encoder Module Out - When the frame start trigger source is set to shaft encoder module
out, the output signal from the camera’s shaft encoder software module will trigger frame start.
If the Frame Start Trigger Source parameter is set to Line 1, Line 2, Line 3, or Shaft Encoder Module
Out, the user must also set the Frame Start Trigger Activation parameter. The available settings for
the Frame Start Trigger Activation parameter are:
RisingEdge - specifies that a rising edge of the source signal will make the frame start trigger
valid. The frame start trigger will remain valid until enough lines have been acquired to
constitute a complete frame and then will become invalid.
FallingEdge - specifies that a falling edge of the source signal will make the frame start trigger
valid. The frame start trigger will remain valid until enough lines have been acquired to
constitute a complete frame and then will become invalid.
LevelHigh - specifies that a rising edge of the source signal will make the frame start trigger
valid. The frame start trigger will remain valid as long as the signal remains high. The frame
start trigger will become invalid when the signal becomes low.
LevelLow - specifies that a falling edge of the source signal will make the frame start trigger
valid. The frame start trigger will remain valid as long as the signal remains low. The frame
start trigger will become invalid when the signal becomes high.
If the Frame Start Trigger Activation parameter is set to LevelHigh or LevelLow, the user must also
set the TriggerPartialClosingFrame parameter. The available settings for the
TriggerPartialClosingFrame parameter are:
True - When the frame start trigger signal transitions while a frame is being acquired, frame acquisition will stop and only the portion of the frame acquired so far will be transmitted.
False - When the frame start trigger signal transitions while a frame is being acquired, the complete frame will be acquired and transmitted.
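The following is a minimal sketch of configuring this behavior with the pylon API (the boolean SetValue call for TriggerPartialClosingFrame is an assumption made by analogy with the other parameters in this manual):
// Select the frame start trigger, set LevelHigh activation, and
// enable transmission of partial frames on an early trigger transition
camera.TriggerSelector.SetValue(TriggerSelector_FrameStart);
camera.TriggerActivation.SetValue(TriggerActivation_LevelHigh);
camera.TriggerPartialClosingFrame.SetValue(true);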
By default, Input Line 3 is selected as the source signal for the Frame Start Trigger.
If the Frame Start Trigger Source parameter is set to Shaft Encoder Module Out,
the recommended setting for the Frame Start Trigger Activation parameter is
RisingEdge.
If the Frame Start Trigger Source parameter is set to Line 1, Line 2, or Line 3, the
electrical signal applied to the selected input line must be held high for at least 100
ns for the camera to detect a transition from low to high and must be held low for
at least 100 ns for the camera to detect a transition from high to low.
To see graphical representations of frame start triggering, refer to the use case diagrams in
Section 8.3 on page 119.
You can set the Trigger Mode, Trigger Source, and Trigger Activation parameter values for the
frame start trigger from within your application software by using the pylon API. If your settings
make it necessary, you can also issue a Trigger Software command. The following code snippet
illustrates using the API to set the frame start trigger to mode = on, with rising edge triggering on
input line 1:
// Select the trigger you want to work with
camera.TriggerSelector.SetValue(TriggerSelector_FrameStart);
// Set the mode for the selected trigger
camera.TriggerMode.SetValue(TriggerMode_On);
// Set the source for the selected trigger
camera.TriggerSource.SetValue(TriggerSource_Line1);
// Set the activation scheme for the selected trigger
camera.TriggerActivation.SetValue(TriggerActivation_RisingEdge);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The Frame Timeout Abs parameter allows setting a maximum time (in microseconds) that may
elapse for each frame acquisition, i.e. the maximum time for the acquisition of the lines for a frame.
When the frame timeout is enabled and a time is set, a partial frame will be transmitted if the set time has elapsed before all lines specified for the frame are acquired. In addition, a frame timeout event will be generated if it was enabled.
The minimum value for the Frame Timeout Abs parameter is 0 and the maximum value is 10000000
(= 10 seconds).
You can enable and configure the frame timeout from within your application software by using the
pylon API. The following code snippet illustrates using the API to enable and configure the frame
timeout:
// enable FrameTimeout and set FrameTimeout value
camera.FrameTimeoutEnable.SetValue(true);
// Although FrameTimeoutAbs is measured in microseconds the current resolution
// is just milliseconds.
double FrameTimeout_us = 20000.0; // 20 ms
camera.FrameTimeoutAbs.SetValue(FrameTimeout_us);
You can enable the frame timeout event from within your application software by using the pylon
API. The following code snippet illustrates using the API to enable the frame timeout event:
// enable FrameTimeout event
camera.EventSelector.SetValue(EventSelector_FrameTimeout);
camera.EventNotification.SetValue(EventNotification_GenICamEvent);
For more information about event reporting and enabling an event, see Section 10.5 on page 175.
When the Line Start TriggerMode parameter is set to Off, selection of a source signal for the line
start trigger is not required. With the mode set to Off, the camera operates the line start trigger
automatically. How the camera will operate the line start trigger depends on the setting of the
camera’s AcquisitionMode parameter:
If the AcquisitionMode parameter is set to single frame, the camera will automatically begin
generating line start triggers when it receives an AcquisitionStart command. The camera will
generate line start triggers until enough lines have been acquired to constitute a complete
frame and then will stop generating line start triggers.
If the AcquisitionMode parameter is set to continuous frame, the camera will automatically
begin generating line start triggers when it receives an AcquisitionStart command. The camera
will continue to generate line start triggers until it receives an AcquisitionStop command.
The rate at which the line start triggers are generated will be determined by the camera’s
AcquisitionLineRateAbs parameter:
If AcquisitionLineRateAbs is set to a value less than the maximum allowed line acquisition rate,
the parameter limits the line rate to the given value. This is useful if you want to limit the
amount of data to be transferred from the camera to the PC.
If AcquisitionLineRateAbs is set to a value greater than the maximum allowed line acquisition
rate, the camera will generate line start triggers at the maximum allowed line rate.
For more information about setting the parameter, see Section 8.2.4.3 on page 90. For more information about the maximum allowed line rate, see Section 8.5 on page 131.
When the line start trigger mode is set to Off, the exposure time for each line acquisition is
determined by the value of the camera’s Exposure Time parameters.
For more information about the camera’s exposure time parameters, see Section 8.2.5.2 on
page 92.
When the TriggerMode parameter for the line start trigger is set to On, you must select a source
signal for the line start trigger. The TriggerSource parameter specifies the source signal. The
available selections for the TriggerSource parameter are:
Software - When the line start trigger source is set to software, the user triggers line start by
issuing a TriggerSoftware command to the camera from the host PC. Each time a
TriggerSoftware command is received by the camera, the line start trigger will become valid. It
will become invalid during line acquisition and will become valid again when the next
TriggerSoftware command is received and when the camera is ready again for a new line
acquisition.
Line1 - When the line start trigger source is set to Line1, the user triggers each line acquisition
start by applying an external electrical signal (referred to as an ExLSTrig signal) to physical
input line 1 on the camera.
Line2 - When the line start trigger source is set to Line2, the user triggers each line acquisition
start by applying an ExLSTrig signal to physical input line 2 on the camera.
Line3 - When the line start trigger source is set to Line3, the user triggers each line acquisition
start by applying an ExLSTrig signal to physical input line 3 on the camera.
ShaftEncoderModuleOut - When the line start trigger source is set to ShaftEncoderModuleOut,
the output signal from the camera’s shaft encoder software module will trigger each line
acquisition start.
If the TriggerSource parameter is set to Line 1, Line 2, Line 3, or ShaftEncoderModuleOut, the user
must also set the TriggerActivation parameter for the line start trigger. The available settings for the
TriggerActivation parameter are:
RisingEdge - specifies that a rising edge of the source signal will start a line acquisition.
FallingEdge - specifies that a falling edge of the source signal will start a line acquisition.
By default, input line 1 is selected as the source signal for the Line Start Trigger.
All line start trigger signals input into the camera when the frame start trigger signal
is invalid will be ignored by the camera.
If the Trigger Source parameter is set to Shaft Encoder Module Out, the
recommended setting for the Line Start Trigger Activation parameter is
RisingEdge.
If the Line Start Trigger Source parameter is set to Line 1, Line 2, or Line 3, the
electrical signal applied to the selected input line must be held high for at least 100
ns for the camera to detect a transition from low to high and must be held low for
at least 100 ns for the camera to detect a transition from high to low.
If you are using a software trigger, do not trigger line acquisition at a rate that:
exceeds the maximum allowed for the current camera settings. If you apply line start trigger signals to the camera when it is not ready to receive them, the signals will be ignored. For more information about determining the maximum allowed line rate, see Section 8.5 on page 131.
exceeds the host computer’s capacity limits for data transfer or storage or both. If you try to transfer more image data than the host computer is able to process, lines may be dropped. For more information about bandwidth optimization, see the Installation and Setup Guide for Cameras Used with Basler pylon for Windows (AW000611).
When the Line Start TriggerMode parameter is set to On, there are two modes available to control the exposure time for each acquired line: trigger width control and timed control. You can set the camera’s Exposure Mode parameter to select one of the exposure time control modes. The modes are explained in detail below.
If you have the Line Start Trigger Source parameter set to Line 1, Line 2, or Line 3, either of the two exposure time control modes will work well. You should select the mode that is most appropriate
If you have the Line Start Trigger Source parameter set to Shaft Encoder Module out, we
recommend that you select the timed control mode. The trigger width mode should not be used in
this case.
In all cases, the exposure time for each line must be within the minimum and the
maximum stated in Table 10 on page 92. This is true regardless of the method
used to control exposure.
When the trigger width exposure time control mode is selected, the exposure time for each line
acquisition will be directly controlled by the source signal for the line start trigger. If the camera is
set for rising edge triggering, the exposure time begins when the signal rises and continues until the
signal falls. If the camera is set for falling edge triggering, the exposure time begins when the signal
falls and continues until the signal rises. Fig. 29 illustrates trigger width exposure with the camera
set for rising edge line start triggering.
Trigger width exposure is especially useful if you intend to vary the length of the exposure time for
each acquired line.
Fig. 29: Trigger Width Exposure with RisingEdge Line Start Triggering
Fig. 30: Timed Exposure (exposure duration determined by the exposure time parameters)
When the TriggerMode for the line start trigger is set to On and an input line is selected as the
source signal, there is a delay between the transition of the line start signal and the actual start of
exposure. For example, if you are using the timed exposure mode with rising edge triggering, there
is a delay between the rise of the signal and the actual start of exposure.
There is also an exposure end delay, i.e., a delay between the point when exposure should end as
explained in the diagrams on the previous page and when it actually does end.
The base exposure start and end delays are as shown in Table 8:
When using the frequency converter, the delay values may slightly differ from
those given in Table 8.
There is also a second component to the start and end delays. This second component is the
debouncer setting for the input line. The debouncer setting for the input line must be added to the
base start and end delays shown in Table 8 to determine the total start delay and end delay.
For example, assume that you are using an raL2048-48gm camera and that you have set the line
start trigger mode to on. Also assume that you have selected input line 1 as the source signal for
the line start trigger and that the debouncer parameter for line 1 is set to 5 µs. In this case:
Total Start Delay = Start Delay Value from Table 8 + Debouncer Setting
Total Start Delay = 1.5 µs + 5 µs
Total Start Delay = 6.5 µs
Total End Delay = End Delay Value from Table 8 + Debouncer Setting
Total End Delay = 1.2 µs + 5 µs
Total End Delay = 6.2 µs
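The following sketch expresses this calculation in application code (the LineDebouncerTimeAbs parameter name is an assumption; the base delay value is taken from the example above):
// Select input line 1 and read its debouncer setting (in µs)
camera.LineSelector.SetValue(LineSelector_Line1);
double debouncer_us = camera.LineDebouncerTimeAbs.GetValue();
// Total start delay = base start delay from Table 8 + debouncer setting
double totalStartDelay_us = 1.5 + debouncer_us;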
You can set the Trigger Mode, Trigger Source, and Trigger Activation parameter values for the line
start trigger from within your application software by using the pylon API. If your settings make it
necessary, you can also select an exposure mode and set the exposure time.
The following code snippet illustrates using the API to set the line start trigger to mode = off, the
acquisition line rate to 20000, and the exposure time to 50 µs:
// Select the trigger you want to work with
camera.TriggerSelector.SetValue(TriggerSelector_LineStart);
// Set the mode for the selected trigger
camera.TriggerMode.SetValue(TriggerMode_Off);
// set a line rate
camera.AcquisitionLineRateAbs.SetValue(20000);
// set the exposure time to 50 µs
camera.ExposureTimeAbs.SetValue(50.0);
The following code snippet illustrates using the API to set the line start trigger to mode = on, to set
rising edge triggering on input line 2, to set the exposure mode to timed, and to set the exposure
time to 60 µs:
// Select the trigger you want to work with
camera.TriggerSelector.SetValue(TriggerSelector_LineStart);
// Set the mode for the selected trigger
camera.TriggerMode.SetValue(TriggerMode_On);
// Set the source for the selected trigger
camera.TriggerSource.SetValue(TriggerSource_Line2);
// Set the activation for the selected trigger
camera.TriggerActivation.SetValue(TriggerActivation_RisingEdge);
// set for the timed exposure mode and set exposure time to 60 µs
camera.ExposureMode.SetValue(ExposureMode_Timed);
camera.ExposureTimeAbs.SetValue(60.0);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
If you are operating the camera in either of the following ways, you must specify an exposure time
by setting the camera’s exposure time parameters:
the Line Start Trigger Mode is set to Off
the Line Start Trigger Mode is set to On and the Timed Exposure Time Control Mode is
selected
There are two ways to specify the exposure time: by setting "raw" parameter values or by setting
an "absolute" parameter value. The two methods are described below. You can use whichever
method you prefer to set the exposure time.
When exposure time is set using "raw" values, the exposure time will be determined by a combination of two elements. The first element is the value of the Exposure Time Raw parameter. The second element is the Exposure Time Base, which is 100 ns on racer cameras.
Exposure Time = (Exposure Time Raw Parameter Value) x 100 ns
The Exposure Time Raw parameter value can be set in a range from 1 to 4095.
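As a worked example of the formula above (a sketch; the raw value is chosen for illustration):
// A desired exposure time of 55 µs corresponds to a raw value of
// 55 µs / 100 ns = 550
camera.ExposureTimeRaw.SetValue(550);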
The exposure time, i.e. the product of the Exposure Time Raw parameter setting
and the Exposure Time Base Abs parameter value (i.e., 100 ns) must be equal to
or greater than the minimum exposure specified in the table on the previous page.
It is possible to use the parameters to set the exposure time lower than what is
shown in the table, but this is not allowed and the camera will not operate properly
when set this way.
If you are using a GenICam compliant tool such as the Basler pylon Viewer and
you attempt to set the exposure time to exactly the minimum allowed or to exactly
the maximum allowed, you will see unusual error codes. This is an artifact of a
rounding error in the GenICam interface architecture. As a workaround, you could
set the exposure time slightly above the minimum or below the maximum. Values
between the minimum and the maximum are not affected by the problem.
You can set the Exposure Time Raw parameter value from within your application software by using
the pylon API. The following code snippet illustrates using the API to set the parameter values:
// Set for the timed exposure mode
camera.ExposureMode.SetValue(ExposureMode_Timed);
// Set the raw exposure time value (exposure time = raw value x 100 ns)
camera.ExposureTimeRaw.SetValue(2);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
You can also set the exposure time with an "absolute" parameter. This is accomplished by setting
the camera’s ExposureTimeAbs parameter. The unit for the ExposureTimeAbs parameter is µs.
The setting for the ExposureTimeAbs parameter must be between the minimum
and the maximum allowed values (inclusive) shown in Table 8 on page 89.
The increment for the ExposureTimeAbs parameter is determined by the Exposure Time Base Abs
parameter value, i.e. 100 ns. For example, you can set the ExposureTimeAbs parameter to 15.0 µs,
15.1 µs, 15.2 µs, etc.
If you set the ExposureTimeAbs parameter to a value that is not a multiple of the Exposure Time
Base parameter value (i.e., 100 ns), the camera will automatically change the setting for the
ExposureTimeAbs parameter to the nearest multiple of 100 ns.
You should also be aware that if you change the exposure time using the raw settings, the
ExposureTimeAbs parameter will automatically be updated to reflect the new exposure time.
You can set the ExposureTimeAbs parameter value from within your application software by using
the pylon API. The following code snippet illustrates using the API to set the parameter value:
// Set the absolute exposure time to 124 µs
camera.ExposureTimeAbs.SetValue(124.0);
// Read back the resulting (possibly rounded) exposure time
double resultingExpTime = camera.ExposureTimeAbs.GetValue();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Use Case 1 - Acquisition Start, Frame Start, and Line Start Triggering Off
(Free Run), SingleFrame Mode
Settings:
AcquisitionMode = SingleFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line exposure and readout; white horizontal ruling: period used for exposure and exposure overhead
= frame transmitted
Fig. 31: Use Case 1 - Single Frame Mode with Acquisition Start, Frame Start, and Line Start Triggering Set to Off
Use Case 2 - Acquisition Start, Frame Start, and Line Start Triggering Off
(Free Run), ContinuousFrame Mode
Settings:
AcquisitionMode = ContinuousFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line exposure and readout; white horizontal ruling: period used for exposure and exposure overhead
Fig. 32: Use Case 2 - Continuous Frame Mode with Acquisition Start, Frame Start and Line Start Triggering Set to Off
Use Case 3 - Acquisition Start and Line Start Triggering Off (Free Run),
Frame Start Triggering On
Settings:
AcquisitionMode = ContinuousFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line exposure and readout; white horizontal ruling: period used for exposure and exposure overhead
= frame transmitted
Fig. 33: Use Case 3 - Continuous Frame Mode with Acquisition Start and Line Start Triggering Set to Off and Frame Start Triggering Set to On
Use Case 4 - Acquisition Start Triggering Off (Free Run), Frame Start and
Line Start Triggering On
Settings:
AcquisitionMode = ContinuousFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line start trigger signal is ignored because the camera is waiting for a frame start trigger signal
= frame transmitted
Fig. 34: Use Case 4 - Continuous Frame Mode with Acquisition Start Triggering Set to Off and Frame Start and Line Start Triggering Set to On
Use Case 5 - Acquisition Start Triggering Off (Free Run), Frame Start and
Line Start Triggering On, Frame Start Trigger LevelHigh,
TriggerPartialClosingFrame False
Settings:
AcquisitionMode = ContinuousFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line exposure and readout; white horizontal ruling: period used for exposure and exposure overhead
= line start trigger signal is ignored because the camera is waiting for a frame start trigger signal
= frame transmitted
Fig. 35: Use Case 5 - Continuous Frame Mode with Acquisition Start Triggering Set to Off, Frame Start and Line Start Triggering Set to On, and TriggerPartialClosingFrame Set to False
Use Case 6 - Acquisition Start Triggering Off (Free Run), Frame Start and
Line Start Triggering On, Frame Start Trigger LevelHigh,
TriggerPartialClosingFrame True
Settings:
AcquisitionMode = ContinuousFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line exposure and readout; white horizontal ruling: period used for exposure and exposure overhead
= line start trigger signal is ignored because the camera is waiting for a frame start trigger signal
Fig. 36: Use Case 6 - Continuous Frame Mode with Acquisition Start Triggering Set to Off, Frame Start and Line Start Triggering Set to On, and TriggerPartialClosingFrame Set to True
Use Case 7 - Acquisition Start and Frame Start Triggering Off (Free Run),
Line Start Triggering On
Settings:
AcquisitionMode = ContinuousFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line exposure and readout; white horizontal ruling: period used for exposure and exposure overhead
Fig. 37: Use Case 7 - Continuous Frame Mode with Acquisition Start and Frame Start Triggering Set to Off and Line Start Triggering Set to On
Use Case 8 - Frame Start and Line Start Triggering Off (Free Run), Acquisition Start Triggering On
Settings:
AcquisitionMode = ContinuousFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line exposure and readout; white horizontal ruling: period used for exposure and exposure overhead
= frame transmitted
Fig. 38: Use Case 8 - Continuous Frame Mode with Acquisition Start Triggering Set to On and Frame Start and Line Start Triggering Set to Off
Use Case 9 - Frame Start Triggering Off (Free Run), Acquisition Start and Line Start Triggering On
Settings:
AcquisitionMode = ContinuousFrame
= trigger signal internally generated by the camera; trigger wait signal is not available
= line exposure and readout; white horizontal ruling: period used for exposure and exposure overhead
= line start trigger signal is ignored because the camera is waiting for an acquisition start trigger signal
Fig. 39: Use Case 9 - Continuous Frame Mode with Acquisition Start and Line Start Triggering Set to On and Frame Start Triggering Set to Off
Fig. 40: Non-overlapped Exposure and Readout
In the overlapped mode of operation, the exposure of a new line begins while the camera is still
reading out the sensor data for the previously acquired line. This situation is illustrated in Fig. 41
with the camera set for the trigger width exposure mode.
Fig. 41: Overlapped Exposure and Readout with the Trigger Width Exposure Mode
Determining whether your camera is operating with overlapped or non-overlapped exposure and
readout is not a matter of issuing a command or switching a setting on or off. Rather the way that
you operate the camera will determine whether the exposures and readouts are overlapped or not.
If we define the “line period” as the time from the start of exposure for one line acquisition to the
start of exposure for the next line acquisition, then:
Exposure will not overlap when: Line Period > Exposure Time + Readout Time
Exposure will overlap when: Line Period ≤ Exposure Time + Readout Time
You can determine the readout time by reading the value of the ReadoutTimeAbs parameter. The
parameter indicates what the readout time will be in microseconds given the camera’s current
settings. You can read the ReadoutTimeAbs parameter value from within your application software
by using the Basler pylon API. The following code snippet illustrates using the API to get the
parameter value:
double ReadoutTime = camera.ReadoutTimeAbs.GetValue();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily get the parameter value.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
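As an illustration of the line period rule above, the following sketch decides whether the current settings result in overlapped operation (a minimal example; the line period value is assumed to be known from your trigger signal timing):
// Decide whether exposure and readout will overlap, given the time
// from one exposure start to the next (the line period)
double linePeriod_us = 25.0; // example value from your trigger timing
double exposure_us = camera.ExposureTimeAbs.GetValue();
double readout_us = camera.ReadoutTimeAbs.GetValue();
bool overlapped = (linePeriod_us <= exposure_us + readout_us);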
To ensure smooth line acquisition and avoid overtriggering, apply a line acquisition-related trigger only when the camera is waiting for it. If a trigger is applied while the camera is not ready for it, the trigger will be ignored and counted as an overtrigger.
The risk of overtriggering exists particularly for overlapped operation where the sequence of line
start triggers must be carefully coordinated both with the camera’s exposure time and the sensor
readout time.
The following examples use a non-inverted, rising edge external line start trigger signal (ExLSTrig).
Certain attempts at triggering overlapped line acquisition are illegal and do not result in line acquisitions: When a line start trigger signal attempts an illegal line acquisition, the trigger signal will be ignored and, accordingly, no line acquisition will be performed. In addition, the trigger signal will be reported as an overtrigger (see also Section 10.15 on page 209). Illegal triggering and impossible overlaps are shown in Fig. 42 on page 114 and Fig. 43 on page 115.
Illegal triggering when overlapping line acquisitions:
The line start trigger goes high to start the exposure for line acquisition N+1 before the
exposure or the exposure overhead for line acquisition N has ended (see Fig. 42 on page 114).
This would result in the illegal overlap of exposures or of exposure and exposure overhead.
The exposure overhead (see Fig. 42 on page 114) is part of every exposure
process. For simplicity, it is omitted from the other figures illustrating exposure
and readout (see, for example, Fig. 43 on page 115).
The duration of the exposure overhead is expressed by constant C1 (see also
Section 8.5 on page 131).
The line start trigger goes low to end the exposure for line acquisition N+1 before readout for
acquisition N has ended (premature exposure end; see Fig. 43 on page 115).
This would result in the illegal overlap of two readouts (in trigger width exposure mode only).
Fig. 42: Exposure N+1 Illegally Starts Before the Exposure Overhead for the Preceding Line Acquisition N Has Ended; the Shown Overlap of Readouts is Also Illegal; Timed Exposure Mode Used as an Example
Fig. 43: Exposure N+1 Illegally Ends Before Readout of the Preceding Line Acquisition N Has Ended; Applies to Trigger Width Exposure Mode Only
When the line start trigger has illegally gone low to end the exposure for line acquisition N+1 before
readout for acquisition N has ended (in trigger width exposure mode; see Fig. 43), the camera will
behave as shown in Fig. 44: The camera will extend the exposure and end it when the next valid
trigger for ending exposure occurs.
Fig. 44: Extension of Exposure N+1 After the Illegal Attempt of Ending It Too Early; Applies to Trigger Width Exposure Mode Only
As mentioned above, you can avoid overtriggering by applying an acquisition-related trigger only
when the camera is waiting for it.
You can achieve this goal by
making use of acquisition monitoring tools, i.e. monitoring the camera’s acquisition status and triggering only when the camera indicates that it is in the "waiting status" for the trigger, or by
strictly obeying timing limits for exposure and triggering.
To learn whether the camera is waiting for a trigger, you can use the acquisition monitoring tools described in Section 8.3 on page 119.
By applying an ExLSTrig signal as soon as the camera indicates that it is waiting for an ExLSTrig signal, you can operate the camera in "overlapped mode" without overtriggering.
As an example, and in the context of overlapped exposure, the use of the line trigger wait signal is
described in Section 8.3.3.3 on page 126 for proper triggering with the line start trigger. Both timed
and trigger width exposure mode are considered.
When strictly obeying the following timing limits you can avoid overtriggering in "overlapped mode"
and "non-overlapped mode" without having to monitor the camera’s acquisition status (see also
Fig. 45 and Fig. 46 below). You must ensure that the following four conditions are fulfilled at the
same time:
Condition one: The exposure time E is ≥ 2 µs.
This is the minimum allowed exposure time also given in Section 8.2.5 on page 91.
Condition two: Period F is ≥ C1 (for C1 values, see Section 8.5 on page 131); period F follows immediately after the exposure.
The constant C1 expresses the duration of exposure overhead. Exposure is not possible during
this period. Its default value equals 5.4 µs. However, when the parameter limit is removed from
the ExposureOverhead parameter (see Section 8.5.1 on page 134), C1 equals 3.4 µs.
Accordingly, F must be ≥ 5.4 µs in the general case, and ≥ 3.4 µs when the parameter limit is removed from the ExposureOverhead parameter.
Condition three: ExLSTrig Signal Period ≥ Minimum allowed line period.
Condition four: Make sure the ExposureOverlapTimeMaxAbs parameter value is set to the
appropriate value:
appropriate ExposureOverlapTimeMaxAbs parameter value = minimum allowed line period - duration of exposure overhead (i.e. C1, i.e. minimum F value).
The maximum allowed exposure time E that is compatible with the maximum line rate is equal
to the appropriate ExposureOverlapTimeMaxAbs parameter value.
It follows that
if you increase the ExposureOverlapTimeMaxAbs parameter value above its appropriate
value while maintaining the trigger signal period, the minimum value for the duration of the
exposure overhead would be ignored and trigger signals can be considered overtriggers
when in fact they are not.
if you decrease the ExposureOverlapTimeMaxAbs parameter value below its appropriate value, you would have to increase the duration of the exposure overhead and thereby decrease the line rate.
When you want to operate the camera at the maximum allowed line rate and
have set the ExposureOverlapTimeMaxAbs parameter value to the
appropriate value (see above) you can vary the exposure time E within a range
of values between 2 µs and the applicable maximum allowed exposure time E
(see condition four).
Fig. 45: Relation of the ExLSTrig Signal Period and Periods E and F for Regular Line Acquisition in Timed Exposure Mode (E: exposure, determined by the exposure time parameter or register setting; F: exposure overhead, or exposure overhead and at least partial readout)
Fig. 46: Relation of the ExLSTrig Signal Period and Periods E and F for Regular Line Acquisition in Trigger Width Exposure Mode (E and F as in Fig. 45)
From the above conditions, one can readily calculate the allowed values for E and F for regularly
operating the camera at the maximum allowed line rate. This operation will also involve the
maximum possible overlap between consecutive line acquisitions.
Example
Assume that you are using an raL2048-48gm camera at full resolution (2048 pixels), assume that
you want to use the minimum allowed line acquisition period and the default value for C1.
Also assume that the other relevant settings are in accord with operation at the minimum allowed
line acquisition period.
At full resolution, the camera is capable of a minimum line acquisition period of 19.7 µs
(corresponding to a maximum allowed line acquisition rate of approximately 51 kHz; see
Section 1.2 on page 2).
Accordingly, the corresponding minimum allowed ExLSTrig signal period where overtriggering is
avoided is 19.7 µs.
From condition number four follows the maximum possible exposure time that is compatible with
(default) maximum overlap:
E = min. ExLSTrig Signal Period - C1 = 19.7 µs - 5.4 µs = 14.3 µs
Therefore, when operating the camera at a line acquisition period of 19.7 µs (involving maximum
overlap) and using the default value for C1, the maximum possible exposure time is 14.3 µs. When
also considering the above condition number one, it follows that the exposure time can range
between 2 µs and 14.3 µs to remain in accord with camera operation at a line acquisition period of
19.7 µs (and a line rate of approximately 51 kHz).
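Expressed in code, the calculation from this example looks as follows (a minimal sketch with the example values hard-coded):
// Maximum possible exposure time at maximum overlap:
// E = minimum ExLSTrig signal period - C1 = 19.7 µs - 5.4 µs = 14.3 µs
double minLinePeriod_us = 19.7; // minimum allowed line acquisition period
double c1_us = 5.4;             // default exposure overhead
double maxExposure_us = minLinePeriod_us - c1_us;
// The appropriate ExposureOverlapTimeMaxAbs value equals this result
camera.ExposureOverlapTimeMaxAbs.SetValue(maxExposure_us);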
If you increase the exposure time beyond its upper limit, the related extent of overlap and the acquisition line rate will decrease. When extending the exposure time even further, consecutive line acquisitions will eventually not overlap at all.
Fig. 47: Exposure Active Signal (not to scale)
By default, the Exposure Active signal is selected as the source signal for output line 1 on the
camera. However, the selection of the source signal for a physical output line can be changed.
For more information about selecting the source signal for an output line on the camera, see
Section 7.6.2.5 on page 68.
For more information about the electrical characteristics of the camera’s output lines, see
Section 7.6.2.1 on page 64.
The acquisition status indicator is designed for use when you are using host control of image
acquisition, i.e., when you are using software acquisition start, frame start, and line start trigger
signals.
To determine the acquisition status of the camera using the Basler pylon API:
1. Use the AcquisitionStatusSelector parameter to select the AcquisitionTriggerWait status or the
FrameTriggerWait status or the LineTriggerWait status.
2. Read the value of the AcquisitionStatus parameter.
If the value is set to False, the camera is not waiting for the trigger signal.
If the value is set to True, the camera is waiting for the trigger signal.
You can check the acquisition status from within your application software by using the Basler pylon
API. The following code snippet illustrates using the API to check the acquisition status:
// Check the acquisition start trigger acquisition status
// Set the acquisition status selector
camera.AcquisitionStatusSelector.SetValue(AcquisitionStatusSelector_AcquisitionTriggerWait);
// Read the acquisition status
bool IsWaitingForAcquisitionTrigger = camera.AcquisitionStatus.GetValue();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
As you are acquiring frames, the camera automatically monitors the acquisition start trigger status
and supplies a signal that indicates the current status.
The Acquisition Trigger Wait signal will go high whenever the camera enters a "waiting for
acquisition start trigger" status. The signal will go low when an external acquisition start trigger
(ExASTrig) signal is applied to the camera and the camera exits the "waiting for acquisition start
trigger status". The signal will go high again when the camera again enters a "waiting for acquisition
trigger" status and it is safe to apply the next acquisition start trigger signal.
If you base your use of the ExASTrig signal on the state of the acquisition trigger wait signal, you
can avoid "acquisition start overtriggering", i.e., applying an acquisition start trigger signal to the
camera when it is not in a "waiting for acquisition start trigger" acquisition status. If you do apply an
acquisition start trigger signal to the camera when it is not ready to receive the signal, it will be
ignored and an acquisition start overtrigger event will be reported.
Fig. 48 illustrates the Acquisition Trigger Wait signal with the AcquisitionFrameCount parameter set
to 2, with the lines per frame (Height) set to 3, and with exposure and readout overlapped. The
figure assumes rising edge triggering and that the trigger mode for the frame start trigger and for
the line start trigger is set to Off, so the camera is internally generating frame and line start trigger
signals.
The acquisition trigger wait signal can be selected as the source signal for one of the output lines
on the camera.
For more information about selecting the source signal for an output line on the camera, see
Section 7.6.2.5 on page 68.
For more information about the electrical characteristics of the camera’s output lines, see
Section 7.6.2.1 on page 64.
For more information about event reporting, see Section 10.5 on page 175.
Fig. 48: Acquisition Trigger Wait Signal
The acquisition trigger wait signal will only be available when hardware
acquisition start triggering is enabled.
The acquisition trigger wait signal can be selected to act as the source signal for e.g. camera output
line 1. Selecting a source signal for the output line is a two-step process:
Use the LineSelector parameter to select output line 1.
Set the value of the LineSource parameter to the acquisition trigger wait signal.
You can set the LineSelector and the LineSource parameter values from within your application
software by using the Basler pylon API. The following code snippet illustrates using the API to set
the parameter values:
// Select output line 1
camera.LineSelector.SetValue(LineSelector_Out1);
// Set the acquisition trigger wait signal as the source signal for the line
camera.LineSource.SetValue(LineSource_AcquisitionTriggerWait);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
For more information about selecting the source signal for an output line on the camera, see
Section 7.6.2.5 on page 68.
For more information about the electrical characteristics of the camera’s output line, see
Section 7.6.2.1 on page 64.
As you are acquiring frames, the camera automatically monitors the frame start trigger status and
supplies a signal that indicates the current status.
The Frame Trigger Wait signal will go high whenever the camera enters a "waiting for frame start
trigger" status. The signal will go low when an external frame start trigger (ExFSTrig) signal is
applied to the camera and the camera exits the "waiting for frame start trigger status". The signal
will go high again when the camera again enters a "waiting for frame trigger" status and it is safe to
apply the next frame start trigger signal.
If you base your use of the ExFSTrig signal on the state of the frame trigger wait signal, you can
avoid "frame start overtriggering", i.e., applying a frame start trigger signal to the camera when it is
not in a "waiting for frame start trigger" acquisition status. If you do apply a frame start trigger signal
to the camera when it is not ready to receive the signal, it will be ignored and a frame start
overtrigger event will be reported.
Fig. 49 illustrates the Frame Trigger Wait signal with the lines per frame (Height) set to 3, and with
exposure and readout overlapped. The figure assumes rising edge triggering and that the trigger
mode for the acquisition start trigger and for the line start trigger is set to Off, so the camera is
internally generating acquisition and line start trigger signals.
Fig. 49: Frame Trigger Wait Signal
The frame trigger wait signal will only be available when hardware frame start
triggering is enabled.
By default, the frame trigger wait signal is selected as the source signal for output line 2 on the
camera. However, the selection of the source signal for a physical output line can be changed.
Selecting the Frame Trigger Wait Signal as the Source Signal for
the Output Line
The frame trigger wait signal can be selected to act as the source signal for e.g. camera output
line 1. Selecting a source signal for the output line is a two-step process:
Use the LineSelector parameter to select output line 1.
Set the value of the LineSource parameter to the frame trigger wait signal.
You can set the LineSelector and the LineSource parameter values from within your application
software by using the Basler pylon API. The following code snippet illustrates using the API to set
the parameter values:
// Select output line 1
camera.LineSelector.SetValue(LineSelector_Out1);
// Set the frame trigger wait signal as the source signal for the line
camera.LineSource.SetValue(LineSource_FrameTriggerWait);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
For more information about selecting the source signal for an output line on the camera, see
Section 7.6.2.5 on page 68.
For more information about the electrical characteristics of the camera’s output lines, see
Section 7.6.2.1 on page 64.
For more information about event reporting, see Section 10.5 on page 175.
For more information about hardware triggering, see Section 8.2 on page 77.
As you are acquiring lines, the camera automatically monitors the line start trigger status and
supplies a signal that indicates the current status.
The Line Trigger Wait signal will go high whenever the camera enters a "waiting for line start trigger"
status. The signal will go low when an external line start trigger (ExLSTrig) signal is applied to the
camera and the camera exits the "waiting for line start trigger status". The signal will go high again
when the camera again enters a "waiting for line trigger" status and it is safe to apply the next line
start trigger signal.
If you base your use of the ExLSTrig signal on the state of the line trigger wait signal, you can avoid
"line start overtriggering", i.e., applying a line start trigger signal to the camera when it is not in a
"waiting for line start trigger" acquisition status. If you do apply a line start trigger signal to the
camera when it is not ready to receive the signal, it will be ignored and a line start overtrigger event
will be reported.
The line trigger wait signal can be selected as the source signal for one of the output lines on the
camera.
For more information about selecting the source signal for an output line on the camera, see
Section 7.6.2.5 on page 68.
For more information about the electrical characteristics of the camera’s output lines, see
Section 7.6.2.1 on page 64.
Fig. 50 and Fig. 51 illustrate the Line Trigger Wait signal with exposure and readout overlapped.
The figures assume rising edge triggering and that the trigger mode for the acquisition start trigger
and for the frame start trigger is set to Off, so the camera is internally generating acquisition and
frame start trigger signals.
Using the Line Trigger Wait Signal with the Timed Exposure Mode
When the camera is set for the timed exposure mode, the rise of the Line Trigger Wait signal is
based on the current ExposureTimeAbs parameter setting and on when readout of the current line
will end. This functionality is illustrated in Fig. 50.
If you are operating the camera in the timed exposure mode, you can avoid overtriggering by always
making sure that the Line Trigger Wait signal is high before you trigger the start of line capture.
Fig. 50: Line Trigger Wait Signal with the Timed Exposure Mode (ExLSTrig signal plotted over time; marked periods indicate when the camera is in a "waiting for line start trigger" status)
Using the Line Trigger Wait Signal with the Trigger Width Exposure Mode
When the camera is set for the trigger width exposure mode, the rise of the Line Trigger Wait signal
is based on the ExposureOverlapTimeMaxAbs parameter setting and on when readout of the
current line will end. This functionality is illustrated in Fig. 51.
Fig. 51: Line Trigger Wait Signal with the Trigger Width Exposure Mode (ExLSTrig signal, exposure, and readout of line acquisition N plotted over time; the rise of the Line Trigger Wait signal is based on the end of line readout and on the current ExposureOverlapTimeMaxAbs parameter setting)
If you are operating the camera in the trigger width exposure mode and monitor the Line Trigger
Wait signal, you can avoid overtriggering the camera by always doing the following:
1. Set the camera’s ExposureOverlapTimeMaxAbs parameter so that it represents the shortest exposure time you intend to use. If the shortest intended exposure time is larger than the maximum settable ExposureOverlapTimeMaxAbs parameter value, set the ExposureOverlapTimeMaxAbs parameter value to its maximum.
2. Make sure that your exposure time is always equal to or greater than the setting for the ExposureOverlapTimeMaxAbs parameter.
3. Use the ExLSTrig signal to start exposure only when the Line Trigger Wait signal is high.
You can use the Basler pylon API to set the ExposureOverlapTimeMaxAbs parameter value from
within your application software. The following code snippet illustrates using the API to set the
parameter value:
// Set the Exposure Overlap Time Max to 4 µs
camera.ExposureOverlapTimeMaxAbs.SetValue(4);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Selecting the Line Trigger Wait Signal as the Source Signal for the
Output Line
The line trigger wait signal can be selected to act as the source signal for a camera output line, e.g., output line 1. Selecting a source signal for the output line is a two-step process:
1. Use the LineSelector parameter to select output line 1.
2. Set the value of the LineSource parameter to the line trigger wait signal.
You can set the LineSelector and the LineSource parameter values from within your application
software by using the Basler pylon API. The following code snippet illustrates using the API to set
the parameter values:
camera.LineSelector.SetValue(LineSelector_Out1);
camera.LineSource.SetValue(LineSource_LineTriggerWait);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
For more information about selecting the source signal for an output line on the camera, see
Section 7.6.2.5 on page 68.
For more information about the electrical characteristics of the camera’s output line, see
Section 7.6.2 on page 64.
This is an approximate frame transmission time. Due to the nature of the Ethernet network, the
transmission time could vary.
Due to the nature of the Ethernet network, there can be a delay between the point where a complete
frame is acquired and the point where transmission of the acquired frame begins. This start delay
can vary from frame to frame. The start delay, however, is of very low significance when compared
to the transmission time.
For more information about the Payload Size and Device Current Throughput parameters, see
Section 5.1 on page 39.
When the camera’s acquisition mode is set to single frame, the maximum possible
acquisition frame rate cannot be achieved. This is true because the camera
performs a complete internal setup cycle for each single frame.
To determine the maximum allowed line acquisition rate with your current camera settings, you can
use the ResultingLineRateAbs parameter. The ResultingLineRateAbs parameter indicates the
camera’s current maximum allowed line acquisition rate taking the readout time, exposure time, and
bandwidth settings into account.
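For example, you can read the parameter from within your application software by using the pylon API, in the same style as the other snippets in this manual:
// Determine the maximum allowed line rate with the current settings
double resultingLineRate = camera.ResultingLineRateAbs.GetValue();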
You may find that you would like to acquire lines at a rate higher than the maximum allowed with
the camera’s current settings. In this case, you must first determine what factor is most restricting
the maximum line rate. The descriptions of the three factors that appear below will let you determine
which factor is restricting the rate.
Factor 1:
Factor 1 is the sensor readout time. The readout time for a particular sensor is a fixed value and
thus the maximum line acquisition rate as determined by the readout time is also fixed. The table
below shows the maximum line rate (in lines per second) based on sensor readout time for each
camera model.
Factor 2:
Factor 2 is the exposure time. You can use the formula below to calculate the maximum line rate
based on the exposure time for each acquired line:
Max Lines/s = 1 / (Exposure Time in µs + C1)
Where the constant C1 depends on the camera model and on whether the parameter limit is
removed from the ExposureOverhead parameter, as shown in the table below:
For more information about setting the exposure time, see Section 8.2.5.2 on page 92. For more
information about removing parameter limits and the implications, see Section 10.2 on page 161.
Factor 3:
Factor 3 is the frame transmission time. You can use the formula below to calculate the maximum
line rate based on the frame transmission time:
Once you have determined which factor is most restrictive on the line rate, you can try to make that
factor less restrictive if possible:
If you find that the sensor readout time is the most restrictive factor, you cannot make any
adjustments that will result in a higher maximum line rate.
If you are using long exposure times, it is quite possible to find that your exposure time is the
most restrictive factor on the line rate. In this case, you should lower your exposure time. (You
may need to compensate for a lower exposure time by using a brighter light source or
increasing the opening of your lens aperture.) You can extend the exposure time to some
degree after having removed the parameter limits from the ExposureOverhead parameter (for
more information, see Section 8.5.1).
The frame transmission time will not normally be a restricting factor. But if you are using
multiple cameras and you have set a small packet size or a large inter-packet delay, you may
find that the transmission time is restricting the maximum allowed line rate. In this case, you
could increase the packet size or decrease the inter-packet delay. If you are using several
cameras connected to the host PC via a network switch, you could also use a multiport
network adapter in the PC instead of a switch. This would allow you to increase the Ethernet
bandwidth assigned to the camera and thus decrease the transmission time.
For more information about the settings that determine the bandwidth assigned to the camera, see
Section 5.2 on page 41.
Example
Assume that you are using a raL2048-48gm camera set for an exposure time of 190 µs and a
frame height of 500 lines. Also assume that you are using the default value for C1 and that you
have checked the values of the DeviceCurrentThroughput and PayloadSize parameters and found
them to be 110000000 and 5120000, respectively.
Max Lines/s = 1 / (190 µs + 5.4 µs) ≈ 5117 lines/s
Factor 2, the exposure time, is the most restrictive factor. In this case, the exposure time setting is
limiting the maximum allowed line rate to 5117 lines per second. If you wanted to operate the
camera at a higher line rate, you would need to lower the exposure time.
Because the exposure time is the most restrictive factor, you could also remove the limit from the
ExposureOverhead parameter to increase the line rate. In this case C1 = 3.4 µs would apply, and
according to the formula for Factor 2 a maximum allowed line rate of 5170 lines per second would
result.
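To make the arithmetic of this example explicit, the following sketch reproduces it. Note that the factor 3 computation shown here (the bandwidth-limited frame rate multiplied by the frame height) is an assumption, since the factor 3 formula is not reproduced above:
// Factor 2: line rate limited by the exposure time (default C1 = 5.4 µs)
double factor2 = 1e6 / (190.0 + 5.4); // ~5117 lines/s
// Factor 3 (assumed formula): bandwidth-limited frame rate times frame height
double factor3 = 110000000.0 / 5120000.0 * 500.0; // ~10742 lines/s
// Factor 2 is smaller, so the exposure time is the most restrictive factor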
For more information about removing the limit from the ExposureOverhead parameter and the
resulting side effects, see Section 8.5.1.
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Fig. 52: Shaft Encoder Connected to the Camera (example: the shaft encoder’s Phase A signal is applied to camera input line 1, and the shaft encoder module output serves as the line start trigger)
To use the shaft encoder module, you must select a source signal for the Phase A input and for the
Phase B input on the module. The allowed source signals for the Phase A and Phase B module
inputs are camera input line 1, camera input line 2, and camera input line 3. So, for example, you
could apply the Phase A signal from a shaft encoder to physical input line 1 of the camera and select
input line 1 as the source signal for the Phase A input to the module. And you could apply the Phase
B signal from a shaft encoder to physical input line 2 of the camera and select input line 2 as the
source signal for the Phase B input to the module. More information about selecting a source signal
for the module inputs appears in a code snippet later in this section.
Fig. 53 shows how the software module will interpret the input from the shaft encoder when the
encoder is connected as illustrated in Fig. 52. The software module will sense forward ticks from
the encoder when the input is as shown in the left part of Fig. 53. The software module will sense
reverse ticks from the encoder when the input is as shown in the right part of Fig. 53.
Fig. 53: Phase A and Phase B Signals (left: forward ticks, where Phase A leads Phase B, i.e., Phase B is low at the rising edge of Phase A; right: reverse ticks, where Phase B leads Phase A, i.e., Phase A is low at the rising edge of Phase B)
If this interpretation of direction is not as you desire, you could change it by moving the Phase A
output from the shaft encoder to input line 2 and the Phase B output to input line 1.
There are several parameters and commands associated with the shaft encoder module. The list
below describes the parameters and commands and explains how they influence the operation of
the module.
The ShaftEncoderModuleCounterMode parameter controls the tick counter on the shaft
encoder module. The tick counter counts the number of ticks that have been received by the
module from the shaft encoder. This parameter has two possible values: FollowDirection and
IgnoreDirection.
If the mode is set to FollowDirection, the counter will increment when the module receives
forward ticks from the shaft encoder and will decrement when it receives reverse ticks.
If the mode is set to IgnoreDirection, the counter will increment when it receives either forward
ticks or reverse ticks.
The ShaftEncoderModuleCounter parameter indicates the current value of the tick counter.
This is a read only parameter.
The ShaftEncoderModuleCounterMax parameter sets the maximum value for the tick
counter. The minimum value for this parameter is 0 and the maximum is 32767.
If the counter is incrementing and it reaches the max, it will roll over to 0. That is:
Max + 1 = 0
If the counter is decrementing and it reaches 0, it will roll back to the max. That is:
0 - 1 = Max
The ShaftEncoderModuleCounterReset command resets the tick counter count to 0.
The ShaftEncoderModuleMode parameter controls the behavior of the "reverse counter" that
is built into the module. This parameter has two possible values: AnyDirection and
ForwardOnly. For more information about this parameter, see the detailed description of the
reverse counter that appears later in this section.
The ShaftEncoderModuleReverseCounterMax parameter sets a maximum value for the
module’s "reverse counter". The minimum value for this parameter is 0 and the maximum is
32767. For more information about this parameter, see the detailed description of the reverse
counter that appears later in this section.
The ShaftEncoderModuleReverseCounterReset command resets the reverse counter count
to 0 and informs the software module that the current direction of conveyor movement is
forward. For more information about this parameter, see the detailed description of the reverse
counter that appears later in this section.
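As mentioned above, a code snippet can illustrate selecting the source signals for the module inputs. The following sketch shows one way this might look; the exact enumeration names (e.g., ShaftEncoderModuleLineSelector_PhaseA, ShaftEncoderModuleLineSource_Line1) are assumptions based on the parameter naming used in this section:
// Select input line 1 as the source signal for the module’s Phase A input
// and input line 2 as the source signal for the Phase B input (assumed names)
camera.ShaftEncoderModuleLineSelector.SetValue(ShaftEncoderModuleLineSelector_PhaseA);
camera.ShaftEncoderModuleLineSource.SetValue(ShaftEncoderModuleLineSource_Line1);
camera.ShaftEncoderModuleLineSelector.SetValue(ShaftEncoderModuleLineSelector_PhaseB);
camera.ShaftEncoderModuleLineSource.SetValue(ShaftEncoderModuleLineSource_Line2);
// Set the module mode and the tick counter mode
camera.ShaftEncoderModuleMode.SetValue(ShaftEncoderModuleMode_ForwardOnly);
camera.ShaftEncoderModuleCounterMode.SetValue(ShaftEncoderModuleCounterMode_FollowDirection);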
// Enable the camera’s Line Start Trigger function and select the output from the
// shaft encoder module as the source signal for the Line Start Trigger
camera.TriggerSelector.SetValue(TriggerSelector_LineStart);
camera.TriggerMode.SetValue(TriggerMode_On);
camera.TriggerSource.SetValue(TriggerSource_ShaftEncoderModuleOut);
camera.TriggerActivation.SetValue(TriggerActivation_RisingEdge);
// Set the shaft encoder module counter max and the shaft encoder module
// reverse counter max
camera.ShaftEncoderModuleCounterMax.SetValue(32767);
camera.ShaftEncoderModuleReverseCounterMax.SetValue(15);
// Reset the shaft encoder module counter and the shaft encoder module
// reverse counter
camera.ShaftEncoderModuleCounterReset.Execute();
camera.ShaftEncoderModuleReverseCounterReset.Execute();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
For more information about the line start trigger, see Section 8.2.4 on page 86.
The main purpose of the reverse counter is to compensate for mechanical "jitter" in the conveyor
used to move objects past the camera. This jitter usually manifests itself as a momentary change
in the direction of the conveyor.
The rules that govern the operation of the reverse counter are as follows:
If the conveyor is running in the reverse direction and the current reverse counter count is less
than the maximum (i.e., less than the current setting of the ShaftEncoderModuleReverseCounterMax parameter),
the reverse counter will increment once for each shaft encoder reverse tick received.
If the conveyor is running in the forward direction and the current reverse counter count is
greater than zero, the reverse counter will decrement once for each shaft encoder forward tick
received.
When the Shaft Encoder Mode is set to Forward Only:
If the reverse counter is not incrementing or decrementing, the software module will output
a trigger signal for each forward tick received from the shaft encoder.
If the reverse counter is incrementing or decrementing, trigger signal output will be
suppressed.
When the Shaft Encoder Mode is set to Any Direction:
If the reverse counter is not incrementing or decrementing, the software module will output
a trigger signal for each forward tick or reverse tick received from the shaft encoder.
If the reverse counter is incrementing or decrementing, trigger signal output will be
suppressed.
To understand how these rules affect the operation of the encoder software module, consider the
following cases:
Case 1
This is the simplest case, i.e., the Shaft Encoder Reverse Counter Max is set to zero. In this
situation, the reverse counter never increments or decrements and it will have no effect on the
operation of the encoder software module.
When the Shaft Encoder Reverse Counter Max is set to zero:
If the Shaft Encoder Module Mode is set to Forward Only, the software module will output a
trigger signal whenever it receives a forward tick from the shaft encoder, but not when it
receives a reverse tick.
If the Shaft Encoder Module Mode is set to Any Direction, the software module will output a
trigger signal whenever it receives either a forward tick or a reverse tick from the shaft encoder.
Case 2
In this case, assume that:
A shaft encoder is attached to a conveyor belt that normally moves continuously in the forward
direction past a camera.
The conveyor occasionally "jitters" and when it jitters, it moves in reverse for 4 or 5 ticks.
For this case, the ShaftEncoderModuleMode parameter should be set to ForwardOnly. The
ShaftEncoderModuleReverseCounterMax should be set to a value that is higher than the jitter we
expect to see. We decide to set the value to 10.
Given this situation and these settings, the series of diagrams below explains how the encoder
software module will act:
1. The conveyor is moving forward and the encoder is generating forward ticks. Whenever the module receives a forward tick, it outputs a trigger signal. The reverse counter is at 0.
By suppressing trigger signals when the conveyor was moving in reverse and then suppressing an
equal number of trigger signals when forward motion is resumed, we ensure that the conveyor is in
its "pre-jitter" position when the module begins generating trigger signals again.
Note in step two that if the conveyor runs in reverse for a long period and the reverse counter
reaches the max setting, the counter simply stops incrementing. If the conveyor continues in
reverse, no output triggers will be generated because the Shaft Encoder Mode is set to Forward
only.
Case 3
In this case, assume that:
We are working with a small conveyor that moves back and forth in front of a camera.
A shaft encoder is attached to the conveyor.
The conveyor moves in the forward direction past the camera through its complete range of
motion, stops, and then begins moving in reverse.
The conveyor moves in the reverse direction past the camera through its complete range of
motion, stops, and then begins moving forward.
This back and forth motion repeats.
The conveyor occasionally "jitters". When it jitters, it moves 4 or 5 ticks in a direction of travel
opposite to the current normal direction.
For this case, the ShaftEncoderModuleMode parameter should be set to Any Direction. The
ShaftEncoderModuleReverseCounterMax should be set to a value that is higher than the jitter we
expect to see. We decide to set the value to 10.
Given this situation and these settings, this series of diagrams explains how the encoder software
module will act during conveyor travel:
4. The conveyor reaches the end of its forward travel and it stops.
6. The reverse counter reaches the max (10 in this case) and stops incrementing. Suppression of trigger signals is ended. Because the shaft encoder mode is set to any direction, the module begins generating one trigger signal for each reverse tick received. The reverse counter remains at 10.
9. The conveyor reaches the end of its reverse travel and it stops.
There are two main things to notice about this example. First, because the encoder mode is set to
any direction, ticks from the shaft encoder will cause the module to output trigger signals regardless
of the conveyor direction, as long as the reverse counter is not incrementing or decrementing.
Second, the reverse counter will compensate for conveyor jitter regardless of the conveyor
direction.
It is important to reset the reverse counter before the first traverse in the forward
direction. A reset sets the counter to 0 and synchronizes the counter software with
the conveyor direction. (The software assumes that the conveyor will move in the
forward direction after a counter reset).
If, for example, a post-divider of 2 is selected, only every other signal received from the multiplier
module is passed out from the divider module and, accordingly, the frequency is halved. If a
post-divider of 1 is selected, every signal received from the multiplier module is passed out
unchanged from the divider module.
You can use the frequency converter to multiply the original signal frequency by a
fractional value. We recommend multiplying the frequency by the numerator
value using the multiplier module and dividing the resulting frequency by the
denominator value using the post-divider module.
You can configure the frequency converter module from within your application by using a dynamic
API. The following code snippet illustrates setting parameter values:
INodeMap &Control = *camera.GetNodeMap();
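// The lines below are a sketch of a possible continuation; the node names
// (FrequencyConverterMultiplier, FrequencyConverterPostDivider) are
// assumptions based on the multiplier and post-divider description above.
GenApi::CIntegerPtr multiplier = Control.GetNode("FrequencyConverterMultiplier");
GenApi::CIntegerPtr postDivider = Control.GetNode("FrequencyConverterPostDivider");
multiplier->SetValue(2); // multiply the input frequency by 2
postDivider->SetValue(1); // pass the multiplied frequency on unchanged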
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
For more information about the shaft encoder module, see Section 8.6 on page 136.
Mono 8
Mono 12
Mono 12 Packed
YUV 4:2:2 Packed
YUV 4:2:2 (YUYV) Packed
Table 11: Available Pixel Formats
Details of the monochrome camera formats are described in Section 9.2 on page 148.
You can set the PixelFormat parameter value from within your application software by using the
pylon API. The following code snippet illustrates using the API to set the parameter value:
camera.PixelFormat.SetValue(PixelFormat_Mono8);
camera.PixelFormat.SetValue(PixelFormat_Mono12);
camera.PixelFormat.SetValue(PixelFormat_Mono12Packed);
camera.PixelFormat.SetValue(PixelFormat_YUV422Packed);
camera.PixelFormat.SetValue(PixelFormat_YUV422_YUYV_Packed);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
With the camera set for Mono 8, the pixel data output is 8 bit data of the “unsigned char” type. The
available range of data values and the corresponding indicated signal levels are as shown in the
table below.
When the camera is set for Mono 12, the pixel data output is 16 bit data of the “unsigned short (little
endian)” type. The available range of data values and the corresponding indicated signal levels are
as shown in the table below. For 16 bit data, you might expect a value range from 0x0000 to
0xFFFF. However, with the camera set for Mono 12 only 12 bits of the 16 bits transmitted are
effective. Therefore, the highest data value you will see is 0x0FFF indicating a signal level of 4095.
When a monochrome camera is set for Mono 12 Packed, the pixel data output is 12 bit data of the
“unsigned” type. The available range of data values and the corresponding indicated signal levels
are as shown in the table below.
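If your application unpacks Mono 12 Packed buffers itself, the following sketch shows the principle; it assumes the standard GigE Vision Mono 12 Packed layout, in which two 12 bit pixels are packed into three bytes (the middle byte holds the low nibbles of both pixels):
#include <cstddef>
#include <cstdint>
#include <vector>

// Unpack Mono 12 Packed data into 16 bit values (numPixels assumed even)
std::vector<uint16_t> UnpackMono12Packed(const uint8_t* buffer, size_t numPixels)
{
    std::vector<uint16_t> pixels(numPixels);
    for (size_t i = 0; i + 1 < numPixels; i += 2)
    {
        const uint8_t* p = buffer + (i / 2) * 3;
        pixels[i]     = static_cast<uint16_t>((p[0] << 4) | (p[1] & 0x0F));
        pixels[i + 1] = static_cast<uint16_t>((p[2] << 4) | (p[1] >> 4));
    }
    return pixels;
}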
When the camera is set for YUV 4:2:2 Packed output, the pixel data output for the Y component is
8 bit data of the “unsigned char” type. The range of data values for the Y component and the
corresponding indicated signal levels are shown below.
The pixel data output for the U component or the V component is 8 bit data of the “straight binary”
type and will always be zero.
When the camera is set for YUV 4:2:2 (YUYV) output, the pixel data output for the Y component is
8 bit data of the “unsigned char” type. The range of data values for the Y component and the
corresponding indicated signal levels are shown below.
The pixel data output for the U component or the V component is 8 bit data of the “straight binary”
type and will always be zero.
Row 0 Col 0, Row 0 Col 1, Row 0 Col 2 .. .. Row 0 Col m-2, Row 0 Col m-1, Row 0 Col m
Row 1 Col 0, Row 1 Col 1, Row 1 Col 2 .. .. Row 1 Col m-2, Row 1 Col m-1, Row 1 Col m
Row 2 Col 0, Row 2 Col 1, Row 2 Col 2 .. .. Row 2 Col m-2, Row 2 Col m-1, Row 2 Col m
: : : : : :
: : : : : :
Row n-2 Col 0, Row n-2 Col 1, Row n-2 Col 2 .. .. Row n-2 Col m-2, Row n-2 Col m-1, Row n-2 Col m
Row n-1 Col 0, Row n-1 Col 1, Row n-1 Col 2 .. .. Row n-1 Col m-2, Row n-1 Col m-1, Row n-1 Col m
Row n Col 0, Row n Col 1, Row n Col 2 .. .. Row n Col m-2, Row n Col m-1, Row n Col m
Where:
Row 0 Col 0 is the upper left corner of the frame.
The columns are numbered 0 through m from the left side to the right side of the frame.
The rows are numbered 0 through n from the top to the bottom of the frame, corresponding to
n+1 line acquisitions.
10 Standard Features
This chapter provides detailed information about the standard features available on each camera.
It also includes an explanation of their operation and the parameters associated with each feature.
10.1.1 Gain
The camera’s gain is adjustable. As shown in Fig. 54, increasing the gain increases the slope of the response curve for the camera. This results in an increase in the gray values output from the camera for a given amount of output from the imaging sensor. Decreasing the gain decreases the slope of the response curve and results in lower gray values for a given amount of sensor output.
Increasing the gain is useful when at your brightest exposure, the highest gray values achieved are lower than 255 (for pixel data formats with 8 bit depth) or 4095 (for pixel data formats with 12 bit depth). For example, if you found that at your brightest exposure the gray values output by the camera were no higher than 127 (in an 8 bit format), you could increase the gain to 6 dB (an amplification factor of 2) and thus reach gray values of 254.
Fig. 54: Gain in dB (gray values, 8 bit and 12 bit, plotted against sensor output signal in %)
You can use the analog gain for coarsely setting gain and the digital gain for finer adjustment.
The camera’s analog gain is determined by the GainRaw parameter with the gain selector set to
AnalogAll. All pixels in the sensor are affected by this setting.
The allowed parameter values are 1 and 4. A parameter value of 1 corresponds to 0 dB and gain
will not be modified. A parameter value of 4 corresponds to 12 dB and an amplification factor of 4.
You must stop image acquisition by issuing an acquisition stop command before
changing the analog gain settings.
For more information about the acquisition stop command, see Section 8.2.1 on
page 77.
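A minimal sketch of setting the analog gain from within your application software (the enumeration name GainSelector_AnalogAll follows the AnalogAll selector mentioned above):
// Stop image acquisition before changing the analog gain
camera.AcquisitionStop.Execute();
// Select the analog gain and set an amplification factor of 4 (12 dB)
camera.GainSelector.SetValue(GainSelector_AnalogAll);
camera.GainRaw.SetValue(4);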
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Adjusting the camera’s digital gain will digitally shift the group of bits that is output for the pixel
values from each ADC in the camera.
Increasing the digital gain setting will result in an amplified gain and therefore in higher pixel values.
Decreasing the digital gain setting will result in a decreased gain and therefore in lower pixel values.
The digital gain can be set on an integer scale ranging from 256 to 2047. This range of settings is
linearly related to a range of amplification factors where a parameter value of 256 corresponds to
0 dB and gain will not be modified and a parameter value of 2047 corresponds to 18.058 dB and
an amplification factor of approximately 7.996.
You can use the formula below to calculate the dB of gain that will result from a given GainRaw
parameter value:
Gain in dB = 20 × log10(GainRaw / 256)
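A corresponding sketch for setting the digital gain (the enumeration name GainSelector_DigitalAll also appears in the gain auto snippet later in this manual):
// Select the digital gain and set an amplification factor of 2 (~6.02 dB)
camera.GainSelector.SetValue(GainSelector_DigitalAll);
camera.GainRaw.SetValue(512); // 512 / 256 = 2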
Due to the nature of digital gain, certain gray values will be absent in the image ("missing codes")
if digital gain is set to a value larger than 256.
You can use the remove parameter limits feature to remove the lower limit for digital gain parameter
values. When you use the remove parameter limits feature you can also set digital gain parameter
values in the range from 0 to 255. This corresponds to a range of amplification factors from 0 to
approximately 0.99.
If the digital gain parameter value is set below 256 using the remove parameter
limits feature: In this case, regardless of the brightness of illumination, the camera
will not be able to reach the maximum gray values that otherwise could be
reached. For example, if the camera is set to a 12 bit pixel data format, the
maximum gray value of 4095 can not be reached if the digital gain parameter
value is set below 256.
For more information about the remove parameter limits feature, see Section 10.2 on page 161.
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
You can use analog gain and digital gain at the same time. In this case, the amplification factors will
multiply. For example, if you set analog gain to an amplification factor of 4 and use an amplification
factor of 1.2 for digital gain, the total amplification factor will be 4.8. This corresponds to adding
12 dB and 1.6 dB to give a total gain of 13.6 dB.
For optimum image quality, we recommend setting the total amplification as low as possible. If you
need an amplification factor larger than 4, we recommend setting analog gain to 4 and then using
digital gain to reach the desired total amplification.
If you use analog gain and digital gain at the same time and also use the remove
parameter limits feature to set digital gain parameter values below 256, the
amplification factor for total gain will be 0 if the digital gain setting is 0.
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Normally, the OffsetX and Width parameter settings refer to the physical line of
the sensor. But if binning is enabled, these parameters are set in terms of a
"virtual" line. For more information about binning, see Section 10.7 on page 181.
For information about the contribution of the image AOI feature to defining a frame and for code
snippets illustrating how to set the OffsetX and Width parameters using the API, see Section 8.1 on
page 74.
You can set the CenterX parameter value from within your application software by using the Basler
pylon API. The following code snippet illustrates using the API to enable automatic image AOI
centering.
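// Enable automatic centering of the image AOI
// (sketch; assumes CenterX is a Boolean parameter)
camera.CenterX.SetValue(true);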
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
A target value for an image property can only be reached if it is in accord with all
pertinent camera settings and with the general circumstances used for
acquisition. Otherwise, the target value will only be approached.
For example, with a short exposure time, insufficient illumination, and a low
setting for the upper limit of the gain parameter value, the Gain Auto function may
not be able to achieve the current target average gray value setting for the image.
You can use an auto function when binning is enabled. An auto function uses the
binned pixel data and controls the image property of the binned line.
For more information about binning, see Section 10.7 on page 181.
If an auto function is set to the "once" operation mode and if the circumstances
will not allow reaching a target value for an image property, the auto function
will try to reach the target value for a maximum of 30 frames and will then be
set to Off.
"Continuous" mode of operation: When this mode of operation is selected, the parameter
value is adjusted repeatedly while frames are acquired. Depending on the current frame rate,
the automatic adjustments will usually be carried out for every or every other frame.
The repeated automatic adjustment will proceed until the "once" mode of operation is used or
until the auto function is set to Off, in which case the parameter value resulting from the latest
automatic adjustment will operate, unless the parameter is manually adjusted.
Off: When an auto function is set to Off, the parameter value resulting from the latest
automatic adjustment will operate, unless the parameter is manually adjusted.
You can enable auto functions and change their settings while the camera is
capturing frames ("on the fly").
If you have set an auto function to "once" or "continuous" operation mode while
the camera was continuously capturing frames, the auto function will become
effective with a short delay and the first few frames may not be affected by the
auto function.
The size and position of the auto function AOI can be, but need not be, identical to the size and
position of the image AOI.
The overlap between auto function AOI and image AOI determines whether and to what extent the
auto function will control the related image property. Only the pixel data from the areas of overlap
will be used by the auto function to control the image property of the entire frame.
Different degrees of overlap are illustrated in Fig. 57. The hatched areas in the figure indicate areas
of overlap.
If the auto function AOI is completely included in the image AOI (see (a) in Fig. 57), the pixel
data from the auto function AOI will be used to control the image property.
If the image AOI is completely included in the auto function AOI (see (b) in Fig. 57), only the
pixel data from the image AOI will be used to control the image property.
If the image AOI only partially overlaps the auto function AOI (see (c) in Fig. 57), only the pixel
data from the area of partial overlap will be used to control the image property.
If the auto function AOI does not overlap the image AOI (see (d) in Fig. 57), the Auto Function
will not or only to a limited degree control the image property. For details, see the sections
below, describing the individual auto functions.
We strongly recommend completely including the auto function AOI within the
image AOI, or, depending on your needs, choosing identical positions and sizes
for auto function AOI and image AOI.
Image AOI
(a)
Image AOI
(b)
Image AOI
(c)
Image AOI
(d)
Fig. 57: Various Degrees of Overlap Between the Auto Function AOI and the Image AOI
By default, the auto function AOI is set to the full width of the sensor line and to a height of 128 lines.
You can change the size and the position of the auto function AOI by changing the value of the
following parameters:
AutoFunctionAOIOffsetX: determines the starting pixel (in horizontal direction) for the auto
function AOI. The outer left pixel is designated as pixel 0.
AutoFunctionAOIWidth: determines the width of the auto function AOI.
AutoFunctionAOIHeight: determines the height of the auto function AOI.
All parameters can be set in increments of 1.
For general information about the parameters, see Section 10.4.1 on page 164.
When you are setting the auto function AOI, you must follow this guideline:
Guideline: AutoFunctionAOIHeight ≤ Height
Notes: The Height parameter defines the image AOI height. If AutoFunctionAOIHeight > Height, the effective auto function AOI height will be equal to Height. Example: If you set AutoFunctionAOIHeight to 400 and Height to 100, the effective auto function AOI height will be 100.
Depending on your firmware version, you may be able to choose from multiple
auto function AOIs (AOI 1, AOI 2, ...) using the AutoFunctionAOISelector
parameter. However, leave the auto function AOI 1 selected at all times and do
not use the other auto function AOIs.
You can set the parameter values for the auto function AOI from within your application software by
using the Basler pylon API:
// Select Auto Function AOI 1
camera.AutoFunctionAOISelector.SetValue(AutoFunctionAOISelector_AOI1);
// Set the position and size of the auto function AOI (sample values)
camera.AutoFunctionAOIOffsetX.SetValue(20);
camera.AutoFunctionAOIWidth.SetValue(500);
camera.AutoFunctionAOIHeight.SetValue(300);
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
// Set the position and size of the auto function AOI (sample values)
camera.AutoFunctionAOIOffsetX.SetValue(20);
camera.AutoFunctionAOIWidth.SetValue(500);
camera.AutoFunctionAOIHeight.SetValue(300);
// Select gain all and set the upper and lower gain limits for the
// gain auto function
camera.GainSelector.SetValue(GainSelector_DigitalAll);
camera.AutoGainRawLowerLimit.SetValue(camera.GainRaw.GetMin());
camera.AutoGainRawUpperLimit.SetValue(camera.GainRaw.GetMax());
// Set the target gray value for the gain auto function
// (If exposure auto is enabled, this target is also used for
// exposure auto control.)
camera.AutoTargetValue.SetValue(128);
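The snippet does not show enabling the gain auto function itself; as a sketch, the "once" mode of operation would be selected like this:
// Set the mode of operation for the gain auto function
camera.GainAuto.SetValue(GainAuto_Once);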
You can also use the Basler pylon Viewer application to easily set the parameters.
For general information about auto functions, see Section 10.4.1 on page 164.
For more information about
the pylon API and the pylon Viewer, see Section 3.1 on page 26.
the auto function AOI and how to set it, see Section 10.4.3 on page 166.
The exposure auto function will not work if the camera’s exposure mode is set to
trigger width. For more information about the trigger width exposure mode, see
Section 8.2.4.2 on page 87.
Exposure Auto is the "automatic" counterpart to manually setting the ExposureTimeAbs parameter.
The exposure auto function automatically adjusts the ExposureTimeAbs parameter value within set
limits until a target average gray value for the pixel data from the auto function AOI is reached.
The exposure auto function can be operated in the "once" and "continuous" modes of operation.
If the auto function AOI does not overlap the image AOI (see Section 10.4.3.1 on page 167), the
pixel data from the auto function AOI will not be used to control the exposure time. Instead, the
current manual setting of the ExposureTimeAbs parameter value will control the exposure time.
The exposure auto function and the gain auto function can be used at the same time. In this case,
however, you must also set the auto function profile feature.
// Set the target gray value for the exposure auto function
// (If gain auto is enabled, this target is also used for
// gain auto control.)
camera.AutoTargetValue.SetValue(128);
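The snippet above shows only the target value. As a sketch, the mode of operation and the exposure time limits might be set as follows; the limit parameter names (AutoExposureTimeAbsLowerLimit, AutoExposureTimeAbsUpperLimit) are assumptions:
// Set the mode of operation and the exposure time limits in µs (sample values)
camera.ExposureAuto.SetValue(ExposureAuto_Continuous);
camera.AutoExposureTimeAbsLowerLimit.SetValue(100);
camera.AutoExposureTimeAbsUpperLimit.SetValue(1000);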
You can also use the Basler pylon Viewer application to easily set the parameters.
For information about
the pylon API and the pylon Viewer, see Section 3.1 on page 26
the auto function AOI and how to set it, see Section 10.4.3 on page 166
minimum allowed and maximum possible exposure time, see Section 8.2.5 on page 91.
For general information about auto functions, see Section 10.4.1 on page 164.
You can set the gray value adjustment damping from within your application software by using the
pylon API. The following code snippet illustrates using the API to set the gray value adjustment
damping:
camera.GrayValueAdjustmentDampingAbs.SetValue(0.5);
You can also use the Basler pylon Viewer application to easily set the parameters.
To use the gain auto function and the exposure auto function at the same time:
1. Set the value of the AutoFunctionProfile parameter to specify whether gain or exposure time
will be minimized during automatic adjustments.
2. Set the value of the GainAuto parameter to the "continuous" mode of operation.
3. Set the value of the ExposureAuto parameter to the "continuous" mode of operation.
You can set the auto function profile from within your application software by using the pylon API.
The following code snippet illustrates using the API to set the auto function profile. As an example,
Gain is set to be minimized during adjustments:
// Use GainAuto and ExposureAuto simultaneously
camera.AutoFunctionProfile.SetValue(AutoFunctionProfile_GainMinimum);
camera.GainAuto.SetValue(GainAuto_Continuous);
camera.ExposureAuto.SetValue(ExposureAuto_Continuous);
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
An example related to the Frame Start Overtrigger event illustrates how event reporting works. The
example assumes that your system is set for event reporting (see below) and that the camera has
received a frame start trigger while it is currently in the process of acquiring a frame. In this case:
1. A Frame Start Overtrigger event is created. The event consists of the event notification itself
and supplementary information:
An Event Type Identifier. In this case, the identifier would show that a frame start overtrigger
type event has occurred.
A Stream Channel Identifier. Currently this identifier is always 0.
A Timestamp. This is a timestamp indicating when the event occurred. (The timestamp
timer starts running at camera power on or at camera reset. The unit for the timer is "ticks" where
one tick = 8 ns. The timestamp is a 64 bit value.)
2. The event is placed in an internal queue in the camera.
3. As soon as network transmission time is available, an event message will be sent to the PC. If
only one event is in the queue, the message will contain the single event. If more than one
event is in the queue, the message will contain multiple events.
a. After the camera sends an event message, it waits for an acknowledgement. If no
acknowledgement is received within a specified timeout, the camera will resend the event
message. If an acknowledgement is still not received, the timeout and resend mechanism
will repeat until a specified maximum number of retries is reached. If the maximum number
of retries is reached and no acknowledgement has been received, the message will be dropped.
During the time that the camera is waiting for an acknowledgement, no new event
messages can be transmitted.
4. Event reporting involves some further software-related steps and settings to be made. For
more information, see the "Camera Events" code sample included with the pylon software
development kit.
As mentioned in the example above, the camera has an event queue. The intention of the queue is
to handle short term delays in the camera’s ability to access the network and send event messages.
When event reporting is working "smoothly", a single event will be placed in the queue and this
event will be sent to the PC in an event message before the next event is placed in the queue. If
there is an occasional short term delay in event message transmission, the queue can buffer
several events and can send them within a single event message as soon as transmission time is
available.
However, if you are operating the camera at high line rates, the camera may be able to generate
and queue events faster than they can be transmitted and acknowledged. In this case:
1. The queue will fill and events will be dropped.
2. An event overrun will occur.
3. Assuming that you have event overrun reporting enabled, the camera will generate an "event
overrun event" and place it in the queue.
4. As soon as transmission time is available, an event message containing the event overrun
event will be transmitted to the PC.
The event overrun event is simply a warning that events are being dropped. The notification
contains no specific information about how many or which events have been dropped.
Event reporting must be enabled in the camera and some additional software-related settings must
be made. This is described in the "Camera Events" code sample included with the pylon software
development kit.
Event reporting must be specifically set up for each type of event using the parameter name of the
event and of the supplementary information. The following table lists the relevant parameter names:
EventOverrunEventTimestamp
Table 12: Parameter Names of Events and Supplementary Information
You can enable event reporting and make the additional settings from within your application
software by using the pylon API. The pylon software development kit includes a "Camera Events"
code sample that illustrates the entire process.
For more detailed information about using the pylon API, refer to the Basler pylon Programmer’s
Guide and API Reference.
Fig. 59: Lookup Table with Values Mapped for Higher Camera Output at Low Sensor Readings (substitute 12 bit values, 0 to 4095, plotted against sensor readings, 0 to 4095)
As mentioned above, when the camera is set for a 12 bit pixel data format, the lookup table can be
used to perform a 12 bit to 12 bit substitution. The lookup table can also be used in 12 bit to 8 bit
fashion. To use the table in 12 bit to 8 bit fashion, you enter 12 bit substitution values into the table
and enable the table as you normally would. But instead of setting the camera for a 12 bit pixel data
format, you set the camera for an 8 bit format (such as Mono 8). In this situation, the camera will
first use the values in the table to do a 12 bit to 12 bit substitution. It will then truncate the lowest 4
bits of the substitute value and will transmit the remaining 8 highest bits.
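A one-line sketch of the truncation the camera performs in this situation, where substituteValue12Bit stands for a 12 bit substitute value from the table:
// 12 bit substitute value -> transmitted 8 bit value (lowest 4 bits dropped)
uint8_t transmitted = static_cast<uint8_t>(substituteValue12Bit >> 4);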
Changing the Values in the Luminance Lookup Table and Enabling the Table
You can change the values in the luminance lookup table (LUT) and enable the use of the lookup
table by doing the following:
1. Use the LUTSelector parameter to select a lookup table. (Currently there is only one lookup
table available, i.e., the "luminance" lookup table described above.)
2. Use the LUTIndex parameter to select an index number.
3. Use the LUTValue parameter to enter the substitute value that will be stored at the index
number that you selected in step 2.
4. Repeat steps 2 and 3 to enter other substitute values into the table as desired.
5. Use the LUTEnable parameter to enable the table.
You can set the LUTSelector, the LUTIndex parameter, and the LUTValue parameter from within
your application software by using the pylon API. The following code snippet illustrates using the
API to set the parameter values:
// Select the lookup table
camera.LUTSelector.SetValue(LUTSelector_Luminance);
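A continuation of the snippet with sample values, using the LUTIndex, LUTValue, and LUTEnable parameters described in the steps above:
// Write a substitute value of 900 at index 300 (sample values),
// then enable the lookup table
camera.LUTIndex.SetValue(300);
camera.LUTValue.SetValue(900);
camera.LUTEnable.SetValue(true);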
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
10.7 Binning
Binning increases the camera’s response to light by summing the charges from adjacent pixels into
one pixel.
With horizontal binning, the charges of 2, 3, or a maximum of 4 adjacent pixels are summed and
are reported out of the camera as a single pixel. Fig. 60 illustrates horizontal binning.
Setting Binning
You can enable horizontal binning by setting the BinningHorizontal parameter. Setting the parame-
ter’s value to 2, 3, or 4 enables horizontal binning by 2, horizontal binning by 3, or horizontal binning
by 4 respectively. Setting the parameter’s value to 1 disables horizontal binning.
You can set the BinningVertical or the BinningHorizontal parameter value from within your
application software by using the Basler pylon API. The following code snippet illustrates using the
API to set the parameter values:
// Enable horizontal binning by 4
camera.BinningHorizontal.SetValue(4);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Ycorrected = (Yuncorrected / Ymax)^Gamma × Ymax
The formula uses uncorrected and corrected pixel brightnesses that are normalized by the
maximum pixel brightness. The maximum pixel brightness equals 255 for 8 bit output and 4095 for
12 bit output.
When the gamma correction factor is set to 1, the output pixel brightness will not be corrected.
A gamma correction factor between 0 and 1 will result in increased overall brightness, and a gamma
correction factor greater than 1 will result in decreased overall brightness.
In all cases, black (output pixel brightness equals 0) and white (output pixel brightness equals 255
at 8 bit output and 4095 at 12 bit output) will not be corrected.
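For example, with Gamma = 0.5 and 12 bit output, an uncorrected pixel brightness of 1024 gives Ycorrected = (1024 / 4095)^0.5 × 4095 ≈ 2048, i.e., a dark mid-tone is brightened, while 0 and 4095 map to themselves.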
You can enable or disable the gamma correction feature by setting the value of the GammaEnable
parameter.
When gamma correction is enabled, the correction factor is determined by the value of the Gamma
parameter. The Gamma parameter can be set in a range from 0 to 3.99902. So if the Gamma
parameter is set to 1.2, for example, the gamma correction factor will be 1.2.
You can set the GammaEnable and Gamma parameter values from within your application software
by using the Basler pylon API. The following code snippet illustrates using the API to set the
parameter values:
// Enable the Gamma feature
camera.GammaEnable.SetValue(true);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
"Defaultshading" files are always enabled. When "usershading" files are also
enabled, they will supplement the default shading correction by modifying the
default correction values.
To create a "usershading" file and enable it, you must take the steps listed below. We strongly
recommend that you read through all of the steps and read all of the other information in this section
before you attempt to do shading correction.
The steps below are intended to give you the basic knowledge needed to create
a "usershading" file and to activate it. A code sample that includes the
complete details of how to create a usershading file and how to enable
shading correction on a camera is included with the Basler pylon SDK.
When you create a "usershading" file you must make sure to create correction values for all of the
pixels in the sensor’s line regardless of how you plan to use the camera during normal operation.
Creating a "usershading" file for offset shading correction will overwrite any
"usershading" file for offset shading correction that is already in the camera’s
memory.
If you want to preserve the previous "usershading" file save it to your PC before
creating the new "usershading" file.
For information about saving a "usershading" file to the PC, see Section 10.9.3.2
on page 187.
Any time you make a change to the line rate, exposure time control mode,
exposure time, gain, or camera temperature, you must create a new
"usershading" file for offset shading correction. Using an out of date
"usershading" file can result in poor image quality.
Creating a "usershading" file for gain shading correction will overwrite any
"usershading" file for gain shading correction that is already in the camera’s
memory.
If you want to preserve the previous "usershading" file save it to your PC before
creating the new "usershading" file.
For information about saving a "usershading" file to the PC, see Section 10.9.3.2
on page 187.
After 128 line acquisitions are completed the camera creates the "usershading" file
automatically. The "usershading" file is stored in the camera’s non-volatile memory and is not
lost if the camera power is switched off.
Any time you make a change to the optics or lighting or if you change the
camera’s gain settings or exposure mode, you must create a new "usershading"
file. Using an out of date "usershading" file can result in poor image quality.
Once you have created shading set files, you can use the following pylon API functions to work with
the shading sets:
Shading Selector - is used to select the type of shading correction to configure, i.e. offset shading
correction or gain shading correction.
Shading Create - is used to create a "usershading" file. The enumeration allows selecting the
settings Off and Once.
Shading Enable - is used to enable and disable the selected type of shading correction.
Shading Set Selector - is used to select the shading set to which the activate and the create
enumeration commands will be applied.
Shading Set Activate - is used to activate the selected shading set. "Activate" means that the
shading set will be copied from the camera’s non-volatile memory into its volatile memory. When
the shading correction feature is enabled, the shading set in the volatile memory will be used to
perform shading correction.
Shading Set Default Selector - is used to select the shading set that will be loaded into the
camera’s volatile memory during camera bootup.
Shading Status - is used to determine the error status of operations such as Shading Set Activate.
The following error statuses may be indicated:
No error - the last operation performed was successful.
Startup Set error - there was a problem with the default shading set.
Activate error - the selected shading set could not be loaded into the volatile memory.
Create error - an error occurred while creating a "usershading" file.
The use of the pylon API functions listed above is illustrated in the shading correction sample code
included with the pylon SDK.
You can also use the Shading parameters group in the Basler pylon Viewer application to access
these functions.
And you can use the File Access selection in the Camera menu of the Viewer to save a shading set
file to a PC and to upload a shading set file from the PC to the camera.
// Trigger delay: delay the frame start trigger by a number of line triggers
camera.TriggerSelector.SetValue(TriggerSelector_FrameStart);
int NumberLineTriggers = 100;
camera.TriggerDelaySource.SetValue(TriggerDelaySource_LineTrigger);
camera.TriggerDelayLineTriggerCount.SetValue(NumberLineTriggers);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The Precision Time Protocol (PTP) provides a method to synchronize multiple GigE cameras
operated in the same network. It achieves clock accuracy in the sub-microsecond range.
The protocol is defined in the IEEE 1588 standard. The Basler racer GigE cameras support the
revised version of the standard (IEEE 1588-2008, also known as PTP Version 2).
PTP enables a camera to use the following features:
Action commands
This feature lets you trigger actions in multiple cameras synchronously.
For more information, see Section 10.12 on page 195.
Scheduled action commands
This feature lets you trigger actions in a camera or in multiple cameras at a specific time in the
future.
For more information, see Section 10.13 on page 202.
Synchronous free run
This feature makes it possible to let multiple cameras capture their images synchronously.
For more information, see Section 10.14 on page 204.
Measurement and automation systems involving multiple devices (e.g. cameras) often require
accurate timing in order to facilitate event synchronization and data correlation.
Through PTP, multiple devices (e.g. cameras) are automatically synchronized with the most
accurate clock found in a network, the so-called master clock or best master clock.
The protocol enables systems within a network
to synchronize a local clock with the master clock, i.e. to set the local clock as precisely as
possible to the time of the master clock, and
to syntonize a local clock with the master clock, i.e. to adjust the frequency of the local clock to
the frequency of the master clock, so that the duration of a second is as identical as possible on
both devices.
There are two different concepts of finding the master clock:
A. The synchronization between the different device clocks will determine a clock within one of
the cameras to be the best master clock.
B. A clock outside of the cameras will be determined as the master clock; i.e. an external device
(e.g. a GPS device) will be the best master clock.
The IEEE 1588 standard defines a Best Master Clock (BMC) algorithm in which each clock in a
network identifies the most accurate clock and labels it "master clock". All other "slave clocks"
synchronize and syntonize with this master.
The basic concept of IEEE 1588 is the exchange of timestamp messages. The protocol defines
several periodic messages that trigger a clock to capture a timestamp and communicate timestamp
information between the master and slave clocks. This method of using timestamps enables each
slave clock in a network to analyze the master clock’s timestamp and the network propagation
delay. This allows the slave clock to determine the delta from the master in its synchronization
algorithm. For details about PTP messages, see the note box below.
IEEE 1588 defines 80-bit timestamps for storing and transporting time information. As GigE Vision
uses 64-bit timestamps, the PTP timestamps are mapped to the 64-bit timestamps of GigE Vision.
An IEEE 1588 enabled device that operates in a network with no other enabled devices will not
discipline its local clock. The drift and precision of the local clock is identical to a non-IEEE 1588
enabled device.
If no device in a network of IEEE 1588 enabled devices has a time traceable to Coordinated
Universal Time (UTC), the network will operate in the arbitrary timescale mode (ARB). In this mode,
the epoch is arbitrary, as it is not bound to an absolute time. This timescale is relative, i.e. it is only
valid in the network. The best master clock algorithm will select the clock which has the highest
stability and precision as the master clock of the network.
[Figure: PTP message exchange between master and slave clocks over time. Timestamped (TS) Sync, Follow_up, Delay_Req, and Delay_Resp messages travel between the two clocks.]
The "delay request" message is received and time stamped by the master clock,
and the arrival timestamp is sent back to the slave clock in a "delay response"
packet. The difference between these two timestamps is the network propagation
delay.
By sending and receiving these synchronization packets, the slave clocks can
accurately measure the offset between their local clock and the master clock. The
slaves can then adjust their clocks by this offset to match the time of the master.
1. If you want to use an external device as master clock (e.g. a GPS device or a software
application on a computer synchronized by NTP - Network Time Protocol):
Configure the external device as master clock. We recommend an ANNOUNCE interval of
2 seconds and a SYNC interval of 0.5 seconds.
2. Make sure that the following requirements are met:
All cameras you want to set up PTP for are installed and configured in the same network
segment.
All cameras support PTP.
You can check whether a camera supports PTP via the following command:
if (GenApi::IsWritable(camera.GevIEEE1588))
{
// ...
}
3. For all cameras that you want to synchronize, enable the PTP clock synchronization:
camera.GevIEEE1588.SetValue(true);
If the PTP clock synchronization is enabled, and if the GigE Vision Timestamp
Value bootstrap register is controlled by the IEEE 1588 protocol:
the camera’s GevTimestampTickFrequency parameter value is fixed to
1000000000 (1 GHz), and
the camera’s GevTimestampControlReset feature is disabled.
Status Parameters
Four parameter values can be read from each device to determine the status of the PTP clock
synchronization:
GevIEEE1588OffsetFromMaster: A 32-bit number. Indicates the temporal offset between the
master clock and the clock of the current IEEE 1588 device in nanoseconds.
GevIEEE1588ClockId: A 64-bit number. Indicates the unique ID of the current IEEE 1588
device (the "clock ID").
GevIEEE1588ParentClockId: A 64-bit number. Indicates the clock ID of the IEEE 1588 device
that currently serves as the master clock (the "parent clock ID").
GevIEEE1588StatusLatched: An enumeration. Indicates the state of the current IEEE 1588
device, e.g. whether it is a master or a slave clock. The returned values match the IEEE 1588
PTP port state enumeration (Initializing, Faulty, Disabled, Listening, Pre_Master, Master,
Passive, Uncalibrated, and Slave). For more information, refer to the pylon API documentation
and the IEEE 1588 specification.
The parameter values can be used, for example, to
delay image acquisition until all cameras are properly synchronized, i.e. until the master and
slave clocks have been determined and the temporal offset between the master clock and the
slave clocks is low enough for your needs (see the sketch at the end of this section), or to
optimize your network setup for high clock accuracy. For example, you can compare the
temporal offsets of the IEEE 1588 devices while changing the network hardware, e.g. routers
or switches.
Before the parameter values can be read, you must execute the GevIEEE1588DataSetLatch
command to take a "snapshot" (also known as the "latched data set") of the camera’s current PTP
clock synchronization status. This ensures that all parameter values relate to exactly the same point
in time.
The snapshot includes all four status parameter values: GevIEEE1588OffsetFromMaster,
GevIEEE1588ClockId, GevIEEE1588ParentClockId, and GevIEEE1588StatusLatched. The values
will not change until you execute the GevIEEE1588DataSetLatch command on this device again.
1. Make sure that PTP clock synchronization has been enabled on all devices (see
Section 10.11.1 on page 192).
For all cameras that you want to check the status for, perform the following steps:
2. Execute the GevIEEE1588DataSetLatch command to take a snapshot of the camera’s current
PTP clock synchronization status.
3. Read one or more of the following parameter values from the device:
GevIEEE1588OffsetFromMaster
GevIEEE1588ClockId
GevIEEE1588ParentClockId
GevIEEE1588StatusLatched
All of these parameter values relate to exactly the same point in time, i.e. the point in time when
the device received the GevIEEE1588DataSetLatch command.
Code Example
You can set the Precision Time Protocol parameters from within your application software by using
the Basler pylon API.
The following code snippet illustrates using the API to take a snapshot of the synchronization status,
read the clock ID of the current device and determine the temporal offset between the master clock
and the clock of the current device.
camera.GevIEEE1588DataSetLatch.Execute();
int64_t clockId = camera.GevIEEE1588ClockId.GetValue();
int64_t offset = camera.GevIEEE1588OffsetFromMaster.GetValue();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
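The status parameters can also be used to wait until a group of cameras is synchronized before starting image acquisition. The following sketch illustrates one possible approach; the 1000 ns threshold is an arbitrary example value, and it is assumed that the "Cameras" object is an instance of CBaslerGigEInstantCameraArray:
// Poll all cameras until the offset from the master clock is small enough
bool allSynced = false;
while (!allSynced)
{
    allSynced = true;
    for (size_t i = 0; i < Cameras.GetSize(); ++i)
    {
        // Take a snapshot of the camera's current PTP status
        Cameras[i].GevIEEE1588DataSetLatch.Execute();
        // Read the temporal offset from the master clock in nanoseconds
        int64_t offset = Cameras[i].GevIEEE1588OffsetFromMaster.GetValue();
        if (offset < -1000 || offset > 1000)
        {
            allSynced = false;
        }
    }
}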
Action commands let you execute actions in multiple cameras at roughly the same time by using a
single broadcast protocol message.
Each action protocol message contains an action device key, an action group key, and an action
group mask. If the camera detects a match between this protocol information and one of the actions
selected in the camera, the device executes the corresponding action.
You can use action commands to synchronously
capture images with multiple cameras (see Section 10.12.3.1 on page 198)
reset the frame counter in multiple cameras (see Section 10.12.3.2 on page 200)
[Fig. 63: Triggering sub-groups of cameras (SG1 - SGn) to capture images as the horse advances; a PC and a switch address camera group G1 with sub-groups SG1, SG2, SG3, etc.]
When the horse passes, four cameras positioned next to each other (sub-group SG1 in Fig. 63)
synchronously execute an action (in this example: image acquisition).
As the horse advances, the next four cameras (sub-group SG2 in Fig. 63) synchronously capture
images. One sub-group follows another in this fashion until the horse reaches the end of the race
track. The resulting images can be combined and analyzed in a subsequent step.
In this sample use case, the following must be defined:
A unique device key to authorize the execution of the synchronous image acquisition. The
device key must be configured in each camera and it must be the same as the device key for
the action command protocol message.
The group of cameras in a network segment that is addressed by the action command. In
Fig. 63, this group is G1.
The sub-groups in the group of cameras that capture images synchronously. In Fig. 63, these
sub-groups are SG1, SG2, SG3, and so on.
To define the device key, the group of cameras, and their sub-groups, the parameters action device
key, action group key, and action group mask are used. For more information about these
parameters, see Section 10.12.2.
No.   Group Mask (Binary)   Group Mask (Hex)
1     000001                0x1
2     000010                0x2
3     000100                0x4
4     001000                0x8
5     010000                0x10
6     100000                0x20
You can use action commands to synchronously capture images with multiple cameras (see
example in Section 10.12.1 on page 195).
The IssueActionCommand call shown in the code example below sends an action command to all
cameras with a device key of 4711 and a group key of 1, regardless of their group mask number or
their network address.
Code Example
You can set the action command parameters from within your application software by using the
Basler pylon API.
The following code snippet illustrates using the API to set up four cameras for synchronous image
acquisition with a frame start trigger. For the ActionDeviceKey, the ActionGroupKey, and the
ActionGroupMask parameters, sample values are used. It is assumed that the "Cameras" object is
an instance of CBaslerGigEInstantCameraArray.
After the camera has been set up, an action command is sent to the cameras.
//--- Start of camera setup ---
for (size_t i = 0; i < Cameras.GetSize(); ++i)
{
    Cameras[i].Open();
    //Set the trigger selector
    Cameras[i].TriggerSelector.SetValue(TriggerSelector_FrameStart);
    //Set the trigger mode to On and select action command 1 as the trigger
    //source (assumed completion of the truncated example)
    Cameras[i].TriggerMode.SetValue(TriggerMode_On);
    Cameras[i].TriggerSource.SetValue(TriggerSource_Action1);
    //Set the action selector, the device key, and the group key
    Cameras[i].ActionSelector.SetValue(1);
    Cameras[i].ActionDeviceKey.SetValue(4711);
    Cameras[i].ActionGroupKey.SetValue(1);
    //Set the action group mask to a number other than 0
    Cameras[i].ActionGroupMask.SetValue(0xffffffff);
}
//--- End of camera setup ---
//Send an action command to the cameras
GigeTL->IssueActionCommand(4711, 1, 0xffffffff, "255.255.255.255");
You can use the Action Command feature to synchronously reset the frame counter in multiple
cameras.
To use an action command to synchronously reset frame counters, configure the cameras for the
action command and then send the command, as shown in the code example below. This will send
an action command to all cameras with a device key of 4711 and a group key of 1, regardless of
their group mask number or their network address.
Code Example
You can set the action command parameters from within your application software by using the
Basler pylon API.
The following code snippet illustrates using the API to set up a specific camera to synchronously
reset the frame counter. For the ActionDeviceKey, the ActionGroupKey, and the ActionGroupMask
parameters, sample values are used. It is assumed that the object "Cameras" is an instance of
CBaslerGigEInstantCameraArray.
After the camera has been set up, an action command is sent to the camera.
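A minimal sketch follows; the CounterResetSource_Action1 enumeration value is an assumption, and the action command infrastructure is set up as in the previous example:
//--- Start of camera setup ---
Cameras[0].Open();
//Set the frame counter reset source to action command 1 (assumed enum value)
Cameras[0].CounterResetSource.SetValue(CounterResetSource_Action1);
//Set the action selector, the device key, the group key, and the group mask
Cameras[0].ActionSelector.SetValue(1);
Cameras[0].ActionDeviceKey.SetValue(4711);
Cameras[0].ActionGroupKey.SetValue(1);
Cameras[0].ActionGroupMask.SetValue(0xffffffff);
//--- End of camera setup ---
//Send an action command that resets the frame counter
GigeTL->IssueActionCommand(4711, 1, 0xffffffff, "255.255.255.255");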
Code Example
Refer to the code examples in Section 10.12.3.1 on page 198 and Section 10.12.3.2 on page 200.
These code examples can also be used to set up a scheduled action command. To do so, simply
replace the IssueActionCommand call in the code examples by "IssueScheduledActionCommand"
and add the action time parameter as described above (see the sketch below).
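For illustration, a sketch of the corresponding call follows. The action time is a 64-bit timestamp in nanoseconds; the way the target time is obtained here (from a previously read camera timestamp) is an assumption:
// Schedule the action 30 seconds (in nanoseconds) after a known timestamp
uint64_t cameraTimestamp = 0; // illustrative; e.g. a latched camera timestamp
uint64_t actionTime = cameraTimestamp + 30000000000ULL;
// Issue the scheduled action command instead of an immediate one
GigeTL->IssueScheduledActionCommand(4711, 1, 0xffffffff, actionTime, "255.255.255.255");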
In a group of cameras that are not using the Precision Time Protocol (PTP), cameras that run in
free run mode may capture images at the same frame rate, but their image captures will be slightly
asynchronous for various reasons. See example A in Fig. 64.
Fig. 64: Example A: Without PTP: Cameras Capturing at the Same Frame Rate, but Running Asynchronously
The Precision Time Protocol (PTP) in combination with the synchronous free run feature makes it
possible to let multiple cameras in a network capture their images synchronously, i.e. at the same
time and at the same frame rate. See example B in Fig. 65.
The frame rate is based on a tick frequency value that is the same for all cameras in the network.
It is also possible to start the image captures of multiple cameras at a precise start time.
For more information about the PTP feature, see Section 10.11 on page 189.
[Fig. 65: Example B: With PTP: Cameras Capturing Synchronously at the Same Frame Rate and Start Time]
You can also use the synchronous free run feature to set up a group of cameras as in
example C (Fig. 66) and example D (Fig. 67):
The cameras have exactly the same exposure time for their image capture, but
they capture their images in precisely time-aligned intervals, i.e. in a precise chronological
sequence. For example: one camera starts capturing images immediately (start time = 0), the
second camera 20 milliseconds after the start time, the third camera 30 milliseconds after the
start time of the first camera, and so on.
[Fig. 66: Example C: Same SyncFreeRunTimerTriggerRateAbs but Different Start Times]
Fig. 67: Example D: Same Start Time and Same SyncFreeRunTimerTriggerRateAbs but Different
Exposure Times
To configure the synchronous free run for multiple Basler racer cameras:
1. Before configuring the synchronous free run of multiple cameras, make sure that the following
requirements are met:
All cameras you want to trigger synchronously via the synchronous free run feature must be
configured in the same network segment.
The Precision Time Protocol (PTP) is implemented and enabled for all cameras.
All camera clocks run synchronously.
For more information about enabling PTP, see Section 10.11.1 on page 192.
For all cameras that you want to run in the synchronized free run, make the following settings:
2. Make sure that the AcquisitionMode parameter is set to Continuous.
3. Set the TriggerMode parameter for the following trigger types to Off:
Acquisition start trigger
Frame start trigger
4. Set the parameters specific for the synchronous free run feature:
a. Set the SyncFreeRunTimerStartTimeLow and SyncFreeRunTimerStartTimeHigh
parameters to zero (0).
b. Verify the maximum possible frame rate the camera can manage.
c. Set the trigger rate for the synchronous free run (SyncFreeRunTimerTriggerRateAbs
parameter) to the desired value. Example: If you want to acquire 10 frames per second, set
the SyncFreeRunTimerTriggerRateAbs parameter value to 10.
Make sure that you do not overtrigger the camera. If you overtrigger the camera, frame
triggers may be ignored.
d. Send the SyncFreeRunTimerUpdate command so that the complete start time (i.e. the low
and high portion) and the frame rate are adopted by the camera.
e. Set the SyncFreeRunTimerEnable parameter to True.
5. Set the parameters for all cameras you want to execute a synchronous free run for.
As soon as the start time for the synchronous free run is reached, the camera starts acquiring
images continuously.
Code Example
You can set the parameter values associated with synchronous free run feature from within your
application software by using the Basler pylon API.
The following code snippet illustrates using the API to set up the synchronous free run for a number
of cameras so that they capture images synchronously, without specifying a start time in the future.
The cameras will start as soon as the feature is enabled. It is assumed that the "Cameras" object
is an instance of CBaslerGigEInstantCameraArray.
for (size_t i = 0; i < Cameras.GetSize(); ++i)
{
    Cameras[i].Open();
    // Enable PTP
    Cameras[i].GevIEEE1588.SetValue(true);
    // Make sure the frame trigger is set to Off to enable free run
    Cameras[i].TriggerSelector.SetValue(TriggerSelector_FrameStart);
    Cameras[i].TriggerMode.SetValue(TriggerMode_Off);
    // Let the free run start immediately without a specific start time
    Cameras[i].SyncFreeRunTimerStartTimeLow.SetValue(0);
    Cameras[i].SyncFreeRunTimerStartTimeHigh.SetValue(0);
    // Set the trigger rate for the synchronous free run, e.g. 10 frames per second
    Cameras[i].SyncFreeRunTimerTriggerRateAbs.SetValue(10.0);
    // Apply the start time and trigger rate settings to the camera
    Cameras[i].SyncFreeRunTimerUpdate.Execute();
    // Enable the synchronous free run
    Cameras[i].SyncFreeRunTimerEnable.SetValue(true);
}
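To start the image captures at a specific time in the future instead of immediately, the 64-bit start time must be split into its low and high 32-bit portions before the update command is executed. A sketch, assuming the start time is given in ticks of the fixed 1 GHz timestamp tick frequency (i.e. in nanoseconds), for use inside the setup loop above:
// Illustrative future start time in ticks (nanoseconds at the 1 GHz tick frequency)
int64_t startTime = 0; // determine a future PTP time in your application
Cameras[i].SyncFreeRunTimerStartTimeLow.SetValue(startTime & 0xffffffff);
Cameras[i].SyncFreeRunTimerStartTimeHigh.SetValue((startTime >> 32) & 0xffffffff);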
Code  Error              Description
0     No Error           The camera has not detected any errors since the last time the
                         error memory was cleared.
1     Overtrigger        An overtrigger has occurred. The user has applied an acquisition
                         start trigger to the camera when the camera was not in a waiting
                         for acquisition start condition, or has applied a frame start trigger
                         when the camera was not in a waiting for frame start condition.
2     User Set Load      An error occurred when attempting to load a user set. Typically,
                         this means that the user set contains an invalid value. Try
                         loading a different user set.
3     Invalid Parameter  A parameter is set out of range or in an otherwise invalid manner.
4     Over Temperature   The camera has stopped image acquisition due to overheating.
                         Provide adequate cooling to the camera.
Table 13: Error Codes
When the camera detects a user correctable error, it sets the appropriate error code in an error
memory. If two or more different detectable errors have occurred, the camera will store the code for
each type of error that it has detected (it will store one occurrence of each code, no matter how
many times it has detected the corresponding error).
You can use the following procedure to check the error codes:
Read the value of the LastError parameter. The LastError parameter will indicate the last error
code stored in the memory.
Execute the ClearLastError command to clear the last error code from the memory.
Continue reading and clearing the last error until the parameter indicates a No Error code.
You can use the pylon API to read the value of the Last Error parameter and to execute a
ClearLastError command from within your application software. The following code snippets
illustrate using the API to read the parameter value and execute the command:
// Read the value of the last error code in the memory
LastErrorEnums lasterror = camera.LastError.GetValue();
// Clear the last error code from the memory
camera.ClearLastError.Execute();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
If the camera is set to use an electrical signal applied to input line 1, line 2, or
line 3 as the source signal for the frame trigger and/or the line trigger, these
signals must be provided to the camera in order to generate test images.
When any test image is active, the camera’s analog features such as analog gain, black level, and
exposure time have no effect on the images transmitted by the camera. For test images 1, 2, and
3, the camera’s digital features will also have no effect on the transmitted images. But for test
images 4 and 5, the camera’s digital features will affect the images transmitted by the camera.
The TestImageSelector parameter is used to set the camera to output a test image. You can set the
value of the TestImageSelector parameter to one of the test images or to "test image off".
You can set the TestImageSelector parameter from within your application software by using the
pylon API. The following code snippets illustrate using the API to set the selector:
// set for no test image
camera.TestImageSelector.SetValue(TestImageSelector_Off);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The 8 bit fixed diagonal gray gradient test image is best suited for use when the camera is set for
monochrome 8 bit output. The test image consists of fixed diagonal gray gradients ranging from 0
to 255.
If the camera is set for 8 bit output, test image one will look similar to Fig. 68.
The mathematical expression for this test image is:
Gray Value = [column number + row number] MOD 256
The 8 bit moving diagonal gray gradient test image is similar to test image 1, but it is not stationary.
The image moves by one pixel from right to left whenever a new frame acquisition is initiated. The
test pattern uses a counter that increments by one for each new frame acquisition.
The mathematical expression for this test image is:
Gray Value = [column number + row number + counter] MOD 256
The 12 bit moving diagonal gray gradient test image is similar to test image 2, but it is a 12 bit
pattern. The image moves by one pixel from right to left whenever a new frame acquisition is
initiated. The test pattern uses a counter that increments by one for each new frame acquisition.
The mathematical expression for this test image is:
Gray Value = [column number + row number + counter] MOD 4096
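For instance, a receiving application could check transmitted test images against these formulas. A minimal sketch (the variable names are illustrative):
// Expected gray values for the gradient test images at a given pixel position
unsigned int gray8Fixed = (column + row) % 256;               // test image 1
unsigned int gray8Moving = (column + row + counter) % 256;    // test image 2
unsigned int gray12Moving = (column + row + counter) % 4096;  // test image 3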
The basic appearance of test image 4 is similar to test image 2 (the 8 bit moving diagonal gray
gradient image). The difference between test image 4 and test image 2 is this: if a camera feature
that involves digital processing is enabled, test image 4 will show the effects of the feature while
test image 2 will not. This makes test image 4 useful for checking the effects of digital features such
as spatial correction.
Test Image 5 - Moving Diagonal Gray Gradient Feature Test (12 bit)
The basic appearance of test image 5 is similar to test image 3 (the 12 bit moving diagonal gray
gradient image). The difference between test image 5 and test image 3 is this: if a camera feature
that involves digital processing is enabled, test image 5 will show the effects of the feature while
test image 3 will not. This makes test image 5 useful for checking the effects of digital features.
You can read the values for all of the device information parameters or set the value of the
DeviceUserID parameter from within your application software by using the pylon API. The following
code snippets illustrate using the API to read the parameters or write the DeviceUserID:
// Read the Vendor Name parameter
Pylon::String_t vendorName = camera.DeviceVendorName.GetValue();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
For detailed information about using the pylon API and the pylon IP Configuration Tool, refer to the
Basler pylon Programmer’s Guide and API Reference.
Setting a user defined value using Basler pylon is a two-step process:
Set the UserDefinedValueSelector parameter to Value1 or Value2.
Set the UserDefinedValue parameter to the desired value for the selected value.
You can use the pylon API to set the UserDefinedValueSelector and the UserDefinedValue
parameter values from within your application software. The following code snippet illustrates using
the API to set the parameter values:
// Set user defined value 1
camera.UserDefinedValueSelector.SetValue(UserDefinedValueSelector_Value1);
camera.UserDefinedValue.SetValue(1000);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
When a camera is manufactured, a test setup is performed on the camera and an optimized
configuration is determined. The default configuration set contains the camera’s factory optimized
configuration. The default configuration set is saved in a permanent file in the camera’s non-volatile
memory. It is not lost when the camera is reset or switched off and it cannot be changed. The default
configuration set is usually just called the "default set" for short.
As mentioned above, the active configuration set is stored in the camera’s volatile memory and the
settings are lost if the camera is reset or if power is switched off. The camera can save most of the
settings from the current active set to a reserved area in the camera’s non-volatile memory. A
configuration set saved in the non-volatile memory is not lost when the camera is reset or switched
off. There are three reserved areas in the camera’s non-volatile memory available for saving
configuration sets. A configuration set saved in a reserved area is commonly referred to as a "user
configuration set" or "user set" for short.
The three available user sets are called User Set 1, User Set 2, and User Set 3.
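For illustration, saving the current active set to one of the user sets could look like this; the UserSetSave command name follows the GenICam standard feature naming convention and is an assumption here:
// Select user set 1 and save the current active settings to it (command name assumed)
camera.UserSetSelector.SetValue(UserSetSelector_UserSet1);
camera.UserSetSave.Execute();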
The settings for frame transmission delay, inter packet delay, and the luminance
lookup table are not saved in the user sets and are lost when the camera is reset
or switched off. If used, these settings must be set again after each camera reset
or restart.
You can select the default configuration set or one of the user configuration sets stored in the
camera’s non-volatile memory to be the "default startup set." The configuration set that you
designate as the default startup set will be loaded into the active set whenever the camera starts
up at power on or after a reset. Instructions for selecting the default startup set appear on the next
page.
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
To load a saved configuration set or the default set into the active set:
1. Set the UserSetSelector parameter to UserSet1, UserSet2, UserSet3, or Default.
2. Execute a UserSetLoad command to load the selected set into the active set.
You can set the UserSetSelector parameter and execute the UserSetLoad command from within
your application software by using the pylon API. The following code snippet illustrates using the
API to set the parameter and execute the command:
camera.UserSetSelector.SetValue(UserSetSelector_UserSet2);
camera.UserSetLoad.Execute();
Loading a user set or the default set into the active set is only allowed when the
camera is idle, i.e. when it is not acquiring lines.
Loading the default set into the active set is a good course of action if you have
grossly misadjusted the settings in the camera and you are not sure how to
recover. The default settings are optimized for use in typical situations and will
provide good camera performance in most cases.
11 Chunk Features
Making the chunk mode inactive switches all chunk features off.
When you enable ChunkModeActive, the PayloadType for the camera changes from
"Pylon::PayloadType_Image" to "Pylon::PayloadType_ChunkData".
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
When the chunk mode is active, the Extended Frame Data feature will automatically be
enabled, and the camera will add an "extended frame data" chunk to each acquired image. The
extended frame data chunk appended to each acquired image contains some basic information
about the frame. The information contained in the chunk includes:
The OffsetX, Width, and Height settings for the frame
The pixel format of the image data in the frame
The minimum dynamic range and the maximum dynamic range
To retrieve data from the extended frame data chunk appended to a frame that has been received
by your PC, you must first run the frame and its appended chunks through the chunk parser
included in the pylon API. Once the chunk parser has been used, you can retrieve the extended
frame data by doing the following:
Read the value of the ChunkOffsetX parameter.
Read the value of the ChunkWidth parameter.
Read the value of the ChunkHeight parameter.
Read the value of the ChunkPixelFormat parameter.
Read the value of the ChunkDynamicRangeMin parameter.
Read the value of the ChunkDynamicRangeMax parameter.
The following sketch, modeled on the frame counter chunk example later in this chapter, illustrates
using the pylon API to run the parser and retrieve the extended frame data:
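// Run the chunk parser (pattern as in the frame counter chunk example)
IChunkParser &ChunkParser = *camera.CreateChunkParser();
GrabResult Result;
StreamGrabber.RetrieveResult(Result);
ChunkParser.AttachBuffer((unsigned char*) Result.Buffer(), Result.GetPayloadSize());
// Retrieve the extended frame data (parameter names from the list above)
int64_t offsetX = camera.ChunkOffsetX.GetValue();
int64_t width = camera.ChunkWidth.GetValue();
int64_t height = camera.ChunkHeight.GetValue();
int64_t dynamicRangeMin = camera.ChunkDynamicRangeMin.GetValue();
int64_t dynamicRangeMax = camera.ChunkDynamicRangeMax.GetValue();
// ChunkPixelFormat is an enumeration (the exact enum type name is assumed)
ChunkPixelFormatEnums pixelFormat = camera.ChunkPixelFormat.GetValue();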
For more information about using the chunk parser, see the sample code that is included with the
Basler pylon Software Development Kit (SDK).
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The chunk mode must be made active before you can enable the frame counter
feature or any of the other chunk features. Making the chunk mode inactive
disables all chunk features.
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
Whenever the camera is powered off, the frame counter will reset to 0. During operation, you can
reset the frame counter via I/O input 1, I/O input 2, I/O input 3 or software, and you can disable the
reset. By default, the frame counter reset is disabled.
// disable reset
camera.CounterResetSource.SetValue(CounterResetSource_Off);
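A complementary sketch for resetting the frame counter via software follows; the Software reset source value and the CounterReset command name are assumptions:
// Select software as the source for the frame counter reset (assumed enum value)
camera.CounterResetSource.SetValue(CounterResetSource_Software);
// Execute the reset (assumed command name)
camera.CounterReset.Execute();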
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The chunk mode must be made active before you can enable the time stamp
feature or any of the other chunk features. Making the chunk mode inactive
disables all chunk features.
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The chunk mode must be made active before you can enable the trigger counter
features or any of the other chunk features. Making the chunk mode inactive
disables all chunk features.
The Line Trigger Ignored, Frame Trigger Ignored, Line Trigger End To End, and Line Trigger
counters are each 32-bit counters. The Frame Trigger and Frames Per Trigger counters are each
16-bit counters.
The Line Trigger Counter numbers external line acquisition triggers sequentially as they are
received. When this counter is enabled, a chunk is added to each completed frame containing the
value of the counter.
Be aware that the Line Trigger Counter counts all incoming line trigger signals, whether they are
successful (acted on) or ignored (not acted on, e.g. due to overtriggering).
Example: You set the height of the frame to 100 lines, enable the line trigger counter, and send 120
line trigger signals at a rate that is higher than allowed. Due to overtriggering, 20 signals are
ignored. Nevertheless, a line trigger counter value of 120 will be added to the first completed frame.
The line trigger counter is the only trigger counter that can be reset. For more information about
resetting the line trigger counter, see Section 11.5.2 on page 230.
For more information about setting the height of a frame, see Section 8.1 on page 74.
The Line Trigger Ignored Counter counts the number of line triggers that were received during the
acquisition of the current frame but were ignored (not acted on). A line trigger will be ignored if the
camera is already in the process of acquiring a line when the trigger is received. Typically, this will
happen if you overtrigger the camera, i.e., try to acquire lines at a rate that is higher than allowed.
The magnitude of this counter will give you an idea of how badly the camera is being overtriggered.
The higher the counter, the worse the overtriggering.
The Frame Trigger Ignored Counter counts the number of frame triggers that were not acted upon
during the acquisition of the frame because the camera was not ready to accept the trigger.
Typically, this will happen if you attempt to trigger the start of a new frame while the camera is
currently in the process of acquiring a frame.
The Line Trigger End to End Counter counts the number of line triggers received by the camera
from the end of the previous frame acquisition to the end of the current frame acquisition. If you
subtract the number of lines actually included in the current frame from the number of lines shown
by this counter, it will tell you the number of line triggers that were received but not acted on during
the frame end to frame end period.
The Frame Trigger Counter and the Frames Per Trigger Counter are designed to be used together.
They are available when the frame start trigger activation is set to either LevelHigh or LevelLow.
The Frame Trigger Counter counts the number of frame trigger valid periods, and it increments each
time the frame trigger becomes valid. The Frames Per Trigger counter counts the number of frames
acquired during each frame valid period. The counter increments for each acquired frame (also for
partial frames) and resets to zero for each new frame valid period. The way that the counters work
is illustrated below.
Fig. 70: Frame Trigger Counter and Frames Per Trigger Counter
These two counters can be used to determine which frames were acquired during a particular frame
trigger valid period. This information will be especially useful in a situation where several frames
must be stitched together to form an image of a single large object.
To retrieve data from a chunk appended to a frame that has been received by your PC, you must first
run the frame and its appended chunks through the chunk parser included in the pylon API.
Once the chunk parser has been used, you can retrieve the counter values from the chunks by
reading one or more of the following parameters:
Chunk Line Trigger Counter
Chunk Line Trigger Ignored Counter
Chunk Frame Trigger Ignored Counter
Chunk Line Trigger End To End Counter
Chunk Frame Trigger Counter
Chunk Frames Per Trigger Counter.
You can run the chunk parser and retrieve the counter values from within your application software
by using the pylon API. The following code snippet illustrates using the API to run the parser and
retrieve the frame counter chunk data:
// run the chunk parser
IChunkParser &ChunkParser = *camera.CreateChunkParser();
GrabResult Result;
StreamGrabber.RetrieveResult(Result);
ChunkParser.AttachBuffer((unsigned char*) Result.Buffer(),
Result.GetPayloadSize());
// retrieve a counter value from the chunk data, e.g. the line trigger counter
int64_t lineTriggerCounter = camera.ChunkLineTriggerCounter.GetValue();
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
// disable reset
camera.CounterResetSource.SetValue(CounterResetSource_Off);
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference. You can also use the Basler pylon Viewer application to easily set the
parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
The chunk mode must be made active before you can enable the time stamp
feature or any of the other chunk features. Making the chunk mode inactive
disables all chunk features.
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
[Figure: bit assignments within the input status at line trigger chunk (bits 3 to 0)]
The chunk mode must be active before you can enable the input status at line
trigger chunk or any of the other chunk features. Making the chunk mode inactive
disables all chunk features.
The maximum for the Height parameter value is 1024 if the input status at line
trigger chunk is enabled. Other conditions may further decrease the maximum
parameter value. For more information, see Section 8.1 on page 74.
You can set the ChunkSelector and ChunkEnable parameter values from within your application
software by using the pylon API. You can also run the parser and retrieve the chunk data. The
following code snippets illustrate using the API to activate the chunk mode, enable the input status
at line trigger chunk, run the parser, and retrieve the input status at line trigger chunk data for the
acquired line i:
camera.ChunkModeActive.SetValue(true);
camera.ChunkSelector.SetValue(ChunkSelector_InputStatusAtLineTrigger);
camera.ChunkEnable.SetValue(true);
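The retrieval could then look like this; the ChunkInputStatusAtLineTriggerIndex and ChunkInputStatusAtLineTriggerValue parameter names are assumptions:
// Run the chunk parser as shown in the frame counter chunk example, then
// select line i and read the input status present at its line trigger
camera.ChunkInputStatusAtLineTriggerIndex.SetValue(i);
int64_t inputStatus = camera.ChunkInputStatusAtLineTriggerValue.GetValue();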
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
The chunk mode must be made active before you can enable the time stamp
feature or any of the other chunk features. Making the chunk mode inactive
disables all chunk features.
For detailed information about using the pylon API, refer to the Basler pylon Programmer’s Guide
and API Reference.
You can also use the Basler pylon Viewer application to easily set the parameters.
For more information about the pylon API and the pylon Viewer, see Section 3.1 on page 26.
When a camera reset is carried out, all settings stored in the camera’s volatile
memory are lost.
If you want to preserve settings stored in the camera’s volatile memory, save them
as a user set before carrying out the camera reset. Some settings cannot be saved
in a user set, for example the settings for the luminance lookup table.
After issuing the camera reset command, perform the necessary follow-up steps, e.g. some
cleanup on the PC, in accordance with the DeviceRemovalHandling sample code that is included
in the pylon SDK documentation.
After the camera reset has been carried out, allow some time to elapse until the camera is detected
again.
For more information about the pylon API and SDK documentation, see Section 3.1 on page 26.
Revision History
AW00118301000 20 Jun 2012 Preliminary release of this document. Applies to prototype cameras only.
AW00118302000 09 Apr 2013 Initial release for series cameras.
AW00118303000 13 Sep 2013 Updated the contact information for Asia.
Updated some max. line rates in Section 1.2 on page 2.
Added bit depth to Fig. 1 in Section 1.3 on page 8.
Added a dimension value to Fig. 2 in Section 1.4.1 on page 9.
Modified dimension values in Fig. 4 on page 12 and Fig. 5 on page 13.
Updated the LZ4 licensing text in Section 1.5.2 on page 17.
Updated the title of document AW000611 in Section 2 on page 24 through
Section 4 on page 27.
Rearranged Section 3.1.3 on page 26.
Corrected the absolute maximum value for the Height parameter in Section
8.1 on page 73.
Modified the Line Acquisition While Obeying Timing Limits section in
Section 8.2.7 on page 111.
AW00118304000 04 Aug 2015 Minor corrections and modifications throughout the manual.
Updated the contact information on the back of the front page.
Corrected subpart J to subpart B in the FCC section at the back of the front
page.
Added references to the CE Conformity Declaration in Section 1.2 on page
2.
Updated the dimensions of the F-mount lens adapter in Section 1.2 on
page 2 and Section 1.4.3 on page 12.
Added precautions related to SELV and LPS in Section 1.8 on page 21.
Updated Chapter 3 on page 25 to reflect software update to Basler pylon
Camera Software Suite.
Corrected camera power requirements in Section 7.2.1 on page 51.
Updated the power cable drawing in Section 7.4.1 on page 54.
Added information about "low level" code and the Instant Camera classes
in Chapter 8 on page 73.
Added information about the AcquisitionLineRateAbs parameter in Section
8.2.4 on page 85.
Added minimum and maximum values for the Frame Timeout Abs
parameter in Section 8.2.3.4 on page 84.
Renamed GainSelector_All to GainSelector_DigitalAll in Section 10.1.1.2
on page 157.
Added the Auto Functions feature in Section 10.4 on page 163.
Added the Precision Time Protocol feature in Section 10.11 on page 188.
Added the Action Command feature in Section 10.12 on page 194.
AW00118306000 08 Sep 2016 Added the M42-mount FBD 45.46 and M58-mount adapters to the
specification tables in Section 1.2 on page 2, to the lens adapter drawings
in Section 1.5.3 on page 13, and to the table in Section 1.5.4 on page 16.
Added Section 1.3 on page 8 ("Accessories").
Added the ExposureOverlapTimeMaxAbs and the TargetGrayValue
parameters to the list of parameters whose limits can be removed in
Section 10.2 on page 161.
Index
A
acquisition frame count parameter 80
acquisition start overtrigger event 175
acquisition start trigger 77, 79
acquisition status indicator 120
acquisition status parameter 121
acquisition trigger wait signal 122
action command 195
action device key 196
action group key 196
action group mask 196
action selector 197
action signals 197
adjustment damping
    gray value ~ 173
analog gain 158
AOI
    see image area of interest
auto functions
    explained 164
    modes of operation 165
    target value 164
    using with binning 164
auto functions profile 174

B
bandwidth assigned parameter 40
bandwidth, managing 41
binning 181
bit depth 2, 4, 6
black level
    mono cameras 160
block diagram 49
Broadcast address 197
bus 59

C
cables
    Ethernet 56
    I/O 56
    power 55
camera events 177
camera power
    nominal voltage 57
    required voltage 57
camera power requirements 57
camera reset 236
camera restart 236
chunk dynamic range max parameter 221
chunk dynamic range min parameter 221
chunk enable parameter 223, 225, 228, 231, 232, 234
chunk encoder counter parameter 231
chunk features, explained 220
chunk frame counter parameter 223
chunk frame trigger counter parameter 229
chunk frame trigger ignored counter parameter 229
chunk frames per trigger counter parameter 229
chunk height parameter 221
chunk input status at line trigger parameter 232
chunk line trigger counter parameter 229
chunk line trigger end to end counter parameter 229
chunk line trigger ignored counter parameter 229
chunk mode 221
chunk mode active parameter 221
chunk offset x parameter 221
chunk parser 221, 223, 225, 229, 231, 232, 234
chunk pixel format parameter 221
chunk selector 228, 231, 232
chunk time stamp parameter 225
chunk width parameter 221
cleaning the camera and sensor 24
C-mount adapter 13
code snippets, proper use 23
configuration set loaded at startup 219
configuration sets 217–219
conformity 3, 5, 7
connector types 54
connectors 50
CPU interrupts 42
CRC checksum 234

D
damping
    gray value adjustment ~ 173
debouncer 62
default shading set file 184
default startup set 219
device firmware version parameter 214
device ID parameter 214
device manufacturer info parameter 214
device model name parameter 214
device scan type parameter 214
device user ID parameter 214
device vendor name parameter 214
device version parameter 214
digital gain 158
dimensions 2, 4, 6, 10
drivers, network 28
DSNU
    see offset shading correction
dust 22

E
earth 19
electromagnetic interference 19
electrostatic discharge 19
EMI 19
enable resend parameter 29, 31
encoder counter chunk 231
environmental requirements 20
ESD 19
event overrun event 175
event reporting 175
exposure
    extension 115
    overhead 113, 161
    overlapped 112
    premature end 114
exposure active signal 120
exposure auto 171
exposure modes
    timed 117
    trigger width 117
exposure overhead 132, 134
exposure start delay 89
exposure time
    extension 115
    maximum 91
    minimum 91
    setting 92
exposure time abs parameter 93
exposure time control modes
    timed 89, 127
    trigger width 88, 128
exposure time parameters 92
exposure time raw parameter 92
extended frame data 221

F
filter driver 28
F-mount adapter 14
frame counter 223
frame counter chunk
    reset 224
frame counter reset, synchronous 200
frame retention parameter 29
frame size 75
frame start overtrigger event 175
frame start trigger 77, 82
frame start trigger activation parameter 83
    falling edge 83
    level high 83
    level low 83
    rising edge 83
frame start trigger mode parameter 82
frame start trigger source parameter 82
frame timeout 85
frame timeout event 85, 175
frame transmission delay parameter 40
frame trigger counter 226
frame trigger ignored counter 226
frame trigger wait signal 124
frames per trigger counter 226
free run 94, 96
free run, synchronous 204
frequency converter 145
functional description 47
functional earth 19, 50

G
gain
    analog 158
    digital 158
    mono cameras 157
gain auto 170