PI OPCInt 2.5.0.9a
User Guide
OSIsoft, LLC
777 Davis St., Suite 250
San Leandro, CA 94577 USA
Tel: (01) 510-297-5800
Fax: (01) 510-357-8136
Web: http://www.osisoft.com
The following configuration places the OPC server on its own computer.
Related manuals
For details about related products and technologies, refer to the following OSIsoft manuals:
• PI Server manuals
• PI API Installation Manual
• PI OPCClient User’s Guide
• PI Interface Configuration Utility User Manual
• UniInt Interface User’s Guide
• DCOM Security and Configuration Guide
• Time stamps
• Logging
• Buffering
• Failover
Interface startup
The OPC interface is started using a Windows batch file that invokes the OPC interface
executable and specifies settings using command-line parameters. To ensure a correctly-
formatted batch file, do not edit the batch file manually; use PI ICU. For a complete list of
UniInt startup parameters, see the UniInt Interface User Manual.
Time stamps
The PI OPC interface can use the time stamps provided by the OPC server or create its own
time stamps at the time that the data is received. Time stamps coming from the OPC server are
in Coordinated Universal Time (UTC), and are sent to the PI system in UTC as well.
If the OPC server provides time stamps, you can use PI ICU to configure the behavior of the PI
OPC interface as follows:
Option | Description | Timestamp Offset Applied
Interface Provides Time (default; /TS=N) | The PI OPC interface time stamps each value as it is received. Choose this option if the OPC server cannot provide time stamps or you do not want to use the time stamps returned by the OPC server. | Difference between the PI Server node and the interface node.
For details about reading and writing time stamps from a PI point when the time stamp is the
value of the point, see Time stamps.
Logging
The PI OPC interface logs messages about its operation in the local PI message log file. The
following information is logged:
Buffering
Buffering is temporary storage of the data that the PI OPC interface collects and forwards to
the PI Server. To ensure that you do not lose any data if the PI OPC interface cannot
communicate with the PI Server, enable buffering. The PI SDK installation kit installs two
buffering applications: the PI Buffer Subsystem (PIBufss) and the PI API Buffer Server
(BufServ). PIBufss and BufServ are mutually exclusive; that is, on a particular computer, you
can run only one at a time. For details about configuring buffering, refer to the PI Buffering User
Guide.
To ensure data integrity, enable buffering even if the PI OPC interface runs on the PI server
node, because the OPC server sometimes sends data in bursts, with all values arriving within
the same millisecond. To ensure that the interface and buffering restart when the interface
node is restarted, configure both as Windows services.
To assign the scan class for a point, set Location4. Do not assign the same scan class to both
advise and polled points; use separate scan classes.
Polled points
Polled PI points are grouped by scan class and, if possible, groups are read at the rate
configured for the scan class of the point. However, the OPC server determines its own update rate
for scanning its data sources, and you can configure the update rate manually (using PI ICU).
The PI OPC interface requests the OPC server to use an update rate identical to the scan class,
but the OPC server does not guarantee that the rates match. The PI scan class offset has no
effect on the OPC server, unless the interface is configured for staggered group activation and
the OPC server uses the activation of the group to initiate the scanning cycle.
For details about polled points, see the Data Access Custom Interface Standard v2.05a from the
OPC Foundation.
Advise points
Advise points are sent to the PI OPC interface by the OPC server only when a new value is read
into the server’s cache. Scan class 1 is reserved for advise points, and you can create additional
scan classes for advise points as required. Be sure that the scan rate is fast enough to capture
all the changes from the data source. The default maximum number of points in scan class 1 is
800. Up to 800 points with the same deadband can reside in the same group. If there are more
than 800 points with the same deadband in scan class 1, the OPC interface creates as many
groups as needed. (For best performance, ensure that group size does not exceed 800
items.) To change the default limit, use PI ICU to set the Number of Tags in the advise group
field on the OPCInt > Data Handling page. Your server might perform better with smaller
group sizes; a limit of 200 points per group has proven effective with a number of OPC servers.
Event points
Event points are read by the PI OPC interface when it receives notification that a trigger point
has a new event. The PI point that triggers the read is specified in the event point’s ExDesc
attribute. When a new event for a trigger point is sent to the PI Snapshot, the PI system notifies
the PI OPC interface, which reads the values for all the associated event points from the OPC
server. For v1.0a servers, an asynchronous read is sent to the server’s cache. For v2.0 servers,
the PI OPC interface performs an asynchronous read from the device.
To configure event points, specify 0 for the scan class. To assign event points to the same OPC
event group (so they are read together), specify the same integer in each point's UserInt2
attribute. Set each event point’s ExDesc attribute to the name of the triggering point. For
details about configuring event points, refer to the UniInt Interface User Manual.
Frequent device reads can impair the performance of the OPC server. For any asynchronous
read, the OPC server is required to return all of the values together, which can delay the return
of new values to the PI Server if the OPC server encounters a delay in reading the values. To
improve performance in this case, group points according to the device where the data
originates.
If your OPC server does not permit clients to specify a data type, set Location2 to 8 for all
your OPC PI points. Use with caution: The OPC interface might receive data for which no
reasonable conversion is possible. Where possible, always specify the OPC data type that
matches the PI point.
• Booleans
Some OPC servers send Boolean values as 0 and -1 when read as integers. This approach
creates a problem when reading that data into a PI digital point, because "-1" is not the
value that must be stored. To handle the data from such servers, the OPC interface uses the
absolute value of any integer or real values read for digital points. Because digital point
values are actually offsets into the digital set for the point, and a negative offset has no
functional meaning, this issue does not cause problems for properly-written servers.
The PI OPC interface can also request the item as a Boolean (VT_BOOL). This approach
works only for items that have two possible states, because any non-zero value is
interpreted as 1. To have points read and written as though they were Booleans, set
Location2 to 2.
• Float64 values
To handle eight-byte floating-point numbers (VT_R8), set the Location2 of the target point
to 5. PI stores the value as a four-byte floating-point number, with possible loss of precision.
If the number is too large to fit in the point, a status of BAD INPUT is stored.
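The two data-type behaviors above can be sketched in Python. This is an illustrative sketch only, not interface source code; the function names are invented for this example.

```python
import struct

def to_digital_offset(value):
    """Digital points: the interface uses the absolute value, so a server
    that sends TRUE as -1 still yields a valid digital-set offset of 1."""
    return abs(int(value))

def to_boolean(value):
    """Location2 = 2 (VT_BOOL): any non-zero value is interpreted as 1."""
    return 1 if value else 0

def to_float32(value):
    """Location2 = 5 (VT_R8): round-trip an eight-byte float through
    four-byte IEEE 754 storage, as happens when the value is stored in a
    four-byte PI float point. Precision can be lost, and struct raises
    OverflowError where the interface would store BAD INPUT."""
    return struct.unpack('<f', struct.pack('<f', value))[0]
```

For example, `to_float32(0.1)` no longer compares equal to `0.1`, illustrating the precision loss noted above.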
Time stamps
The PI OPC interface does not adjust the time stamps it receives, regardless of the time zone
settings or /TS parameter specified on the command line. Any scaling or transformation is
performed after the string has been translated into seconds, which enables a wide range of
values to be handled.
Token | Description
ccyy | Four-digit year (century and two-digit year)
mn | Two-digit month
mon | Three-character month (Jan, Feb, Mar, and so on)
dd | Two-digit day
hh | Two-digit hour from 0 to 23
hr | Two-digit hour from 0 to 12
mm | Two-digit minute
ss | Two-digit second
000 | Three-digit milliseconds
XM | AM or PM
The position of the tokens and delimiters must specify the format of the time stamp string
precisely. Examples:
Format String | Result
ccyy/mn/dd hh:mm:ss.000 | 1998/11/29 15:32:19.391
dd mon, ccyy hr:mm:ss XM | 29 Nov, 1998 03:32:19 PM
mn-dd-ccyy hh:mm:ss | 11-29-1998 15:32:19
hh:mm:ss.000 | 15:32:19.482
Only one format string can be specified for each instance of the PI OPC interface. If more
than one format of time stamp needs to be processed, configure additional instances of the
PI OPC interface with the required format string.
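As a rough way to check a format string locally, the tokens above can be mapped onto Python's strptime directives. This mapping is a hypothetical helper for this guide, not part of the interface.

```python
from datetime import datetime

# Hypothetical helper: translate the interface's time-stamp format tokens
# into strptime directives. Replacement order matters (ccyy before mn, etc.).
TOKENS = [('ccyy', '%Y'), ('mon', '%b'), ('mn', '%m'), ('dd', '%d'),
          ('hh', '%H'), ('hr', '%I'), ('mm', '%M'), ('ss', '%S'),
          ('000', '%f'), ('XM', '%p')]

def to_strptime(fmt):
    for token, directive in TOKENS:
        fmt = fmt.replace(token, directive)
    return fmt

# First example from the table above:
ts = datetime.strptime('1998/11/29 15:32:19.391',
                       to_strptime('ccyy/mn/dd hh:mm:ss.000'))
```

Note that `%f` parses the three-digit milliseconds as microseconds (391 becomes 391000), so the sketch is only a sanity check, not an exact emulation.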
If you omit elements of the format strings, the defaults are as follows ("current" values are
GMT):
Format String Element Omitted | Default
Day | Current day
Month | Current month
Year | Current year
Century | Current century
Note:
If you specify only hours, minutes and seconds, the date defaults to January 1, 1970.
To ensure accurate timestamps, be sure to specify all the elements of the timestamp
format. If the OPC server returns a zero value for day, month or year, the interface
applies the defaults described above, regardless of the format string you specify.
Scaling
To configure scaling for a PI OPC point, set the TotalCode and SquareRoot attributes of the
point. The Convers attribute specifies the span of the device, and the ExDesc specifies the
device zero (Dzero). Using these values, the PI OPC interface can translate a value from the
scale of the device to the scale of the point. Scaling is only supported for numeric points.
For simple square/square root scaling, set TotalCode and Convers to zero. To configure how
the value is stored, set SquareRoot as follows:
• To square a value before sending it to PI Server, set SquareRoot to 1. For output values, the
square root is calculated before it is written to the device.
• To send the square root to PI Server and the square to the device, set SquareRoot to 2.
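The simple square/square-root case (TotalCode and Convers both zero) can be sketched as follows. This is an illustration only; the function names are invented, and the real interface applies these transformations internally.

```python
import math

def scale_input(value, square_root):
    """Value read from the OPC server, transformed before it goes to PI."""
    if square_root == 1:
        return value ** 2          # SquareRoot = 1: square the input value
    if square_root == 2:
        return math.sqrt(value)    # SquareRoot = 2: square root of the input
    return value

def scale_output(value, square_root):
    """Inverse direction: value from the PI source point, transformed
    before it is written to the device."""
    if square_root == 1:
        return math.sqrt(value)
    if square_root == 2:
        return value ** 2
    return value
```

Note that the two directions are inverses: an input squared on the way in is square-rooted on the way out.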
Transformation
To transform the value to another scale of measurement, to apply an offset or conversion
factor, or to perform bit masking, configure the settings as shown in the following table. If
SquareRoot is set to 1 or 2, the square root or square of the value is calculated first, then the
formula is applied.
Convers | TotalCode | SquareRoot | Dzero | Operation (input points) | Operation (output points)
0 | 0 | 1 | No effect | (Value)^2 | (Value)^0.5
0 | 0 | 2 | No effect | (Value)^0.5 | (Value)^2
Non-zero | 1 | 0 | Defined | [(Value – Dzero) / Convers] * Span + Zero | [(Value – Zero) / Span] * Convers + Dzero
Non-zero | 1 | 1 | Defined | [((Value)^2 – Dzero) / Convers] * Span + Zero | [((Value)^0.5 – Zero) / Span] * Convers + Dzero
Non-zero | 1 | 2 | Defined | [((Value)^0.5 – Dzero) / Convers] * Span + Zero | [((Value)^2 – Zero) / Span] * Convers + Dzero
Non-zero | 2 | 0 | No effect | Value * Convers | Value / Convers
Non-zero | 2 | 1 | No effect | (Value)^2 * Convers | (Value)^0.5 / Convers
Non-zero | 2 | 2 | No effect | (Value)^0.5 * Convers | (Value)^2 / Convers
Non-zero | 3 | 0 | Defined | (Value / Convers) – Dzero | (Value + Dzero) * Convers
Non-zero | 3 | 1 | Defined | ((Value)^2 / Convers) – Dzero | ((Value)^0.5 + Dzero) * Convers
Non-zero | 3 | 2 | Defined | ((Value)^0.5 / Convers) – Dzero | ((Value)^2 + Dzero) * Convers
Non-zero | 4 | 0 | Defined | (Value – Dzero) / Convers | (Value * Convers) + Dzero
Non-zero | 4 | 1 | Defined | ((Value)^2 – Dzero) / Convers | ((Value)^0.5 * Convers) + Dzero
Non-zero | 4 | 2 | Defined | ((Value)^0.5 – Dzero) / Convers | ((Value)^2 * Convers) + Dzero
Non-zero | 5 | 0 | No effect | Value + Convers | Value – Convers
Non-zero | 5 | 1 | No effect | (Value)^2 + Convers | (Value)^0.5 – Convers
Non-zero | 5 | 2 | No effect | (Value)^0.5 + Convers | (Value)^2 – Convers
Non-zero | 6 | No effect | No effect | Value AND Convers | Value AND Convers
Non-zero | 7 | No effect | No effect | Value OR Convers | Value OR Convers
Non-zero | 8 | No effect | No effect | Value = Value XOR Convers | Value = Value XOR Convers
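To make the table concrete, a few of the input-direction formulas can be written out in Python. This is a hedged sketch under the assumption that any SquareRoot transformation has already been applied; it is not interface source code, and `transform_input` is an invented name.

```python
def transform_input(value, total_code, convers, dzero=0.0, zero=0.0, span=0.0):
    """Input-direction formulas for several TotalCode settings from the
    table. Zero and Span are the PI point's zero and span attributes."""
    if total_code == 1:
        return (value - dzero) / convers * span + zero   # device scale -> point scale
    if total_code == 2:
        return value * convers                           # multiplier
    if total_code == 3:
        return value / convers - dzero                   # divide, then offset
    if total_code == 4:
        return (value - dzero) / convers                 # offset, then divide
    if total_code == 5:
        return value + convers                           # additive offset
    if total_code == 6:
        return int(value) & int(convers)                 # bit mask: Value AND Convers
    raise ValueError('TotalCode not covered in this sketch')
```

For example, with TotalCode 1, a device reading of 50 on a 0-100 device scale maps onto a 0-10 point scale as 5.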
Quality states
Quality data is returned using a bit mask. The first number corresponds to a hexadecimal value
between 0xC0 (11000000) and 0xFF (11111111). The following tables list the values that are
returned.
Good quality
Quality | OPC Definition | PI Status
11SSSSLL (except 110110LL) | Non-specific | Good
110110LL | Local Override | _SUBStituted
Questionable quality
Quality | OPC Definition | PI Status
010110LL | Sub-Normal | Bad_Quality
010101LL | Engineering Units Exceeded | LL=01 (Low Limited): Under LCL; LL=10 (High Limited): Over UCL; otherwise: Inp OutRange
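The bit layout behind these tables can be decoded as follows. This is an illustrative sketch of the standard OPC DA quality word (QQSSSSLL: major quality, substatus, limit status); the constant and function names are this example's own.

```python
# OPC DA quality byte layout: QQSSSSLL
#   QQ   (bits 7-6) = major quality
#   SSSS (bits 5-2) = substatus
#   LL   (bits 1-0) = limit status
GOOD, UNCERTAIN, BAD = 0b11, 0b01, 0b00

def major_quality(quality):
    return (quality >> 6) & 0b11

def substatus(quality):
    return (quality >> 2) & 0b1111

def limit_status(quality):
    return quality & 0b11
```

For example, 0xD8 (11011000) decodes as good quality with the Local Override substatus (0110), which the interface maps to _SUBStituted.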
To replace the default PI digital states with custom states using PI ICU, go to the OPCInt > Data
Handling page and set the Alternate Digital State for Questionable/Bad Qualities field. To
override the default states, you must specify the full set of replacements, and the numeric
values must be contiguous. The following table lists the digital states and PI statuses that you
can override.
Custom digital states
Order After Marker State | Default PI status
1 | Bad_Quality
2 | Under LCL
3 | Over UCL
4 | Inp OutRange
5 | Under Range
6 | Over Range
7 | Invalid Data
8 | Bad Input
9 | No_Sample
10 | Doubtful
11 | Out of Service
12 | Comm Fail
Failover
The PI OPC interface is designed to provide redundancy for both the OPC server and the PI OPC
interface, as follows.
• Server-level failover
The PI OPC interface can be configured to change to another OPC server when a problem is
detected.
• Interface-level failover
To ensure against data loss, you can run two instances of the PI OPC interface on different
machines. If the primary instance fails, the backup instance can take over.
• Installation prerequisites
• Create trusts
• Enable buffering
The %PIHOME% directory, which is the root directory under which OSIsoft products are
installed, is defined by the PIHOME entry in the pipc.ini configuration file in the %windir%
directory. To override the default locations, edit the pipc.ini configuration file.
Note:
Reserve the C: drive for the operating system and install the interface on another drive.
The PI OPC interface installation directory contains all the files required to configure and run
the interface. The OPCEnum tool, which discovers OPC servers, is installed in the %windir%
\system32 directory, except on 64-bit systems, where it is installed in %windir%\SysWOW64.
The OPC libraries, provided by the OPC Foundation and installed with the PI OPC interface, are
installed in the same directory as OPCEnum.
Installation prerequisites
Before installing and configuring, ensure that the following prerequisites are met:
Create trusts
When creating trusts, you have many options. Following is a simple and secure approach,
creating a trust for the following applications:
• PI OPC interface
• PI Interface Configuration Utility (ICU)
• Buffering
To create each of these trusts using PI System Management Tools, connect to the PI Server and
perform the following steps:
Procedure
1. Click Security and choose Mappings & Trusts.
2. On the Trusts tab, right-click and choose New Trust. The Add Trust wizard appears.
3. Specify a meaningful name and description for the trust.
4. Configure settings as follows:
Procedure
1. On the OPC interface node, run the PI OPC interface setup program.
For details on the file and directory structure, see Installation directory and file locations.
2. On the OPC interface node, test the API connection to the PI Server: In the %PIPC%\bin
directory, issue the apisnap PISERVERNODE command.
3. On the OPC interface node, test the SDK connection to the PI Server: Choose Start > All
Programs > PI System > About PI-SDK and use File > Connections to connect to the PI
Server.
4. Create an OPC data owner (that is, a PI user with read/write access to the PI points that this
interface will be using).
5. On the server node, use PI SMT to create API trusts that permit the following applications to
access the server node:
◦ PI ICU
◦ Buffering process
◦ OPC interface (application name: OPCpE)
For details, see Create trusts.
6. On the OPC interface node, use PI ICU to create a new OPC interface instance from the OPC
interface batch file (OPCint.bat_new).
◦ General
Setting Value
Point source OPC (or an unused point source of your choice)
Scan classes As desired. (Scan class 1 is reserved for advise points.)
Interface ID 1 or any unused numeric ID
◦ Configure diagnostics
Diagnostic points enable you to track the activity and performance of the OPC interface.
Note that the OPC interface does not support scan class performance points. For details
about diagnostic and health points, see the UniInt Interface User Manual.
◦ Configure failover
Failover enables the PI System to switch to another instance of the OPC interface if the
currently running instance fails. For details, see Configuring failover for the PI OPC DA
interface.
Procedure
1. Launch PI ICU.
2. Choose Interface > New from BAT file
3. Browse to the directory where the PI OPC interface is installed (default is %PIPC%
\Interfaces\OPCInt), select OPCInt.bat_new and click Open. The Select PI Host Server
dialog box is displayed.
4. Specify the PI Server and click OK. PI ICU displays the settings of the new instance of the PI
OPC interface.
5. Edit the basic settings as follows:
▪ Point source
OPC or a point source not already in use by another interface
▪ ID
1 or a numeric ID not already in use by another instance of the interface
▪ Scan Class
Set to desired scan frequency. (Scan class 1 is reserved for advise tags.) Note that,
when defining scan classes, you can spread the server workload using offsets.
◦ OPCInt tab
Click the List Available Servers button, then select your server from the drop-down list
of servers. If the server resides on another machine, specify the node name or IP address
in the Server Node field before listing the available servers.
Procedure
1. To display the message log, launch PI System Management Tools and choose the Operation
> Message Logs menu option.
2. To start the interface using PI ICU, choose Interface > Start Interactive.
PI ICU displays a command window and invokes the startup batch file, and you can observe
progress as the interface attempts to initialize and run.
3. Watch the log for messages indicating success or errors.
4. To stop the interface, close the command window.
Procedure
1. Launch PI ICU and click the Service tab in the PI ICU window.
2. Set the fields as described in the following table.
Field Description
Service name Descriptive name of the PI OPC interface service.
ID Numeric ID of the PI OPC interface instance. Must be unique for each instance.
Display name The service name displayed in the Windows Services control panel. The default
display name is the service name with a PI- prefix. You can override the default. To
ensure that OSIsoft-related services are sorted together in the Services control
panel, retain the PI- prefix.
Log on as The Windows account associated with the service. The account must have DCOM
permissions configured on the OPC server. Set password expiration to Never.
Password Password, if any, for the preceding user.
Dependencies Any services that the OPC interface depends on. The only dependency is the TCP/IP
service, which is pre-configured. If buffering is enabled, you are prompted to create
a dependency on the buffering service.
Startup type Specifies whether the PI OPC interface service starts automatically when the
interface node is restarted. Generally, PI OPC interface services are set to start
automatically.
Procedure
• To verify that the service is running, use the Windows Services applet in Control Panel.
• To stop the service, click .
Enable buffering
Procedure
1. In PI ICU, choose Tools > Buffering. The Buffering dialog box appears.
2. Click Enable buffering with PI Buffer Subsystem.
3. To start the buffering service, click PI Buffer Subsystem Service, then click .
4. To verify that buffering starts successfully, check the message log for messages that indicate
that the buffering application is connected to PI Server.
5. To verify that the configuration is working as intended, reboot the interface node and
confirm that the interface service and the buffering application restart.
• Tag name
• Point source
• Data type
• Interface instance
• Tag type
• Scan class
• Instrument tag
Depending on the type of point you are creating, a few other settings might be required. The
following sections describe basic PI point settings in more detail.
• SourceTag
• TotalCode
• SquareRoot
• Convers
• Scan
• Shutdown
• Exception processing
• Output points
• Event points
Procedure
1. Start PI OPCClient and connect to your OPC server.
2. To select the OPC points you want to export, create a group (click ) and add the desired
points to it.
3. Choose File > Save As and specify the name and location of the export file.
4. Click Save.
PI OPCClient creates a .csv file containing the OPC points you selected.
5. In PI SMT, start Microsoft Excel by choosing Tools > Tag Configurator.
6. In Microsoft Excel, open the .csv file that contains the exported OPC points.
7. Examine the generated entries to ensure that the desired points are listed. If any entries
have Unknown in the PointType column, specify the desired data type for the point.
8. To generate the PI points, choose PI SMT > Export Tags. The Export PI Tags window appears.
9. Choose the target PI Server and click OK.
10. Examine the list of results to verify that the PI points are created.
If your PI Server is earlier than version 3.4.370.x or the PI API version is earlier than 1.6.0.2
and you want to create points with names that exceed 255 characters, you must enable the PI
SDK.
Note:
If the source point name length exceeds 80 characters, you must use the UserInt1
attribute for source point mapping, due to a limitation of the PI API.
For an advise point, the PI OPC interface registers for updates with the OPC server, and the
OPC server sends new values to the PI OPC interface (at a rate not exceeding the update rate
for the group.)
Specify scan frequency and optional offset using the following format:
HH:MM:SS.##,HH:MM:SS.##
Examples:
/f=00:01:00,00:00:05 /f=00:00:07
or, equivalently:
/f=60,5 /f=7
If you omit HH and MM, the scan period is assumed to be in seconds. Sub-second scans are
specified as hundredths of a second (.01 to .99).
To define a time of day at which a single scan is performed, append an L following the time:
HH:MM:SS.##L
The OPC standard does not guarantee that data is scanned at the rate that you specify for a
scan class. If the OPC server does not support the requested scan frequency, the frequency
assigned to the class is logged in the pipc.log file. If the interface workload is heavy, scans
can occur late or be skipped. For more information on skipped scans, see the UniInt Interface
User Manual.
Scanning offsets
To mitigate the interface and OPC server workload, you can use the offset to stagger scanning.
If an offset is specified, scan time is calculated from midnight on the day that the interface was
started, applying any offset specified. In the above example, if the interface was started at
05:06:06, the first scan occurs at 05:07:05, the second scan at 05:08:05, and so on. If offset is
omitted, scanning is performed at the specified interval, regardless of clock time.
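The arithmetic in this example can be checked with a short sketch. The `first_scan` helper is invented for illustration; it computes scan times as midnight plus a whole number of periods plus the offset.

```python
import math

def first_scan(start_seconds, period, offset=0):
    """Seconds after midnight of the first scheduled scan at or after
    interface startup, given scan times at midnight + k*period + offset."""
    k = max(0, math.ceil((start_seconds - offset) / period))
    return k * period + offset

start = 5 * 3600 + 6 * 60 + 6                    # started at 05:06:06
first = first_scan(start, period=60, offset=5)   # /f=00:01:00,00:00:05
```

Here `first` works out to 05:07:05, matching the example above; each later scan falls one minute after the previous one.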
Offsets determine when the interface asks the OPC server for the current values for polled
classes. They do not control the behavior of the OPC server, and have no effect on advise
classes unless the /GA parameter is specified to stagger the activation of groups. In this case,
the offsets are used to time the activation of all groups except for scan class 1 (which is
reserved for advise tags).
Update rates
The OPC server reads data from the device according to the update rate for the group in which
the item resides. By default, the update rate is the same as the scan rate. To override the
default using PI ICU, browse to the OPCInt > OPC Server > Advanced Options window and
enter the desired update rates in the Update Rates section. (/UR).
For polled groups, configuring an update rate that is shorter than the scan period can ensure
that the interface is receiving current data. For example, if the scan period is five seconds but
the update rate is two seconds, the data is no more than two seconds old when it is read.
However, note that a faster update rate increases the OPC server workload.
For advise groups, assign identical update and scan rates, with one exception: if you are using
UniInt Failover Phase 1, then to ensure that the interface sees new values for failover heartbeat
tags as soon as possible, set the update rate to half the scan period. This configuration reduces
the risk of thrashing, where control switches back and forth needlessly. Dedicate a scan class
with faster update rate to the failover heartbeat tags. OSIsoft recommends using Phase 2
failover instead.
For example, if the OPC server point is defined as analog with an EuMin of -10 and an EuMax of 10, and
Location5 contains 2500 (meaning 25%), data is sent to the PI OPC interface only when the
difference between the new value and the old value is at least 5 (25% of 20 = 5). PI exception
processing continues to be applied to the values received by the interface. The deadband only
affects the values sent by the OPC server.
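The deadband arithmetic in this example can be written out as follows. This is an illustrative sketch; the helper name is invented, and the 1/10000 scaling is inferred from the example's statement that Location5 = 2500 means 25%.

```python
def deadband_threshold(eu_min, eu_max, location5):
    """Minimum change required before the OPC server sends a new value.
    Location5 holds the deadband in hundredths of a percent of span
    (2500 = 25%), per the example above."""
    span = eu_max - eu_min
    return (location5 / 10000.0) * span

threshold = deadband_threshold(-10, 10, 2500)   # 25% of a span of 20
```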
◦ Tim: The time stamp is written as a string (VT_BSTR), formatted as configured for the PI
OPC interface instance (/TF)
◦ Dat: The time stamp is written as a VT_DATE.
VT_DATE is a universal (UTC) format that does not depend on the time zone or daylight
savings time setting. For VT_BSTR, the time stamp comes from the PI Server and is not
adjusted for differences in time zone or daylight savings time setting. In error messages
related to this time stamp ItemID, the PI OPC interface reports a generated tag name of
the form TS:xxxxxx, where xxxxxx is the name of the PI output tag.
If you use this attribute to specify more than one setting, put a comma between the definitions.
By default, leading and trailing spaces are stripped from entries in this attribute. To preserve
leading and trailing spaces, enclose your entry in double quotes.
SourceTag
For output points (points that write data to the OPC server), this attribute specifies the PI point
from which data is read. See Output points for more information.
TotalCode
This attribute contains a code that specifies how the value is to be scaled. TotalCode is used in
conjunction with the SquareRoot, Convers, and ExDesc attributes. See Transformations and
scaling for details.
SquareRoot
Specifies that the square or square root of the value is to be used. See Transformations and
scaling for details.
Convers
For scaled tags, this attribute contains the device span. The device item can have a zero and a
span, which define the actual span of values that the device sends. The PI OPC interface can use
these two values to translate the units used by the device to the units defined for the PI point.
The Convers attribute can also contain an offset or multiplier. See Transformations and
scaling for details.
returned by the OPC server. If it is an array item, the type of the value is VT_ARRAY |
VT_other, where VT_other is a data type such as VT_R4 or VT_I2. The values in the array are
sent as one data item and they all have the same data type.
PI Server does not support PI points with an array type, so values must be assigned to a
number of individual PI points. The first value in the array maps to the PI point that has
UserInt1 set to 1, the second to the tag with UserInt1 set to 2, and so on. If these values need
to be processed as different data types, use the Location2 attribute for the PI point with
UserInt1=1 and the settings for scaling and transformation for each individual point to
configure how the PI OPC interface handles the individual value. The PI OPC interface receives
the data using the data type specified by the Location2 value for the point with UserInt1=1,
then processes the value according to how the individual point is configured. Note that some
servers cannot provide array data using any data type other than the canonical data type (the
one displayed in the PI OPCClient if you omit data type). For those servers, you must either use
a PI tag with the correct data type, or set Location2 to 8 to configure the interface to ask for
the canonical data type. For maximum efficiency, always use the canonical data type.
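The fan-out from one array item to individual PI points can be sketched as follows. The point names here are hypothetical, and the function is an illustration of the mapping rule, not interface code.

```python
def fan_out(array_values, points_by_userint1):
    """Map each array position to its PI point: the first array value goes
    to the point with UserInt1 = 1, the second to UserInt1 = 2, and so on."""
    result = {}
    for index, point_name in points_by_userint1.items():
        if 1 <= index <= len(array_values):
            result[point_name] = array_values[index - 1]
    return result

# Hypothetical points configured with UserInt1 = 1 and 2:
points = {1: 'Unit1.Array.001', 2: 'Unit1.Array.002'}
values = fan_out([4.2, 7.9], points)
```

A point whose UserInt1 exceeds the array length simply receives no value in this sketch.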
Scan
This attribute enables or disables data collection for the PI point. By default, data collection is
enabled (Scan is set to 1). To disable data collection, set Scan to 0. If the Scan attribute is 0
when the interface starts, the interface does not load or update the point. If you enable
scanning while the interface is running, the time required for data collection to start depends
on how many points you enable, because they are processed in batches. For efficiency, if you
need to enable scanning for a large number of points, stop and restart the interface. If a point
that is loaded by the interface is subsequently edited so that the point is no longer valid, the
point is removed from the interface and SCAN OFF is written to the point.
Shutdown
By default, the PI Shutdown subsystem writes the SHUTDOWN digital state to all PI points when
PI Server is started. The time stamp that is used for the SHUTDOWN events is retrieved from a
file that is updated by the snapshot subsystem. The time stamp is usually updated every 15
minutes, which means that the time stamp for the SHUTDOWN events is accurate to within 15
minutes in the event of a power failure. For additional information on shutdown events, refer
to PI Server manuals.
Note:
The SHUTDOWN events that are written by the PI shutdown subsystem are independent of
the SHUTDOWN events that are written by the interface.
To prevent SHUTDOWN events from being written when PI Server is restarted, set the Shutdown
attribute to 0. To configure the PI shutdown subsystem to write SHUTDOWN events only for PI
points that have their Shutdown attribute set to 1, edit the \\PI\dat\Shutdown.dat file, as
described in PI buffering documentation.
Exception processing
The ExcMax, ExcMin, and ExcDev parameters control exception reporting in the PI OPC
interface. To turn off exception reporting, set ExcMax, ExcMin, and ExcDev to 0. See the UniInt
Interface User Manual for more information about exception processing.
• ExcMax
This attribute configures the maximum time period allowed between sending values to PI
Server. This setting applies to both advise and polled tags. For advise tags, if the PI OPC
interface does not receive a value after the specified number of seconds and does not detect
a dropped connection, it sends the last value received to the PI server with the time stamp
set to the current time. For polled tags, the interface sends a value to PI Server if it has not
sent one in the last ExcMax seconds, even if the new value does not pass ExcDev tests.
• ExcMin
This attribute configures the minimum time period between values sent to PI Server.
• ExcDev
This attribute configures the minimum change from the last value sent to PI Server required
for the PI OPC interface to send a new value.
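The interaction of the three attributes can be sketched as a single decision function. This is illustrative only; the actual UniInt exception logic has additional cases, and the comparison thresholds here are assumptions.

```python
def should_send(new_value, now, last_sent_value, last_sent_time,
                exc_max, exc_min, exc_dev):
    """Decide whether a new value passes exception reporting."""
    elapsed = now - last_sent_time
    if elapsed >= exc_max:
        return True                      # ExcMax forces a value through
    if elapsed < exc_min:
        return False                     # ExcMin suppresses rapid sends
    return abs(new_value - last_sent_value) >= exc_dev
```

Note that setting ExcMax, ExcMin, and ExcDev all to 0 makes the first test always true, so every value is sent, matching the statement above that zeros turn off exception reporting.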
Output points
Output points send data from the PI server to the OPC server. Note that only good values can
be sent. System digital states are not sent to OPC items. To configure an output point, edit the
point using PI Point Builder and specify the following settings:
• Set Location1 to the ID of the PI OPC interface instance (/ID).
• Set Location3 to 2.
• Specify the ItemID (the OPC item to be written).
• Optional: Specify the source point (the PI point that contains the value to be written to the
OPC server). Not required if you intend to send the value directly to the output item without
copying the values and time stamps to a PI point.
• Optional: Set Location4 to the desired output group. Output points with Location4 set to
0 are distributed across output groups for load balancing.
There are two mechanisms for triggering an output: configuring a separate source point, and
writing new values to a snapshot, as described in the following sections.
If the interface does not succeed in updating the OPC item, it writes a digital state
that describes the error to the output point. For output points, a success status indicates that
the OPC server item has been updated, but there is no guarantee that the corresponding data
source has been updated. To verify that the data source has been updated, create a
corresponding input point and add logic to ensure that the values of the input and output
points match.
The PointSource of the output point must match the PointSource of the interface instance,
but the source point can be associated with any point source. The data type of the source point
must be compatible with that of the output point.
No source point
To use the same PI point as the source and the output point, leave the SourceTag attribute
blank. Any means of updating the snapshot value of the output point is acceptable. To trigger
the output to the target OPC item, the time stamp must be more recent than the previous time
stamp, regardless of whether the value changes. Any value that you enter into the output
point’s snapshot is written to the target item on the OPC server.
Event points
Event PI points are configured with a trigger PI point. When the trigger point receives a value,
the event point is read. To create event points, set Location4 to 0 and specify the name of the
trigger PI point in the ExDesc attribute using the following format:
TRIG='triggertagname' event_condition
Enclose the name of the trigger point in single quotes. To treat all changes as triggering events,
omit event_condition. For more information about the event_condition argument, see the UniInt
Interface User Manual.
By default, the server is requested to update its cache every second for every event point
defined. OPC v2.0 servers always read event points from the device, not the cache. To minimize
the overhead incurred when the OPC server updates the cache, set the event rate (/ER) to a
high value such as eight hours. For v1.0a OPC servers, asynchronous reads come from the
cache. The cache does not need to be updated frequently for all event points, so you can
increase the event rate.
To define a set of event PI points that are read together in the same OPC event group, assign
identical integer values to the UserInt2 attribute of the PI points. (For example, a plug-in DLL
that post-processes data might require the data to be sent in a single group.)
For efficiency with v1.0a servers, separate event points into groups based on the triggering
event. For OPC v2.0 servers, separate event points according to the data source. The OPC v2.0
standard requires that all asynchronous reads originate from the device rather than from the
server’s cache, so set the cache update rate high and do not group values that come from
different devices. The following example point definitions illustrate this approach:
Tag         | ExDesc           | InstrumentTag | Location1 | Location2 | Location3 | Location4 | Location5 | UserInt1 | UserInt2
PM1_Temp.PV | TRIG=PM1_Trigger | ItemID1       | 1         | 0         | 0         | 0         | 0         | 0        | 1
PM1_Rate.PV | TRIG=PM1_Trigger | ItemID2       | 1         | 0         | 0         | 0         | 0         | 0        | 1
PM2_Temp.PV | TRIG=PM2_Trigger | ItemID3       | 1         | 0         | 0         | 0         | 0         | 0        | 2
In the preceding example, PM1_Trigger and PM2_Trigger are points that are updated either
by this PI OPC interface instance, another interface, or by manual entry. When PM1_Trigger
gets a new event in the PI snapshot, the PI OPC interface sends the OPC server a read
command that requests data for both PM1_Temp.PV and PM1_Rate.PV. Both values are
returned in a single call. Likewise, when PM2_Trigger gets an event in the snapshot, the
interface requests a value for PM2_Temp.PV.
• You must define a point that reads the first array element.
• Assign the points to the same scan class.
• To optimize CPU usage, do not use the same scan class to read more than one OPC array.
• If you need to read the same OPC array element into more than one point, you must assign
the points to different scan classes.
Configuring arrays that are read as event tags is complex: because only the first array item
(with UserInt1 = 1) causes a read, you must create a dummy trigger PI point to use with the
rest of the array items. That PI point must have a PointSource that is either unused or used
for manual entry points (lab data usually is entered manually, so L is often used as the
PointSource for manual entry PI points). In the following example, the trigger PI point is
called TriggerTag and the dummy trigger PI point is called DummyTrigger.
Tag          | ExDesc            | InstrumentTag | Location1 | Location2 | Location3 | Location4 | Location5 | UserInt1 | UserInt2
Array0001.PV | TRIG=TriggerTag   | Data.Array    | 1         | 0         | 0         | 0         | 0         | 1        | 1
Array0002.PV | TRIG=DummyTrigger | Data.Array    | 1         | 0         | 0         | 0         | 0         | 2        | 1
Array0003.PV | TRIG=DummyTrigger | Data.Array    | 1         | 0         | 0         | 0         | 0         | 3        | 1
All the tags in an array must belong to the same group: even if the OPC server is v2.0 and part
of the array data comes from a different device than the rest, configure all the array tags in
the same event group.
Because each range has the same size (decimal 64), you can use a simple conversion to obtain
the corresponding digital state, as follows:
Convers | TotalCode | SquareRoot | Dzero   | Operation
Not 0   | 3         | 0          | Defined | Input points: Value = (Value / Convers) – Dzero
        |           |            |         | Output points: Value = (Value + Dzero) * Convers

Attribute  | Setting
Convers    | 64
TotalCode  | 3
SquareRoot | 0
ExDesc     | "Dzero=0"
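With these settings (Convers = 64, Dzero = 0), the conversion formulas reduce to division and multiplication by 64. The following sketch applies them; the function names are ours, and integer truncation on input is an assumption about how a 64-code range maps onto one digital state.

```python
def to_state(raw_value, convers=64, dzero=0):
    # Input points: state = (value / convers) - dzero
    # Truncation assumed: each block of `convers` raw codes maps to one state.
    return int(raw_value // convers) - dzero

def to_raw(state, convers=64, dzero=0):
    # Output points: value = (state + dzero) * convers
    return (state + dzero) * convers
```

With Convers = 64, raw values 0–63 map to state 0, 64–127 to state 1, and so on.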
• OPC Server-level failover ensures that, if the PI OPC interface stops receiving data from the
currently connected OPC server, it can switch to another OPC server and resume data
collection.
• UniInt failover ensures that, if one instance of the PI OPC interface fails, another instance
can take over data collection.
If you are configuring both, configure and verify UniInt failover first. Disable UniInt failover
and configure and test server-level failover separately, then re-enable UniInt failover.
UniInt failover
UniInt failover protects against data loss by enabling a backup PI OPC interface instance to take
over data collection if the primary instance fails. There are two approaches to configuring
failover: synchronization through the OPC server (phase 1 failover), and synchronization
through a shared file (phase 2 failover). This guide tells you how to configure phase 2 failover.
Note:
Phase 1 failover is now deprecated and is not recommended. For details, contact OSIsoft
Technical Support. For more details about UniInt failover, refer to the UniInt Interface
User Manual.
Failover works as follows: you configure two identical instances of the PI OPC interface on two
different computers. One instance functions as the primary instance and the other one as the
backup, depending on which one is started first. If the primary fails, the backup becomes the
primary and takes over transmitting data from the OPC server to the PI server. If that interface
subsequently fails and the other interface has been restored, the other interface becomes
primary and resumes transmitting data. (Note that “primary” and “backup” are terms used to
clarify operation. Failover seeks to keep a running instance of the PI OPC interface connected
with a functional OPC server, so, in action, either interface might be primary.)
If the PI OPC interface instances are configured to use disconnected startup, the interfaces can
start and fail over even if the PI Server is unavailable, as long as they both have access to the
shared file.
The solid magenta lines show the data path from the PI OPC interface nodes to the shared file.
During normal operation, the primary PI OPC interface collects data from the OPC server and
sends it to the PI server. The ActiveID point and its corresponding entry in the shared file are
set to the failover ID of the primary instance. Both primary and backup instances regularly
update their heartbeat value, monitor the heartbeat value and device status for the other
instance, and check the active ID. Normal operation continues as long as the heartbeat value
for the primary instance indicates that it is running, the ActiveID has not been manually
changed, and the device status on the primary PI OPC interface is good.
Phase 2 failover tracks status using the following points.
• ActiveID
Tracks which PI OPC interface instance is currently forwarding data from the OPC server to
the PI server. If the backup instance detects that the primary instance has failed, it sets
ActiveID to its own failover ID and assumes responsibility for data collection (thereby
becoming the primary).
• Heartbeat Primary
Enables the backup PI OPC interface instance to detect whether the primary instance is
running.
• Heartbeat Backup
Enables the primary PI OPC interface instance to detect whether the backup instance is
running.
Note:
Do not confuse the device status points with the UniInt health device status points. The
information in the two points is similar, but the failover device status points are integer
values, while the health device status points are string values.
To indicate that it is up and running, each PI OPC interface instance refreshes its heartbeat
value by incrementing it at the rate specified by the failover update interval. The heartbeat
value starts at one and is incremented until it reaches 15, at which point it is reset to one. If the
instance loses its connection to the PI server, the heartbeat value cycles from 17 to 31.
When the connection is restored, the heartbeat value reverts to the one-to-15 range.
During a normal shutdown process, the heartbeat value is set to zero.
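The heartbeat progression described above can be sketched as follows. This is a simplified model (the function name is ours), not the interface's implementation.

```python
def next_heartbeat(current, connected_to_pi):
    """Advance a UniInt phase-2 heartbeat value: it cycles 1..15 while
    the instance is connected to the PI server and 17..31 while
    disconnected; 0 is written only on a clean shutdown."""
    low, high = (1, 15) if connected_to_pi else (17, 31)
    if current < low or current >= high:
        return low          # enter the range, or wrap after the top value
    return current + 1
```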
If the shared file cannot be accessed, the PI OPC interface instances attempt to use the PI
Server to transmit failover status data to each other. If the target PI OPC interface also cannot
be accessed through the PI server, it is assumed to have failed, and both interface instances
collect data, to ensure no data loss. In a hot failover configuration, each PI OPC interface
instance queues three failover intervals worth of data to prevent any data loss. When failover
occurs, data for up to three intervals might overlap. The exact amount of overlap is determined
by the timing and the cause of the failover. For example, if the update interval is five seconds,
data can overlap between 0 and 15 seconds.
Hot failover
Hot failover is the most resource-intensive mode. Both the primary and backup OPC interface
instances are collecting data, possibly from the same OPC server. No data is lost during
failover, but the OPC server carries a double workload, or, if two servers are used, the backend
system must support both OPC servers.
Warm failover
There are three options for warm failover:
the data source system. When the backup OPC interface becomes primary, all it needs to do
to start collecting data is to advise the groups, making this the fastest warm failover option.
Cold failover
Cold failover is desirable if an OPC server can support only one client, or if you are using
redundant OPC servers and the backup OPC server cannot accept connections. The backup
instance does not connect with the OPC server until it becomes primary. At this point, it must
create groups, add items to groups, and advise the groups. This delay almost always causes
some data loss, but imposes no load at all on the OPC server or data source system.
Note:
The OPC interface supports using watchdog points to control failover. Watchdog points
enable the OPC interface to detect when its OPC server is unable to adequately serve data
and fail over to the other interface if the other interface is better able to collect data. This
approach is intended for OPC servers that are data aggregators, collecting data from
multiple PLCs. If one point on each PLC is designated as a watchdog point, the interface
can be instructed to fail over if fewer than a specified number of those points are readable.
This approach enables the benefits of redundancy to be applied at the data collection
level. For more on how to configure this option, see Configure server-specific watchdog
PI points for efficient failover.
Procedure
1. Create identical PI OPC interface instances on the primary and backup nodes.
A simple way to ensure that the instances are identical is to use PI ICU to configure the
primary instance correctly, then copy its batch file to the backup PI OPC interface node. On
the backup node, create the instance by using PI ICU to import the batch file. Verify that the
instances can collect data.
2. Configure buffering for each instance and verify that buffering is working.
3. To configure the location of the shared file, create a folder and set its sharing properties to
grant read/write access for both PI OPC interface nodes to the user that runs the OPC
interface instance. To ensure that the file remains accessible if either of the OPC interface
nodes fails, put the folder on a machine other than the primary or backup OPC interface
nodes.
Note that the shared file is a binary file, not text.
4. On both OPC interface nodes, use PI ICU to configure failover as follows:
a. Choose UniInt > Failover. The UniInt Failover page is displayed.
b. Check Enable UniInt Failover and choose Phase 2.
c. In the Synchronization File Path field, specify the location of the shared file.
d. In the UFO Type field, choose the level of failover that you want to configure, ranging
from COLD to HOT.
e. Specify a different failover ID number for the primary and backup instances, and
configure the location of the primary and backup instances.
If you use a PI collective, point the primary and backup instances to different members
of the collective: go to the General tab and set the SDK Member field (the /host parameter).
Note:
Make sure that the UFO_ID of one interface matches the UFO_OtherID of the other
interface, and vice versa. If the PI Servers are a collective, set Host on the primary
interface node (PI ICU General tab) to the Primary PI Server, and set Host on the
backup interface node (PI ICU General tab) to the secondary PI Server.
f. Click Apply.
g. When creating the first instance, create the required PI points by right-clicking the list of
failover points and choosing Create all points (UFO Phase 2), as shown in the following
figure.
h. Click Close to save changes and update the PI OPC interface batch file.
Procedure
1. Start the first PI OPC interface using PI ICU. Verify that the startup output indicates that
failover is correctly configured:
OPCpi> 1 1> UniInt failover: Successfully Initialized:
This Failover ID (/UFO_Id): 1
Other Failover ID (/UFO_OtherId): 2
2. After the primary OPC interface has successfully started and is collecting data, start the
other PI OPC interface instance. Again, verify that startup output indicates that failover is
correctly configured.
3. To test failover, stop the primary OPC interface. Verify that the backup OPC interface has
detected the absence of the primary instance and taken over data collection by examining
its output for the following messages:
> UniInt failover: Interface is attempting to assume the "Primary" state.
Waiting 2 ufo intervals to confirm state of other copy.
Fri Jun 22 11:43:26 2012
> UniInt failover: Waited 2 ufo intervals, Other copy has not updated our
activeId, transition to primary.
Fri Jun 22 11:43:26 2012
> UniInt failover: Interface in the "Primary" state and actively sending data
to PI.
4. Check for data loss in PI Server (for example, using PI ProcessBook to display a data trend).
5. Test failover with different failure scenarios (for example, test loss of PI Server connection
for a single PI OPC interface copy). Verify that no data is lost by checking the data in PI and
on the data source.
6. Stop both copies of the PI OPC interface, start buffering, configure and start each interface
instance as a service.
Procedure
1. Go to the OPCInt Failover > Server Level pane and enter the node and name of the other
OPC server.
This basic configuration triggers failover only when the PI OPC interface loses connectivity
to the OPC server.
2. To verify that failover occurs when connectivity is lost:
a. Start both OPC servers, then start the PI OPC interface.
b. Use PI SMT or the pigetmsg utility to check the PI SDK log on the PI server node for
messages that verify successful startup.
c. Stop the currently active OPC server and check the SDK log to confirm that the OPC
interface has switched to the other OPC server.
d. To switch back to the first OPC server, restart it, stop the second server, and check the
SDK log or the value of the PI active server point, if defined, to verify that the PI OPC
interface has switched back to the first OPC server.
If the server has not entered the RUNNING state before the specified period expires, the
interface retries the first server, alternating between the two until one is detected to be
running again.
Procedure
1. Create a PI point. Map the point to an OPC item that you consider a reliable indicator of
server status.
The OPC item to which the point is mapped must be defined identically in both the primary
and backup OPC servers, while possibly having different values on the two servers.
2. Indicate the watchdog point for the primary and backup OPC servers: Using PI ICU, go to
the OPCInt > Failover > Server Level page and configure the Primary Server Watchdog Tag
and Backup Server Watchdog Tag fields.
3. Verify that the OPC item triggers failover:
a. Start the OPC servers and verify that the watchdog item is non-zero on at least one of the
servers. Start the OPC interface.
b. Manually set the OPC item to 0 on the currently used server.
c. Examine the PI SDK log or check the Active Server point to determine whether the
interface failed over to the other OPC server.
Procedure
1. Create PI points and map them to the OPC items that you consider reliable indicators of OPC
server status. For each point, set Location3 to 3 for polled points or 4 for advise points.
2. Using PI ICU, go to the OPCInt > Failover > Server Level page and set the Multiple Watchdog
Tags Trigger Sum field to the minimum acceptable total of the values of the watchdog
points.
3. Verify that failover is triggered if the total of the values drops below the specified minimum:
a. Start the OPC servers and the OPC interface.
b. Manually set the values of the OPC items.
c. Examine the SDK log or check the Active Server point to determine whether the interface
failed over to the backup OPC server.
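The multiple-watchdog decision reduces to a threshold test on the sum of the watchdog values, as sketched below. The function name and structure are ours, for illustration.

```python
def should_fail_over(watchdog_values, trigger_sum):
    """Trigger server-level failover when the sum of the watchdog
    point values drops below the configured minimum
    (the Multiple Watchdog Tags Trigger Sum)."""
    return sum(watchdog_values) < trigger_sum
```

For example, with five PLC watchdog points that each report 1 when healthy and a trigger sum of 3, failover is triggered once fewer than three PLCs are readable.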
the OPC server to determine the state of both servers without the overhead of creating a
second connection.
Note:
The method by which an OPC server tracks its state is highly vendor-dependent and
implementations vary. For details, consult your OPC server documentation.
To configure server-specific mode:
Procedure
1. In both OPC servers, create identical items that track the status of each server.
If an OPC server is active, the OPC item must contain a positive value. If an OPC server is
unable to serve data, the item value must be zero. Implement any logic required to ensure
that both servers correctly detect and maintain the status of the other server and that, in
both OPC servers, the values are identical.
2. Configure the OPC servers so that, during normal operation, one server sends data to the PI
OPC interface and the other waits until the primary server fails.
Status for the primary server must be positive, and for the backup server, status can be
zero. If failover occurs, the primary server status must be set to zero and the backup server
status to a positive value.
3. In PI Server, create a watchdog PI point for each OPC server, mapped to the OPC items you
created in step 1.
4. Using PI ICU, go to the OPCInt > Failover > Server Level page and set the Primary Server
Watchdog Tag and Backup Server Watchdog Tag fields to the names of the watchdog PI
points you created in the previous step.
In the interface batch startup file, these settings are specified by the /WD1 and /WD2
parameters.
Results
If both watchdog points are zero, data collection stops until one watchdog point becomes
positive. If both watchdog points are positive, the OPC interface remains connected to the
server that is currently serving data to it.
instances connect to the OPC server simultaneously. (To reduce the impact of restarting
multiple instances, you can also use the Startup Delay setting.)
When the OPC server sends data to the interface through a callback, the OPC interface acts as a
DCOM server and the OPC server acts as a DCOM client. For this reason, DCOM security on the
OPC interface node must be configured to allow access by the account associated with the OPC
server.
• Interactive user
The account that is logged on to the console of the computer where the server is running.
This setting is problematic for OPC: if no one is logged on to the console or the user logged
on does not have DCOM permissions, the client cannot connect to the OPC server.
• Launching user
The server process runs under the same account as the calling client. Do not use this setting
if multiple clients running under different accounts need to access the same OPC server,
because a new instance of the OPC server is launched for each user. Note that the calling
client's user ID might not have permission to connect to the server, because many servers
implement their own user authentication aside from DCOM permissions.
• This user
Recommended, unless the OPC server vendor specifies a different setting. Include the
specified user in the default DCOM ACLs on the interface node. If the OPC server runs as a
Windows service, use the same account as the logon account for the service.
• Time stamps
• False values
• Access path
• OPC refreshes
Item browsing
To be able to map PI points to OPC items, you must have access to OPC item names. However,
OPC servers are not required to support item browsing. If browsing is supported, you can use
the PI OPCClient to display the points that the OPC server recognizes.
Time stamps
Some OPC servers send the time stamp for the last time that the data value and quality were
read from the device, which means that the time stamp changes even if the value does not.
Others send the time stamp of the last change to value or quality, so if the data remains the
same, the time stamp does not change. Configure the time stamp setting (/TS) in PI ICU to
match the behavior of your server.
False values
Some OPC servers return a value when a client connects to a point, even if the server does not
yet have a valid value for the point. Some servers send a false value with a status of GOOD,
which results in the false value being sent to the PI archive. To screen out these false values,
use PI ICU to enable the Ignore First Value option on the Data Handling page (/IF=Y).
Access path
In OPC items, the access path suggests how the server can access the data. The OPC standard
states that it is valid for servers to require path information to access a value, but not for them
to require that it be sent in the access path field. According to the standard, the OPC server can
ignore it, but some non-compliant OPC servers require the access path. For example, RSLinx
requires path information in the access path or as part of the ItemID, in the following format:
[accesspath]itemid
If your OPC server requires an access path, contact your OPC server vendor to determine how
best to configure the server with the OPC interface.
The following error messages indicate that the data received from the OPC server contained
errors and the OPC server did not return a text explanation of the error:
In UnPack2 Tag MyPV3.pv returns error : Unknown error(800482d2)
In UnPack2 Tag MyPV4.pv returns error E004823E: Unknown error (e004823e).
In UnPack2 Tag MyPV5.pv returns error E241205C: Unknown error (e241205c)
In UnPack2 Tag MyPV6.pv returns error E2412029: Unknown error (e2412029)
To troubleshoot such data-related issues, consider the following causes and solutions:
• If you see Unknown errors, check with the OPC server vendor and have them look up the
error code displayed in the error message. OPC servers can generate vendor-specific error
codes, and only the OPC server vendor can explain what they mean.
• Restarting the OPC server might resolve the issue.
• Type mismatch errors indicate incompatible data types. Check for a mismatch between
the PI Server data type and the OPC item type. Check Location2 settings. To avoid cache
issues after data types are changed, restart the OPC interface.
• Verify that the data type of the PI point can accommodate the range of values being sent by
the OPC server. For example, if a PI point is defined as a two-byte integer and the OPC
server sends values that are too large for it to accommodate, the point overflows.
• Make sure the data type of the OPC item and PI point are compatible.
• The data source might be sending corrupt data to the OPC server. Check for network issues
that might corrupt the data packets.
• Check the size of the OPC server group. If the scan class contains more points than
permitted in the OPC server group, Unpack2 errors might result. Consult the OPC server
documentation for group size limits.
• If the point is digital, and the data can be read into a PI string point, and the underlying
control system is Honeywell, the digital state strings in PI might need to exactly match the
string reported by the DCS. To determine the digital states, go to Honeywell Universal
Station or GUS to look at each controller block (data source).
• PI thread
Interacts with PI Server
• COM thread
Interacts with the OPC server
Polled PI points
For polled PI points, the OPC interface notifies the PI thread when it’s time to scan. The PI
thread starts the data collection process and logs the time, group number, and current flag
value in opcscan.log, then sets the flag. (If the flag in opcscan.log is non-zero, the last call
made to the server did not return before the OPC interface initiated another poll, and data
might have been missed as a result.)
When the COM thread detects that the flag is set, it logs the time, group number and
transaction ID in the opcrefresh.log file and makes a refresh call to the OPC server. When it
receives the synchronous response from the OPC server, it clears the flag.
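The flag handshake between the two threads can be modeled roughly as follows. This is a single-threaded sketch for illustration; the class and method names are ours.

```python
class ScanGroup:
    """Model of the per-group scan flag described above: the PI thread
    sets the flag when it initiates a poll, and the COM thread clears
    it when the refresh call returns. A flag that is still set at the
    next scan means the previous poll never completed, so data might
    have been missed."""
    def __init__(self):
        self.flag = 0

    def pi_thread_scan(self):
        previous_poll_outstanding = self.flag != 0
        self.flag = 1               # logged to opcscan.log, then set
        return previous_poll_outstanding

    def com_thread_refresh_returned(self):
        self.flag = 0               # synchronous response received
```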
Now the OPC server can send data at any time, in an asynchronous manner. When the OPC
server sends data to the OPC interface COM thread, the time, group number and transaction ID
are logged in opcresponse.log.
Advise PI points
For advise PI points, the COM thread receives callbacks only when the data from OPC server
changes value. Therefore, advise points do not generate entries in the opcscan.log or
opcrefresh.log files, and only the data callbacks are logged in the opcresponse.log file.
Advise points can be identified in the opcresponse.log file by group numbers that range
from 200 to 800.
OPC refreshes
Logging refreshes
To log OPC refreshes, enable debug option 8, which causes the PI OPC interface to create three
log files: opcscan.log, opcrefresh.log, and opcresponse.log. If the OPC interface is
running as a service, the files are located in the %windir%\System32 directory
(%windir%\SysWOW64 on 64-bit systems); otherwise, the files reside in the directory where the OPC
interface is running. When the OPC interface sets the flag for a scan, it logs the current time, the
number of the scan class, and the current value of the scan flag in the opcscan.log file. The
time stamp is in UTC (Greenwich time, with no daylight saving time), stored as a FILETIME
structure written as an I64X field. The lower and upper 32-bit halves of the number are
transposed, and the number itself is a count of 100-nanosecond (10E-7 second) intervals since
January 1, 1601.
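Assuming the restored value is a standard Windows FILETIME (a count of 100-nanosecond intervals since January 1, 1601, UTC), it can be decoded as follows. The helper names are ours, and the half-swap is inferred from the description above.

```python
from datetime import datetime, timedelta

EPOCH_1601 = datetime(1601, 1, 1)

def swap_halves(raw):
    """Restore the transposed lower and upper 32-bit halves of the
    logged I64X field (inferred from the description above)."""
    return ((raw & 0xFFFFFFFF) << 32) | (raw >> 32)

def filetime_to_utc(filetime):
    """Convert a FILETIME count of 100 ns intervals since
    1601-01-01 UTC to a datetime (truncated to microseconds)."""
    return EPOCH_1601 + timedelta(microseconds=filetime // 10)
```

As a sanity check, the Unix epoch (1970-01-01 UTC) corresponds to a FILETIME value of 116444736000000000.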
After logging the data, the OPC interface sets the scan flag for the group, then the COM thread
takes its turn. When the OPC interface cycles around to perform the poll, it logs the time, the
scan class, and the TransID used in the opcrefresh.log file. For v1.0a server, the TransID
logged is the TransID that was returned from the last poll of the group. For v2.0 servers, it is
the actual TransID returned from the server.
When the OPC interface receives data from the OPC server, the OPC interface logs the time, the
scan class, and the TransID received in the opcresponse.log file. For advise points, no
entries are logged in the opcrefresh.log and opcscan.log files. Only the
opcresponse.log file is updated.
Time stamps in OPC interface logs are stored in their native format, which is hard to read. To
translate the time stamps to a readily readable format, use the following programs, which are
installed into the Tools sub-directory below the OPC interface directory:
• opcscan.exe
• opcrefresh.exe
• opcresponse.exe
To run one of these programs from the command line, specify the input and output file names.
Examples:
> opcscan.exe opcscan.log scan.log
> opcrefresh c:\pipc\Interfaces\OPCInt\opcrefresh.log c:\temp\refresh.log
> tools\opcresponse opcresponse.log response.log
The utilities display the UTC time stamp that came with the data, both raw and translated, the
time stamp translated into local time, both raw and translated, and the PI time sent to the PI
Server. For example:
response.log 126054824424850000 2000/06/14 18:54:02.485 126054680424850000
2000/06/14 14:54:02.485 960994309.485001 2 1db8
To check the time stamp returned from the OPC server, consult these log files. The time stamp
is UTC time, which is based in 1601, so if you see a date around 1601, it indicates that the
server is not sending valid time stamps. To configure the OPC interface to create time stamps
when it gets the data, use PI ICU to enable the Interface Provides Timestamp option on the
OPCInt tab (or edit the batch file and specify the /TS=N flag).
If the OPC interface is running with debugging options 32 or 64 enabled, the log file contains
entries for individual data items that were received by the COM thread. For advise points, the
group number in the opcresponse.log file might not be correct for entries generated by
debugging options 32 or 64, although the shorter entries generated by debugging option 8
correspond to the correct group number.
By looking at the log files, you can see when the OPC interface decided to poll, when it made
the call, and when the data came in. If the flag in opcscan.log is non-zero, the last call made
to the server did not return by the time the OPC interface started another poll. If you find
non-zero flags in the log file, contact your server vendor and have them contact OSIsoft.
This message indicates that the OPC server failed to respond to a refresh call. This problem
occurs when the OPC server cannot keep up with the update rates or has suspended operation
due to a bug. The message is repeated for each additional 100 refresh calls that do not receive
responses from the OPC server for each scan class. If these messages appear in your local PI
message log, data loss might be occurring. Contact your OPC server vendor immediately, and
consider the following adjustments to reduce load on the OPC Server:
• Move points into the Advise scan class (#1).
• Reduce the total number of scan classes for the interface.
Feature Support
Source of time stamps Interface or OPC server (configurable)
History recovery No
Disconnected startup Yes
SetDeviceStatus Yes
Failover options OPC server-level failover and UniInt Phase 2
interface-level failover (Phase 1 deprecated)
Vendor software required on PI interface node No
Vendor software required on DCS system Yes
Vendor hardware required No
Additional PI software included with interface Yes
Device point types VT_I2, VT_I4, VT_R4, VT_R8, VT_BSTR, VT_DATE
Serial-based interface No
To display the contents of your OPC server, OSIsoft provides PI OPCClient, another graphical
tool. To launch PI OPCClient, double-click the OPCClient.exe executable file or choose Start
menu > All Programs > PI System > PI OPCClient. You can use PI OPCClient to connect to the
OPC server and test data exchange procedures such as sync read, refresh, advise, and
outputs.
Installation prerequisites
Before installing and configuring, ensure that the following prerequisites are met:
◦ General
Setting Value
Point source OPC (or an unused point source of your choice)
Scan classes As desired. (Scan class 1 is reserved for advise points.)
Interface ID 1 or any unused numeric ID
14. Restart the OPC interface node and confirm that the PI OPC interface service and the
buffering application restart.
15. Build input points and, if desired, output points for this PI OPC interface. Verify that data
appears in PI as expected. For detailed information about OPC points, refer to Configuring
PI points for the PI OPC DA interface.
Tag Attribute Description
PointSource Identifies all points that belong to this instance of the PI OPC interface. Specify
the same Point source entered on the PI ICU General tab.
Location1 Specifies the OPC interface instance ID, which is displayed on the PI ICU
General tab.
Location2 To enable handling for OPC servers that do not return certain numeric types
in their native format, set Location2 to 1. For details, see Location2
(data-type handling).
Location3 Point type (0=polled, 1=advise, 2=output).
Location4 Specifies the scan class.
Location5 Optional deadband value for advise points.
ExDesc Specifies event points, Long ItemID, Dzero for scaled points, or ItemID to get
the time stamp for an output value.
InstrumentTag OPC ItemID that corresponds to the PI point you are defining. Case-sensitive.
To display OPC server points, use PI OPCClient.
16. Optional: The following procedures are useful but not required.
◦ Configure diagnostics
◦ Configure disconnected startup
◦ Configure failover
◦ Install the PI Interface Status Utility
• Failover settings
• Plug-In settings
• Miscellaneous settings
• Debug settings
Timestamps
Interface Provides Timestamp: The OPC interface provides a time stamp when the data is
received (/TS=N).
OPC server Provides Timestamp: The OPC interface uses the data time stamps provided by the
OPC server, and accounts for the offset between the OPC server and the PI server (/TS=Y).
Timestamp for Advise Tags Only: The OPC server provides time stamps only for advise tags,
and the OPC interface accounts for the offset between the OPC server and the PI server. For all
other tags, the OPC interface provides a time stamp when the data is received (/TS=A).
OPC Server Provides Timestamp (no offset): The OPC server provides the time stamps for all
data, and the interface does not apply any time offset to these values. Data loss will occur if a
value is received from OPC with a time stamp 10 minutes or more ahead of the PI Server's
current time. (/TS=U)
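The /TS=U rejection rule described above can be sketched as a simple comparison; the function name and numeric representation here are illustrative, not the interface's actual implementation:

```python
FUTURE_LIMIT_SECONDS = 600  # 10 minutes ahead of PI Server time

def value_would_be_lost(value_timestamp: float, pi_server_now: float) -> bool:
    """Return True if a /TS=U value's time stamp is far enough in the future to be rejected."""
    return value_timestamp - pi_server_now >= FUTURE_LIMIT_SECONDS

print(value_would_be_lost(700.0, 0.0))  # True: more than 10 minutes ahead
print(value_would_be_lost(300.0, 0.0))  # False: within the 10-minute window
```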
Questionable Quality
Store Quality Only: If data has other than GOOD quality, store the quality information rather
than the value (/SQ=Y).
Store Value Only: The OPC interface treats “questionable” quality as “good” (/SQ=I). Bad
quality data is stored as a system digital state.
Update Rates
Specifies the requested update rate, if different from the scan period. Select a scan class from
the dropdown box, enter the desired rate in the box to the right of the scan class, and apply it.
The scan class, scan rate, and update rate appear in the box below the period. Only scan classes
that have update rates are listed.
This option is useful when the server must have a recent value for the items but the OPC
interface does not read it very often, for example, if the PI OPC interface polls for the value
every 30 minutes, but the value itself must be no more than one minute old. This situation
imposes more load on the OPC server than if the update rate and the scan period are the same,
but it can reduce latency of values for items that need to be read less frequently. (/UR=period).
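The trade-off can be approximated with a simplified model that ignores network and processing latency (a rough illustration, not a formula used by the interface): without /UR, a polled value can be as old as the scan period when it is read; with /UR, the server refreshes its cache at the faster update rate, so the value read is at most roughly one update period old.

```python
from typing import Optional

def worst_case_data_age(scan_period: float, update_rate: Optional[float] = None) -> float:
    """Approximate worst-case age, in seconds, of a polled item's value at read time."""
    # Without /UR, the cache is only as fresh as the scan period; with /UR,
    # the server refreshes its cache at the (usually faster) update rate.
    return scan_period if update_rate is None else min(scan_period, update_rate)

# Polling every 30 minutes while requiring values no more than one minute old:
print(worst_case_data_age(1800))      # 1800
print(worst_case_data_age(1800, 60))  # 60
```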
Update Snapshot
If the current snapshot is a system digital state (such as I/O timeout, Shutdown, and so
forth) and the OPC interface reads in a new value that is older than the snapshot, the OPC
interface sends the new value one second after the snapshot time stamp of the system digital
state. This check is omitted if the current snapshot is a good value. This is useful for setpoints
that rarely change. (/US).
No Timeout
Direct the OPC interface never to write I/O timeout errors, even if the OPC interface loses its
connection with the OPC server. Set this when configuring failover. (/NT=Y)
Disable Callbacks
Reduce the load on the OPC server by disabling callbacks for polled groups. By default, polled
groups have callbacks enabled, but these callbacks are not used by the PI OPC interface. This
option has no effect on advise groups. (/DC)
Time Offset
If the OPC server node is set to a time zone other than the local time zone, this option directs
the PI OPC interface to adjust all the time stamps by the specified amount. To specify the offset,
use the format [-]HH:MM:SS. (/TO=offset)
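The [-]HH:MM:SS offset is simply a signed duration. A small illustrative parser (not the interface's own code) shows how such an offset maps to seconds:

```python
def parse_time_offset(offset: str) -> int:
    """Parse a /TO offset in [-]HH:MM:SS form into a signed number of seconds."""
    sign = -1 if offset.startswith("-") else 1
    hours, minutes, seconds = (int(part) for part in offset.lstrip("+-").split(":"))
    return sign * (hours * 3600 + minutes * 60 + seconds)

print(parse_time_offset("05:00:00"))   # 18000
print(parse_time_offset("-01:30:00"))  # -5400
```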
Trend Advise
For advise tags, send the value from the preceding scan if the new value's time stamp is more
than the specified number of scan periods (configured by the /TA flag) after the previous
value's. Enabling this setting causes advise tags to behave as if the Step attribute is enabled.
• DEFAULT
• NONE
• CONNECT (default)
• CALL
• PKT
• PKT_INTEGRITY
• PKT_PRIVACY
• ANONYMOUS
• IDENTIFY (default)
• IMPERSONATE
• DELEGATE
Failover settings
UniInt-Interface Level Failover
The following three options are enabled only if warm failover is enabled on the UniInt >
Failover page:
• Maximum number of Watchdog Tags which can have Bad Quality or Any Error
without triggering Failover
Specify the maximum number of watchdog PI points that can have an error or bad quality
before failover is triggered. You can configure watchdog PI points to control failover when
the OPC interface is unable to read some or all of the points, or when the points have bad
quality. This feature enables you to trigger failover when a data source loses the connection
to one OPC server but can still serve data to the other. To configure watchdog PI points, set
Location3: for a watchdog point in an advise group, set Location3 to 4; for a watchdog point
in a polled group, set Location3 to 3. (/UWQ)
Setting Description
Cluster Mode Configure behavior of the backup PI OPC interface.
Primary Bias: This node is the preferred
primary. (/CM=0).
No Bias: No node is preferred. The active PI OPC
interface stays active until the cluster resource
fails over, either as the result of a failure or
through human intervention. (/CM=1)
Resource Number for APIOnline Identify the apionline instance that goes with this
PI OPC interface instance. For example, to
configure the OPC interface to depend on an
instance named apionline2, set this field to 2. To
configure the OPC interface to depend on an
instance named apionline (no resource number),
set this field to -1. (/RN=#)
Active Interface Node Tag Specify the string point that contains the name of
the currently active OPC interface node. (/CN).
Health Tag ID This parameter is used to filter UniInt health
points by Location3. The parameter must be
unique for each failover member. If this parameter
has an invalid value or is not set, the default value
of 0 is used for the Location3 attribute when
creating UniInt health points. (/UHT_ID)
Switch to Backup Delay (sec) The number of seconds to try to connect before
switching to the backup server (/FT=#).
Wait for RUNNING State (sec) The number of seconds to wait for RUNNING
status before switching to the backup server (/
SW=#).
Current Active Server Tag (Optional) PI string point that contains the name
of the currently active server. If set, the OPC
interface writes the name of the OPC server to this
point whenever it connects. Useful for debugging
server-level failover. (/CS=tag).
Primary Server Watchdog Tag Watchdog point for the primary server (/
WD1=tag).
Backup Server Watchdog Tag Watchdog point for the backup server (/WD2=tag).
Setting Description
Multiple Watchdog Tag Trigger Sum The minimum total value of the watchdog points.
Failover is triggered if the sum of the value of
these points drops below the specified value.
(/WD=#)
Maximum number of Watchdog Tags which can
have Bad Quality or Any Error without triggering
Failover
Default=0 if there is only one watchdog point.
Cannot exceed the number of watchdog points
defined. (/WQ=#)
Failover if Server Leaves RUNNING State Triggers failover if the server state changes to
anything other than RUNNING. (/WS=1)
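The two watchdog triggers in the table can be summarized as follows. This is a hedged sketch of the decision logic as described above; the function names are illustrative, not the interface's actual code:

```python
def sum_trigger(watchdog_values, minimum_sum):
    """/WD=#: trigger failover if the watchdog values sum below the minimum."""
    return sum(watchdog_values) < minimum_sum

def quality_trigger(bad_or_error_count, maximum_allowed):
    """/WQ=#: trigger failover if too many watchdogs have bad quality or errors."""
    return bad_or_error_count > maximum_allowed

print(sum_trigger([1, 1, 0], 3))  # True: sum of 2 has dropped below 3
print(quality_trigger(1, 0))      # True: the default of 0 allows no bad watchdogs
```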
Plug-In settings
• Post Processing DLL
Enter the DLL name and path to the post-processing DLL, for example,
/DLL="\Interfaces\OPCInt\plug-ins\mydll.dll"
Miscellaneous settings
Caution:
Do not modify these settings unless directed to do so by OSIsoft Technical Support.
Debug settings
To enable debugging options using PI ICU, go to the UniInt > Debug tab. In general, enable
debug options for a short period of time, as they can bloat log files and reduce performance.
For options marked "Technical Support only," enable only at the direction of OSIsoft Technical
Support. For details about other command-line parameters, refer to the UniInt Interface Users
Manual.
Option Description Value
Internal Testing Only For OSIsoft internal testing only. /DB=1
Log of Startup Logs startup information for /DB=2
each PI point, including
InstrumentTag and ExDesc.
Enter any additional parameters that are not available through PI ICU (for example,
/dbUniInt=0x0400). Separate parameters with one or more spaces. If a parameter argument
contains embedded spaces, enclose the argument in double quotes.
Parameter Description
/BACKUP=hostname::OPCservername For server-level failover, specifies the name of the
backup OPC server. If the OPC server is on the local
machine, omit hostname. If your server name has
embedded spaces, enclose the name in double
quotes.
/CACHEMODE Enable disconnected startup.
Parameter Description
/DA=option (Optional) Configures default authentication level,
part of DCOM security settings for the interface.
Default: CONNECT This parameter sets the interface-specific
authentication level required to verify the identity
of the OPC server during calls. Valid values are as
follows:
• DEFAULT
• NONE
• CONNECT (default)
• CALL
• PKT
• PKT_INTEGRITY
• PKT_PRIVACY
Use this setting with the /DI parameter. If you
set /DI and omit /DA, CONNECT is used. If
neither /DA nor /DI is configured, the interface
uses the default permissions on the client machine.
/DB=# (Optional) Set level of debugging output to be
logged. By default, debug logging is disabled. For
valid settings, see Debug settings.
/DC (Optional) Disable callbacks for polled groups, to
reduce OPC server workload. No effect on advise
groups. By default, callbacks are enabled.
/DF=tag_name (Optional) Configure a PI point that contains the
debug level, to enable you to change debug level
while the interface is running. Configure an Int32
output point for the interface, and set its value to
0, then configure the point using the /DF
parameter. After starting the interface, you can
change debug level by setting the point to the
desired level. For valid settings, see Debug
settings.
For InstrumentTag, you are required to enter a
value, but the value is ignored and need not be a
valid OPC ItemID.
Parameter Description
/DLL=postproc.dll (Optional) Configure a post-processing DLL: /
DLL=drive:\path\filename.dll The default path is
the PlugIns sub-directory of the interface
installation directory. You cannot configure more
than one plug-in.
Parameter Description
/FM=# (Optional) Configure type of interface-level
failover. Valid options are:
Default: 3 • 1: (Chilly) Do not create groups on the server
• 2: (Cool) Create inactive groups and add tags
• 3: (Warm) Create active groups, but do not
advise groups
For details, see Configuring failover for the PI OPC
DA interface.
Parameter Description
/IF=Y or N (Optional) Ignore the first value sent for each
point. For use with OPC servers that send a
Default: N response when the interface connects to a point,
regardless of whether they have a valid value.
Parameter Description
/OG=# Number of output groups. Each group has its own
queue.
Default: 1
Parameter Description
/PS=point_source (Required) Specifies the point source for the
interface instance. Not case sensitive. The interface
instance uses this setting to determine which PI
points to load and update.
Parameter Description
/SG[=S] (Optional) Send only GOOD quality data.
Questionable quality data and BAD quality data are
ignored. To ignore substatus for values that have
GOOD status, specify /SG=S.
To treat OPC_QUALITY_LOCAL_OVERRIDE as
SUBSTITUTED, specify /SG. To treat
OPC_QUALITY_LOCAL_OVERRIDE as GOOD,
specify /SG=S.
If the /SQ=I or /SQ=Y parameter is also set,
questionable quality data is sent to PI. BAD quality
data is ignored. Quality information continues to
be sent to points that are configured to store
quality instead of values.
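OPC DA encodes quality in the top two bits of the quality word (11 = GOOD, 01 = UNCERTAIN/questionable, 00 = BAD), with substatus in the lower bits. The /SG filter described above can be sketched as follows, assuming the standard OPC DA quality masks:

```python
OPC_QUALITY_MASK = 0xC0       # top two bits hold the quality
OPC_QUALITY_GOOD = 0xC0       # bits 11: GOOD
OPC_QUALITY_UNCERTAIN = 0x40  # bits 01: questionable
OPC_QUALITY_BAD = 0x00        # bits 00: BAD

def passes_sg_filter(quality: int) -> bool:
    """With /SG, only GOOD-quality values are sent to PI."""
    return (quality & OPC_QUALITY_MASK) == OPC_QUALITY_GOOD

print(passes_sg_filter(0xC0))  # True: GOOD
print(passes_sg_filter(0x40))  # False: questionable data is ignored
print(passes_sg_filter(0x1C))  # False: BAD, regardless of substatus
```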
Parameter Description
/TA=#.# (Optional) For advise tags, send the value from the
preceding scan if the new value's time stamp is
more than the specified number of scan periods
after the previous value's. Enabling this setting
causes advise tags to behave as if the Step
attribute is enabled.
Parameter Description
/UFO_SYNC=path/[filename] (Required for phase 2 interface-level failover) Path
and, optionally, name of the shared file containing
the failover data. The path can be a fully qualified
node name and directory, a mapped drive letter, or
a local path if the shared file is on an interface
node. The path must be terminated by a slash or
backslash character. The default filename is:
executablename_pointsource_interfaceID.dat If
there are any spaces in the path or filename, the
entire path and filename must be enclosed in
quotes. If you enclose the path in double quotes,
the final backslash must be a double backslash (\\).
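The default file name follows the pattern executablename_pointsource_interfaceID.dat, which can be illustrated as follows (the example values are hypothetical, not defaults):

```python
def default_sync_filename(executable: str, point_source: str, interface_id: int) -> str:
    """Build the default /UFO_SYNC file name: executablename_pointsource_interfaceID.dat."""
    return f"{executable}_{point_source}_{interface_id}.dat"

# For an instance of OPCInt.exe with point source OPC and interface ID 1:
print(default_sync_filename("OPCInt", "OPC", 1))  # OPCInt_OPC_1.dat
```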
Parameter Description
/WD=# (Optional) For configuring failover using multiple
watchdog points, trigger failover if the sum of the
values of the points drops below the specified
value.
Parameters by function
The following parameters are specific to the PI OPC interface, except for the UniInt
parameters that are common to all OSIsoft UniInt-based interfaces:
/DC, /DT, /F, /HOST, /IS, /MA, /OC, /OD, /OG, /OT, /OUTPUTACKTIME, /OUTPUTSNAPTIME,
/OW, /RD, /RP, /RT, /SD, /SEC, /ST, /TA, /TF, /TO, /TS, /UR, /US, /VN, /WD, /WD1, /WD2,
/WQ, /WS
• During startup: Messages include the version of the interface, the version of UniInt, the
command line parameters used, and the number of points.
• During point retrieval: Messages are sent to the log if there are problems with the
configuration of the points.
• During operation: Additional messages are logged if debugging is enabled.
Messages
The log contains messages from the OPC interface, the UniInt framework, and the PI API. This
list describes only messages from the interface. If any error message has a PI point number as
well as a point name, use the point number to identify the problem point, because long point
names are truncated to 12 characters.
Informational
Message No ConnectionPoint for OPCShutdown
Shutdown Advise Failed
Meaning The OPC server does not implement the Shutdown
interface or does not implement it properly. Does
not prevent proper operation of the interface.
Message QueryInterface:IID_IConnectionPointContainer
failed, using v1.0a protocol
Meaning The OPC server does not support OPC DA v2.0.
Errors
Message Out of Memory.
Cannot allocate a list; fails.
Unable to add tag.
Message CLSIDFromProgID
Cause The OPC server’s Registry entries are not valid.
Resolution Check your server installation instructions.
Message CoCreateInstanceEx
Cause Indicates a problem with your DCOM
configuration.
Resolution Check your DCOM settings.
Message IOPCServer
Cause The proxy stub files are not registered.
Resolution To register the opcproxy.dll and
opccomn_ps.dll files, open a command prompt
window, change to the directory where the
interface is installed, and issue the following
commands:
>regsvr32 opcproxy.dll
>regsvr32 opccomn_ps.dll
Message AddRef
Cause Indicates that the OPC server does not let the
interface perform the simplest function.
Resolution If you can read and write points using PI OPCClient
but this error is logged, check your DCOM settings,
check what user account the interface is running
under, and try running the interface interactively.
Message GetStatus
Cause The OPC server did not respond to a status query.
It might be down or disconnected.
Resolution Use PI OPCClient to check the status.
Critical errors
Message Error from CoInitialize:
Error from CoInitializeSecurity:
Errors (Phase 1)
Message 17-May-06 09:06:03 OPCInt 1> UniInt failover: Interface in an
“Error” state. Could not read failover control points.
Cause The failover control points on the data source are returning an erroneous value to
the interface. This error can be caused by creating a non-initialized control point on
the data source. This message will only be received if the interface is configured to
be synchronized through the data source (Phase 1).
Resolution Check validity of the value of the control points on the data source.
Message 17-May-06 09:05:39 OPCInt 1> Error reading Active ID point from
Data source Active_IN (Point 29600) status = -255
Cause The Active ID point value on the data source caused an error when read by the
interface. The value read from the data source must be valid. Upon receiving this
error, the interface enters the “Backup in Error state.”
Resolution Check validity of the value of the Active ID point on the data source.
Message 17-May-06 09:06:03 OPCInt 1> Error reading the value for the
other copy’s Heartbeat point from Data source HB2_IN (Point
29604) status = -255
Cause The heartbeat point value on the data source caused an error when read by the
interface. The value read from the data source must be valid. Upon receiving this
error, the interface enters the “Backup in Error state.”
Resolution Check validity of the value of the heartbeat point on the data source
Message Sun Jun 29 17:18:51 2008 PI Eight Track 1 2> WARNING> Failover
Warning: Error = 64 Unable to open Failover Control File
‘\\georgiaking\GeorgiaKingStorage\Eight\PIEightTrack_eight_1.dat’
The interface will not be able to change state if PI is not
available
Cause The interface is unable to open the failover synchronization file. The interface
failover continues to operate correctly if communication to the PI Server is not
interrupted. If communication to PI is interrupted while one or both interfaces
cannot access the synchronization file, the interfaces remain in the state they were
in at the time of the second failure: the primary interface remains primary and the
backup interface remains backup.
Resolution Ensure the account that the interface is running under has read and write
permissions for the directory. Set the "log on as" property of the Windows service
to an account that has permissions for the directory.