System 800xA Information Management Configuration
System Version 5.1
ABB may have one or more patents or pending patent applications protecting the intellectual property in the ABB products described in this document.
The information in this document is subject to change without notice and should not be construed as a commitment by ABB. ABB assumes no responsibility for any errors that may appear in this document.
In no event shall ABB be liable for direct, indirect, special, incidental or consequential damages of any nature or kind arising from the use of this document, nor shall ABB be liable for incidental or consequential damages arising from use of any software or hardware described in this document.
This document and parts thereof must not be reproduced or copied without written permission from ABB, and the contents thereof must not be imparted to a third party nor used for any unauthorized purpose.
The software or hardware described in this document is furnished under a license and may be used, copied, or disclosed only in accordance with the terms of such license. This product meets the requirements specified in EMC Directive 2004/108/EC and in Low Voltage Directive 2006/95/EC.
TRADEMARKS
All rights to copyrights, registered trademarks, and trademarks reside with their respective owners.
About This Book
Getting Started
This book provides instructions for configuring Information Management
functionality for the 800xA System. This includes:
• Data access via data providers and Open Data Access.
• Softpoint Services.
• Calculations.
• History Services for:
– Logging numeric (process) data collected from object properties.
– Logging alarms/events.
– Storing completed reports executed via the application scheduler.
– Archival of the aforementioned historical data.
– Consolidation of message, PDL, and property logs.
All Information Management consolidation functionality is limited to 800xA 5.1
to 800xA 5.1 Systems.
These Information Management functions comprise a repository of process, event
and production information, and provide a single interface for all data in the system.
This book is intended for application engineers who are responsible for configuring
and maintaining these applications. This book is not the sole source of instruction
for this functionality. It is recommended that you attend the applicable training
courses offered by ABB.
Document Conventions
Microsoft Windows conventions are normally used for the standard presentation of
material when entering text, key sequences, prompts, messages, menu items, screen
elements, etc.
Electrical warning icon indicates the presence of a hazard which could result in
electrical shock.
Warning icon indicates the presence of a hazard which could result in personal
injury.
Tip icon indicates advice on, for example, how to design your project or how to
use a certain function.
Although Warning hazards are related to personal injury, and Caution hazards are
associated with equipment or property damage, it should be understood that
operation of damaged equipment could, under certain operational conditions, result
in degraded process performance leading to personal injury or death. Therefore,
fully comply with all Warning and Caution notices.
Terminology
A complete and comprehensive list of terms is included in System 800xA System
Guide Functional Description (3BSE038018*). The listing included in Functional
Description includes terms and definitions as they apply to the 800xA system
where the usage is different from commonly accepted industry standard definitions
and definitions given in standard dictionaries such as Webster’s Dictionary of
Computer Terms.
Related Documentation
A complete list of all documents applicable to the 800xA System is provided in
System 800xA Released User Documents (3BUA000263*). This document lists
applicable Release Notes and User Instructions. It is provided in PDF format and is
included on the Release Notes/Documentation media provided with your system.
Released User Documents are updated with each release and a new file is provided
that contains all user documents applicable for that release with their applicable
document number. Whenever a reference to a specific instruction is made, the
instruction number is included in the reference.
Section 1 Configuration Overview
Introduction
Information Management applications comprise a repository of process, event and
production information. Information Management functionality includes:
• History Services for collection, online and offline storage, consolidation, and
retrieval for process/lab data, alarms/events, and reports.
All Information Management consolidation functionality is limited to 800xA 5.1
to 800xA 5.1 Systems.
• Real-time Database Services for access to real-time process data from AC
800M controllers, and other ABB and third-party control systems when the
applicable connectivity components are installed. Real-time database services
also support the configuration of softpoints to hold application-generated data
not directly connected to any process. Softpoints are accessible by all other
functions in the system. Calculation Services is used to apply calculations to
other objects in the system, including both process and softpoint objects.
• Display and Client Services includes the following desktop client applications
for data access: DataDirect (for Excel Data Access refer to Installing Add-ins
in Microsoft Excel on page 286), Display Services, and Desktop Trends. These
desktop tools may run directly on the Information Management server, or on
remote computer clients. The Information Management server also supports
data access by third-party applications such as Crystal Reports when Open
Data Access is installed.
• Application Scheduler. This supports scheduling and execution of jobs
implemented as system plug-ins. This includes:
– Reports created with DataDirect, or a third-party report building package
such as Crystal Reports.
Configuration Requirements
The following points briefly describe the configuration requirements and work flow
for the Information Management functions.
• Service Providers
The following Information Management functions are managed by service
providers in the 800xA system: History, Archive, Calculations, Softpoints,
Scheduling, and Open Data Access Services. Verify all service providers are
properly installed and enabled before using the Information Management
functions. This is described in Section 2, Verifying Services.
• Configuring Data Access
There are two methods by which client applications access data from the
800xA system. The Information Management Display and Client Services use
the data providers supplied with ABB Data Services. Third-party tools such as
Crystal Reports, and Microsoft Excel (without DataDirect plug-ins), use the
ODA server. To configure data providers, refer to Section 15, Configuring Data
Providers. To configure Open Data Access, refer to Section 14, Open Data
Access.
• Softpoint Configuration
SoftPoint Services is used to configure and use internal process variables not
connected to an external physical process signal. To configure softpoints, refer
to Section 3, Configuring Softpoints.
• Calculations Configuration
Calculations Services is used to configure and schedule calculations that are
applied to real-time database objects, including both softpoints and actual
process points. To configure calculations, refer to Section 4, Configuring
Calculations.
• History Configuration
History Services database configuration requires a History database instance.
The default instance is sized to meet the requirements of most History
applications. It may be necessary to drop the default instance and create a new
larger instance depending on performance requirements. Refer to Section 5,
Configuring History Database. Once the Oracle database instance is created,
use the following procedures to configure historical data collection, storage,
and archival for the system:
– Configuring Property Logs
Process data collection refers to the collection and storage of data from
aspect object properties. This includes live process data, softpoint data,
and lab data (programmatically generated, or manually entered). The
objects that perform process data collection and storage are called
property logs. Process data collection may be implemented on two levels
in the 800xA system. The standard system offering supports short-term
data storage with trend logs. When Information Management History
Server is installed, long-term storage and permanent offline storage
(archive) is supported with history logs. For instructions on how to
integrate and configure trend and history logs, refer to Section 9,
Historical Process Data Collection.
– Alarm/Event Message Logging
All alarm and event messages for the 800xA system, including Audit Trail
messages, are collected and stored by the 800xA System Message Server.
This provides a short-term storage facility with the capacity to store up to
50,000 messages. If the system has Information Management History
Server installed, the messages stored by 800xA System Message Server
may be forwarded to a message log for extended online storage. This
message log can store up to 12 million messages. Refer to Section 7,
Alarm/Event Message Logging.
– Archive Configuration
The archive function supports permanent offline storage for historical data
collected in property, message, and report logs, as well as the 800xA
System alarm/event message buffer. When a history log becomes full, the
oldest entries are replaced by new entries, or deleted. Archiving copies the
contents of selected logs to a designated archive media. For instructions on
Section 2 Verifying Services
For Softpoint Server, the service group/service provider objects are created when a
Softpoint Generic Control Network object is created in the Control structure. Other
softpoint server set-up is also required. This is described in Softpoint Set-up
Requirements on page 40.
All procedures related to service group/service provider objects are done via the Plant Explorer. The remainder of this section describes how to open the Plant Explorer workplace, verify that these objects exist, create the objects if necessary, and enable or disable a service provider.
After restoring a system containing IM logs, check that the Service Group is not missing for the IM log in the Log Configuration. If the Service Group is missing, a new group has to be configured.
History Server
History is installed in two parts: the operator trend functionality, which is installed as standard, and the Information Management History Server, which is optional. Both functions are implemented as services in the Service Structure. The service categories are Basic History, Service and Inform IT History, Service, respectively.
Both of these service categories require a Service Group/Service Provider set for
each node where the Information Management History Server runs. The Service
Provider represents the service running on a specific node. An example is shown in
Figure 3. The Service Group structure is intended to support redundancy for certain
services (for instance Connectivity Servers). Redundancy is not supported for
Information Management History Server, so there is always a one-to-one
correspondence between the History Service Group and its History Service
Provider.
Section 3 Configuring Softpoints
Using Softpoints
SoftPoint Services is used to configure and use internal process variables not
connected to an external physical process signal. Once configured, the softpoints
may be accessed by other 800xA system applications as if they were actual process
points. For example, softpoint values may be stored in property logs in History
Services. Reporting packages such as Crystal Reports may access softpoints for
presentation in reports. Desktop tools such as Desktop Trends and DataDirect can
read from and write to softpoints.
Softpoint alarms can be configured and are directly integrated with system alarms
and events. Engineering unit definitions for softpoints include minimum/maximum
limits and a unit descriptor.
SoftPoint Services can be configured as redundant on a non-redundant Information
Manager Server pair or on a redundant connectivity server. For greatest efficiency
between Calculations and Softpoints, put the primary Calculation Server on the
same machine as the primary Softpoint Server.
The Softpoint Services software may run on multiple servers. Each softpoint server
can have up to 2500 softpoint objects. Each softpoint object can have up to 100
signals; however, the total number of signals per server cannot exceed 25,000.
Data types supported are: Boolean, integer (32 bit), single precision floating point (32 bit), and string. Also, double precision floating point (64 bit) is supported as an extended data type.
The softpoint object types specify the signal properties for the softpoint objects.
This includes signal ranges for real and integer signals, alarm limits, event triggers,
and whether or not the signals will be controllable. The process objects inherit the
properties of the object types from which they are instantiated.
Configuring Softpoints
Configure softpoints via the Plant Explorer. Softpoint objects are instantiated as
actual process objects in the Control Structure. They are instantiated from the
softpoint object type templates configured in the Object Type Structure. Softpoint
objects must be created in the Control structure as sub-objects of the Softpoints
container for the applicable Softpoint Generic Control Network. Once created, these
objects can be inserted into other structures, for example the Functional structure.
• Creating a softpoint object adds a signal of each signal type in the softpoint
object type.
• Softpoint objects cannot be copied, cut, pasted, moved or renamed.
Deleting an object deletes all included signals.
• When dragging an input/output from a softpoint property (in a softpoint
object type) to a Calculation aspect, the selected information is not correct
and needs to be edited. This is not a problem when dragging an input/output
from an instantiated softpoint object. Use relative naming in this case.
2. Select the Softpoint Object Types group and choose New Object from the
context menu.
3. In the New Object dialog, assign a name to the new object, then click Create.
This adds the new object type to the Softpoint Object Type group, Figure 5.
(Figure 5 callouts: 1) New Object Type Selected, 2) Process Object Configuration Aspect Selected, 3) Configuration View.)
Signals must be added and deleted in the Object Type structure. Signals can not
be added or deleted directly on softpoint objects in the Control structure. Do not
copy, cut, paste, move or rename signal types. When a signal type is added to a
softpoint object type, a signal of this signal type is added to all softpoint objects
based on the softpoint object type. Deleting a signal type deletes all signals of
this type.
The Add Signal buttons are located in the upper left part of the Softpoint Process
Object Configuration aspect, Figure 7.
(Figure 7 callouts: Binary, Integer, Real, and String Add Signal buttons.)
Signal Properties
The following signal properties may be configured:
• Real and integer signal types can have an overall range. Refer to Configuring
the Overall Range on page 46.
• Real, integer, and binary signal types can be made controllable. Refer to
Making a Signal Controllable on page 48.
• Authentication can be configured for all signal types. Refer to Configuring
Authentication for Signal Types on page 49.
1. Select the real or integer signal type from the Object Type structure.
2. Select the Signal Configuration aspect.
3. Select the Range tab, Figure 10.
2. Select a character.
3. Click OK. The Character map dialog box will be closed and the character
added to the engineering unit.
4. Repeat steps 1-3 to select additional characters.
5. Click Apply.
None (no users), Reauthentication (one user), and Double authentication (two
users). The default is not to require authentication.
Authentication for softpoint signal types is configured on an individual basis for
each signal property. Instantiated signals inherit authentication from their respective
softpoint object types. Authentication may be changed for instantiated signals in the
Control structure.
To configure authentication for a softpoint signal type, Figure 13:
1. Click the Property Permissions tab on the Signal Configuration aspect.
2. Select the Property from the property list.
3. Select the Authentication Level from the pull-down list and click Apply.
Figure 14. Alarm Event Configuration Aspect for Real and Integer Signal Types
3. The name defaults to the signal name with a sequential numeric suffix, for
example: RPM1. Rename the limiter to make it more easily identifiable, for
example: RPM_HIHI.
4. Specify the Type to indicate whether the limiter will trigger an event when the
signal rises above a maximum value (Max), or drops below a minimum value
(Min).
5. Enter a Limit. This is the threshold that the signal must cross to trip an alarm
(drop below if type is Min, or exceed if type is Max).
6. Set the Hysteresis to filter spurious alarms caused by signals that fluctuate
around the limit. For a real signal type, the hysteresis range is 0.00 to 9.99
expressed as percent of signal range. For integer signal types, use a whole
number. The default is 0.
Events are triggered when the signal passes the limit value regardless of the
Hysteresis. In order for a triggered alarm to reset so that subsequent alarms
may be triggered, the signal must first re-cross the alarm limit in the opposite
direction (decreasing for Max limit, or increasing for Min limit), and continue
to decrease or increase until the point established by the Hysteresis is crossed.
This requires the Hysteresis to be set to a value greater than zero (0).
When the Hysteresis is set to zero, the alarm will reset as soon as the signal re-
crosses the alarm limit.
7. Click Apply.
Events
To specify that an event be generated when a signal state changes (reference
Figure 17):
1. Select the signal type object.
2. Select the Alarm Event Configuration aspect.
Figure 17 demonstrates how to enable and configure events for a binary signal
type. When configuring events for a real or integer signal limiter, the Alarm
Event Configuration aspect will have a third tab for selecting the limiter whose
event is being configured (Figure 14).
3. Click the Is an event check box on the Event tab. The default is to have events
generated for both state changes (On-to-Off and Off-to-On). This is specified
by having both Log status changes check boxes unchecked.
To generate events for both state changes, both the Is an event and Is an alarm
check boxes must be checked, and both boxes for log status change unchecked.
4. As an option, use the check boxes to make the event occur exclusively for one
state change or the other:
– Checking Log status changes ‘On’ and NOT checking Log status
changes ‘Off’ will result in events only when the state changes from Off
to On (rising edge), Figure 18.
Figure 18. Event Generated Only When State Changes from Off to On
– Checking Log status changes ‘Off’ and NOT checking Log status
changes ‘On’ will result in events only when the state changes from On to
Off (falling edge), Figure 19.
Figure 19. Event Generated Only When State Changes from On to Off
5. As a further option, change the Alarm text group. Alarm text groups specify the
messages to be displayed for certain events. All events belonging to the same
group will be displayed and/or printed with the same text for a particular
change of state. Alarm text groups are configured as described in Defining
Alarm Text Groups on page 65. To change the alarm text group for a signal, use
the Alarm text group pull-down list. This list contains all Alarm text groups
that have been configured in the system.
If alarm handling for the signal is needed, refer to Alarms on page 55.
Alarms
To show a signal in the alarm list, alarm line and printouts when the alarm status is
activated, specify that the signal Is an alarm. This is done via the Alarm tab on the
Alarm Event Configuration aspect. Also, the signal must be defined as an event by
checking the Is an event check box under the Event tab of the Alarm Event
Configuration aspect (Events on page 54).
The Alarm tab is also used to specify how alarm acknowledgement will occur,
alarm priority, and a time delay to filter out spurious alarms.
To configure alarm handling for a signal:
1. Make sure the Is an event check box is selected on the Events tab.
2. Select the Alarm tab and check the Is an alarm check box, Figure 20.
Option - Description
Normal acknowledge - Show alarm as long as it remains active or unacknowledged.
No acknowledge - Show alarm as long as it remains active.
Disappears with acknowledge - Show alarm as long as it remains unacknowledged. When the alarm is acknowledged, it is removed from the alarm list regardless of the current alarm status. The system will not detect when the alarm becomes inactive. Thus printout is not available for when an alarm became inactive, only when it was acknowledged.
Alarm list colors are set in the Alarm and Event List Configuration aspect.
This adds a new softpoint object in the Control structure, Figure 22.
1. In the Control structure, select the Softpoint Generic Control Network object
for the node where the softpoints will be instantiated.
2. Click the Generic Control Network Configuration aspect.
3. Click the Add Objects button in the top left corner of this aspect.
This displays a dialog for selecting the object type on which to base the objects
being instantiated, and for specifying the number of objects to be instantiated.
An example is shown in Figure 24.
4. Use the browser on the left side to select the object type from which to
instantiate the new objects (for example, Area1_Pump is selected in Figure 24).
5. Specify the number of objects to create in the Number of new objects field. In
this example, nine (9) new objects will be created.
6. Enter the Name of the new object that will be used as the base name for all
new objects to be created. Each new object will be made unique by appending a
sequential numeric suffix to the same base name starting with the specified
Starting number (in this case: 2).
7. Click OK. The result is shown in Figure 25.
4. To enter the event text, click the Event2 tab and then enter the text in the First Row of Event Text: field, Figure 28. If additional lines are required, use the Extended event text: edit window.
5. The Alarm tab, Figure 29, is used to adjust the alarm handling parameters
originally configured as described in Configuring Alarm and Event Handling
on page 53.
6. The Alarm2 tab, Figure 30, is used to configure class. Classes are groups of
alarms without any ranking of the groups. The range is 1 to 9999.
For details regarding all other tabs on this aspect, refer to Adjusting Alarm/Event
Properties for Binary Signals on page 61.
Most of the other signal properties are inherited from the respective signal types. As
an option, disable the default settings from the Range tab, Figure 33.
Settings aspect of the Softpoint Object Type group in the Object Type structure,
Figure 34.
4. Enter the status texts in their respective boxes as described in Table 3. All texts
can be up to 16 characters maximum. Refer to Name Rules on page 71.
Properties with an asterisk (*) can only be changed from the default group.
Table 3. Alarm Text Group Configuration
b. Select the applicable Alarm and Event Service Provider under the Alarm
and Event Services category.
c. Click on the Service Provider Definition aspect.
d. Click on the Configuration tab.
e. Uncheck the Enabled check box and click Apply.
f. Check the Enabled check box and click Apply.
Incremental deploys only those changes that have been made since the last
deployment. A prompt is provided when a Full deploy is required.
3. Click Deploy to start the deploy process. During this time, softpoint processing is suspended; current values and new inputs are held. This process is generally completed within five minutes, even for large configurations. Completion of the deploy process is indicated by the Deploy ended message, Figure 37.
Name Rules
Use any combination of characters in Table 4 for softpoint object and signal names.
Table 4. Valid Characters for Softpoint and Signal Names
Lower case: a-z, å, ä, ö, æ, ø, ü
Upper case: A-Z, Å, Ä, Ö, Æ, Ø, Ü
Underscore: _
Numbers: 0-9
The Softpoint Object dialog shows the signals configured for this softpoint object.
Double-click on any signal to show the corresponding faceplate, Figure 40.
Open the faceplate for another signal either by using the pull-down list, Figure 41,
or by clicking the up/down arrows.
Section 4 Configuring Calculations
Using Calculations
Refer to the Calculations Service Recommendations and Guidelines on page 131
for specific use cases related to improved stability and predictability of the
Calculation Service.
Calculations Services is used to configure and schedule calculations that are applied
to real-time database objects, including both softpoints and actual process points.
Calculations may also be applied to object types allowing the calculation to be re-
used each time a new object is instantiated from the object type. Calculations also
have the ability to update/insert a historically stored value into a numeric log.
Calculations can be triggered by changes to the inputs, or be scheduled to execute
cyclically or at a given date and time. A calculation aspect may be applied to any
aspect object such as a unit, vessel, pump, or softpoint. Inputs can be any aspect
object property, and outputs can be any writable point in the system. Input/output
definitions can be made relative to the object for which the calculation is defined.
Data quality and alarm generation are supported. Calculation logic is written in
VBScript. The ability to write a timestamp to a softpoint or a log is provided so that calculation results can be aligned with their inputs and have the same timestamp for retrieval.
Administrative rights are required to add, configure, or modify calculations.
The following calculation example finds the average value for up to four input
variables, Figure 42. The variable mapping and VBScript are shown in Figure 43.
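Since Figure 43 is not reproduced here, the following is a minimal sketch of how such an averaging script could look. It assumes that INPUT1 through INPUT4 are mapped as input variables and OUTPUT1 as an output variable in the variable grid, and that 192 is the OPC "good" quality value; the names are illustrative, not taken from the figure.

Dim sum, count
sum = 0
count = 0

' Include only inputs with good OPC quality in the average.
If INPUT1.Quality = 192 Then sum = sum + INPUT1.Value : count = count + 1
If INPUT2.Quality = 192 Then sum = sum + INPUT2.Value : count = count + 1
If INPUT3.Quality = 192 Then sum = sum + INPUT3.Value : count = count + 1
If INPUT4.Quality = 192 Then sum = sum + INPUT4.Value : count = count + 1

If count > 0 Then
    ' Cast explicitly to the output type (see Writing VBScript on page 87).
    OUTPUT1.Value = CDbl(sum / count)
Else
    ' No good inputs: mark the output quality as bad instead of writing a value.
    OUTPUT1.Quality = 0
End If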
Calculation Aspects
Calculations are created as aspects of objects or object types. An example is shown
in Figure 44.
Figure 44. Calculation Aspect for Softpoint Object in the Control Structure
User Interface
The user interface for Calculations Services is comprised of aspect views that are
accessed via the Plant Explorer. The following aspect views are provided:
• Special Configuration for configuration of global parameters on the
Configuration Server level. Refer to Configuring Global Calculations
Parameters on page 78.
• Calculation Editor for configuring calculations. Refer to Editing the
Calculation on page 82.
• Calculation Scheduler for scheduling calculations either cyclically or
according to a schedule. Refer to Scheduling the Calculation on page 99.
• Calculation Status Viewer for monitoring and managing calculations. Refer
to Managing Calculations on page 119.
Configuring Global Calculations Parameters
OPC Base Rate - Rate at which input variables are updated by their respective OPC data sources.
Range: 0 to 2,147,483,647 milliseconds
Default: 1000 ms (1 second)

Cycle Base Rate - Rate at which the Calculations Scheduler scans the list of cyclically scheduled calculations. Typically, the default rate is used and is adjusted only if the Calculations Scheduler is consuming too much CPU time (as indicated by Windows Task Manager).
Range: 100 to 3,600,000 milliseconds (3,600,000 = 1 hour)
Default: 500 ms (1/2 second)

Cycle Offset Percentage - The first run for a cyclically scheduled calculation will occur when the specified cycle interval has elapsed after the calculation has been submitted. The Cycle Offset Percentage is used to stagger the execution of cyclic calculations to minimize peaks (where several cyclic calculations attempt to run at the same time).
The cyclic offset reduces the cycle interval to advance the first execution of each cyclic calculation by a random amount between zero and the specified offset percentage. Subsequent executions of the calculation will occur according to the specified interval (without the offset).
For example, if the cycle interval is 10 minutes and the offset percentage is 25, then the time when the calculation will FIRST be executed will be advanced by 0% to 25%. In other words, the time of first execution will be somewhere between 7.5 minutes and 10 minutes after the calculation is submitted for execution. Subsequent executions will occur every 10 minutes.
Range: 0 to 100%
Default: 100%
No adjustment will be made to time-based calculations.
Configuring Calculations
This section describes how to add a calculation aspect to an aspect object, and how
to use the aspect to create and schedule calculations.
Calculations are defined using VBScript. Any capabilities within VBScript or COM
technologies can be used to manipulate data. External functions and libraries may
also be used. Certain calculation properties are exposed for access by OPC client
applications. These properties are described in OPC Access to Calculation
Properties on page 106.
Once configured, calculations can be managed via the Calculations Status Viewer.
This is described in Managing Calculations on page 119.
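As a small, hypothetical illustration of the COM capability mentioned above, a calculation script can create and use a standard Windows scripting component. OUTPUT1 is an assumed output variable from the variable grid; the component and values are chosen only for illustration.

Dim limits
Set limits = CreateObject("Scripting.Dictionary")   ' standard Windows COM component
limits.Add "high", 90
limits.Add "low", 10
OUTPUT1.Value = limits("high")   ' write a looked-up value to the assumed output variable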
Using the Calculation Aspect
The calculation aspect has two views - an editor and a scheduler. The editor is
displayed by default. This is where calculation variables are defined, VBScript
written and the trace feature enabled for debugging the calculation script offline.
Begin configuration by using the calculation editor to define variables and write the
script as described in Editing the Calculation on page 82. When the calculation is
ready, use the edit/schedule toggle button on the Tool Bar to switch to the scheduler
view. Refer to Scheduling the Calculation on page 99.
Other procedures that can be performed via the editor include:
• Tracing the execution of a calculation - Refer to Tracing the Calculation
Execution on page 98.
• Forcing the execution of a calculation - Refer to Running the Calculation
Manually on page 102. The calculation can run online or offline for simulations when run manually.
• Enabling/disabling the execution - Refer to Table 10.
VBScript are processed, and the results are displayed in the trace window. Enable or
disable tracing via the editor’s context menu, or programmatically within the VBScript. The trace window has a limit of about 20 Kbytes, which is equivalent to about 10,000 characters.
Use the variable grid to map input and output variables to their respective OPC tags.
Refer to Mapping Calculation Variables on page 83. Then use the edit window to
write the VBScript. Refer to Writing VBScript on page 87.
Mapping Calculation Variables
• Whether to use the online data point value or a configured offline value.
• Whether to trigger the execution of the calculation when an input variable
changes. Do not use analog variables to trigger the execution of a calculation.
Use the Insert Line and Delete Line buttons on the Tool Bar, Figure 50 to add and
delete variables in this map as required. Up to 50 variables can be specified. The
variables defined in this grid can be used without having to declare them in the
VBScript. The parameters needed to specify each variable are described in Table 6.
When using relative object referencing to define calculations in the Object Type
structure, select the property for the reference before adding the name of the
specific structure to the path. After entering the structure into the path, the
property list will no longer be populated because the list is built from the object
reference which is no longer valid. For further information on relative object
referencing, refer to Object Referencing Guidelines on page 111.
Table 6. OPC Mapping Parameters
Parameter Description
Variable This is the name of the variable as it is referenced in the VBScript. This does not
need to be declared anywhere else. It is available for use immediately.
Object This is an object reference. The object reference and the property parameter
make up the data point specification. Enter the object reference by typing, or
drag the corresponding object from the applicable structure. To drag the object,
the Object field must first be selected. General object referencing syntax is
described in Object Referencing Guidelines on page 111.
As an alternative to using an absolute object reference, specify a relative object
reference. This uses the same calculation for multiple object instances without
having to change object references. A relative reference starts with the object to
which the calculation is attached and must descend from that object. It is not
possible to select a relative item outside the sub-tree of the selected object.
Refer to Relative Object Referencing Guidelines on page 115.
Property This pick list is populated with properties based on the object specified. For input
variables, select the property to be read. For output variables, select the property
to be written.
Direction This specifies whether the variable is an input or output.
• Input - the value is read into the variable before the calculation is executed.
The OPC update rate is configurable. Refer to Configuring Global
Calculations Parameters on page 78.
• Output - the variable value is written to the property AFTER the calculation
has executed. The output variable value is available for use in the script as it
executes. NOTE: Object properties are NOT updated while the calculation
is executing. For instance, a repeating loop that increments a variable value
does not continually update the object. Instead, the value that occurs when
the calculation is finished is written.
On Line Value This is a read-only field that indicates the current value for the variable.
Offline Value This value is used in place of the actual (Online) value when the Online flag in
the State column is set to False. This is typically used for performing offline tests
or declaring constants.
State Online - Use the actual online value. For inputs, this is the online OPC property value. For outputs, this is the calculated variable value. Offline - Use the specified offline value.
Event This is only applicable for input variables. It is used to specify whether or not to
execute the calculation in the event that the value for this input changes:
• True - Execute the calculation when the input value changes.
• False - Do not execute the calculation when the input value changes.
Note: Analog signals should not be used as event triggers.
Log The Log field is used to pick a log that will be used to store output data from the
calculation. When the calculation is run, the data written to the output parameter
will also be written to the numeric log, if present. If the timestamp already exists
in the log for this data, then that data can also be updated. Also, the quality for
the softpoint output object will be used as the data quality for the numeric log.
The Log field is enabled when an object and a property for a variable is specified.
This allows selection of a Lab Data log that has been created for that property. If
no Lab Data logs have been created for the Object/Property combination, the
pick list will be blank. The Lab Data log can be from either a trend log or a history
log. Once specified and the aspect has been saved, the Direction column will be
disabled with ‘Output Only’ displayed (Calculations can not read from a log). In
addition, the Online Value and Event columns display '----' since they do not
apply to an output only variable.
The calculation variable can be treated like any other output variable, specifying
the Value, Quality, and TimeStamp fields for the variable. When the calculation
runs, the Value and Quality for the specified TimeStamp will be written to the
specified 'Lab Data' log. If a value exists in the log for the specified timestamp,
that entry will be overwritten with the new value/quality. If no value exists in the
log for the specified timestamp, the value/quality will be inserted into the log with
the specified timestamp (required).
Writing VBScript
When working with variables in VBScript, the assignment of any output variable
must be explicitly cast to the output variable type using expressions such as:
OutputVar = CInt(variabledata)
Calculation logic is written in VBScript. Use the edit window in the middle part of
the Calculation Editor to write the VBScript, Figure 51. Any capabilities within
VBScript or COM technologies can be used to manipulate data. External functions
and libraries may also be used.
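For example, the following sketch shows explicit casts for different output types. INPUT1, OUTINT, OUTREAL, and OUTTEXT are assumed grid variables, named here only for illustration.

Dim scaled
scaled = INPUT1.Value * 1.5
OUTINT.Value = CInt(scaled)    ' integer output: cast with CInt
OUTREAL.Value = CDbl(scaled)   ' real (floating point) output: cast with CDbl
OUTTEXT.Value = CStr(scaled)   ' string output: cast with CStr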
Settings.UpdateStatus
The Settings.UpdateStatus defaults to False. This is to optimize performance.
Certain properties, mostly related to debugging, will not be updated when the
calculation is run on line. These are described in Reading Calculation Status
Information on page 120. Set the update status to true for debug purposes if
necessary; however, it is generally recommended not to run calculations in this
mode.
If relatively few calculations are running at a modest rate of execution, run the
calculations with Settings.UpdateStatus = True. This allows the Result property to
link calculations. Refer to Calculation Result on page 90.
The calculation can also periodically set the update status true every n number of
executions to periodically update status-related parameters. An example is provided
in Improving Performance on page 129.
To set the update status to true, either change the value of Settings.UpdateStatus to True, or comment out the line.
DO NOT delete the line. This would require re-entering the line to reset the update status to False.
When True, versioning can not be used. This is because online values of the
Calculation aspect differ between environments.
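A minimal sketch of the periodic approach follows (the full example is in Improving Performance on page 129). EXECCOUNT is assumed to be an offline output variable used as a persistent counter, and every 100th execution is chosen arbitrarily.

Settings.UpdateStatus = False
EXECCOUNT.Value = EXECCOUNT.Value + 1
If EXECCOUNT.Value Mod 100 = 0 Then
    ' Refresh the status-related properties on every 100th execution only.
    Settings.UpdateStatus = True
End If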
Online Variables
Online input variables declared in the grid contain read-only values that correspond
to the value of the referenced object and property. The variable within the script can
be modified, but these changes are not persisted between executions and the value
after execution is not written to the actual process value.
Online output variables declared in the grid contain the actual process values of the
referenced objects and properties before script execution. Also any modification of
these variables from within the script will modify the actual process values after
execution completes.
Offline Variables
Offline values are used in place of the actual (Online) values when the Online flag in
the State column is set to False. These values are also used when executing the
calculation in offline mode. Offline variables are not supported in an engineering
environment.
Offline input variables are like constants. They contain the value as designated in
the Offline Value column during execution. The variable may be modified within
the script, but changes are not persisted between calculation executions. Further, Offline values are not updated when the Update Status is set to False (default).
Offline output variables function like internal softpoints. Before calculation
execution, the variable is set to the value contained in the Offline Value column.
During execution, the variable can be modified via the script. After execution, the
change (if any) is written back to the Offline Value. This offline value is not
accessible to any client other than the calculation itself. This is useful for creating
static variables (values are persisted between executions) where the values are only
used internally within the calculation. For example, an internal counter could record
the number of times the calculation has executed.
Calculation Result
The calculation result is a special calculation property that is treated as a variable
within the VB script. It does not need to be declared in the grid or the script.
This property is only applicable when the Update Status is set to True.
Before execution, the previous result value is loaded into the variable and is
available for use by the script. The result value can be set within the calculation’s
VB script, for example result.value = 1. After execution, the value is written back
to the calculation's Result property. The result value is then accessible by OPC
clients, and may also be referenced within other calculations as an input variable in
the calculation’s variable grid.
The result quality may also be set within the calculation script, for example:
result.quality = 0. However, any change to this value is not saved once the
calculation has completed. The result quality always reverts back to quality = good
once the calculation has completed.
If history is to be collected for the result of a calculation, it is best not to collect
directly from the Result property. For best performance, write the result to a
softpoint from which the data may then be collected.
The calculation result can be used to link calculations without having to use an
external softpoint. When Update Status is set to True, the result variable of the
calculation aspect can be used to link one or more calculations. The result of one
calculation can be used as the input and trigger for another calculation. This is
illustrated in the following example.
Example:
M1 consists of two calculations: CalcA and CalcB. The result property of CalcA
will be used to trigger CalcB.
The script for CalcA sets its result property to one of its inputs, in this case INPUT1,
Figure 53.
CalcB then subscribes to CalcA's result property as an input, Figure 54. In this case,
the event flag is also set to True so that each time the result of CalcA changes,
CalcB executes. CalcB could also be executed manually or scheduled instead.
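Figures 53 and 54 are not reproduced here. As a sketch, the relevant line in CalcA's script could look as follows, where INPUT1 is an assumed input variable in CalcA's variable grid:

' CalcA: pass the input through to the Result property so that CalcB can subscribe to it.
result.value = INPUT1.Value

In CalcB's variable grid, an input variable would then be mapped to the Result property of the CalcA aspect, with its Event flag set to True.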
Table 7. Properties
Table 8. Methods
In addition, some OPC properties will not convert to the appropriate data type
before allowing the data to be committed. For example, attempting to write the
value 255 to the NAME:DESCRIPTION property of an object will not work.
However writing the value "255" or using the CSTR conversion function will work.
This is illustrated in Figure 57.
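Since Figure 57 is not reproduced here, a one-line sketch of the working form follows, where DESC is assumed to be an output variable mapped to the object's NAME:DESCRIPTION property:

DESC.Value = CStr(255)   ' convert the number to a string before writing it to the string property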
Language Extensions
Language extensions provide additional functionality as described in Table 9.
Table 9. Language Extensions
Language Restrictions
Due to the nature of calculations, the following restrictions are placed on the
language:
• User Interface - The VBScript language supports functions that produce a
user-interface. For example, the MsgBox function displays a window that
remains visible until the user manually closes it. All user-interface functions
are turned off for calculations.
• Timeout - To prevent the user from performing operations that could
potentially hang the calculation service, a configurable timeout period is
employed. If the calculation does not complete in the time specified, it is forced
to exit with an error status indicating a timeout has occurred. This prevents
endless loops and other coding errors from adversely affecting the system.
• Calculations variables can not be used as a control variable in a For loop.
Using a variable declared in the calculation grid as a control variable will cause
problems in a "For" loop. For example, INPUTA is a variable declared in the
Calculations grid. This code will result in an illegal assignment error for the
calculation. The problem area is the use of INPUTA as the control variable in the For statement below.
For INPUTA = 1 To 10
Do Something
Next
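A simple workaround (a sketch, not taken from the manual) is to declare a local script variable and use it as the loop control instead of the grid variable:

Dim i
For i = 1 To 10
    ' Work with i here; grid variables such as INPUTA may be read or assigned
    ' inside the loop body, but must not be used as the For control variable.
Next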
Figure 58. SetLocale Function for Floating Point Values in Native Language
Tool Bar
The tool bar, Figure 60, provides access to Calculation Editor functions. Table 10
provides additional notes.
(Figure 60 toolbar buttons: Save, Add Line, Cut, Paste, Enable/Disable, Run on client (Offline), Help.)
The Calculation Editor View does not prompt the user when exiting while changes are pending. Use the Save button before leaving the display, or pin the aspect.
Icon - Notes
Save - Saving changes completely replaces the original calculation. To preserve the original, make a second copy to edit.
Enable/Disable - Each Calculation must be assigned to a specific Service Provider BEFORE enabling the calculation. This procedure is described in Distributing Calculations on Different Servers on page 122.
Run Calculation on the Server (Online) - The calculation must be saved and enabled before it is run.
Scheduling the Calculation
Cyclic Schedule
To specify a cyclic schedule, make sure the Schedule check box is unchecked, and
then check the Cycle check box. In the corresponding fields specify the interval unit
(hours, minutes, seconds), and the number of intervals. The example in Figure 61
shows a one-second cycle.
Time-based Schedule
To specify a time-based schedule, make sure the Cycle check box is unchecked, and
then check the Schedule check box. In the corresponding fields specify the time
components (year, month, day, hour, minute, second). The * character is a wildcard
entry which means every. Figure 62 shows a time-based schedule that will execute
at 12:01:01 (one minute and one second after noon) on every day of every month,
for every year.
Running the Calculation Manually
Offline Execution
In the offline mode, the calculation uses the offline values specified on the variable
grid. This way the calculation execution does not affect actual process values. The
calculation must be saved before it can run.
To run a calculation offline, click the offline button.
Online Execution
In the online mode the calculation uses the online values specified on the variable
grid. Save and enable the calculation before running the calculation in this mode.
Assign each Calculation to a specific Service Provider BEFORE enabling the
calculation. This is described in Distributing Calculations on Different Servers
on page 122.
To run the calculation online, click the online button.
Making Bulk Changes to Instantiated Calculations
of that type. For example, a defect may be found in the source code of the
calculation. Opening and editing each calculation by hand to make the same fix
could be time consuming and introduce additional defects. The Clone function of
the Calculation Status Viewer is used to make such changes quickly and
consistently.
To do this:
1. Using the Calculation Editor aspect, modify a single calculation with the
necessary change. This aspect will become the source for the clone. For example, the change might be to disable tracing for the calculation.
2. Open the Calculation Status Viewer. To do this, go to the Service structure,
select the Calculation Server, Service, and then select the Calculation Status
Viewer Aspect, Figure 73.
Each component (plant2, section1, and so on) references a specific object in the
structure hierarchy. Each component is delimited by a separator character. The dot
(.) and slash (/) are default separators. Specify different characters to use as
separators using the aspect [Service Structure]Services/Aspect Directory-> SNS
Separators. The Name Server must be restarted after any changes.
If a separator character is actually part of an object name, it must be escaped with a backslash (\) in the object reference. For example, the string AI1.1 will
search for an object named 1 below an object named AI1. To search instead for
an object named AI1.1, reference the object as: AI1\.1.
Absolute object references can be refined using the following syntax rules:
• . (dot) - search for the following object (right of dot) below the previous object (left of dot). This includes direct children as well as their offspring.
• .. (double dot) - go one level up to the parent.
• [down] - (default) searches from parent to children.
• [up] - searches from child to parent.
• [direct] - Limits search to just direct children. Refer to Direct on page 112.
• [structure] - Limits search to a specified structure, for example Functional
Structure or Control Structure. Refer to Structure Category on page 113.
• [name] - Limits search to the main object name (of category name). Refer to
Name on page 114.
This syntax can be used for powerful queries. Also, restricting a search using these
keywords can significantly improve the search speed. For example, the following
query path will start from a Signal object, and will retrieve the board for the signal
and the rack where this board is located:
.[up][Control Structure]{Type Name}board[Location Structure]{Type Name}rack
Direct
By default, queries search for the referenced object at all levels below the parent
object, not just for direct children. To limit the search to direct children, use the
[Direct] keyword.
For example, the query path plant2.motor3 will find motor3 in all of these
structures: heyden.plant2.section3.motor3, plant2.section3.drive4.motor3, and
plant2.motor3. The query [Direct]plant2.motor3 will find motor3 only at
plant2.motor3.
For further considerations regarding the Direct keyword, refer to Using the
[Direct] Keyword on page 118.
Structure Category
All structures are searched by default. Restrict the search to a specific structure by
specifying the structure name, Table 12. For example:
[Functional Structure]drive3.motor2 searches only in the Functional Structure.
The query [Functional Structure]drive3.motor2[]signal4 searches the Functional
Structure until it finds an object named motor2 below an object named drive3, and
then searches all structures for an object named signal4 below motor2.
Use the [Direct] keyword in combination with a structure reference, for example:
[Direct][Functional Structure]plant2.section1. Do not specify several structures. For
example, the following syntax is not allowed: [Functional Structure][Control
Structure]motor2. If this functionality is needed, use the
IAfwTranslate::PushEnvironment method of the Name Server.
To negate a query path, add the not symbol. For example, [!Functional
Structure]motor2 searches for a motor2 not occurring in the Functional Structure.
Table 12. Structure Names
For further considerations regarding Structure, refer to Using the Structure Name
on page 118.
Name
An object can have a number of different names. Each name is assigned as an
aspect. The name categories are described in Table 13. The main name is the name
that is associated with the object in the Plant Explorer. This is generally the name used
for absolute object references. By default a query will search for all names of a
given object. This may provide unexpected results when objects, other than the one
that was intended to be referenced, have names that fit the specified object
reference. For example the query A* may return objects whose main name does not
start with A, but whose OPC Source Name does start with A. This can be avoided by
using the [name] keyword. This restricts the search to main names (category name)
only. For example: {Name}A*.
Table 13. Name Categories
Examples
Table 14. Example Queries
“{Functional Designation}M1.R2”: Look only at name aspects of category ‘Functional Designation’. First search an object M1 and from there an object R2.
“[Control Structure]board11[Functional Structure]signal3”: Search for board11 in the Control Structure, then switch to the Functional Structure and search signal3.
“[Control Structure]board11[]signal3”: Search “board11” in the Control Structure and from there search all structures to find signal3.
“[Direct][Functional Structure]plant.a1”: Start with the root object plant in the Functional Structure and search for a direct child named a1.
3. Add the Calculation aspect to the motor object type. When adding the variables
for the two binary signals, specify the object reference, Figure 72.
The dot (.) at the beginning of the object reference specifies that the reference
will start at the object that owns the calculation aspect, in this case the motor
object. Only motor objects in the Control Structure will be referenced.
4. Instantiate the motor object type in the Control Structure under the objects
representing Units 1 and 2 respectively. When the calculation runs for the
motor on Unit 1, the references for START and STOP will point to B117 and
B118 respectively. When the calculation runs for the motor on Unit 2, the
references for START and STOP will point to B217 and B218 respectively.
Managing Calculations
Calculations are managed via the Calculations Status Viewer. This viewer supports
the following functionality:
• Reading Calculation Status Information.
• Enabling/Disabling Calculations.
• Distributing Calculations on Multiple Servers.
In addition, the ability to collect and display performance statistics generated by the
Calculation server can be configured.
Some tasks performed in the Status Viewer may be lengthy, for example enabling
or disabling a large number of calculations. DO NOT close or change the Status
Viewer aspect while any Status Viewer task is in progress. The view MUST
remain open until the task is finished; otherwise, the client which is running the
task will be disconnected from the Calculation service.
Take the following measures to ensure that the view remains unchanged and the
client remains connected:
• If the Calculation Status Viewer aspect is hosted within the Plant Explorer
workplace, use the Pin / Unpin feature to ensure that no other aspect
replaces it during the task. Also, do not close the hosting workplace until the
operation is complete.
• If the Calculation Status Viewer aspect is being displayed in a child window,
do not close the window until the operation has completed. Also do not close
the workplace from which the child was launched, as this will close all child
windows.
To access this viewer, go to the Service structure, select the Calculation Server,
Service, and then select the Calculation Status Viewer Aspect, Figure 73.
Parameter Description
Name Calculation Name specified when the aspect was created.
Object Name Name of the object for which the calculation aspect was created.
Object Type Type of object for which the calculation aspect was created.
Enabled Indicates whether a calculation is enabled (True) or disabled (False). Toggle
states via the right-click menu. Refer to Enabling/Disabling Calculations on page
121.
Status Status codes for errors generated by related applications including OPC,
Microsoft, and Softpoint Server. These codes are intended for ABB Technical
Support personnel.
Message Textual message related to status.
Last Execution Date and time that the calculation was last executed.
Service Calculation Server where the calculation runs. This assignment can be
changed. Refer to Distributing Calculations on Different Servers on page 122.
Execution Time Time it took for the last execution of the calculation.
Sorting of columns that contain numbers or dates is done alphabetically, which can
yield an unexpected sort order for those columns.
The following properties will not be updated when the Settings.UpdateStatus line
in the script is set to False (the default condition): Result, Execution Time, Last
Execution, Status, Status Message.
Enabling/Disabling Calculations
To enable or disable one or more calculations, select the calculations in the viewer,
then right click and choose Enable (or Disable) from the context menu, Figure 75.
Distributing Calculations on Different Servers
Figure 76. Calculation Services Dialogs (Dual on left and Redundant on right)
This dialog lists all Calculation Service Groups and Service Providers that exist in
the current system. Assign the selected calculation to a Calculation Service Group.
In a non-redundant configuration, only one Calculation Service Provider exists in a
Calculation Service Group. In a redundant configuration, two service providers exist
in one group. There can be multiple Calculation Service Groups.
Performance Tracking
The Calculation Server generates performance statistics that can be used to monitor and
adjust the load on the server. These statistics may be collected by a softpoint object
created in the Control structure. Employ a trend aspect to monitor the performance
over time, identify peaks during execution, and take corrective action if necessary.
Guidelines for implementing this functionality are provided in:
• Collecting Performance Data for a Calculation Service on page 123.
• Accessing the Performance Data on page 128.
• Improving Performance on page 129.
Collecting Performance Data for a Calculation Service
Create the PerformanceMonitor instance under the Softpoint Object category in the
Control Structure. Remember to use the name of the Calculation Service Provider
when naming the softpoint object, Figure 79.
After creating the Performance Monitor object, use the softpoint configuration
aspect to deploy the new softpoint configuration. Refer to Deploying a Softpoint
Configuration on page 68 for details.
Figure 81. Viewing Performance Data Via the Softpoint Object Dialog
Improving Performance
Refer to the Calculations Service Recommendations and Guidelines on page 131 for
specific use cases related to improved stability and predictability of the Calculation
Service.
Several factors affect the execution time for a calculation. These include:
• Number of inputs and outputs.
• Size and nature of the calculation source code.
• Persistence of the data in the aspect directory after calculation execution.
Data persistence can account for up to one-third of the total execution time of a
calculation. This is controlled on an individual calculation basis by the
Settings.UpdateStatus command in the calculation script. By default this
command is set to False. This means STATUS, STATUSMESSAGE,
EXECUTIONTIME, LASTEXECUTION, TRACEMESSAGE, RESULT
properties, and offline output values are NOT updated in the aspect directory upon
completion of the calculation execution. It is generally recommended that the
calculation update status be set to False.
Set the Update Status in the calculation script to True to have these properties
updated. For example, update the calculation properties every n executions by using
Settings.UpdateStatus in a loop that sets the value to True after n iterations. An
example is shown in Figure 82.
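As an illustration only, such a script could take the following shape. This sketch assumes that the calculation body is written in VBScript and that a numeric variable named counter has been defined on the calculation's variable grid so that its value persists between executions; the variable name, that persistence mechanism, and the .Value syntax are assumptions for illustration and are not taken from this manual.

' Illustrative sketch only - adapt names and variable syntax to the actual calculation.
Const UpdateEvery = 10                 ' persist the status properties every 10th execution

counter.Value = counter.Value + 1      ' counter is assumed to persist between executions
If counter.Value >= UpdateEvery Then
    Settings.UpdateStatus = True       ' persist RESULT, STATUS, EXECUTIONTIME, etc. this cycle
    counter.Value = 0
Else
    Settings.UpdateStatus = False      ' skip persistence to keep execution time down
End If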
Execution times for calculations may be reduced by up to 33%. For calculations that do
not execute often, or for calculations with logic that requires a significant amount of
time to execute, the savings will be negligible. The savings will be most valuable
for smaller calculations that execute often.
Refer to the considerations below to better understand the consequences of using
the UpdateStatus feature to disable persistence of the calculation properties.
Use Case 3 - SoftPoint Signals Show Bad Data Quality when Calculations are
not Updating Data Quality
The quality of the calculation inputs is used for the output if the calculation body does
not expressly set the quality attribute of the output variable. A race condition can
occur where bad data quality is sent to SoftPoints before the input indicating the
true data quality of the input signal has been received. This can occur when the
SoftPoint Server is restarted, either manually or as part of a reboot of the
subsystem that executes the SoftPoint.
When using a SoftPoint as an output destination, it is recommended that the
calculation always set the data quality explicitly.
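As an illustration only, the calculation body could set the output quality explicitly along the following lines. The variable names (level1, level2, spOutput), the GOOD and BAD quality identifiers, and the .Value/.Quality syntax are assumptions made for this sketch and may differ in the actual calculation environment; adapt them to the variables defined on the variable grid.

' Illustrative sketch only - names and quality identifiers are assumptions.
spOutput.Value = level1.Value + level2.Value      ' the actual calculation result

' Set the output quality explicitly instead of relying on the input quality
' being propagated automatically; this avoids the race condition described
' above when the SoftPoint Server restarts.
If level1.Quality = GOOD And level2.Quality = GOOD Then
    spOutput.Quality = GOOD
Else
    spOutput.Quality = BAD
End If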
Section 5 Configuring History Database
Introduction
This section provides details on configuring History databases.
Dropping the Default Database Instance
3. Click Yes to continue dropping the database instance. This process may take
several minutes to complete. Progress is indicated by the History Database
Instance Delete message.
Once the default database instance has been dropped, create a new properly sized
instance. Refer to the Post Installation book Information Management section for
information on creating a new database instance.
Section 6 Configuring Log Sets
Introduction
Log sets are used to start or stop data collection for a number of property or message
logs simultaneously with one command. A log can be assigned to one or two log
sets. The only restriction is that all logs in a log set must reside on the same node.
For convenience, build log sets prior to building the logs so the log can be assigned
to a log set when it is configured. Logs can still be added to log sets later.
Log sets should be avoided if possible. They exist primarily for migrations from
Enterprise Historian systems to 800xA. There are configurations where log sets
are required when profile logs are used; however, for non-profile applications
they should be avoided.
Trend logs cannot be started and stopped with log sets. This functionality is only
applicable for history logs.
The operating parameters for a log set are specified in a Log Set aspect. A dedicated
aspect is required for each log set that needs to be configured. To create a Log Set
aspect, create a Log Set object under a specific server node in the Node
Administration structure; refer to Adding a Log Set on page 140. The Log Set aspect
is automatically created for the object. To configure the Log Set aspect, refer to Log
Set Aspect on page 141.
Once the log sets are built, assign the logs to their respective log sets. This is done
via each log’s configuration aspect:
• For property logs this is done via the IM Definition tab of the Property Log
Configuration aspect and is described in Assigning a Log to a Log Set on page
259.
• For message logs, refer to Message Log Aspect on page 150.
5. Enter a name for the object in the Name field, for example: LogSet1.
6. Click Create. This adds the object under the Log Sets group, and creates a
corresponding Log Set Aspect.
Figure 87. Selecting the Inform IT Log Set in the New Object Dialog
Figure 88. Accessing the Configuration View for the Log Set Aspect
Attribute Description
Service Group When adding the Log Set object in the Node Administration structure, the
Service Group defaults to the Basic History Service Group for the selected
node. This specification cannot be changed.
Location This is a read-only field which indicates the node where the log set will run
based on the selected Service Group.
Set Name The log set name can be up to 32 characters. If the length limitation is
exceeded it will be truncated to size. This name will replace the object name
given when this object was created. DO NOT use spaces in the log set name
or the object name will be truncated at the space when the log set name
replaces the object name.
The log set name can not be changed in the Configuration View of the Log Set
aspect. Once it is entered in the Set Name field and Apply is clicked, DO NOT
change the name of the aspect. This will cause any new log configurations
pointing to the history template that references the renamed log set to fail
when created.
Description The description is optional. The length limitation is 40 characters.
Log Status This field indicates whether the logs in this log set are active or inactive. Refer
to Activate/Deactivate Logs in a Log Set on page 143.
Logs in Set This is a read-only field that lists the logs in this log set. No entries can be
made. To assign logs to a log set, refer to Assigning a Log to a Log Set on
page 259.
Number of Logs This field indicates the total number of logs in the log set.
Section 7 Alarm/Event Message Logging
Begin with any one of the following topics to suit your depth of knowledge of
message logs:
• If unfamiliar with message logs, refer to Message Log Overview on page 146
for a quick introduction.
• To configure message logs refer to Configuring Message Logs on page 148.
Offline Storage
Events and messages stored in a message log can be copied to an offline storage
media. Refer to Section 11, Configuring the Archive Function.
Figure 91. Accessing the Configuration View for the Message Log Aspect
Attribute Description
Service Group When adding the Message Log object in the Node Administration structure, the
Service Group defaults to the Basic History Service Group for the selected
node. This specification cannot be changed.
Location This is a read-only field which indicates the node where the message log will
run based on the selected Service Group.
Message Log Type Select the message log type before configuring any other parameters.
• For 800xA System Alarm/Event messages select OPC_MESSAGE.
• For Batch Management messages select OPC_MESSAGE.
• For Advant OCS messages (for consolidating message logs from earlier
Enterprise Historian systems) select DCS_MESSAGE.
Access Name Assign the access name according to the selected message log type. Refer to:
• Message Logging for 800xA System Alarm and Event Server on page 152.
• Message Logging for Batch Management on page 153.
The access name text string can be 64 characters maximum. If the length
limitation is exceeded it will be truncated to size.
NOTE: The access name will replace the object name given when this object
was created. DO NOT use spaces in the access name or the object name will be
truncated at the space when the access name replaces the object name.
Description This field may be used to enter an optional text string. It does not have to be
unique system wide. The length limitation is 40 characters. All characters are
allowed.
Description has a special application for DCS MESSAGE logs in systems with
Master software.
Log Name History assigns a default log name by adding a prefix $HS and a suffix -1-o to
the access name. The last character in the log name indicates whether the log
is original (from disk) or restored from tape: o = original, r = restored
The default log name can be edited but can not be more than 64 characters. It
must be unique system wide.
Capacity The log capacity for the message log must be configured. This is the number of
messages the log can hold. If a log is full, new entries replace the oldest entries.
Specify the capacity as the number of messages needed to maintain online.
The maximum capacity is 12 million (12000000) entries; however, the field will
accept numerical values up to 9999999. Note that configuring the log capacity
larger than required will needlessly consume more disk space.
Archive Group This is the archive group to which this log is assigned. This assignment can also
be made via the Archive Group aspect as described in Configuring Archive
Groups on page 331.
Start State Log state upon starting or restarting the node.
Start Inactive
Start Active
Start Last State - restart in state log had when node shut down (default)
Start Time The earliest allowable system time that the log can become active. If the
specified start time has already passed, the log will begin to store data at the next
storage interval.
If the start time is in the future, the log will go to PENDING state until the start
time arrives. The default is 01-01-1990 00:00:00.
Log Set and Second Log Set The log set to which this log belongs. The log can be reassigned to a new log set. Note: The use of log sets should be avoided.
Accessing a Message Log with an Alarm and Event List
Figure 94. Configuration for Reading Message Log from Alarm and Event List
These aspects may be added to any object in any structure. However, it is generally
recommended that a generic object be created and then all three aspects added to
that object. This allows use of the same Alarm and Event List view in multiple
locations in the plant (in more than one structure). To do this, create one object with
these three aspects and use the Insert Object function to represent it in multiple
locations. Otherwise, the three aspects would need to be added to multiple objects.
The A/E Linked Server Configuration aspect must be configured to specify the
Information Management server where the message log resides. It must also be
configured to specify one of four classes of alarm/event data to read:
• Messages currently stored in the message log.
• Published messages.
• Restored messages.
• Messages residing on a specified archive volume.
To read more than one class of message, configure a separate object with all three
aspects for each class.
The Alarm and Event List Configuration aspect must be configured to direct the
Alarm and Event List aspect to access messages from the Information Management
message log rather than the 800xA system message service. This is done via the
Source field on this aspect.
Finally, the Alarm and Event List aspect must be configured to use the Alarm and
Event List Configuration aspect described above.
2. Add the three aforementioned aspects to the new Generic type object.
a. The Alarm and Event List aspect is located under the Alarm and Events
category in the New Aspect dialog, Figure 96.
c. Select one of the four mutually exclusive radio buttons to select the type of
messages to read, Figure 101. The categories are described in Table 20.
To read more than one type of message, configure a separate object with all three
aspects for each type.
d. Click Apply when done.
Table 20. Retrieved Data Categories
Retrieve Near-Online A/E Data by Log Name: Retrieves messages currently stored in the message log.
Retrieve Restored A/E Data: Retrieves messages that have been restored from archive media.
Retrieve Published A/E Data by Log name: Retrieves messages from all published volumes.
Retrieve Archived A/E Data by Location: Retrieves messages from a selected archive volume.
4. Configure the Source on the Filter tab of the Alarm and Event List
Configuration aspect to direct the Alarm and Event List aspect to read from the
Information Management message log. To do this (reference Figure 102):
a. Select the Alarm and Event List Configuration aspect.
b. Select the Filter tab.
c. Select IM Archive Events from the Source pull-down list.
d. Click Apply.
e. Enable the Condition Events, Simple Events, and Tracking Events options
in the Categories area.
f. Click Apply.
Section 8 Historizing Reports
Introduction
This section describes how to configure report logs to store finished reports
scheduled and executed via the Application Scheduler and Report action plug-in.
Completed reports may also be stored by embedding them in a completed report
object in the Scheduling structure. This must be specified within the Report Action.
Reports stored either as objects, or in a report log can be archived on either a cyclic
or manual basis. This is described in Section 11, Configuring the Archive Function.
Access to archived report logs is via the View Reports aspect as described in the
section on viewing archive data in System 800xA Information Management Data
Access and Reports (3BUF001094*). When reviewing a restored report log, it will
be presented in the tool appropriate for the type of file the report was saved as (for
example, Internet Explorer will be launched if the report was saved in .html format).
Generally, one report log is dedicated to one report. For instance, a WKLY_SUM
report will have a log especially for it. Each week’s version of the WKLY_SUM
report is put into the report log. Different report logs are then used for other reports.
More than one type of report can be sent to one report log if desired. The number of
report occurrences to save can be specified for each report log.
To configure the Report Log aspect, refer to Report Log Aspect on page 171.
Under the Inform IT History Object, find containers for each of the Inform IT
History object types. History objects (in this case, a report log) must be
instantiated under the corresponding container.
4. Right-click on the Report Logs group and choose New Object from the
context menu. This displays the New Object dialog with the Inform IT Report
Log object type selected.
5. Enter a name for the object in the Name field, for example: ReportLog1, then
click Create. This adds the object under the Report Logs group, and creates a
corresponding Report Log Aspect.
Attribute Description
Service Group When adding the Report Log object in the Node Administration structure,
the Service Group defaults to the Basic History Service Group for the
selected node. This specification cannot be changed.
Location This is a read-only field which indicates the node where the report log will
run based on the selected Service Group.
Access Name This is the name assigned to the log for retrieval. Generally, use the name of
the report as specified in the report building package. Use either the access
name or log name to retrieve the data in a log.
The access name text string can be 64 characters maximum. If the length
limitation is exceeded it will be truncated to size.
NOTE: This name will replace the object name given when this object was
created. DO NOT use spaces in the access name or the object name will be
truncated at the space when the access name replaces the object name.
Description An optional text string for documentation purposes only. It does not have to
be unique system wide. The length limitation is 40 characters maximum. All
characters allowed.
Log Name The log name must be unique system wide. It is derived from the access
name. History assigns a default log name by adding a prefix $HS and a
suffix -n-o to the access name. n is an integer that uniquely identifies
different logs in a composite log. This value will always be 1 since no other
logs can exist in a composite log containing a report log. The last character
in the log name indicates whether the log is original (from disk) or restored
from tape: o = original, r = restored
The default log name can be edited but can not be more than 64 characters.
It must be unique system wide.
Capacity This is the number of reports the log can hold. Be sure to specify a log
capacity for report logs. The field will accept numeric values up to 9999. If a
log is full, new entries replace the oldest entries.
Archive Group This is the archive group to which this log is assigned. This assignment can
also be made via the Archive Group aspect as described in Configuring
Archive Groups on page 331.
Section 9 Historical Process Data Collection
Introduction
Process data collection refers to the collection and storage of numeric values from
aspect object properties. This includes properties for live process data, softpoint
data, and lab data (programmatically generated, or manually entered). This
functionality is supported by property log objects.
Process data collection may be implemented on two levels in the 800xA system.
The standard system offering supports storage of operator trend data via trend logs.
When the Information Management History Server function is installed, extended
and permanent offline storage (archive) is supported via history logs. This section
describes how to integrate and configure operator trend and history logs.
Begin with any one of the following topics depending on your current depth of
knowledge on property logs:
• To learn the concept of property logs, refer to Property Log Overview on page
176.
• If History Source aspects have not yet been configured in the system, refer to
Configuring Node Assignments for Property Logs on page 181. This must be
done before the logs are activated and historical data collection begins.
• To learn about important considerations and configuration tips before actually
beginning to configure property logs, refer to History Configuration Guidelines
on page 186.
• For a quick demonstration on how to configure a property log, refer to Building
a Simple Property Log on page 188.
• For details on specific property log applications, refer to:
– Lab Data Logs for Asynchronous User Input on page 203.
– Event-driven Data Collection on page 204.
– History Logs with Calculations on page 211.
1. One exception to this scenario is when a property log is created to collect (consolidate) historical data from
another Information Management server in another system.
For example, in Figure 106, the trend log stores high resolution data for a short time
span, while the history log stores the same high resolution data for a much longer
time span.
In Figure 106, the trend log resides on the Connectivity Server and the hierarchical primary history log resides on the History Server.
Dual Logs
As an option, configure a dual log where the same trend log feeds two history logs
on two different history server nodes, Figure 107. This may be required when the
application cannot wait for history data to be back-filled in the event that a history
server goes off line. For example, shift reports may be required to execute at the end
of each 8-hour shift and cannot wait days or weeks for the data to be back-filled.
Configuration guidelines are provided in Dual Logs on page 203.
In Figure 107, the same source feeds primary history logs on two History Server nodes (History Server 1 and History Server 2).
The following sections provide a brief overview of the historical collection and
storage functionality supported by history logs:
• Blocking and Alignment on page 179.
• Calculations on page 179.
• Data Compaction on page 179.
• Event Driven Data Collection on page 179.
• Offline Storage on page 180.
• Considerations for Oracle or File-based Storage on page 180.
Calculations
Although both trend and history logs can have calculations performed on collected
data prior to storage, it is generally recommended that raw data be collected and
stored, and then the required calculations can be performed using the data retrieval
tool, for example, DataDirect. To do this for history logs, the STORE_AS_IS
calculation algorithm is generally used.
This calculation algorithm causes data to be stored only when it is received from the
data source. For OPC type logs, this calculation lets the OPC server’s exception-
based reporting be used as a deadband filter. This effectively increases the log
period so the same number of samples stored (as determined by the log capacity)
cover a longer period of time. Refer to STORE AS IS Calculation on page 230 for
further details.
Data Compaction
The recommended method for data compaction is to use the STORE_AS_IS
calculation algorithm. (STORE AS IS Calculation on page 230). The attributes on
the Deadband tab may be used in certain circumstances, for example to support
upgrades from earlier installations that used deadband, or when configuring
calculation algorithms such as minimum, maximum, and average for history logs.
Offline Storage
All process data stored in a property log can be copied to an offline storage media.
This can be done on a cyclic or manual basis. Refer to Section 11, Configuring the
Archive Function.
Seamless Retrieval
Seamless retrieval makes it easier to access historical data. For trends, when
scrolling back in time beyond the capacity of the specified log, the seamless
retrieval function will go to the next (secondary) log in the log hierarchy.
Applications do not need to know the name of the log in order to retrieve data.
Attributes such as data source, log period, calculation algorithm, and retrieval type
can be specified and History will search for the log which most closely fits the
profile.
Consolidation
Data from property logs on history server nodes in different systems (as configured
via the 800xA system configuration wizard) may be consolidated onto one central
consolidation node as illustrated in Figure 109.
All Information Management consolidation functionality is limited to 800xA 5.1
to 800xA 5.1 Systems.
In Figure 109, each log on the History Servers in System A and System B (for example, $HSTC100,MEASURE-1-o) is the data source for a log with the same name on the Consolidation Node in System C.
the object hierarchy to find and associate itself with the first History Source aspect it
encounters. The History Source aspect points to a History Service group on a
specified Connectivity Server. To ensure that logs are assigned to the correct nodes,
add one or more History Source aspects in the applicable structure where the log
configuration aspects will be instantiated. Care must be taken to situate the History
Source aspects such that each log will find the History Source that points to its
respective Connectivity Server. Two examples are illustrated in Figure 110 and
Figure 111.
• Property logs will not collect data until they are associated with a History
Source aspect.
• History Source aspects must be placed in the same structure where log
configuration aspects are to be instantiated.
• Add History Source aspects as early as possible in the engineering process.
The change to update the History Server for a Service Group can take a
long time for large configurations. All logged data will be lost during change
of Service Group for affected log configurations.
In Figure 110, a single History Source aspect is placed at the top of the Control Structure, above Control Application 1 and Control Application 2 and their property logs. In Figure 111, a separate History Source aspect is placed under each control application.
3. Click on the History Source aspect and use the Service Group pull-down list to
select the Service Group for a specific node, Figure 114 (use the same node
specified in the OPC Data Source aspect, Figure 112).
4. Click Apply. This completes the History set-up requirements.
Post-configuration Requirements
• Make sure History Source objects have been added as required. Refer to
Configuring Node Assignments for Property Logs on page 181.
• History logs will not collect data until they are activated. Refer to Starting
and Stopping Data Collection on page 426.
• Use the stagger function as described in Stagger Collection and Storage on
page 459 to stagger collection and storage times to avoid spikes in CPU
usage which may occur when sample and storage rates cause a large
number of messages to be written to disk at one time. Do this BEFORE
activating logs. After stagger is complete, restart the History software and
then activate the logs.
• Test the operation of one or more property logs. Use the Status tab on the
Log Configuration aspect (Status on page 254), or use one of the installed
desktop tools such as DataDirect or Desktop Trends.
• Sometimes changes to Log Configuration aspects may not take effect until
PPA History Services is restarted on the Connectivity Server where the Trend
Log resides. To prevent this problem from occurring, restart the History
Services after any History Log configuration activity. When redundant
Connectivity servers are used, the restart should be staggered.
Building a Simple Property Log
There are two basic parts to this tutorial: creating a log template, and configuring the
log configuration aspect. Start with Creating a Log Template below.
The log template name is included in the Data Source name when a log
configuration aspect is attached to an object (object name:property name,log
template name). While a long name is supported, keep it reasonable.
c. Click Create when finished. This adds the History Log Template object to
the History Log Template Library.
1. The one exception to this rule is when a property log is created to collect historical data from an earlier
platform such as an Enterprise Historian with MOD 300 or master software.
Figure 117. Basic History Trend Log Configuration View - Log Definition Tab
1. Use the Log Definition tab to specify a log name, Figure 118. Again, use a
name that identifies the function this log will perform. For example, the name
in Figure 118 identifies the log as a trend log with a 1-week storage size.
The Log Type, Source, and Service Group are fixed as specified in the New
Log Template dialog and cannot be modified.
2. Click the Data Collection tab and configure how this log will collect from the
OPC data source. For this tutorial, set the Storage Interval Max Time to
1 Minutes, and the Storage Size Time to 1 Weeks, Figure 119. For further
information regarding trend log data collection attributes, refer to Collection
Attributes for Trend Logs on page 219.
following steps. To configure any other attributes, refer to Property Log Attributes
on page 212.
1. Use the Log Definition tab to specify a Log Name and Service Group.
The Service Group pull-down list contains all History servers defined in the
system. Use this list to specify the server where this log will reside. The list
defaults to the local node.
a. Configure the Sample Interval to establish the base sample rate, for
example: 1 Minutes. The Storage Interval automatically defaults to the
same value. Also, the Sample Blocking Rate defaults to a multiple of the
Sample Interval, in this case, one hour.
b. Then enter the Log Period, for example: 52 Weeks. Together with the storage
interval, this determines the Log Capacity (log capacity = log period / storage
interval). For example, a 52-week log period with a 1-minute storage interval
gives a capacity of 52 × 7 × 24 × 60 = 524,160 entries.
It is strongly recommended that the defaults be used for calculation algorithm
(Store As Is) and storage type (type 5). Typically, calculations are not used for
data collection. Desktop tools can perform calculations during data retrieval.
Store As Is causes data to be stored only when it is received from the data
source. In this case the sample and storage intervals are essentially ignored for
data collection, and are used only to properly size the log.
4. Click Apply when finished with both the trend and history log configurations.
This completes the template configuration. Deadband is not configured here.
This adds the Log Configuration aspect to the object’s aspect list.
Figure 124. Log Configuration Aspect Added to the Object’s Aspect List
The Logged pane in the Log Configuration view shows the name of the object with
one or more property logs being added. A property log must be added for each
property whose value will be logged.
This displays the New Property Log dialog, Figure 126. This dialog is used to
select one of the object’s properties for which to collect data, and apply the Log
Template which meets the property’s data collection requirements. If necessary,
change the default data type specification for the selected property.
2. From the Property list, select the property for which data will be collected.
Then select a template from the Template list.
The supported data types for property logs are listed in Table 22. When a property is
selected, the default data type is automatically applied. This is the native data type
for the property as determined by the OPC server. The OPC/DA data type is mapped
to the corresponding 800xA data type. DO NOT change the data type. When the
storage type for the History log is configured as TYPE5 (recommended), the log is
sized to accommodate the data type.
This aspect does not require any further configuration; however, some data
collection parameters may be adjusted via the tabs provided for each component
log. These tabs are basically the same as the tabs provided for log template
configuration. Some configuration parameters are read-only and cannot be modified
via the log configuration aspect. Also some parameters that were not accessible via
the log configuration template are accessible via the log configuration aspect.
Two additional tabs are provided for the log configuration aspect. The Presentation
tab is used to configure presentation parameters for viewing log data on a trend
display. The Status tab is used to view log data directly via the log configuration
aspect.
Post-configuration Requirements
History logs will not collect data until they are activated. Refer to Starting and
Stopping Data Collection on page 426.
Test the operation of one or more property logs. Use the Status tab on the Log
Configuration aspect (Status on page 254), or use one of the installed desktop tools
such as DataDirect or Desktop Trends.
What to Do Next
This concludes the tutorial for configuring a basic property log. To create a large
quantity of similar logs, use the Bulk Configuration utility as described in Bulk
Configuration of Property Logs on page 262. To learn about common property log
applications, refer to Property Log Applications on page 203.
Dual Logs
As an option, configure a dual log where the same trend log feeds two history logs
on two different history server nodes. This may be required when the application
cannot wait for history data to be back-filled in the event that a history server goes
off line. For example, shift reports may be required to execute at the end of each 8-
hour shift, and cannot wait for the hardware to be repaired and for the data to be
back-filled.
To implement this functionality, create a log template with two identical primary
history logs, with the only difference being the Service Group definition.
disabled since this is not applicable for lab data logs. Add a Lab Data log alone, or
as part of a log hierarchy with a direct log, Figure 128.
When added with synchronous logs, the asynchronous lab data log does not
receive data from the data source. The connection is only graphical in this case.
Figure 128. Lab Data Log Added With a Direct Log, and Lab Data Log Added Alone
When configuring a history-type lab data log, most of the tabs specific to history
logs are not applicable. The only attribute that must be configured for a Lab Data
log is Log Capacity as described in Data Collection Attributes on page 219.
A lab data log CANNOT be the source for another log. Secondary hierarchical
logs cannot collect from lab data logs.
Event-driven Data Collection
The event-driven property log is configured much like any other property log as
described in Building a Simple Property Log on page 188. The only special
requirements when configuring the log for event driven data collection are:
• The direct trend log must always be active in order to collect data before the
event actually triggers data collection.
• The event-driven log must be a history log, and it must be the (first) primary
log connected to the direct trend log, Figure 129. Also this log must be
configured to start deactivated. The log will go active when the event trigger
occurs.
Create a Job
To schedule event-driven data collection, create a job with a Data Collection action
in Application Scheduler. The Scheduling Definition aspect will be set up to have
specified logs collect for a one-hour period, every day between 11:30 AM and 12:30
PM. Jobs and scheduling options are also described in System 800xA Information
Management Data Access and Reports (3BUF001094*).
To create a job in the Scheduling structure:
1. In the Plant Explorer, select the Scheduling Structure.
2. Select Job Descriptions and choose New Object from the context menu.
3. In the New Object dialog, Figure 130, select Job Description and assign the
Job object a logical name (for example StartLogging1).
4. Click Create. This creates the new job under the Job Descriptions group, and
adds the Schedule Definition aspect to the object’s aspect list.
5. Click on the Scheduling Definition aspect to display the configuration view,
Figure 131. This figure shows the scheduling definition aspect configured as a
periodic schedule. The trigger event will occur once every day, starting July 3rd
at 12:00 PM, and continuing until August 3rd at 12:00 PM.
a. Click the Add Logs button. This displays a browser dialog similar to the
Plant Explorer. Use this browser to find and select the logs in the Plant
Explorer structures, Figure 133.
b. Select the object whose log will be added to the list, then select the log in
the log hierarchy, and click Add Log To List. This adds the selected log to
the log list, Figure 134.
An example property log hierarchy: a source feeds a trend log with a 10-second storage rate and a 2-week time period, which in turn feeds history logs, for example one with a 1-minute storage rate, a 13-week time period, and a Minimum calculation.
• Source.
• Log Name.
This is the name by which each component log is identified within the property
log hierarchy. Use a meaningful name. One way is to describe the data
collection scheme in the name, for example: Log_1m_13wAvg (log for data
collected @ 1-minute sample rate and storing the 13-week calculated average).
• Service Group.
The Service Group pull-down list contains all History servers defined in the
system. Use this to specify the server where this log will reside. The list
defaults to the local node.
The Default (History Source Aspect) is only applicable for trend logs.
IM Definition Attributes
This tab is only available for linked history logs using the IM History Log server
type. It is used to configure Information Management history functionality,
Figure 138. This includes: Log Set, Archive Group, Start State, Collection Type, IM
Historian Log State, and Dual Log.
Collection Mode is a read-only attribute.
The IM Data Source attribute is not accessible for log template configuration. This
attribute is defined via the IM Definition tab on the Log Configuration aspect.
IM Data Source
The IM data source field is activated on the Log Configuration aspect view. This
field is dimmed on the log template view. The IM data source defaults to a unique
ID for the object name:property name, log template name and its path in the
structure. For hierarchical logs connected to a direct log, the IM data source is fixed
and cannot be changed.
For applications where History collects data from a remote history log configured in
an existing history database, the Collection Type is set to API. In this case enter the
name using the following format: $HSobject,attribute-n-oIPaddress
The IP address is only required when collecting from multiple sites and identical
log names are used on some sites. Refer to Consolidating Historical Process Data
on page 356.
Log Set
This is the primary Log Set to which this log belongs. The Log Set pull-down list
includes all log sets configured on the local history server. Logs can be assigned to a
second log set via the Second Log Set field. To delete a log from a log set, clear the
Log Set field. For details on configuring log sets, refer to Section 6, Configuring
Log Sets.
Archive Group
This is the archive group to which this log belongs. The Archive Group pull-down
list includes all archive groups configured on the local history server. To delete a log
from an archive group, clear the Archive Group field. Logs may also be assigned to
an archive group via the Archive Group aspect. This is described in Configuring
Archive Groups on page 331.
If available archive groups are not listed in the Archive Group combo box, click
Apply changes to the Log Template aspect. This makes the archive groups
available for selection via the combo box.
Start State
This is the initial log state when History Services is restarted via PAS. Choices are:
START_INACTIVE
START_ACTIVE
START_LAST_STATE (default) - restart in state log had when node shut
down
Hierarchical logs do not start collecting and storing data until the direct log becomes
active. The entire property log with all of its component logs can be activated and
de-activated as a unit via the Enable check box in the Log Configuration aspect.
Collection Mode
The collection mode determines how data is processed when it is received by
History. Collection Mode also determines the valid choices for Collection Type and
Storage Type. Collection mode is a read-only attribute whose value is fixed based on the selected Log Type. The mode can be either synchronous or asynchronous:
SYNCHRONOUS - periodic Time Sequence data. This is the collection mode for
logs which collect from a specified object property. The time interval between
samples received by History must always be the same. Calculations and deadband
can be applied to synchronously collected data.
If Collection Mode is Synchronous, all Storage Types are allowed. However, the
data must be entered in time forward order. This means that once an entry is stored
at a particular time, the log will not allow entries with a date before that time. This is
true for all Collection Types: API, OPC HDA, and USER_SUPPLIED.
Also, entries are received and stored at the configured storage rate. For example, for a user-supplied synchronous log with a 1-minute storage interval, attempting to add an entry every second results in only one entry being stored per minute. Conversely, if only one entry is received each day for a one-minute log, History will insert at least one "NO DATA" entry between each value received.
ASYNCHRONOUS - data is entered by an application such as DataDirect, or
Display Services. This mode is for Lab Data logs. The Storage Type must be
Oracle and the Collection Type is undefined. The time between entries sent to
History does not have to be the same every time. When the log reaches its capacity,
the oldest values are deleted. Retrieval is time based.
Collection Type
This determines the application for which History collects data. Collection type is
only configurable for primary history logs. The options are:
• OPC HDA- The History log collects OPC data from a trend log.
• API - History periodically collects data from a remote history log.
• USER_SUPPLIED: Samples are sent to History from a user application such
as DataDirect. The time between samples sent is expected to be equal to the
sample rate of the log.
When Collection Mode is Asynchronous, the Collection Type is undefined;
however, the log operates as if the collection type was defined as User Supplied.
Dual Log
This configuration parameter is not used.
Aggregate
For hierarchical trend logs, apply any one of the aggregates described in Table 23. Aggregates are not applicable for Direct-type logs.
Table 23. Aggregates
• Interpolated - Interpolated values.
• Total - Time integral of the data over the re-sample interval.
• Average - Average data over the re-sample interval.
• Timeaverage - Time weighted average data over the re-sample interval.
• Count - Number of raw values over the re-sample interval.
• Standard Deviation - Standard deviation over the re-sample interval.
• Minimum Actual Time - Minimum value in the re-sample interval and the time stamp of the minimum value (in seconds).
• Minimum - Minimum value in the re-sample interval.
• Maximum Actual Time - Maximum value in the re-sample interval and the time stamp of the maximum value (in seconds).
• Maximum - Maximum value in the re-sample interval.
• Start - Value at the beginning of the re-sample interval. The time stamp is the time stamp of the beginning of the interval.
• End - Value at the end of the re-sample interval. The time stamp is the time stamp of the end of the interval.
• Delta - Difference between the first and last value in the re-sample interval.
• Regression Slope - Slope (per cycle time) of the regression line over the re-sample interval.
• Regression Const - Intercept of the regression line over the re-sample interval. This is the value of the regression line at the start of the interval.
• Regression Deviation - Standard deviation of the regression line over the re-sample interval.
• Variance - Variance over the sample interval.
• Range - Difference between minimum and maximum value over the sample interval.
• Duration Good - Duration (in seconds) of time in the interval during which the data is good.
• Duration Bad - Duration (in seconds) of time in the interval during which the data is bad.
• Percentage Good - Percentage of data in the interval which has good quality. One (1) = 100%.
• Percentage Bad - Percentage of data in the interval which has bad quality. One (1) = 100%.
• Worst Quality - Worst quality of data in the interval.
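The following Python sketch is illustrative only; the function and variable names, and the stepped time-weighting, are assumptions rather than the exact History implementation. It computes a few of the aggregates above over one re-sample interval from (timestamp, value, good-quality) samples:

# Illustrative only: approximate a few Table 23 aggregates over one
# re-sample interval, given (timestamp_seconds, value, is_good) samples.
def aggregates(samples, interval_start, interval_end):
    inside = [s for s in samples if interval_start <= s[0] < interval_end]
    values = [v for _, v, good in inside if good]
    count = len(inside)                      # Count: raw values in interval
    average = sum(values) / len(values) if values else None

    # Timeaverage / Duration Good: weight each sample by the time until the
    # next sample (stepped approximation, not History's exact method).
    time_weighted_sum = 0.0
    duration_good = 0.0
    for i, (t, v, good) in enumerate(inside):
        t_next = inside[i + 1][0] if i + 1 < len(inside) else interval_end
        dt = t_next - t
        if good:
            time_weighted_sum += v * dt
            duration_good += dt
    time_average = time_weighted_sum / duration_good if duration_good else None

    return {"Count": count, "Average": average,
            "Timeaverage": time_average, "Duration Good": duration_good}

# Example: three 10 s samples (the last with bad quality) in a 30 s interval.
print(aggregates([(0, 10.0, True), (10, 20.0, True), (20, 30.0, False)], 0, 30))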
Storage Interval
The storage interval is the rate at which samples are collected and stored. This is
configured using Min Time, Max Time and Exception Deviation (%) properties.
Set Min Time to the base rate at which samples are to be collected. Samples will be
stored at this rate unless data compaction is implemented using Exception Deviation
(%). The shortest possible time between two samples is one (1) sec.
Exception Deviation (%) establishes a deadband range by which to compare the
deviation of a sample value. If the current sample value deviates from the previous
sample value by an amount equal to or greater than the specified Exception
Deviation (%), then the sample is stored; otherwise, the sample is not stored.
Max Time establishes the maximum time that may elapse before storing a sample.
This way, if a process value is very stable and not changing by the specified
Exception Deviation (%), samples are stored at the rate specified by the Max Time.
To store all samples (no data compaction), set the Exception Deviation (%) to 0, and
set the Max Time equal to the Min Time.
It is always recommended to have Max Time greater than Min Time. When Max
Time equals Min Time, extra values can be inserted into the log when the next
sample point is delayed from the source.
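As a rough illustration of how Min Time, Max Time, and Exception Deviation (%) interact, the following Python sketch decides whether a new sample is stored. It is a simplified model with illustrative names; the deviation threshold is passed in as an absolute value, since the reference quantity for the percentage is not spelled out here.

# Simplified sketch of the Min Time / Max Time / Exception Deviation logic.
# 'deviation_threshold' stands in for the configured Exception Deviation (%)
# converted to an absolute value.
def should_store(now, value, last_stored_time, last_stored_value,
                 min_time, max_time, deviation_threshold):
    elapsed = now - last_stored_time
    if elapsed < min_time:
        return False                       # never store faster than Min Time
    if elapsed >= max_time:
        return True                        # force a sample at Max Time
    if deviation_threshold == 0:
        return True                        # 0 => no compaction, store all
    return abs(value - last_stored_value) >= deviation_threshold

# No compaction: Exception Deviation = 0 and Max Time = Min Time.
print(should_store(60, 101.0, 0, 100.0, 60, 60, 0))      # True
# With compaction: a small change within the deadband is not stored.
print(should_store(60, 100.4, 0, 100.0, 60, 600, 1.0))   # False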
Storage Size
Storage size can either be specified as number of points stored in the log or as the
time range covered in the log. If the number of points is specified, the time covered
in the log will be calculated and displayed in the time edit box.
Figure 140. Data flow from the data source buffer through the calculation to storage, with results also sent to the secondary data log(s).
Figure 140 shows that data sent to a secondary log does not undergo data
compaction, even if the primary log has deadband configured. The secondary log
gets all values that result from the calculation. Thus, the calculation performed by
the secondary log is more accurate than if done using data stored after deadband
compaction.
History can be configured to perform calculations on collected data before the data
is actually stored in the log. When a calculation is active, the specified algorithm,
such as summing or averaging, is applied to a configured number of data samples
from the data source. The result of the calculation is stored by History.
For example, configure History to collect five values from the data source, calculate
the average of the five values, and then store the average. This calculation is actually
configured by time, not the number of values. The calculation is applied to the
values collected over a configured time period.
Data compaction can also be configured. This uses a mathematical compaction
algorithm to reduce the amount of data stored for properties. Both calculations and
deadband can be used concurrently for a log. The calculation is done first. Then the
deadband algorithm is applied to see if the result should be stored.
For examples of data collection applications, refer to Property Log Configuration
Reference on page 258.
The Data Collection tab for history logs is shown in Figure 141. Use this tab to
configure: Log Period, Sample Interval, Storage Interval, Sample Blocking Rate,
Calculation Algorithm, Storage Type, Entry Type (when storage type = 5), Start
Time, Log Capacity (for lab data log only), Bad Data Quality Limit.
Log Period
Disk space is allocated to logs according to their log capacity, which is a function of
the log time period and storage interval. The log period can be adjusted on an
individual log basis to keep log capacity at a reasonable size for each log. For all
synchronous logs with or without data compaction, log capacity is calculated as: log
capacity = (log period/storage interval).
When using compaction (or Store As Is), the log period is effectively increased so
the same number of samples (as determined by the log capacity) are stored over a
longer period of time.
Enter the log period as an integer value ≤4095 with time unit indicator for Weeks,
Days, Hours, Minutes, Seconds. The maximum is 4095 Weeks.
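A minimal Python sketch of the capacity calculation above; the maximum capacity constant is a placeholder, since the real limit depends on the Storage Type and is given in Table 24 of the manual.

# Log capacity = log period / storage interval (synchronous logs).
MAX_CAPACITY = 1_000_000   # placeholder; the actual limit comes from Table 24

def log_capacity(log_period_seconds, storage_interval_seconds):
    capacity = log_period_seconds // storage_interval_seconds
    if capacity > MAX_CAPACITY:
        raise ValueError("Adjust log period or storage interval: "
                         f"capacity {capacity} exceeds the Table 24 limit")
    return capacity

# Example: a 13-week log period with a 1-minute storage interval.
print(log_capacity(13 * 7 * 24 * 3600, 60))   # 131040 entries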
Sample Interval
This is the rate at which data is sampled from a data source. For USER_SUPPLIED,
this is the expected time interval between samples sent to History. Whether or not
the sample is stored depends on whether or not a calculation algorithm is specified,
and whether or not a compaction deadband (or Store As Is) is specified.
Enter the sample interval as an Integer value with a time unit indicator: Weeks,
Days, Hours, Minutes, Seconds. Monthly is NOT a legal sample interval.
If data is not received within 50% of the interval prior to the expected time, or 100%
of the interval after the expected time, the entry value is marked with status
NO_DATA. Time stamps may drift due to load in the system. An example of a case
where time drift can cause NO_DATA entries to be stored is illustrated in
Figure 142.
Figure 142. Example of time drift causing a NO_DATA entry: for the 1:40 time slot, NO_DATA is stored because the actual time (1:50) is offset from the expected time (1:40) by 100% of the sample interval (10 s).
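A small Python sketch of the timeliness rule above (illustrative only): a sample is accepted for an expected time slot only if it arrives no earlier than 50% of the sample interval before, and less than 100% of the interval after, the expected time.

# NO_DATA rule: accept a sample for an expected slot only within the window
# [expected - 0.5 * interval, expected + 1.0 * interval).
def slot_status(expected_time, actual_time, sample_interval):
    early_limit = expected_time - 0.5 * sample_interval
    late_limit = expected_time + 1.0 * sample_interval
    return "OK" if early_limit <= actual_time < late_limit else "NO_DATA"

# With a 10 s interval, a sample drifting a full interval late (as in the
# Figure 142 example) is marked NO_DATA.
print(slot_status(expected_time=100, actual_time=110, sample_interval=10))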
Storage Interval
This is the time interval at which data is stored in the log, except when deadband is
active (or Store As Is is used). Enter the storage interval as an Integer value with a
time unit indicator: Weeks, Days, Hours, Minutes, Seconds. The maximum is
4095 Weeks. The storage interval must be a multiple of the sample interval. For
example, if the sample interval is 5, the storage interval can be 10, 15, 20, 25, and so
on. When using Store As Is or Instantaneous as the calculation algorithm, the
storage interval must be set equal to the sample interval.
A special storage interval is monthly. This causes a value to be stored at the end of
the month. The timestamp of a monthly log depends upon the storage interval and
alignment of its data source:
• If the data source is a daily log which is aligned to 06:00, then the monthly log
values will have a time of 06:00.
• If the data source is hourly or less, then the monthly log will be around
midnight (depending on the alignment of the data source). For example, an
hourly log at 30 minutes after the hour will produce a monthly value at 00:30.
• If the data source is an ‘x’ hours log aligned to 06:00 (for example, 8 Hours log
with values stored at 06:00, 14:00, 22:00, and so on), then the monthly log will
have a time of 06:00.
Sample blocking rate is based on the storage interval of the log and is set to an appropriate value for most configurations. After new logs are created, use the stagger functionality to adjust stagger and phasing for the entire IM server; do not attempt to adjust stagger and phasing on a per-template or per-log basis. It is recommended that the hsDBMaint stagger function be run on all IM servers after any configuration changes. Refer to Stagger Collection and Storage on page 459.
Sample blocking rate is NOT used under the following conditions:
• if the storage interval is Monthly.
• if the Collection Type is USER_SUPPLIED. In this case the values are stored
as they are sent to History.
Storage Type
Log entries can be stored in Oracle tables or in files maintained by History. File
storage is faster and uses less disk space than Oracle storage. File-based storage is
only applicable for synchronous property logs. The default storage type is file-based
TYPE5 which supports variable size based on the type of data being collected.
ORACLE is mandatory for Asynchronous logs. This type supports collection for floating point data only. Other storage types are available. For further information refer to File Storage vs. Oracle Tables on page 247.
Entry Type
For TYPE5 Storage Type the Entry Type can be specified to conserve the disk space
required per entry. The Entry Type attribute specifies whether or not to save the
original value with the modified value when a client application is used to modify a
stored history value.
Disk space is allocated for both the original and modified value whether or not the
value is actually modified. Therefore, only choose to save the original value when
the modify history value function is being used and original values need to be saved.
The choices are:
• Do Not Save Original Value - If a log entry is modified, the original value is
not saved.
• Save Original Value - If a log entry is modified, the original value is saved
along with the modified value. This allocates disk space for both values,
whether or not the value is actually modified.
• Not Applicable - For storage types other than TYPE5.
Log Capacity
This is the maximum number of entries for a log. If a log is full, new entries replace
the oldest entries.
For asynchronous collection, the log capacity is configurable. Enter an integer
value. The maximum log capacity is different for each Storage Type as indicated in
Table 24.
For synchronous collection, this field is not configurable. Log capacity is calculated
as follows:
Log Capacity = (Log Period / Storage Interval).
If the specified log period or storage interval results in a calculated log capacity that exceeds the maximum log capacity (Table 24), an error message will be generated and either the log period or storage interval must be adjusted. For further information regarding these log attributes, refer to:
• Log Period on page 225.
• Storage Interval on page 226.
Calculation Algorithm
Although both trend and history logs can perform calculations on collected data
prior to storage, it is a good practice to collect and store raw data, and then perform
the required calculations using the data retrieval tool, for example, DataDirect. To
do this, use the STORE_AS_IS calculation algorithm, Figure 143.
This calculation causes data to be stored only when it is received from the data
source. For OPC type logs, this calculation can use the OPC server’s exception-
based reporting as a deadband filter. This effectively increases the log period so the
same number of samples stored (as determined by the log capacity) cover a longer
period of time. Refer to STORE AS IS Calculation on page 230 for further details.
Figure 143. History Server with a hierarchical primary history log collecting from the Connectivity Server.
Other calculations may be implemented for certain property log applications. These
are described in Other Calculation Algorithms on page 232.
STORE AS IS Calculation
This calculation causes data to be stored only when it is received from the data source. This functionality has three different applications, depending on whether it is applied to an OPC type log or an API (OMF) type log.
Collecting from Operator Trend Logs (OPC Data)
STORE_AS_IS is an option in the Calculation Algorithm section on the Data
Collection tab of the Log Configuration aspect. For OPC type logs, this calculation
uses the OPC server’s exception-based reporting as a deadband filter. When
collecting from an OPC/DA source, OPC returns data at the subscription rate, only
if the data is changing. This effectively increases the log period so the same number
of samples stored (as determined by the log capacity) cover a longer period of time.
For all calculations other than STORE_AS_IS, if data is not changing and samples
are not being received, History will insert previous values with the appropriate time
stamps. This is required for calculations to occur normally.
With STORE_AS_IS, no other calculations are performed (including
INSTANTANEOUS); therefore, data does not need to be stored at the specified
sample rate. Samples will only be recorded when a tag’s value changes, or at some
other minimum interval based on the log configuration.
Data Forwarding of Deadbanded Data to a Consolidation Node
STORE_AS_IS can implement data compaction on the collection node, before data
forwarding to the consolidation node. This saves disk space on the collection node.
When data forwarding deadbanded data to a consolidation node, be sure to use
STORE_AS_IS rather than INSTANTANEOUS as the calculation algorithm for the
hierarchical log on a consolidation node. This eliminates insertion of NO_DATA
entries for missing samples.
Time Drift in Subscription Data
When running synchronous logs (API or USER_SUPPLIED) at a fast rate, it is
possible for time stamps to drift over time, eventually forcing a sample interval to be
missed. For all calculations other than STORE_AS_IS, History adds an entry with a
status of NO_DATA for the missing sample interval. This is a false indication of
missing data, and can cause a spike or drop in a trend display (refer to Figure 142).
By using STORE_AS_IS, rather than INSTANTANEOUS, these spikes will not
occur.
The STORE AS IS calculation is generally applied to the primary (first) hierarchical
log. The log where the STORE AS IS calculation is applied can be the data source
for a hierarchical log, if the hierarchical log is also STORE AS IS (for example data
forwarding to a consolidation node). Do not use the STORE AS IS calculation when
further calculations are required.
Deadband Storage Interval
In the case where a process is very stable and values are not changing, there may be
very long periods where no samples are stored. This may result in holes in the trend
display where no data is available to build the trends. Use the deadband storage
interval on the Deadband Attributes tab to force a value to be stored periodically,
even if the value does not change. This ensures that samples are stored on a regular
basis. Refer to Deadband Compaction % on page 237.
Shutdown and Startup
Since data is not being stored if it is not changing, History will insert the current value and current timestamp when it is shut down. This guarantees that the data can be interpolated correctly up until the time History was shut down.
The following is applicable for API (OMF-based) logs, but NOT for operator trend logs
or user-supplied logs.
At startup, NO_DATA entries will be inserted for STORE_AS_IS type logs to show
that the node was down, and that data was not collected during that time. This will
allow interpolation algorithms to work correctly, and not presume that the data was
OK during the time the node was down.
If a calculation algorithm other than INSTANTANEOUS is specified for the log, the
algorithm is performed at each storage interval. The storage interval must be a
multiple of the sample interval. For example, if the sample interval is 5 Seconds, the
storage interval can be 10, 15, 20, or 25 Seconds, and so on.
When the calculation is performed, the specified number of data samples are entered
in the algorithm, and the result is passed to the deadband function. If the calculation
algorithm is INSTANTANEOUS (no calculation), the sample and storage intervals
must be equal, and each data sample is treated as a result.
Start Time
This is the earliest allowable system time that the log can become active. If the specified start time has already passed, the log will begin to store data at the next storage interval. If the start time is in the future, the log will go to PENDING state
until the start time arrives. Enter the start time in the following format:
1/1/1990 12:00:00 AM(default)
The order of the month, day, and year, and the abbreviation of the month, are standardized based upon the language being used.
Use the start time to distribute the load. Both the blocking rate and start time may be
defined initially when configuring the History database. If the attributes are not
configured initially, or if adjustments are required, use the Stagger Collection
function in the hsDBMaint utility as a convenient means for adjusting these
parameters. Refer to Stagger Collection and Storage on page 459.
It is recommended NOT to use small differences (less than one minute) to distribute
the load. Managing log collection at slightly different times will result in significant
overhead.
The time when a log is scheduled to start collecting data is referred to as the log’s
alignment time. Alignment time is the intersection of the log’s start time and its
storage interval. For instance if the start time is 7/3 2003 10:00:00 AM, and the
storage interval is 15 Seconds, the log will store data at 00, 15, 30, and 45 seconds
of each minute starting at 10:00:00 AM on July 3, 2003. In this case, this is the
alignment time.
If the start time for a log has already passed, it will be aligned based on the storage
interval. For example, a log with a storage interval of 1 Minute will store data at the
beginning of each minute. So its alignment time is the beginning of the next minute.
The log stays in PENDING state until its alignment time. For the log whose storage
interval is 1 minute, it will be PENDING until the start of the next minute.
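A simplified Python sketch of the alignment rule above (times expressed as seconds from an arbitrary epoch; blocking rate and stagger are ignored, and the names are illustrative):

import math

# Alignment time = intersection of the log's start time and its storage
# interval. If the start time has passed, the log stays PENDING until the
# next interval boundary.
def alignment_time(start_time, storage_interval, now):
    if now <= start_time:
        return start_time
    intervals_elapsed = math.ceil((now - start_time) / storage_interval)
    return start_time + intervals_elapsed * storage_interval

# Start 10:00:00, 15 s interval, current time 10:00:07 -> aligns at 10:00:15.
print(alignment_time(start_time=36000, storage_interval=15, now=36007))  # 36015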
Deadband Attributes
The Deadband attributes support data compaction. The recommended method for
achieving data compaction is to use the STORE_AS_IS calculation algorithm
(STORE AS IS Calculation on page 230). The attributes on the Deadband tab may
be used in certain circumstances, for example to support upgrades from earlier
installations that used deadband, or when configuring calculation algorithms such as
minimum, maximum, and average for history logs.
Deadband compaction has two advantages:
• It reduces the load on the system by filtering out values that do not have to be
stored since there is not a significant change in the value.
• It allows data to be stored online for a longer period of time since the data is not
replaced (overwritten) as quickly.
Compaction is specified on an individual log basis. It may be used for some logs and
not for others. Compaction should not be used indiscriminately in all cases.
• Use compaction if an accurate representation of the process over time is
needed, and it is not required that every data sample be stored.
• Do not use compaction if the application requires that every data sample be
stored as an instance in the log.
Deadband can be used on digital data sources so that only state changes are saved.
The deadband filters out insignificant changes in the process variable. The deadband
defines the minimum change in value or in rate of change for the change to be
significant. The deadband is defined as a percent of the engineering units range.
For example, if a process variable is scaled from 200 to 500, the range is 300. If the
deadband is 2%, the minimum significant change is 6 (2% of 300). With these
values, if the last stored sample was 400, and the next sample is 397, then 397 is
within the deadband and will not be stored. If the next sample is 406.2, it will be
stored. A small deadband results in more stored samples than a large deadband.
Deadband is activated by configuring a compaction deadband greater than zero.
When deadband is active, the deadband algorithm determines whether the
calculation result, or raw value if calculation algorithm is instantaneous, is within or
outside the deadband. If the most recent result is outside the deadband, the previous
result is stored. If the most recent result is within the deadband, it is held until the
next result is calculated, and the previous result is discarded. If the compaction deadband is zero (compaction is not activated), all results are stored.
A deadband storage interval is defined on an individual basis for each log. The
deadband storage interval is a multiple of the log’s storage interval. It is the
maximum time that can elapse without storing a sample when compaction is
activated. Even if the process is so stable that there are no samples outside the
deadband, samples will still be stored at the deadband storage interval. The
deadband storage interval is always measured from the time the last sample was
stored. The deadband storage interval should be at least 10 times larger than the
storage interval to ensure a high compaction ratio.
When deadband is active, a sample may be stored for any one of the following
reasons:
• sample outside the deadband.
• value status changed (possible statuses are: no data, bad data, good data).
• deadband storage interval elapsed since last stored sample.
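The following Python sketch is a simplified model of these three storage reasons, assuming the deadband is a percent of the engineering-units range (as in the 200-500 example above) and that a sample is forced whenever the deadband storage interval elapses. All names are illustrative, and the sketch stores the current value directly rather than reproducing the hold-previous-result behavior described earlier.

# Simplified deadband compaction: store a result when it moves outside the
# deadband (percent of EU range), when its status changes, or when the
# deadband storage interval has elapsed since the last stored sample.
def make_deadband_filter(range_min, range_max, deadband_pct,
                         deadband_storage_interval):
    threshold = (range_max - range_min) * deadband_pct / 100.0
    state = {"last_value": None, "last_status": None, "last_time": None}

    def process(time, value, status="good"):
        store = False
        if state["last_time"] is None:
            store = True                                   # first sample
        elif status != state["last_status"]:
            store = True                                   # status change
        elif time - state["last_time"] >= deadband_storage_interval:
            store = True                                   # forced storage
        elif abs(value - state["last_value"]) >= threshold:
            store = True                                   # outside deadband
        if store:
            state.update(last_value=value, last_status=status, last_time=time)
        return store

    return process

# Range 200-500, 2% deadband (threshold 6), forced storage every 40 s.
f = make_deadband_filter(200, 500, 2.0, 40)
print([f(t, v) for t, v in [(0, 400), (10, 397), (20, 406.2), (60, 406.5)]])
# -> [True, False, True, True]; the last sample is stored because 40 s elapsed.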
Figure 145 shows how samples are stored when deadband is active. The deadband
function holds each sample until it determines whether or not the sample is required
to plot the trend. Samples that are not required are discarded.
A sample is stored when deadband is activated at 0s. The sample at 10s is held until
the next sample at 20s. Since there is no change in slope between 0s and 20s, the 10s
sample is discarded and the 20s sample is held. This pattern is repeated at each 10-
second interval until a change in slope is detected at 50s. When this occurs, the
samples at 40s and 50s are stored to plot the change. At 90s another change in slope
is detected, so the samples at 80s and 90s are stored.
Even though there are spikes between 80s and 90s, these values were not sampled so
the slope remains constant (as indicated by the dotted line). Another change in slope
is detected at 100s. The sample at 100s is stored to plot the slope. The sample at 90s,
also used to plot the slope, was already stored. The slope remains constant
thereafter. The next sample stored is 140s since the Deadband Storage Interval (40s)
elapses between 100s and 140s. The sample at 180s is also stored due to the
Deadband Storage Interval.
Figure 145. Deadband example with sample and storage interval = 10 s and deadband storage interval = 40 s. Compaction is activated and a sample stored at 0 s; samples are also stored whenever the deadband storage interval elapses since the last stored sample.
Compaction effectively increases the log period so the same number of samples
stored (as determined by the log capacity) cover a longer period of time. If the
process is very unstable such that every sample is stored and there is no data
compaction, the effective log period will be equal to the configured log period. If the
process is perfectly stable (maximum data compaction, samples stored only at
deadband storage interval), the effective log period is extended according to the
following formula:
effective log period = (deadband storage interval/storage interval) * configured
log period
In most cases, process variation is not at either extreme, so the effective log period
will fall somewhere in between the configured (minimum) and maximum value.
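A worked example of the formula above, assuming (purely for illustration) a 30-second storage interval and a 150-second deadband storage interval, i.e. a 5:1 ratio:

def effective_log_period(deadband_storage_interval, storage_interval,
                         configured_log_period):
    # Maximum-compaction case: samples are stored only at the deadband
    # storage interval.
    return (deadband_storage_interval / storage_interval) * configured_log_period

# 30 s storage interval, 150 s deadband storage interval (5:1 ratio),
# one-day configured period -> five-day effective period at full compaction.
print(effective_log_period(150, 30, 1.0))   # 5.0 (days)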
The calculated effective log period is stored in the EST_LOG_TIME_PER attribute.
This attribute is updated periodically according to a schedule which repeats as often
as every five days. If necessary, manually execute the update function via the hsDBMaint utility. This is described in Update Deadband Ratio Online on page 450.
When requesting by name, EST_LOG_TIME_PER determines which log to retrieve
data from (seamless retrieval). For instance, consider a case where the trend log
stores data at 30 second intervals over a one-day time period, and the history log
stores data at one-hour intervals over a one-week time period. The data compaction
ratio is 5:1 which extends the effective log period of the trend log to five days.
Making a request to retrieve three-day-old data causes the History retrieval function to get the data from the primary log rather than the hierarchical log, even though the configured log period of the primary log is only one day.
When EST_LOG_TIME_PER is updated, both internal History memory and the
history database are updated with the new values of EST_LOG_TIME_PER and
COMPACTION_RATIO. This will ensure the information will be available after a
re-start of the History node.
Before using compaction, refer to examples 2 & 5 in Property Log Configuration
Reference on page 258.
Deadband Compaction %
This is the minimum change between a sample and previously stored value that
causes the new sample to be stored. Enter this as a percent of engineering units
range (Range Max. - Range Min.). For example, 1 means 1% of range.
Estimated Period
This attribute indicates the time between the oldest and newest stored values. This is
filled in by running hsDBMaint. If necessary, use the History Utility to manually
update this attribute. Refer to Update Deadband Ratio Online on page 450.
Compaction Ratio
This is a read-only attribute that indicates the number of entries sampled for each
entry stored. It is filled in by running hsDBMaint. Refer to Update Deadband Ratio
Online on page 450.
Expected Ratio
This attribute indicates the expected ratio for number of entries sampled/each entry
stored.
IM Imported Tab
This tab, Figure 146, is provided for log configurations that have been imported
from other systems (Importing Remote Log Configurations on page 357).
storage interval whether or not a change in slope is detected. In Figure 148, the data
is sampled every five seconds. Of the 17 samples taken, only 7 are stored. A
summary is provided in Table 25 below.
Table 25. Storing Instantaneous Values w/Compaction
calculations are written to disk at 24 hours (blocking rate). The process is repeated at 48 hours, and averages are stored at 32, 40, and 48 hours.
(Figure: data samples arrive from the data source every hour. An average is calculated at each 8-hour storage interval; the results for 8h, 16h, and 24h are written to disk at 24 hours, the results for 32h, 40h, and 48h at 48 hours, and those for 56h, 64h, and 72h at 72 hours, and so on at each blocking interval.)
The sample blocking rate is used for primary history logs that are connected
directly to the direct trend log. Subsequent (secondary) history logs do not use the
sample blocking rate. The sample blocking rate from the primary log determines
when values are stored for all history logs in the property log.
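To make the preceding example concrete, the following Python sketch (illustrative only; names are assumptions) averages hourly samples at each 8-hour storage interval and flushes the calculated results to disk at the 24-hour blocking rate:

# Rough sketch: hourly samples, an average calculated every 8 hours, and the
# calculated results written to disk in blocks at the 24-hour blocking rate.
def block_averages(samples, storage_interval=8, blocking_rate=24):
    pending, written_blocks = [], []
    for hour in range(storage_interval, len(samples) + 1, storage_interval):
        window = samples[hour - storage_interval:hour]
        pending.append((hour, sum(window) / len(window)))
        if hour % blocking_rate == 0:          # blocking rate reached:
            written_blocks.append(pending)     # flush results to disk
            pending = []
    return written_blocks

# 48 hourly values 1..48 -> two disk writes, each holding three 8 h averages.
print(block_averages(list(range(1, 49))))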
File Storage vs. Oracle Tables
Even though TYPE2 storage provides a reduction in disk usage, it may not be
applicable in all cases. TYPE2 storage is recommended when:
• The storage interval is 1 hour or less and is divisible into 1 hour, and the data source is TTD. TTDs do not have object status, and times are always aligned.
• The storage interval is one minute or less when collecting from process objects.
A time stamp is recorded every 500th storage. During the interim, values are
assumed to be exactly one storage interval apart (as noted below). Of course
larger storage intervals can be configured, but they should be confirmed as
acceptable. Since TYPE2 logs only correct their alignment after the 500th
storage, they should only be used on fast logs. For example, a one-day storage
log, if not properly aligned, will take 500 days for the alignment to correct.
TYPE2 logs should not be used for logs with storage intervals greater than one hour.
Use TYPE3 logs instead.
TYPE3 (8-byte) is similar to TYPE2. TYPE3 provides full precision time stamp
(like TYPE1), and stores a status when a value is modified. TYPE3 does not store
the original value when the value is modified. TYPE3 logs grow to 16 bytes when the storage interval is greater than 30 minutes.
The complete information provided for TYPE1 entries is described below. The
differences between TYPE1 and other types are described where applicable.
Time Stamp
Each data entry in a log is time stamped with Coordinated Universal Time (UTC), i.e. Greenwich Mean Time. The time stamp is not affected by the time zone or by daylight saving time. This gives consistency throughout all external changes such as Daylight Saving Time and any system resets that affect the local time.
TYPE1, TYPE3, TYPE5, and Oracle-based logs support microsecond resolution for
timestamps. For TYPE2 logs (file-based 4-byte entries), the time is an estimate of
the actual time for the entry. The time for the entries are all defined to be a storage
interval apart. A reference time is stored for each block of 500 values. The time for
subsequent values after the reference time = Reference time + (n * storage interval)
where n is the sequential number of the value in the block (0 to 499). So if a log is
configured to store a value every 15 seconds, the entries will have timestamps at 0,
15, 30, 45s of each minute.
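The TYPE2 timestamp reconstruction described above can be written directly (illustrative Python, with assumed names):

# TYPE2 entries store a reference time only once per block of 500 values;
# the time of value n in the block is reference_time + n * storage_interval.
def type2_timestamp(reference_time, storage_interval, n):
    if not 0 <= n <= 499:
        raise ValueError("n must be within the 500-value block (0..499)")
    return reference_time + n * storage_interval

# 15 s storage interval: entries fall at 0, 15, 30, 45 s of each minute.
print([type2_timestamp(0, 15, n) for n in range(4)])   # [0, 15, 30, 45]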
Data Value
The data value format for all types is floating point. If the data being collected is not
floating point (for example, integer), the data will be converted to floating type and
then stored in that format. This may result in some decimal places being truncated
when the data is stored.
The only exception is when TYPE5 is used in combination with the STORE AS IS
Calculation. In this case the TYPE5 log maintains the data type of the collected
data.
Status
Status information is stored according to OPC DA specifications.
Object Status
Object status is the status of the data source when the value was sampled. This field
is not used when the data source is TTD. The format of the status (what is
represented and how) is dependent upon the data source object type. When a
hierarchical log is created, the object status is obtained from the last primary log
entry used in the calculation.
If a User Object that inherits a Basic Object is the source for a log, its status will
also be collected and stored.
Object status is not stored for TYPE2 or TYPE3 logs.
With a block size of 32736 bytes, the number of data blocks is calculated as:
number of data blocks = 1 + (capacity × ((adjustedArray + 2) × 4 + 32) + 32735) / 32736 (integer division)
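A direct Python transcription of that sizing formula (variable names follow the fragment above; integer division reproduces the ceiling effect of the +32735 term; the scalar-value example with adjusted_array = 0 is an assumption for illustration):

BLOCK_SIZE = 32736   # bytes per data block

def num_data_blocks(capacity, adjusted_array):
    # 1 + ceiling(capacity * entry_bytes / BLOCK_SIZE), using the
    # "+ (BLOCK_SIZE - 1)" integer-division idiom from the formula above.
    entry_bytes = (adjusted_array + 2) * 4 + 32
    return 1 + (capacity * entry_bytes + BLOCK_SIZE - 1) // BLOCK_SIZE

# Example: 131040 entries, adjusted_array = 0 (assumed scalar values).
print(num_data_blocks(131040, 0))   # 162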
Presentation
Configure presentation attributes for the Trend Display on this tab, Figure 152. The
default settings for these attributes are set in one of the following ways:
• Object Type (i.e., in the Property Aspect).
• Object Instance (i.e., in the Property Aspect).
• Log (in the Log Configuration Aspect).
• Trend Display.
These are listed in order from lowest to highest precedence.
For example, to override a value set in the Object Type write a value in the Object
Instance. A value set in the Object Type, Object Instance, or the Log can be
overridden in the Trend Display.
To override a presentation attribute the check box for the attribute must be marked.
If the check box is unmarked the default value is displayed. The default value can be
a property name or a value, depending on what is specified in the Control
Connection Aspect.
Engineering Units are inherited from the source, but can be overridden. Normal Maximum and Normal Minimum are scaling factors used for scaling the Y-axis in the Trend Display. These values are retrieved from the source but can be overridden. The Number of Decimals is retrieved from the source but can be overridden.
The Display Mode can be either Interpolated or Stepped. Interpolated is appropriate for real values; Stepped is appropriate for binary values.
Click Apply to put the changes into effect.
Status
This tab is used to retrieve and view data for a selected log, Figure 153. As an
option, use the Settings button to set viewing options. Refer to:
• Data Retrieval Settings on page 255.
• Display Options on page 256.
Click Read to retrieve the log data. Use the arrow buttons to go to the next or
previous page. Page size is configurable via the Data Retrieval Settings. The default
is 1000 points per page.
Click Action to:
• Import Data from File. Opens csv file to Insert, Replace, or Insert/Replace data.
• Export Data to File. Opens wizard to export csv file.
• Copy Data to Clipboard.
• Restore Log from Backup. Requires a backup object in the Maintenance structure and a backup file on the backup server.
Data Retrieval Settings
Set the number of points per page in the Page Size area. Default is 1000 and
maximum is 10,000.
Set the time range for retrieval of data in the Time Frame area.
• Latest retrieves one page worth of the most recent entries.
• Oldest retrieves one page worth of the oldest entries.
• At time is used to specify the time interval for which entries will be retrieved.
The data is retrieved from the time before the specified date and time.
Display Options
The Display button opens the Display Options dialog, which is used to change the presentation parameters for data and time stamps, Figure 155.
Values are displayed as text by default. Unchecking the check box in the Quality area displays the values as hexadecimals.
Timestamps are displayed in local time by default. Unchecking the first check box
in the Time & Date area, changes the timestamps to UTC time.
Timestamps are displayed in millisecond resolution by default. Unchecking the
second check box in the Time & Date area changes the timestamp resolution to one
second.
Status Light
The Service Status row shows status information for the configured Service Group.
The name of the configured group is presented in the row. If no history sources have
been defined (refer to Configuring Node Assignments for Property Logs on page
181), the text ‘Not Specified’ is displayed. Status for configured Service Group may
be:
• OK: All providers in the service group are up and running.
• OK/ERROR: At least one service provider is up and running.
• ERROR: All providers in the service group are down.
Activating/Deactivating a Property Log
The Enable check box is only for property logs (basic history).
To activate an individual log within a property log, select the log in the hierarchy,
click the IM Definition tab, and then click Activate under the IM Historian Log
State control, Figure 159.
The Bulk Configuration tool is used to generate a load report which calculates the
disk space occupied by existing file-based property logs, and additional disk space
which will be required as a result of instantiating new property logs via this tool.
Use this report to determine whether to adjust the disk space allocation for file-
based logs. This tool is also used to verify, when finished, whether or not the logs have been created.
Import filters in the Bulk Configuration tool can be used to shorten the time it takes
to read information from aspect objects into the Excel spreadsheet. The most
effective filter in this regard is an import filter for some combination of object types,
properties, and data type. Since some objects may have hundreds of properties, the
time savings can be significant.
A list can also be created of the properties for which logs have already been created, in the event that the log configurations for those properties need to be modified or deleted.
Extensive use of the Bulk Configuration tool may cause disks to become
fragmented. This, in turn, may impair the response time and general performance
of the system. Therefore, check the system for fragmented files after any such
procedure, and defragment disks as required.
History configuration impacts not only the History server disk(s), but also the
disk(s) on any Connectivity Servers where trend logs are configured. Therefore,
check the disks on any Connectivity Server where trend or history logs are
located.
Work Flow
Configure all other history objects, including log templates, before using this tool.
The recommended work flow is as follows (instructions for all steps with the
exception of 2, 3 & 12 are provided in this section):
1. To start with a fresh (empty) database, refer to the procedure for Dropping the
Existing Database on page 264 and recreate the History database before adding
any new logs. If this step is not performed, the new logs will be in addition to
any existing logs in the current History database.
2. Configure log sets, message logs, report logs as required by the application. If
archival is required, create an Archive Device and Archive Groups. Refer to:
Section 6, Configuring Log Sets
Section 7, Alarm/Event Message Logging
Section 8, Historizing Reports
Section 11, Configuring the Archive Function
3. Configure the Log Templates as described in Building a Simple Property Log
on page 188. DO NOT create Log Configuration aspects at this time.
4. Create a new workbook in Excel and initialize the workbook to add the Bulk
Configuration add-ins.
5. Create a list of object properties whose values are to be logged.
6. For each object property, specify the Log Template that meets its data
collection requirements, and specify the name of the Log Configuration aspect.
7. Configure presentation attributes if necessary.
8. Generate a load report to see whether or not the disk space allocation for file-
based logs needs to be adjusted. Make any adjustments as required.
9. Create new property logs based on the spreadsheet specifications.
3BUF001092-510 263
Dropping the Existing Database Section 9 Historical Process Data Collection
10. Verify that all property logs and their respective component logs have been
created.
If multiple property log configurations are created using the Bulk Import tool and the Industrial IT Basic History and Information Management logs stop collecting, follow these steps:
a. Locate the Basic History Service Providers for the log configurations just
created.
b. Restart the Basic History Service Providers located in step a.
11. Run the Load Report function again. This time, the report should indicate that
all logs have been created (existing space used = proposed space requirement).
12. Create a backup copy of the database. Refer to Backup and Restore on page
396.
13. On all nodes where either trend or history logs are configured, check system for
fragmented files, and defragment the applicable disks as required.
Initializing a New Workbook
The Bulk Import add-ins are added on a user basis. The add-ins are automatically added for the user that
installed the Information Management software. Before running Microsoft Excel as
a different user, install the add-ins for that user. Instructions are provided in
Installing Add-ins in Microsoft Excel on page 286.
To create and initialize a new workbook:
1. Launch Microsoft Excel and create a new workbook, Figure 160.
This sets up the workbook for bulk configuration by adding the applicable row
and column headings. Also, a dialog is presented for selecting a system (system
created via the 800xA Configuration Wizard), Figure 162.
Excel sheet columns A through E may disappear, hiding any log names. This can be corrected by unfreezing and then refreezing the panes.
3. Click Set System to connect the spreadsheet to the system indicated in the
Select a System dialog. Use the pull-down list to change the system if needed.
When finished initializing the worksheet, continue with Creating a List of Object
Properties on page 266.
The example specification in Figure 163 will import all objects of the type
ggcAIN and ggcPID (TYPE:ggcAIN:ggcPID).
Use the Object Type pick list if the spelling of the object type names is not
known. To do this, right click in the Object Column for the Import Filter row
(cell C2) and choose Object Type List from the context menu, Figure 164.
Figure 164. Opening the Object Type List (Using Context Menu on Object Column)
This displays a dialog for selecting one or more object types, Figure 165. Scroll
to and select the applicable object types (in this case ggcAIN and ggcPID).
Select as many object types as required, although selecting too many will
defeat the purpose of the filter. Select an object type again to unselect it.
Click OK when finished.
2. Enter the property specification in the Property column of the Import Filter
row. Specify one or more properties using the following syntax:
NAME:property1:property2, where each property is separated by a colon (:).
For example: NAME:Link.PV:Link.SP:Link.OUT, Figure 166.
To interpret the text as a pattern to match property names (for example: *value*),
omit the NAME: keyword.
Optionally, use a Data Type filter to make the object search more closely match
your requirements. The Data Type filter is a pattern filter. Enter the data type
manually (for example: Float), or use Data Type pick list which is available via
the Data Type cell’s context menu, Figure 167.
4. When finished with the import filter specification, choose Bulk Import>Load
Sheet from System, Figure 168.
This displays a dialog for selecting the root object in the Aspect Directory
under which the importer will look for objects and properties that meet the
filtering criteria, Figure 169.
Use the browser to select the root object whose properties will be imported.
Typically, the Include Child Objects check box to the right of the browser is
checked. This specifies that properties of the selected object AND properties of
all child objects under the selected object will be imported. If the box is not
checked, only properties for the selected object will be imported.
The other two check boxes specify:
• Whether or not to limit the import to properties that already have logs.
• Whether or not to enter the log configuration information in the spreadsheet
when importing properties that have logs.
ALWAYS check one or both of these check boxes; otherwise, no properties will
be imported. The operation of these check boxes is described in Table 28.
Table 28. Logged and Non-Logged Properties Check Boxes
• Include Logged Properties and Include Non-Logged Properties both checked - Import all properties according to the specified filter. For those properties that already have logs, fill in the Log Configuration Aspect and Log Template specifications.
• Include Non-Logged Properties only - Import all properties according to the specified filter. For those properties that already have logs, do not fill in the Log Configuration Aspect and Log Template specifications.
• Include Logged Properties only - Import only properties that already have logs, and fill in the Log Configuration Aspect and Log Template specifications.
• Neither checked - This will result in no properties being imported.
For example, the selections in Figure 170 will import the properties for objects
in the Group_01 Generic OPC Object branch based on the filter set up in steps
1&2. It will import the log configuration and log template specifications for
any properties that already have log configurations.
Only objects and properties that meet the filtering criteria will be imported.
6. Click Add when finished. This runs the importer.
A partial listing of the import result based on the specified filter is shown in
Figure 171. (The complete list is too lengthy to show.)
Use the Sheet Filter (row below Input Filter), or the sorting and filtering tools in
Microsoft Excel to make further adjustments to the list of imported objects and
properties.
Once the list is refined, continue with the procedure covered in Assigning Log
Templates below.
This displays a pull-down list for all property log templates that exist in the
system, Figure 173.
To use the same template for additional object properties in contiguous rows
under the currently selected object property, click on a corner of the cell, and pull
the corner down to highlight the Property Template column for as many rows as
required, Figure 174. The template specification will automatically be entered in
the highlighted cells when the mouse button is released.
3. The Log Configuration aspect name is specified in much the same manner as
the property template. To create a new Log Configuration aspect for the property log when one does not yet exist, enter a Log Configuration aspect name directly in the spreadsheet cell.
If no Log Configuration aspect names have been specified yet in the entire
column, typing the letter L in a cell will automatically fill in the cell with: Log
Configuration (matching the Column Name). This is a suitable aspect name. It
is not required for these names to be unique.
To add the property log to an existing Log Configuration aspect, right-click and
choose Log Config List from the context menu. This displays a list of Log
Configuration aspects that exist for that particular object. If no Log
Configuration aspects exist for the object, the list will indicate None.
As with the Property Template specification, to use the same Log
Configuration aspect name for contiguous object properties in the list, click on
a corner of the cell, and pull the corner down to highlight the Log Configuration
column for as many object properties as required.
When finished assigning Log Templates and Log Configuration Aspect names,
continue with the procedure described in Configuring Other Log Attributes below.
Then click the Load Report tab at the bottom of the spreadsheet, Figure 176.
An example report is shown in Figure 177. The report shows the disk space
requirements on a per-server basis. The information for each server is presented in a
separate column. For the example in Figure 177 there is only one server.
• Load Per Server - The report shows the disk space requirements on a per-server basis. In this example, there is only one server (in this case ROC2HFVP01).
• Existing - This row indicates the disk space (in bytes) used by existing property logs. In this case, 0 (zero) indicates that no space is used since there are no existing logs.
• Proposed - This row indicates the space requirement (in bytes) for the property logs which will be created based on this spreadsheet. In this case 3,891,200 bytes will be required.
• Total - This row indicates the sum of the Existing and Proposed disk space requirements.
• Grand Total - This row indicates the sum of the totals for all servers.
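The report's totals are simple sums, as the following Python sketch shows (server name and byte counts taken from the example above):

# Load report arithmetic: per-server Total = Existing + Proposed space,
# and the Grand Total sums the totals of all servers.
servers = {"ROC2HFVP01": {"existing": 0, "proposed": 3_891_200}}

totals = {name: s["existing"] + s["proposed"] for name, s in servers.items()}
grand_total = sum(totals.values())
print(totals, grand_total)   # {'ROC2HFVP01': 3891200} 3891200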
Compare the results of this report with the current disk space allocation for file-
based logs. Use the Instance Maintenance Wizard or hsDBMaint function as
described in Directory Maintenance for File-based Logs on page 454. Make any
adjustments as required, following the guidelines provided in the Directory
Maintenance section. When finished, continue with the procedure covered in
Creating the Property Logs on page 279.
An example result is shown in Figure 181. The presence of a red not verified
message in the far left (A) column indicates a problem within the corresponding
row. The item that caused the failure, for example the log template, will also be
shown in red. Verified rows are shown in black.
Some rows may be marked as not verified because the verification function was run
before History Services was able to create the logs. Clear these rows by re-running
the verification function. Rows that were not verified on the previous run, but are
verified on a subsequent run are shown in green. The next time the function is run,
the green rows will be shown in black.
Guidelines for handling not verified rows are provided in Handling Not Verified
Rows on page 284.
To get a summary for each time the verification function is run, click the Status and
Errors tab near the bottom of the spreadsheet, Figure 182. A new entry is created
each time the verify function is run. The summary is described in Table 29.
• Date and Time - Date and time when the verify function was run.
• Rows Processed - Number of rows that History Services finished processing at the time of this verification instance.
• Rows Verified - Number of processed rows that have been verified at the time of this verification instance.
• Rows Not Verified Due to Property Logs - Number of rows not verified due to a problem with the Log Configuration aspect or property log.
• Rows Not Verified Due to Component Logs - Number of rows not verified due to a problem with component logs. The log configuration aspect was created, but the component history logs do not have item IDs.
• Logs Processed - Number of component logs (within a property log structure) processed.
• Logs Verified - Number of component logs (within a property log structure) verified.
• Logs Not Verified - Number of component logs (within a property log structure) not verified.
• Total Time - Time taken to run the verify function.
When using the Bulk Configuration tool in Microsoft Excel, the Update System Status information will be incorrect when a sheet has been filtered. After filtering, only the visible rows are processed during the Update Log Aspects from Sheet operation. However, the Status and Errors sheet shows the number of rows written to the system as the total (unfiltered) number of rows. This is the normal mode of operation.
To further confirm that property logs have been instantiated for their respective
object properties, go to the Control structure and display the Log Configuration
aspect for one of the objects. An example is shown in Figure 183.
Troubleshooting
On occasion, the Bulk Configuration Tool may fail to create logs for one or more
properties in the property list because the tool was temporarily unable to access the
Aspect Directory for data. The rows for these properties will have a red not verified
message in column A of the spreadsheet after running the verification function
(Verifying the Update on page 280). Also, the item that caused the failure (for
example, the log template) will be shown in red. If this occurs, use the following
procedure to modify the property list to show not verified rows, and then re-run the
Bulk Configuration tool to create the logs for those rows:
1. Select column A, and then choose Data>Filter>Auto Filter from the Excel
menu bar, Figure 184.
This will hide all rows that have no content in Column A. These rows have
been verified. Only rows that have not been verified will be shown, Figure 186.
9. Verify that the Inform IT Bulk Import box is checked in the Add-ins tool,
Figure 187, then click OK.
10. Verify the Bulk Import add-in tools are added in Excel, Figure 188.
Unless the macros are signed, Excel macro security must be set to Low for them to function. The macro security must be set to Low whenever the Bulk Import tool or the Archive Import tool is used.
Section 10 Profile Data Collection
Introduction
This section provides an overview of the Profile Historian function in the 800xA
system, and instructions for configuring this functionality.
Figure: Profile Historian Overview (machines 1..n with frames S1-1 and S1-2, AC 800M controllers, an AccuRay object server, profile logs in the History database on the Profile Historian server, Profile Historian client displays, and reel/grade report configuration).
Under the Inform IT History object, find the containers for each of the Inform IT History object types. The History objects (in this case, a Profile Log) must be instantiated under the corresponding container.
4. From the Profile Logs group, choose New Object from the context menu. This
displays the New Object dialog with the Inform IT Profile Log object type
selected.
5. Enter a name for the object in the Name field, for example: CA1Profile, then
click Create. This adds the object under the Profile Logs group, and creates a
corresponding Profile Log Aspect. This aspect is used to configure the profile
log attributes.
6. To display this aspect, select the applicable Profile Log object, then select the
Inform IT History Profile Log aspect from the object’s aspect list.
7. Use this aspect to configure the Profile Log Attributes.
Data Source
The profile log has multiple data sources - one for each of the quality measurements
being stored in the profile log. The Data Source entered on the Main tab specifies
the root portion of the OPC tag. The full data source tags for each of the quality
measurements are automatically filled in on the Data Source tab based on the root
portion of the OPC tag.
To select the OPC tag:
1. Click the browse button for Data Source, Figure 193.
This displays the Select Data Source dialog for browsing the OPC server for
the applicable object.
2. Use this dialog to select the OPC object which holds the profile data as
demonstrated in Figure 194. Typically, the object is located in the Control
structure, under the Generic OPC Network object for the Profile data server.
3. Click Apply when done. This enters the object name in the Data Source field,
Figure 195.
In addition, the data source tags for the quality measurements are automatically
entered under the Data Source tab, Figure 196.
The !DATA_SOURCE! string represents the root of the OPC tag which is common for all data sources. The string following the colon (:) specifies the remainder of the tag for each measurement, for example !DATA_SOURCE!:FirstBox, !DATA_SOURCE!:ProfileAverage, and so on.
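As a rough illustration of how the root tag and the per-measurement suffixes combine, the following sketch expands a !DATA_SOURCE! placeholder into full tags. The root tag and measurement names are invented example values, not taken from a specific system:

```python
# Sketch: expand the !DATA_SOURCE! placeholder into full per-measurement tags.
# Root tag and measurement names below are illustrative examples only.
root = "PM1/SQCS/MIS/CA1"          # value entered in the Data Source field (example)
templates = [
    "!DATA_SOURCE!:FirstBox",
    "!DATA_SOURCE!:ProfileAverage",
]

full_tags = [t.replace("!DATA_SOURCE!", root) for t in templates]
for tag in full_tags:
    print(tag)   # e.g. PM1/SQCS/MIS/CA1:FirstBox
```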
Log Set
Use the Log Set pull-down list on the Main tab to select the log set which was
configured specifically for the profile logs. The name of this log set must match the
machine name, for example PM1 as shown in Figure 197. Log Sets should not be
used for xPAT configurations.
Array Size
The Array Size attribute is located on the Collection tab, Figure 198. Array size
must be configured to match the number of data points per scan for the machine
from which the data points are being collected. The default is 100.
To determine the correct array size for the profile log via the Control Connection
aspect of the selected OPC tag, Figure 199, select the Control Connection aspect,
check the Subscribe to Live Data check box, then read the value for ProfileArray
(for example, 600 in Figure 199).
If Array Size is configured too large or too small, the file will not be
appropriately sized for the expected log period. For instance, consider a case
where the log period is eight days and the machine generates 200 data points per
scan. If the array size is set to 100, the log will only be capable of holding four
days worth of data. If the array size is set to 400, the log will store 16 days worth
of data, and needlessly consume more disk space.
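To make the sizing trade-off concrete, the sketch below reproduces the example in the note above; the relationship used (data held scales with array size divided by points per scan) is an assumption inferred from that example rather than a documented formula:

```python
# Sketch: effect of Array Size on how many days of data a profile log holds.
# Based on the worked example above (8-day log period, 200 points per scan).
log_period_days = 8
points_per_scan = 200

def days_held(array_size):
    # Assumed relationship inferred from the example: the file is sized for
    # log_period scans of array_size points, so coverage scales accordingly.
    return log_period_days * array_size / points_per_scan

for array_size in (100, 200, 400):
    print(f"array size {array_size}: about {days_held(array_size):.0f} days of data")
# array size 100 -> 4 days, 200 -> 8 days (matches the scan size), 400 -> 16 days
```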
Log Period
Disk space is allocated to logs according to their log capacity, which is a function of
the log time period and storage interval. The log period can be adjusted on an
individual log basis to keep log capacity at a reasonable size for each log. For profile
logs, log capacity is calculated as: log capacity = (log period / storage interval). Enter the log period as an integer value (≤4095) with a time unit indicator: Weeks, Days, Hours, Minutes, or Seconds. The maximum is 4095 Weeks.
Storage Interval
This is the time interval at which data is stored in the log. Enter the storage interval
as an Integer value with a time unit indicator: Weeks, Days, Hours, Minutes,
Seconds. For example: 40d = 40 days, 8h = 8 hours. The maximum is 4095 Weeks.
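A small sketch of the capacity relationship is shown below. It parses period/interval strings in the style used on this tab (for example 40d, 8h) and applies the formula log capacity = log period / storage interval; the parsing helper is hypothetical and only covers the unit letters mentioned above:

```python
# Sketch: compute profile log capacity from Log Period and Storage Interval.
# Uses the formula given above: capacity = log period / storage interval.
UNIT_SECONDS = {"w": 7 * 86400, "d": 86400, "h": 3600, "m": 60, "s": 1}

def to_seconds(value: str) -> int:
    """Parse strings like '40d' or '8h' into seconds (illustrative helper)."""
    number, unit = int(value[:-1]), value[-1].lower()
    return number * UNIT_SECONDS[unit]

def log_capacity(log_period: str, storage_interval: str) -> int:
    return to_seconds(log_period) // to_seconds(storage_interval)

# Example: an 8-day log period stored every 30 seconds.
print(log_capacity("8d", "30s"))   # 23040 entries
```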
Machine Position
This attribute is located on the Data Source tab, Figure 200. Enter the name of the
object which holds the Reel Footage value. An example is shown in Figure 201.
Using this example, the Machine Position value would be
PM1/SQCS/MIS/RL/ACCREP:ReelFootage.
Figure 201. Finding the Object Which Holds the Reel Footage Value
Data Source: Refer to Data Source on page 294.
Log Set: Refer to Log Set on page 296.
2nd Log Set: Optional second log set.
Archive Group: Typically, to archive the profile log, assign the log to an archive group. For details regarding archive groups, and the archive function in general, refer to Section 11, Configuring the Archive Function.
Description: Enter an optional descriptive text string for this log.
Start State: This is the initial log state when History Services is restarted via PAS. Choices are: START_INACTIVE, START_ACTIVE, START_LAST_STATE (default) - restart in the state the log had when the node shut down.
Calculation Type: This defaults to OPC and CANNOT be changed.
Calculation Mode: This defaults to Synchronous and CANNOT be changed.

Log Period: Refer to Log Period on page 297.
Storage Interval: Refer to Storage Interval on page 298.
Array Size: Refer to Array Size on page 296.
Compression Type: This attribute is not implemented in this version of History Services and may be left at its default value (No Compression).

Log Capacity: This is the maximum number of entries for a log. If a log is full, new entries replace the oldest entries. This field is not configurable. Log capacity is calculated as follows: Log Capacity = (Log Period / Storage Interval).
Storage Type: Storage Type defaults to type 4, which is the only valid storage type for profile logs. This attribute cannot be modified.
Start Time: This is the earliest allowable system time that the log can become active. If the specified start time has already passed, the log will begin to store data at the next storage interval. If the start time is in the future, the log will go to PENDING state until the start time arrives. Enter the start time in the following format: 1/1/1990 12:00:00 AM (default). The order of the month, day, and year and the abbreviation of the month are standardized based upon the language being used.
Log Data Type: Data type defaults to Float, which is the only valid data type for profile logs. This attribute cannot be modified.

Machine Position: Refer to Machine Position on page 298.
All other attributes on this tab: These are the names for the objects which hold the quality measurements for the profile log. The root object name is specified via the Data Source attribute on the Main tab, and is represented as !DATA_SOURCE! in the respective fields on this tab.
Configuring Reports
History Profiles may be configured to collect data for: reel turn-up events, product
grade change events, day shift, and roll setup. This functionality is configured via
the Reel Report Configuration Tool.
The Reel Report Configuration Tool is used to build these four reports for each
machine in the plant. An example is shown in Figure 202.
For each machine specified, the Reel Report Configuration Tool provides a template
to facilitate report configuration. This template provides place holders for each of
the signals/variables which must be specified. Add signals/variables as required.
To collect data for a particular report, specify the tag corresponding to each signal/variable. The signals/variables are read from OPC data items stored in an OPC server.
Start by adding a machine, and then specify the OPC data items to be read for that
machine.
This adds a new machine with the default structure as shown in Figure 205.
The default machine structure provides a default specification for REEL turn-
up, GRADE change, DAYSHIFT, and RollSetup.
2. Specify the machine name. To do this, click on Machine Name, and then enter
a new name, for example: PM1, Figure 206. This machine name must also be
used as the name for the log set to which all profile logs for this machine will
be assigned.
2. Enter the OPC tag for the trigger that indicates the occurrence of a reel turn-up.
An example is shown in Figure 208.
3. Repeat this procedure to specify the OPC tag for the REEL Number counter
and ReelTrim. An example completed specification is shown in Figure 209.
b. Select the Report Heading place holder and enter the new report heading,
for example: ProductionSummary. There are no specific requirements
for the heading, except that it must be unique within the event (in this case,
within the REEL for PM1).
c. Right-click on the report heading and choose Add New Tag from the
context menu. This adds an OPC tag place holder for the new
measurement.
d. Specify the OPC tag for the measurement. This procedure is the same as
for configuring any other signal. An example is shown in Figure 211.
Configuring ROLLSetup
This report is used to examine sets and rolls within a reel. This setup divides the reel
lengthwise into a specified number of sets. Each set is divided by width into a
specified number of rolls. This is illustrated in Figure 215.
The default specification for ROLLSetup is shown in Figure 216. The variables are
described in Table 33. These signals are mandatory in order to generate a ROLLSetup report and cannot be deleted. One or more additional measurements may optionally be added for collection. The procedure for specifying the
signals is the same as the procedure described in Specifying Signals for Reel Turn-
up on page 305.
NumberOfRolls: Indicates the number of rolls for the current set being processed.
ReelNumber: Indicates the reel number for the current reel.
ReelTrim: Indicates the width of the reel.
RollTrimArray: One value for each roll in the current set; indicates the width of each roll.
SetFootage: Indicates the actual length of the current set.
SetLength: Indicates the recommended or target length of any set for this reel.
SetNumber: Indicates the current set number.
SetupReadyTrigger: Indicates when the current set has been completed.
Since profile log data is automatically archived with the PDL data, only the PDL
archive function for the Profile Historian application will need to be configured. For
details regarding this procedure, refer to PDL Archive Configuration on page 350.
Section 11 Configuring the Archive Function
The hard disk may be partitioned into multiple volumes which are sized to
match CD ROM or DVD media. The archive backup function may be set up to
write the contents of archive volumes to ISO Image files as volumes become
full. The ISO image files may be burned onto CD ROM or DVD media for
permanent storage. As files are saved on the CD or DVD media, the file copies
on hard disk must periodically be purged to make room for new archive entries.
As an alternative, specify the archive backup function to create shadow copies
of filled archive volumes on network file servers. Use both ISO image files and
shadow copies as needed.
Archive Configuration
Archiving is managed by one or more archive device objects which are configured
in the Node Administration structure. An archive device is a logical entity that
defines where and how archive data is written. Every MO or disk drive used for
archiving must have at least one archive device aspect configured for it. A single
drive may have several archive devices configured for it to satisfy different archive
requirements. For example, more sensitive data may be archived through a separate
device which is configured to prevent automatic overwriting of stored data.
Archiving may be scheduled to occur on a periodic or event-driven basis through the Application Scheduler, or manual archive operations may be executed on demand. For manual archives, if the specified time range has no samples, no data will be
archived. For scheduled archives, even if no new samples were collected, at least
one sample (the last valid point) will be archived. Each archive operation is referred
to as an archive entry.
Scheduled archiving is implemented through archive groups. These are user-defined
groups of logs which are archived together as a single unit. Scheduling instructions
for archive groups are specified in job description objects created in the Scheduling
structure. The schedules are associated with their respective archive groups through
an Archive Action aspect attached to the job description object. Manual archiving
may be done on an archive group basis, or by selecting individual logs.
may eventually exceed the capacity of the archive media. To reduce the risk of this
occurring, configure the archive device and archive groups according to the
guidelines described in Archive Configuration Guidelines on page 316.
Archive Topics
• Configuring Archive Devices on page 321.
• Configuring Archive Groups on page 331.
• Setting Up the Archive Schedule for an Archive Group on page 342.
• Adding a Read-Only Volume for a Mapped Network Drive on page 349.
• PDL Archive Configuration on page 350.
• Accessing and managing archive data is described in System 800xA
Information Management Data Access and Reports (3BUF001094*).
• 2000 logs @ 60-second storage rate (2000 points per minute), 20-day log
period.
The archive media is a 5-gigabyte MO platter (2.5 gigabytes per side). After
subtracting archive overhead, each side can hold about 2.2 gigabytes.
This example may be adapted for the hard disk/ROM archive scenario. In this
case, the hard disk must have approximately 2.2 gigabytes of free space allotted
to storage of archive data.
This example covers numeric data only.
To proceed with planning for this configuration, follow the procedures described in:
• Estimating How Often to Change the Archive Media on page 317.
• Determining the Quantity and Size of Archive Groups on page 318.
For archiving to hard disk, calculate the estimated time to fill all volumes.
Deadband Considerations
Deadband decreases the rate at which the platter is filled. For example, if deadband results in only 70% of the samples being saved, the rate at which the platter is filled will decrease as shown below:
Time = 2.2 gigabytes / (29K points per minute * 0.7 * 60 * 24 * 21 bytes per point) = 3.6 days
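The same estimate can be reproduced numerically. The sketch below uses the figures from this example (29,000 points per minute across all logs, 21 bytes per stored point, 2.2 gigabytes usable per platter side); treat it as a planning aid, not an exact model of archive overhead:

```python
# Sketch: estimate how quickly an archive volume fills, with and without deadband.
# Figures are taken from the example above.
usable_bytes = 2.2e9          # usable space per MO platter side (after overhead)
points_per_minute = 29_000    # total storage rate across all logs
bytes_per_point = 21          # approximate archive storage per numeric sample

def days_to_fill(deadband_keep_ratio=1.0):
    bytes_per_day = points_per_minute * deadband_keep_ratio * bytes_per_point * 60 * 24
    return usable_bytes / bytes_per_day

print(f"no deadband:  {days_to_fill():.1f} days")      # ~2.5 days
print(f"70% retained: {days_to_fill(0.7):.1f} days")    # ~3.6 days
```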
platter is full and archive operations are not monitored over that time period. A
longer interval (two weeks) is less likely to miss so many archive attempts.
If the cumulative size of the current archive entry and the previous failed attempts
exceeds the capacity of the archive media, the archive entry will be split into some
number of smaller entries such that one entry will not exceed the available space. To
minimize this occurrence, use a conservative estimate for reasonable number of
consecutive failures. At the same time, keep in mind that being overly conservative
will result in an abnormally large number of archive groups.
Use the following procedure to calculate a reasonable size for an archive entry. This
procedure must be done for each unique log configuration (storage rate/log period).
To illustrate, the following calculation is for the log configuration that has 500 logs
@ 2-second storage rate, and 20-day log period:
1. Calculate the archive interval. It is recommended that an interval which is no
greater than 1/4 the log time period be used. This allows for a number of
archive failures (due to media full or drive problems) without loss of data.
A smaller archive interval is recommended for large log periods (one year). This
limits the amount of data that may be lost in the event of a hardware failure. Do
not make the archive interval too small. This causes a greater percentage of the
disk to be used for overhead.
For a 20-day log period, the recommended archive interval = 1/4 * 20 = 5 days.
2. Calculate the amount of archive data that one log generates over the archive
interval. The following calculation is based on the 2-second storage interval.
21 bytes per archive entry * 30 points per minute * 60 minutes * 24 hours * 5 days = 4.5 megabytes
For systems using deadband, the amount of data per archive interval will be
reduced.
3. Calculate the maximum archive entry size for any archive group supporting this
log configuration:
Steps 1-4 must be repeated for each log configuration (storage rate/log period combination).
Configuration | One Log's Data per Archive Interval | Logs/Group | No. of Archive Groups
500 @ 2-sec. | 21 bytes * 30 ppm * 60 m * 24 h * 5 d = 4.5 Mbytes | 550/4.5 = 125 | 500/125 = 4
3000 @ 15-sec. | 21 bytes * 4 ppm * 60 m * 24 h * 5 d = 605 Kbytes | 550/.605 = 909 | 3000/909 = 4
2000 @ 60-sec. | 21 bytes * 1 ppm * 60 m * 24 h * 5 d = 151 Kbytes | 550/.151 = 3642 | 2000/3642 = 1
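The per-configuration arithmetic in this table can be scripted when many log configurations have to be planned. The sketch below reproduces the three rows above; the 550 MB figure used as the maximum archive entry size is taken from the table (its derivation is on the preceding pages and is assumed here), and the helper itself is not part of the product:

```python
# Sketch: size archive groups per log configuration (storage rate, log period).
# max_entry_mb = largest reasonable archive entry, taken from the table above.
import math

max_entry_mb = 550
archive_interval_days = 5      # 1/4 of the 20-day log period
bytes_per_sample = 21

configs = [            # (number of logs, storage interval in seconds)
    (500, 2),
    (3000, 15),
    (2000, 60),
]

for n_logs, interval_s in configs:
    points_per_minute = 60 / interval_s
    one_log_mb = bytes_per_sample * points_per_minute * 60 * 24 * archive_interval_days / 1e6
    logs_per_group = int(max_entry_mb / one_log_mb)
    groups = math.ceil(n_logs / logs_per_group)
    print(f"{n_logs} logs @ {interval_s}s: {one_log_mb:.3f} MB/log, "
          f"{logs_per_group} logs/group, {groups} group(s)")
# Note: the first row yields 5 groups here; the table above, which works with
# rounded per-log figures, arrives at 125 logs/group and 4 groups.
```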
Configure Archiving
Configure the archive application according to the results of your calculations:
Removing archive volumes does not recapture used disk space. The second step of manually deleting the extra or removed volumes ensures that archive data is not deleted as a result of a configuration mistake.
When deleting volumes from a Disk Drive archive device, delete the folder
manually. Look for a folder under the Device File with the name nnArch where
nn is the volume number. Delete folders that match the volumes that were deleted
from the device.
Refer to the computer’s documentation for instructions on connecting storage
devices to the computer where the archive service runs.
The operating parameters for each archive device are specified in the corresponding archive device aspect. This requires adding one or more archive device objects for each archive media (MO or disk drive), and then configuring the archive device aspects. For instructions, refer to Adding an Archive Device on page 322.
5. Click Archive Device in the Create New Archive Object section. This displays
the New Archive Device dialog, Figure 219.
6. Enter a name for the object in the Name field, for example: ArchDev2OnAD,
then click OK.
Keep the Show Aspect Config Page check box checked. This will automatically
open the configuration view of the Archive Device aspect.
This adds the Archive Device object under the Industrial IT Archive object and
creates an Archive Device Aspect for the new object. Use this aspect to configure
the device as described in Archive Device Aspect below.
This aspect also has a main view for managing the archive volumes on the archive
device. This is described in System 800xA Information Management Data Access
and Reports (3BUF001094*).
The archive device operating parameters which must be configured, and the manner
in which they are configured depends somewhat on the type of device (MO or hard
disk). Review the guidelines below. Details are provided in the sections that follow.
When done, click Apply to apply the changes.
Guidelines
To configure an archive device, first specify the type of media for which the device
is being configured, and then configure the operating parameters according to that
media.
The Archive Path specifies the drive where archive data will be written. If the
archive media is a hard disk, specify the directory.
The Device Behavior and Overwrite Timeout fields are used to specify how
archiving will proceed when the current archive media (volume) becomes full.
When using a disk drive, partition the disk into one or more volumes. This is not
applicable for MO drives. To configure volumes, specify the number and size of the
volumes, and set up volume naming conventions. Optionally, specify whether or not
to have archive volumes automatically published when the volumes have been
copied to a removable media (CD or DVD) and the media is remounted.
For disk-based archiving, it is recommended that automatic backup of archive data
to another media be configured. This way when a volume becomes full, an ISO
image or shadow copy, or both are created on another media. This is not applicable
for MO media which are removable and replaceable.
Optionally, specify whether or not to generate an alarm when the archive media (or backup media) exceeds a specified disk usage limit. If these features are enabled, the alarms are recorded in the 800xA System message buffer.
Attempting to apply an invalid parameter setting causes the Device State field to
be highlighted in red to indicate an invalid configuration.
Device Type
Specify the type of media for which the archive device is being configured:
Archive Path
This specifies the location where data will be archived. If the Device Type is an MO drive, enter the drive letter, for example: D: or E:. If the Device Type is Disk Drive, specify the full path to the directory where data will be archived, for example: E:\archive (the root directory is not a valid entry for a disk drive).
virtually no delay, enter a very small delay, for instance: 1 second. The Overwrite
Timeout is stored on the media, so removing the media from the drive and then
replacing it will not affect the Overwrite Timeout. Overwrite Timeout can be
changed via the Initialize dialog when a media is initialized for manual archiving.
When using archive backup (Configuring Archive Backup on page 329) follow
these guidelines for configuring Device Behavior and Overwrite Timeout.
When using archive backup, it is recommended that the device behavior be set to
Wrap When Full, and set the overwrite timeout to allow for periodic checking of the
backup destination to make sure its not full. The overwrite timeout should be two to
three times the interval that is checked for the backup destination. If checking once a
week, then set the overwrite timeout to two or three weeks. Also, the number of
volumes and volume size must be configured to hold that amount of data (in this
case two to three weeks). This is covered in Configuring Volumes on page 327.
Following these guidelines will ensure reliable archival with ample time to check
the backup destination and move the archive files to an offline permanent storage
media.
Configuring Volumes
Hard disks can be partitioned into multiple volumes. This involves configuring the
number of volumes, the active volume, volume quota, volume name format, and
volume name counter. Specify whether or not to have archive volumes
automatically published when the media where the volumes have been copied are
remounted. Configuring volumes is not applicable for MO drives.
The Volumes field specifies the number of volumes on the media that this archive device can access. The maximum is 64. The quantity specified here will result
in that number of Archive Volume objects being created under the Archive Device
object after the change is applied, Figure 221.
Figure 221. Specified Number of Volumes Created Under Archive Device Object
Volume Quota is the partition size for each volume in megabytes. For example, a 20-gigabyte hard disk can be partitioned into five 4000-megabyte partitions, where 4000 (MB) is the Volume Quota and five is the number of volumes as determined by
the Volumes field. Typically, size volumes on the hard disk to correspond to the size
of the ROM media to which the archive data will be copied, for example 650 MB for
CD ROM, or 4000 MB for DVD. The minimum Volume Quota value is 100M. The
maximum Volume Quota is 97.6GB.
The Volume Name Format is a format string for generating the volume ID when
initializing a volume during timed archive. Enter any string with/without format
characters (all strftime format characters are accepted). This can be changed when
manually initializing a volume. The default value is ‘%d%b%y_####’:
%d = day of month as a decimal number [1,31].
%b = abbreviated month name for locale, for example Feb or Sept.
%y = year without century as a decimal number [00,99].
#### is replaced by the Volume Name Counter value to make it a unique volume ID; the number of #'s determines the number of digits in the number.
The Volume Name Counter is used to generate a unique volume id when the
volume is initialized. The default value is 1. This number is incremented and
appended to the Volume ID each time the volume is initialized.
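To illustrate how the format string and counter combine into a volume ID, here is a minimal sketch using Python's strftime, assuming the default format '%d%b%y_####' and a counter value of 1 (the zero-padding of the # characters is an assumption based on the description above):

```python
# Sketch: generate a volume ID from the Volume Name Format and Volume Name Counter.
# Assumes '#' runs are replaced by the counter, zero-padded to the run length.
import re
from datetime import datetime

def make_volume_id(fmt: str, counter: int, when: datetime) -> str:
    stamped = when.strftime(fmt)   # expand strftime characters such as %d %b %y
    return re.sub(r"#+", lambda m: str(counter).zfill(len(m.group())), stamped)

print(make_volume_id("%d%b%y_####", 1, datetime(2011, 2, 7)))
# -> 07Feb11_0001
```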
The Active Volume field is the volume number of the current volume being
archived, or to be archived.
The Create Auto-Publish Volumes check box is not operational in this software
version.
When ISO backups are selected, individual archives are limited to 2GB. The ISO
file format does not support files greater than 2GB. If ISO backups are not used,
the maximum individual archive can be equal to the maximum archive volume
size.
Archive backup is configured with the Backup Archive Path and Backup Type
fields.
The Backup Archive Path specifies the destination for ISO Image files and/or
archive copies when backing up archive data. The path must specify both the disk
and directory structure where the archiving is to take place, for example:
E:\archivebackup.
The ISO image and shadow copy can be made on a remote computer, which does not require Information Management software. The directory on the remote node must be a shared folder with the correct privileges (including the write privilege). For example: \\130.110.111.20\isofile.
There are two requirements for the computer where the destination directory is
located:
• The destination directory must exist on the computer BEFORE configuring
the archive device.
• The computer must have a Windows user account identical to the account
for the user under which the Archive Service runs. This is typically the 800xA
system service account.
If these conditions are not met, the error message shown in Figure 222 will be
displayed after applying the archive device configuration. The object will be created
(Apply Successful initial indication); however, the archive device will not be
operational, and the error message will appear when the message bar updates.
The Backup Type specifies the backup method. The options are:
ISO Image creates an ISO image file for the currently active volume when the
volume becomes full. The ISO Image file can then be copied to a ROM media for
permanent storage.
When ISO backups are selected, individual archives are limited to 2GB. The ISO
file format does not support files greater than 2GB. If ISO backups are not used,
the maximum individual archive can be 50GB and the maximum archive can be
100GB.
Copy Files creates a shadow copy rather than an ISO Image file.
BOTH creates both an ISO image and a shadow copy.
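A quick way to reason about these limits is sketched below; the 2 GB limit for ISO image backups is taken from the notes above, the volume-size bound for the non-ISO case reflects the earlier note, and the check itself is only an illustration, not a product feature:

```python
# Sketch: check whether a planned archive entry fits the backup type's limit.
# The 2 GB limit for ISO image backups is from the note above.
ISO_MAX_BYTES = 2 * 1024**3

def entry_fits(backup_type: str, entry_bytes: int, volume_quota_bytes: int) -> bool:
    if backup_type in ("ISO Image", "BOTH"):
        return entry_bytes <= ISO_MAX_BYTES
    return entry_bytes <= volume_quota_bytes   # otherwise bounded by the volume size

print(entry_fits("ISO Image", 3 * 1024**3, 4 * 1024**3))   # False: 3 GB exceeds the ISO limit
print(entry_fits("Copy Files", 3 * 1024**3, 4 * 1024**3))  # True
```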
3. In the object tree for the selected node, expand the Industrial IT Service
Provider tree and select the Industrial IT Archive object.
4. Select the Archive Service Aspect from this object’s aspect list.
5. Click Archive Group, Figure 223.
6. Enter a name for the object in the Name field, for example: ArchGroupOnAD,
then click OK.
Keep the Show Aspect Config Page check box checked. This will automatically open the configuration view of the Archive Group aspect.
This adds the Archive Group object under the Industrial IT Archive object and
creates an Archive Group Aspect for the new object. Use this aspect to configure
one or more archive groups as described in Archive Group Aspect on page 333.
When configuring the History database, the primary function of this aspect is for
adding and configuring archive groups. This is described in Adding and Configuring
an Archive Group on page 334.
After configuring an archive group, changes can be made to the group or to the archive entries added to the group. This is described in Adjusting Archive Group Configurations on page 341.
This aspect also is used to invoke manual archive operations on an archive group
basis. This is described in System 800xA Information Management Data Access and
Reports (3BUF001094*).
2. Use this dialog to specify the Group name, description (optional), and the
Industrial IT Archive Service Group whose service provider will manage this
archive group.
3. Click OK when finished.
4. Click Apply to initialize the new group.
Repeat this to add as many groups as required. Then specify the contents of each
archive group as described in Adding Entries to Archive Groups.
Figure 230. Add Archive Group Numeric Log Entry, IM Objects Entry
When the preset configuration is Signed Report Objects and the completed reports (FileViewer aspects in the completed report objects) are not signed, scheduled platform object archiving skips these report objects. Once the aspects are signed, the skipped reports will be archived with the next archive entry.
There is a possibility that unsigned reports missed by the archive will be deleted before they are archived, due to the Report Preferences setting in the Reports tree of the Scheduling structure. Make sure that reports are digitally signed, and do not let the archive build up to the maximum number of reports before unsigned reports are deleted.
To select another type of object, adjust one or more settings, or simply understand what each setting does, refer to Table 36.
Browser: Left-hand side of dialog provides a browser similar to Plant Explorer for navigating the aspect directory and selecting the root object. Objects and aspects under the selected root will be selected based on the filters.
Object Type & Aspect Filters: These filters are used to specify whether to include or exclude all objects and/or aspects of one or more selected types. To use a filter, the corresponding Enabled check box must be checked. The Inclusive Filter check box is used to specify whether to include or exclude all object (or aspect) types that satisfy the filter. Inclusive includes the selected aspects/objects in the list, exclusive (check box not checked) excludes them. To add an object (or aspect) to the filter, click the corresponding Add button. This displays a search dialog. Set the view in this dialog to hierarchy or list. Any object or aspect type that has been added may be removed from the list by selecting it and clicking Remove.
Required Signatures: This area is used to specify whether or not to check for signatures on selected aspects. When Enabled, only aspects which have been signed with the specified number of signatures (default is one signature) will be archived. To select an aspect, click the Add button. This displays a search dialog. Set the view in this dialog to hierarchy or list. Any aspect that has been added may be removed from the list by selecting it and clicking Remove. The Change button displays a dialog used to change the number of required signatures (up to five maximum).
Generate warning if not signed: This check box is used to specify whether or not to generate an event when the archive service does not archive an aspect because the aspect is not signed. If an aspect has missed an archive because it was not signed, once the aspect is signed, it will be archived with the next archive entry.
Restored object type: This area is used to specify the type of object to create when an archived aspect is restored. Check the corresponding check box, then click Select to select the object type.
When selecting an entry from the entries list for an archive group, the context menu
(or Group Entries button) is used to modify or delete the entry, Figure 233. The
Modify Entry command displays the object selection dialog as described in Adding
Entries to Archive Groups on page 335.
3. Browse to the Scheduling Objects category and select the Job Description
object (Object Types>ABB Systems>Scheduling Objects>Job Description),
Figure 235.
4. Assign the object a logical name.
5. Click Create. This creates the new job under the Job Descriptions group, and
adds the Schedule Definition aspect to the object’s aspect list.
6. Click on the Scheduling Definition aspect to display the configuration view,
Figure 236. This figure shows the scheduling definition aspect configured as a
periodic schedule. A new archive will be created every three days, starting June
9th at 5:00 PM, and continuing until September 9th at 5:00 PM.
2. In the New Aspect dialog, browse to the Scheduler category and select the
Action aspect (path is: Scheduler>Action Aspect>Action Aspect),
Figure 238.
Use the default aspect name, or specify a new name.
7. This adds the Archive Volume object under the Industrial IT Archive object and
creates an Archive Volume aspect for the new object.
8. Use the config view to select the Service Group that will manage archive
transactions for this volume, and set the volume path to the mapped network
drive.
1. Select the Inform IT History Object under the applicable node in the Node
Administration structure.
2. Select the Production Data Log Container.
3. Select the Inform IT PDL Auto Archive aspect, Figure 244.
Table 37. Legal Combinations for Archive Mode, Delete Mode, and Delay Mode
Use the Archive Mode button to select the archive mode. The choices are:
• No_auto_archive - Archive PDLs on demand via View Production Data Logs.
• On Job End - Archive PDLs automatically after specified delay from end of
job (for Batch Management, job = campaign).
If On Job End is chosen for a Batch Management campaign with multiple
batches in the campaign, Information Management archives data at the end of
each batch in the campaign, and at the end of the campaign. For example, if there
are three batches in a campaign, and On Job End is selected as the Archive
mode, there will be three archives: batch 1, batch 1 and batch 2, and batch 1,
batch2, and batch 3.
• On Batch End - Archive PDLs automatically after specified delay from end of
batch. (For Profile Historian, batch = reel or grade.)
For Profile Historian applications, configure the archive mode as ON BATCH
END.
Use the Delete Mode button to select a delete mode. The choices are:
• No_auto_delete - Delete PDLs on demand via PDL window.
• After Archive - Delete PDLs from PDL database automatically, a specified
time after they have been archived.
• After Job End - Delete PDLs after specified delay from end of job (for Batch
Management, job = campaign). The PDL does not have to be archived.
• After Batch End - Delete PDLs after a specified delay from the end of the
batch. The PDL does not have to be archived to do this. (For Profile Historian,
batch = reel or grade.)
For Profile Historian applications, configure the delete mode as AFTER
ARCHIVE. Also the Delete Delay must be set to a value greater than or equal to
the profile log time period.
Archive Delay
A delay can be specified between the time a job or batch finishes and the time the
PDLs are actually archived. This field is only applicable if On Job End or On
Batch End for archive mode is selected.
Configure the delay to be at least five minutes (5m). This ensures that all data is
ready for archive after the end of a job or batch. A delay of hours (h) or days (d) can
be specified if time is needed to enter additional data for archive.
Delete Delay
A delay until PDLs are actually deleted can be specified. This delay is not
applicable when the delete mode is configured as NO_AUTO_DELETE. This delay
has a different purpose depending on whether the PDL is archived or not.
When the PDL is being archived, this is the amount of time to delay the delete
operation after the archive. This is used to archive the data immediately for security,
and keep the data online for a period of time for reporting, analysis and so on.
When the PDL is not archived, this is the amount of time to delay after the end of
the job or batch before deleting. Configure the delay to be at least five minutes (5m).
A delay of hours (h) or days (d) can be specified.
For Profile Historian applications, configure the Delete Delay equal to or slightly
greater than the profile log time period.
Archive Device
Enter the archive device name.
Section 12 Consolidating Historical Data
Introduction
All Information Management consolidation functionality is limited to 800xA 5.1
to 800xA 5.1 Systems.
This section describes how to consolidate historical process data, alarm/event data,
and production data from remote history servers on a dedicated consolidation node.
This includes data from 800xA 5.1 or later Information Management nodes in
different systems.
The consolidation node collects directly from the History logs on the remote nodes.
For instructions on consolidating historical process data, refer to Consolidating
Historical Process Data on page 356.
For instructions on consolidating alarm/event data stored in message logs, and
production data stored in PDLs, refer to Consolidating Message Logs and PDLs on
page 389.
When the consolidation node is set up, a new Log Configuration aspect is manually
created on the consolidation node. If the naming convention for the numeric log
does not match the one on the source node, any attempt to retrieve data in a report
will fail.
When setting up numeric consolidation on nodes that will also consolidate Batch
PDL data with history associations, the naming convention on the Numeric Log
template on the consolidation node must match the naming convention for the
numeric log template on the source node.
This starts the process of reading the log configuration information from the
specified remote history server. The progress is indicated in the dialog shown
in Figure 246. When the Importer Module successfully loaded message is
displayed, the read-in process is done.
7. Click Next, then continue with the procedure for Assigning Objects to the
Imported Tags on page 359.
This displays a dialog for specifying the location and naming conventions for
the Log Template object, and the Log Configuration aspects, Figure 248.
2. Typically, the default settings can be used. This will create the Log Template in
the History Log Template Library and provide easy-to-recognize names for
the template and Log Configuration aspects. Adjust these settings if necessary.
Click Next when done.
3. Click Assign Objects. This displays the Assign Objects dialog, Figure 250.
This dialog lists all logs that have been imported from the remote history node
and are now in the resulting xml file. The status of each log is indicated by a
color-coded icon in the Access Name column. The key for interpreting these
icons is provided in the lower left corner of the dialog.
4. Use the Assign Objects dialog to make any adjustments to the imported tag
specifications that may be required.
and Log Configuration aspects may be created on the consolidation node that do not
exist on the source node. Use the following procedure to avoid this.
To modify the object and property names to make the History configurations match:
1. Split the Access Name into two fields - field 1 will become the object name,
and field 2 will become the property name. To split the Access Name:
a. Make sure the selected Object String is Access Name.
b. Click the Add button associated with the Split String At field. This
displays the Add String Split Rule dialog, Figure 251.
c. For Split Character, enter the character that splits the object and property
names. For Information Management, the comma (,) character is used.
Leave the other parameters at their default settings: Occurrence = 1st,
Search Direction = Forward.
d. Click OK.
2. Make Field 1 of the split Access Name (text left of the split character) the
Object String, and trim (remove) the split character (in this case, the comma)
from the Object String. To do this (reference Figure 252):
a. In the String for Object Name section, select 1 from the Field pull-down
list. This specifies Field 1 (text left of split character) as the Object String.
b. In the Trim section check the Right check box and enter 1. This specifies
that one character will be trimmed from the right end of the text string.
As changes are made, the Object String will change accordingly. The resulting
Object String is shown in Figure 253.
Figure 253. Object String Modified Showing Results of Object String Adjustment
3. Make Field 2 of the split Access Name (text right of the split character) the
Property String. To do this, in the String for Object Property section, select 2
from the Field pull-down list, Figure 254. This specifies Field 2 (text right of
split character in the Access Name) as the Object Property String.
After any desired adjustments are made, continue with the procedure for Creating
the Object/Tag Associations on page 366.
configuration information from the existing XML file rather than re-read the remote
history server.
To create the associations:
1. For the entire tag list, click Create All. For a partial list, first select the tags,
then click Create Selected.
The result (for creating selected tags) is shown in Figure 256. The status
indicator for tags that have been assigned to new objects will be green.
2. Click Close. This displays the Assign Objects summary again. This time the
summary indicates the result of the assign objects process, Figure 257.
3. Click Next. This displays a dialog for specifying parameters for creating the
new Log Template and Log Configuration aspects.
4. Continue with the procedure for Specifying the Service Group and Other
Miscellaneous Parameters on page 368.
specifying eng130_ as the prefix, and 0...9 as the Counter Type, the following group
names will be used: eng130_0, eng130_1, and so on.
Set the Source Type to Remote Historian.
Click Next when done. This displays an import preview where settings can be
confirmed, Figure 259.
Read the preview. When ready, click Next to start the import.
The progress status is displayed while the log configurations are being imported,
Figure 260. When the Import has completed successfully message appears, click
Finish.
The Log Configuration aspects have now been created for the imported log
configurations.
A History Source object must be present above the Log Configuration aspects in
the structure where they reside, Figure 261. If the History Source object is not
present, refer to the procedure for adding History Source objects in Configuring
Node Assignments for Property Logs on page 181.
The Log Configuration aspect’s Status tab (or any Desktop tool such as DataDirect
or desktop Trends) can now be used to read historical data from the remote log. An
example is shown in Figure 262.
Figure 262. Example, Viewing Historical Data from an Imported Log Configuration
Finish the set-up for historical data consolidation by Collecting Historical Data from
Remote History Servers on page 373.
Figure 263. Example, Looking Up the Storage Rate for the Remote Logs
When configuring the Data Collection parameters, set the storage interval to match
or be a multiple of the remote log’s storage interval, Figure 265.
Modify the Item IDs in the spreadsheet to make the IM data source for the direct
logs reference the remote history log names. The IM Data Source specification has
the following form: $HSobject, attribute-n-oIPaddress.
The IP address is only required when collecting from multiple sites, and some
sites use identical log names. The IP address is not required when all data sources
are unique.
Scroll to the Item ID column. The remote log name is embedded in the
corresponding object’s Item ID, Figure 267.
The portion of the Item ID text string before the start of the .$HS string must be
stripped away and be replaced with DS, Figure 268. This instructs the Bulk Import
tool to use the remainder of the Item ID ($HSobject, attribute-n-oIPaddress) as the
log’s data source.
The result of the find and replace operation is shown in Figure 270.
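This find-and-replace can also be expressed programmatically, which may help when checking a large spreadsheet before running the Bulk Import update. The sketch below only illustrates the string edit described above (the sample Item ID is made up; it is not from a real system):

```python
# Sketch: rewrite an Item ID so the Bulk Import tool treats the remote history
# log name as the IM data source (strip everything before '.$HS', prefix 'DS').
def to_im_data_source(item_id: str) -> str:
    head, sep, tail = item_id.partition(".$HS")
    if not sep:
        raise ValueError("Item ID does not contain a '.$HS' log reference")
    return "DS" + sep + tail          # keeps '$HSobject,attribute-n-oIPaddress'

# Hypothetical example Item ID as exported to the spreadsheet:
example = "eng27:SomePath.$HSTC107,MEASURE-1-o172.16.0.10"
print(to_im_data_source(example))
# -> DS.$HSTC107,MEASURE-1-o172.16.0.10
```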
When consolidating a dual log configuration, edit the ItemID string to specify
BOTH logs in the dual log configuration. The syntax for this is described in
Additional ItemID Changes for Dual Logs on page 378.
Figures: Dual log ItemID syntax examples - syntax = DS.$HSobject,attribute-3-d or DS.$HSobject,attribute-3-d4 for one arrangement of direct trend and consolidation logs; syntax = DS.$HSobject,attribute-2-d4 for the other arrangement.
4. Click on a corner of the cell, and pull the corner down to highlight the Property
Template column for the remaining objects. The template specification will
automatically be entered in the highlighted cells when the mouse button is
released, Figure 275.
Figure 275. Applying the New Log Template Specification to All Objects
5. When finished, run the Bulk Configuration utility to update the log aspects
from the spreadsheet. To do this, choose Bulk Import>Update Log Aspect
from Sheet.
To check the results, go to the location where the Log Configuration aspects have
been instantiated, and use the Status tab of a Log Configuration aspect to read
historical data for the log. These logs will be located under the same root object as
the imported log configuration aspects. An example is shown in Figure 276.
To do this, the TTD log configurations must be present in the Aspect Directory. Refer to the applicable AC 400 Controller documentation for this procedure. This creates a Log Configuration Template and Log Configuration Aspect for each TTD log group. The Log Configuration Aspects will be located under an object in the Control structure as specified during this procedure.
This step does not establish historical data collection and storage on the local
node for the imported logs. The Log Configuration aspects created as a result of
this step merely provide a window for viewing the historical data stored in the
TTD logs on the remote nodes.
Once the TTD log configurations are present in the Aspect Directory, follow the
guidelines in this section to create property logs to collect from the TTD logs. There
are two basic steps as described below. Further details are provided in the referenced
sections:
• Create a new Log Template. This procedure is basically the same as Building
a Simple Property Log on page 188, except that the direct log in the property
log hierarchy must be an IM Collector link type, rather than a basic history
trend log. Also, the Collection Type must be OPC HDA. For details refer to
Creating a New Log Template on page 383.
• Use the Bulk Import tool to instantiate property logs to collect from the TTD
logs. When this is done, specify the Item IDs for all objects to reference the
TTD logs, and apply the new Log Template. Refer to Using the Bulk Import
Tool to Instantiate the Property Logs on page 385.
Modify the Item IDs in the spreadsheet to make the IM data source for the direct
logs reference the TTD log names. The IM Data Source specification has the
following form: DS.object:property,TTDlogname. To do this, do the following for
each TTD log specification:
1. Obtain the object name from the Object column. The object name is the text
string that follows the last delimiting character in the full text string,
Figure 280.
2. Obtain the property name from the Property column, Figure 281.
3. Obtain the TTD log name from the applicable log template which was created
in the Aspect Directory for the TTD log group, Figure 282.
4. Enter the Data Source specification in the Item ID column, Figure 283.
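For illustration only (the object, property, and TTD log names below are hypothetical), a Data Source specification assembled from steps 1 through 3 might look like this:
DS.TANK_LEVEL_1:VALUE,TTDGROUP1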
Replace the current Log Template specification with the new Log Template. To do this, right-click the Property Template column for the first (top) object, and choose Template List from the context menu. Then select the template from the list provided, Figure 284.
Click on a corner of the cell, and pull the corner down to highlight the Property
Template column for the remaining objects. The template specification will
automatically be entered in the highlighted cells when the mouse button is released.
When finished, run the Bulk Configuration utility to update the log aspects from the
spreadsheet: choose Bulk Import>Update Log Aspect from Sheet.
To check the results, go to the location where the Log Configuration aspects have
been instantiated, and use the Status tab of a Log Configuration aspect to read
historical data for the log. An example is shown in Figure 285.
to the HistoryAdmin group. Detailed instructions for adding user accounts are
provided in the section on Managing Users on page 569.
3. On the consolidation node, create the Job Description object and configure the
schedule and start condition(s) through the respective aspects. Refer to Setting
Up the Schedule on page 390.
4. Add an action aspect to the Job Description object. Refer to Setting Up the
Schedule on page 390.
5. Open the action aspect view, and select IM Consolidate Action from the
Action pull-down list. Then use the dialog to specify the source and target
nodes, and the source and target log names. Each action is used to specify logs
for ONE source node. Refer to Using the Plug-in for IM Consolidation on page
392.
6. Repeat Step 4 and Step 5 for each source node from which the consolidation
node will consolidate message and PDL data.
Reuse of Batch ID (TCL) is supported by PDL Consolidation. A warning is
displayed in the log file when processing a remote task entry (TCL batch) that
already exists on the local node with a different start time. The user can decide
what to do about the duplicate. If nothing is done, the local entry will eventually be
removed through the archive process.
NOTE: The $HS prefix and -n-o suffix must be used when specifying the local and remote log names.
c. Select the Message Log Type - OPC, DCS, or Audit Trail.
d. Click OK when finished. This puts the log specification in the list of
message logs to be consolidated, Figure 289.
Section 13 History Database Maintenance
Overview
This section provides instructions for optimizing and maintaining the history
database after configuration has been completed. Utilities for some of these
maintenance functions are available via the Windows task bar.
The procedures covered in this section are:
• Backup and Restore on page 396 describes how to create backup files for the
current History database configuration, restore a history database configuration
from backup files, and synchronize the Aspect Directory contents with the
current Information Management History database configuration.
• Schedule History Backups on page 425 describes how to use the Report Action of the Action Aspect to schedule a History backup.
• Starting and Stopping History on page 424 describes how to stop and start
history processes under PAS supervision. This is required for various database
maintenance and configuration procedures.
• Starting and Stopping Data Collection on page 426 describes various methods
for starting and stopping data collection.
• Viewing Log Runtime Status and Configuration on page 429 describes how to
use the Log List aspect for viewing runtime status for logs.
• Presentation and Status Functions on page 434 describes how to use the
Presentation and Status tabs on the log configuration aspect for formatting
history presentation on Operator Workplace displays, and for viewing log data
directly from the log configuration aspect.
• History Control Aspect on page 440 describes how to read status and network
information for the History Service Provider for a selected node in the Node
Administration structure.
not include archive data which may be backed up by using the Backup
Destination feature as described in Configuring Archive Backup on page 329.
Trend log configurations are backed up by this utility ONLY when they exist in a
property log structure in combination with a History log (as defined by the log
template). Property logs made up entirely of trend logs are NOT backed up.
These log configurations must be backed up via the 800xA system backup.
• Restore a history database from backup files. This procedure recreates the
Information Management History database using the backup files described
above. If the backup was created from an earlier History database version, the
restore utility will convert the restored database to the current version.
• Synchronize the Aspect Directory contents with the current Information
Management History database configuration. This procedure ensures that
the Information Management History database is at the current version, and
that the Aspect Directory matches the current database configuration.
Considerations
When backing up or restoring the History database, make sure the disk is ready and
available on the machine on which the procedure is to occur. Also, these preparatory
steps are required before performing a restore operation:
• All services under PAS supervision must be stopped.
• The Inform IT History Service Provider must be stopped.
• There must be NO applications accessing the Oracle database.
The log file should be checked after each backup and restore operation to make sure
that each backup or restore operation completed successfully.
As an option, choose to back up just the Aspect System Definition file. This provides a file that can be used to recreate the History database configuration in the Aspect Directory.
When backing up the History database, make sure the disk is ready and available on
the node on which the procedure is to occur. The log file should be checked after the
backup operation to make sure that the backup operation completed successfully.
Make sure the system drive is not getting full. Temp space is required to make the
backup. If the log file indicates that the Oracle export failed, use the option to export
to a disk with more space.
The following procedures can be used to back up all information related to Information Management software. Information Management software includes:
• Inform IT History (includes Archive).
• Inform IT Display Services.
• Inform IT Application Scheduler.
• Inform IT Data Direct.
• Inform IT Desktop Trends.
• Inform IT Open Data Access.
• Inform IT Calculations.
Not all components require additional steps to back up local files. An Information Management server has all of the software installed, while other node types have only some of it. For each component that is configured and used on a node, all related files should be backed up periodically. The following procedure provides the steps to perform the backups.
Node Types
The IM software runs on three basic node types.
• Information Management Server.
• Information Management Client, with optional Desktop Tools components.
• Desktop Tools installation.
For a given node type, the backup steps depend on the type of software installed and used on the node. If software is installed but not utilized, there is nothing to back up. The following steps describe how to back up an Information Management 800xA Server node. The steps for backing up the other node types reference the server backup procedure.
8. The HsBAR Output window may open over and hide the progress status window. There is a check box for specifying that the HsBAR window be closed automatically when finished, Figure 292. This is recommended. If the window is not closed automatically, wait for the Close button to become active at the bottom of the output window, then click Close.
For any object in the original database that had more than one Log Configuration
aspect, the contents of those aspects will be merged into one Log Configuration
aspect per object.
When finished with the aforementioned preparatory steps, run the restore utility as
described below:
1. Start the Backup/Restore utility. When the Backup/restore utility is started, the
Create Backup option is selected by default.
2. Click the Restore configuration from backup files option, Figure 294.
3. Click Next. This displays a dialog for setting up the restore operation,
Figure 295.
4. Specify the location of the backup files from which the History database will
be recreated. Enter the path directly in the Path of IM historian backup field, or
use the corresponding Browse button. Figure 291 shows the Browser dialog
being used to browse to C:\Backup\T3.
– As an option specify a new mount point path for file-based logs. Use the
corresponding Browse button and refer to Mount Point on page 422 for
further guidelines.
– Also, specify that a different Oracle tablespace definition file be used.
This may be required when restoring the files to a machine that has a
different data drive layout than the original backup machine. Use the
corresponding Browse button and refer to Moving hsBAR Files to a
Different Data Drive Configuration on page 423 for further guidelines.
– Also, specify additional hsBAR options. These options are described in
Specifying Additional hsBAR Options on page 419.
5. Click Next when finished with this dialog. This opens the progress status
window and the HsBAR Output Window.
If the restore operation fails with Oracle Error Message 1652 - Unable to extend
tmp segment in tablespace tmp - it may be due to a large OPC message log which
exceeds the tmp tablespace capacity during the restore operation.
Use the Database Instance Maintenance wizard to increase the tmp tablespace.
The default size is 300 megabytes. Increase the tablespace in 300-megabyte
increments and retry the restore operation until it runs successfully. Refer to
Maintaining the Oracle Instance on page 135.
6. The HsBAR Output window may open over and hide the progress status window. There is a check box for specifying that the HsBAR window be closed automatically when finished, Figure 296. This is recommended. If the window is not closed automatically, wait for the Continue button to appear at the bottom of the output window, then click Continue when ready.
7. After the HsBAR window is closed, monitor progress in the Progress Status window, Figure 297. Ignore error messages that indicate an error deleting an aspect.
Figure 299. Synchronizing the Aspect Directory with the History Database
2. Click Next to continue. This displays a dialog for mapping controller object
locations, Figure 301. It is NOT necessary to make any entries in this dialog
unless the locations have changed since the backup was made.
3. Enter the mapping specifications if necessary (typically NOT REQUIRED),
then click Next to continue. This displays the Progress Status, Figure 302.
7. When the execution complete message is displayed, Figure 305, click Finish.
also be used to make adjustments. For instance, to merge two restored Service
Group configurations into one Available Service Group.
In this dialog, Service Group definitions being restored or synchronized are called
Imported Service Groups. The actual Service Groups which are available in the
Aspect Directory are called Current or Available Service Groups. The
backup/restore utility automatically maps Imported Service Groups to an Available
Service Group by matching the host name for each Imported Service Group with the
host name for an Available Service Group. (The host name is specified in the
Service Group’s Service Provider configuration.)
If Service Group configurations in the Aspect Directory have changed since the
creation of the History database configuration now being restored or synchronized,
some Imported Service Groups may not be mapped to any Available Service Group.
This condition is illustrated in Figure 306 where the Current Service Group column
for Imported Service Group eng38 is blank. If this occurs, use the dialog to manually map the Imported Service Group to an Available Service Group.
To do this:
1. From the Imported Service Groups list (top left pane), select the Imported
Service Group whose mapping is to be changed.
When this is done, the host names for the imported and currently mapped
Service Providers for the selected Imported Service Group are indicated in the
respective Service Provider lists (bottom right panes).
2. From the Available Service Groups list (top right pane), select the current
Service Group to be mapped to the selected imported Service Group. When
this is done, the host name for the Service Provider for the selected Available
Service Group is indicated in the Current Service Providers list (bottom left
pane). This helps when matching the host name for the Imported Service
Group with the host name for the Available Service Group.
3. When satisfied that the appropriate Available Service Group was selected, click
the left (<<) arrow button. This puts the name of the selected Available Service
Group in the Current Group column for the Imported Service Group.
4. Check the host names for the Imported and Current mapped Service Providers
in the respective Service Provider lists (bottom right panes) to verify that the
Imported and Available Service Groups were mapped as intended.
5. Repeat this procedure to map additional Imported Service Groups to the same
Available Service Group if needed.
To change the Current Service Group for an Imported Service Group, first unmap the Current Service Group. To do this, select the Imported Service Group, then click the right (>>) arrow button. This removes the name of the currently mapped Service Group. At this point the Imported Service Group is not mapped to any Available Service Group. Then repeat the above procedure, starting at step 2, to map it to a different Available Service Group.
-c <compression level> is used to specify the compression ratio to use for backing up the History database. Refer to Compression Level / Ratio on page 421.
-f <log file name> is used to specify the file to use for storing program output during a backup or restore procedure. Refer to Log File on page 422.
-o indicates flat files are to be excluded when backing up the History database on the hard drive. Refer to Excluding Flat Files on Backup on page 424.
-a is used to specify an alternate location for the temporary and Oracle DB export files used during History database backup. The default directory for these files is C:\HsData\History\
-k can be used when restoring an Oracle-only database backup to skip the creation of empty datafiles. If the backup was performed with the datafiles, -k is ignored.
-b instructs hsBAR to extract the Oracle data info file and put it in the specified directory. This option can only be used on existing database backup files, and restore mode (-m r) must be specified.
-e is used on restore only. This option tells hsBAR to ignore the Oracle data info file found in the backup and use the specified file instead. This includes table information stored in the stats file (as in History versions prior to 2.5). The stats information from the specified Oracle data info file will be used instead.
Running hsBAR
As an option, use hsBAR from a command line. This backup procedure stores the files that hold file-based logs, and the Oracle tables that hold configuration data and any logs that are Oracle-based (message logs and asynchronous property logs). The hsBAR executable resides in %ABB_ROOT%History\bin and is run from the command line. (The default path for %ABB_ROOT% is c:\Program Files\ABB Industrial IT\.)
Run hsBAR using the following command:
“%HS_HOME%\bin\hsBAR” <options>
The syntax for hsBAR is as follows:
“%HS_HOME%\bin\hsBAR” -m b|r -h source/destination
[ -c compression level ] [ -f log file name ]
[ -p [ -n new mount point directory ] ] [ -o ]
Due to a limitation in the Oracle import facility, paths specified for hsBAR cannot contain spaces in the directory or file names. This includes the environment variable %HS_DATA%.
Operation Mode
In order for the hsBAR program to execute, the operation mode must be specified by entering the -m option, followed by either:
b (back up the current History database) or
r (restore a previously backed up History database)
Storage
In addition to specifying the mode (through the -m option), in order for the hsBAR
program to execute, a storage location must also be specified. On an hsBAR backup
operation, the -h option must be used to specify the destination to which the
History database is to be backed up. On an hsBAR restore operation, the -h option
must be used to specify the source from which the History database is to be restored.
The drive, directory path, and file name must all be specified to use as the backup
destination or restore source location, as no default is provided:
For backup “%HS_HOME%\bin\hsBAR” -m b -h C:\Temp\history
For Restore “%HS_HOME%\bin\hsBAR” -m r -h C:\Temp\history
In the above example, the backup operation will produce in the C:\Temp directory
one or more zipped archives (depending on the number of directories containing
History database-related files) of the form history-drive.zip (such as history-C.zip or
history-D.zip, depending on the drive containing the database files). The restore
operation in the above example will search in the C:\Temp directory for zipped
archives matching the format history-drive.zip, and will then attempt to restore the
files contained within the found archives.
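As a hedged example (the paths are hypothetical and the compression level shown is only illustrative; refer to Compression Level / Ratio on page 421 for valid values), several options can be combined in a single backup command:
"%HS_HOME%\bin\hsBAR" -m b -h D:\Backups\history -c 5 -f D:\Backups\historyBackup.log -o
This backs up the History database to archives named history-drive.zip in D:\Backups, writes progress messages to the specified log file, and excludes flat files from the backup.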
Log File
All progress messages and errors related to hsBAR are printed to a log file. The
default log file for a backup operation is %HS_LOG%historyBackup.log,
while the default log file for a restore operation is
%HS_LOG%historyRestore.log. The hsBAR program, however, can write
messages to an alternate log file specified by entering the -f option, followed by
the full directory path and log file name:
For backup “%HS_HOME%\bin\hsBAR” -m b -h C:\Temp\history -f
C:\Temp\backup.log
For restore “%HS_HOME%\bin\hsBAR” -m r -h C:\Temp\history -f
C:\Temp\restore.log
Mount Point
The mount point specifies the location to which the History database is to be
restored on an hsBAR restore operation. The -p option is used to restore flat files
(file-based logs) to the mount point itself, and not necessarily to a location that is
relative to the root. If this option is omitted, all flat files will be restored relative to
the root, and not relative to the mount point itself. The -n option may be used in
conjunction with the -p option. This is used to specify a new mount point. If the -p
option is used alone (without the -n option), the default mount point for the History
database C:\HsData\History (%HS_DATA%) will be used. If the -p and -n
options are used together, the old History database will be deleted from its original
location and restored to the new specified mount point. This option is especially
useful in that it can take a History database that was backed up on several different
drives and restore it all to a centralized location on one drive:
“%HS_HOME%\bin\hsBAR” -m r -h C:\Temp\history -p -n C:\Temp
3. Run hsBAR with the following options to use the modified datafiles_config
file:
2. To stop all processes under PAS supervision, click Stop All. PAS is still active
when all processes are stopped. Click Start All to start all processes. The
Restart All button stops and then restarts all processes.
3. Click Close to exit PAS when finished.
determines the earliest allowable time that the log can be activated. Start time also
determines the hour, minute, and second when the first sample is collected.
To activate a property log as a unit:
1. Go to the structure where the logged object resides and click the Log
Configuration aspect.
2. Use the Enabled check box to activate (checked) or deactivate (unchecked) all
logs in the property log, Figure 309.
To activate an individual log within a property log, select the log in the hierarchy,
click the IM Definition tab, and then click Activate under the IM Historian Log
State control, Figure 310.
To activate or deactivate property logs on a log set basis, refer to Activating Logs
with Log Sets on page 428.
Figure 311. Accessing the Configuration View for the Log Set Aspect
Figure 312. Accessing the Configuration View for the Log List Aspect
Click Update Log Lists to display the log list (Figure 313). This list provides both
system-level and log-level information. The System Information section indicates
the total number of logs configured for the applicable Service Group. It also
indicates the number of logs whose state is Active, Inactive, or Pending.
The log-level information is displayed as columns in the log list. Select columns to
show or hide. Also, specify a filter to show a specific class of the logs. The list can
be filtered based on log state, log type, log name, archive group, and log set. These
procedures are described in Filtering the Log List and Visible Columns on page 431.
The context menu is used to activate or deactivate logs in the list. This is described
in Activating/Deactivating Logs on page 433.
Double-clicking a specific Log Information row opens the Log Configuration aspect containing the log as an overlap window.
Showing/Hiding Columns
To show/hide specific columns simply check to show or uncheck to hide the
corresponding column.
Field Description
Log State: Show all logs regardless of their state, or show just Active, Inactive, or Pending logs.
Log Type: Select any combination of these log types: Numeric (property), Message, Report. Profile logs are applicable if Profiles is installed.
Log Name: Include logs whose name fits the specified text string. * is a wild card text string. By itself, * indicates all logs.
Archive Group: Include logs assigned to an archive group whose name fits the text string specified in this field. * is a wild card text string. By itself, * indicates all archive groups.
Log Set: Include logs assigned to a log set whose name fits the text string specified in this field. * is a wild card text string. By itself, * indicates all log sets.
Filter Example
The following example shows how to apply a filter. In this case the filter is based on log name. The filter reduces the list to only those logs whose name includes the text string *30s*, Figure 314. The filter result is shown in Figure 315.
Activating/Deactivating Logs
Activate and deactivate individual logs, a subset of displayed logs, or the entire log
list. To do this select the logs to be activated or deactivated, and then right-click and
choose the applicable menu item from the context menu, Figure 316.
After executing an Activate All or Deactivate All from the Inform IT History Log
List, the State does not get updated. Use the Update Log Lists button to refresh
the list.
Presentation
This tab, Figure 317, is used to configure presentation attributes for the 800xA
operator trend display.
The default settings for these attributes are set in one of the following ways:
• Object Type (i.e. in the Property Aspect).
• Object Instance (i.e. in the Property Aspect).
• Log (in the Log Configuration Aspect).
• Trend Display.
These are listed in order from lowest to highest precedence.
For example, override a value set in the Object Type by writing a value in the Object
Instance. A value set in the Object Type, Object Instance, or the Log can be
overridden in the Trend Display.
To override a presentation attribute the check box for the attribute must be marked.
If the check box is unmarked the default value is displayed. The default value can be
a property name or a value, depending on what is specified in the Control
Connection Aspect.
Engineering Units are inherited from the source but can be overridden. Normal Maximum and Normal Minimum are scaling factors used to scale the Y-axis in the Trend Display. These values are retrieved from the source but can be overridden.
The Number of Decimals value is retrieved from the source but can be overridden.
The Display Mode can be either Interpolated or Stepped. Interpolated is appropriate for real (analog) values; Stepped is appropriate for binary values.
Click Apply to put the changes into effect.
Status
This tab is used to retrieve and view data for a selected log, Figure 318. As an
option, use the Display and Settings buttons to set viewing options. Refer to:
• Data Retrieval Settings on page 436.
• Display Options on page 437.
Click Read to retrieve the log data. The arrow buttons go to the next or previous
page. Page size is configurable via the Data Retrieval Settings. The default is 1000
points per page.
Set the number of points per page in the Page Size area. Default is 1000 and
maximum is 10,000.
Set the time range for retrieval of data in the Time Frame area.
• Latest retrieves one page worth of the most recent entries.
• Oldest retrieves one page worth of the oldest entries.
• At time is used to specify the time interval for which entries will be retrieved.
The data is retrieved from the time before the specified date and time.
Display Options
The Display button opens the Display Options dialog, which is used to change the presentation of data values and time stamps, Figure 320.
Values are displayed as text by default. Unchecking the check box in the Quality area displays the values as hexadecimal.
Timestamps are displayed in local time by default. Unchecking the first check box in the Time & Date area changes the timestamps to UTC time.
Timestamps are displayed in millisecond resolution by default. Unchecking the second check box in the Time & Date area changes the timestamp resolution to one second.
Status Light
The Service Status row shows status information for the configured Service Group.
The name of the configured group is presented in the row. If no history sources have been defined (refer to Configuring Node Assignments for Property Logs on page 181), the text ‘Not Specified’ is displayed. Status for the configured Service Group may be:
• OK: All providers in the service group are up and running.
• OK/ERROR: At least one service provider is up and running.
• ERROR: All providers in the service group are down.
Figure 322. Accessing the Configuration View for the History Admin Aspect
Use the View selection to get information on: Hard Drive Usage, Database Usage,
and History Logs for a particular node. The configuration view allows selection of
History nodes from an available list. Additionally, the tree view has selections for
OMF Network Status, PAS Control, and User Tag Management status.
PAS Control
Refer to Starting and Stopping History on page 424.
Synchronization
For object and log name synchronization, use the Resynchronize Names button. To
force synchronization of every object on the selected node with the Aspect Server,
use the Force Synchronization button. The Check Names button produces a report
of the object names, log names, and object locations in the aspect directory that are
Field Description
History Managers: Indicates the node name for the local History Manager and any available remote History Managers.
Status: A History Manager's status is set by the system:
• ACTIVE - History Manager is ready to receive data.
• INACTIVE - History is on the node, but is not running.
Version: Indicates the version of the History software for the corresponding History Manager. When working with a remote History Manager via the local History Manager, both must use the same version of History software.
Choose option 1 - Create free space report now. This creates the Free Space
Report. An example is shown in Figure 327.
Also, view a free space report on the Tablespace/Datafiles tab of the Instance
Maintenance wizard. Refer to Maintaining the Oracle Instance on page 135.
• The OPC Message log uses the INFORM_HS_RUNTIME and HS_INDEX tablespaces.
• Numeric logs use the HS_CONFIG and HS_ICONFIG tablespaces.
• The PDL application uses the HS_PDL and HS_IPDL tablespaces.
• Consolidation of OPC and PDL messages, and of PDL data, uses the INFORM_HS_RUNTIME and HS_INDEX tablespaces. If they run out of space, OPC and PDL message consolidation will fail. If the HS_PDL and HS_IPDL (index) tablespaces run out of space, PDL data consolidation will fail.
This function may take a couple hours to complete for a large database, and
generates a high CPU load.
The file is given a .txt extension for viewing in a text editor. The entry table provides
statistics to verify whether or not numeric logs are functioning properly. It can also
be used to look up the Log ID for a specific log name. Log IDs are used in SQL
queries against historical tables.
Rows marked with an asterisk (*) indicate logs with errors. A summary is provided
at the end of the report. This information is useful for level-4 (development)
support.
This menu shows the current settings for log ID to process and lines per page. The
current settings can be changed before running the report.
If any of the settings need to be changed, choose the corresponding option from this
menu and follow the dialog. When finished with the dialog, the Entry Tables Report
menu is shown and the current settings will be changed according to the dialog.
When the settings are correct, at the menu prompt, choose the Create entry table
report now option. This generates the report in the Command Prompt window,
Figure 329. A short summary is provided at the end of the report, Figure 330.
Adjust the height and width of the Command Prompt window to display the
report properly.
In most cases, since the amount of process variation will not be at either extreme,
the effective log period will fall somewhere in between the configured (minimum)
and maximum value. The effective log period calculated according to the above
formula is stored in the EST_LOG_TIME_PER attribute. This attribute is updated
periodically according to a schedule which repeats as often as every five days. If
necessary, use the History Utility to manually update this attribute.
EST_LOG_TIME_PER is used by the History Manager when it needs to determine
which log to retrieve data from (seamless retrieval). For instance, consider an
application with a composite log where the primary log stores data at 30 second
intervals over a one-day time period, and the secondary log stores data at one-hour
intervals over a one-week time period. The data compaction ratio is 5:1 which
extends the effective log period of the primary log to five days. If a request is made
to retrieve data that is three days old, the History Manager will know to get the data
from the primary log rather than the secondary log, even though the configured log
period of the primary log is only one day.
When EST_LOG_TIME_PER is updated, both internal History memory and the History database are updated with the new values of EST_LOG_TIME_PER and COMPACTION_RATIO. This gives History Managers access to the information, and the information remains available after the History node is restarted.
To update deadband ratio and estimated log time period, choose Update Deadband
Ratio Online from the Main Menu. This will generate a prompt that asks whether
or not to continue. At the prompt, enter y (Yes) to continue, or n (No) to cancel and
exit out of the hsDBMaint menu.
For a large database, this function may take several hours to complete.
If these settings are acceptable, simply select option 4 - Extend tablespace with
current settings. If one or more of these settings need to be changed, then follow
the applicable procedure below.
Specify the Tablespace to Extend
The default tablespace is INFORM_HS_RUNTIME. To extend a different
tablespace, select option 1 - Change tablespace to extend. This displays the Table
Spaces available for extension menu, Figure 332.
Enter the number corresponding to the tablespace to be extended (for example 3 for
HS_REPORTS). The Tablespace Extension menu is returned with the new
tablespace indicated for option 1.
Specify the Amount of Tablespace
The default amount is 50 Mb. To change this, select option 2 - Change size to
extend by. This displays a prompt to enter the size to extend table space by in Mb.
Enter the size as shown in Figure 333.
The Tablespace Extension menu is returned with the new value indicated for option
2.
Change the Directory
The default directory is c:\HsData\History\oracle. The Oracle Instance Wizard requires the default directories and will not work if the directories are changed.
To specify a different directory for the extended tablespace data, select option 3 -
Change directory to put data file. This displays a prompt to enter the new data file
directory. Enter the directory. The Tablespace Extension menu is displayed with the
new directory indicated for option 3.
Apply the Changes
At this point, the criteria for extending tablespace should be correct, and the
tablespace can actually be extended.
To extend tablespace with the current settings for size, directory, and tablespace,
select option 4 - Extend tablespace with current settings.
This extends the specified tablespace by the specified amount, and then returns the
Tablespace Extension menu.
Repeat above procedures to extend another tablespace. When finished, return to the
Main Menu by selecting option 0-Return to main menu.
Maintenance Overview on page 455. To get started, use one of the following
methods:
• Flat File Maintenance via the Instance Maintenance Wizard on page 137.
• Using the Directory Maintenance Functions from the Menu on page 456.
This list shows one default directory: c:\HsData\History\HSFF. File-based logs are always stored in subdirectories under a directory named HSFF. This is why the HSFF directory has a file quota of 0 (each log constitutes one file). These subdirectories are automatically created under the HSFF directory as needed.
The subdirectory names are three-digit sequential numbers starting with 000. For
instance, c:\HsData\History\HSFF\000. Each subdirectory stores up to
1000 files. When the number of files exceeds 1000, another subdirectory (\001) is
created to store the excess. Subdirectories are added as required until the quota on
that disk is reached. New logs create files on the next disk (if space has been
allocated). Disks are used in the order they are added.
In the example above, the default location has three subdirectories. A total of 2,437
files are stored in these directories. Directories \000 and \001 hold 1000 files each.
Directory \003 holds 437 files.
This menu is used to add a directory, change the quota for a directory, delete a
directory, or list all directories. To return to the main menu, choose option 0 -
Return to Main Menu.
For guidelines on using one of these functions, refer to:
• Add a Directory on page 456.
• Update Directory List on page 456.
• Delete From Directory List on page 457.
• List the Directory on page 457.
Add a Directory
To add a directory, choose option 1 - Add to Directory List. This initiates a dialog
for configuring a new directory. Specify a quota and path for the new directory.
Follow the guidelines provided by the dialog. Make sure the disk where the
directory is being added has enough space for the directory. A directory with the
name HSFF will be added under the path entered. For example, if the path is
\disk3, a new directory will be created as follows: \disk3\HSFF
This menu shows the current settings for logs to process, and attributes to reset. The
current settings can be changed before executing the reset operation. To change any
of the settings, choose the corresponding option from this menu and follow the
dialog. For log name to process, choose:
• All logs.
• All logs in a specific composite log hierarchy.
• Just one specific log.
For attributes to reset, choose:
• Both attrsUpdated and userEntered bits.
• Just attrsUpdated.
• Just userEntered.
When the settings are correct, choose the Reset specified log(s) with current
settings now option.
To return to the database maintenance menu, choose the Return to Main Menu
option.
2. Specify the log to pass attributes to. The default is for all logs. If this is
acceptable, skip this step and go directly to step 3. To specify a specific log,
choose option 1 - Change log ID to process. This displays the prompt: Do
you want to do all composite logs? [yn]
Stagger
Stagger is typically applied to a number of logs with the same alignment, storage
interval, and sample interval, and where sample and storage interval are equal.
Stagger assigns a blocking rate greater than the storage interval. The first Activation
Time is modified to distribute the load. For example, consider an application with
120 logs with the following configuration:
Phasing
Phasing is typically applied to a number of logs with the same alignment, storage
interval, and sample interval, and the storage interval is significantly greater than the
sample interval. For example, consider an application with 120 logs sampled at a 1-
minute rate and an average calculated on a daily basis.
In this case, the 150s blocking rate would not prevent a CPU load spike at 00:00:00
(midnight). Phasing allows a set of logs to have slightly different blocking rates to
prevent all logs from processing at the same time. For example, for a blocking rate
of 30 minutes, phasing can be implemented by sub-dividing the 120 logs into groups
and assigning different blocking rates to each group as follows:
40 logs w/ Blocking Rate = 29m
40 logs w/ Blocking Rate = 30m
40 logs w/ Blocking Rate = 31m
In this case a spike will only occur once every 26,970 minutes (29 x 30 x 31), roughly every 18.7 days.
The summary categorizes the logs by sample rate and storage rate. Each line of this
summary provides the following information for a category of logs:
Total: Total number of logs in this category. This includes both cyclic subscriptions to OCS objects and periodic retrieval of OPC/HDA from controllers.
Prim: Number of logs in this category from cyclic subscriptions to OCS objects.
OPC/HDA: Number of logs in this category from periodic retrieval of OPC/HDA from controllers (refer to Figure 338).
Time: First Activation time for logs in this category.
Sample: Sample rate for logs in this category (in seconds).
Storage: Storage rate for logs in this category (in seconds).
Blocking Range: Shortest blocking rate for any logs in this category (in seconds) / longest blocking rate for any logs in this category (in seconds).
AvgRate: Calculated average number of requests per minute for this category, based on the number of logs and the sample, storage, and blocking rates. This equates to the average number of disk access transactions per minute for the system.
The prompt following the summary is used to either exit this function (by pressing
n), or continue with the Stagger function (by pressing y).
Other Considerations
It is generally accepted that the Information Management requests per minute
should be kept in the 500 to 1,000 requests per minute range. If the requests per
minute are over 1,000, the performance of the Connectivity Servers can suffer.
Because most 800xA sources are exception driven, a one-second log typically does
not have 60 values per minute. If the rate is over 1,000 per minute, the custom
selection should be used to increase the blocking rates, decreasing the average
OPC/HDA request rate to the Connectivity Servers.
Enter a new blocking rate (without units) at the prompt. The range is 0 to 4095. For
example, enter 12.
Enter value( 0-4095 ): 12
When the prompt for units is displayed, enter the units by entering the appropriate
integer code: 3 = seconds, 4 = minutes, 5 = hours, 6 = days, 7 = weeks. In the
example below, the integer code for minutes (4) is selected.
Enter unit(3 for omfSECONDS etc.):4
When the prompt for stagger is displayed, Figure 341, enter an integer value within
the specified range. The range is based on the blocking rate specified.
The stagger value specifies the number of divisions within the specified blocking
rate that will be used to distribute the load. After entering the stagger value, a
summary of the changes made is displayed, Figure 342.
The prompt following this summary can be used to either accept these values (by pressing n), or change these values (by pressing y). In this case, the specified blocking rate is 12 minutes. The selected stagger divides each blocking interval into 24 30-second time slots, so at every 30-second time slot a different subset of the logs in this category is processed, distributing the load across the blocking interval.
This procedure must be repeated for each category. When completed for each
category, a new summary will be displayed with a prompt to choose whether or not
the changes made should be applied, Figure 343.
Press y to apply the new stagger values, or n to ignore the changes made. The
Information Management processes (PAS) must be restarted in order for the stagger
changes to take effect.
To access this function, from the hsDBMaint main menu, choose the Create/Drop
User Indexes option. This displays the Table Index Maintenance menu, Figure 345.
Dropping an Index
To drop an index:
1. From the Table Index Maintenance menu, choose Action to perform (1), and
then enter d to select the drop action.
2. Choose Current index name (2), then enter the name of the index to drop.
3. Choose Drop the index (7) to drop the selected index.
Creating an Index
To create an index:
1. From the Table Index Maintenance menu, choose Action to perform (1), and
then enter c to select the create action.
2. Choose Current index name (2), and then enter the name for the new index.
3. Choose Name of table index belongs to (3), then enter the name of the table.
4. Choose Name of column index will be based on (4), then enter the column
names. Separate each name with a comma, and DO NOT enter any white space
between names. Press <Enter> to display a list of column names.
5. Choose Name of tablespace to create index in (5), then enter the tablespace
name. Press <Enter> to display a list of tablespace names.
6. Choose Size of Initial and Next extent(s) (6), then enter the size in kilobytes.
7. Choose Create the index... (7) to create the new index based on the entries
made for steps 3-6.
Creating indexes requires additional disk space. Be sure to extend the PDL
tablespace or PDL Index tablespace (if it exists) before creating a new index.
Also, indexes make retrieval faster, but slow down storage and deletion
operations. This should be considered when adding indexes to tables.
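For reference, the entries made through this menu correspond roughly to a standard Oracle CREATE INDEX statement. The index, table, and column names below are hypothetical, and the exact statement built by the utility may differ:
CREATE INDEX my_pdl_index ON my_pdl_table (batch_id, start_time)
TABLESPACE HS_PDL
STORAGE (INITIAL 512K NEXT 512K);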
-a: Auto adjust table spaces and flat files to fit the History configuration. Must
be used in conjunction with -c or -m.
This menu shows the current settings for the type of database to create/drop, and the operation to perform (create or drop). The current settings can be changed before executing the create or drop operation.
To change any of the settings, choose the corresponding option from this menu and
follow the dialog. For type of database choose History or PDL.
For operation to perform, choose Create or Drop.
Creating History will automatically create PDL if the option is installed.
When finished with the dialog, the Create/Drop Product Database menu will return
and the current settings will be changed according to the dialog. When the settings
are correct, choose the Create/drop product database with current settings now
option. When the operation is finished, the Main Menu will be displayed again.
To return to the database maintenance main menu, choose the Return to Main
Menu option (0).
Choose option 2 - Clean History database now. This cleans up the database, and
then returns the Clean History Database menu. To return to the Main Menu, choose
option 0 - Return to main menu.
To restore or initialize History logs, from the hsDBMaint main menu, choose
Restore/Initialize History Logs.
After specifying whether or not to purge, a prompt for a time to compare data time
stamps with is displayed, Figure 349. The default is the current time.
When the function is finished, a table is displayed that indicates the number of logs
checked, and the number of logs found to have future data, Figure 350. When
History detects a time change of greater than 75 minutes (this is configurable via the
environment variable HS_MAX_TIME_CHANGE), History will log an operator
message stating that fact.
CPU
CPU performance is related to capacity for History collection and storage functions.
This can be adjusted by configuring sample rates, deadbands, and blocking rates.
Disk Space
Disk space resources are used by the History filesets that are loaded on disk when
History is installed, by Oracle tablespace for logs and archive database entries, and
by file-based logs. Disk space allocation for Oracle and file-based logs is calculated
and set by the Database Instance Configuration wizard based on the selections made
in the wizard at post-installation or later if required, refer to Maintaining the Oracle
Instance on page 135.
• The disk space required for the History filesets is fixed at 30 meg.
• Oracle tablespace is used by message logs, report logs, PDLs, asynchronous
property logs, synchronous property logs with Oracle storage type, log
configuration data, and archive database entries. Extend Oracle tablespaces for
runtime based on the per/entry requirements in Considerations for Oracle or
File-based Storage on page 180.
• File space for file-based property logs is automatically allocated. Additional
disk space can be allocated. Memory requirements per file are described in
Considerations for Oracle or File-based Storage on page 180.
The following procedure can be used to adjust the Oracle shared memory. It is recommended to modify the value and then reboot the server; the new values are used after the reboot.
Here is an example that changes the value to 600 Meg.
The procedure to type in the command line is as follows (commands to be typed are shown in bold and the responses are shown in normal text):
C:\Users\800xaInstall> sqlplus /
SQL*Plus: Release 11.1.0.7.0 - Production on Tue Jun 8
16:41:58 2010
Copyright (c) 1982, 2008, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.1.0.7.0 - Production
SQL> show parameter memory
NAME TYPE VALUE
------------------------------------------------------
hi_shared_memory_address integer 0
memory_max_target big integer 532M
memory_target big integer 532M
shared_memory_address integer 0
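The commands that actually change the parameter values are not reproduced above. A minimal sketch of the remaining steps, assuming the standard Oracle ALTER SYSTEM syntax is used to update the server parameter file (600M matches the example above; substitute the value required for the installation):
SQL> alter system set memory_max_target=600M scope=spfile;
System altered.
SQL> alter system set memory_target=600M scope=spfile;
System altered.
SQL> exit
After the server is rebooted, show parameter memory should report the new values.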
The memory in use on a given server should always be less than the memory installed. If a system has 700 MB allocated for Oracle shared memory, has 3 GB of memory installed, and the current memory usage is 2.5 GB, it would be safe to add 500 MB or less to the Oracle shared memory. If the value is increased and the memory in use then stays consistently at or above the memory installed, reduce the value so that the memory in use remains equal to or less than the memory installed.
The Process Administration Service (PAS) utility can also be opened by typing
pasgui in the Run dialog (Start > Run) and clicking OK.
Determining Resource Usage for Your Application
longest batch time to guarantee all the property data can be archived with
the PDL. If all functionality for history data is clearly defined, a database
satisfying 95% of the planned functionality can be created on the first try.
In addition, if care is taken to only define what is needed (for example, 1-week logs instead of 1-year logs), overall system performance will be much better.
– Are hierarchies needed?
• Will DUAL history be used? If YES, consider the following:
– DUAL history allows history configurations to be distributed on multiple
history nodes. A composite log with a primary log on two different history
nodes is a DUAL history log. With a DUAL history log, each primary log
collects data from the same data source independently. This means the
data and timestamps will be similar but not equal for the two primary logs.
– DUAL history allows clients of history (OS, REPORTS) to get data for
logs when one of the two nodes is down.
– DUAL history creates twice the load on controllers and the DCN.
Having two nodes configured with DUAL history logs makes loading an issue.
With DUAL history, both nodes use the same amount of RTA/OMF resources
to collect data. However, the retrieval of data can be directed to one node more
than the second node.
In addition to the seamless algorithm applied for retrieval requests, the
following criteria have been added to better handle selection of a log for a
request by access name:
– Uptime of node where log exists.
– Local is chosen over remote (local should always be faster than remote).
– Sequence number of the log (all other conditions being equal, the log with the lowest sequence number will be used; the sequence number is the ‘-1-o’ or ‘-2-o’ attached to the generated log name).
These additional sorting parameters will allow configurations to:
– Evenly distribute retrieval load on two History nodes (on average) by
creating half the logs with the first log created on Node A and the other
half with the first log on Node B.
– Favor one History node over the other by creating all the logs with the first
log on NodeA. (One history node may be faster and have more memory
than the other.)
Environment Variables
Environment variables are created when installing Information Management
software, and are used by Information Management to specify locations for data,
configuration, and executable files. The location of these files may vary from one
installation to the next. The environment variables provide a uniform and consistent
method for specifying the location of these files for all Information Management
installations.
Most environment variables are used internally by Information Management
applications during installation and start-up. Some environment variables can be
used while performing system administration functions, for example to run the
hsBAR.exe file when backing up History data (Running hsBAR on page 420).
The environment variables that are most likely to be used are described in Table 42.
These variables are generally used in commands entered via the Windows
Command Prompt. The variables may also be used in Windows dialogs that require
a path specification (for example, Open File dialogs).
Table 42. Environment Variables
Enter the set command at the prompt to list all environment variables.
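For example (the HS prefix shown in the filtered form of the command is only an assumption based on the variable names mentioned in this section; substitute the prefix or name of the variable of interest):
C:\> set
Lists every environment variable and its value.
C:\> set HS
Lists only the variables whose names begin with HS.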
Linking Information Management History with the 800xA System Trend Function
These components are automatically added to the Basic Platform Service Group
when the Information Management Process Administration Supervision (PAS) is
started during post-installation.
Confirm that these components have been added as follows (reference Figure 352):
1. Go to the Service structure in the Plant Explorer workplace.
2. Select the Basic History Service Group for the Information Management node.
3. Click the Service Group Definition aspect.
4. Go to the Special Configuration tab.
5. Check for the presence of IM History Log, IM Archive Link, and IM Importer
link in the Supported Links (right) list.
6. If one or more of these required items are not in the Supported Links list, select
the item from the Available Links list and click the right arrow button. This
puts the item in the supported links list.
7. Click Apply.
Figure 352. Checking IM History Links for Basic History Service Group
Section 14 Open Data Access
Overview
This section provides instructions for setting up real-time and historical data access
for third-party applications via the Open Data Access (ODA) Server. This server
supports client applications that use an ODBC data source, for example Crystal Reports or Microsoft Excel (without the DataDirect add-in).
Virtual database tables in the ODA Server map 800xA system data types to ODBC data types. When the client application submits an SQL query against a virtual table, the ODA Server parses the query and returns the requested data to the client.
There is one predefined table named numericlog to support access to historical
process data.
For real-time data access, configure custom tables to expose the aspect object properties that need to be accessed. These tables are then combined to form one or more virtual real-time databases. The custom-built real-time databases support read and write access, and can be used to impose restrictions on data access for certain users.
One predefined table named generic_da is provided for real-time data access. This
is a read-only table that exposes all properties for all real-time objects.
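As an illustration only, a client connected to the ODA data source might submit queries such as the following. The statements use only the predefined table names; production queries normally add WHERE clauses on the tables' columns, which are described in the data access reference documentation and are not listed here.
-- Current values of all exposed object properties (predefined, read-only)
SELECT * FROM generic_da
-- Historical process data (predefined); real queries normally restrict the
-- log and time range through a WHERE clause on the numericlog columns
SELECT * FROM numericlog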
Client applications which access data via ODA may run locally on the Information
Management server where the ODA Server is installed, or on a remote computer
client. Remote computer clients require the Information Management Client
Toolkit. To install this software, refer to Industrial IT 800xA Information
Management Installation.
The architecture for Open Data Access is illustrated in Figure 354, and described in
ODA Architecture on page 487. When ready to begin configuring ODA access, refer
to What to Do on page 488 for guidelines.
Figure 354. Historical and Real-time Data Access Via ODA Server
ODA Architecture
Client applications that use Open Data Access must connect to a specified ODA
database. This is a virtual database which merges the predefined numericlog and
generic_da tables with one user-configured real-time database. The real-time
database is configured to have one or more custom table definitions which expose
selected object properties.
For example, Figure 355 shows two sets of user configured table definitions
(motor1, pump1, valve1; and motor2, pump2, valve2). These table definitions are
combined to form two real-time database definitions: DatabaseX and DatabaseZ.
These two database definitions are then used to create two ODA databases:
Database1 and Database2, each of which includes the two predefined tables
(numericlog and generic_da), and one custom real-time database definition. The
client application may connect to one ODA database at a time.
The contents and operating parameters for the ODA database are specified in a
configuration file that was set up using the ODBC Data Source Administrator. This
file is used by the ODBC data source. This configuration file specifies:
• Which user-configured real-time database to use for real-time data access.
Clients connect to one database at a time.
• Which OPC HDA server to use for historical data access: the 800xA OPC HDA
server, or the History Server (IM) OPC HDA server.
The 800xA OPC HDA server is the default, and is recommended because it
supports seamless access to both operator trend logs and history logs. This
server also supports the ability to access log attributes.
The IM OPC HDA server can only access history logs. This server does not
support access to log attributes. It is provided primarily to support earlier data
access applications configured to use this server.
• For remote client connections, this file also specifies the server’s IP address.
Default Set-up
The default connection is to an ODA database named DATABASE1. Without
requiring any configuration, this database supports read access to all system objects
via the generic_da table, and read/write access to history data via the numericlog
table and 800xA OPC HDA server.
The default set up can be changed to use the IM OPC HDA server, and/or specify a
different real-time database table which includes user-configured ODA table
definitions. Further, additional ODA databases can be created where each one
specifies a different real-time database. This is used to connect the client application
to a different ODA database, depending on the particular data access requirements.
What to Do
ODA can be used with the default set up (DATABASE1). This supports access via
the predefined numericlog and generic_da tables. To use the functionality supported
by the custom-built real-time database tables (read/write access, and restricting access on a per-user basis), use the following procedures to configure ODA to meet the data access requirements:
• Configure the virtual database tables for real-time data access. Then combine
these tables to form one or more real-time databases. This is described in
Configuring Real-time Data Access Via the ODA Server on page 489.
Access to historical process data stored in property logs is supported by a
predefined mapping table that does not require configuration by the end-user.
• Set up an ODA database per the data access requirements and specify:
– Which user-configured real-time database the client will be able to access.
Only one can be connected at one time.
– Whether to use the 800xA OPC HDA server, or IM OPC HDA server for
accessing history data.
– For remote client connections, specify the server’s IP address.
As an option, create additional ODA databases to provide different views to
real-time data. Since a client application can only connect to one ODA
database at a time, only one real-time database view is available at a time. This
set up is performed via the ODBC Data Source Administrator as described in
Setting Up the ODA Database on page 508.
Configuring Real-time Data Access Via the ODA Server
children of the object type (as defined by a formal instance list). The table will
contain a row for each instantiated object created from the object type. An example
is shown in Figure 356.
defined for the database. When adding table definitions that point to an object type,
those table definitions will provide access to objects of that type that are instantiated
under the object root. To expose objects in more than one structure, configure a
dedicated database for each structure.
For example, since DB2 includes a table definition for an object which is
instantiated in the Control structure, its object root must be defined within the
Control structure (probably at the top). In that case the AI table definition in DB2
will be limited to AI objects in the Control structure. Assuming the object root for
DB1 is specified at the top of the Location structure, the tables in DB1 will be
limited to PID, DI, and AI objects instantiated in the Location structure.
Multiple databases can be configured to expose different sets of properties for
different users. Client applications can only connect to one real-time database at a
time.
The database objects must be inserted under the Databases group in the Library
structure. Configure as many database objects as required to provide different views
of the real-time database.
Figure 358. Object Structure for Databases and Database Table Definitions
Creating ODA Table Definitions
Use either the default aspect name, or enter a different name. The table
represented by this aspect will be named from within the Table Definition
aspect as described later. Specify a unique Aspect Description to help identify
the table. This is particularly important when creating multiple tables with the
same table name.
For example, in Figure 359 the Aspect Description is specified as Engineer’s
View. This description will be used as the Table Description when adding tables
to the database. If the aspect has no description, the object type description (if
any) will be used. The Aspect Description can be changed later as described in Using the Description Property on page 502.
3. Click Create when finished. This adds the aspect in the object type’s aspect
list.
4. To show the configuration view for this aspect, select the object type and then
click on the aspect, Figure 360.
Table Name
Specify the table name as shown in Figure 361. This name is referenced in the ODA
Database Definition when adding tables to the database (Creating a Database Object
on page 504). This is also the name used in SQL queries. More than one table with
the same name can be created; however, table names must be unique within the
database where they are used.
• If the same table name needs to be used for more than one table, use the aspect’s Description property to differentiate the tables. If a unique Aspect Description has not already been specified, it can be specified later as described in Using the Description Property on page 502.
• If table names with leading numeric characters (for example, 01Motor) or completely numeric names (for example, 01) are specified, those names must be entered in double quotation marks (“01Motor”) when they are used in queries, as illustrated in the example below.
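For example, assuming a table named 01Motor has been defined (the name is hypothetical and is used only to illustrate the quoting rule):
-- The leading digit requires the table name to be quoted in queries
SELECT * FROM "01Motor"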
– Object Attributes - These are not actually properties, but rather attributes
of the object such as the object name, hierarchical name, parent, and so on.
Always include object attributes. It is strongly recommended to always
include the :NAME object attribute in the table. This is the most efficient way of
referencing an object.
– Name Categories - An object can have a number of different names. Each
name is assigned as an aspect. The main name is the name associated with
the object in the Plant Explorer. This is generally the name used for
absolute object references. It is accessed as an object attribute as described
above. Other names may be accessed as Name categories, but this is
seldom necessary.
• Aspect Properties - These are properties related to aspects of the object (or
object type). Properties can be filtered on an aspect-by-aspect basis. For
example, in Figure 362 the ProcessDataSim object has four aspects, but only
one (Calculation) is selected.
• Child Object Properties - For object types, the properties of children objects
are grouped on an object-by-object basis in a manner similar to the aspect
properties. This is not applicable for object instances. For example, in
Figure 362, the ProcessDataSim object has two child objects: RealSim and
upDown. Each of these objects has aspects and special properties.
The Aspect column in the property list indicates to which group each property
belongs, Figure 363. For aspect properties, the aspect name is shown. Object
attributes and name categories are indicated by these respective text strings in the
Aspect column: (Object Attributes) or (Name Categories). For properties of child
objects, the object relative name is indicated in the Aspect column, in the form:
.relName:aspectName.
To remove a property group, click the Filter View button in the ODA Table
Definition aspect, Figure 365. This displays the Property Filter dialog.
Property groups that are visible in the list are marked with a check. To remove a
group, uncheck the corresponding check box. As this is done, the applicable
properties are removed from the property list. Replace any group and corresponding
properties that have been removed by reselecting the check box. Figure 365 shows
the following groups selected:
• The RealSim object (child of the ProcessDataSim object).
• The Real PCA and Relative Name Aspects of the RealSim object.
• The Object Attributes for the RealSim object.
A group that already has properties selected cannot be removed. To remove such
a group, unselect the properties first.
Unchecking groups whose properties will not be used significantly reduces the
property list. This makes it easier to select the properties to be included in the table.
DO NOT click the Hide Unused button at this point. This hides unselected
properties. Since no properties have been selected yet, this will hide all properties.
To proceed, click Exit to close the Filter dialog. Then follow the guidelines for
selecting properties in Selecting Properties on page 499.
Selecting Properties
To select a property, click the far left column. Selected properties are indicated by
black dots, Figure 366. Clicking on a selected property will un-select it.
Remember, it is recommended that the NAME object attribute be selected as an
efficient means of referencing the object in data queries. Do not confuse this
NAME attribute with other name properties in the property list.
Selected
Property
Indication
When finished selecting properties, simplify the view by showing just selected
properties. To do this, right click on the list and choose Show Selected from the
context menu, Figure 367. The result is shown in Figure 368.
To bring back the hidden unselected properties, click Show All. If any mistakes are
made selecting the wrong properties, start over by clicking Deselect All. To select
all properties, click Select All. The Deselect All and Select All functions operate on
the properties that are currently visible on the grid. Hidden properties will not be
selected or deselected.
Name - This is the property's column name in the database table. This defaults to the Property name. Column names cannot be duplicated within the table. If duplicate names occur, they must be changed.
Aspect - This identifies the aspect or other category to which this property belongs. Refer to Filtering the Property List on page 495. For aspect properties, the aspect name is shown. Object attributes and name categories are indicated by these respective text strings in the Aspect column: (Object Attributes) or (Name Categories). For properties of children objects, the object relative name is indicated in the Aspect column along with the aspect.
Property - This is the name of the property, object attribute, or Name category.
Data Type - This is the data type of the property.
Writable - This indicates whether or not the column is writable. Properties that cannot be configured as writable are dimmed.
False (default) - is not writable
True - is writable
Quality - This indicates whether to create a column for data quality. The column name in the table will be the data column's name with _qual appended.
False (default) - no column for quality
True - include column for quality
Timestamp - This indicates whether to create a column for the time stamp. The column name in the table will be the data column's name with _time appended.
False (default) - no column for timestamp
True - include column for timestamp
Description - This is used to specify an optional description for the column. When the system provides a description, it will be the default.
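To illustrate how these settings appear to a client, consider a hypothetical table named motor1 with a property column named Value for which Writable, Quality, and Timestamp have all been set to True (the table, column, and object names are examples only; writing also requires that write access is enabled for the user):
-- Read the value together with its quality and time stamp columns
-- (_qual and _time are appended to the data column name as described above)
SELECT NAME, Value, Value_qual, Value_time FROM motor1 WHERE NAME = 'Motor101'
-- Write a new value through the same table (allowed only for columns
-- configured as Writable)
UPDATE motor1 SET Value = 42 WHERE NAME = 'Motor101'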
This does NOT permit adding multiple tables having the same name to the same
database. It is merely intended to help select the correct table when adding tables
to the database.
To specify the Description property for a Database Table Definition aspect:
1. Select the ODA Table Definition aspect, right-click and choose Properties
from the context menu.
2. Then use the Description field in the Properties dialog to specify a meaningful
description that will help identify the table, Figure 370.
3. Click Apply. The description will be indicated in the aspect list, Figure 371.
Creating a Database Object
To show the configuration view for the database, click on the ODA Database
Definition aspect, Figure 373.
The bottom portion of this dialog has a check box and buttons to adjust the list
of tables, Figure 375. The Exclude instance tables check box is used to limit
the list to tables defined in the Object Type structure. The Search Under Root
button is used to limit the list to instance tables that exist under the root object
specified in the Object Root field on the ODA Database Definition aspect.
2. Select a table, then click Add Table. The added table is listed immediately in
the Database Definition view, Figure 376. The dialog remains open so more
tables can be added.
Tables that were created for object instances rather than object types are indicated
by the (Instance) label in the Object Type column.
The scope of the database is determined by the Object Root specification. Only
objects under the specified object root will be accessible via this database. This
defaults to the Control structure. A different structure can be specified if necessary.
Further, the scope can be made more restrictive by selecting an object within a structure; for example, the scope can be restricted to a specific controller within the Control structure. To change the object root specification:
1. Click the Browse button for the Object Root field, Figure 377. This displays a
browser similar to the Plant Explorer.
2. Use this browser to select the structure (and folder if necessary) where the
objects that will populate this table reside, Figure 378.
If all tables having duplicate names are required in the database, rename the tables
so that each has a unique name. Do this via their respective Database Table
Definition aspects as described in Creating ODA Table Definitions on page 493).
If one or more tables are not required, select those tables and then click Delete
Tables. If a mistake is made, cancel the delete command by clicking Cancel
BEFORE applying the change. This restores all deleted tables.
Setting Up the ODA Database
The ODA database set up specifies:
• which OPC HDA server to use for historical data access: the 800xA OPC HDA
server, or IM OPC HDA server.
• for remote client connections, the server’s IP address. A remote client must
also change the OpenRDA ODBC 32 setup parameter, and edit the openrda.ini
text file.
One ODA database named DATABASE1 is provided as standard. By default this
ODA database uses the 800xA OPC HDA server, and connects to a real-time
database named Database1, which by default has no assigned table definition
aspects. Add table definitions to Database1 as described in Creating a Database
Object on page 504.
The default set up supports access via the predefined numericlog and generic_da
tables, and to table definitions added to Database1 (if any). The default set up can be changed to use the IM OPC HDA server, and/or to specify a different real-time database.
Further, additional ODA databases can be created where each one specifies a
different real-time database. This is used to connect the client application to a
different ODA database, depending on the data access requirements.
After successfully creating the object, go to the IM node on which the ODA server
is running. Using Windows Explorer, navigate down to the configuration directory
for ODA. Generally the path would be:
C:\Program Files\ABB Industrial IT\Inform IT\ODA Provider\Server\cfg
In this directory, make a backup copy of the file oadm.ini. This is the ODA
configuration file for the server. After making a copy, edit the original file using a
text editor.
Database sources are kept in this file and have the following form:
[DataSource_1]
Type=6
DataSourceWorkArounds2=8192
DataSourceLogonMethod=Anonymous
DataSourceIPType=DAMIP
DataSourceIPSchemaPath=C:\Program Files\ABB Industrial IT\Inform IT\ODA Provider\Server\ip\schema
ServiceName=ABB_ODA
DataSourceName=DATABASE1
This is the definition of the default database for ODA. To include the new database that was added in Process Portal, copy the above text, paste it directly after the original, and edit the DataSource_1 and DATABASE1 entries.
The DataSource_ entry is a numerical progression of the data sources defined in the file, so a second data source is added as DataSource_2. The DataSourceName entry must then point to the new database name, so DATABASE1 is changed to Ctrl1. The resulting file contains:
[DataSource_1]
Type=6
DataSourceWorkArounds2=8192
DataSourceLogonMethod=Anonymous
DataSourceIPType=DAMIP
DataSourceIPSchemaPath=C:\Program Files\ABB Industrial IT\Inform IT\ODA Provider\Server\ip\schema
ServiceName=ABB_ODA
DataSourceName=DATABASE1
[DataSource_2]
Type=6
DataSourceWorkArounds2=8192
DataSourceLogonMethod=Anonymous
DataSourceIPType=DAMIP
DataSourceIPSchemaPath=C:\Program Files\ABB Industrial IT\Inform IT\ODA Provider\Server\ip\schema
ServiceName=ABB_ODA
DataSourceName=Ctrl1
Save the file, and reboot the machine. If further databases are required, the same procedure can be used, keeping in mind that the DataSource_x number must be incremented with each new database.
1. Go to the System DSN tab on the ODBC Data Source Administrator dialog and select the data source named ABBODA, Figure 381.
2. Click Configure. This displays the ODBC Data Source Setup dialog. The
Database name defaults to ABBODA. Typically, the default is used unless
additional databases are configured.
To get a list of databases, including the default database, click on the selection box. Figure 383 shows an example that lists the default and the other configured databases.
Data Source Name - The default data source name.
Service Host - The machine to connect to. By default, this is localhost. On client machines, change this to the name of the machine that ODA is running on.
Service Port - The Service Port should stay as 19986.
Service Data Source - The name of the database to be connected to in the ODA.
Logging Status Information for Numericlog Queries
The log file is associated with a specific data provider. The default file name for
ABBDataAccess Provider is ODAProviderLog.txt. This file is located in the ODA
Provider/Log folder. The default path is
C:\ProgramData\ABB\IM\ODA
To open the log file in the default text editor, double-click, or choose Open from the
context menu. An example log file is shown in Figure 385.
For each query the log file indicates the data source (table name), and echoes the
query. The log file also provides information regarding the status of internal ODA
Server components. This information is intended for ABB technical support
personnel for debugging.
Section 15 Configuring Data Providers
Data Provider Applications
[Figure: Data provider architecture. Display and Client Services clients (Display Services, DataDirect, Desktop Trends) and ODBC clients access data through the data providers (DBA, AIPOPC, IMHDA, AIPHDA, and the OPC DA proxy), which connect to the underlying sources: the Oracle database (Oracle-based message logs, numeric logs, and PDLs), the IMHDA and AIPHDA servers, Calculation Services, the Softpoint Server, and history logs.]
NOTES:
1) The Information Management and Connectivity Servers may reside on the same node.
2) The Industrial IT Process Values, History Values, and Alarm/Event dialogs in DataDirect do not use data providers.
Overview of Data Provider Functionality
The data providers are listed below by ADSS Label(1), Default Name(2), and Type.

OPC HDA - Provides access to historical data from an OPC HDA server. Returns up to 65,534 values per request. Two OPC HDA data providers are provided by default:
• AIPHDA (ADSS Label: AIPHDA, Default Name: AIPHDA) - Connects to the 800xA OPC HDA server. This server supports seamless access to trend logs and history logs. It also supports access to log attributes.
• IMHDA (ADSS Label: IMHDA, Default Name: IMHDA) - Connects to the History Server OPC HDA server. This server can only access history logs and is provided primarily to support earlier data access applications configured to use this server. It does not support access to log attributes.

AIPOPC (ADSS Label: AIPOPC, Default Name: AIPOPC, Type: OPC DA) - Provides access to realtime data from object properties. This data provider connects to the default Aspect System.

ADO (ADSS Label: ADO, Default Name: DBA, Type: ADO) - Provides access to Oracle data for Display Services, DataDirect, and Batch Management PDL Browser. Site-specific configuration is required during post-installation.

DCSOBJ (ADSS Label: DCSOBJ, Default Name: DCS, Type: ABB OCS Process Data) - Used on Enterprise Historian 3.2/x or earlier for access to realtime process data from Advant OCS objects, for example CCF_CONTIN_LOOP in systems with MOD 300 software, and PIDCON for systems with Master software.

DCSLOG (ADSS Label: DCSLOG, Default Name: LOG, Type: ABB OCS History Data) - Used on Enterprise Historian 3.2/x or earlier for access to historical process (numeric) data. Returns up to 3200 values per request.

OPC (ADSS Label: OPC, Default Name: OPC, Type: OPC) - Provides access to realtime data from third party DA 1.0 & 2.0 data access servers. Site-specific configuration is required.

ADO-DEMO (ADSS Label: ADO-DEMO, Default Name: DEMO, Type: ADO) - Supports Oracle data access for the Display Services demonstration displays.

COM (ADSS Label: COM, Default Name: COM, Type: Service Provider) - Connects data providers with client applications.

DDR (ADSS Label: DDR, Default Name: DDR, Type: Display Manager) - Manages display requests and command execution on the server. These displays are described in Information Management Configuration for Display Services.

(1) Used to uniquely identify the data provider in the ADSS Configuration tool.
(2) Used by data access applications to reference a specific data provider when there is more than one of the same type.
Typically the default data providers are sufficient to support most data access
applications. Certain data access applications require small site-specific
adjustments. These are summarized in Table 45.
Additional data providers can be configured as required by the application. For
example, there may be a need to create an additional ADO data provider to support
SQL access to numeric logs, or access to another third-party database.
[Figure: Local data server (Node #1) and remote data server (Node #2), showing the service provider and data providers.]
To install remote data providers for Oracle Data Access on a platform that is
external to an Industrial IT system, consult ABB for additional guidelines.
be routed via the correct data provider based on the type of data being requested.
When multiple data providers of the same type are connected, if a specific data
provider is not referenced, the data request will default to the data provider with
channel 0. If a different data provider needs to be used, it must be explicitly
referenced. Each of the client applications use a different method for referencing
data providers.
Display Services uses channel number for all data provider references EXCEPT
data statements. The default channel is zero (0). When using data statements in
Display scripts, and a data provider reference is required, use the -name argument.
DataDirect can be configured to use either the channel number or the -name argument. The default setup is to use -name. This is recommended because channel number does not
support some data access functions, including process value update, history update,
history retrieval of raw data, and history bulk data retrieval. If channel number is
used, the same channel number applies to all data providers. The default channel is
zero (0). Select a different channel if necessary.
Batch Management PDL PFC display is hardcoded to use an ADO data provider
with the name DBA. This is the default name for the ADO data provider. Be sure
that the ADO data provider supporting those applications is named DBA, and that
no other active data providers use DBA.
Data Service Supervision Configuration Dialog
Table 47. Operation of Channel Number, Type, and Name Based on Software Version
Both Service Provider and Data Provider are Version 3.2 or higher:
The data provider connects to the service provider with name, type, and channel number. When connecting only one data provider of a given type to a service provider, the default channel number and name can be used. When more than one data provider of the same type connects to a service provider, each data provider MUST have a unique name. The service provider does not allow multiple data providers of the same type to connect unless their names are unique.
NOTE: Assign a unique channel number to each data provider when connecting multiple data providers of the same type to a service provider. This is not required to connect to the service provider; however, the channel number is used by many Display scripting functions and by DataDirect.
Assigning a new channel number also assigns the data provider a unique default name. For example:
DCSOBJ -channel 0 is assigned the name DCS
DCSOBJ -channel 8 is assigned the name DCS8
Therefore, if a new channel number is assigned, a new name does not have to be assigned. A name other than the assigned default can be used.

Service Provider = 3.2 or higher, Data Provider = 3.0 or 3.1:
The Data Provider connects to the Service Provider with type and channel number. The Service Provider then uses these attributes to generate a unique name as described above.

Service Provider = 3.0 or 3.1, Data Provider = 3.2 or higher:
The Data Provider connects to the Service Provider with type and channel number. The version 3.0 Service Provider ignores Data Provider names.
To open the ABB Data Service Supervision Configuration dialog, from the Windows task bar, choose Start>Settings>Control Panel and then double-click on the ADSS Config Icon, Figure 388.
Figure 388. Opening the ABB Data Services Supervision Configuration Dialog
The left side of this dialog is a browser that is used to select the data providers and
their respective attributes, Figure 389. Use the +/- button to show/hide data
providers and their attributes.
The Supervision process for ADSS supervises all service and data providers. ADSS
is automatically started as a Windows process. To monitor and control ADSS, use
the Services icon in the Windows control panel. ADSS starts the service provider
and data providers according to their respective configurations. The service provider
(COM) is always started first. For details on configuring a data provider’s attributes,
refer to Editing Data Provider Configuration Files on page 536.
When a provider is selected in the browser, the following information is displayed
for the selected provider:
Provider, Status Indicates the selected provider’s label and status. This label is
only used in this dialog. This is NOT the name that desktop
client applications use to reference the data provider.
Argument Value Indicates the current value for the selected attribute. This field
can be edited to modify the service’s configuration. These
attributes are described in Table 49.
The context (right-click) menu, Figure 390, provides access to the following
provider management functions: Adding a Data Provider, Starting and Stopping
Providers, Deleting a Data Provider, and Adding Arguments to a Data Provider.
Configuring User Preferences for Write Access
3. From the User Preferences dialog, Figure 392, click + to expand the RIGHTS section, select OBJECTWRITE or LOGWRITE, or both, and check the check box to change the preference value to True.
4. Reboot the computer to register the changes. Any changes made to user preferences will not take effect until the computer is restarted.
General Procedures
This section describes the following procedures:
• Adding a Data Provider on page 529.
• Editing Data Provider Configuration Files on page 536.
• Starting and Stopping Providers on page 549.
• Deleting a Data Provider on page 549.
• Adding Arguments to a Data Provider on page 550.
These procedures are supported by the Data Service Supervision Configuration
Dialog.
Copying an Existing Data Provider
If the data provider to be copied is currently running (Status = Started), then stop
it. Right-click and choose Stop Provider from the context menu.
3. After the data provider to be copied is stopped, right-click again and choose
Copy Provider from the context menu. This displays a dialog for specifying
the unique name for the new data provider, Figure 393.
4. Enter a new name, then click OK. This adds the new data provider in the
browser under Supervision. This name labels the data provider in the browser.
This name DOES NOT uniquely identify the data provider for data access by
display elements.
5. Configure any data provider attributes as required.
If both the original data provider and the copy are being connected to the same
service provider, then either the -name or -channel arguments, or both MUST
be edited. The Service Provider will not allow connection of more than one
Data Provider of the same type, unless they have different names.
Assigning a new channel number also assigns a new name. Therefore, if a new
channel number is assigned, then a name is not required. The assigned default
name does not have to be used.
To access the -channel or -name argument, click Arguments. Then enter a
new channel number or name in the Argument Value field, Figure 394.
If the -channel or -name argument is not included in the argument list, then add
the argument to the list.
Typically, defaults can be used for all other attributes. For details regarding
attributes, refer to Table 49 in Editing Data Provider Configuration Files on
page 536.
6. Click Save to save the new data provider.
Adding a Remote Data Provider
remote computer where the Oracle database is installed. In example 2, the computer
where the Oracle database is installed does not have Display Services software.
Adding a New Data Provider from Scratch
Server Type in this dialog does NOT refer to the type of data provider, but rather
the method by which Windows handles this process (COM or Command Line).
Currently, only command line processes are supported. Thus the Server Type is
fixed as CMD and cannot be changed.
3. Enter a unique name for the data provider in the Server Name field. This is the
name that the data provider will be identified by in the browser under
Supervision. This name DOES NOT uniquely identify the data provider for
data access by display elements.
4. Enter the path to the executable (.exe) file for the data provider. Enter the path
directly in the Server Path field, or use the Browse button. The executable files
reside in: c:\Program Files\ABB Industrial IT\Inform IT\Display Services\Server\bin.
The file names correspond to the process names as indicated in Table 48.
ABB OCS History Data - Historical process data (numeric log data). Executable file: ADSdpDCS.EXE. Process name: LOG.
5. Click OK when finished. This adds the new data provider under Supervision in
the browser.
6. Configure any data provider attributes as required.
If a data provider of the type being added is already connected to the service
provider, then edit either the -name or -channel arguments, or both. These
arguments uniquely identify the data provider. The name is NOT automatically
updated when a new data provider name in the New Server dialog (step 2) is
specified. The Service Provider will not allow connection of more than one
Data Provider of the same type, unless they have different names.
Assigning a new channel number also assigns a new name. Therefore, it is not
necessary to edit the name unless something other than the assigned default
name is preferred.
Refer to the procedure for Copying an Existing Data Provider to learn how to
access the -name and -channel arguments.
Editing Data Provider Configuration Files
Parameter Description
Server Type This does NOT refer to the type of data provider, but rather the method by which
Windows handles this process (COM or Command Line). Currently, only
command line processes are supported. Thus the Server Type is fixed as CMD
and cannot be changed.
Path This is the path to the executable (.exe) file for the data provider. The path is
initially defined when the data provider was added as described in Adding a Data
Provider on page 529.
The executable files reside in: c:\Program Files\ABB Industrial IT\
Inform IT\Display Services\Server\bin.
The file names correspond to the process names as indicated in Table 46.
Wait This is the time to wait on the initial attempt to start up. The value is in
milliseconds. The default is 100.
Arguments Arguments depend on the data service type. For arguments common to all data
providers, refer to Table 50. For specific arguments for each type of service, refer
to:
Table 51 for ADSspCOM.EXE
Table 52 for ADSdpDDR.EXE
Table 53 for ADSdpADO.EXE
Table 54 for ADSdpOPCHDA.EXE
Table 55 for ADSdpDCS.EXE
Table 56 for ADSdpDCS.EXE
Table 57 for ADSdpOPC.EXE
Priority The default is -1.
CanRestart This specifies whether or not the service is restarted by supervision if the service
goes offline.
TRUE- Data Service can be restarted. This is the default value.
FALSE - Data Service can not be restarted.
MaxRestarts This is the number of times the data provider should attempt to re-start if its
server has stopped. The default is 3.
StartAfter This specifies the place in the starting order for a Data Service. COM is always
the first data service to be started, followed by DDR. After DDR, all other services
are given equal precedence.
WaitForLogin This specifies whether or not the Service will wait for login before it is started.
TRUE- Do not start until client is logged in. Data Service can be restarted.
FALSE - Start immediately. Do not wait for log in. This is the default value.
StartupMode This specifies whether or not the service is started automatically by supervision,
or must be started manually via the ABB Data Service Supervision Config dialog.
AUTOMATIC - Start automatically by supervision. This is the default value.
MANUAL - Must be started manually via the dialog.
RestartDelay This is an optional parameter to specify the interval between re-start attempts (in
milliseconds). The default is 100.
TerminateDelay This specifies the time to delay (in milliseconds) when the service is asked to
stop. The default is 0.
Argument(1) Description
-name This name is used by certain data access applications to identify a specific
data provider when there are more than one data provider of that type.
Specifically, the -name argument is used by DataDirect when the Use
Channel Numbers option is disabled, and by display scripts that use the
Data statement.
When more than one Data Provider of the same type is connected, each
Data Provider MUST have a unique name. Assigning a new channel number
also assigns a new name. Therefore, if a new channel number is assigned,
the name does not have to be edited unless the assigned default name is
not preferred.
-channel This also uniquely identifies a specific data provider when there are more
than one data provider of that type. The -channel argument is used by
DataDirect when the Use Channel Numbers option is enabled, and by
display scripts that do not use the Data statement.
Default data providers are assigned CHANNEL 0. If an application client
specifies an invalid channel, the application defaults to channel 0. To prevent
defaulting to channel 0 in the event of an invalid channel specification, be
sure that no data providers are assigned channel number 0. To do this,
change the channel number of the default data providers, and do not use
channel number 0 for any additional data providers.
-server This is the name or IP address of the service provider. For a local node
installation, this should be the name of the local node. To not have the data
provider start automatically when the corresponding Display server is
started, set the value for Server in Arguments to NoNode.
NOTE:
1. For specific arguments for each type of service, see Table 51 for ADSspCOM.EXE, Table 52 for
ADSdpDDR.EXE, Table 53 for ADSdpADO.EXE, Table 54 for ADSdpOPCHDA.EXE, Table 55 for
ADSdpDCS.EXE, Table 56 for ADSdpDCS.EXE, Table 57 for ADSdpOPC.EXE.
Table 51 describes the arguments that are unique for the ADSspCOM.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 50.
The no reverse DNS setting on the COM process is set to -UseReverseDNS 0 by
default so DNS lookup does not prevent data provider communication. COM
arguments are not reset upon upgrade of a system and defaults may have to be reset.
Table 51. ADSspCOM.EXE Arguments
Argument Description
-server The host name or TCP/IP address where the service provider runs. For a local
node installation, this should be the name of the local node. To not have the
service provider start automatically when the corresponding ADSS is started,
set the value for Server to NoNode.
-port The TCP/IP socket port number. Three sockets are used, starting from the one
specified.
Default: 19014
Range: 1000<= n <= 65000
-AliveTimeOut Used to disconnect clients if they are quiet for the time specified. It can also be
used on data providers to indicate how long to wait to disconnect a data provider
from the main service provider.
Default: 60
Range: 0 <= x <= ...
Unit: Seconds
Remote data providers to an Information Management node may not recognize
that the Information Management node has been restarted causing the data
provider to wait for the period defined by this argument before reconnecting.
Normally, the Data Provider on the remote node detects that the Com Port has
been closed and kills itself so that it can reconnect after the node restarts. In
some cases, the Data Provider doesn't detect the Com Port closing and hangs
until the -AliveTimeout argument expires causing the Data Provider to wait
before killing itself so that it can reconnect. Set this argument to a reasonable
period (about 300 seconds) so the node has time to reboot. This is especially
important for the AIPHDA and IMHDA data providers if their values retain an old
default value of 6000 seconds.
-license_key & These are the license key and text as entered when the display server is
-license_text installed, or when the license was updated. The key is entered in quotes.
-UseReverseDNS 0 ADSspCOM is configured to avoid DNS lookup (default for a new system). This is used when there are DNS server problems. For example, ADSspCOM may time out accessing the DNS server, and clients trying to connect will then time out as well, since their time out is likely to be less than the DNS server time out.
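As an example only, the arguments for the COM service provider on a node named IMSRV01 (a hypothetical node name) might be set as follows, with -AliveTimeOut raised to the recommended 300 seconds:
-server IMSRV01
-port 19014
-AliveTimeOut 300
-UseReverseDNS 0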
Table 52 describes the arguments that are unique for the ADSdpDDR.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 50.
Table 52. ADSdpDDR.EXE Arguments
Argument Description
-server The host name or TCP/IP address where the service provider runs.
For a local node installation, this should be the name of the local
node. To not have the data provider start automatically when the
corresponding ADSS is started, set the value for Server to NoNode.
-port The TCP/IP socket port number. Three sockets are used, starting
from the one specified.
Default: 19014
Range: 1000<= n <= 65000
-datapath This is the path to the directory where the display files reside. The
default is: c:\Program Files\ABB Industrial IT\Inform IT\Display Services\Server\Data
-Allow_System_Access This argument is required to support bulk data export for client
applications such as DataDirect. This argument is provided by
default. In addition to this argument, the SYSTEMACCESS user
preference must be enabled. This is the default setting for this
preference.
Table 53 describes the arguments that are unique for the ADSdpADO.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 50.
Table 53. ADSdpADO.EXE Arguments
Argument Description
-server The host name or TCP/IP address where the service provider runs. For a
local node installation, this should be the name of the local node. To not
have the data provider start automatically when the corresponding ADSS
is started, set the value for Server to NoNode.
-port The TCP/IP socket port number. Three sockets are used, starting from
the one specified.
Default: 19014
Range: 1000<= n <= 65000
-dbtype This is the type of database to which this data provider connects. The
choices are: MDB, ODBC, and GENERIC. For MDB and ODBC
databases, be sure to also configure the corresponding username and
password. For GENERIC databases, be sure to specify the connection
string (-constr argument).
-dbname This is the name or full path to the database. For dbtype MDB, use the full
path. For dbtype ODBC, use the ODBC name.
-user This is the user name for logging into the database. This is required when
the dbtype is ODBC or MDB.
-password This is the password for logging into the database. This is required when
the dbtype is ODBC or MDB.
-disallow_sql_write Add this argument to the argument list to NOT allow any SQL statements other than those that begin with the SELECT keyword.
-constr This is the full connect string. This is only required when the dbtype is GENERIC.
-CmdTMO <n> This is the command timeout in seconds (n = number of seconds). The default is 30 seconds.
-ReConnINT <n> This is the reconnect interval in seconds (n = number of seconds). The default is 0. When n = 0, reconnect is disabled; otherwise the provider will attempt to connect to the database at the specified interval until the connection succeeds or the reconnect timeout period (if specified) is exceeded.
-ReConnTMO <n> This is the Reconnect timeout period in seconds (n = number of
seconds). The default is 0. When n = 0 there's no timeout and the
provider will keep on trying to connect to the database. If the reconnect
timeout period is exceeded, the provider will terminate.
-FatalErrors "<n1>;<n2>;...;<nX>" This is a listing of error codes which should be treated as Fatal (for example, missing connection to the database). If such an error occurs, the data provider will terminate. The format to specify error codes is a list of numbers separated by semicolons (;) and enclosed in quotation marks. For example: -FatalErrors "11; 567;-26"
Entering an asterisk in place of error codes will cause the provider to terminate at any error. For example: -FatalErrors "*"
-ReturnAllErrors Refer to ReturnAllErrors Argument below.
"TRUE|FALSE"
ReturnAllErrors Argument
Normally, on errors the ADO provider returns an array containing the error code and error text. Some database providers are capable of generating several errors as a result of one request. If ReturnAllErrors is FALSE (the default), only the last error code/text is returned. If ReturnAllErrors is TRUE, the returned array contains all the error codes/texts encountered.
Example: the connection to the SQL server fails. If ReturnAllErrors is FALSE (the default), only the final connection error is returned; if ReturnAllErrors is TRUE, all of the errors raised by the failed connection are returned.
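As an illustration only, an additional ADO data provider created to access another ODBC database might be given the arguments below (the channel number, name, DSN, and account are hypothetical; the argument names are taken from Table 50 and Table 53):
-server IMSRV01
-channel 1
-name ADO1
-dbtype ODBC
-dbname PLANTDB
-user reports
-password *****
-CmdTMO 60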
Table 54 describes the arguments that are unique for the ADSdpOPCHDA.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 50.
Table 54. ADSdpOPCHDA.EXE Arguments
Argument Description
-OPCHDAServer <xxx> ProgID for OPCHDA server.
If the AIPOPCHDA Server is being used, the Server ProgID is
ABB.AfwOpcDaSurrogate.
If the IMOPCHDA Server is being used, the Server ProgID is
HsHDAServer.HDAServer.
-OPCHDAServerHost <xxx> Host name of the computer where the OPCHDA server resides.
This is only required if the OPCHDA server is on a remote host.
-ALLOW_LOG_WRITE If this argument is not specified, clients cannot write to history
logs. In addition to this argument, the LOGWRITE user
preference must be enabled. The default setting for this
preference is disabled. For details on how to set this user
preference, refer to Configuring User Preferences on page 579.
-Browser_Separator This applies to the AIPHDA data provider and is used when an
Aspect System has properties being historized whose name
contains the forward slash (/) character, or whose ancestor
objects contain that character. One application for this is the
Harmony Connectivity Server where the default separator
cannot be used.
The OPCHDA browser uses the forward slash character as a
separator by default, and will not parse objects and properties
correctly if they use this character. In this case, the OPCHDA
browser must use a different separator character. The
supported separator characters are "\", "-", "," and ".".
As an example, to declare the backslash as the separator, add
the argument as follows: -Browser_Separator \
For Aspect Systems where the / character is not used in the
property names nor in ancestor object names, no change in
configuration is necessary.
NOTE: A data provider supports access either to data sources that use a
special browser separator or to data sources that use the default
separator, but not both. For example, a data provider configured with a
special browser separator for 800xA for Harmony process objects will not
also support data sources that use the default browser separator, such as
AC 800M process objects. To solve this, create multiple data providers,
one for each required browser separator, and select the applicable data
provider in the client application.
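As a sketch only, an ADSdpOPCHDA.EXE provider configured for the AIPOPCHDA server,
with history log write enabled and the backslash as browser separator, might use an
argument list such as the following; the provider name AIPHDA1 is a placeholder:
-name AIPHDA1 -server localhost -port 19014
-OPCHDAServer ABB.AfwOpcDaSurrogate -ALLOW_LOG_WRITE -Browser_Separator \
The -OPCHDAServerHost argument is omitted here because the OPCHDA server is assumed
to run on the same host as the data provider.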
Table 55 describes the arguments that are unique for the ADSdpDCS.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 50.
Argument Description
-server The host name or TCP/IP address where the service provider runs. For
a local node installation, this should be the name of the local node. To
not have the data provider start automatically when the corresponding
Display server is started, set the value for Server to NoNode.
-port The TCP/IP socket port number. Three sockets are used, starting from
the one specified.
Default: 19014
Range: 1000<= n <= 65000
-name This is the assigned name for the data provider.
-Allow_Object_Write If this argument is specified, clients can write to process objects. If this
argument is not specified, write transactions are not allowed.
In addition to this argument, the OBJECTWRITE user preference must
be enabled. The default setting for this preference is disabled.
For details on how to set this user preference, refer to Configuring User
Preferences on page 579.
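For illustration, a minimal argument list for an ADSdpDCS.EXE provider that allows
writes to process objects might look as follows; the provider name DCSOBJ is only an
example:
-name DCSOBJ -server localhost -port 19014 -Allow_Object_Write
Remember that the OBJECTWRITE user preference must also be enabled for writes to
succeed.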
Table 56 describes the arguments that are unique for the ADSdpLOG.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 50.
Argument Description
-server The host name or TCP/IP address where the service provider runs. For
a local node installation, this should be the name of the local node. To
not have the data provider start automatically when the corresponding
Display server is started, set the value for Server to NoNode.
-port The TCP/IP socket port number. Three sockets are used, starting from
the one specified.
Default: 19014
Range: 1000<= n <= 65000
-name This is the assigned name for the data provider, for example, LogHandler.
-Allow_Log_Write If this argument is specified, clients can modify existing log entries, and
add new log entries. If this argument is not specified, write transactions
are not allowed.
In addition to this argument, the LOGWRITE user preference must be
enabled. The default setting for this preference is disabled.
For details on how to set this user preference, refer to Configuring User
Preferences on page 579.
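Similarly, a minimal argument list for an ADSdpLOG.EXE provider that permits log
modifications might look as follows (a sketch only; the name is an example):
-name LogHandler -server localhost -port 19014 -Allow_Log_Write
As with the DCS provider, the corresponding user preference (LOGWRITE) must also be
enabled.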
Table 57 describes the arguments that are unique for the ADSdpOPC.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 50.
Argument Description
-OPCprogID <xxx> Logical name for OPC server.
For remote OPC servers such as Symphony, Freelance, or SattLine, use
the Data Provider Configuration Wizard to get this information.
-OPChost <xxx> Host name of the computer where the OPC server resides. This is only
required if the OPC server is on a remote host. Do not specify it if the OPC server
and data provider are on the same host.
For remote OPC servers such as Symphony, Freelance, or SattLine, use
the Data Provider Configuration Wizard to get this information.
-Allow_Object_Write If this argument is specified, clients can write to OPC objects. If this
argument is not specified, write transactions are not allowed.
In addition to this argument, the OBJECTWRITE user preference must be
enabled. The default setting for this preference is disabled.
For details on how to set this user preference, refer to Configuring User
Preferences on page 579.
-EventUpdateRate <nnn> There is no event subscription on OPC, but a low update rate can
simulate this. Default: 250. Unit: milliseconds.
-EventDeadBand <nnn> Percent of difference between the low and high engineering units
range of analog values (EU values are specified on the OPC server). Default: 0.00.
Unit: percent.
-CacheWrites If specified, the OPC initialization and setup for write commands will never be
removed, resulting in better performance for subsequent writes to the same
object. This uses more memory. Default: Not specified (not cached). Range:
Specified | Not specified.
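As an illustration only, an ADSdpOPC.EXE provider for a remote OPC server could be
configured with an argument list such as the following; the provider name DCSOPC and
the host name OPCHOST1 are placeholders, and the ProgID must be obtained from the
Data Provider Configuration Wizard as described above:
-name DCSOPC -server localhost -port 19014 -OPCprogID <ProgID from the wizard>
-OPChost OPCHOST1 -EventUpdateRate 500 -EventDeadBand 1.0 -CacheWrites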
The ADO data provider for Oracle access (DBA) uses the following argument values:
Argument Value
-port 19014
-channel 0
-pass history - Password for the history user.
-server localhost
-dbtype ODBC
-ReconnINT 10 - Retry interval (in seconds) for reconnecting to the Oracle database if the
data provider is disconnected from the Oracle database.
-user history - Username for the history user.
-dbname Defaults to localhost. DO NOT change this specification.
-name DBA
-FatalErrors "03114" (quotation marks required) - Indicates that Oracle error code "03114" for
disconnect will be considered fatal.
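Assembled into a single argument list (for example, as it might appear in the
corresponding ADSS configuration entry), the values above would read approximately as
follows; the ordering is not significant, and the history password is shown as a
placeholder:
-name DBA -server localhost -port 19014 -channel 0 -dbtype ODBC -dbname localhost
-user history -pass <history password> -ReconnINT 10 -FatalErrors "03114"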
2. Enter the hostname for the data server where the data providers are located.
Leaving the hostname blank defaults to localhost. As an option, specify the
maximum time to wait for the server to respond.
3. Click OK. This displays the server status window, Figure 401.
This window provides status information related to the Display Server.
Section 16 Authentication
Configuring Authentication
Authentication may be configured on an individual basis for each operation
associated with aspect categories (except softpoints). Table 59 lists the operations
for which authentication may be configured. The operations are listed by aspect
category, and the aspect categories are grouped by aspect system.
(Figure: select an operation, then select the authentication level for that operation.)
If the Inform IT Authentication Category aspect does not yet exist, then add it to the
aspect category’s aspect list. If the list of operations for the aspect category is not
complete, then any missing operations may be added. To do this, click Add, then
enter the operation name and click OK, Figure 404.
Section 17 System Administration
Overview
This section describes administrative procedures for Information Management
applications in the 800xA system. Access to most of these functions is via the
Administrative Tools utility on the Windows Control Panel, Figure 407. To access
the PAS administrative tools, from the Windows task bar choose:
Start>Settings>Control Panel>Administrative Tools>PAS.
PAS Window
To access the PAS window, log in as a Domain Administrator-level user or a user
in the PAS operator list. Open the PAS window from the Windows task bar:
Start>Settings>Control Panel>Administrative Tools >PAS>Process
Administration.
This window is for starting and stopping all processes under PAS supervision. It
also provides access to advanced functions for debugging.
The process list shows all processes specified in the Windows registry under
HKEY_LOCAL_MACHINE\SOFTWARE\ABB\PAS\Startup. The information
provided in this list is described in Table 60. PAS Window controls are described in
Table 61.
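As an optional check outside the PAS window, the same startup list can be inspected
from a Windows command prompt with the standard reg utility; this is shown only as a
convenience, and the exact values and subkeys depend on the installation:
rem List the processes registered for PAS supervision
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\ABB\PAS\Startup" /s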
Field Description
Process Name Name of the process.
Supervision Enabled or Disabled When a process is removed from PAS supervision (Advanced
Functions on page 565), an X icon is placed to the left of the process name.
Process State State of the supervised process, normally Started or Stopped.
Priority Order in which processes run. When the processor is busy, this determines
which processes are allowed to run.
Button Description
Start All/Stop All Start or stop all processes. Refer to Starting and Stopping All Processes.
Restart All Stop and then restart all processes.
Reset Resets failed processes to the Stopped state.
Refresh Clears the process list and queries PAS for the current process list. This may
be used if the list gets out of sync with the PAS internal state.
Advanced>> Expands the window to show controls for advanced functions. Refer to
Advanced Functions on page 565.
Connect This button is only visible when the PAS window is disconnected from PAS. If
this occurs, use this button to reconnect the PAS window to PAS.
Advanced Functions
Click Advanced>> in the PAS Window to show the Advanced functions,
Figure 409.
Button Description
Start Process/Stop Process Start or stop the selected process.
CAUTION: PAS does not perform dependency checks before starting or
stopping an individual process. Therefore, as a rule, stop or start
all processes rather than individual processes.
Disable/Enable Administration This button alternately disables and re-enables PAS
supervision for a process. For example, if a process fails to start for some reason,
consider removing it from PAS supervision so as not to affect the start-up
of other processes. This is generally used for troubleshooting.
Reinitialize Settings This is for technical support personnel. It is used when the registry
information for a process has been changed. When this occurs, the
process must be reinitialized in order for the changes to take effect.
Table 63. Advanced Functions for PAS Service
Button Description
Stop PAS/Start PAS Start PAS starts PAS. If the Autostart flag in the registry is set or is not
specified, PAS will begin the Start All sequence as soon as it starts.
Stop PAS stops PAS. Before the PAS service stops, it will shut down all
processes, including the ones that have administration disabled.
NOTE: Only Domain Administrator users are permitted to use start/stop.
Users in the PAS Operator list are not permitted to start/stop PAS.
Global Reinitialize This command can only be issued when all processes are stopped. It
tells the PAS service to completely erase all its data structures and
reinitialize them with current registry settings. Any changes in the
registry including PAS global setting, Node Type, and individual process
settings will take effect when this command is issued.
Send Message This displays a dialog for sending messages as an alternative method for
interacting with the PAS service, Figure 410. The messages that have
been sent to a process can also be read. This functionality is generally
reserved for debugging by technical support personnel.
View Logs Displays the execution log for the PAS service.
Stopping Oracle
To stop Oracle, first stop the Windows services for the Oracle database instance and
Listener. This is done via the Services utility in the Administrative Tools on the
Windows Control Panel. To do this:
1. Launch the Services utility as shown in Figure 411.
2. Scroll to the OracleServiceADVA service, right-click and choose Stop
from the context menu, Figure 412.
3. Scroll to the service named OracleTNSListener, right-click and choose Stop
from the context menu.
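As an alternative to the Services utility, the same two services can be stopped from a
command prompt opened with administrative privileges, using the service names shown
above:
rem Stop the Oracle database instance service
net stop OracleServiceADVA
rem Stop the Oracle listener service
net stop OracleTNSListener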
Managing Users
The Information Management installation creates default user accounts and user
groups. Use this section to understand how user access is managed for the
Information Management server before adding, removing, or otherwise editing any
user accounts.
User access is handled on four levels:
• Windows User - The computer where the server software is installed requires
you to log on with a Windows username and password. Windows users are
created and managed via the User Manager on the Windows control panel. A
default user configuration is created by the Windows installation. Other
Windows users may be created by the installing user.
The Information Management and Oracle software installations create
additional users and groups. These are described in Windows Users and Groups
for Information Management on page 570.
• Oracle Access - Oracle user authority is required by certain Information
Management processes for access to the Oracle database. A default user
configuration is created by the Information Management and Oracle software
installations. These users are described in Oracle Users on page 572.
• Display Client Users - Standard user files are provided for the desktop
applications - DataDirect, Desktop Trends, and Display Client. Additional
users can be created by copying and renaming an existing user file. For each new
user, a password can be specified, user preferences configured, and language
translations customized. Refer to Managing Users for Display Services on page
575.
Change the default passwords for certain Windows users and Oracle users
immediately after installing the Information Management software. This helps
protect the system from unauthorized access. Guidelines are provided in Securing
the System on page 572.
HistoryAdmin Group
This group is created by the Information Management software installation. Users in
this group have access to History configuration and History database maintenance
functions. These users can access History database maintenance functions and other
Information Management administrative procedures seamlessly without having to
change users or enter Oracle passwords.
This group is included in the PAS OperatorList by default. This grants all users in
the HistoryAdmin group access to PAS, even if these users are not Domain
Administrator-level users. This enables HistoryAdmin users to start and stop PAS
services as required by certain History database maintenance functions.
To grant HistoryAdmin users access to History database configuration, but deny
access to PAS, remove this group from the PAS OperatorList. Instructions for
editing the PAS OperatorList are provided in PAS OperatorList on page 574.
ORA_DBA Group
This group is created by the Oracle software installation. Users in this group have
Oracle access for database maintenance functions. Such access is generally reserved
for technical support personnel.
Oracle Users
The default Oracle user configuration created by the Oracle and Information
Management software installations is described in Table 64.
Table 64. Oracle Users
User(1) Description
SYS Created by Oracle.
SYSTEM Created by Oracle.
OUTLN Created by Oracle.
DBSNMP Created by Oracle.
HISTORY This user is created when the Oracle database instance is created. This user
has read access to Oracle tables and views.
OPS$OCSHIS This is the Oracle account for the IM history database.
OPS$_____ This is an Oracle user account created for the Windows user that installed the
Information Management software. This user account is classified as
EXTERNAL, meaning it does not have an Oracle password. To log into Oracle
when you are already logged in as the Windows user that installed the
Information Management software, just enter a slash (/) following the sqlplus
command. For example, sqlplus /.
(1) Since Information Management users are not licensed for write access to Oracle tables, the HISTORY user account
is the only Oracle user that operators should use.
The default Oracle user configuration does not have to be changed; however, it is
strongly recommended that the default passwords be changed for all Oracle users
with the exception of the HISTORY user, and users whose password is indicated as
EXTERNAL. This procedure is described in Securing the System on page 572.
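For example, read access with the HISTORY account can be verified from a command
prompt as sketched below; <password> stands for the HISTORY user's password, and the
view name is a placeholder, not an actual Information Management table name:
sqlplus history/<password>
SQL> SELECT COUNT(*) FROM <some_history_view>;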
PAS OperatorList
OperatorList is a configuration parameter for the PAS Service in the Windows
Registry. This parameter specifies a list of groups and users other than Domain
Administrator-level users who can use the PAS window to start and stop processes
under PAS supervision. Groups and users in this list have complete access to all
functions in the PAS window, except starting and stopping the PAS service itself.
Only Domain Administrator users can start and stop PAS. This list can be edited to
grant/deny users and groups access to PAS.
2. Navigate to the location in the registry where the processes under PAS
supervision are specified -
HKEY_LOCAL_MACHINE\SOFTWARE\ABB\PAS, Figure 415.
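If needed, the current OperatorList value can be inspected from a command prompt
before editing it; the value name OperatorList under this key follows from the
description in PAS OperatorList on page 574, while its exact data type depends on the
installation:
rem Show the PAS OperatorList registry value
reg query "HKEY_LOCAL_MACHINE\SOFTWARE\ABB\PAS" /v OperatorList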
When a folder is copied and renamed, the user name is the second word in the
name (following the underscore, for example in AID_aid.svg, the user name is
aid).
When a new user is created, create a unique password for that user. Refer to
Creating User Passwords on page 578.
Also, configure user preferences and/or customize language translations for that
user. Refer to Configuring User Preferences on page 579, and Customizing
Language Translations on page 585.
The first time you log in using a custom language, you are prompted to define any
unknown text strings, Figure 423. Either define the strings at this time, skip
individual strings, or skip all definitions.
To customize the language, choose User > Language from the menu bar and then
use the Edit Language dialog, Figure 424.
The Texts list displays the English version of all text used in the user interface.
Selecting a text line from the list displays the translation for that text according to
the language chosen for this session. English text is the default. Edit the translation
in the Translation field, and then click Apply.
Some texts contain special characters, as in the example following this list. DO NOT remove these special characters.
• & is used in menu texts to indicate that the next character is a mnemonic key.
• % is used by the system to fill in additional text such as the name of a display.
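For example, a menu text containing a mnemonic and a message text containing a
fill-in placeholder might be translated as follows; the strings themselves are
illustrative only and are not taken from the actual Texts list:
&File              ->  &Fichier
Loading display %  ->  Chargement de l'affichage %
In both cases the & and % characters are kept exactly where they appear in the
English text.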
Enter the hostname. Leaving the hostname blank defaults to localhost. As an option,
specify the maximum time to wait for the server to respond. Then click OK. This
displays the server status window, Figure 426.
Licenses These fields show the total number of purchased licenses, and
the available licenses not currently being used:
• Build - Build access allows use of Display Services both in
the Build mode and the Run mode.
• MDI - Multiple Document (Display) Interface. MDI Run
access allows multiple displays to be run at the same time.
• SDI - Single Document (Display) Interface. SDI Run access
allows one display to be run at a time.
• ADD - DataDirect.
• DP - Number of data providers (refer to Data Providers
Connected). Information regarding data providers is in
Section 15, Configuring Data Providers.
Appendix A Extending OMF Domain to TCP/IP
Requirements
Certain Information Management functions require the OMF domain to be extended
to the TCP/IP network and to all other ABB nodes that exist on that network.
Some of these functions are:
• Consolidating history data from different Information Management servers.
• Using one Information Management server to schedule event-triggered data
collection for logs that reside on a different node.
• Using a Display client to view History trend data for any Information
Management node within the OMF domain.
Example Applications
The OMF TCP/IP domain can be defined by Multicast communication, point-to-
point communication, or a combination of Multicast and point-to-point
communication. Example applications are described in:
• OMF TCP/IP Domain with Multicast Communication.
• OMF TCP/IP Domain with Point-to-Point and Multicast.
• OMF TCP/IP Domain with Point-to-Point Exclusively.
from communicating with any other OMF on the TCP/IP network having a different
Multicast address.
Multicast addresses look very similar to IP addresses. In fact, Multicast addresses
occupy a reserved section of the IP address range: 224.0.0.2 to 239.255.255.255.
The default Multicast address used when multicasting is first enabled is 226.1.2.3.
The default address is okay for systems that have their own TCP/IP network (for
small plants with no company intranet). For large companies with complex
intranets connecting multiple sites, the default address is NOT recommended.
(Figure: two Information Management nodes, EH-2 and EH-3, both assigned Multicast address 226.1.3.4.)
Any valid address may be selected and assigned to each of the Information
Management nodes that are required to be in the same Domain. Some companies
have the network administrator maintain control over usage of Multicast addresses.
This helps prevent crossing of Multicast-defined Domains, and the problems that
may result.
Once the OMF TCP/IP Domain where the Information Management node will
reside is defined, use the Communication Settings dialog to enable the OMF TCP/IP
socket, and assign the Multicast address. Use the following three fields in the lower
left corner of this dialog: TCP/IP Multicast enabled, Multicast address,
MulticastTTL. This procedure is described in Configuring OMF for TCP/IP on page
599.
TCP/IP protocol must be configured before configuring the Multicast address
and enabling the OMF TCP/IP socket. This is described in Configuring TCP/IP
Protocol on page 599.
(Figure: two OMF TCP/IP Domains over the same TCP/IP network, one containing EH1 and EH2 and the other containing EH1 and EH3; Multicast address 226.1.3.4; EH1 IP address 172.28.66.190.)
this domain by network hardware that does not support multicast. Therefore a
second domain is established for EH1 and EH4 for point-to-point communication.
In this case use the Communication Settings dialog to configure TCP/IP Multicast
enabled, Multicast address, and MulticastTTL for EH1, EH2, and EH3. This
procedure is described in Configuring OMF for TCP/IP on page 599.
The Socket Configuration Lists on EH1 and EH4 must be configured for point-to-
point communication. For EH4, because all OMF communication to this node
should be via point-to-point, Multicast should be disabled to prevent unwanted
OMF communication on the default Multicast address. Do this by deleting Multicast
from the Socket Configuration List. This is described in Configuring OMF Socket
Communication Parameters on page 602. For EH2 and EH3, leave the Socket
Configurations at the default values.
(Figure: Domain 1 uses Multicast between EH1, EH2, and EH3 with Multicast address 226.1.3.4; Domain 2 uses point-to-point between EH1 and EH4 through a bridge/gateway with Multicast disabled; EH1 IP address 172.28.66.190.)
(Figure: a single Domain using point-to-point communication exclusively between EH1, EH2, and EH3; EH1 IP address 172.28.66.190.)
Use the TCP/IP Configuration section to configure the required parameters and
enable OMF on TCP/IP. Refer to Table 66 for details.
All processes under PAS supervision must be stopped and then restarted for any
changes to take effect. Refer to Starting and Stopping History on page 424.
Field Description
TCP/IP Multicast enabled Enable or disable OMF for the TCP/IP network via this check box
(a check indicates enabled). Do not enable this parameter unless this
functionality is going to be used.
If distributed History logs are being used, then check TCP/IP Multicast
enabled, and increase the OMF shared memory:
• for History nodes that send History data to a consolidation node, add
5 meg to the OMF shared memory requirement.
• for a consolidation node that collects from one or more History nodes,
add 5 meg to the OMF shared memory requirement for each node
from which it collects History data. For example, if a consolidation
node is receiving from eight History nodes, the consolidation node will
require 8*5 = 40 meg additional OMF shared memory.
Multicast address This is the Multicast address used by OMF (omfNetworkExt and
omfNameProc). To extend the OMF domain to TCP/IP, the Multicast
Address must be the same in all nodes on a network, otherwise the nodes
will not be able to see each other.
A valid multicast group address that enables routing of multicast
messages must be in the range 224.0.0.2 to 239.255.255.255. The
default multicast address is 226.1.2.3. This is a valid multicast address
and can be used by OMF; however, it is recommended that this default
address NOT be used. Not using the default minimizes the possibility of
conflicts with other unknown nodes on the TCP/IP network. Contact the
network administrator to obtain an appropriate multicast address to ensure
secure communication between intended nodes.
Field Description
MulticastTTL This is the time-to-live value which indicates the number of router hops to
do before a message is discarded (not sent any more). This prevents
endless message loops that may occur as a result of unexpected or
unusual network partitioning.
0 = multicast messages are not sent at all
1 = multicast messages are sent only on the local subnet
>1 = multicast messages are forwarded to one or more hops
To extend the OMF domain to TCP/IP, this must be set >= 1; otherwise,
nodes will not hear each other.
Socket The socket configuration specifies this node’s scope of OMF access to
Configuration List other nodes in the domain.
The default is Multicast Send Receive. This means this node can see
and be seen by all other nodes in its multicast domain.
Restrict OMF access between certain nodes in a multicast domain by
configuring their respective socket configurations. For details on how to do
this, refer to Configuring OMF Socket Communication Parameters on page
602.
The default setting is multicast Send Receive. This means this node can see
(receive the signal from) and be seen by (send the signal to) all other nodes in its
domain.
To change access within the domain, edit the socket configuration list for all nodes
in the domain. A summary of the possible entries is provided in Table 67.
Table 67. Socket Configuration Options
multicast SEND This node can be seen by other nodes in the domain
that are configured to RECEIVE network maintenance
messages from this node. To do this, enter multicast
in the IPconfig field, click the Send check box, then
click Add.
multicast RECEIVE This node can see other nodes in the domain that are
configured to SEND network maintenance messages
to this node. To do this, enter multicast in the IPconfig
field, click the Receive check box, then click Add.
point-to-point SEND This node can only be seen by (send network
maintenance messages to) the node whose IP
address is specified. To do this, enter the node’s IP
address, click the Send check box, then click Add.
point-to-point RECEIVE This node can only see (receive network maintenance
messages from) the node whose IP address is
specified. To do this, enter the node’s IP address, click
the Receive check box, then click Add.
(Figure: Domain 1 contains EH1 and EH2; Domain 2 contains EH1 and EH3; Multicast address 226.1.3.4; EH1 IP address 172.28.66.190; all nodes on the TCP/IP network.)
a. Select the entry in the list. This puts the value for each of the configuration
parameters (IP config, Send, and Receive) in their respective fields.
b. Remove the check from the Receive box, Figure 435, then click Add.
This changes the entry to multicast Send. EH1 can now be seen by other nodes in
the domain configured to receive from EH1; however, EH1 can only see nodes from
which it is specifically configured to Receive.
2. Add an entry for EH2 to specify it as a node from which EH1 can receive:
a. Enter the IP address for EH2, 172.28.66.191, in the IP config field.
b. Enter a check in the Receive check box, and make sure the Send box is
unchecked. This configuration is shown in Figure 436.
Repeat the above procedure to modify the socket configurations for EH2 and EH3.
Refer to Figure 434 for the respective socket configurations.
To remove an entry from the Socket Configuration List, select the entry then click
Remove.
• for History nodes that send History data to a consolidation node, add 5 meg to
the OMF shared memory requirement.
• for a consolidation node that collects from History nodes, add 5 meg to the
OMF shared memory requirement for each node from which it collects data.
For example, if a consolidation node receives from eight History nodes, the
consolidation node will require 8*5 = 40 meg additional OMF shared memory.
To change the OMF Shared Memory size, edit the value; to set it back to the default,
click Set Default. When finished, click OK.
Processes under PAS must be stopped and restarted for changes to take effect.
Refer to Starting and Stopping History on page 424.
Shutdown Delay
This delays the Windows shutdown to let PAS shut down processes for History
Services. The default is 20 seconds. The delay may need to be increased depending
on the size of the History database. Use the Communication Settings dialog. To
launch this tool, from the Windows task bar, choose:
Start>Settings>Control Panel>Administrative Tools>PAS>Settings.
Appendix B General Log Property Limits
Introduction
Table 68 lists limits on general log properties. Refer to the section related to the
specific log type for more detailed information.
Appendix C Open Source Code Information
Introduction
This appendix contains information on the following open source code:
• cygwin.
• mkisofs.
cygwin
System 800xA Information Manager includes the software cygwin.exe which is
licensed to ABB Inc. pursuant to the GNU General Public License. You can obtain a
copy of that license at:
http://www.gnu.org/copyleft/gpl.html
The source code for the cygwin.exe software is installed with System 800xA
Information Manager in the following folder:
\ABB Industrial IT\Inform IT\History\cygwin
This folder is placed in C:\Program Files by default.
mkisofs
System 800xA Information Manager includes mkisofs.exe. The source code for the
mkisofs.exe software is installed with System 800xA Information Manager in the
following folder:
\ABB Industrial IT\Inform IT\History\cdrtools
This folder is placed in C:\Program Files by default.
The source code is made available to you under the terms of the Common
Development and Distribution License (CDDL) which you can find at:
http://www.opensource.org/licenses/cddl1.php
Any warranty or support offered by ABB for the mkisofs.exe software is offered
only by ABB and not by the entity that first makes the software available under the
CDDL and any terms in any ABB license for the software that differ from those in
the CDDL are offered only by ABB.
INDEX
Unique ID 522 F
data quality 95 file storage, History 247
Data Retrieval 584 file-based storage, property logs 454
Data Service Supervision (ADSS) 524 Filter Log List 431
data source, log configuration 216 Flat File
Database hsBAR Exclude Option 420, 424
Clean History 469 Instance Maintenance Wizard 137
create, drop 468 Free Space Report 446
Database Usage 440
Database1 518 G
DCSLOG, Data Provider 520 Generic Control Network Configuration 59
DCSOBJ, Data Provider 520 grade change 301
DDR, Data Provider 521
Deadband 235
H
deadband compaction 234
Hard Drives status 440
Deadband Storage Interval 238
Harmony Connectivity 518, 545
Defragmenting 589
hierarchical log 176, 193, 259
Device Behavior, Archive Device 326
history
Device Type, Archive Device 325
data type 200
direct log 258
service provider 36
direct log type 176, 191
History Access Importer 357
disable signal 64
History Control 440
Disappears with Acknowledge option 56
history log 31, 175 to 176, 193
Disk Usage Summary
History Log List 429
Instance Maintenance Wizard 137
History Log Template 176
domain 591
History Logs status 440
Dual Log 218
History Managers 441
dual log 177, 203
history source aspect 181
History Status 441
E HS_CONFIG, Environment Variable 479
effective log period 237 HS_DATA, Environment Variable 479
Entry Tables Report 449 HS_HOME, Environment Variable 479
environment variables 478 hsBackupApp.exe 426
EST_LOG_TIME_PER attribute 237, 451 hsBAR Options 419
Estimated Period 238 hsDBMaint 442
Event Collector 396 Hysteresis, alarm filter 52
event configuration 53 to 54
Event Filter 166
I
Event Text 61
IMHDA, Data Provider 520
event-driven data collection 179, 204
Inform IT Authentication Category 555
V
Version Info 589
View Report Logs 173
Volume Name Counter, Archive Device 327
Volume Name Format, Archive Device 327
Volume Quota, Archive Device 327
Volume, Read-Only 349
Volumes, Archive Device 327
Contact us
3BUF001092-510
ABB AB Copyright © 2003-2010 by ABB.
Control Systems All Rights Reserved
Västerås, Sweden
Phone: +46 (0) 21 32 50 00
Fax: +46 (0) 21 13 78 45
E-Mail: [email protected]
www.abb.com/controlsystems
ABB Inc.
Control Systems
Wickliffe, Ohio, USA
Phone: +1 440 585 8500
Fax: +1 440 585 8756
E-Mail: [email protected]
www.abb.com/controlsystems