3BUF001092-610 A en System 800xA Information Management Configuration
ABB may have one or more patents or pending patent applications protecting the intellectual property in the ABB
products described in this document.
The information in this document is subject to change without notice and should not be construed as a commitment
by ABB. ABB assumes no responsibility for any errors that may appear in this document.
Products described or referenced in this document are designed to be connected, and to communicate information and
data via a secure network. It is the sole responsibility of the system/product owner to provide and continuously ensure
a secure connection between the product and the system network and/or any other networks that may be connected.
The system/product owners must establish and maintain appropriate measures, including, but not limited to, the
installation of firewalls, application of authentication measures, encryption of data, installation of antivirus programs,
and so on, to protect the system, its products and networks, against security breaches, unauthorized access, interference,
intrusion, leakage, and/or theft of data or information.
ABB Ltd and its affiliates are not liable for damages and/or losses related to such security breaches, any unauthorized
access, interference, intrusion, leakage and/or theft of data or information.
ABB verifies the function of released products and updates. However, system/product owners are ultimately responsible
for ensuring that any system update (including but not limited to code changes, configuration file changes, third-party
software updates or patches, hardware change-out, and so on) is compatible with the security measures implemented.
The system/product owners must verify that the system and associated products function as expected in the environment
in which they are deployed.
In no event shall ABB be liable for direct, indirect, special, incidental or consequential damages of any nature or kind
arising from the use of this document, nor shall ABB be liable for incidental or consequential damages arising from use
of any software or hardware described in this document.
This document and parts thereof must not be reproduced or copied without written permission from ABB, and the
contents thereof must not be imparted to a third party nor used for any unauthorized purpose.
The software or hardware described in this document is furnished under a license and may be used, copied, or disclosed
only in accordance with the terms of such license. This product meets the requirements specified in EMC Directive
2014/30/EU and in Low Voltage Directive 2014/35/EU.
Trademarks
All rights to copyrights, registered trademarks, and trademarks reside with their respective owners.
Table of Contents
1 Configuration Overview
1.1 Configuration Requirements ............................................................................. 24
2 Verifying Services
2.1 Information Management Service Providers ..................................................... 29
2.2 Opening Plant Explorer ..................................................................................... 30
2.3 Checking Service Group/Service Provider Objects .......................................... 30
2.4 Creating Service Group/Service Provider Objects ............................................ 32
2.5 History Server ................................................................................................... 34
3 Configuring SoftPoints
3.1 SoftPoint Set-up Requirements ......................................................................... 38
3.2 Configuring SoftPoints ...................................................................................... 38
3.2.1 Adding a New SoftPoint Object Type ................................................... 40
3.2.2 Configuring a SoftPoint Object Type .................................................... 42
3.2.3 Adding Signal Types ............................................................................ 42
3.2.4 How to Delete a Signal Type ............................................................... 44
3.2.5 Deleting a SoftPoint Object Type ......................................................... 44
3.3 Configuring Signal Properties .......................................................................... 44
3.3.1 Signal Properties .................................................................................. 45
3.3.2 Configuring the Overall Range ............................................................ 45
3.3.3 How to Use Special Characters in the Engineering Units Tag ............. 47
4 Configuring Calculations
4.1 Using Calculations ........................................................................................... 79
4.1.1 Calculation Aspects ............................................................................. 80
4.1.2 User Interface ...................................................................................... 81
4.2 Set-up and Administration ................................................................................. 81
4.3 Configuring Global Calculations Parameters .................................................... 82
4.4 Configuring Calculations ................................................................................... 83
4.4.1 Adding a Calculation Aspect to an Object ........................................... 84
4.4.2 Using the Calculation Aspect ............................................................... 86
4.4.3 Editing the Calculation ......................................................................... 86
4.4.4 Mapping Calculation Variables ............................................................. 88
4.4.5 Writing VBScript .................................................................................. 90
4.4.6 Settings.UpdateStatus ......................................................................... 91
4.11.3 Use Case 3 - SoftPoint Signals Show Bad Data Quality when
Calculations are not Updating Data Quality ......................................... 133
4.11.4 Use Case 4 – Usage of External Libraries and COM/DCOM from a
Calculation Aspect ............................................................................... 133
4.11.5 Use Case 5 – Usage of Calculations in a Redundant Configuration .... 133
4.11.6 Use Case 6 - Calculations Service Diagnostic Messages ................... 134
8 Historizing Reports
8.1 Report Log Configuration Guidelines ................................................................ 177
8.2 Adding a Report Log ......................................................................................... 178
8.3 Report Log Aspect ............................................................................................ 179
8.3.1 Configure Report Log Operating Parameters ...................................... 180
8.3.2 Delete a Report Log ............................................................. 181
8.3.3 View a Report Log ............................................................................... 181
16 Authentication
16.1 Usage within Information Management ............................................................. 573
16.2 Configuring Authentication ................................................................................ 574
16.3 Configuring Authentication for Aspects Related to SoftPoint Configuration ..... 577
17 System Administration
17.1 Configuring OMF Shared Memory ................................................................... 582
17.2 Start-up and Shutdown ..................................................................................... 582
17.2.1 PAS Window ........................................................................................ 583
17.2.2 Starting and Stopping All Processes ................................................... 584
17.2.3 Advanced Functions ............................................................................ 585
17.2.4 Stopping Oracle ................................................................................... 588
17.3 Managing Users ............................................................................................... 589
17.3.1 Windows Users and Groups for Information Management .................. 589
17.3.2 Oracle Users ........................................................................................ 591
17.3.3 Securing the System ........................................................... 592
17.3.4 PAS OperatorList ................................................................................. 594
17.4 Managing Users for Display Services ............................................................... 596
17.4.1 Creating Users ..................................................................................... 597
17.4.2 Creating User Passwords .................................................................... 598
17.4.3 Configuring User Preferences ............................................................. 600
About this User Manual
The System 800xA Safety AC 800M High Integrity Safety Manual (3BNP004865*)
must be read completely by users of 800xA High Integrity. The recommendations
and requirements found in the safety manual must be considered and implemented
during all phases of the life cycle.
Any security measures described in this user manual, for example, for user access,
password security, network security, firewalls, virus protection, and so on, represent
possible steps that a user of an 800xA System may want to consider based on a risk
assessment for a particular application and installation. This risk assessment, as well
as the proper implementation, configuration, installation, operation, administration,
and maintenance of all relevant security related equipment, software, and procedures,
are the responsibility of the user of the system.
This User Manual provides instructions for configuring Information Management
functionality for the 800xA System. This includes:
• Data access via data providers and Open Data Access.
• SoftPoint Services.
• Calculations.
• History Services for:
– Logging numeric (process) data collected from object properties.
– Logging alarms/events.
– Storing completed reports executed via the application scheduler.
– Archival of the aforementioned historical data.
– Consolidation of message, PDL, and property logs.
These Information Management functions comprise a repository of process, event and
production information, and provide a single interface for all data in the system.
This user manual is intended for application engineers who are responsible for configuring
and maintaining these applications. This user manual is not the sole source of instruction
for this functionality. It is recommended that you attend the applicable training courses
offered by ABB.
User Manual Conventions
Electrical warning icon indicates the presence of a hazard that could result in electrical
shock.
Warning icon indicates the presence of a hazard that could result in personal injury.
Tip icon indicates advice on, for example, how to design your project or how to use
a certain function.
Although Warning hazards are related to personal injury, and Caution hazards are
associated with equipment or property damage, it should be understood that operation
of damaged equipment could, under certain operational conditions, result in degraded
process performance leading to personal injury or death. Therefore, fully comply with
all Warning and Caution notices.
Terminology
A complete and comprehensive list of terms is included in System 800xA Terminology
and Acronyms (3BSE089190*). The listing includes terms and definitions that apply to
the 800xA System where the usage is different from commonly accepted industry standard
definitions.
Released User Manuals and Release Notes
1 Configuration Overview
1.1 Configuration Requirements
• Calculations Configuration
Calculation Services is used to configure and schedule calculations that are applied
to real-time database objects, including both SoftPoints and actual process points.
To configure calculations, refer to Section 4 Configuring Calculations.
• History Configuration
History Services database configuration requires a History database instance. The
default instance is sized to meet the requirements of most History applications. It
may be necessary to drop the default instance and create a new larger instance
depending on performance requirements. Refer to Section 5 Configuring History
Database. Once the Oracle database instance is created, use the following
procedures to configure historical data collection, storage, and archival for the
system:
– Configuring Property Logs
Process data collection refers to the collection and storage of data from aspect
object properties. This includes live process data, SoftPoint data, and lab data
(programmatically generated, or manually entered). The objects that perform
process data collection and storage are called property logs. Process data
collection may be implemented on two levels in the 800xA system. The standard
system offering supports short-term data storage with trend logs. When
Information Management History Server is installed, long-term storage and
permanent offline storage (archive) is supported with history logs. For instructions
on how to integrate and configure trend and history logs, refer to Section 9
Historical Process Data Collection.
– Alarm/Event Message Logging
All alarm and event messages for the 800xA system, including Audit Trail
messages, are collected and stored by the 800xA System Message Server.
This provides a short-term storage facility with the capacity to store up to 50,000
messages. If the system has Information Management History Server installed,
the messages stored by 800xA System Message Server may be forwarded to
a message log for extended online storage. This message log can store up to
12 million messages. Refer to Section 7 Alarm/Event Message Logging.
– Archive Configuration
The archive function supports permanent offline storage for historical data
collected in property, message, and report logs, as well as the 800xA System
alarm/event message buffer. When a history log becomes full, the oldest entries
are replaced by new entries, or deleted. Archiving copies the contents of selected
logs to a designated archive media. For instructions on configuring archiving
refer to Section 11 Configuring the Archive Function.
– Configuring Log Sets
Log sets allow one command to start or stop data collection for a number of
property, message, or report logs simultaneously. For instructions on configuring
log sets, refer to Section 6 Configuring Log Sets.
– Configuring Report Logs
Report logs hold the finished output from reports scheduled and executed via
the Application Scheduler and the Report action plug-in. Completed reports
may also be stored as completed report objects in the Scheduling structure. In
either case the Report action must be configured to send the report to a report
log, or a completed report object. This is described in the section on creating
reports in System 800xA Information Management Data Access and Reports
(3BUF001094*). For instructions on configuring the report log to store finished
reports, refer to Section 8 Historizing Reports.
– Consolidating Historical Data
Historical process data, alarm/event data, and production data from remote
history servers can be consolidated on a dedicated consolidation node. For
instructions on consolidating historical data, refer to Section 12 Consolidating
Historical Data.
– History Database Management
The History Services database may require some configuration and
management, before, during, and after configuration of History Services. For
example, it may be necessary to drop and then re-create the history database,
allocate disk space for storage of historical data, or stagger data collection for
property logs to optimize performance. These and other history administration
procedures are described in Section 13 History Database Maintenance.
2 Verifying Services
2.1 Information Management Service Providers
The service group/service provider objects for History, Archive and Application Scheduler
are automatically created for their respective Information Management servers when the
Process Administration Service (PAS) is initialized during the post installation set-up for
the Information Management server. The Calculations service provider must be manually
created.
The Calculations and Application Scheduler may also be installed and run on other types
of server nodes in the 800xA system (not Information Management servers). In this case,
the service group/service provider objects for those services must be created manually
for each of those non-Information Management servers. This procedure is described in
this section.
For SoftPoint Server, the service group/service provider objects are created when a
SoftPoint Generic Control Network object is created in the Control structure.
Other SoftPoint server set-up is also required. This is described in SoftPoint Set-up
Requirements on page 38.
All procedures related to service group/service provider objects are done via the Plant
Explorer. The remainder of this section describes how to open the Plant Explorer
workplace, verify that these objects exist, create the objects if necessary, and
enable or disable a service provider.
2.3 Checking Service Group/Service Provider Objects
After restoring a system that contains IM logs, check that the Service Group for each
IM log is present in the Log Configuration. If the Service Group is missing, a new
group has to be configured.
2.4 Creating Service Group/Service Provider Objects
3. Configure the service provider object to point to the node where the service (in this
case Calculations) must run (reference Figure 2.2).
a. Select the Service Provider object in the object browser.
b. Select the Service Provider Definition aspect from the object’s aspect list.
c. Select the Configuration tab.
d. Use the Node pull-down list to select the node where the service will run.
e. Check the Enabled check box.
f. Click Apply.
2.5 History Server
3 Configuring SoftPoints
SoftPoint Services is used to configure and use internal process variables not connected
to an external physical process signal. Once configured, the SoftPoints may be accessed
by other 800xA system applications as if they were actual process points. For example,
SoftPoint values may be stored in property logs in History Services. Reporting packages
such as Crystal Reports may access SoftPoints for presentation in reports.
Desktop tools such as Desktop Trends and DataDirect can read from and write to
SoftPoints.
SoftPoint alarms can be configured and are directly integrated with system alarms and
events. Engineering unit definitions for SoftPoints include minimum/maximum limits and
a unit descriptor.
SoftPoint Services can be configured as redundant on a non-redundant Information
Management server pair or on a redundant connectivity server. For greatest efficiency
between Calculations and SoftPoints, put the primary Calculation Server on the same
machine as the primary SoftPoint Server.
The SoftPoint Services software may run on multiple servers. Each SoftPoint server can
have up to 2500 SoftPoint objects. Each SoftPoint object can have up to 100 signals;
however, the total number of signals per server cannot exceed 25,000.
Supported data types are: Boolean, integer (32-bit), single-precision floating point (32-bit),
and string. Double-precision floating point (64-bit) is also supported as an extended data
type.
The SoftPoint object types specify the signal properties for the SoftPoint objects. This
includes signal ranges for real and integer signals, alarm limits, event triggers, and
whether or not the signals will be controllable. The process objects inherit the properties
of the object types from which they are instantiated.
3.1 SoftPoint Set-up Requirements
3.2 Configuring SoftPoints
Some SoftPoint properties are configured on an individual basis for instantiated SoftPoint
objects in the Control structure. For example, the event text associated with limiter trip
points is configured for real and integer signal types.
Further, most of the default settings configured in the SoftPoint object types may be
adjusted on an individual basis for SoftPoint objects in the Control structure.
When creating new SoftPoint objects, or making changes to existing SoftPoint objects,
the new configuration will not go into effect until the changes are deployed.
When changes are applied to a SoftPoint object type, those changes are propagated
to all SoftPoint objects that have been instantiated from that object type. A warning
message is generated any time a change is applied to a SoftPoint object type, even
if there are no instantiated objects for that type. Disable this warning message while
configuring SoftPoint object types, and enable it only after the SoftPoint configuration
has been deployed. To enable/disable this warning message, refer to
Enabling/Disabling Warning for Changes to Object Types on page 73.
The basic procedures covered in this section are:
• Adding a New SoftPoint Object Type on page 40.
• Configuring a SoftPoint Object Type on page 42.
• Adding Signal Types on page 42.
• Deleting a SoftPoint Object Type on page 44.
• Configuring Signal Properties on page 44.
• Configuring Alarms and Events for Signal Types on page 49.
• Defining Alarm Text Groups on page 57
• Instantiating a SoftPoint Object in the Control Structure on page 62.
• Adjusting Alarm/Event Properties on page 67.
• Adjusting Properties for Signals on page 70.
• Deploying a SoftPoint Configuration on page 72.
• Name Rules on page 74.
3. In the New Object dialog click the Common tab and select SoftPoint Process
Object Type, Figure 3.1.
4. In the New Object dialog, assign a name to the new object, then click Create. This
adds the new object type to the SoftPoint Object Type group, Figure 3.2.
Figure callouts: 1) New Object Type Selected; 2) Process Object Configuration Aspect Selected; 3) Configuration View.
The available signal types are Binary, Integer, Real, and String.
Signals must be added and deleted in the Object Type structure. Signals cannot be
added or deleted directly on SoftPoint objects in the Control structure. Do not copy,
cut, paste, move, or rename signal types. When a signal type is added to a SoftPoint
object type, a signal of this signal type is added to all SoftPoint objects based on the
SoftPoint object type. Deleting a signal type deletes all signals of this type.
The Add Signal buttons are located in the upper left part of the SoftPoint Process Object
Configuration aspect, Figure 3.4.
Figure 3.4: Add Signal Buttons (Binary, Integer, Real, String)
3.3 Configuring Signal Properties
2. Select a character.
3. Click OK. The Character Map dialog box closes and the character is added to
the engineering unit.
4. Repeat steps 1-3 to select additional characters.
5. Click Apply.
3.4 Configuring Alarms and Events for Signal Types
Authentication for SoftPoint signal types is configured on an individual basis for each
signal property. Instantiated signals inherit authentication from their respective SoftPoint
object types. Authentication may be changed for instantiated signals in the Control
structure.
To configure authentication for a SoftPoint signal type, Figure 3.10:
1. Click the Property Permissions tab on the Signal Configuration aspect.
2. Select the Property from the property list.
3. Select the Authentication Level from the pull-down list and click Apply.
Trip points must occur within the signal range and should be configured BEFORE
adding the limiters. Refer to Configuring the Overall Range on page 45.
Binary signals, and limiters for real and integer signals, can be configured to generate
an alarm and/or event when a state change occurs, and to specify how alarm
acknowledgement will occur. This is done via the signal type’s Alarm Event Configuration
aspect. This procedure is described in Configuring Alarm and Event Handling on page 53.
Figure 3.11: Alarm Event Configuration Aspect for Real and Integer Signal Types
2. Select Limiter1 from the Limiters list under the Limiter tab and select the Use
Limiter check box.
7. Click Apply.
8. Repeat steps 2-7 to configure up to four limiters. An example of a completed limiter
list is shown in Figure 3.13.
To remove a limiter, select the limiter and clear the Use Limiter check box under the
Limiter tab and click Apply.
Alarm/event handling for limiters is configurable. Event generation is enabled for all
limiters by default. Refer to Configuring Alarm and Event Handling on page 53.
The event text for each limiter must be configured in the instantiated object’s
Alarm/Event Configuration aspect in the Control structure. This is covered in
Instantiating a SoftPoint Object in the Control Structure on page 62.
Events
To specify that an event be generated when a signal state changes (reference Figure
3.14):
1. Select the signal type object.
2. Select the Alarm Event Configuration aspect.
Figure 3.14 demonstrates how to enable and configure events for a binary signal
type. When configuring events for a real or integer signal limiter, the Alarm Event
Configuration aspect will have a third tab for selecting the limiter whose event is being
configured (Figure 3.11).
3. Click the Is an event check box on the Event tab. The default is to have events
generated for both state changes (On-to-Off and Off-to-On). This is specified by
having both Log status changes check boxes unchecked.
To generate events for both state changes, both the Is an event and Is an alarm
check boxes must be checked, and both boxes for log status change unchecked.
4. As an option, use the check boxes to make the event occur exclusively for one state
change or the other:
• Checking Log status changes ‘On’ and NOT checking Log status changes
‘Off’ will result in events only when the state changes from Off to On (rising
edge), Figure 3.15.
Figure 3.15: Event Generated Only When State Changes from Off to On
• Checking Log status changes ‘Off’ and NOT checking Log status changes
‘On’ will result in events only when the state changes from On to Off (falling
edge), Figure 3.16.
Figure 3.16: Event Generated Only When State Changes from On to Off
5. As a further option, change the Alarm text group. Alarm text groups specify the
messages to be displayed for certain events. All events belonging to the same group
will be displayed and/or printed with the same text for a particular change of state.
Alarm text groups are configured as described in Defining Alarm Text Groups on
page 57. To change the alarm text group for a signal, use the Alarm text group
pull-down list. This list contains all Alarm text groups that have been configured in
the system.
If alarm handling for the signal is needed, refer to Alarms on page 55.
Alarms
To show a signal in the alarm list, alarm line and printouts when the alarm status is
activated, specify that the signal Is an alarm. This is done via the Alarm tab on the Alarm
Event Configuration aspect. Also, the signal must be defined as an event by checking
the Is an event check box under the Event tab of the Alarm Event Configuration aspect
(Events on page 54).
The Alarm tab is also used to specify how alarm acknowledgement will occur, alarm
priority, and a time delay to filter out spurious alarms.
To configure alarm handling for a signal:
1. Make sure the Is an event check box is selected on the Events tab.
2. Select the Alarm tab and check the Is an alarm check box, Figure 3.17.
Option: Description
• Normal acknowledge: Show alarm as long as it remains active or unacknowledged.
• No acknowledge: Show alarm as long as it remains active.
• Disappears with acknowledge: Show alarm as long as it remains unacknowledged. When the alarm is acknowledged, it is removed from the alarm list regardless of the current alarm status. The system will not detect when the alarm becomes inactive. Thus a printout is not available for when an alarm became inactive, only for when it was acknowledged.
Alarm list colors are set in the Alarm and Event List Configuration aspect.
4. Enter a priority as a number from 1 (lowest) to 1000 (highest). The priority is used
to sort the alarm list with the top priority alarm first. The default is 1.
5. Set a time delay. The time delay is the number of seconds the event has to remain
active before any event actions take place. This filters out very short events. Entering
zero (0) means there is no delay.
6. Click Apply.
3.5 Defining Alarm Text Groups
3. This displays the New Alarm Text Group dialog as in Figure 3.19. Enter a name for
the new group in this dialog. To change an existing group, delete the text group
from the list and add a new one.
4. Enter the status texts in their respective boxes as described in Table 3.3. All texts
can be up to 16 characters maximum. Refer to Name Rules on page 74. Properties
with an asterisk (*) can only be changed in the default group.
5. Click Apply.
6. Restart the alarm and event server for the node where the SoftPoints are deployed.
To do this:
a. Go to the Service Structure.
b. Select the applicable Alarm and Event Service Provider under the Alarm and
Event Services category.
c. Click on the Service Provider Definition aspect.
d. Click on the Configuration tab.
e. Uncheck the Enabled check box and click Apply.
f. Check the Enabled check box and click Apply.
Alarm text groups can also be created under the Library Structure. To add a new alarm
text group, right-click the Alarm Text Group and click New Object, as shown in Figure 3.20.
3.6 Deleting An Alarm Text Group
3.7 Instantiating a SoftPoint Object in the Control Structure
Deploy the SoftPoint Generic Control Network for any changes made to SoftPoint
objects and signals to take effect.
There are two methods for creating SoftPoint objects. Either create the objects
one at a time, or perform a bulk instantiation of a specified number of objects
based on a selected object type. Refer to the applicable instructions below.
• Creating One Object at a Time on page 62.
• Bulk Instantiation of SoftPoint Objects in the Control Structure on page 64.
3. In the New Object dialog, select the object type whose properties will be used and
enter a unique name for the object.
4. Click Create, Figure 3.22.
This adds a new SoftPoint object in the Control structure, Figure 3.23.
3. Click the Add Objects button in the top left corner of this aspect.
This displays a dialog for selecting the object type on which to base the objects
being instantiated, and for specifying the number of objects to be instantiated. An
example is shown in Figure 3.25.
4. Use the browser on the left side to select the object type from which to instantiate
the new objects (for example, Area1_Pump is selected in Figure 3.25).
5. Specify the number of objects to create in the Number of new objects field. In this
example, nine (9) new objects will be created.
6. Enter the Name of the new object that will be used as the base name for all new
objects to be created. Each new object will be made unique by appending a
sequential numeric suffix to the same base name starting with the specified Starting
number (in this case: 2).
7. Click OK. The result is shown in Figure 3.26.
3.8 Adjusting Alarm/Event Properties
4. To enter the event text, click the Event2 tab and then enter the text in the Message:
field, Figure 3.29. The maximum number of characters in the Message field is 50.
If additional event text is required, use the Extended event text: edit window.
5. The Alarm tab, Figure 3.30, is used to adjust the alarm handling parameters originally
configured as described in Configuring Alarm and Event Handling on page 53.
6. The Alarm2 tab, Figure 3.31, is used to configure class. Classes are groups of alarms
without any ranking of the groups. The range is 1 to 9999.
3.9 Adjusting Properties for Signals
For details regarding all other tabs on this aspect, refer to Adjusting Alarm/Event
Properties for Binary Signals on page 67.
Most of the other signal properties are inherited from the respective signal types. As an
option, disable the default settings from the Range tab, Figure 3.34.
3.10 Deploying a SoftPoint Configuration
3. Click Deploy to start the deploy process. During this time, SoftPoint processing is
suspended; current values and new inputs are held. This process is generally
completed within five minutes, even for large configurations. Completion of the
deploy process is indicated by the Deploy ended message, Figure 3.36.
3.12 Name Rules
Description: Characters
• Lower case: a-z, å, ä, ö, æ, ø, ü
• Upper case: A-Z, Å, Ä, Ö, Æ, Ø, Ü
3.13 Working with SoftPoints Online
2. The SoftPoint Object dialog shows the signals configured for this SoftPoint object.
Double-click on any signal to show the corresponding faceplate, Figure 3.39.
3. Open the faceplate for another signal either by using the pull-down list, Figure 3.40,
or by clicking the up/down arrows.
Attempting to access a SoftPoint object with a very long name, or performing a group
OPC write to more than 64 SoftPoint signals at a time may result in the following error
message:
The buffer size is insufficient
Error code:
E_ADS_DS_INSUFFICIENT_BUFFER
(0x8ABB04215)
4 Configuring Calculations
The calculations service group and service provider must be manually created on any
node where calculations will be configured to run. See the procedure Creating Service
Group/Service Provider Objects on page 32.
4.1 Using Calculations
4.2 Set-up and Administration
Figure 4.3: Calculation Aspect for SoftPoint Object in the Control Structure
4.3 Configuring Global Calculations Parameters
Deviating from the defaults may cause system overload and errors. These special
parameters should only be adjusted by qualified users.
4.4 Configuring Calculations
The Show All check box may need to be enabled to find this aspect.
Use the variable grid to map input and output variables to their respective OPC tags.
Refer to Mapping Calculation Variables on page 88. Then use the edit window to write
the VBScript. Refer to Writing VBScript on page 90.
Figure callouts: Variable Grid; Edit Window for VBScript; Trace Window.
Use the Insert Line and Delete Line buttons on the toolbar, Figure 4.9, to add and delete
variables in this map as required. Up to 50 variables can be specified. The variables
defined in this grid can be used without having to declare them in the VBScript. The
parameters needed to specify each variable are described in Table 4.2.
4.4.6 Settings.UpdateStatus
Settings.UpdateStatus defaults to False. This is to optimize performance. Certain
properties, mostly related to debugging, will not be updated when the calculation is run
online. These are described in Reading Calculation Status Information on page 123. Set
the update status to True for debugging purposes if necessary; however, it is generally
recommended not to run calculations in this mode.
If relatively few calculations are running at a modest rate of execution, run the
calculations with Settings.UpdateStatus = True. This allows the Result property to be
used to link calculations. Refer to Calculation Result on page 93.
The calculation can also set the update status to True every n executions to
periodically update status-related parameters. An example is provided in
Improving Performance on page 130.
Settings.UpdateStatus
To set the update status to True, either change the value of Settings.UpdateStatus to
True or comment out the line.
DO NOT delete the line. This would require re-entering the line to reset the update
status to False.
When True, versioning cannot be used. This is because online values of the
Calculation aspect differ between environments.
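A minimal sketch of the alternatives described above, as the line appears in a calculation script:

Settings.UpdateStatus = False   ' default: status properties are not refreshed
' To enable status updates, either set the value to True:
' Settings.UpdateStatus = True
' ...or comment this line out. Do not delete it; re-entering it is the only
' way to set the update status back to False.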
Online Variables
Online input variables declared in the grid contain read-only values that correspond to
the value of the referenced object and property. The variable within the script can be
modified, but these changes are not persisted between executions and the value after
execution is not written to the actual process value.
Online output variables declared in the grid contain the actual process values of the
referenced objects and properties before script execution. Also, any modification of these
variables from within the script will modify the actual process values after execution
completes.
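A minimal sketch of both behaviors, assuming hypothetical grid variables IN_1 (online input) and OUT_1 (online output) that expose a Value property, as the quality example later in this section does for Quality:

temp = IN_1.Value        ' snapshot of the referenced property at execution start
IN_1.Value = 0           ' allowed in the script, but never written back to the process
OUT_1.Value = temp * 2   ' written to the referenced property after execution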
Offline Variables
Offline values are used in place of the actual (Online) values when the Online flag in the
State column is set to False. These values are also used when executing the calculation
in offline mode. Offline variables are not supported in an engineering environment.
Offline input variables are like constants. They contain the value as designated in the
Offline Value column during execution. The variable may be modified within the script,
but changes are not persisted between calculation executions. Further, offline values
are not updated when the Update Status is set to False (default).
Offline output variables function like internal SoftPoints. Before calculation execution,
the variable is set to the value contained in the Offline Value column. During execution,
the variable can be modified via the script. After execution, the change (if any) is written
back to the Offline Value. This offline value is not accessible to any client other than the
calculation itself. This is useful for creating static variables (values are persisted between
executions) where the values are only used internally within the calculation. For example,
an internal counter could record the number of times the calculation has executed.
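A minimal sketch of such a counter, assuming a hypothetical offline output variable named COUNT declared in the grid:

' COUNT is loaded from the Offline Value column before execution and written
' back afterwards, so the count persists between executions.
COUNT.Value = COUNT.Value + 1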
Calculation Result
The Result property is only applicable when the Update Status is set to True.
Before execution, the previous result value is loaded into the variable and is available
for use by the script. The result value can be set within the calculation’s VB script, for
example result.value = 1. After execution, the value is written back to the calculation's
Result property. The result value is then accessible by OPC clients, and may also be
referenced within other calculations as an input variable in the calculation’s variable grid.
The result quality may also be set within the calculation script, for example: result.quality
= 0. However, any change to this value is not saved once the calculation has completed;
the result quality always reverts to good.
If history is to be collected for the result of a calculation, it is best not to collect directly
from the Result property. For best performance, write the result to a SoftPoint from
which the data may then be collected.
The calculation result can be used to link calculations without having to use an external
SoftPoint. When Update Status is set to True, the result variable of the calculation aspect
can be used to link one or more calculations. The result of one calculation can be used
as the input and trigger for another calculation. This is illustrated in the following example.
Example:
M1 consists of two calculations: CalcA and CalcB. The result property of CalcA will be
used to trigger CalcB.
The script for CalcA sets its result property to an input, in this case INPUT1, Figure
4.12.
CalcB then subscribes to CalcA's result property as an input, Figure 4.13. In this case,
the event flag is also set to True so that each time the result of CalcA changes, CalcB
executes. CalcB could also be executed manually or scheduled instead.
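A minimal sketch of the CalcA script, following the description above (INPUT1 is the grid input variable; a Value property is assumed):

' CalcA: publish the current input through the Result property.
result.value = INPUT1.Value
' CalcB maps CalcA's Result property as a grid input with the event flag set
' to True, so CalcB executes whenever the result changes.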
In addition, some OPC properties will not convert to the appropriate data type before
allowing the data to be committed. For example, attempting to write the value 255 to the
NAME:DESCRIPTION property of an object will not work. However, writing the value
"255" or using the CSTR conversion function will work. This is illustrated in Figure 4.16.
It may be useful to propagate the data quality from one SoftPoint signal to another,
typically from an input to an output. Given a calculation with an input variable IN_1 and an
output variable OUT_1, the following line of code will propagate quality from the input to
the output: OUT_1.Quality = IN_1.Quality
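Placed in context, a sketch that copies the value and its quality together (IN_1 and OUT_1 are hypothetical grid variables):

OUT_1.Value = IN_1.Value       ' copy the value...
OUT_1.Quality = IN_1.Quality   ' ...and propagate its data quality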
Calculation Services supports Windows Regional settings for French, German, and
Swedish languages. If floating point values are to be used in calculations, be sure to
include the SetLocale function in the calculation script in order for the calculation to
process the floating point values correctly. An example is illustrated in Figure 4.17. The
syntax is:
SetLocale("lang")
where lang is the code for the native language:
• fr = French.
• de = German.
• sv = Swedish.
Figure 4.17: SetLocale Function for Floating Point Values in Native Language
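A minimal sketch for German regional settings (the numeric literal is illustrative):

SetLocale("de")    ' interpret numeric strings using German conventions
Dim x
x = CDbl("3,14")   ' the comma decimal separator now parses correctly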
The Calculation Editor view does not prompt the user when exiting while changes are
pending. Use the Save button before leaving the display, or pin the aspect.
Icon notes:
• Saving changes completely replaces the original calculation. To preserve the original, make a second copy to edit.
• Enable/Disable: Each calculation must be assigned to a specific Service Provider BEFORE enabling the calculation. This procedure is described in Distributing Calculations on Different Servers on page 124.
• Run Calculation on the Server (Online): The calculation must be saved and enabled before it is run.
• Run Calculation on the Client (Offline): The calculation must be saved before it is run.
Cyclic Schedule
To specify a cyclic schedule, make sure the Schedule check box is unchecked, and
then check the Cycle check box. In the corresponding fields specify the interval unit
(hours, minutes, seconds), and the number of intervals. The example in Figure 4.20
shows a one-second cycle.
Time-based Schedule
To specify a time-based schedule, make sure the Cycle check box is unchecked, and
then check the Schedule check box. In the corresponding fields specify the time
components (year, month, day, hour, minute, second). The * character is a wildcard
entry meaning 'every'. Figure 4.21 shows a time-based schedule that will execute
at 12:01:01 (one minute and one second after noon) on every day of every month,
every year.
Offline Execution
In the offline mode, the calculation uses the offline values specified on the variable grid.
This way the calculation execution does not affect actual process values. The calculation
must be saved before it can run.
Online Execution
In the online mode the calculation uses the online values specified on the variable grid.
Save and enable the calculation before running the calculation in this mode.
Assign each Calculation to a specific Service Provider BEFORE enabling the
calculation. This is described in Distributing Calculations on Different Servers on page
124.
4. Click Create. This creates the new job under the Job Descriptions branch, and adds
the Schedule Definition aspect to the object’s aspect list.
5. Click the Scheduling Definition aspect to display the configuration view, Figure
4.22. This figure shows the scheduling definition aspect configured as a periodic
schedule. The calculation will be executed once every hour, starting July 2 at 17:00
(5:00 PM) and continuing until July 9 at 17:00.
6. Specify the calculation to be scheduled by entering the path to the object containing
the calculation in the Object field. Either type the full pathname to the object in the
field, or drag the object from the object browser in the Plant Explorer.
7. Next select the calculation aspect using the Calculation Aspect combo box. As an
option, type the name of the calculation aspect.
8. Click Apply to save the action. The referenced calculation will execute according
to the schedule defined in the associated job description object.
4.5 Instantiating Calculation Aspects On Object Types
4.6 OPC Access to Calculation Properties
4.7 Making Bulk Changes to Instantiated Calculations
3. Using the Calculation Status Viewer, select the group of calculations to be modified
with the change. Be sure to include the source calculation in the selected set. Then
right click on the Calculation Status Viewer list and choose Clone from the context
menu, Figure 4.25.
This menu item will be available only if the following conditions are met:
• The current user has aspect modification privileges.
• All calculations in the selected set are disabled.
• The set of selected calculations contains more than one element.
5. Next, select the properties to be cloned. Checking a box indicates that this property
should be cloned from the source calculation to all other calculations in the Clone
list. In this example, the TRACEENABLED property is selected, Figure 4.26. Note
that TRIGGERTEXT is for the Scheduler.
4.8 Object Referencing Guidelines
Each component (plant2, section1, and so on) references a specific object in the structure
hierarchy. Each component is delimited by a separator character. The dot (.) and slash
(/) are default separators. Specify different characters to use as separators using the
aspect [Service Structure]Services/Aspect Directory-> SNS Separators. The Name Server
must be restarted after any changes.
If a separator character is actually part of an object name, it must be delimited with
a “\” backslash in the object reference. For example, the string AI1.1 will search for
an object named 1 below an object named AI1. To search instead for an object named
AI1.1, reference the object as: AI1\.1.
Absolute object references can be refined using the following syntax rules:
• . (dot) - search for the following object (right of dot) below the previous object (left of
dot). This includes direct children as well as their offspring.
• .. (double dot) - go one level up to the parent.
• [down] - (default) searches from parent to children.
• [up] - searches from child to parent.
• [direct] - Limits search to just direct children. Refer to Direct on page 115.
• [structure] - Limits search to a specified structure, for example Functional Structure
or Control Structure. Refer to Structure Category on page 116.
• [name] - Limits search to the main object name (of category name). Refer to Name
on page 117.
This syntax can be used for powerful queries. Also, restricting a search using these
keywords can significantly improve the search speed. For example, the following query
path will start from a Signal object, and will retrieve the board for the signal and the rack
where this board is located:
.[up][Control Structure]{Type Name}board[Location Structure]{Type Name}rack
Direct
By default, queries search for the referenced object at all levels below the parent object,
not just for direct children. To limit the search to direct children, use the [Direct] keyword.
For example, the query path plant2.motor3 will find motor3 in all of these structures:
heyden.plant2.section3.motor3, plant2.section3.drive4.motor3, and plant2.motor3. The
query [Direct]plant2.motor3 will find motor3 only at plant2.motor3.
For further considerations regarding the Direct keyword, refer to Using the [Direct]
Keyword on page 120.
Structure Category
All structures are searched by default. Restrict the search to a specific structure by
specifying the structure name, Table 4.9. For example:
[Functional Structure]drive3.motor2 searches only in the Functional Structure.
The query [Functional Structure]drive3.motor2[]signal4 starts in the Functional Structure
until it finds an object named motor2 below an object named drive3, and then searches
all structures for an object named signal4 below motor2.
Use the [Direct] keyword in combination with a structure reference, for example:
[Direct][Functional Structure]plant2.section1. Do not specify several structures. For
example, the following syntax is not allowed: [Functional Structure][Control
Structure]motor2. If this functionality is needed, use the IAfwTranslate::PushEnvironment
method of the Name Server.
To negate a query path, add the not symbol. For example, [!Functional Structure]motor2
searches for a motor2 not occurring in the Functional Structure.
For further considerations regarding Structure, refer to Using the Structure Name on
page 121.
Name
An object can have a number of different names. Each name is assigned as an aspect.
The name categories are described in Table 4.10. The main name is the name that is
associated with the object in the Plant Explorer. This is generally the name used for
absolute object references. By default a query will search for all names of a given object.
This may provide unexpected results when objects, other than the one that was intended
to be referenced, have names that fit the specified object reference. For example the
query A* may return objects whose main name does not start with A, but whose OPC
Source Name does start with A. This can be avoided by using the [name] keyword. This
restricts the search to main names (category name) only. For example: {Name}A*.
Examples
It is recommended that relative naming be done at the object-type level. This way all
instances of the object type will automatically have the proper relative name.
This is illustrated in the following example where a motor object type requires a calculation
that references two signals to start and stop the motor. The motor object type is
instantiated in two units - Unit 1 and Unit 2, Figure 4.28. The main names (Name aspect)
for the signals are instance-specific and thus change from one unit to the next.
Figure 4.28: Motor object type with a Calculation aspect, instantiated in Unit 1 and Unit 2
Follow these guidelines to create a reusable solution (based on the example in Figure 4.28):
1. Create a motor object type with two binary signals.
2. Use the Relative Name aspect for each of the two binary signals to specify a relative
name: for example START and STOP, Figure 4.29.
3. Add the Calculation aspect to the motor object type. When adding the variables for
the two binary signals, specify the object reference, Figure 4.30.
The dot (.) at the beginning of the object reference specifies that the reference will
start at the object that owns the calculation aspect, in this case the motor object.
Only motor objects in the Control Structure will be referenced.
4. Instantiate the motor object type in the Control Structure under the objects
representing Units 1 and 2 respectively. When the calculation runs for the motor on
Unit 1, the references for START and STOP will point to B117 and B118 respectively.
When the calculation runs for the motor on Unit 2, the references for START and
STOP will point to B217 and B218 respectively.
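As an illustration only (not taken from this manual), the body of such a calculation could
look like the following VBScript sketch. The names start, stop, and run are assumptions:
start and stop stand for the calculation variables mapped to the START and STOP
relative-name references (Figure 4.30), and run stands for an assumed output variable
that drives the motor command.

    ' Illustrative sketch only - variable names are assumptions.
    ' start and stop are calculation variables mapped to the START and
    ' STOP relative-name references; run is an assumed output variable.
    If start.Value = True Then
        run.Value = True
    ElseIf stop.Value = True Then
        run.Value = False
    End If

Because the variables are bound through relative names, the same script runs unchanged
on every instance of the motor object type.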
4.9 Managing Calculations
In addition, the collection and display of performance statistics generated by the
Calculation Server can be configured.
Some tasks performed in the Status Viewer may be lengthy, for example enabling or
disabling a large number of calculations. Do not close or change the Status
Viewer aspect while any Status Viewer task is in progress. The view must remain
open until the task is finished; otherwise, the client running the task will be
disconnected from the Calculation service.
Take the following measures to ensure that the view remains unchanged and the
client remains connected:
• If the Calculation Status Viewer aspect is hosted within the Plant Explorer
workplace, use the Pin / Unpin feature to ensure that no other aspect replaces
it during the task. Also, do not close the hosting workplace until the operation is
complete.
• If the Calculation Status Viewer aspect is being displayed in a child window, do
not close the window until the operation has completed. Also do not close the
workplace from which the child was launched, as this will close all child windows.
To access this viewer, go to the Service structure, select the Calculation Server, Service,
and then select the Calculation Status Viewer Aspect, Figure 4.31.
Sorting of columns containing numbers or dates is done alphabetically, which can yield
an unexpected sort order for dates and numbers.
The following properties will not be updated when the Settings.UpdateStatus line in
the script is set to False (the default condition): Result, Execution Time, Last Execution,
Status, Status Message.
The Enable button in the calculation window does not compile the VBScript before
enabling the calculation. Test the script in offline mode first.
4.10 Performance Tracking
Figure 4.34: Calculation Services Dialogs (Dual on left and Redundant on right)
This dialog lists all Calculation Service Groups and Service Providers that exist in the
current system. Assign the selected calculation to a Calculation Service Group. In a
non-redundant configuration, only one Calculation Service Provider exists in a Calculation
Service Group. In a redundant configuration, two service providers exist in one group.
There can be multiple Calculation Service Groups.
The SoftPoint Server must be running on the same machine as the Calculation Server
for data to be collected.
Detailed instructions for configuring SoftPoint objects are provided in Section 3 Configuring
SoftPoints. The following sections provide guidelines related to the PerformanceMonitor
object for collecting performance statistics.
PerformanceMonitor SoftPoint Object Type
Create the PerformanceMonitor instance under the SoftPoint Object category in the
Control Structure. Remember to use the name of the Calculation Service Provider when
naming the SoftPoint object, Figure 4.37.
After creating the Performance Monitor object, use the SoftPoint configuration aspect to
deploy the new SoftPoint configuration. Refer to Deploying a SoftPoint Configuration on
page 72 for details.
Figure 4.39: Viewing Performance Data Via the SoftPoint Object Dialog
Data persistence can account for up to one-third of the total execution time of a
calculation. This is controlled on an individual calculation basis by the
Settings.UpdateStatus command (page 91) in the calculation script. By default this
command is set to False. This means the STATUS, STATUSMESSAGE, EXECUTIONTIME,
LASTEXECUTION, TRACEMESSAGE, and RESULT properties, and offline output values,
are NOT updated in the aspect directory upon completion of the calculation execution.
It is generally recommended that the calculation update status be set to False.
Set Settings.UpdateStatus (page 91) in the calculation script to True to have these
properties updated. For example, update the calculation properties every n executions
by using Settings.UpdateStatus in a loop that sets the value to TRUE after n iterations,
as sketched below. An example is shown in Figure 4.40.
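A minimal sketch of that pattern, assuming a calculation variable named counter whose
value persists between executions (the names and the choice of n = 100 are assumptions;
Figure 4.40 shows the product's own example):

    ' Persist status properties only on every 100th execution.
    ' counter is an assumed calculation variable that retains its value
    ' between executions; 100 is an arbitrary choice of n.
    Settings.UpdateStatus = False
    counter.Value = counter.Value + 1
    If counter.Value >= 100 Then
        Settings.UpdateStatus = True  ' write Result, Status, etc. this pass
        counter.Value = 0
    End If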
Execution times for calculations may be reduced by up to 33%. For calculations that do
not execute often, or whose logic requires a significant amount of time to execute, the
savings will be negligible. The savings are most valuable for small calculations that
execute often.
4.11 Calculations Service Recommendations and Guidelines
4.11.3 Use Case 3 - SoftPoint Signals Show Bad Data Quality when
Calculations are not Updating Data Quality
The quality of a calculation's inputs is used for the output if the calculation body does
not expressly set the quality attribute of the output variable. A race condition can occur
where bad data quality is sent to SoftPoints before the input indicating the true data
quality is received from the input signal. This can occur when the SoftPoint Server is
restarted, either manually or as part of a reboot of the subsystem that executes the
SoftPoint.
When using a SoftPoint as an output destination, it is recommended that the calculation
always set the data quality, as sketched below.
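A hedged sketch of that recommendation, assuming input1 is the calculation's input
variable, output1 is the variable bound to the SoftPoint signal, and quality is exposed
through a Quality property carrying OPC-style quality codes (192 = good, 0 = bad). All
of these names and codes are assumptions, not taken from this manual:

    ' Always set the output quality explicitly so bad or stale input
    ' quality is never passed through implicitly to the SoftPoint.
    Const QUALITY_GOOD = 192   ' assumed OPC-style quality codes
    Const QUALITY_BAD = 0
    If input1.Quality = QUALITY_GOOD Then
        output1.Value = input1.Value
        output1.Quality = QUALITY_GOOD
    Else
        output1.Quality = QUALITY_BAD
    End If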
Calculations defined to execute on a Service Group that is defined with a primary and
a secondary Service Provider will execute on the primary Service Provider. If the primary
Service Provider fails, the calculations are moved to the secondary Service Provider as
defined in the Service Group. Execution on the secondary service is independent; no
status or state information is exchanged between the primary and secondary servers
other than what is available in the aspect directory. When the primary server has returned
to service, the calculations are moved back to the primary Service Provider and the
secondary Service Provider returns to standby.
5 Configuring History Database
5.1 Changing the Default History Database Instance
The Maintenance option is for viewing and adjusting table sizing. Do NOT use this
option unless the requirements for managing Oracle databases are understood.
5.2 Maintaining the Oracle Instance
The History database configuration and all Oracle-based Historical data will be lost
when the database instance is dropped.
3. Click Yes to continue dropping the database instance. This process may take up to
several minutes. Progress is indicated by the History Database Instance Delete
message.
Once the default database instance has been dropped, create a new properly sized
instance. Refer to the Post Installation book Information Management section for
information on creating a new database instance.
3. Make the changes to the data file and then click OK to accept the Data Files changes.
4. On the History Database Maintenance display, click Apply to update Oracle and the
configuration file. Clicking Cancel after changes are made displays a message to
verify the cancellation.
Add datafiles for a tablespace by clicking the Add New Data File button. This adds a
new row to the table. Modify any row as required directly in the table, as described below.
Click OK to save the changes or Cancel to exit without saving.
If there is 1 GB of Flat File space on a drive and the allocation is reduced to 0 MB on
that drive, the wizard automatically sets the Flat File space back to 1 GB. This keeps the
existing Flat File space while preventing it from growing beyond its current size.
If the history account password is modified, any reports or other methods that hard-code
the password must be updated to run with the new password.
• Passwords can contain only alphanumeric characters from the database character
set and the underscore (_), dollar sign ($), and number sign (#) characters.
• Oracle recommends enclosing passwords that contain special characters in double
quotation marks, because some scripting environments and operating systems
interpret these characters (for example, in environment variable references).
• The following passwords do not need to be enclosed in quotation marks:
– Passwords beginning with a letter (a-z, A-Z) and containing numbers (0-9)
or special characters ($, #, _). For example: abc123, a23a, and ab$#.
– Passwords containing only numbers.
– Passwords containing only alphabetic characters.
6 Configuring Log Sets
Log sets are used to start or stop data collection for a number of property or message logs
simultaneously with one command. A log can be assigned to one or two log sets. The
only restriction is that all logs in a log set must reside on the same node. For
convenience, build log sets prior to building the logs so that each log can be assigned
to a log set when it is configured. Logs can still be added to log sets later.
Log sets should be avoided if possible. They exist primarily for migrations from
Enterprise Historian systems to 800xA. There are configurations where log sets are
required when profile logs are used. However, for non-profile applications they should
be avoided if possible.
Trend logs cannot be started and stopped with log sets. This functionality is only
applicable for history logs.
The operating parameters for a log set are specified in a Log Set aspect. A dedicated
aspect is required for each log set that needs to be configured. To create a Log Set
aspect, create a Log Set object under a specific server node in the Node Administration
structure (refer to Adding a Log Set on page 146). The Log Set aspect is automatically
created for the object. To configure the Log Set aspect, refer to Log Set Aspect on page
147.
As an alternative, you can attach Log Set aspects to objects in different structures, for
example, a process object in the Control structure. When you do this, a copy of the Log
Set aspect is also placed in the Node Administration structure.
Once the log sets are built, assign the logs to their respective log sets. This is done via
each log’s configuration aspect:
• For property logs this is done via the IM Definition tab of the Property Log
Configuration aspect and is described in Assigning a Log to a Log Set on page 265.
• For message logs, refer to Message Log Aspect on page 155.
6.1 Adding a Log Set
5. Enter a name for the object in the Name field, for example: LogSet1.
6. Click Create. This adds the object under the Log Sets group, and creates a
corresponding Log Set Aspect .
Figure 6.2: Selecting the Inform IT Log Set in the New Object Dialog
6.2 Log Set Aspect
Figure 6.3: Accessing the Configuration View for the Log Set Aspect
7 Alarm/Event Message Logging
This section describes how to integrate the 800xA System Message Server with
Information Management History Server, and how to configure message logs. All alarm
and event messages for the 800xA system, including Process, Operator, and Audit Trail
messages, are collected and stored by the 800xA System Message Server. This provides
a short-term storage facility with the capacity to store up to 50,000 messages. The
messages can be organized into filtered lists for viewing. Refer to System 800xA
Operation (3BSE036904*).
If Information Management History Server is installed, the messages stored by the 800xA
System Message Server may be forwarded to an Information Management message log
for extended online storage. This message log can filter and store up to 12 million
messages. In addition, an Information Management History Server allows messages
from multiple servers to be consolidated onto a dedicated consolidation node, and the
messages can be saved on an archive media for permanent offline storage.
Begin with any one of the following topics to suit your depth of knowledge of message
logs:
• If unfamiliar with message logs, refer to Message Log Overview on page 152 for a
quick introduction.
• To configure message logs refer to Configuring Message Logs on page 153.
7.1 Message Log Overview
7.2 Configuring Message Logs
The Alarm and Event List aspect may also be configured to retrieve archived messages
on a specified archive volume, or archived messages which have been published or
restored.
As an alternative, you can attach Message Log aspects to objects in different structures,
for example, a process object in the Control structure. When you do this, a copy of the
Message Log aspect is also placed in the Node Administration structure.
To proceed, first create the Message Log object in the Node Administration structure,
as described in Adding a Message Log on page 154. Then configure the Message Log,
as described in Message Log Aspect on page 155.
4. Right-click on the Message Logs group and choose New Object from the context
menu. This displays the New Object dialog with the Inform IT Message Log object
type selected.
5. Enter IMMSGLOG for the object in the Name field, then click Create. This adds the
object under the Message Logs group. Using IMMSGLOG causes the log name to
default to IMMSGLOG with the IM server IP address automatically appended
(refer to Message Logging for 800xA System Alarm and Event Server on page 157).
7.2.4 Message Logging for 800xA System Alarm and Event Server
The 800xA System Alarm/Event Server requires a message log of the type
OPC_MESSAGE. The log name defaults to IMMSGLOG_IP_ipaddress where ipaddress
is the IP address of the node where the log resides, Figure 7.4. This log will collect from
the default system.
If an IMMSGLOG is being configured to consolidate messages from other OPC message
logs, and the IMMSGLOG is to be dedicated to consolidation and will not be used to
collect messages for the local node, use a prefix to avoid appending the local node IP
address, for example: CIMMSGLOGforEng444. For further information on setting up
message log consolidation, refer to Consolidating Message Logs and PDLs on page
403.
7.3 Accessing a Message Log with an Alarm and Event List
Figure 7.6: Configuration for Reading Message Log from Alarm and Event List
These aspects may be added to any object in any structure. However, it is generally
recommended that a generic object be created and then all three aspects added to that
object. This allows use of the same Alarm and Event List view in multiple locations in
the plant (in more than one structure). To do this, create one object with these three
aspects and use the Insert Object function to represent it in multiple locations. Otherwise,
the three aspects would need to be added to multiple objects.
The A/E Linked Server Configuration aspect must be configured to specify the Information
Management server where the message log resides. It must also be configured to specify
one of four classes of alarm/event data to read:
• Messages currently stored in the message log.
• Published messages.
• Restored messages.
• Messages residing on a specified archive volume.
To read more than one class of message, configure a separate object with all three
aspects for each class.
The Alarm and Event Configuration aspect must be configured to direct the Alarm Event
List aspect to access messages from the Information Management message log rather
than the 800xA system message service. This is done via the Source field on this aspect.
Finally, the Alarm and Event List aspect must be configured to use the Alarm and
Event List Configuration aspect described above.
2. Add the three aspects listed below to the new Generic type object.
a. The Alarm and Event List aspect is located under the Alarm and Events
category in the New Aspect dialog, Figure 7.8.
• Click Copy Template and then Apply. An Alarm and Event List Configuration
aspect is now added to the object. Refer to Figure 7.10.
• The A/E Linked Server Configuration aspect is located under the Industrial
IT Archive Group, Figure 7.11.
3. Configure the A/E Linked Server Configuration aspect to specify the Information
Management server where the message log resides, and to select the type of
message to read. To do this:
a. Select the A/E Linked Server Configuration aspect.
b. Use the pull-down list to select the service group for the Information Management
server, Figure 7.12.
c. Select one of the four mutually exclusive radio buttons to select the type of
messages to read, Figure 7.13. The categories are described in Table 7.2.
To read more than one type of message, configure a separate object with all three
aspects for each type.
e. Enable the Condition Events, Simple Events, and Tracking Events options in
the Categories area.
f. Click Apply.
7.4 Efficient String Attribute Filtering
4. Configure the aspect. Filter configuration can be done in the following three ways:
• Filtering based on Event Categories on page 173.
• Filtering based on Event Attributes on page 173.
• Filtering based on Combination of Categories and Attributes on page 175.
7.5 Creating an Inform IT Event Filter
In the Filter Tab, under the Categories section, all the categories must be selected.
Example 1: Stores all messages except those containing the substring Test message
CCC, as shown in Figure 7.16.
Example 2: Collects only those messages that satisfy the specified criteria, as shown
in Figure 7.17. All other messages are excluded from collection.
Example 3: Filters all messages that contain any of the following words: Test,
Message, or CCC.
Filtering determines which event messages are stored. While setting a filter permanently
saves storage space, a filter can also be used to manage bursts of activity or other special
conditions.
8 Historizing Reports
8.1 Report Log Configuration Guidelines
This section describes how to configure report logs to store finished reports scheduled
and executed via the Application Scheduler and Report action plug-in. Completed reports
may also be stored by embedding them in a completed report object in the Scheduling
structure. This must be specified within the Report Action.
Reports stored either as objects, or in a report log can be archived on either a cyclic or
manual basis. This is described in Section 11 Configuring the Archive Function. Access
to archived report logs is via the View Reports aspect as described in the section on
viewing archive data in System 800xA Information Management Data Access and Reports
(3BUF001094*). When reviewing a restored report log, the report is presented in the tool
appropriate for the type of file the report was saved as (for example, Internet Explorer is
launched if the report was saved in .html format).
Generally, one report log is dedicated to one report. For instance, a WKLY_SUM report
will have a log especially for it. Each week’s version of the WKLY_SUM report is put into
the report log. Different report logs are then used for other reports. More than one type
of report can be sent to one report log if desired. Each report log specifies how many
report occurrences to save.
8.2 Adding a Report Log
3. In the object tree for the selected node, navigate to InformIT History_nodeName
Service Provider > InformIT History Object.
Under the InformIT History Object, find containers for each of the Inform IT History
object types. The History objects (in this case, a report log) must be instantiated
under the corresponding container.
8.3 Report Log Aspect
4. Right-click on the Report Logs group and choose New Object from the context
menu. This displays the New Object dialog with the Inform IT Report Log object
type selected.
5. Enter a name for the object in the Name field, for example: ReportLog1, then click
Create. This adds the object under the Report Logs group, and creates a
corresponding Report Log Aspect.
9 Historical Process Data Collection
Process data collection refers to the collection and storage of numeric values from aspect
object properties. This includes properties for live process data, SoftPoint data, and lab
data (programmatically generated, or manually entered). This functionality is supported
by property log objects.
Process data collection may be implemented on two levels in the 800xA system. The
standard system offering supports storage of operator trend data via trend logs. When
the Information Management History Server function is installed, extended and permanent
offline storage (archive) is supported via history logs. This section describes how to
integrate and configure operator trend and history logs.
Begin with any one of the following topics depending on your current depth of knowledge
on property logs:
• To learn the concept of property logs, refer to Property Log Overview on page 184.
• If History Source aspects have not yet been configured in the system, refer to
Configuring Node Assignments for Property Logs on page 189. This must be done
before the logs are activated and historical data collection begins.
• To learn about important considerations and configuration tips before actually
beginning to configure property logs, refer to History Configuration Guidelines on
page 194.
• For a quick demonstration on how to configure a property log, refer to Building a
Simple Property Log on page 197.
• For details on specific property log applications, refer to:
– Lab Data Logs for Asynchronous User Input on page 212.
– Event-driven Data Collection on page 213.
– History Logs with Calculations on page 219.
9.1 Property Log Overview
1. One exception to this scenario is when a property log is created to collect (consolidate) historical data
from another Information Management server in another system.
Figure: Primary and hierarchical history logs on History Servers collecting from a source on a Connectivity Server
Data Presentation in trend format is supported by 800xA System trend displays, Desktop
Trends, and trend displays built with Display Services. These displays are simple graphs
of process variables versus time, as shown in Figure 9.3. All analog and digital attributes
recorded in History can be trended. Trend sources can be changed and multiple trends
can be displayed on one chart to compare other tags and attributes. Stored process data
may also be viewed in tabular format in Microsoft Excel using DataDirect add-ins (refer
to Installing Add-ins in Microsoft Excel on page 294).
The following sections provide a brief overview of the historical collection and storage
functionality supported by history logs:
• Blocking and Alignment on page 187.
• Calculations on page 187.
• Data Compaction on page 187.
• Event Driven Data Collection on page 187.
• Offline Storage on page 188.
• Considerations for Oracle or File-based Storage on page 188.
9.1.3 Calculations
Although both trend and history logs can have calculations performed on collected data
prior to storage, it is generally recommended that raw data be collected and stored, and
then the required calculations can be performed using the data retrieval tool, for example,
DataDirect. To do this for history logs, the STORE_AS_IS calculation algorithm is
generally used.
This calculation algorithm causes data to be stored only when it is received from the
data source. For OPC type logs, this calculation lets the OPC server’s exception-based
reporting be used as a deadband filter. This effectively increases the log period so the
same number of samples stored (as determined by the log capacity) cover a longer
period of time. Refer to STORE AS IS Calculation on page 237 for further details.
9.1.9 Consolidation
Data from property logs on history server nodes in different systems (as configured via
the 800xA system configuration wizard) may be consolidated onto one central
consolidation node as illustrated in Figure 9.4.
Figure 9.4: Consolidation of property logs - data sources (for example $HSTC100,MEASURE-1-o, $HSTC200,MEASURE-1-o, and $HSTC300,MEASURE-1-o) on History Servers in System A and System B are consolidated onto a Consolidation Node in System C
9.2 Configuring Node Assignments for Property Logs
For trend logs the node assignment is established when the Log Configuration aspect
is instantiated on an object, for example a Control Application object in the Control
structure. When this occurs, the Log Configuration aspect looks upward in the object
hierarchy to find and associate itself with the first History Source aspect it encounters.
The History Source aspect points to a History Service group on a specified Connectivity
Server. To ensure that logs are assigned to the correct nodes, add one or more History
Source aspects in the applicable structure where the log configuration aspects will be
instantiated. Care must be taken to situate the History Source aspects such that each
log will find the History Source that points to its respective Connectivity Server. Two
examples are illustrated in Figure 9.5 and Figure 9.6.
• Property logs will not collect data until they are associated with a History Source
aspect.
• History Source aspects must be placed in the same structure where log
configuration aspects are to be instantiated.
• Add History Source aspects as early as possible in the engineering process. The
change to update the History Server for a Service Group can take a long time
for large configurations. All logged data will be lost during change of Service
Group for affected log configurations.
Figure 9.5: One History Source aspect placed above Control Application 1 and Control Application 2 in the Control Structure, serving the property logs of both applications
Figure 9.6: A separate History Source aspect placed under each of Control Application 1 and Control Application 2 in the Control Structure, each serving that application's property logs
Figure: History Source aspects pointing to OPCDA_Connector service groups (eng64AC800Mopc and tar22AC800Mopc) for the property logs under Control Application 2
3. Click on the History Source aspect and use the Service Group pull-down list to select
the Service Group for a specific node, Figure 9.9 (use the same node specified in
the OPC Data Source aspect, Figure 9.7).
4. Click Apply. This completes the History set-up requirements.
Before configuring property logs, refer to History Configuration Guidelines on page 194
for important considerations and tips. For a quick demonstration on how to configure a
property log, refer to Building a Simple Property Log on page 197.
9.3 History Configuration Guidelines
To archive History data, build archive devices and archive groups. First build the archive
devices, and then build the archive groups. Refer to Section 11 Configuring the Archive
Function.
Log sets are used to start and stop data collection for multiple logs as a single unit. Build
log sets before the property logs if this functionality is used. Refer to Section 6
Configuring Log Sets.
Next configure message logs (Section 7 Alarm/Event Message Logging), and Report
Logs (Section 8 Historizing Reports).
After creating the supporting History objects required by the application, build the basic
property log structures to be used as templates for the property logs. A quick
demonstration is provided in Building a Simple Property Log on page 197.
Finish the remainder of the History database configuration using the Bulk Log
Configuration Import/Export utility. Refer to Bulk Configuration of Property Logs on page
268.
9.4 Building a Simple Property Log
There are two basic parts to this tutorial: creating a log template, and configuring the log
configuration aspect. Start with Creating a Log Template below.
2. The one exception to this rule is when a property log is created to collect historical data from an earlier
platform such as an Enterprise Historian with MOD 300 or master software.
• The Collector Link option is for direct trend logs that collect from a remote Enterprise
Historian.
• Click OK when finished. This displays the configuration view for the direct type trend
log.
Figure 9.12: Basic History Trend Log Configuration View - Log Definition Tab
1. Use the Log Definition tab to specify a log name, Figure 9.13. Again, use a name
that identifies the function this log will perform. For example, the name in Figure
9.13 identifies the log as a trend log with a 1-week storage size.
The Log Type, Source, and Service Group are fixed as specified in the New Log
Template dialog and cannot be modified.
2. Click the Data Collection tab and configure how this log will collect from the OPC
data source. For this tutorial, set the Storage Interval Max Time to
1 Minutes, and the Storage Size Time to 1 Weeks, Figure 9.14. For further
information regarding trend log data collection attributes, refer to Collection Attributes
for Trend Logs on page 228.
Collection Mode defaults to Synchronous and cannot be modified. For the purpose
of this tutorial, all attributes on this tab may be left at their default values.
3. Click on the Data Collection tab, Figure 9.17. For this example, this log needs to
collect samples at a 1-minute rate and store 52 weeks worth of data. Certain data
collection attributes must be configured to implement this functionality. These
attributes are described in the following steps.
a. Configure the Sample Interval to establish the base sample rate, for example:
1 Minutes. The Storage Interval automatically defaults to the same value. Also,
the Sample Blocking Rate defaults to a multiple of the Sample Interval, in this
case, one hour.
b. Then enter the Log Period, for example: 52 Weeks. This determines the Log
Capacity (log capacity = log period / storage interval).
It is strongly recommended that the defaults be used for calculation algorithm (Store
As Is) and storage type (type 5). Typically, calculations are not used for data collection.
Desktop tools can perform calculations during data retrieval.
Store As Is causes data to be stored only when it is received from the data
source. In this case the sample and storage intervals are essentially ignored
for data collection, and are used only to properly size the log.
4. Click Apply when finished with both the trend and history log configurations. This
completes the template configuration. (Deadband is not configured here.)
This adds the Log Configuration aspect to the object’s aspect list.
The Logged pane in the Log Configuration view shows the name of the object for which
one or more property logs are being added. A property log must be added for each
property whose value will be logged.
This displays the New Property Log dialog, Figure 9.21. This dialog is used to select
one of the object's properties for which to collect data, and to apply the Log Template
that meets the property's data collection requirements. If necessary, change the
default data type specification for the selected property.
2. From the Property list, select the property for which data will be collected. Then
select a template from the Template list.
Figure 9.21 callouts: 1) Select a Property; 2) Select a Log Template; Data Type (recommended not to change)
The supported data types for property logs are listed in Table 9.1. When a property is
selected, the default data type is automatically applied. This is the native data type for
the property as determined by the OPC server. The OPC/DA data type is mapped to the
corresponding 800xA data type. DO NOT change the data type. When the storage type
for the History log is configured as TYPE5 (recommended), the log is sized to
accommodate the data type.
This aspect does not require any further configuration; however, some data collection
parameters may be adjusted via the tabs provided for each component log. These tabs
are basically the same as the tabs provided for log template configuration. Some
configuration parameters are read-only and cannot be modified via the log configuration
aspect. Also some parameters that were not accessible via the log configuration template
are accessible via the log configuration aspect.
Two additional tabs are provided for the log configuration aspect. The Presentation tab
is used to configure presentation parameters for viewing log data on a trend display. The
Status tab is used to view log data directly via the log configuration aspect.
Post-configuration Requirements
History logs will not collect data until they are activated. Refer to Starting and Stopping
Data Collection on page 442.
Test the operation of one or more property logs. Use the Status tab on the Log
Configuration aspect (Status on page 260), or use one of the installed desktop tools such
as DataDirect or Desktop Trends.
What to Do Next
This concludes the tutorial for configuring a basic property log. To create a large quantity
of similar logs, use the Bulk Configuration utility as described in Bulk Configuration of
Property Logs on page 268. To learn about common property log applications, refer to
Property Log Applications on page 211. For more information regarding the configuration
of log attributes, refer to Property Log Attributes on page 221.
9.5 Property Log Applications
When added with synchronous logs, the asynchronous lab data log does not receive
data from the data source. The connection is only graphical in this case.
Figure 9.23: Representation of Lab Data Log in Property Log Hierarchy (left: lab data log added with a direct log; right: lab data log added alone)
When configuring a history-type lab data log, most of the tabs specific to history logs are
not applicable. The only attribute that must be configured for a Lab Data log is Log
Capacity as described in Data Collection Attributes on page 227.
A lab data log CANNOT be the source for another log. Secondary hierarchical logs
cannot collect from lab data logs.
The event-driven property log is configured much like any other property log as described
in Building a Simple Property Log on page 197. The only special requirements when
configuring the log for event driven data collection are:
• The direct trend log must always be active in order to collect data before the event
actually triggers data collection.
• The event-driven log must be a history log, and it must be the (first) primary log
connected to the direct trend log, Figure 9.24. Also this log must be configured to
start deactivated. The log will go active when the event trigger occurs.
Figure 9.24: Example, Event-driven Data Collection (the direct trend log is always active)
Create a Job
To schedule event-driven data collection, create a job with a Data Collection action in
Application Scheduler. The Scheduling Definition aspect will be set up to have specified
logs collect for a one-hour period, every day between 11:30 AM and 12:30 PM. Jobs
and scheduling options are also described in System 800xA Information Management
Data Access and Reports (3BUF001094*).
To create a job in the Scheduling structure:
1. In the Plant Explorer, select the Scheduling Structure.
2. Select Job Descriptions and choose New Object from the context menu.
3. In the New Object dialog, Figure 9.25, select Job Description and assign the Job
object a logical name (for example StartLogging1).
4. Click Create. This creates the new job under the Job Descriptions group, and adds
the Schedule Definition aspect to the object’s aspect list.
5. Click on the Scheduling Definition aspect to display the configuration view, Figure
9.26. This figure shows the scheduling definition aspect configured as a periodic
schedule. The trigger event will occur once every day, starting July 3rd at 12:00 PM,
and continuing until August 3rd at 12:00 PM.
To add an action:
1. From the Job object (for example StartLogging1), choose New Aspect from the
context menu.
2. In the New Aspect dialog, browse to the Scheduler category and select the Action
aspect (path is: Scheduler>Action Aspect>Action Aspect). Use the default aspect
name, or specify a new name.
3. Click Create to add the Action aspect to the job.
4. Click on the Action aspect to display the configuration view.
5. Select Data Collection Action from the Action pull-down list. This displays the Data
Collection plug-in, Figure 9.27.
b. Select the object whose log will be added to the list, then select the log in the
log hierarchy, and click Add Log To List. This adds the selected log to the log
list, Figure 9.29.
When calculations are implemented for data collection, the calculation is performed
before the data is stored in the log. This allows a single value to be stored that represents
a larger time span of values, or key characteristics of the data. For example, a property
value can be sampled every minute and the values put into a trend log. History logs can
then calculate and store the hourly average, minimum, and maximum values. This is
illustrated in Figure 9.30. When applying calculations for data covering the same time
span, add the history logs at the same level as shown in Figure 9.30 (all collecting directly
from the trend log).
In addition to any mandatory attributes that must be configured, the Calculation Algorithm
must also be configured. Change the calculation algorithm before specifying the Storage
Interval. Refer to Calculation Algorithm in Data Collection Attributes on page 227.
Retain the defaults for the remaining attributes, or configure additional attributes as may
be required by the application. Refer to Property Log Attributes on page 221 for details.
More detailed examples are described in Property Log Configuration Reference on page
264:
• Example 3 - Storing Calculated Values.
• Example 4 - Storing Calculated Values in Logs.
Figure 9.30: Property log hierarchy with calculations - a trend log (storage rate 10 seconds, time 2 weeks) collects from the source and feeds history logs (storage rate 1 minute, time period 13 weeks, calculation: Minimum)
9.6 Property Log Attributes
Limitations are:
• Most log attributes can be edited while configuring a log template and before saving
it. Exceptions are: Collection Mode and Log Capacity (for synchronous logs). Log
capacity must be configured for lab data logs.
• The configuration of a Log Template that has Log Configuration aspects instantiated
cannot be modified. To modify the template, delete the instantiated Log Configuration
aspects.
• To modify certain attributes on Log Configuration aspects, the log must be
deactivated.
• After saving the log configuration aspect, the following attributes cannot be edited
(in addition to the ones above): IM Data Source, Storage Interval, and Storage Type.
• Source.
This specifies the type of data source.
– For direct trend logs, set the source to OPC or an appropriate plug-in such as
TTD.
– For all hierarchical logs the source automatically defaults to the name of the
log from which the hierarchical log collects (either a direct log or another
hierarchical log).
– Source is not applicable for lab data logs, or for direct history logs that collect
from remote (Enterprise Historian-based) logs (Linked).
• Log Type.
When adding a log, specify the type:
– Direct - Log collects directly from the data source. Use this log type for the first
component log in the property log hierarchy when building a log that collects
from an OPC object property. This log type is used to collect from a remote
(Enterprise Historian-based) log when Linked is checked.
– Lab Data - Log entries are asynchronous (manually entered or generated from
user program). Lab data logs must be added directly to the Property Log
placeholder in the log hierarchy.
– Hierarchical - Log collects either from a direct log, or another hierarchical log.
This selection is made automatically and cannot be changed when adding a
log to direct log, or another hierarchical log.
• Collector Link.
The Collector Link specification is required for history logs. Check the Linked check
box, then select the IM History Log for the Server Type from the pull-down list.
This establishes the log as a history type for hierarchical and lab data logs.
• Log Name.
This is the name by which each component log is identified within the property log
hierarchy. Use a meaningful name. One way is to describe the data collection scheme
in the name, for example: Log_1m_13wAvg (log for data collected @ 1-minute
sample rate and storing the 13-week calculated average).
• Service Group.
The Service Group pull-down list contains all History servers defined in the system.
Use this to specify the server where this log will reside. The list defaults to the local
node.
The Default (History Source Aspect) is only applicable for trend logs.
IM Data Source
The IM data source field is activated on the Log Configuration aspect view; it is dimmed
on the log template view. The IM data source defaults to a unique ID derived from the
object name:property name, the log template name, and its path in the structure. For
hierarchical logs connected to a direct log, the IM data source is fixed and cannot be
changed.
For applications where History collects data from a remote history log configured in an
existing history database, the Collection Type is set to API. In this case enter the name
using the following format: $HSobject,attribute-n-oIPaddress
The IP address is only required when collecting from multiple sites and identical log
names are used on some sites. Refer to Consolidating Historical Process Data on
page 370.
Log Set
This is the primary Log Set to which this log belongs. The Log Set pull-down list includes
all log sets configured on the local history server. Logs can be assigned to a second log
set via the Second Log Set field. To delete a log from a log set, clear the Log Set field.
For details on configuring log sets, refer to Section 6 Configuring Log Sets.
Archive Group
This is the archive group to which this log belongs. The Archive Group pull-down list
includes all archive groups configured on the local history server. To delete a log from
an archive group, clear the Archive Group field. Logs may also be assigned to an archive
group via the Archive Group aspect. This is described in Configuring Archive Groups on
page 340.
If available archive groups are not listed in the Archive Group combo box, click Apply
to save changes to the Log Template aspect. This makes the archive groups available
for selection in the combo box.
Start State
This is the initial log state when History Services is restarted via PAS. Choices are:
START_INACTIVE
START_ACTIVE
START_LAST_STATE (default) - restart in the state the log had when the node shut down
Hierarchical logs do not start collecting and storing data until the direct log becomes
active. The entire property log with all of its component logs can be activated and
de-activated as a unit via the Enable check box in the Log Configuration aspect.
Collection Mode
The collection mode determines how data is processed when it is received by History.
Collection Mode also determines the valid choices for Collection Type and Storage Type.
Collection mode is a read-only attribute whose value is fixed based on the selected Log
Type. The mode can be either synchronous or asynchronous:
SYNCHRONOUS - periodic Time Sequence data. This is the collection mode for logs
which collect from a specified object property. The time interval between samples received
by History must always be the same. Calculations and deadband can be applied to
synchronously collected data.
If Collection Mode is Synchronous, all Storage Types are allowed. However, the data
must be entered in time forward order. This means that once an entry is stored at a
particular time, the log will not allow entries with a date before that time. This is true for
all Collection Types: API, OPC HDA, and USER_SUPPLIED.
Also, entries are received and stored at the configured storage rate. For example, for a
user-supplied synchronous log with a 1-minute storage interval, attempting to add an
entry every second results in only one entry being stored per one-minute period. If only
one entry per day is received for a one-minute log, History inserts at least one
"NO DATA" entry between each pair of received values.
Collection Type
This determines the application for which History collects data. Collection type is only
configurable for primary history logs. The options are:
• OPC HDA- The History log collects OPC data from a trend log.
• API - History periodically collects data from a remote history log.
• USER_SUPPLIED: Samples are sent to History from a user application such as
DataDirect. The time between samples sent is expected to be equal to the sample
rate of the log.
When Collection Mode is Asynchronous, the Collection Type is undefined; however, the
log operates as if the collection type was defined as User Supplied.
Dual Log
This configuration parameter is not used.
Aggregate
For hierarchical trend logs apply any one of the aggregates described in Table 9.2.
Aggregates are not applicable for Direct-type logs.
Although hierarchical trend logs support this functionality, it is a good practice to
collect and store raw data, and then perform the required calculations using the data
retrieval tool, for example, DataDirect.
Storage Interval
The storage interval is the rate at which samples are collected and stored. This is
configured using Min Time, Max Time and Exception Deviation (%) properties.
Set Min Time to the base rate at which samples are to be collected. Samples will be
stored at this rate unless data compaction is implemented using Exception Deviation
(%). The shortest possible time between two samples is one (1) sec.
Exception Deviation (%) establishes a deadband range by which to compare the
deviation of a sample value. If the current sample value deviates from the previous
sample value by an amount equal to or greater than the specified Exception Deviation
(%), then the sample is stored; otherwise, the sample is not stored.
Max Time establishes the maximum time that may elapse before storing a sample. This
way, if a process value is very stable and not changing by the specified Exception
Deviation (%), samples are stored at the rate specified by the Max Time.
To store all samples (no data compaction), set the Exception Deviation (%) to 0, and set
the Max Time equal to the Min Time.
It is always recommended to have Max Time greater than Min Time. When Max Time
equals Min Time, extra values can be inserted into the log when the next sample
point is delayed from the source.
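The following VBScript fragment is purely illustrative of the storage decision described
above; it is not the History server's implementation, and the percentage base for the
Exception Deviation (here the signal span) as well as all variable names are assumptions:

    ' Illustrative deadband decision (not the product implementation).
    ' A sample is stored if it deviates from the last stored value by at
    ' least the Exception Deviation (%), or if Max Time has elapsed.
    Dim exceptionDeviationPct, signalSpan, maxTimeSeconds
    Dim lastStoredValue, lastStoredTime, newValue, deadband
    exceptionDeviationPct = 2      ' Exception Deviation (%)
    signalSpan = 100               ' assumed percentage base
    maxTimeSeconds = 60            ' Max Time
    lastStoredValue = 50.0 : lastStoredTime = Now : newValue = 53.0
    deadband = (exceptionDeviationPct / 100) * signalSpan
    If Abs(newValue - lastStoredValue) >= deadband _
       Or DateDiff("s", lastStoredTime, Now) >= maxTimeSeconds Then
        ' store newValue: here the deviation of 3 exceeds the deadband of 2
        lastStoredValue = newValue
        lastStoredTime = Now
    End If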
Storage Size
Storage size can either be specified as number of points stored in the log or as the time
range covered in the log. If the number of points is specified, the time covered in the log
will be calculated and displayed in the time edit box.
Figure 9.35: Data flow from the data source through buffering and calculation to storage, and onward to secondary log(s)
Figure 9.35 shows that data sent to a secondary log does not undergo data compaction,
even if the primary log has deadband configured. The secondary log gets all values that
result from the calculation. Thus, the calculation performed by the secondary log is more
accurate than if done using data stored after deadband compaction.
History can be configured to perform calculations on collected data before the data is
actually stored in the log. When a calculation is active, the specified algorithm, such as
summing or averaging, is applied to a configured number of data samples from the data
source. The result of the calculation is stored by History.
For example, configure History to collect five values from the data source, calculate the
average of the five values, and then store the average. This calculation is actually
configured by time, not the number of values: the calculation is applied to the values
collected over a configured time period. For instance, a 1-minute sample interval with a
5-minute storage interval averages five samples and stores one value every five minutes.
Data compaction can also be configured. This uses a mathematical compaction algorithm
to reduce the amount of data stored for properties. Both calculations and deadband can
be used concurrently for a log. The calculation is done first. Then the deadband algorithm
is applied to see if the result should be stored.
For examples of data collection applications, refer to Property Log Configuration
Reference on page 264.
The Data Collection tab for history logs is shown in Figure 9.36. Use this tab to configure:
Log Period, Sample Interval, Storage Interval, Sample Blocking Rate, Calculation
Algorithm, Storage Type, Entry Type (when storage type = 5), Start Time, Log Capacity
(for lab data log only), Bad Data Quality Limit.
Log Period
Disk space is allocated to logs according to their log capacity, which is a function of the
log time period and storage interval. The log period can be adjusted on an individual log
basis to keep log capacity at a reasonable size for each log. For all synchronous logs
with or without data compaction, log capacity is calculated as: log capacity = (log
period/storage interval).
When using compaction (or Store As Is), the log period is effectively increased so the
same number of samples (as determined by the log capacity) are stored over a longer
period of time.
Enter the log period as an integer value ≤ 4095 with a time unit indicator for Weeks, Days, Hours, Minutes, or Seconds. The maximum is 4095 Weeks.
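As a minimal sketch of these entry rules (the helper name and the single-letter unit suffixes w, d, h, m, s are assumptions for illustration; the product's own input parsing is not documented here):

    # Sketch: convert a log period such as '40d' or '8h' into seconds and
    # enforce the documented bound (integer value 1..4095 per unit).
    UNIT_SECONDS = {'w': 604800, 'd': 86400, 'h': 3600, 'm': 60, 's': 1}

    def period_to_seconds(period):
        value, unit = int(period[:-1]), period[-1].lower()
        if unit not in UNIT_SECONDS or not (1 <= value <= 4095):
            raise ValueError('period must be 1..4095 weeks/days/hours/minutes/seconds')
        return value * UNIT_SECONDS[unit]

    # log capacity = log period / storage interval, for example:
    capacity = period_to_seconds('20d') // period_to_seconds('15s')   # 115200 entries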
Sample Interval
This is the rate at which data is sampled from a data source. For USER_SUPPLIED,
this is the expected time interval between samples sent to History. Whether or not the
sample is stored depends on whether or not a calculation algorithm is specified, and
whether or not a compaction deadband (or Store As Is) is specified.
Enter the sample interval as an Integer value with a time unit indicator: Weeks, Days,
Hours, Minutes, Seconds. Monthly is NOT a legal sample interval.
If data is not received within 50% of the interval prior to the expected time, or 100% of
the interval after the expected time, the entry value is marked with status NO_DATA.
Time stamps may drift due to load in the system. An example of a case where time drift
can cause NO_DATA entries to be stored is illustrated in Figure 9.37.
Figure 9.37: Example of Time Drift. For the 1:40 time slot, NO_DATA is stored since the actual time (1:50) is offset from the expected time (1:40) by 100% of the sample interval (10s).
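The acceptance window implied by this rule can be sketched as follows; the helper name is invented, and times are reduced to plain seconds. A sample must arrive no earlier than half a sample interval before its expected time and less than one full interval after it, which is why the 1:50 arrival in Figure 9.37 is marked NO_DATA.

    # Sketch: mark an entry NO_DATA when its sample arrives outside the window
    # [expected - 0.5*interval, expected + interval). Times in seconds.
    def entry_status(expected, actual, sample_interval):
        if actual < expected - 0.5 * sample_interval:
            return 'NO_DATA'
        if actual >= expected + sample_interval:   # offset by 100% of the interval
            return 'NO_DATA'
        return 'GOOD'

    entry_status(100.0, 110.0, 10.0)   # 'NO_DATA', as in the 1:40/1:50 example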
Storage Interval
This is the time interval at which data is stored in the log, except when deadband is active
(or Store As Is is used). Enter the storage interval as an Integer value with a time unit
indicator: Weeks, Days, Hours, Minutes, Seconds. The maximum is 4095 Weeks. The
storage interval must be a multiple of the sample interval. For example, if the sample
interval is 5, the storage interval can be 10, 15, 20, 25, and so on. When using Store As
Is or Instantaneous as the calculation algorithm, the storage interval must be set equal
to the sample interval.
A special storage interval is monthly. This causes a value to be stored at the end of the
month. The timestamp of a monthly log depends upon the storage interval and alignment
of its data source:
• If the data source is a daily log which is aligned to 06:00, then the monthly log values
will have a time of 06:00.
• If the data source is hourly or less, then the monthly log value will be stored around midnight (depending on the alignment of the data source). For example, an hourly log aligned at 30 minutes after the hour will produce a monthly value at 00:30.
• If the data source is an ‘x’ hours log aligned to 06:00 (for example, 8 Hours log with
values stored at 06:00, 14:00, 22:00, and so on), then the monthly log will have a
time of 06:00.
Storage Type
Log entries can be stored in Oracle tables or in files maintained by History. File storage
is faster and uses less disk space than Oracle storage. File-based storage is only
applicable for synchronous property logs. The default storage type is file-based TYPE5
which supports variable size based on the type of data being collected. ORACLE is
mandatory for Asynchronous logs. This type supports collection for floating point data
only. Other storage types are available. For further information, refer to File Storage vs. Oracle Tables on page 254.
Entry Type
For the TYPE5 Storage Type, the Entry Type can be specified to conserve the disk space required per entry. The Entry Type attribute specifies whether or not to save the original
required per entry. The Entry Type attribute specifies whether or not to save the original
value with the modified value when a client application is used to modify a stored history
value.
Disk space is allocated for both the original and modified value whether or not the value
is actually modified. Therefore, only choose to save the original value when the modify
history value function is being used and original values need to be saved.
The choices are:
• Do Not Save Original Value - If a log entry is modified, the original value is not
saved.
• Save Original Value - If a log entry is modified, the original value is saved along
with the modified value. This allocates disk space for both values, whether or not
the value is actually modified.
• Not Applicable - For storage types other than TYPE5.
Log Capacity
This is the maximum number of entries for a log. If a log is full, new entries replace the
oldest entries.
For asynchronous collection, the log capacity is configurable. Enter an integer value.
The maximum log capacity is different for each Storage Type as indicated in Table 9.3.
For synchronous collection, this field is not configurable. Log capacity is calculated as
follows:
Log Capacity = (Log Period/ Storage Interval).
If the specified log period or storage interval results in a calculated log capacity that exceeds the maximum log capacity (Table 9.3), an error message will be generated and either the log period or the storage interval must be adjusted; a sketch of this capacity check follows the list below. For further information regarding these log attributes, refer to:
• Log Period on page 232.
• Storage Interval on page 233.
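The capacity check sketched below is illustrative only; the maximum capacities come from Table 9.3, which is not reproduced here, so the MAX_CAPACITY values are placeholders to be replaced with the table's actual figures.

    # Sketch: validate a synchronous log's calculated capacity against the
    # per-storage-type maximum. The limits below are placeholders only; use
    # the actual values from Table 9.3.
    MAX_CAPACITY = {'TYPE1': 10_000_000, 'TYPE2': 10_000_000,
                    'TYPE3': 10_000_000, 'TYPE5': 10_000_000,
                    'ORACLE': 10_000_000}

    def check_capacity(log_period_s, storage_interval_s, storage_type):
        capacity = log_period_s // storage_interval_s
        if capacity > MAX_CAPACITY[storage_type]:
            raise ValueError('adjust the log period or the storage interval')
        return capacity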
Calculation Algorithm
Although both trend and history logs can perform calculations on collected data prior to
storage, it is a good practice to collect and store raw data, and then perform the required
calculations using the data retrieval tool, for example, DataDirect. To do this, use the
STORE_AS_IS calculation algorithm, Figure 9.38.
This calculation causes data to be stored only when it is received from the data source.
For OPC type logs, this calculation can use the OPC server’s exception-based reporting
as a deadband filter. This effectively increases the log period so the same number of
samples stored (as determined by the log capacity) cover a longer period of time. Refer
to STORE AS IS Calculation on page 237 for further details.
Figure 9.38 (diagram): an operator trend log on the Connectivity Server serving as the source for the hierarchical primary history log on the History Server.
Other calculations may be implemented for certain property log applications. These are
described in Other Calculation Algorithms on page 239.
STORE AS IS Calculation
This calculation causes data to be stored only when it is received from the data source.
This functionality has three different applications, depending on whether it is applied to an OPC type log or an API (OMF) type log.
Collecting from Operator Trend Logs (OPC Data)
STORE_AS_IS is an option in the Calculation Algorithm section on the Data Collection
tab of the Log Configuration aspect. For OPC type logs, this calculation uses the OPC
server’s exception-based reporting as a deadband filter. When collecting from an OPC/DA
source, OPC returns data at the subscription rate, only if the data is changing. This
effectively increases the log period so the same number of samples stored (as determined
by the log capacity) cover a longer period of time.
For all calculations other than STORE_AS_IS, if data is not changing and samples are
not being received, History will insert previous values with the appropriate time stamps.
This is required for calculations to occur normally.
With STORE_AS_IS, no other calculations are performed (including INSTANTANEOUS);
therefore, data does not need to be stored at the specified sample rate. Samples will
only be recorded when a tag’s value changes, or at some other minimum interval based
on the log configuration.
Data Forwarding of Deadbanded Data to a Consolidation Node
STORE_AS_IS can implement data compaction on the collection node, before data
forwarding to the consolidation node. This saves disk space on the collection node.
When data forwarding deadbanded data to a consolidation node, be sure to use
STORE_AS_IS rather than INSTANTANEOUS as the calculation algorithm for the
hierarchical log on a consolidation node. This eliminates insertion of NO_DATA entries
for missing samples.
Time Drift in Subscription Data
When running synchronous logs (API or USER_SUPPLIED) at a fast rate, it is possible
for time stamps to drift over time, eventually forcing a sample interval to be missed. For
all calculations other than STORE_AS_IS, History adds an entry with a status of
NO_DATA for the missing sample interval. This is a false indication of missing data, and
can cause a spike or drop in a trend display (refer to Figure 9.37). By using
STORE_AS_IS, rather than INSTANTANEOUS, these spikes will not occur.
The STORE AS IS calculation is generally applied to the primary (first) hierarchical log.
The log where the STORE AS IS calculation is applied can be the data source for a
hierarchical log, if the hierarchical log is also STORE AS IS (for example data forwarding
to a consolidation node). Do not use the STORE AS IS calculation when further
calculations are required.
Deadband Storage Interval
In the case where a process is very stable and values are not changing, there may be
very long periods where no samples are stored. This may result in holes in the trend
display where no data is available to build the trends. Use the deadband storage interval
on the Deadband Attributes tab to force a value to be stored periodically, even if the
value does not change. This ensures that samples are stored on a regular basis. Refer
to Deadband Compaction % on page 244.
Shutdown and Startup
Since data is not being stored if it is not changing, History will insert the current value and current timestamp when it is shut down. This guarantees that the data can be interpolated correctly up until the time History was shut down.
The following is applicable for API (OMF-based), but NOT for operator trend logs or
user-supplied logs.
At startup, NO_DATA entries will be inserted for STORE_AS_IS type logs to show that
the node was down, and that data was not collected during that time. This will allow
interpolation algorithms to work correctly, and not presume that the data was OK during
the time the node was down.
If a calculation algorithm other than INSTANTANEOUS is specified for the log, the
algorithm is performed at each storage interval. The storage interval must be a multiple
of the sample interval. For example, if the sample interval is 5 Seconds, the storage
interval can be 10, 15, 20, or 25 Seconds, and so on.
When the calculation is performed, the specified number of data samples are entered
in the algorithm, and the result is passed to the deadband function. If the calculation
algorithm is INSTANTANEOUS (no calculation), the sample and storage intervals must
be equal, and each data sample is treated as a result.
Start Time
This is the earliest allowable system time that the log can become active. If the specified start time has already passed, the log will begin to store data at the next storage interval.
If the start time is in the future, the log will go to PENDING state until the start time arrives.
Enter the start time in the following format:
The order of the month, day, and year and the abbreviation of the month is
standardized based upon the language being used.
Use the start time to distribute the load. Both the blocking rate and start time may be
defined initially when configuring the History database. If the attributes are not configured
initially, or if adjustments are required, use the Stagger Collection function in the
hsDBMaint utility as a convenient means for adjusting these parameters. Refer to Stagger
Collection and Storage on page 472.
It is recommended NOT to use small differences (less than one minute) to distribute the
load. Managing log collection at slightly different times will result in significant overhead.
The time when a log is scheduled to start collecting data is referred to as the log’s
alignment time. Alignment time is the intersection of the log’s start time and its storage
interval. For instance, if the start time is 7/3/2003 10:00:00 AM, and the storage interval
is 15 Seconds, the log will store data at 00, 15, 30, and 45 seconds of each minute
starting at 10:00:00 AM on July 3, 2003. In this case, this is the alignment time.
If the start time for a log has already passed, it will be aligned based on the storage
interval. For example, a log with a storage interval of 1 Minute will store data at the
beginning of each minute. So its alignment time is the beginning of the next minute.
The log stays in PENDING state until its alignment time. For the log whose storage
interval is 1 minute, it will be PENDING until the start of the next minute.
The default is: 1/1/1990 12:00:00 AM.
A deadband storage interval is defined on an individual basis for each log. The deadband
storage interval is a multiple of the log’s storage interval. It is the maximum time that can
elapse without storing a sample when compaction is activated. Even if the process is so
stable that there are no samples outside the deadband, samples will still be stored at
the deadband storage interval. The deadband storage interval is always measured from
the time the last sample was stored. The deadband storage interval should be at least
10 times larger than the storage interval to ensure a high compaction ratio.
When deadband is active, a sample may be stored for any one of the following reasons (see the sketch after this list):
• sample outside the deadband.
• value status changed (possible statuses are: no data, bad data, good data).
• deadband storage interval elapsed since last stored sample.
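A minimal sketch of this decision, with invented helper names and value statuses reduced to simple strings:

    # Sketch: with deadband active, store a sample when any of the three
    # documented conditions holds. Times in seconds; deadband_pct is the
    # Deadband Compaction % (a percent of the engineering-units range).
    def store_with_deadband(value, status, t, last_value, last_status, last_t,
                            deadband_pct, range_min, range_max,
                            deadband_storage_interval):
        deadband = deadband_pct / 100.0 * (range_max - range_min)
        if abs(value - last_value) >= deadband:
            return True          # sample outside the deadband
        if status != last_status:
            return True          # status changed (no data / bad data / good data)
        return t - last_t >= deadband_storage_interval   # interval elapsed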
Figure 9.40 shows how samples are stored when deadband is active. The deadband
function holds each sample until it determines whether or not the sample is required to
plot the trend. Samples that are not required are discarded.
A sample is stored when deadband is activated at 0s. The sample at 10s is held until
the next sample at 20s. Since there is no change in slope between 0s and 20s, the 10s
sample is discarded and the 20s sample is held. This pattern is repeated at each
10-second interval until a change in slope is detected at 50s. When this occurs, the
samples at 40s and 50s are stored to plot the change. At 90s another change in slope
is detected, so the samples at 80s and 90s are stored.
Even though there are spikes between 80s and 90s, these values were not sampled so
the slope remains constant (as indicated by the dotted line). Another change in slope is
detected at 100s. The sample at 100s is stored to plot the slope. The sample at 90s,
also used to plot the slope, was already stored. The slope remains constant thereafter.
The next sample stored is 140s since the Deadband Storage Interval (40s) elapses
between 100s and 140s. The sample at 180s is also stored due to the Deadband Storage
Interval.
Figure 9.40 (diagram): storage of samples with deadband active; compaction is activated and a sample stored at 0s, with the time axis running from 0 to 200 seconds.
Compaction effectively increases the log period so the same number of samples stored
(as determined by the log capacity) cover a longer period of time. If the process is very
unstable such that every sample is stored and there is no data compaction, the effective
log period will be equal to the configured log period. If the process is perfectly stable
(maximum data compaction, samples stored only at deadband storage interval), the
effective log period is extended according to the following formula:
effective log period = (deadband storage interval/storage interval) * configured log
period
In most cases, process variation is not at either extreme, so the effective log period will
fall somewhere in between the configured (minimum) and maximum value.
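A worked sketch of the formula above, with illustrative values only:

    # Sketch: bound the effective log period under maximum compaction.
    def max_effective_log_period(deadband_storage_interval, storage_interval,
                                 configured_log_period):
        return (deadband_storage_interval / storage_interval) * configured_log_period

    # A 1-day log stored every 30s with a 300s deadband storage interval can
    # stretch to at most 10 days when the process is perfectly stable:
    max_effective_log_period(300, 30, 1)   # 10.0 days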
The calculated effective log period is stored in the EST_LOG_TIME_PER attribute. This
attribute is updated periodically according to a schedule which repeats as often as every
five days. If necessary, manually execute the update function via the hsDBMaint utility.
This is described in Update Deadband Ratio Online on page 464.
When requesting by name, EST_LOG_TIME_PER determines which log to retrieve data
from (seamless retrieval). For instance, consider a case where the trend log stores data
at 30 second intervals over a one-day time period, and the history log stores data at
one-hour intervals over a one-week time period. The data compaction ratio is 5:1 which
extends the effective log period of the trend log to five days. Making a request to retrieve
three-day-old data causes the History retrieval function to get the data from the primary log rather than the hierarchical log, even though the configured log period of the primary log is only one day.
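The retrieval choice in this example can be sketched as follows. This is a simplified model with an assumed data layout; the actual seamless-retrieval logic is internal to History.

    # Sketch: pick the finest-resolution log whose effective time period
    # (EST_LOG_TIME_PER) still covers the age of the requested data.
    def pick_log(logs, requested_age_days):
        covering = [g for g in logs
                    if g['effective_period_days'] >= requested_age_days]
        return min(covering, key=lambda g: g['storage_interval_s']) if covering else None

    logs = [{'name': 'trend',   'storage_interval_s': 30,   'effective_period_days': 5},
            {'name': 'history', 'storage_interval_s': 3600, 'effective_period_days': 7}]
    pick_log(logs, 3)['name']   # 'trend': 3-day-old data comes from the primary log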
When EST_LOG_TIME_PER is updated, both internal History memory and the history database are updated with the new values of EST_LOG_TIME_PER and COMPACTION_RATIO. This ensures the information will be available after a restart of the History node.
Before using compaction, refer to examples 2 and 5 in Property Log Configuration Reference on page 264.
Deadband Compaction %
This is the minimum change between a sample and previously stored value that causes
the new sample to be stored. Enter this as a percent of engineering units range (Range
Max. - Range Min.). For example, 1 means 1% of range.
Estimated Period
This attribute indicates the time between the oldest and newest stored values. This is
filled in by running hsDBMaint. If necessary, use the History Utility to manually update
this attribute. Refer to Update Deadband Ratio Online on page 464.
Compaction Ratio
This is a read-only attribute that indicates the number of entries sampled for each entry
stored. It is filled in by running hsDBMaint. Refer to Update Deadband Ratio Online on
page 464.
Expected Ratio
This attribute indicates the expected ratio of entries sampled to entries stored.
• Log Capacity.
• Calculation Algorithm.
(Diagram: the Storage Interval shown as a multiple of the Sample Interval, with a time axis from 0 to 60 seconds.)
(Diagram: data samples arrive from the data source at 1h, 2h, 3h, and so on; the Average calculation is applied to the samples collected over each 8-hour storage interval, producing results at 8h, 16h, and 24h. A second panel shows samples at 32h through 72h averaged, with results stored at 48h and 72h, that is, at each storage interval.)
The sample blocking rate is used for primary history logs that are connected directly
to the direct trend log. Subsequent (secondary) history logs do not use the sample
blocking rate. The sample blocking rate from the primary log determines when values
are stored for all history logs in the property log.
This storage type is recommended for all history logs that collect from operator trend
logs.
Even though TYPE2 storage provides a reduction in disk usage, it may not be applicable
in all cases. TYPE2 storage is recommended when:
• The storage interval is 1 hour or less and is divisible into 1 hour, and the data source is TTD. TTDs do not have object status, and times are always aligned.
• The storage interval is one minute or less when collecting from process objects. A
time stamp is recorded every 500th storage. During the interim, values are assumed
to be exactly one storage interval apart (as noted below). Of course larger storage
intervals can be configured, but they should be confirmed as acceptable. Since
TYPE2 logs only correct their alignment after the 500th storage, they should only
be used on fast logs. For example, a one-day storage log, if not properly aligned,
will take 500 days for the alignment to correct.
TYPE2 logs should not be used for logs with storage intervals greater than one hour.
Use TYPE3 logs instead.
TYPE3 (8-byte) is similar to TYPE2. TYPE3 provides full precision time stamp (like
TYPE1), and stores a status when a value is modified. TYPE3 does not store the original
value when the value is modified. TYPE3 logs grow to 16-bytes when the storage interval
is greater than 30 minutes.
The complete information provided for TYPE1 entries is described below. The differences
between TYPE1 and other types are described where applicable.
Time Stamp
Each data entry in a log is time stamped with Coordinated Universal Time (UTC). The time stamp is not affected by time zone or daylight saving time, which keeps entries consistent through external changes such as Daylight Saving Time shifts and any system resets that affect the local time.
TYPE1, TYPE3, TYPE5, and Oracle-based logs support microsecond resolution for
timestamps. For TYPE2 logs (file-based 4-byte entries), the time is an estimate of the actual time for the entry. The times for the entries are all defined to be a storage interval apart. A reference time is stored for each block of 500 values. The time for subsequent values after the reference time = reference time + (n * storage interval), where n is the sequential number of the value in the block (0 to 499). So if a log is configured to store a value every 15 seconds, the entries will have timestamps at 0, 15, 30, and 45s of each minute.
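The reconstruction formula can be restated as a small sketch (the helper name is invented):

    # Sketch: reconstruct a TYPE2 entry time from its block's reference time.
    def type2_entry_time(reference_time_s, n, storage_interval_s):
        assert 0 <= n < 500     # n = sequential number of the value in the block
        return reference_time_s + n * storage_interval_s

    # With a 15-second storage interval, entries land at 0, 15, 30, 45s past
    # the reference time:
    [type2_entry_time(0, n, 15) for n in range(4)]   # [0, 15, 30, 45]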
Data Value
The data value format for all types is floating point. If the data being collected is not floating point (for example, integer), the data will be converted to floating point and then stored in that format. This may result in some decimal places being truncated when the data is stored.
The only exception is when TYPE5 is used in combination with the STORE AS IS
Calculation. In this case the TYPE5 log maintains the data type of the collected data.
Status
Status information is stored according to OPC DA specifications.
Object Status
Object status is the status of the data source when the value was sampled. This field is
not used when the data source is TTD. The format of the status (what is represented
and how) is dependent upon the data source object type. When a hierarchical log is
created, the object status is obtained from the last primary log entry used in the
calculation.
If a User Object that inherits a Basic Object is the source for a log, its status will also be
collected and stored.
Object status is not stored for TYPE2 or TYPE3 logs.
9.7 Presentation and Status Functions
9.7.1 Presentation
Configure presentation attributes for the Trend Display on this tab, Figure 9.47. The default settings for these attributes are set in one of the following ways:
• Object Type (i.e., in the Property Aspect).
• Object Instance (i.e., in the Property Aspect).
• Log (in the Log Configuration Aspect).
• Trend Display.
These are listed in order from lowest to highest precedence.
For example, to override a value set in the Object Type, write a value in the Object Instance. A value set in the Object Type, Object Instance, or the Log can be overridden in the Trend Display.
To override a presentation attribute, the check box for the attribute must be marked. If the check box is unmarked, the default value is displayed. The default value can be a property name or a value, depending on what is specified in the Control Connection Aspect.
Engineering Units are inherited from the source but can be overridden. Normal Maximum and Normal Minimum are scaling factors used for scaling the Y-axis in the Trend Display. These values are retrieved from the source but can be overridden. The Number of Decimals is retrieved from the source but can be overridden.
The Display Mode can be either Interpolated or Stepped. Interpolated is appropriate for real values; Stepped is for binary values.
Click Apply to put the changes into effect.
9.7.2 Status
This tab is used to retrieve and view data for a selected log, Figure 9.48. As an option,
use the Settings button to set viewing options. Refer to:
• Data Retrieval Settings on page 261.
• Display Options on page 262.
Click Read to retrieve the log data. Use the arrow buttons to go to the next or previous
page. Page size is configurable via the Data Retrieval Settings. The default is 1000
points per page.
Click Action to:
• Import Data from File. Opens a csv file to Insert, Replace, or Insert/Replace data.
• Export Data to File. Opens a wizard to export a csv file.
• Copy Data to Clipboard.
• Restore Log from Backup. Requires a backup object in the Maintenance structure and a file on the backup server.
Set the number of points per page in the Page Size area. Default is 1000 and maximum
is 10,000.
Set the time range for retrieval of data in the Time Frame area.
• Latest retrieves one page worth of the most recent entries.
• Oldest retrieves one page worth of the oldest entries.
• At time is used to specify the time interval for which entries will be retrieved. The
data is retrieved from the time before the specified date and time.
Display Options
The Display button opens the Display Options dialog, used to change the presentation parameters for data and time stamps, Figure 9.50.
Values are displayed as text by default. Unchecking the check box in the Quality area displays the values as hexadecimal.
Timestamps are displayed in local time by default. Unchecking the first check box in the Time & Date area changes the timestamps to UTC time.
Timestamps are displayed in millisecond resolution by default. Unchecking the second check box in the Time & Date area changes the timestamp resolution to one second.
9.8 Property Log Configuration Reference
The enabled check box is only for Property logs (basic history).
To activate an individual log within a property log, select the log in the hierarchy, click
the IM Definition tab, and then click Activate under the IM Historian Log State control,
Figure 9.54.
9.9 Bulk Configuration of Property Logs
Import filters in the Bulk Configuration tool can be used to shorten the time it takes to
read information from aspect objects into the Excel spreadsheet. The most effective filter
in this regard is an import filter for some combination of object types, properties, and
data type. Since some objects may have hundreds of properties, the time savings can
be significant.
A list can also be created of the properties for which logs already exist, in case the log configurations for those properties need to be modified or deleted.
Extensive use of the Bulk Configuration tool may cause disks to become fragmented.
This, in turn, may impair the response time and general performance of the system.
Therefore, check the system for fragmented files after any such procedure, and
defragment disks as required.
History configuration impacts not only the History server disk(s), but also the disk(s)
on any Connectivity Servers where trend logs are configured. Therefore, check the
disks on any Connectivity Server where trend or history logs are located.
This sets up the workbook for bulk configuration by adding the applicable row and
column headings. Also, a dialog is presented for selecting a system (system created
via the 800xA Configuration Wizard), Figure 9.57.
Excel sheet columns A through E may disappear, hiding any log names. This can be corrected by unfreezing and then refreezing the panes.
3. Click Set System to connect the spreadsheet to the system indicated in the Select
a System dialog. Use the pull-down list to change the system if needed.
When finished initializing the worksheet, continue with Creating a List of Object Properties
on page 273.
The following example shows how to specify an import filter using a combination of two
object types and three properties. To do this:
1. Enter the object type specification in the Object column of the Import Filter row (cell
C2), Figure 9.58. Specify more than one object type if needed. If the object type
names are known, enter the names directly in the cell using the following syntax:
TYPE:object type1:object type2
where:
TYPE: causes the following text strings to be interpreted as object type names.
Each object type name is separated by a colon (:).
To interpret the text as a pattern to match object names (for example: *TC100*), omit
the TYPE: keyword.
The example specification in Figure 9.58 will import all objects of the types ggcAIN and ggcPID (TYPE:ggcAIN:ggcPID).
Use the Object Type pick list if the spelling of the object type names is not known.
To do this, right click in the Object Column for the Import Filter row (cell C2) and
choose Object Type List from the context menu, Figure 9.59.
Figure 9.59: Opening the Object Type List (Using Context Menu on Object Column)
This displays a dialog for selecting one or more object types, Figure 9.60. Scroll to
and select the applicable object types (in this case ggcAIN and ggcPID).
Select as many object types as required, although selecting too many will defeat
the purpose of the filter. Select an object type again to unselect it.
Click OK when finished.
2. Enter the property specification in the Property column of the Import Filter row. Specify one or more properties using the following syntax: NAME:property1:property2, where each property is separated by a colon (:). For example: NAME:Link.PV:Link.SP:Link.OUT, Figure 9.61.
To interpret the text as a pattern to match property names (for example: *value*), omit the NAME: keyword. A parsing sketch for this filter syntax follows the import procedure.
4. When finished with the import filter specification, choose Bulk Import>Load Sheet
from System, Figure 9.63.
This displays a dialog for selecting the root object in the Aspect Directory under
which the importer will look for objects and properties that meet the filtering criteria,
Figure 9.64.
5. Use the Select an Object dialog to specify the scope of properties to be imported.
Use the browser to select the root object whose properties will be imported. Typically,
the Include Child Objects check box to the right of the browser is checked. This
specifies that properties of the selected object AND properties of all child objects
under the selected object will be imported. If the box is not checked, only properties
for the selected object will be imported.
The other two check boxes specify:
• Whether or not to limit the import to properties that already have logs.
• Whether or not to enter the log configuration information in the spreadsheet
when importing properties that have logs.
ALWAYS check one or both of these check boxes; otherwise, no properties will be
imported. The operation of these check boxes is described in Table 9.7.
For example, the selections in Figure 9.65 will import the properties for objects in the Group_01 Generic OPC Object branch based on the filter set up in steps 1 and 2. It will import the log configuration and log template specifications for any properties that already have log configurations.
Only objects and properties that meet the filtering criteria will be imported.
6. Click Add when finished. This runs the importer.
A partial listing of the import result based on the specified filter is shown in Figure
9.66. (The complete list is too lengthy to show.)
Use the Sheet Filter (the row below the Import Filter row), or the sorting and filtering tools in Microsoft Excel, to make further adjustments to the list of imported objects and properties.
Once the list is refined, continue with the procedure covered in Assigning Log Templates
below.
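The import filter syntax used in steps 1 and 2 above can be illustrated with a small parsing sketch. The helper and its return shape are invented here; the Bulk Import add-in's internal handling is not documented.

    import fnmatch

    # Sketch: interpret an import-filter cell. 'TYPE:...' lists object type
    # names, 'NAME:...' lists property names, and anything else is a wildcard
    # pattern matched against names (for example *TC100*).
    def parse_filter(cell):
        for keyword in ('TYPE:', 'NAME:'):
            if cell.startswith(keyword):
                return keyword[:-1], cell[len(keyword):].split(':')
        return 'PATTERN', [cell]

    parse_filter('TYPE:ggcAIN:ggcPID')            # ('TYPE', ['ggcAIN', 'ggcPID'])
    parse_filter('NAME:Link.PV:Link.SP:Link.OUT')  # ('NAME', ['Link.PV', 'Link.SP', 'Link.OUT'])
    kind, pats = parse_filter('*TC100*')
    fnmatch.fnmatch('A1TC100FIC', pats[0])         # True: wildcard match on the name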
To do this:
1. Select a cell in the Property Template column. For example, Figure 9.67 shows the
cell for the first object property being selected.
If the name of the template is known, enter it directly in the spreadsheet cell. If the
name is not known, right-click and choose Template List from the context menu,
Figure 9.67.
This displays a pull-down list for all property log templates that exist in the system,
Figure 9.68.
3. The Log Configuration aspect name is specified in much the same manner as the property template. To create a new Log Configuration aspect for the property log when one does not yet exist, enter a Log Configuration aspect name directly in the spreadsheet cell.
If no Log Configuration aspect names have been specified yet in the entire column, typing the letter L in a cell will automatically fill in the cell with: Log Configuration (matching the column name). This is a suitable aspect name. These names are not required to be unique.
To add the property log to an existing Log Configuration aspect, right-click and
choose Log Config List from the context menu. This displays a list of Log
Configuration aspects that exist for that particular object. If no Log Configuration
aspects exist for the object, the list will indicate None.
As with the Property Template specification, to use the same Log Configuration aspect name for contiguous object properties in the list, click on the corner of the cell and pull the corner down to highlight the Log Configuration column for as many object properties as required.
When finished assigning Log Templates and Log Configuration Aspect names, continue
with the procedure described in Configuring Other Log Attributes below.
Then click the Load Report tab at the bottom of the spreadsheet, Figure 9.71.
An example report is shown in Figure 9.72. The report shows the disk space requirements
on a per-server basis. The information for each server is presented in a separate column.
For the example in Figure 9.72 there is only one server.
The parameters in the Load Report are:
• Load Per Server: The report shows the disk space requirements on a per-server basis. In this example, there is only one server (in this case ROC2HFVP01).
• Existing: This row indicates the disk space (in bytes) used by existing property logs. In this case, 0 (zero) indicates that no space is used since there are no existing logs.
• Proposed: This row indicates the space requirement (in bytes) for the property logs which will be created based on this spreadsheet. In this case, 3,891,200 bytes will be required.
• Total: This row indicates the sum of the Existing and Proposed disk space requirements.
• Grand Total: This row indicates the sum of the totals for all servers.
Compare the results of this report with the current disk space allocation for file-based
logs. Use the Instance Maintenance Wizard or hsDBMaint function as described in
Directory Maintenance for File-based Logs on page 468. Make any adjustments as
required, following the guidelines provided in the Directory Maintenance section. When
finished, continue with the procedure covered in Creating the Property Logs on page
287.
An example result is shown in Figure 9.76. The presence of a red not verified message
in the far left (A) column indicates a problem within the corresponding row. The item that
caused the failure, for example the log template, will also be shown in red. Verified rows
are shown in black.
Some rows may be marked as not verified because the verification function was run
before History Services was able to create the logs. Clear these rows by re-running the
verification function. Rows that were not verified on the previous run, but are verified on
a subsequent run are shown in green. The next time the function is run, the green rows
will be shown in black.
Guidelines for handling not verified rows are provided in Handling Not Verified Rows on
page 292.
To get a summary for each time the verification function is run, click the Status and
Errors tab near the bottom of the spreadsheet, Figure 9.77. A new entry is created each
time the verify function is run. The summary is described in Table 9.9.
When using the Bulk Configuration tool in Microsoft Excel, the Update System Status information will be incorrect when a sheet has been filtered. After filtering, only the visible rows are processed during the Update Log Aspects from Sheet operation. However, the Status and Errors sheet shows the number of rows written to the system as the total (unfiltered) number of rows. This is the normal mode of operation.
To further confirm that property logs have been instantiated for their respective object
properties, go to the Control structure and display the Log Configuration aspect for one
of the objects. An example is shown in Figure 9.78.
9.9.12 Troubleshooting
On occasion, the Bulk Configuration Tool may fail to create logs for one or more properties
in the property list because the tool was temporarily unable to access the Aspect Directory
for data. The rows for these properties will have a red not verified message in column
A of the spreadsheet after running the verification function (Verifying the Update on page
288). Also, the item that caused the failure (for example, the log template) will be shown
in red. If this occurs, use the following procedure to modify the property list to show not
verified rows, and then re-run the Bulk Configuration tool to create the logs for those
rows:
1. Select column A, and then choose Data>Filter>Auto Filter from the Excel menu
bar, Figure 9.79.
This will hide all rows that have no content in Column A. These rows have been
verified. Only rows that have not been verified will be shown, Figure 9.81.
3. Re-run the bulk configuration process by choosing Bulk Import>Update Log Aspect from Sheet (refer to Creating the Property Logs on page 287).
Depending on the quantity of rows that were not verified, steps 1-3 may need to be repeated more than once. Continue to repeat these steps until no errors remain.
Also run the verification function again as described in Verifying the Update on page 288.
9. Verify that the Inform IT Bulk Import box is checked in the Add-ins tool, Figure
9.82, then click OK.
10. Verify the Bulk Import add-in tools are added in Excel, Figure 9.83.
11. Restart Excel before using the Add-in.
9.10 What To Look For When Logs Do Not Collect Data
10 Profile Data Collection
10.1 Profile Historian Overview
This section provides an overview of the Profile Historian function in the 800xA system,
and instructions for configuring this functionality.
10.2 Profile Historian Configuration Overview
The AccuRay Object Server, Profile Historian Server, and Profile Historian Client
applications all run on the Windows platform. These applications may be installed on
the same computer or dedicated computers. The Profile Historian Server must be installed
on the Information Management Server.
(Diagram: Profile Historian architecture. AC 800M controllers and machine frames (Frame S1-1, Frame S1-2, machines 1..n) feed the AccuRay Object Server; the Profile Historian Server stores profile logs in the History database, and the Profile Historian Client provides displays and Reel/Grade Report configuration. The applications communicate over TCP/IP.)
10.3 Configuring a Log Set for Profile Logs
10.4 Configuring Profile Logs
4. From the Profile Logs group, choose New Object from the context menu. This
displays the New Object dialog with the Inform IT Profile Log object type selected.
5. Enter a name for the object in the Name field, for example: CA1Profile, then click
Create. This adds the object under the Profile Logs group, and creates a
corresponding Profile Log Aspect. This aspect is used to configure the profile log
attributes.
6. To display this aspect, select the applicable Profile Log object, then select the Inform
IT History Profile Log aspect from the object’s aspect list.
7. Use this aspect to configure the Profile Log Attributes.
Data Source
The profile log has multiple data sources - one for each of the quality measurements
being stored in the profile log. The Data Source entered on the Main tab specifies the
root portion of the OPC tag. The full data source tags for each of the quality measurements
are automatically filled in on the Data Source tab based on the root portion of the OPC
tag.
This displays the Select Data Source dialog for browsing the OPC server for the
applicable object.
2. Use this dialog to select the OPC object which holds the profile data as demonstrated
in Figure 10.6. Typically, the object is located in the Control structure, under the
Generic OPC Network object for the Profile data server.
3. Click Apply when done. This enters the object name in the Data Source field, Figure
10.7.
In addition, the data source tags for the quality measurements are automatically entered
under the Data Source tab, Figure 10.8.
The !DATA_SOURCE! string represents the root of the OPC tag, which is common to all data sources. The string following the colon (:) specifies the remainder of the tag for each measurement, for example !DATA_SOURCE!:FirstBox, !DATA_SOURCE!:ProfileAverage, and so on.
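A sketch of this expansion; the root tag and measurement names below are examples only:

    # Sketch: expand the !DATA_SOURCE! root into the full per-measurement tags.
    def expand_data_source(root, measurements):
        return {m: f'{root}:{m}' for m in measurements}

    expand_data_source('PM1/Profile/CA1', ['FirstBox', 'ProfileAverage'])
    # {'FirstBox': 'PM1/Profile/CA1:FirstBox',
    #  'ProfileAverage': 'PM1/Profile/CA1:ProfileAverage'}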
Log Set
Use the Log Set pull-down list on the Main tab to select the log set which was configured
specifically for the profile logs. The name of this log set must match the machine name,
for example PM1 as shown in Figure 10.9. Log Sets should not be used for xPAT
configurations.
Array Size
The Array Size attribute is located on the Collection tab, Figure 10.10. Array size must
be configured to match the number of data points per scan for the machine from which
the data points are being collected. The default is 100.
To determine the correct array size for the profile log via the Control Connection aspect
of the selected OPC tag, Figure 10.11, select the Control Connection aspect, check the
Subscribe to Live Data check box, then read the value for ProfileArray (for example,
600 in Figure 10.11).
If Array Size is configured too large or too small, the file will not be appropriately sized
for the expected log period. For instance, consider a case where the log period is
eight days and the machine generates 200 data points per scan. If the array size is
set to 100, the log will only be capable of holding four days worth of data. If the array
size is set to 400, the log will store 16 days worth of data, and needlessly consume
more disk space.
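The sizing note above amounts to a simple proportion, sketched here with the example's numbers:

    # Sketch: the time a profile log actually covers scales with the ratio of
    # the configured array size to the machine's real points per scan.
    def effective_log_days(configured_days, array_size, points_per_scan):
        return configured_days * array_size / points_per_scan

    effective_log_days(8, 100, 200)   # 4.0 days  (array too small)
    effective_log_days(8, 400, 200)   # 16.0 days (array too large, wasted disk)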
Log Period
Disk space is allocated to logs according to their log capacity, which is a function of the
log time period and storage interval. The log period can be adjusted on an individual log
basis to keep log capacity at a reasonable size for each log. For profile logs, log capacity is calculated as: log capacity = (log period/storage interval). Enter the log period as an integer value ≤ 4095 with a time unit indicator for Weeks, Days, Hours, Minutes, or Seconds. For example: 40d = 40 days, 8h = 8 hours. The maximum is 4095 Weeks.
Storage Interval
This is the time interval at which data is stored in the log. Enter the storage interval as
an Integer value with a time unit indicator: Weeks, Days, Hours, Minutes, Seconds. For
example: 40d = 40 days, 8h = 8 hours. The maximum is 4095 Weeks.
Machine Position
This attribute is located on the Data Source tab, Figure 10.12. Enter the name of the
object which holds the Reel Footage value. An example is shown in Figure 10.13. Using
this example, the Machine Position value would be
PM1/SQCS/MIS/RL/ACCREP:ReelFootage.
Figure 10.13: Finding the Object Which Holds the Reel Footage Value
For each machine specified, the Reel Report Configuration Tool provides a template to
facilitate report configuration. This template provides place holders for each of the
signals/variables which must be specified. Add signals/variables as required.
To collect data for a particular report, specify the tag corresponding to each signal/variable. The signals/variables are read from OPC data items stored in an OPC server.
(Diagram: the Trigger, Measurement, and Reel Number signals for the PM1 Reel Turn-up event.)
Start by adding a machine, and then specify the OPC data items to be read for that
machine.
This adds a new machine with the default structure as shown in Figure 10.17. The
default machine structure provides a default specification for REEL turn-up, GRADE
change, DAYSHIFT, and RollSetup.
2. Specify the machine name. To do this, click on Machine Name, and then enter a
new name, for example: PM1, Figure 10.18. This machine name must also be used
as the name for the log set to which all profile logs for this machine will be assigned.
These signals are mandatory in order to generate a Reel Turn-up report. They cannot
be deleted. One or more additional measurements can be added to collect on an optional
basis. To specify the OPC tags for these signals:
1. Click on the OPC tag place holder for the REEL Report Trigger, Figure 10.19.
2. Enter the OPC tag for the trigger that indicates the occurrence of a reel turn-up. An
example is shown in Figure 10.20.
3. Repeat this procedure to specify the OPC tag for the REEL Number counter and
ReelTrim. An example completed specification is shown in Figure 10.21.
b. Select the Report Heading place holder and enter the new report heading, for
example: ProductionSummary. There are no specific requirements for the
heading, except that it must be unique within the event (in this case, within the
REEL for PM1).
c. Right-click on the report heading and choose Add New Tag from the context
menu. This adds an OPC tag place holder for the new measurement.
d. Specify the OPC tag for the measurement. This procedure is the same as for
configuring any other signal. An example is shown in Figure 10.23.
Configuring ROLLSetup
This report is used to examine sets and rolls within a reel. This setup divides the reel
lengthwise into a specified number of sets. Each set is divided by width into a specified
number of rolls. This is illustrated in Figure 10.27.
The default specification for ROLLSetup is shown in Figure 10.28. The variables are
described in Table 10.4. These signals are mandatory in order to generate a ROLLSetup
report. They cannot be deleted. One or more additional measurements can be added to
collect on an optional basis. The procedure for specifying the signals is the same as the
procedure described in Specifying Signals for Reel Turn-up on page 314.
11 Configuring the Archive Function
11.1 Archive Media Supported
11.3 Accessing Archived Data
11.4 Planning for Reliable Archive Results
11.6 Archive Configuration Guidelines
To proceed with planning for this configuration, follow the procedures described in
Determining the Quantity and Size of Archive Groups on page 327.
Use the following procedure to calculate a reasonable size for an archive entry. This
procedure must be done for each unique log configuration (storage rate/log period). To
illustrate, the following calculation is for the log configuration that has 500 logs @ 2-second
storage rate, and 20-day log period:
1. Calculate the archive interval. It is recommended to use an interval no greater than 1/4 of the log time period.
A smaller archive interval is recommended for large log periods (for example, one year). This limits the amount of data that may be lost in the event of a hardware failure. Do not make the archive interval too small, however, as this causes a greater percentage of the disk to be used for overhead.
For a 20-day log period, the recommended archive interval = 1/4 * 20 = 5 days.
2. Calculate the amount of archive data that one log generates over the archive interval.
The following calculation is based on the 2-second storage interval.
21 bytes per archive entry * 30 points per minute * 60 minutes * 24 hours * 5 days = 4.5 megabytes
For systems using deadband, the amount of data per archive interval will be reduced.
3. Calculate the maximum archive entry size for any archive group supporting this log
configuration:
size of one platter side / (reasonable estimate for number of missed archives + 1)
= 2.2 gigabytes/4 = 550 megabytes.
By limiting each scheduled archive operation to 550 megabytes, archive data for up
to three missed archive attempts can be accumulated before exceeding the capacity
of the 2.2 gigabytes archive volume in an archive device. If the cumulative size of
the current archive entry and the previous failed attempts exceeds the capacity of
the archive media, the archive entry will be split into some number of smaller entries
such that one entry will not exceed the available space.
4. Calculate number of archive groups needed for the 500 logs @ 2-second storage
rate. This is done in two steps:
a. # of logs per archive group = result of step 3 / result of step 2 =
550 megabytes / 4.5 megabytes = 125 logs per group
b. # archive groups = total # logs of this type / # logs in one archive group=
500 / 125 = 4 archive groups
Steps 1-4 must be repeated for each log configuration (storage rate/log period combination). A worked sketch of this calculation follows.
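The sketch below reproduces steps 1 through 4 for the 500-log, 2-second, 20-day example. The manual rounds the intermediate figures (4.5 megabytes per log, 125 logs per group); the code keeps exact values and notes the rounding in comments.

    import math

    # Sketch: archive-group sizing for 500 logs @ 2-second storage, 20-day period.
    bytes_per_entry   = 21
    points_per_minute = 30                      # one point every 2 seconds
    log_period_days   = 20

    interval_days = log_period_days / 4         # step 1: archive interval = 5 days
    mb_per_log = (bytes_per_entry * points_per_minute * 60 * 24
                  * interval_days) / 1e6        # step 2: ~4.536 MB per log (manual: 4.5)
    max_entry_mb = 2.2e3 / (3 + 1)              # step 3: 2200 MB side / 4 = 550 MB
    logs_per_group = int(max_entry_mb // mb_per_log)   # step 4a: 121 (manual rounds to 125)
    groups = math.ceil(500 / 125)                      # step 4b (manual figures): 4 groups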
3BUF001092-610 A 331
11 Configuring the Archive Function
11.7 Configuring Archive Devices
Several disk archive devices can be configured on the same partition to satisfy several
different archive schemes.
Removing archive volumes does not recapture used disk space. The second step of
manually deleting the extra or deleted volumes ensures that archive data is not deleted
as a result of a configuration mistake.
When deleting volumes from a Disk Drive archive device, delete the folder manually.
Look for a folder under the Device File with the name nnArch where nn is the volume
number. Delete folders that match the volumes that were deleted from the device.
Refer to the computer’s documentation for instructions on connecting storage devices
to the computer where the archive service runs.
The operating parameters for each archive device are specified in the corresponding
archive device aspect. This requires adding one or more archive device objects for each
archive media (MO or disk drive), and then configuring the archive device aspects. For
instructions, refer to Adding an Archive Device on page 332.
4. Select the Archive Service Aspect from this object’s aspect list.
5. Click Archive Device in the Create New Archive Object section. This displays the
New Archive Device dialog, Figure 11.2.
6. Enter a name for the object in the Name field, for example: ArchDev2OnAD, then
click OK.
Keep the Show Aspect Config Page check box checked. This will automatically
open the configuration view of the Archive Device aspect.
This adds the Archive Device object under the Industrial IT Archive object and creates
an Archive Device Aspect for the new object. Use this aspect to configure the device
as described in Archive Device Aspect below.
This aspect also has a main view for managing the archive volumes on the archive
device. This is described in System 800xA Information Management Data Access and
Reports (3BUF001094*).
The archive device operating parameters which must be configured, and the manner in
which they are configured, depend somewhat on the type of device (MO or hard disk).
Review the guidelines below. Details are provided in the sections that follow. When done,
click Apply to apply the changes.
Guidelines
To configure an archive device, first specify the type of media for which the device is
being configured, and then configure the operating parameters according to that media.
The Archive Path specifies the drive where archive data will be written. If the archive
media is a hard disk, specify the directory.
The Device Behavior and Overwrite Timeout fields are used to specify how archiving
will proceed when the current archive media (volume) becomes full.
When using a disk drive, partition the disk into one or more volumes. This is not applicable
for MO drives. To configure volumes, specify the number and size of the volumes, and
set up volume naming conventions. Optionally, specify whether or not to have archive
volumes automatically published when the volumes have been copied to a removable
media (CD or DVD) and the media is remounted.
For disk-based archiving, it is recommended that automatic backup of archive data to
another media be configured. This way when a volume becomes full, an ISO image or
shadow copy, or both are created on another media. This is not applicable for MO media
which are removable and replaceable.
Optionally, specify whether or not to generate an alarm when the archive media (or
backup media) exceeds a specified disk usage limit. If these features are enabled, the
alarms are recorded in the 800xA System message buffer.
Attempting to apply an invalid parameter setting causes the Device State field to be
highlighted in red to indicate an invalid configuration.
Device Type
Specify the type of media for which the archive device is being configured:
MO Drive (single magneto-optical drive).
Disk Drive
Archive Path
This specifies the location where data will be archived. If the Device Type is an MO drive,
enter the drive letter, for example: D: or E:. If the Device Type is Disk Drive, specify the
full path to the directory where data will be archived, for example: E:\archive (the root
directory is not a valid entry for a disk drive).
When using archive backup (Configuring Archive Backup on page 338), follow these
guidelines for configuring Device Behavior and Overwrite Timeout.
When using archive backup, it is recommended that the device behavior be set to Wrap
When Full, and the overwrite timeout be set to allow for periodic checking of the backup
destination to make sure it is not full. The overwrite timeout should be two to three times
the interval at which the backup destination is checked. If checking once a week, set the
overwrite timeout to two or three weeks. Also, the number of volumes and volume
size must be configured to hold that amount of data (in this case, two to three weeks).
This is covered in Configuring Volumes on page 337. Following these guidelines will
ensure reliable archival with ample time to check the backup destination and move the
archive files to an offline permanent storage media.
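As a rough illustration of this sizing rule (a sketch only; the archive data rate, check
interval, and volume quota below are assumed values, not recommendations from this
manual):

    # Sizing sketch for Wrap When Full with archive backup.
    check_interval_days = 7                    # backup destination checked weekly
    overwrite_timeout_days = 3 * check_interval_days  # two to three times the interval
    archive_rate_mb_per_day = 200              # assumed archive data produced per day

    required_capacity_mb = archive_rate_mb_per_day * overwrite_timeout_days
    volume_quota_mb = 4000                     # DVD-sized volumes
    num_volumes = -(-required_capacity_mb // volume_quota_mb)  # ceiling division
    print(overwrite_timeout_days, required_capacity_mb, num_volumes)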
Configuring Volumes
Hard disks can be partitioned into multiple volumes. This involves configuring the number
of volumes, the active volume, volume quota, volume name format, and volume name
counter. Specify whether or not to have archive volumes automatically published when
the media where the volumes have been copied are remounted. Configuring volumes
is not applicable for MO drives.
The Volumes field specifies the number of volumes on the media that this archive device
can access. The maximum is 64. The quantity specified here will result in that
number of Archive Volume objects being created under the Archive Device object after
the change is applied, Figure 11.4.
Figure 11.4: Specified Number of Volumes Created Under Archive Device Object
Volume Quota is the partition size for each volume in megabytes. For example, a 20
gigabyte hard disk can be partitioned to five 4000-megabyte partitions where 4000 (MB)
is the Volume Quota and five is the number of volumes as determined by the Volumes
field. Typically, size volumes on the hard disk to correspond to the size of the ROM media
to which the archive data will be copied, for example 650 MB for CD ROM, or 4000 MB
for DVD. The minimum Volume Quota value is 100 MB. The maximum Volume Quota is
100 GB.
The Volume Name Format is a format string for generating the volume ID when initializing
a volume during timed archive. Enter any string with/without format characters (all strftime
format characters are accepted). This can be changed when manually initializing a
volume. The default value is ‘%d%b%y_####’:
%d = day of month as a decimal number [1,31].
%b = abbreviated month name for locale, for example Feb or Sep.
%y = year without century as a decimal number [00,99].
#### is replaced by the Volume Name Counter value to make the volume ID unique;
the number of # characters determines the number of digits in the number.
The Volume Name Counter is used to generate a unique volume id when the volume
is initialized. The default value is 1. This number is incremented and appended to the
Volume ID each time the volume is initialized.
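As an illustration of how a volume ID is formed from the format string and the counter
(a minimal sketch; the helper name and the date and counter values are hypothetical):

    from datetime import date

    def make_volume_id(fmt: str, counter: int, when: date) -> str:
        # Expand the strftime characters first, then replace the run of #
        # characters with the zero-padded Volume Name Counter value.
        expanded = when.strftime(fmt)
        digits = expanded.count("#")
        return expanded.replace("#" * digits, str(counter).zfill(digits))

    # Default format '%d%b%y_####' with counter 12 on 5 Feb 2024 -> '05Feb24_0012'
    print(make_volume_id("%d%b%y_####", 12, date(2024, 2, 5)))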
The Active Volume field is the volume number of the current volume being archived, or
to be archived.
The Create Auto-Publish Volumes, Local Disk Utilization Warning, and Backup Disk
Utilization Warning check boxes are not operational in this software version.
When archive backup is configured, volumes are marked as backed up as they are
backed up. Volumes cannot be overwritten until they are marked as backed up. If necessary,
override this feature and mark volumes as backed up even if they are not. This is
described in the section on managing archive media in System 800xA Information
Management Data Access and Reports (3BUF001094*).
When using Archive Backup, ensure that there is always enough space available in the
destination device to fit the size of the Archive Volume(s) to be copied to it. Regular
maintenance to make offline backups of copied files, allowing for the deletion of copied
Archive Volumes, is highly recommended.
Also, follow the guidelines for configuring Device Behavior and Overwrite Timeout
as described in Device Behavior and Overwrite Timeout.
When ISO backups are selected, individual archives are limited to 2GB. The ISO file
format does not support files greater than 2GB. If ISO backups are not used, the
maximum individual archive can be 50 GB and the maximum archive volume size
can be 100 GB.
Archive backup is configured with the Backup Archive Path and Backup Type fields.
The Backup Archive Path specifies the destination for ISO Image files and/or archive
copies when backing up archive data. The path must specify both the disk and directory
structure where the archiving is to take place, for example: E:\archivebackup.
The ISO image and shadow copy can be made on a remote computer which does not
require Information Management software. The directory on the remote node must be
a shared folder with privileges that include write access. For example:
\\130.110.111.20\isofile.
There are two requirements for the computer where the destination directory is located:
• The destination directory must exist on the computer BEFORE configuring the
archive device.
• The computer must have a Windows user account identical to the account for
the user under which the Archive Service runs. This is typically the 800xA system
service account.
If these conditions are not met, the error message shown in Figure 11.5 will be displayed
after applying the archive device configuration. The object will be created (the initial
indication is Apply Successful); however, the archive device will not be operational, and
the error message will appear when the message bar updates.
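A minimal pre-flight check of these two conditions can save a failed Apply. The sketch
below only proves that the directory exists and is writable from the current account (the
path is the example share from the text; the check does not verify that the Windows
accounts actually match):

    import os, tempfile

    def check_backup_destination(path: str) -> bool:
        # The destination directory must exist before the device is configured.
        if not os.path.isdir(path):
            print(path, "does not exist - create it before configuring the device")
            return False
        # The archive service account must be able to write to it.
        try:
            with tempfile.NamedTemporaryFile(dir=path):
                pass
        except OSError as exc:
            print(path, "is not writable:", exc)
            return False
        return True

    check_backup_destination(r"\\130.110.111.20\isofile")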
The Backup Type specifies the backup method. The options are:
ISO Image creates an ISO image file for the currently active volume when the volume
becomes full. The ISO Image file can then be copied to a ROM media for permanent
storage.
Copy Files creates a shadow copy rather than an ISO Image file.
BOTH creates both an ISO image and a shadow copy.
11.8 Configuring Archive Groups
6. Enter a name for the object in the Name field, for example: ArchGroupOnAD, then
click OK.
Keep the Show Aspect Config Page check box checked. This will automatically
open the configuration view of the Archive Group aspect.
This adds the Archive Group object under the Industrial IT Archive object and creates
an Archive Group Aspect for the new object. Use this aspect to configure one or more
archive groups as described in Archive Group Aspect on page 342.
When configuring the History database, the primary function of this aspect is for adding
and configuring archive groups. This is described in Adding and Configuring an Archive
Group on page 344.
After configuring an archive group, changes can be made to the group, or to the archive
entries added to the group. This is described in Adjusting Archive Group Configurations
on page 353.
This aspect is also used to invoke manual archive operations on an archive group basis.
This is described in System 800xA Information Management Data Access and Reports
(3BUF001094*).
2. Use this dialog to specify the Group name, description (optional), and the Industrial
IT Archive Service Group whose service provider will manage this archive group.
2. Use this dialog to select an entry type, Figure 11.12, then click OK. The options are
described in Table 11.2.
The dialog has two preset configurations used to select Completed Reports - one for
signed report objects, and one for unsigned report objects. To use a preset configuration,
use the browser to go to the Scheduling structure and select the Reports object, Figure
11.14. Then use the Preset Configurations pull-down list to select either Report Objects
(unsigned reports), or Signed Report Objects. In either case, the filters and other settings
are automatically configured for the selection made. Figure 11.14 shows the preset
configuration for signed report objects. The configuration for unsigned reports is essentially
the same, except that the Require Signatures area is disabled for unsigned reports.
Make sure the reports are digitally signed. Do not let the archive utility build up to the
maximum number of reports before unsigned reports are deleted.
When the preset configuration is Signed Report Objects, and the completed reports
(FileViewer aspects in the completed report objects) are not signed, the scheduled
Platform object archiving skips the archiving of these report objects. When the aspects
are signed, these skipped reports will be archived with the next archive entry.
There is a possibility that unsigned reports missed by the archive will be deleted
before they are archived, due to the Report Preferences setting in the Reports tree of
the Scheduling structure.
To select another type of object, adjust one or more settings, or simply understand
what each setting does, refer to Table 11.3.
When selecting an entry from the entries list for an archive group, the context menu (or
Group Entries button) is used to modify or delete the entry, Figure 11.16. The Modify
Entry command displays the object selection dialog as described in Adding Entries to
Archive Groups on page 346.
It is important to strike a balance between logs and time intervals such that the archive
media is used effectively. Some common guidelines are:
• The archive interval should be between 1/4 of the log period and two weeks. For a
four-day log, archive it once a day. This gives the archive three chances to catch up
if the IM server was down during an archive. For a four-year log, a one-year archive
interval would create huge archives; it makes more sense to archive once every two
weeks. Two weeks of 60-second data is about 20,000 values, which is reasonable.
If the log has one-second data, that is 1.2 million numeric values, a significant
amount of data per log, and the archive interval should be lowered to keep the
numeric entries per archive to 100,000 or less.
• Once the intervals are selected, determine the number of logs in each archive group.
This depends on the size of the media and the entries per log per interval. With
smaller, CD-size media, keep the number of logs smaller; with DVD-size media,
more logs fit. If using just disk archives, with archive volumes configured at the
greatest capacity, the limits can be used. A practical limit is 2500 logs per archive
group. The limit should be decided by the overall size of the archive. If 2500 logs
with one week of data is 1 GB and CD-size media was selected (640 MB), this
configuration will not work. For a given media size, it is recommended that 10 or
more archives fit on a media.
• Make sure that the archives created, either ISO images or copies, do not fill up the
backup destination. After an archive configuration is created, it will create archives
at the rate the 800xA system collects data; a rough rate estimate is sketched below.
If a 500-value/second numeric configuration creates 1 GB of archives per day, a
2000-value/second configuration will consume 4 times that, or 4 GB per day. The
end user must plan for and remove these archives from the backup destination to
keep archiving running. Once the archive backup destination fills up, archiving will
stop until more disk space is available.
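As a rough check of the backup destination load, the daily archive volume can be
estimated from the aggregate storage rate (a sketch; the 21-byte entry size is taken
from the sizing example earlier in this chapter):

    # Estimate archive data produced per day from the aggregate storage rate.
    BYTES_PER_ENTRY = 21   # from the sizing example earlier in this chapter

    def archive_gb_per_day(values_per_second: float) -> float:
        return values_per_second * 86400 * BYTES_PER_ENTRY / 1e9

    # Roughly 1 GB/day at 500 values/second, 4 GB/day at 2000 values/second.
    print(archive_gb_per_day(500), archive_gb_per_day(2000))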
For scheduled archives, even if no new samples were collected, at least one sample
(the last valid point) will be archived.
3. Browse to the Scheduling Objects category and select the Job Description object
(Object Types>ABB Systems>Scheduling Objects>Job Description), Figure 11.18.
4. Assign the object a logical name.
5. Click Create. This creates the new job under the Job Descriptions group, and adds
the Schedule Definition aspect to the object’s aspect list.
6. Click on the Scheduling Definition aspect to display the configuration view, Figure
11.19. This figure shows the scheduling definition aspect configured as a periodic
schedule. A new archive will be created every three days, starting June 9th at 5:00
PM, and continuing until September 9th at 5:00 PM.
To add an action:
1. Right-click on the Job object (for example ScheduleArchGrp1 in Figure 11.20) and
choose New Aspect from the context menu.
2. In the New Aspect dialog, browse to the Scheduler category and select the Action
aspect (path is: Scheduler>Action Aspect>Action Aspect), Figure 11.21.
Use the default aspect name, or specify a new name.
Select the archive group to be scheduled by this action. The Service Group and Archive
Devices fields will be automatically filled in according to the selected archive group. If a
different archive device needs to be specified, use the corresponding pull-down list to
display a list of available devices.
The Last Archive Time is filled in by History. This is the time the last archive operation
occurred for a group. This time is remembered by the system so that the next archive
operation will archive data starting at that time. Change (reset) this time to cause the
next archive operation to go back farther in time, for example to account for a failed
archive, or skip ahead to a later time, for example to stop the archive from collecting
data for a certain time period. To do this, change the date and time, then press Reset.
The last archive time can also be reset from the Archive Group aspect as described in
Adjusting Archive Group Configurations on page 353.
After selecting the Maintenance... button, use the Maintenance Dialog, Figure 11.24,
to create and delete archive logs.
Specify parameters as in the numeric log archive group entry dialog. Next, go through
the system and create or delete archived log references that match the specified criteria.
The Validate Archive Logs button verifies the correctness of the references, updates
them if they are not correct, and reports the number of updated references to the user.
11.9 Adding a Read-Only Volume for a Mapped Network Drive
Additional read-only volumes can be created for reading archive volumes that have been
copied to a mapped network drive. These volumes should not be associated with (created
beneath) any archive device object. The recommended method is to use the Archive
Service Aspect. This is basically the same procedure used to add archive devices and
archive groups (Adding an Archive Device on page 332).
To add a volume for a mapped network drive:
1. In the Plant Explorer, select the Node Administration structure.
2. Expand the object tree for the node where the volume is to be added.
3. In the object tree for the selected node, expand the Industrial IT Archive Service
Provider and select the Industrial IT Archive object.
4. Select the Archive Service Aspect from this object’s aspect list.
5. Click Archive Volume. This displays the New Archive Volume dialog.
6. Enter a name for the object in the Name field, for example: Drive H, Figure 11.25,
then click OK.
7. This adds the Archive Volume object under the Industrial IT Archive object and
creates an Archive Volume aspect for the new object.
8. Use the config view to select the Service Group that will manage archive transactions
for this volume, and set the volume path to the mapped network drive.
11.10 PDL Archive Configuration
Table 11.4: Legal Combinations for Archive Mode, Delete Mode, and Delay Mode

Archive Mode              Legal Delete Modes             Applicable Delay Mode
No_auto_archive           No_auto_delete                 Not Applicable
                          After_job (campaign)           Delete Delay
                          After_batch (reel or grade)    Delete Delay
On_job_end (campaign)     No_auto_delete                 Archive Delay
                          After_archive                  Archive Delay + Delete Delay
On_batch (reel or grade)  No_delete                      Archive Delay
                          After_archive                  Archive Delay + Delete Delay
Use the Archive Mode button to select the archive mode. The choices are:
• No_auto_archive - Archive PDLs on demand via View Production Data Logs.
• On Job End - Archive PDLs automatically after a specified delay from the end of
the job (for Batch Management, job = campaign).
If On Job End is chosen for a Batch Management campaign with multiple batches
in the campaign, Information Management archives data at the end of each batch in
the campaign, and at the end of the campaign. For example, if there are three batches
in a campaign, and On Job End is selected as the archive mode, there will be three
archives: batch 1; batches 1 and 2; and batches 1, 2, and 3.
• On Batch End - Archive PDLs automatically after a specified delay from the end of
the batch. (For Profile Historian, batch = reel or grade.)
For Profile Historian applications, configure the archive mode as ON BATCH END.
Use the Delete Mode button to select a delete mode. The choices are:
• No_auto_delete - Delete PDLs on demand via PDL window.
• After Archive - Delete PDLs from PDL database automatically, a specified time
after they have been archived.
• After Job End - Delete PDLs after specified delay from end of job (for Batch
Management, job = campaign). The PDL does not have to be archived.
• After Batch End - Delete PDLs after a specified delay from the end of the batch.
The PDL does not have to be archived to do this. (For Profile Historian, batch = reel
or grade.)
For Profile Historian applications, configure the delete mode as AFTER ARCHIVE,
and set the Delete Delay to a value equal to or slightly greater than the profile log
time period.
12 Consolidating Historical Data
12.1 Consolidating Batch PDL Data with History Associations
This section describes how to consolidate historical process data, alarm/event data, and
production data from remote history servers on a dedicated consolidation node. This
includes data from 800xA 5.1 or later Information Management nodes in different systems.
The consolidation node collects directly from the History logs on the remote nodes.
For instructions on consolidating historical process data, refer to Consolidating Historical
Process Data on page 370.
For instructions on consolidating alarm/event data stored in message logs, and production
data stored in PDLs, refer to Consolidating Message Logs and PDLs on page 403.
12.2 Consolidating Historical Process Data
When setting up numeric consolidation on nodes that will also consolidate Batch PDL
data with history associations, the naming convention on the Numeric Log template on
the consolidation node must match the naming convention for the numeric log template
on the source node.
This starts the process of reading the log configuration information from the specified
remote history server. The progress is indicated in the dialog shown in Figure 12.2.
When the Importer Module successfully loaded message is displayed, the read-in
process is done.
8. Click Next, then continue with the procedure for Assigning Objects to the Imported
Tags on page 374.
This displays a dialog for specifying the location and naming conventions for the
Log Template object, and the Log Configuration aspects, Figure 12.4.
2. Typically, the default settings can be used. This will create the Log Template in the
History Log Template Library and provide easy-to-recognize names for the
template and Log Configuration aspects. Adjust these settings if necessary. Click
Next when done.
This displays a dialog which summarizes the log/object association process, Figure
12.5. At this point no associations have been made. There are no tags assigned to
new objects.
3. Click Assign Objects. This displays the Assign Objects dialog, Figure 12.6. This
dialog lists all logs that have been imported from the remote history node and are
now in the resulting xml file. The status of each log is indicated by a color-coded
icon in the Access Name column. The key for interpreting these icons is provided
in the lower left corner of the dialog.
4. Use the Assign Objects dialog to make any adjustments to the imported tag
specifications that may be required.
To modify the object and property names to make the History configurations match:
1. Split the Access Name into three fields - field 2 will become the object name path,
and field 3 will become the property name. In this example, the [Control
Structure]Root/ prefix is removed from the object path. If this is not removed from
the object path, the import will not work correctly. To split the Access Name:
a. Make sure the selected Object String is Access Name.
b. Click the Add button associated with the Split String At field. This displays the
Add String Split Rule dialog, Figure 12.7.
c. For Split Character, enter the character that splits the structure and the object
path, that is, the forward slash (/). Leave the other parameters at their default
settings: Occurrence = 1st, Search Direction = Forward.
d. Click OK, Figure 12.8.
e. Click the Add button associated with the Split String At field. This displays the
Add String Split Rule dialog.
f. For Split Character, enter the character that splits the object and property names.
For Information Management, the colon (:) character is used. Leave the other
parameters at their default settings: Occurrence = 1st, Search Direction =
Forward.
g. Click OK.
2. Make Field 2 of the split Access Name (text left of the split character) the Object
String, and trim (remove) the split character (in this case, the colon) from the
Object String. To do this (reference Figure 12.9):
a. In the String for Object Name section, select 2 from the Field pull-down list.
This specifies Field 2 (text left of split character) as the Object String.
b. In the Trim section check the Right check box and enter 1. This specifies that
one character will be trimmed from the right end of the text string.
As changes are made, the Object String will change accordingly. The resulting
Object String is shown in Figure 12.10.
3. Make Field 3 of the split Access Name (text right of the split character) the Property
String. To do this, in the String for Object Property section, select 3 from the Field
pull-down list, Figure 12.11. This specifies Field 3 (text right of the split character in
the Access Name) as the Object Property String.
After any desired adjustments are made, continue with the procedure for Creating the
Object/Tag Associations on page 380.
2. Click Close. This displays the Assign Objects summary again. This time the summary
indicates the result of the assign objects process, Figure 12.14.
3. Click Next. This displays a dialog for specifying parameters for creating the new
Log Template and Log Configuration aspects.
4. Continue with the procedure for Specifying the Service Group and Other
Miscellaneous Parameters on page 382.
Enter a prefix for each group’s folder name. The specified counter type (A-Z or 0-9) will
be appended to the prefix to make each folder name unique. For example, by specifying
eng130_ as the prefix, and 0...9 as the Counter Type, the following group names will be
used: eng130_0, eng130_1, and so on.
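A trivial sketch of how these folder names are generated (illustrative only):

    def group_folder_names(prefix: str, count: int, numeric: bool = True):
        # Counter type 0...9 appends digits; A-Z appends letters.
        if numeric:
            counters = [str(i) for i in range(count)]
        else:
            counters = [chr(ord("A") + i) for i in range(count)]
        return [prefix + c for c in counters]

    # eng130_ with counter type 0...9 -> ['eng130_0', 'eng130_1', 'eng130_2']
    print(group_folder_names("eng130_", 3))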
Set the Source Type to Remote Historian.
Click Next when done. This displays an import preview where settings can be confirmed,
Figure 12.16.
Read the preview. When ready, click Next to start the import.
The progress status is displayed while the log configurations are being imported, Figure
12.17. When the Import has completed successfully message appears, click Finish.
The Log Configuration aspects have now been created for the imported log configurations.
A History Source object must be present above the Log Configuration aspects in the
structure where they reside, Figure 12.18. If the History Source object is not present,
refer to the procedure for adding History Source objects in Configuring Node
Assignments for Property Logs on page 189.
The Log Configuration aspect’s Status tab (or any Desktop tool such as DataDirect or
desktop Trends) can now be used to read historical data from the remote log. An example
is shown in Figure 12.19.
Figure 12.19: Example, Viewing Historical Data from an Imported Log Configuration
Finish the set-up for historical data consolidation by Collecting Historical Data from
Remote History Servers on page 386.
The basic steps are described below. Further details are provided in the referenced
sections:
• Determine the storage interval of the remote logs. The storage interval for the
new logs being created should match or be a multiple of the storage rate for the
remote logs being collected. To determine the storage rate of the remote logs, refer
to a Log Configuration aspect created for an imported log configuration. Refer to
Determining the Storage Rate on page 387.
• Create a new Log Template. This procedure is basically the same as Building a
Simple Property Log on page 197, except that the direct log in the property log
hierarchy must be an IM 3.5 Collector link type, rather than a basic history trend log.
Also, the storage interval recorded in the previous step will be used. For details refer
to Creating a New Log Template on page 388.
• Use the Bulk Import tool to instantiate the property logs. When this is done, change
the Item IDs for all objects, and apply the new Log Template. Refer to Using the
Bulk Import Tool to Instantiate the Property Logs on page 390.
Figure 12.20: Example, Looking Up the Storage Rate for the Remote Logs
When configuring the Data Collection parameters, set the storage interval to match or
be a multiple of the remote log’s storage interval, Figure 12.22.
Modify the Item IDs in the spreadsheet to make the IM data source for the direct logs
reference the remote history log names. The IM Data Source specification has the
following form: $HSobject, attribute-n-oIPaddress.
The IP address is only required when collecting from multiple sites, and some sites
use identical log names. The IP address is not required when all data sources are
unique.
Scroll to the Item ID column. The remote log name is embedded in the corresponding
object’s Item ID, Figure 12.24.
The portion of the Item ID text string before the start of the .$HS string must be stripped
away and be replaced with DS, Figure 12.25. This instructs the Bulk Import tool to use
the remainder of the Item ID ($HSobject, attribute-n-oIPaddress) as the log’s data source.
Figure 12.25: Trimming the Item ID
Be sure to remove the wildcard (*) character from the Find criteria; otherwise, the
entire Item ID (not just the prefix) will be replaced.
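When many rows need the same trim, the find-and-replace can also be scripted (a
sketch; the example Item ID is hypothetical, and only the text before .$HS is replaced,
mirroring the operation described above):

    import re

    def trim_item_id(item_id: str) -> str:
        # Strip everything before '.$HS' and substitute the 'DS' prefix,
        # leaving '$HSobject,attribute-n-o...' as the log's data source.
        return re.sub(r"^.*?(?=\.\$HS)", "DS", item_id, count=1)

    # Hypothetical value from the spreadsheet's Item ID column.
    print(trim_item_id("Some-Prefix-Text.$HS02-B1-001,MEASURE-1-o"))
    # -> DS.$HS02-B1-001,MEASURE-1-o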
The result of the find and replace operation is shown in Figure 12.27.
When consolidating a dual log configuration, edit the ItemID string to specify BOTH
logs in the dual log configuration. The syntax for this is described in Additional ItemID
Changes for Dual Logs on page 392.
If dual logs are not being consolidated, skip this section and continue with Selecting
the Log Template Configuration on page 394.
The correct syntax depends on whether it is a simple dual log configuration (no secondary
history logs), or a hierarchical configuration with secondary history logs.
Simple Dual Log Configuration
For a simple dual log configuration (two history logs), Figure 12.28, the syntax is:
DS.$HSobject,attribute-1-d or DS.$HSobject,attribute-1-d2
The number following the first dash (-1) identifies the first log added to the log template.
The d following the second dash identifies the matching dual log, in this case, the second
log added to the template. The number following -d defaults to: first dash number+1.
With this configuration, the matching dual log is always -2; therefore the number does
not have to be entered following d. For example:
DS.$HS02-B1-001,MEASURE-1-d = DS.$HS02-B1-001,MEASURE-1-d2
For the hierarchical dual log configuration shown in Figure 12.29 (direct trend logs and
consolidation logs numbered 1 through 4, where log 3 is the first consolidation log and
log 4 is its matching dual), the syntax is:
DS.$HSobject,attribute-3-d OR DS.$HSobject,attribute-3-d4
Figure 12.29: Hierarchical Dual Log Configuration, Example 1
For the hierarchical dual log configuration shown in Figure 12.30 (logs numbered 1
through 4, where log 2 is the first log and log 4 is its matching dual), the syntax is:
DS.$HSobject,attribute-2-d4
Figure 12.30: Hierarchical Dual Log Configuration, Example 2
This Delete operation deletes all the history data of the logs listed in the Excel
sheet from the 800xA system.
2. From the Property Template column for the first (top) object, choose Template List
from the context menu.
3. Select the template from the list provided, Figure 12.31.
4. Click on a corner of the cell, and pull the corner down to highlight the Property
Template column for the remaining objects. The template specification will
automatically be entered in the highlighted cells when the mouse button is released,
Figure 12.32.
Figure 12.32: Applying the New Log Template Specification to All Objects
5. When finished, run the Bulk Configuration utility to update the log aspects from the
spreadsheet. To do this, choose Bulk Import>Update Log Aspect from Sheet.
To check the results, go to the location where the Log Configuration aspects have been
instantiated, and use the Status tab of a Log Configuration aspect to read historical data
for the log. These logs will be located under the same root object as the imported log
configuration aspects. An example is shown in Figure 12.33.
Once the TTD log configurations are present in the Aspect Directory, follow the guidelines
in this section to create property logs to collect from the TTD logs. There are two basic
steps as described below. Further details are provided in the referenced sections:
• Create a new Log Template. This procedure is basically the same as Building a
Simple Property Log on page 197, except that the direct log in the property log
hierarchy must be an IM Collector link type, rather than a basic history trend log.
Also, the Collection Type must be OPC HDA. For details refer to Creating a New
Log Template on page 397.
• Use the Bulk Import tool to instantiate property logs to collect from the TTD logs.
When this is done, specify the Item IDs for all objects to reference the TTD logs,
and apply the new Log Template. Refer to Using the Bulk Import Tool to Instantiate
the Property Logs on page 398.
Modify the Item IDs in the spreadsheet to make the IM data source for the direct logs
reference the TTD log names. The IM Data Source specification has the following form:
DS.object:property,TTDlogname. To do this, do the following for each TTD log
specification:
1. Obtain the object name from the Object column. The object name is the text string
that follows the last delimiting character in the full text string, Figure 12.37.
2. Obtain the property name from the Property column, Figure 12.38.
3. Obtain the TTD log name from the applicable log template which was created in the
Aspect Directory for the TTD log group, Figure 12.39.
4. Enter the Data Source specification in the Item ID column, Figure 12.40.
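Assembling the Item ID from the three columns is mechanical, as the sketch below
shows (the column values are hypothetical):

    def ttd_item_id(obj: str, prop: str, ttd_log: str) -> str:
        # IM data source form for TTD collection: DS.object:property,TTDlogname
        return "DS.{}:{},{}".format(obj, prop, ttd_log)

    # Hypothetical values from the Object, Property, and TTD log name columns.
    print(ttd_item_id("TC101", "MEASURE", "TTD_1M_LOG"))
    # -> DS.TC101:MEASURE,TTD_1M_LOG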
Replace the current Log Template specification with the new Log Template. To do this,
right-click on the Property Template column for the first (top) object, and choose Template
List from the context menu. Then select the template from the list provided, Figure 12.41.
Click on a corner of the cell, and pull the corner down to highlight the Property Template
column for the remaining objects. The template specification will automatically be entered
in the highlighted cells when the mouse button is released.
When finished, run the Bulk Configuration utility to update the log aspects from the
spreadsheet: choose Bulk Import>Update Log Aspect from Sheet.
To check the results, go to the location where the Log Configuration aspects have been
instantiated, and use the Status tab of a Log Configuration aspect to read historical data
for the log. An example is shown in Figure 12.42.
12.3 Consolidating Message Logs and PDLs
Avoid consolidating Batch Management-created PDL Message Logs if a PDL Message
Log does not exist on the Information Management Consolidation node.
Perform the following procedure if a PDL Message Log has not been configured on
the Information Management Consolidation node, or if it has been configured
incorrectly. Although a PDL Message Log created by Batch Management will be
consolidated from the target node, in either of these cases it will not contain any
messages for any of the tasks when viewed on the Consolidation node.
a. Correctly configure the PDL Message Log on the Consolidation node.
b. Use SQLPLUS on the target node to set the Archive Status to 0 for all tasks in
the task table. This forces the PDL Message Logs to be reconsolidated, at which
time the messages will be consolidated.
2. When consolidating message logs, a user account must be set up on the source
node. This account must use the same user name and password as the installing
user for the target consolidation node. Also, this user must be added to the
HistoryAdmin group. Detailed instructions for adding user accounts are provided in
the section on Managing Users on page 589.
3. On the consolidation node, create the Job Description object and configure the
schedule and start condition(s) through the respective aspects. Refer to Setting Up
the Schedule on page 404.
4. Add an action aspect to the Job Description object. Refer to Setting Up the Schedule
on page 404.
5. Open the action aspect view, and select IM Consolidate Action from the Action
pull-down list. Then use the dialog to specify the source and target nodes, and the
source and target log names. Each action is used to specify logs for ONE source
node. Refer to Using the Plug-in for IM Consolidation on page 406.
6. Repeat Step 4. and Step 5. for each source node from which the consolidation node
will consolidate message and PDL data.
Reuse of Batch ID (TCL) is supported by PDL Consolidation. A warning is displayed
in the log file when processing a remote task entry (TCL batch) that already exists
on the local node with a different start time. The user can decide what to do about
the duplicate. If nothing is done, the local entry will eventually be removed through
the archive process.
4. Click Create. This creates the new job under the Job Descriptions group, and adds
the Schedule Definition aspect to the object’s aspect list.
5. Click on the Scheduling Definition aspect to display the configuration view, Figure
12.43. This figure shows the scheduling definition aspect configured as a weekly
schedule. Consolidation for the specified node will occur every Sunday at 11:00 PM,
starting July 3rd, and continuing until December 31st at 11:00 PM.
3. The password is used to connect to Oracle on the remote IM server. If the source
IM node is in the same domain and has the same service account as the destination
IM, the field can remain <default>. If the source IM is in another domain, the Oracle
Administration password for the remote IM must be entered. If the source IM
password is updated, the consolidation action must be updated to reflect the change.
4. To consolidate PDL data, select the Consolidate PDL check box. For Batch
Management, this includes PDL message logs, so it is not required to specify PDL
message logs in the message log list as described in Step 5.. However, the
PDL_MESSAGE log must exist on the consolidation node.
When consolidating PDL data for MOD 300 software applications, the corresponding
MODMSGLOG must be added to the message log list.
5. Specify the message logs to be consolidated as follows:
a. Click the Add Log button.
b. In the Add Log dialog enter the full log name of the consolidation log in the
Local Name field, and the full log name of the log from which messages are
being collected in the Remote Name field, Figure 12.45.
The $HS prefix and -n-o suffix must be used when specifying the local and remote
log names.
d. Click OK when finished. This puts the log specification in the list of message
logs to be consolidated, Figure 12.46.
Repeat Step a. through Step d. for as many message logs as required.
13 History Database Maintenance
This section provides instructions for optimizing and maintaining the history database
after configuration has been completed. Utilities for some of these maintenance functions
are available via the Windows task bar.
The procedures covered in this section are:
• Backup and Restore on page 413 describes how to create backup files for the current
History database configuration, restore a history database configuration from backup
files, and synchronize the Aspect Directory contents with the current Information
Management History database configuration.
• Schedule History Backups on page 442 describes how to use the Report Action of
the Action Aspect to schedule a History backup.
• Starting and Stopping History on page 441 describes how to stop and start history
processes under PAS supervision. This is required for various database maintenance
and configuration procedures.
• Starting and Stopping Data Collection on page 442 describes various methods for
starting and stopping data collection.
• Viewing Log Runtime Status and Configuration on page 445 describes how to use
the Log List aspect for viewing runtime status for logs.
• Presentation and Status Functions on page 449 describes how to use the Presentation
and Status tabs on the log configuration aspect for formatting history presentation
on Operator Workplace displays, and for viewing log data directly from the log
configuration aspect.
• History Control Aspect on page 455 describes how to read status and network
information for the History Service Provider for a selected node in the Node
Administration structure.
• Database Maintenance Functions on page 457 describes various maintenance
operations, including extending tablespace, maintaining file-based storage, and
staggering data collection.
13.1 Accessing History Applications
13.4 Hardware Redundancy
13.5 Software Backup
Each application assigns a different value to its data. While the frequency of the backups
is important because it defines the amount of data that can be recovered, it is more
important that backups are created and stored on another file server. In many cases,
having just one of these backups is better than having no backup available.
13.6 Create a Backup Plan
13.7 Backup and Restore
13.7.1 Considerations
When backing up or restoring the History database, make sure the disk is ready and
available on the machine on which the procedure is to occur. Also, these preparatory
steps are required before performing a restore operation:
• All services under PAS supervision must be stopped.
• The Inform IT History Service Provider must be stopped.
• There must be NO applications accessing the Oracle database.
The log file should be checked after each backup and restore operation to make sure
that each backup or restore operation completed successfully.
Trend log configurations will only be backed up by this utility when they exist in a
property log structure in combination with a history-based log.
As an option, choose to back up just the Aspect System Definition file. This provides a
file that can be used to recreate the History database configuration in the Aspect Directory.
If the data files exceed two gigabytes, or if there are more than 25,000 files, then multiple
zip files will be created using the following naming convention:
• First File name-drive.zip
• Next File name-drive0001.zip
• Next File name-drive0002.zip
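A trivial sketch of that naming sequence (illustrative only):

    def backup_zip_names(base: str, count: int):
        # The first file has no counter; subsequent files append 0001, 0002, ...
        return [base + ".zip" if i == 0 else "{}{:04d}.zip".format(base, i)
                for i in range(count)]

    # e.g. name-drive.zip, name-drive0001.zip, name-drive0002.zip
    print(backup_zip_names("name-drive", 3))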
When backing up the History database, make sure the disk is ready and available on
the node on which the procedure is to occur. The log file should be checked after the
backup operation to make sure that the backup operation completed successfully.
Make sure the system drive is not getting full. Temp space is required to make the backup.
If the log file indicates that the Oracle export failed, use the option to export to a disk
with more space.
The following procedures can be used to back up all information related to Information
Management software. Information Management software includes:
• Inform IT History (includes Archive).
• Inform IT Display Services.
• Inform IT Application Scheduler.
• Inform IT Data Direct.
• Inform IT Desktop Trends.
• Inform IT Open Data Access.
• Inform IT Calculations.
Not all components require additional steps to back up local files. An Information
Management server will have all the software installed, while other nodes will have only
some of the software. For each component that is configured and used on a node,
all related files should be backed up occasionally. The following procedure provides the
steps to perform the backups.
Node Types
The IM software runs on three basic node types.
• Information Management Server.
• Information Management Client, with optional Desktop Tools components.
• Desktop Tools installation.
For a given node type, the backup steps depend on the type of software installed and
the software used on the node. If software is installed but not utilized, there will be
nothing to back up. The following steps identify how to back up an Information
Management 800xA Server node. The steps to back up the other node types will
reference the server backup procedure.
The IM Server must be connected to an active aspect system at this time. The following
steps require access to the aspect directory.
1. Select:
ABB Start Menu > ABB System 800xA > Information Mgmt > History > IM
Backup and Restore
2. Verify the Create Backup Files of Current Configuration option is enabled in the
IM Historian Backup/Restore Utility window.
3. Click Next. A window for setting up the backup operation is displayed (Figure 13.2).
4. Specify the location where the backup files are to be created in the New Directory
Path for the Backup field. This path must already exist and the directory must be
empty. If necessary, click Browse to create a new directory (Figure 13.2). Figure 13.2
shows the Browser dialog being used to browse the existing path C:\Backup, and
then create a folder named T3 in the Backup directory.
The backup of the History data must be in a directory of its own, not the
D:\HSDATA\History directory. If the data is put into the D:\HSDATA\History directory,
it will get lost.
a. As an additional option, an alternate location can be specified for the temporary
and Oracle DB export files used during the history database backup.
To specify a folder for the export other than the system drive, enter
the following in the additional options dialog:
d:\export
Where d:\export is a folder created on a disk with enough space for the export.
5. Verify the Only Generate Aspect Definition File option is disabled.
6. Click Next. The HsBAR Output Window is displayed.
Refer to Specifying Additional hsBAR Options on page 435 to specify additional options
for the hsBAR command.
After the HsBAR Output Window closes, monitor the progress in the Progress Status
area of the IM Historian Backup/Restore Utility window and click Finish when the
backup is complete.
If a message appears stating that there are inconsistencies between the log
configurations in the Aspect System and the log configurations in Oracle, it may be
because the database was not cleaned before running the backup. Use the hsDBMaint
-clean function to clean the database and then rerun the backup. If this does not fix
the problem, contact ABB Technical Support for further assistance.
8. The HsBAR Output window may open over and hide the progress status
window. There is a check box for specifying that the HsBAR window be closed
automatically when finished, Figure 13.3. This is recommended. If the window
is not closed automatically, wait for the Close button at the bottom of the output
window to be activated, then click Close.
• History Archive Data: For each archive device, go to the location specified by the
Device Filename and copy the folders under that directory to a safe location. Do
this even if automatic backup is configured. If automatic backups are for local disks,
also transfer them to a safe media at this time. Archive state information must be
saved. The Archive folder from the following path must be saved to a safe location:
C:\ProgramData\ABB\IM\
• Reports: Save any report template files created in Microsoft Excel, DataDirect,
and/or Crystal Reports. Also save report output files created as a result of running
these reports via the Scheduling Services.
• Desktop Trends: Back up trend display, ticker display, and tag explorer files.
• Display Services: Back up the directories for custom users, as well as display and
user element definitions.
• DataDirect: Back up custom text files for object, object type, and attribute menus
used on the DataDirect windows.
All files should be copied to a remote storage medium after the save is completed.
Use PAS to start all Information Management processes and the Inform IT History service
provider for this node.
1. To run PAS Process Administration, launch the Run dialog (Windows key-R), enter
PASGUI and press OK.
2. Click Start All in the PAS dialog box. This will start the Inform IT History Service
Provider.
Non-Information Management Server nodes: any nodes that are installed and
configured to use Desktop Trends, DataDirect, or Application Scheduler (reporting)
should back up any application-specific files after the backup process is completed
on the node. The backup files include:
• Reports: Save any report template files created in Microsoft Excel, DataDirect,
and/or Crystal Reports. Also save report output files created as a result of running
these reports via the Scheduling Services.
• Desktop Trends: Back up trend display, ticker display, and tag explorer files.
• DataDirect: Back up custom text files for object, object type, and attribute menus
used on the DataDirect windows.
Be sure to follow the preparatory steps (1-4) in Restoring a History Database on page
422 before beginning the actual restore operation.
1. Stop all PAS processes. Refer to Starting and Stopping History on page 441.
2. Make sure no third-party applications are accessing the Oracle database.
3. Stop the Inform IT History Service Provider. To do this (reference Figure 13.4):
a. Go to the Service structure in the Plant Explorer and select the
Inform IT History Service Provider.
b. Select the Service Provider Definition aspect.
c. Click the Configuration tab.
d. Uncheck the Enabled check box then click Apply.
When finished with the aforementioned preparatory steps, run the restore utility as
described below:
1. Start the Backup/Restore utility. When the Backup/restore utility is started, the Create
Backup option is selected by default.
2. Click the Restore configuration from backup files option, Figure 13.5.
3. Click Next. This displays a dialog for setting up the restore operation, Figure 13.6.
4. Specify the location of the backup files from which the History database will be
recreated. Enter the path directly in the Path of IM historian backup field, or use the
corresponding Browse button. Figure 13.2 shows the Browser dialog being used
to browse to C:\Backup\T3.
• As an option, specify a new mount point path for file-based logs. Use the
corresponding Browse button and refer to Mount Point on page 438 for further
guidelines.
• As an option, specify that a different Oracle tablespace definition file be used. This
may be required when restoring the files to a machine that has a different data drive
layout than the original backup machine. Use the corresponding Browse button
and refer to Moving hsBAR Files to a Different Data Drive Configuration on
page 438 for further guidelines.
• As an option, specify additional hsBAR options. These options are described in
Specifying Additional hsBAR Options on page 435.
5. Click Next when finished with this dialog. This opens the progress status window
and the HsBAR Output Window.
If the restore operation fails with Oracle Error Message 1652 - Unable to extend tmp
segment in tablespace tmp - it may be due to a large OPC message log which exceeds
the tmp tablespace capacity during the restore operation.
Use the Database Instance Maintenance wizard to increase the tmp tablespace. The
default size is 300 megabytes. Increase the tablespace in 300-megabyte increments
and retry the restore operation until it runs successfully. Refer to Maintaining the
Oracle Instance on page 137.
6. The HsBAR Output window may open over and hide the progress status
window. There is a check box for specifying that the HsBAR window be closed
automatically when finished, Figure 13.7. This is recommended. If the window is
not closed automatically, wait for the Continue button to appear at the bottom
of the output window, then click Continue when ready.
7. After the HsBAR window is closed, monitor the progress in the Progress Status
window, Figure 13.8. Ignore error messages that indicate an error deleting an aspect.
8. Click Finish when the Click to Exit the Application message is displayed.
9. Start all processes under PAS supervision. For further details, refer to Starting and
Stopping History on page 441.
10. Start the Inform IT History Service Provider. To do this (reference Figure 13.4):
a. Go to the Service structure in the Plant Explorer and select the
Inform IT History Service Provider.
b. Select the Service Provider Definition aspect.
c. Click the Configuration tab.
d. Check the Enabled check box then click Apply.
Figure 13.10: Synchronizing the Aspect Directory with the History Database
2. Click Next to continue. This displays a dialog for mapping controller object locations,
Figure 13.12. It is NOT necessary to make any entries in this dialog unless the
locations have changed since the backup was made.
3. Enter the mapping specifications if necessary (typically NOT REQUIRED), then click
Next to continue. This displays the Progress Status, Figure 13.13.
7. When the execution complete message is displayed, Figure 13.16, click Finish.
In this dialog, Service Group definitions being restored or synchronized are called Imported
Service Groups. The actual Service Groups which are available in the Aspect Directory
are called Current or Available Service Groups. The backup/restore utility automatically
maps Imported Service Groups to an Available Service Group by matching the host
name for each Imported Service Group with the host name for an Available Service
Group. (The host name is specified in the Service Group’s Service Provider configuration.)
If Service Group configurations in the Aspect Directory have changed since the creation
of the History database configuration now being restored or synchronized, some Imported
Service Groups may not be mapped to any Available Service Group. This condition is
illustrated in Figure 13.17 where the Current Service Group column for Imported Service
Group eng38 is blank. If this occurs, use the dialog to manually map the Imported
Service Group to an Available Service Group.
To do this:
1. From the Imported Service Groups list (top left pane), select the Imported Service
Group whose mapping is to be changed.
When this is done, the host names for the imported and currently mapped Service
Providers for the selected Imported Service Group are indicated in the respective
Service Provider lists (bottom right panes).
2. From the Available Service Groups list (top right pane), select the current Service
Group to be mapped to the selected imported Service Group. When this is done,
the host name for the Service Provider for the selected Available Service Group is
indicated in the Current Service Providers list (bottom left pane). This helps when
matching the host name for the Imported Service Group with the host name for the
Available Service Group.
3. When satisfied that the appropriate Available Service Group was selected, click the
left (<<) arrow button. This puts the name of the selected Available Service Group
in the Current Group column for the Imported Service Group.
4. Check the host names for the Imported and Current mapped Service Providers in
the respective Service Provider lists (bottom right panes) to verify that the Imported
and Available Service Groups were mapped as intended.
5. Repeat this procedure to map additional Imported Service Groups to the same
Available Service Group if needed.
To change the Current Service Group for an Imported Service Group, first unmap the
Current Service Group. To do this, select the Imported Service Group, then click the
right (>>) arrow button. This removes the name of the currently mapped Service Group.
At this point the Imported Service Group is not mapped to any Available Service Group.
To map it to a different Available Service Group, repeat the above procedure starting
at step 2.
-o indicates flat files are to be excluded when backing up the History database on the
hard drive. Refer to Excluding Flat Files on Backup on page 440.
-a specifies an alternate location for the temporary and Oracle DB export files used
during History database backup. The default directory for these files is C:\HsData\History\
-k can be used when restoring an Oracle-only database backup to skip the creation of
empty datafiles. If the backup was performed with the datafiles, -k is ignored.
-b instructs hsBAR to extract the Oracle data info file and put it in the specified directory.
This option can only be used on existing database backup files, and -mr has to be
specified.
-e is used on restore only. This option tells hsBAR to ignore the Oracle data info file
found in the backup and use the one specified instead. This includes table information
stored in the stats file (as in versions of History prior to 2.5). The stats information from
the specified Oracle data info file will be used instead.
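As an illustration, these options can be appended to the basic backup command shown
under Storage below. This combined command line is a sketch only, with example paths:
“%HS_HOME%\bin\hsBAR” -m b -h C:\Temp\history -o -a D:\export
This backs up the History database to C:\Temp\history, excludes flat files (-o), and directs
the temporary and Oracle DB export files to D:\export (-a).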
Operation Mode
In order for the hsBAR program to execute, the operation mode must be specified by
entering the -m option, followed by either b (back up the History database) or r (restore
the History database), as shown in the examples under Storage below.
Storage
In addition to specifying the mode (through the -m option), in order for the hsBAR program
to execute, a storage location must also be specified. On an hsBAR backup operation,
the -h option must be used to specify the destination to which the History database is to
be backed up. On an hsBAR restore operation, the -h option must be used to specify
the source from which the History database is to be restored. The drive, directory path,
and file name must all be specified to use as the backup destination or restore source
location, as no default is provided:
For backup “%HS_HOME%\bin\hsBAR” -m b -h C:\Temp\history
For restore “%HS_HOME%\bin\hsBAR” -m r -h C:\Temp\history
In the above example, the backup operation will produce in the C:\Temp directory one
or more zipped archives (depending on the number of directories containing History
database-related files) of the form history-drive.zip (such as history-C.zip or history-D.zip,
depending on the drive containing the database files). The restore operation in the above
example will search in the C:\Temp directory for zipped archives matching the format
history-drive.zip, and will then attempt to restore the files contained within the found
archives.
Log File
All progress messages and errors related to hsBAR are printed to a log file. The default
log file for a backup operation is %HS_LOG%historyBackup.log, while the default log
file for a restore operation is %HS_LOG%historyRestore.log. The hsBAR program,
however, can write messages to an alternate log file specified by entering the -f option,
followed by the full directory path and log file name:
For backup “%HS_HOME%\bin\hsBAR” -m b -h C:\Temp\history -f
C:\Temp\backup.log
For restore “%HS_HOME%\bin\hsBAR” -m r -h C:\Temp\history -f C:\Temp\restore.log
Mount Point
The mount point specifies the location to which the History database is to be restored
on an hsBAR restore operation. The -p option is used to restore flat files (file-based logs)
to the mount point itself, and not necessarily to a location that is relative to the root. If
this option is omitted, all flat files will be restored relative to the root, and not relative to
the mount point itself. The -n option may be used in conjunction with the -p option. This
is used to specify a new mount point. If the -p option is used alone (without the -n option),
the default mount point for the History database C:\HsData\History (%HS_DATA%) will
be used. If the -p and -n options are used together, the old History database will be
deleted from its original location and restored to the new specified mount point. This
option is especially useful in that it can take a History database that was backed up on
several different drives and restore it all to a centralized location on one drive:
“%HS_HOME%\bin\hsBAR” -m r -h C:\Temp\history -p -n C:\Temp
Backup was Made on Information Management 4.n or Earlier. The -b and -e options
are used to move hsBAR files to a machine that has a different data drive layout than
the machine where the hsBAR backup was created. For example, if the source machine
has Oracle data files on drives C, D, E and F, and the files are being restored to a machine
with drives C, E, F and G, the restore will fail when hsBAR attempts to restore files to
the D drive.
In order for this restore to succeed, the files that were originally located on the D drive
must be restored to the G drive on the new machine. To do this:
1. Run hsBAR with the following options to extract the datafiles_config information:
hsBAR -mr -h c:\myDB -b C:\
where myDB is the file name that was entered when the backup was made and -b
is the location to extract the datafiles_config file to.
2. Edit the datafiles_config file. An example is shown in Figure 13.18.
3. Run hsBAR with the following options to use the modified datafiles_config file:
hsBAR -mr -h c:\myDB-C.zip -e C:\datafiles_config
If a file extension is added when editing the datafiles_config file (for example,
datafiles_config.txt), that extension must be included in the specification for the -e
option in hsBAR.
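For example, if the edited file was saved as datafiles_config.txt, the step 3 command
becomes (a sketch, with paths as in the steps above):
hsBAR -mr -h c:\myDB-C.zip -e C:\datafiles_config.txt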
Backup was Made on Information Management 5.0 or Newer. The -b and -e options
are used to move hsBAR files to a machine that has a different data drive layout than
the machine where the hsBAR backup was created. For example, if the source machine
has Oracle data files on drives C and E, and the files are being restored to a machine
with drives C and D, the restore will fail when hsBAR attempts to restore files to the D
drive.
In order for this restore to succeed, the files that were originally located on the E drive
must be restored to the D drive on the new machine. To do this:
1. Run hsBAR with the following options to extract the instance_config information:
hsBAR -mr -h c:\myDB -b C:\
where myDB is the file name that was entered when the backup was made and -b
is the location to extract the instance_config.txt file to.
2. Edit the instance_config file. An example is shown in Figure 13.19.
3. Run hsBAR with the following options to use the modified instance_config file:
hsBAR -mr -h c:\myDB-C.zip -e C:\instance_config
13.8 Starting and Stopping History
2. To stop all processes under PAS supervision, click Stop All. PAS is still active when
all processes are stopped. Click Start All to start all processes. The Restart All
button stops and then restarts all processes.
3. Click Close to exit PAS when finished.
13.9 Schedule History Backups
13.10 Starting and Stopping Data Collection
To activate an individual log within a property log, select the log in the hierarchy, click
the IM Definition tab, and then click Activate under the IM Historian Log State control,
Figure 13.22.
To activate or deactivate property logs on a log set basis, refer to Activating Logs with
Log Sets on page 444.
13.11 Viewing Log Runtime Status and Configuration
Figure 13.23: Accessing the Configuration View for the Log Set Aspect
3. From the History service provider, navigate to and select the InformIT History
Object.
4. Select the Inform IT History Log List aspect.
Figure 13.24: Accessing the Configuration View for the Log List Aspect
Click Update Log Lists to display the log list (Figure 13.25). This list provides both
system-level and log-level information. The System Information section indicates the
total number of logs configured for the applicable Service Group. It also indicates the
number of logs whose state is Active, Inactive, or Pending.
The log-level information is displayed as columns in the log list. Columns can be shown
or hidden, and a filter can be applied to show a specific class of logs. The list can be
filtered based on log state, log type, log name, archive group, and log set. These
procedures are described in Filtering the Log List and Visible Columns on page 447.
The context menu is used to activate or deactivate logs in the list. This is described in
Activating/Deactivating Logs on page 449.
Double-clicking a specific Log Information row opens the Log Configuration aspect
containing the log as an overlap window.
Showing/Hiding Columns
To show or hide a specific column, check or uncheck the corresponding column.
Filter Example
The following example shows how to apply a filter. In this case the filter is based on log
name. The filter reduces the list to only those logs whose name includes the text string
*30s*, Figure 13.26. The filter result is shown in Figure 13.27.
After executing an Activate All or Deactivate All from the Inform IT History Log List,
the State does not get updated. Use the Update Log Lists button to refresh the list.
13.12 Presentation and Status Functions
13.12.1 Presentation
This tab, Figure 13.29, is used to configure presentation attributes for the 800xA operator
trend display.
The default settings for these attributes are set in one of the following ways:
• Object Type (i.e. in the Property Aspect).
• Object Instance (i.e in the Property Aspect).
• Log (in the Log Configuration Aspect).
• Trend Display.
These are listed in order from lowest to highest precedence.
For example, a value set in the Object Type can be overridden by writing a value in the
Object Instance. A value set in the Object Type, Object Instance, or the Log can be
overridden in the Trend Display.
To override a presentation attribute the check box for the attribute must be marked. If
the check box is unmarked the default value is displayed. The default value can be a
property name or a value, depending on what is specified in the Control Connection
Aspect.
Engineering Units are inherited from the source; it is possible to override them. Normal
Maximum and Normal Minimum are scaling factors used for scaling the Y-axis in the
Trend Display. The values are retrieved from the source but can be overridden. The
Number of Decimals is retrieved from the source but can be overridden.
The Display Mode can be either Interpolated or Stepped. Interpolated is appropriate
for real values; Stepped is for binary values.
13.12.2 Status
This tab is used to retrieve and view data for a selected log, Figure 13.30. As an option,
use the Display and Settings buttons to set viewing options. Refer to:
• Data Retrieval Settings on page 451.
• Display Options on page 452.
Click Read to retrieve the log data. The arrow buttons go to the next or previous page.
Page size is configurable via the Data Retrieval Settings. The default is 1000 points per
page.
Set the number of points per page in the Page Size area. Default is 1000 and maximum
is 10,000.
Set the time range for retrieval of data in the Time Frame area.
• Latest retrieves one page worth of the most recent entries.
• Oldest retrieves one page worth of the oldest entries.
• At time is used to specify the time interval for which entries will be retrieved. The
data is retrieved from the time before the specified date and time.
Display Options
The Display button displays the Display Options dialog, which is used to change the
presentation parameters for data and time stamps, Figure 13.32.
Values are displayed as text by default. Unchecking the check box in the Quality area
displays the values as hexadecimals.
Timestamps are displayed in local time by default. Unchecking the first check box in the
Time & Date area changes the timestamps to UTC time.
Timestamps are displayed in millisecond resolution by default. Unchecking the second
check box in the Time & Date area changes the timestamp resolution to one second.
13.13 History Control Aspect
Figure 13.34: Accessing the Configuration View for the History Admin Aspect
Use the View selection to get information on: Hard Drive Usage, Database Usage, and
History Logs for a particular node. The configuration view allows selection of History
nodes from an available list. Additionally, the tree view has selections for OMF Network
Status, PAS Control, and User Tag Management status.
13.13.2 Synchronization
For object and log name synchronization, use the Resynchronize Names button. To
force synchronization of every object on the selected node with the Aspect Server, use
the Force Synchronization button. The Check Names button produces a report of the
object names, log names, and object locations in the aspect directory that are different
or not in the Oracle database. The Check Synchronization button produces a report
of all the items in the aspect directory that are different or are not in the Oracle database.
13.14 Database Maintenance Functions
The list of available functions depends on whether the menu is invoked while History is
online or offline. The online and offline menus are described in:
• Online Database Maintenance Functions on page 458.
• Offline Database Maintenance Functions on page 459.
Choose option 1 - Create free space report now. This creates the Free Space Report.
An example is shown in Figure 13.39.
Also, view a free space report on the Tablespace/Datafiles tab of the Instance
Maintenance wizard. Refer to Maintaining the Oracle Instance on page 137.
• The OPC Message log uses the INFORM_HS_RUNTIME and HS_INDEX tablespaces.
• Numeric logs use the HS_CONFIG and HS_ICONFIG tablespaces.
• PDL applications use the HS_PDL and HS_IPDL tablespaces.
• Consolidation of OPC and PDL messages, and PDL data, uses the
INFORM_HS_RUNTIME and HS_INDEX tablespaces. If they run out of space, OPC
and PDL message consolidation will fail. If the HS_PDL and HS_IPDL (index)
tablespaces run out of space, PDL data consolidation will fail.
This function may take a couple of hours to complete for a large database, and generates
a high CPU load.
where filename is the name of the file to which the report output will be directed,
or enter:
hsDBMaint -report > report.txt 2> error.txt
This creates two text files in the location from which the report was run.
The file is given a .txt extension for viewing in a text editor. The entry table provides
statistics to verify whether or not numeric logs are functioning properly. It can also be
used to look up the Log ID for a specific log name. Log IDs are used in SQL queries
against historical tables.
Rows marked with an asterisk (*) indicate logs with errors. A summary is provided at the
end of the report. This information is useful for level-4 (development) support.
This menu shows the current settings for log ID to process and lines per page. The
current settings can be changed before running the report.
If any of the settings need to be changed, choose the corresponding option from this
menu and follow the dialog. When finished with the dialog, the Entry Tables Report menu
is shown and the current settings will be changed according to the dialog. When the
settings are correct, at the menu prompt, choose the Create entry table report now
option. This generates the report in the Command Prompt window, Figure 13.41. A short
summary is provided at the end of the report, Figure 13.42.
Adjust the height and width of the Command Prompt window to display the report
properly.
If these settings are acceptable, simply select option 4 - Extend tablespace with current
settings. If one or more of these settings need to be changed, then follow the applicable
procedure below.
Enter the number corresponding to the tablespace to be extended (for example 3 for
HS_REPORTS). The Tablespace Extension menu is returned with the new tablespace
indicated for option 1.
Specify the Amount of Tablespace
The default amount is 50 MB. To change this, select option 2 - Change size to extend
by. This displays a prompt to enter the size to extend the tablespace by, in MB. Enter
the size as shown in Figure 13.45.
The Tablespace Extension menu is returned with the new value indicated for option 2.
Change the Directory
The default directory is c:\HsData\History\oracle. The Oracle Instance Wizard requires
the default directories and will not work if the directories are changed.
To specify a different directory for the extended tablespace data, select option 3 - Change
directory to put data file. This displays a prompt to enter the new data file directory.
Enter the directory. The Tablespace Extension menu is displayed with the new directory
indicated for option 3.
Apply the Changes
At this point, the criteria for extending tablespace should be correct, and the tablespace
can actually be extended.
To extend tablespace with the current settings for size, directory, and tablespace, select
option 4 - Extend tablespace with current settings.
This extends the specified tablespace by the specified amount, and then returns the
Tablespace Extension menu.
Repeat the above procedures to extend another tablespace. When finished, return to
the Main Menu by selecting option 0 - Return to main menu.
Use the Instance Maintenance Wizard and click the Oracle UNDO Rollback link to
change the Extend By or Max Size of the UNDO_1.DBF tablespace. If the current size
has reached the maximum size, then increase the maximum size. The recommended
guideline is to double the current size.
The temp or rollback size cannot be decreased. Therefore, take care to increase the
size by a reasonable amount that will provide the space needed. The only way to
revert back to a smaller size is to make a backup of the database, drop and recreate
the Oracle instance, and then restore the database.
This list shows one default directory: c:\HsData\History\HSFF. File-based logs are always
stored in subdirectories under a directory named HSFF. This is why the HSFF directory
has a file quota of 0 (each log constitutes one file). These subdirectories are automatically
created under the HSFF directory as needed.
The subdirectory names are three-digit sequential numbers starting with 000, for instance
c:\HsData\History\HSFF\000. Each subdirectory stores up to 1000 files. When the number
of files exceeds 1000, another subdirectory (\001) is created to store the excess.
Subdirectories are added as required until the quota on that disk is reached. New logs
create files on the next disk (if space has been allocated). Disks are used in the order
they are added.
In the example above, the default location has three subdirectories. A total of 2,437 files
are stored in these directories. Directories \000 and \001 hold 1000 files each. Directory
\002 holds 437 files.
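A sketch of the resulting layout for this example, using the default location named above:
c:\HsData\History\HSFF\000 (1000 files)
c:\HsData\History\HSFF\001 (1000 files)
c:\HsData\History\HSFF\002 (437 files)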
This menu is used to add a directory, change the quota for a directory, delete a directory,
or list all directories. To return to the main menu, choose option 0 - Return to Main
Menu.
For guidelines on using one of these functions, refer to:
• Add a Directory on page 470.
• Update Directory List on page 470.
• Delete From Directory List on page 470.
• List the Directory on page 470.
Add a Directory
To add a directory, choose option 1 - Add to Directory List. This initiates a dialog for
configuring a new directory. Specify a quota and path for the new directory. Follow the
guidelines provided by the dialog. Make sure the disk where the directory is being added
has enough space for the directory. A directory with the name HSFF will be added under
the path entered. For example, if the path is \disk3, a new directory will be created as
follows: \disk3\HSFF
This menu shows the current settings for logs to process, and attributes to reset. The
current settings can be changed before executing the reset operation. To change any
of the settings, choose the corresponding option from this menu and follow the dialog.
For log name to process, choose:
• All logs.
• All logs in a specific composite log hierarchy.
• Just one specific log.
For attributes to reset, choose:
• Both attrsUpdated and userEntered bits.
• Just attrsUpdated.
• Just userEntered.
When the settings are correct, choose the Reset specified log(s) with current settings
now option.
To return to the database maintenance menu, choose the Return to Main Menu option.
To do this:
1. From the hsDBMaint main menu, choose the Cascade Attributes option. This
displays the Cascade Attributes Log Menu, Figure 13.49.
2. Specify the log to pass attributes to. The default is for all logs. If this is acceptable,
skip this step and go directly to step 3. To specify a specific log, choose option 1 -
Change log ID to process. This displays the prompt: Do you want to do all
composite logs? [yn]
Choose n. This displays the following prompt:
Enter the log ID of any log in the composite hierarchy>
Enter the log name. The Cascade Attributes Log Menu is returned.
3. To cascade attributes with the current settings, choose option 2 - Cascade
attributes with current settings now. This passes the applicable attributes from
the data source to the specified log, and then returns the Cascade Attributes Log
Menu.
Stagger
Stagger is typically applied to a number of logs with the same alignment, storage interval,
and sample interval, and where sample and storage interval are equal. Stagger assigns
a blocking rate greater than the storage interval. The first Activation Time is modified to
distribute the load. For example, consider an application with 120 logs with the following
configuration:
sample & storage interval = 30 seconds
alignment = 12:00:00
Blocking Rate = 150s
Without staggering, 120 messages with 5 entries each will be sent to storage every 2.5
minutes (150s). This will cause a spike in CPU usage. To distribute the load more evenly,
use the Stagger Collection function to apply a blocking rate of 10m, and specify that 1/10
of the logs (12 messages with 20 entries each) be sent to storage every minute.
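To check the arithmetic in this example: at a 10-minute (600-second) blocking rate with
a 30-second storage interval, each message carries 600/30 = 20 entries, and staggering
into 10 one-minute divisions means 120/10 = 12 logs are flushed in each slot.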
Phasing
Phasing is typically applied to a number of logs with the same alignment, storage interval,
and sample interval, and the storage interval is significantly greater than the sample
interval. For example, consider an application with 120 logs sampled at a 1-minute rate
and an average calculated on a daily basis.
In this case, the 150s blocking rate would not prevent a CPU load spike at 00:00:00
(midnight). Phasing allows a set of logs to have slightly different blocking rates to prevent
all logs from processing at the same time. For example, for a blocking rate of 30 minutes,
phasing can be implemented by sub-dividing the 120 logs into groups and assigning
different blocking rates to each group as follows:
40 logs w/ Blocking Rate = 29m
40 logs w/ Blocking Rate = 30m
40 logs w/ Blocking Rate = 31m
In this case a spike will only occur once every 26,970 minutes (29 x 30 x 31; the three
rates are pairwise coprime, so they only align after their full product).
The summary categorizes the logs by sample rate and storage rate. Each line of this
summary provides the following information for a category of logs:
The prompt following the summary is used to either exit this function (by pressing n), or
continue with the Stagger function (by pressing y).
If continuing with the Stagger function, recommendations to improve performance for
logs in the first category are displayed, Figure 13.51.
Other Considerations
It is generally accepted that Information Management requests should be kept in the
range of 500 to 1,000 requests per minute. If the rate exceeds 1,000 requests per minute,
the performance of the Connectivity Servers can suffer. Because most 800xA sources
are exception driven, a one-second log typically does not have 60 values per minute. If
the rate is over 1,000 per minute, use the custom selection to increase the blocking
rates, decreasing the average OPC/HDA request rate to the Connectivity Servers.
If the values are changed, instructions for entering a new blocking rate are displayed,
Figure 13.52.
Enter a new blocking rate (without units) at the prompt. The range is 0 to 4095. For
example, enter 12.
Enter value( 0-4095 ): 12
When the prompt for units is displayed, enter the units by entering the appropriate integer
code: 3 = seconds, 4 = minutes, 5 = hours, 6 = days, 7 = weeks. In the example below,
the integer code for minutes (4) is selected.
Enter unit(3 for omfSECONDS etc.):4
When the prompt for stagger is displayed, Figure 13.53, enter an integer value within
the specified range. The range is based on the blocking rate specified.
The stagger value specifies the number of divisions within the specified blocking rate
that will be used to distribute the load. After entering the stagger value, a summary of
the changes made is displayed, Figure 13.54.
The prompt following this summary can be used to either accept these values (by pressing
n), or change these values (by pressing y). In this case, the specified blocking rate is 12
minutes. The selected stagger divides each blocking interval into 24 30-second time
slots. Thus, at every 30-second time slot, a fraction (1/24) of the logs is sent to storage,
distributing the load across the blocking interval.
This procedure must be repeated for each category. When completed for each category,
a new summary will be displayed with a prompt to choose whether or not the changes
made should be applied, Figure 13.55.
Dropping an Index
To drop an index:
1. From the Table Index Maintenance menu, choose Action to perform (1), and then
enter d to select the drop action.
2. Choose Current index name (2), then enter the name of the index to drop.
3. Choose Drop the index (7) to drop the selected index.
Creating an Index
To create an index:
1. From the Table Index Maintenance menu, choose Action to perform (1), and then
enter c to select the create action.
2. Choose Current index name (2), and then enter the name for the new index.
3. Choose Name of table index belongs to (3), then enter the name of the table.
4. Choose Name of column index will be based on (4), then enter the column names.
Separate each name with a comma, and DO NOT enter any white space between
names. Press <Enter> to display a list of column names.
5. Choose Name of tablespace to create index in (5), then enter the tablespace
name. Press <Enter> to display a list of tablespace names.
6. Choose Size of Initial and Next extent(s) (6), then enter the size in kilobytes.
7. Choose Create the index... (7) to create the new index based on the entries made
for steps 3-6.
Creating indexes requires additional disk space. Be sure to extend the PDL tablespace
or PDL Index tablespace (if it exists) before creating a new index. Also, indexes make
retrieval faster, but slow down storage and deletion operations. This should be
considered when adding indexes to tables.
The History database configuration and all Oracle-based Historical data will be lost
when the database instance is dropped.
If dropping a database after History has been running (database has live data), be
sure to archive any data that needs to be saved.
This menu shows the current settings for the type of database to create/drop, and the
operation to perform (create or drop). The current settings can be changed before
executing the create or drop operation.
To change any of the settings, choose the corresponding option from this menu and
follow the dialog. For type of database choose History or PDL.
For operation to perform, choose Create or Drop.
Creating History will automatically create PDL if the option is installed.
When finished with the dialog, the Create/Drop Product Database menu will return and
the current settings will be changed according to the dialog. When the settings are correct,
choose the Create/drop product database with current settings now option. When
the operation is finished, the Main Menu will be displayed again.
To return to the database maintenance main menu, choose the Return to Main Menu
option (0).
Choose option 2 - Clean History database now. This cleans up the database, and then
returns the Clean History Database menu. To return to the Main Menu, choose option
0 - Return to main menu.
After clean is executed, it is recommended that checkDB be executed to verify no
errors remain. If errors remain, clean can be repeated until the database is clean.
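A sketch of this check-and-clean cycle, run from a Command Prompt using the hsDBMaint
functions named below:
hsDBMaint -checkDB
hsDBMaint -clean
hsDBMaint -checkDB
Repeat the clean/check pair until checkDB reports no remaining errors.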
In some instances, problems with the aspect configurations may be contributing to
the problems reported by hsDBMaint -checkDB. In cases where two to three
hsDBMaint -clean/hsDBMaint -checkDB executions still report errors, reference the
troubleshooting guide or contact Technical Support.
After specifying whether or not to purge, a prompt for a time to compare data time stamps
with is displayed, Figure 13.61. The default is the current time.
When the function is finished, a table is displayed that indicates the number of logs
checked, and the number of logs found to have future data, Figure 13.62. When History
detects a time change of greater than 75 minutes (this is configurable via the environment
variable HS_MAX_TIME_CHANGE), History will log an operator message stating that
fact.
13.15 History Resource Considerations
CPU
CPU performance is related to capacity for History collection and storage functions. This
can be adjusted by configuring sample rates, deadbands, and blocking rates.
Disk Space
Disk space resources are used by the History filesets that are loaded on disk when
History is installed, by Oracle tablespace for logs and archive database entries, and by
file-based logs. Disk space allocation for Oracle and file-based logs is calculated and
set by the Database Instance Configuration wizard based on the selections made in the
wizard at post-installation, or later if required; refer to Maintaining the Oracle Instance
on page 137.
• The disk space required for the History filesets is fixed at 30 MB.
• Oracle tablespace is used by message logs, report logs, PDLs, asynchronous
property logs, synchronous property logs with Oracle storage type, log configuration
data, and archive database entries. Extend Oracle tablespaces for runtime based
on the per/entry requirements in Considerations for Oracle or File-based Storage
on page 188.
• File space for file-based property logs is automatically allocated. Additional disk
space can be allocated. Memory requirements per file are described in
Considerations for Oracle or File-based Storage on page 188.
The memory in use for a given server should always be less than the memory installed.
If a system has 700 MB allocated for Oracle shared memory, has 3 GB of memory
installed, and the current memory usage is 2.5 GB, it would be safe to add 500 MB or
less to the Oracle shared memory. If the value is changed to a higher value and the
memory in use is consistently higher than the installed memory, then the value should
be reduced to keep the memory in use equal to or less than the memory installed.
1. Click Stop All.
2. Click Close when the dialog indicates that all processes are stopped.
a. Open a Windows command prompt by clicking:
Start > Run
b. Type cmd in the Run dialog and click OK.
Type the following exactly as shown in bold text (command line responses are
shown in normal text):
C:\> sqlplus /nolog
SQL*Plus: Release 9.2.0.8.0 - Production on Wed Jun 17 10:37:22 2009
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
SQL> startup
Database mounted.
Database opened.
------------------------------------------------------
db_block_buffers 8192
SQL> startup
Database mounted.
Database opened.
c. SQL> exit
Disconnected from Oracle9i Enterprise Edition Release 9.2.0.8.0
- Production
With the OLAP and Oracle Data Mining options
JServer Release 9.2.0.8.0 - Production
C:\Users\800xaInstall> sqlplus /
SQL*Plus: Release 11.1.0.7.0 - Production on Tue Jun 8 16:41:58 2010
Copyright (c) 1982, 2008, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Release 11.1.0.7.0 - Production
SQL> show parameter memory
NAME TYPE VALUE
------------------------------------------------------
hi_shared_memory_address integer
memory_max_target big integer 532M
memory_target big integer 532M
shared_memory_address integer 0
SQL> alter system set memory_max_target=600M scope=spfile;
System altered.
SQL> alter system set memory_target=600M scope=spfile;
System altered.
SQL> exit
Then restart the PC to have the new values take effect.
The maximum value for these parameters is 2560M (2.5 GB). This sets the Oracle
memory use to 2.5 GB for a given PC. This maximum setting should only be used
on systems with 4 GB of memory installed.
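To confirm the new values after the restart, the same query used above can be repeated
(a sketch; the output will reflect the new targets):
C:\> sqlplus /
SQL> show parameter memory
SQL> exit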
d. Run the Process Administration Service (PAS) utility to start all processes under
PAS supervision.
i. From the Windows Taskbar select:
Start > Programs > Administrative Tools > PAS > Process Administration
The Process Administration Service (PAS) utility can also be opened by typing pasgui
in the Run dialog (Start > Run) and clicking OK.
13.16 Environment Variables
Start Active logs are logs whose Log Start-up state is configured to be Active.
13.17 Linking Information Management History with the 800xA System Trend Function
13.18 Integrating System Message and History Servers
6. If one or more of these required items are not in the Supported Links list, select the
item from the Available Links list and click the right arrow button. This puts the item
in the supported links list.
7. Click Apply.
Figure 13.64: Checking IM History Links for Basic History Service Group
14 Open Data Access
This section provides instructions for setting up real-time and historical data access for
third-party applications via the Open Data Access (ODA) Server. This server supports
client applications that use an ODBC data source, for example Crystal Reports or
Microsoft Excel (without the DataDirect add-in).
Virtual database tables in the ODA Server map 800xA system data types to ODBC data
types. When the client application submits an SQL query against a virtual table, the ODA
Server parses the query and returns the requested data to the client.
There is one predefined table named numericlog to support access to historical process
data.
For real-time data access, configure custom tables to expose the aspect object properties
that need to be accessed. These tables are then combined to form one or more virtual
real-time databases. The custom-built real-time databases support read and write access.
Custom views can be used to impose restrictions on data access for certain users.
One predefined table named generic_da is provided for real-time data access. This is
a read-only table that exposes all properties for all real-time objects.
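As an illustration, a client connected to the ODA data source can query these predefined
tables with plain SQL. A minimal sketch (no columns are named here, since the numericlog
and generic_da column sets are defined by the ODA Server):
select * from generic_da
select * from numericlog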
Client applications which access data via ODA may run locally on the Information
Management server where the ODA Server is installed, or on a remote computer client.
Remote computer clients require the Information Management Client Toolkit. To install
this software, refer to the Information Management section of System 800xA Manual
Installation (3BSE034678*).
The architecture for Open Data Access is illustrated in Figure 14.1, and described in
ODA Architecture on page 503. When ready to begin configuring ODA access, refer to
What to Do on page 505 for guidelines.
Figure 14.1: Historical and Real-time Data Access Via ODA Server
14.1 ODA Architecture
The contents and operating parameters for the ODA database are specified in a
configuration file that was set up using the ODBC Data Source Administrator. This file
is used by the ODBC data source. This configuration file specifies:
• Which user-configured real-time database to use for real-time data access. Clients
connect to one database at a time.
• Which OPC HDA server to use for historical data access: the 800xA OPC HDA
server, or the History Server (IM) OPC HDA server.
The 800xA OPC HDA server is the default, and is recommended because it supports
seamless access to both operator trend logs and history logs. This server also
supports the ability to access log attributes.
The IM OPC HDA server can only access history logs. This server does not support
access to log attributes. It is provided primarily to support earlier data access
applications configured to use this server.
• For remote client connections, this file also specifies the server’s IP address.
Using ODBC
It is generally easier to use ODBC, which is set up by default to use the predefined ODA
database, DATABASE1.
Client applications that use VBScripting for data access must use the
ABBOpenDataAccess OLE DB Provider. Each time the client application is used with
OLE DB, connect the OLE DB provider to the ODA database.
14.2 What to Do
ODA can be used with the default setup (DATABASE1). This supports access via the
predefined numericlog and generic_da tables. To use the functionality supported by the
custom-built real-time database tables (read/write access, and restricting access on a
per-user basis), use the following procedures to configure ODA to meet the data access
requirements:
• Configure the virtual database tables for real-time data access. Then combine these
tables to form one or more real-time databases. This is described in Configuring
Real-time Data Access Via the ODA Server on page 505.
• Set up an ODA database per the data access requirements and specify:
– Which user-configured real-time database the client will be able to access. Only
one can be connected at one time.
– Whether to use the 800xA OPC HDA server, or IM OPC HDA server for
accessing history data.
– For remote client connections, specify the server’s IP address.
As an option, create additional ODA databases to provide different views to
real-time data. Since a client application can only connect to one ODA database
at a time, only one real-time database view is available at a time. This set up
is performed via the ODBC Data Source Administrator as described in Setting
Up the ODA Database on page 527.
• As an option, log status information related to numericlog queries. Refer to Logging
Status Information for Numericlog Queries on page 535.
14.3 Configuring Real-time Data Access Via the ODA Server
ODA Table Definition aspects may be added to object types in the Object Type structure,
or to instantiated objects in any other structure. Some object types, for example the AC
800M Control Program, do not permit ODA Table Definition aspects to be added. To
expose object properties under these circumstances, be sure to add the ODA Table
Definition aspects to the instantiated objects.
When adding ODA Table Definition aspects to an object type, each aspect represents
one table through which data from all objects of the containing type will be made
accessible. The table may be configured to also expose properties for children of the
object type (as defined by a formal instance list). The table will contain a row for each
instantiated object created from the object type. An example is shown in Figure 14.3.
Adding ODA Table Definition aspects directly to instantiated objects is used to expose
object properties that are unique to a specific instance. When adding the aspect to an
object instance, the resulting table will contain a single row, and only those properties
that belong to that object may be exposed, Figure 14.4.
If properties for children of an object instance need to be exposed, add an ODA Table
Definition aspect to those children.
Any number of tables can be defined for the same object or object type to be used in
the same or different databases. Once the ODA table definitions are created, assign the
table definition aspects to one or more Database objects to create real-time databases.
This is done in the Library structure. Each database will consist of one or more tables
defined by the ODA table definition aspects described above. Tables may be reused in
more than one database.
The relationship between database objects and table definitions is illustrated in Figure
14.5. Two databases are defined. DB1 exposes properties for three object types. DB2
exposes one object type, and also exposes an instantiated object.
The scope of each database is restricted to a specified object root. Typically the object
root is specified at or near the top of a structure, for example Control structure, in the
Plant Explorer. When adding table definitions in the database for instantiated objects,
the instantiated objects must reside under the object root defined for the database. When
adding table definitions that point to an object type, those table definitions will provide
access to objects of that type that are instantiated under the object root. To expose
objects in more than one structure, configure a dedicated database for each structure.
For example, since DB2 includes a table definition for an object which is instantiated in
the Control structure, its object root must be defined within the Control structure (probably
at the top). In that case the AI table definition in DB2 will be limited to AI objects in the
Control structure. Assuming the object root for DB1 is specified at the top of the Location
structure, the tables in DB1 will be limited to PID, DI, and AI objects instantiated in the
Location structure.
Multiple databases can be configured to expose different sets of properties for different
users. Client applications can only connect to one real-time database at a time.
The database objects must be inserted under the Databases group in the Library structure.
Configure as many database objects as required to provide different views of the real-time
database.
Figure 14.5: Object Structure for Databases and Database Table Definitions
Use either the default aspect name, or enter a different name. The table represented
by this aspect will be named from within the Table Definition aspect as described
later. Specify a unique Aspect Description to help identify the table. This is particularly
important when creating multiple tables with the same table name.
For example, in Figure 14.6 the Aspect Description is specified as Engineer’s View.
This description will be used as the Table Description when adding tables to the
database. If the aspect has no description, the object type description (if any) will
be used. Change the Aspect Description later as described in Using the Description
Property on page 519.
3. Click Create when finished. This adds the aspect in the object type’s aspect list.
4. To show the configuration view for this aspect, select the object type and then click
on the aspect, Figure 14.7.
Table Name
Specify the table name as shown in Figure 14.8. This name is referenced in the ODA
Database Definition when adding tables to the database (Creating a Database Object
on page 521). This is also the name used in SQL queries. More than one table with the
same name can be created; however, table names must be unique within the database
where they are used.
• If the same table name for more than one table needs to be used, use the aspect’s
Description property to differentiate tables. If a unique Aspect Description is not
already specified, do it later as described in Using the Description Property on
page 519.
• If table names have leading numeric characters (for example, 01Motor) or are
completely numeric (for example, 01), those names must be enclosed in double
quotation marks when used in queries, as shown in the sketch after this list.
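For example (a minimal sketch; both table names are hypothetical), the numeric name
must be quoted while an ordinary name need not be:

SELECT * FROM "01Motor"
SELECT * FROM motor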
• Child Object Properties - For object types, the properties of child objects are
grouped on an object-by-object basis, in a manner similar to the aspect properties.
This is not applicable for object instances. For example, in Figure 14.9, the
ProcessDataSim object has two child objects: RealSim and upDown. Each of these
objects has aspects and special properties.
The Aspect column in the property list indicates to which group each property belongs,
Figure 14.10. For aspect properties, the aspect name is shown. Object attributes and
name categories are indicated by these respective text strings in the Aspect column:
(Object Attributes) or (Name Categories). For properties of child objects, the object
relative name is indicated in the Aspect column, in the form: .relName:aspectName.
To remove a property group, click the Filter View button in the ODA Table Definition
aspect, Figure 14.12. This displays the Property Filter dialog.
Property groups that are visible in the list are marked with a check. To remove a group,
clear the corresponding check box; as this is done, the applicable properties are removed
from the property list. To restore a removed group and its properties, reselect the check
box. Figure 14.12 shows the following groups selected:
• The RealSim object (child of the ProcessDataSim object).
• The Real PCA and Relative Name Aspects of the RealSim object.
• The Object Attributes for the RealSim object.
A group that already has properties selected cannot be removed. To remove such a
group, unselect the properties first.
Unchecking groups whose properties will not be used significantly reduces the property
list. This makes it easier to select the properties to be included in the table. DO NOT
click the Hide Unused button at this point. This hides unselected properties. Since no
properties have been selected yet, this will hide all properties.
To proceed, click Exit to close the Filter dialog. Then follow the guidelines for selecting
properties in Selecting Properties on page 515.
Selecting Properties
To select a property, click the far left column. Selected properties are indicated by black
dots, Figure 14.13. Clicking on a selected property will un-select it.
Remember, it is recommended that the NAME object attribute be selected as an
efficient means of referencing the object in data queries. Do not confuse this NAME
attribute with other name properties in the property list.
When finished selecting properties, simplify the view by showing just selected properties.
To do this, right click on the list and choose Show Selected from the context menu,
Figure 14.14. The result is shown in Figure 14.15.
To bring back the hidden unselected properties, click Show All. If any mistakes are
made selecting the wrong properties, start over by clicking Deselect All. To select all
properties, click Select All. The Deselect All and Select All functions operate on the
properties that are currently visible on the grid. Hidden properties will not be selected or
deselected.
These columns are described in Table 14.1. Refer to Navigation and Configuration Tips
on page 518 for guidelines on navigating the property grid.
• The text string in the Name column in this grid will be used as the column name
in the database table. The default is to use the Property name. This may result
in duplicate column names, which is NOT permitted.
For example, in Figure 14.15, two properties named NAME are selected. When
this occurs, change the duplicate names so that each column name is unique.
Figure 14.16 shows the column name changed for one of the NAME properties.
In this case, it is recommended that the Object Attribute NAME keep the default
name.
• If the column name includes a dot (.) character, the column name text string must
be delimited with double quotation marks when used in a query, for example:
SELECT “object1.value” FROM ai
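Building on the example above (a hedged sketch: the table name ai, the column
"RealSim.VALUE", and the object name TIC101 are hypothetical), a single query can
combine the Object Attribute NAME with a dotted child-object column, quoting only the
latter:

SELECT NAME, "RealSim.VALUE" FROM ai WHERE NAME = 'TIC101'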
3. Click Apply. The description will be indicated in the aspect list, Figure 14.18.
To show the configuration view for the database, click on the ODA Database Definition
aspect, Figure 14.20.
The bottom portion of this dialog has a check box and buttons to adjust the list of
tables, Figure 14.22. The Exclude instance tables check box is used to limit the
list to tables defined in the Object Type structure. The Search Under Root button
is used to limit the list to instance tables that exist under the root object specified in
the Object Root field on the ODA Database Definition aspect.
2. Select a table, then click Add Table. The added table is listed immediately in the
Database Definition view, Figure 14.23. The dialog remains open so more tables
can be added.
Tables that were created for object instances rather than object types are indicated
by the (Instance) label in the Object Type column.
The scope of the database is determined by the Object Root specification. Only objects
under the specified object root will be accessible via this database. The object root
defaults to the Control structure. A different structure can be specified if necessary.
Further, the scope can be made more restrictive by selecting an object within a structure,
for example, to restrict the database scope to a specific controller within the Control
structure. To change the object root specification:
1. Click the Browse button for the Object Root field, Figure 14.24. This displays a
browser similar to the Plant Explorer.
2. Use this browser to select the structure (and folder if necessary) where the objects
that will populate this table reside, Figure 14.25.
If all tables having duplicate names are required in the database, rename the tables so
that each has a unique name. Do this via their respective Database Table Definition
aspects as described in Creating ODA Table Definitions on page 508.
If one or more tables are not required, select those tables and then click Delete Tables.
If a mistake is made, cancel the delete command by clicking Cancel BEFORE applying
the change. This restores all deleted tables.
14.4 Setting Up the ODA Database
Save the file, and reboot the machine. If further databases are required, repeat the same
procedure, incrementing the DataSource_x number with each new database.
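As a purely hypothetical sketch of such an edit (the file layout and key names are
assumptions for illustration; only the DataSource_x numbering is taken from this manual),
a second database could be declared by copying the first definition and incrementing
the suffix:

[DataSource_1]              ; existing definition
DataSourceName=DATABASE1    ; key name is illustrative only

[DataSource_2]              ; added definition - suffix incremented
DataSourceName=DATABASE2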
1. Go to the System DSN tab on the ODBC Data Source Administrator dialog and
select the data source named ABBODA, Figure 14.28.
2. Click Configure. This displays the ODBC Data Source Setup dialog. The Database
name defaults to ABBODA. Typically, the default is used unless additional databases
are configured. The ODBC driver comes in both x86 and x64 versions, so use the
version of the administration tool that matches the driver in order to modify existing
entries. The tool shows all configured entries, but only allows modification of entries
that match the architecture of the utility.
4. Select the Service Data Source. The data source DATABASE1, which is created
by default, is preselected. Additional data sources can be configured as described
in Setting Up the ODA Database on page 527.
To get a list of databases, including the default database, click the selection box.
Figure 14.30 shows an example listing the default and other configured databases.
14.5 Logging Status Information for Numericlog Queries
This log does not record status information for queries against the real-time database
tables.
To enable (or disable) logging, use the EnableLogging tool and specify a new log file
name to maintain individual log files for each client connection.
The contents of the log file are deleted each time a new client connects to the ODA
Server. This minimizes the likelihood that the log file will grow too large. To preserve
the contents of a log file for a particular client connection, specify a unique file name
when logging is enabled.
The log file is associated with a specific data provider. The default file name for
ABBDataAccess Provider is ODAProviderLog.txt. This file is located in the ODA
Provider/Log folder. The default path is as follows:
C:\ProgramData\ABB\IM\ODA
To open the log file in the default text editor, double-click it, or choose Open from the
context menu. An example log file is shown in Figure 14.33.
For each query the log file indicates the data source (table name), and echoes the query.
The log file also provides information regarding the status of internal ODA Server
components. This information is intended for ABB technical support personnel for
debugging.
15 Configuring Data Providers
15.1 Data Provider Applications
[Figure: data provider architecture. Display and Client Services (Display Services,
DataDirect, Desktop Trends, and ODBC clients) connect through the data providers
(DBA, IMHDA, AIPHDA, AIPOPC, OPC DA Proxy) to the IMHDA and AIPHDA servers,
Calculation Services, the SoftPoint Server, and the Oracle-based message logs, numeric
logs, history logs, and PDLs.]
NOTES:
1) The Information Management and Connectivity Servers may reside on the same node.
2) The Industrial IT Process Values, History Values, and Alarm/Event dialogs in
DataDirect do not use data providers.
15.2 Overview of Data Provider Functionality
Typically the default data providers are sufficient to support most data access applications.
Certain data access applications require small site-specific adjustments. These are
summarized in Table 15.1.
Additional data providers can be configured as required by the application. For example,
there may be a need to create an additional ADO data provider to support SQL access
to numeric logs, or access to another third-party database.
Figure 15.2: Architecture for Local/Remote Data Providers (Node #1 hosts the local
data server with the service provider and data providers; Node #2 hosts the remote
data server)
To install remote data providers for Oracle Data Access on a platform that is external
to an Industrial IT system, consult ABB for additional guidelines.
Display Services uses the channel number for all data provider references EXCEPT
data statements. When using data statements in Display scripts, and a data provider
reference is required, use the -name argument.
DataDirect can be configured to use either the channel number or the -name argument.
The default setup is to use -name. This is recommended because the channel number
does not support some data access functions, including process value update, history
update, history retrieval of raw data, and history bulk data retrieval. If the channel number
is used, the same channel number applies to all data providers. The default channel is
zero (0). Select a different channel if necessary.
The Batch Management PDL PFC display is hard-coded to use an ADO data provider
named DBA, which is the default name for the ADO data provider. Be sure that the ADO
data provider supporting those applications is named DBA, and that no other active
data providers use that name.
Figure 15.3: Opening the ABB Data Services Supervision Configuration Dialog
The left side of this dialog is a browser that is used to select the data providers and their
respective attributes, Figure 15.4. Use the +/- buttons to show or hide data providers
and their attributes.
The Supervision process for ADSS supervises all service and data providers. ADSS is
automatically started as a Windows process. To monitor and control ADSS, use the
Services icon in the Windows control panel. ADSS starts the service provider and data
providers according to their respective configurations. The service provider (COM) is
always started first. For details on configuring a data provider’s attributes, refer to Editing
Data Provider Configuration Files on page 557.
When a provider is selected in the browser, the following information is displayed for the
selected provider:
Provider, Status - Indicates the selected provider's label and status. This label is only
used in this dialog. This is NOT the name that desktop client applications use to
reference the data provider.
Argument Value - Indicates the current value for the selected attribute. This field can
be edited to modify the service's configuration. These attributes are described in
Table 15.5.
The context (right-click) menu, Figure 15.5, provides access to the following provider
management functions: Adding a Data Provider, Starting and Stopping Providers, Deleting
a Data Provider, and Adding Arguments to a Data Provider.
Some functions are not available while the selected provider is running. In this case,
stop the provider to see the complete menu.
15.3 Enabling Write Access
3. From the User Preferences dialog, Figure 15.7, click + to expand the RIGHTS
section, select OBJECTWRITE or LOGWRITE (or both), and check the check box
to change the preference value to True.
15.4 General Procedures
If the data provider to be copied is currently running (Status = Started), stop it:
right-click and choose Stop Provider from the context menu.
3. After the data provider to be copied is stopped, right-click again and choose Copy
Provider from the context menu. This displays a dialog for specifying the unique
name for the new data provider, Figure 15.8.
4. Enter a new name, then click OK. This adds the new data provider in the browser
under Supervision. This name labels the data provider in the browser. This name
DOES NOT uniquely identify the data provider for data access by display elements.
5. Configure any data provider attributes as required.
If both the original data provider and the copy are being connected to the same
service provider, then either the -name or -channel arguments, or both MUST be
edited. The Service Provider will not allow connection of more than one Data Provider
of the same type, unless they have different names.
Assigning a new channel number also assigns a new name. Therefore, if a new
channel number is assigned, editing the name is optional; edit it only if the assigned
default name is not wanted.
To access the -channel or -name argument, click Arguments. Then enter a new
channel number or name in the Argument Value field, Figure 15.9.
If the -channel or -name argument is not included in the argument list, then add the
argument to the list.
Typically, defaults can be used for all other attributes. For details regarding attributes,
refer to Table 15.5 in Editing Data Provider Configuration Files on page 557.
6. Click Save to save the new data provider.
7. Click Start to start the new data provider.
8. To verify proper start-up and review any status errors, refer to Checking Display
Server Status on page 570.
Server Type in this dialog does NOT refer to the type of data provider, but rather the
method by which Windows handles this process (COM or Command Line). Currently,
only command line processes are supported. Thus the Server Type is fixed as CMD
and cannot be changed.
3. Enter a unique name for the data provider in the Server Name field. This is the name
that the data provider will be identified by in the browser under Supervision. This
name DOES NOT uniquely identify the data provider for data access by display
elements.
4. Enter the path to the executable (.exe) file for the data provider. Enter the path
directly in the Server Path field, or use the Browse button. The executable files
reside in: C:\Program Files (x86)\ABB Industrial IT\Inform IT\Display
Services\Server\bin
The file names correspond to the process names as indicated in Table 15.4.
5. Click OK when finished. This adds the new data provider under Supervision in the
browser.
6. Configure any data provider attributes as required.
If a data provider of the type being added is already connected to the service provider,
then edit either the -name or -channel argument, or both. These arguments uniquely
identify the data provider. The name is NOT automatically updated when a new data
provider name is specified in the New Server dialog (step 2). The Service Provider
will not allow connection of more than one Data Provider of the same type unless
they have different names.
Assigning a new channel number also assigns a new name. Therefore, it is not
necessary to edit the name unless something other than the assigned default name
is preferred.
Refer to the procedure for Copying an Existing Data Provider to learn how to access
the -name and -channel arguments.
For the specific arguments for each type of service, see Table 15.7 for ADSspCOM.EXE,
Table 15.8 for ADSdpDDR.EXE, Table 15.9 for ADSdpADO.EXE, Table 15.10 for
ADSdpOPCHDA.EXE, Table 15.11 for ADSdpDCS.EXE, Table 15.12 for ADSdpLOG.EXE,
and Table 15.13 for ADSdpOPC.EXE.
Table 15.7 describes the arguments that are unique for the ADSspCOM.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 15.6.
Reverse DNS lookup is disabled on the COM process by default (-UseReverseDNS 0)
so that DNS lookups do not prevent data provider communication. COM arguments are
not reset when a system is upgraded, so defaults may have to be restored manually.
Table 15.8 describes the arguments that are unique for the ADSdpDDR.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 15.6.
Table 15.9 describes the arguments that are unique for the ADSdpADO.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 15.6.
ReturnAllErrors Argument
Normally, on errors the ADO provider returns an array containing the error code and
error text. Some database providers are capable of generating several errors as a result
of one request. If ReturnAllErrors is FALSE (default), only the last error code/text is
returned. If ReturnAllErrors is TRUE, the returned array contains all the error codes/texts
encountered.
For example, if the connection to the SQL server fails: with ReturnAllErrors FALSE
(default), only the final error is returned; with ReturnAllErrors TRUE, all errors raised
during the failed connection attempt are returned.
Table 15.10 describes the arguments that are unique for the ADSdpOPCHDA.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 15.6.
Table 15.11 describes the arguments that are unique for the ADSdpDCS.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 15.6.
Table 15.12 describes the arguments that are unique for the ADSdpLOG.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 15.6.
Table 15.13 describes the arguments that are unique for the ADSdpOPC.EXE data
provider. This data provider also has arguments common to all data providers as
described in Table 15.6.
The data provider will be deleted immediately (after the specified TerminateDelay
time). There is NO prompt to confirm or cancel.
15.5 Checking Display Server Status
2. Enter the hostname for the data server where the data providers are located. Leaving
the hostname blank defaults to localhost. As an option, specify the maximum time
to wait for the server to respond.
3. Click OK. This displays the server status window, Figure 15.16.
This window provides the following information related to the Display Server:
Licenses - These fields show the total number of purchased licenses, and the available
licenses not currently being used:
• Build - When logging in with Build access, Display Services can be used in both
the Build mode and the Run mode.
• MDI - Multiple Document (Display) Interface. When logging in with MDI Run
access, multiple displays can be run at the same time.
• SDI - Single Document (Display) Interface. When logging in with SDI Run access,
only one display at a time can be run.
• ADD - DataDirect.
• DP - The number of data providers connected.
Data Providers Connected - Shows the number of data providers connected. Display
Services must have one data provider connected.
Clients Connected - Shows how many clients are connected to this server. The
information provided includes the node name, type (MDI or SDI), and date and time
when the client connected.
16 Authentication
16.1 Usage within Information Management
16.2 Configuring Authentication
If the Inform IT Authentication Category aspect does not yet exist, add it to the aspect
category's aspect list. If the list of operations for the aspect category is not complete,
any missing operations may be added: click Add, then enter the operation name and
click OK, Figure 16.3.
To allow authentication of numeric logs, add the Inform IT Authentication Category
aspect to the Log Configuration aspect category (History Configuration aspect system,
Log Configuration aspect type).
16.3 Configuring Authentication for Aspects Related to SoftPoint Configuration
Authentication can be configured for the following aspect types related to SoftPoint
configuration:
• In the PCA Aspect System:
– Binary
– Integer
– Object
– Real
– String
• In the SoftPoint System Aspect System:
– Alarm Event Configuration
– Alarm Event Settings
– Collection
– Deploy
– Generic Control Network
– Process Object Configuration
– Signal Configuration
For these aspect types, when authentication is configured, that configuration applies to
all operations which may be performed on those aspects (unlike the other Information
Management aspects, where authentication can be applied on an operation-by-operation
basis).
17 System Administration
17.1 Configuring OMF Shared Memory
17.2 Start-up and Shutdown
17.3 Managing Users
HistoryAdmin Group
This group is created by the Information Management software installation. Users in this
group have access to History configuration and History database maintenance functions.
These users can access History database maintenance functions and other Information
Management administrative procedures seamlessly, without having to change users or
enter Oracle passwords.
This group is included in the PAS OperatorList by default. This grants all users in the
HistoryAdmin group access to PAS, even if these users are not Domain Administrator-level
users. This enables HistoryAdmin users to start and stop PAS services as required by
certain History database maintenance functions.
To grant HistoryAdmin users access to History database configuration, but deny access
to PAS, remove this group from the PAS OperatorList. Instructions for editing the PAS
OperatorList are provided in PAS OperatorList on page 594.
ORA_DBA Group
This group is created by the Oracle software installation. Users in this group have Oracle
access for database maintenance functions. Such access is generally reserved for
technical support personnel.
There is no default password for Oracle accounts. The password should be set while
creating the instance.
The Oracle Instance wizard can be used at any time to change the password for these
accounts; the runtime software does not depend on these accounts (see Figure 17.9).
If any reports or other Oracle clients access these accounts using a hard-coded
password, those scripts must be modified whenever the password is changed.
All History Services Oracle access uses Windows authentication to access the Oracle
database. Runtime access is independent of the passwords for these accounts.
17.4 Managing Users for Display Services
When a folder is copied and renamed, the user name is the second word in the name
(following the underscore, for example in AID_aid.svg, the user name is aid).
When a new user is created, then create a unique password for that user. Refer to
Creating User Passwords on page 598.
Password security was enhanced in version 6.1. As a result, Display Services in 6.1
is no longer backward compatible with previous versions of Display Services. A 6.1
client cannot connect to an older server, and an older client cannot connect to a 6.1
server.
Also, configure user preferences and/or customize language translations for that user.
Refer to Configuring User Preferences on page 600, and Customizing Language
Translations on page 604.
Copy the value from the AdvaInform Display Password field and then paste it in the
Preferences.svd file, or simply enter the value directly.
RO = Read Only
RW = Read/Write
Any changes made to user preferences will not take effect until the computer is
restarted.
The first time a user logs in using a custom language, a prompt to define any unknown
text strings is displayed, Figure 17.18. Either define the strings at this time, skip individual
strings, or skip all definitions.
To customize the language, choose User > Language from the menu bar and then use
the Edit Language dialog, Figure 17.19.
The Texts list displays the English version of all text used in the user interface. Selecting
a text line from the list displays the translation for that text according to the language
chosen for this session. English text is the default. Edit the translation in the Translation
field, and then click Apply.
17.5 Checking Display Server Status
Some texts contain special characters, which should not be removed.
• & is used in menu texts to indicate that the next character is a mnemonic key.
• % is used by the system to fill in additional text such as the name of a display.
Enter the hostname and, optionally, the maximum time to wait for the server to respond;
then click OK. This displays the server status window, Figure 17.21.
17.6 Software Version Information
17.7 Disk Maintenance - Defragmenting
A Extending OMF Domain to TCP/IP
A.1 Example Applications
Multicast addresses look very similar to IP addresses. In fact, Multicast addresses occupy
a reserved section of the IP address range: 224.0.0.2 to 239.255.255.255. The default
Multicast address used when multicasting is first enabled is 226.1.2.3.
The default address is acceptable for systems that have their own TCP/IP network (for
small plants with no company intranet). For large companies with complex intranets
connecting multiple sites, the default address is NOT recommended.
Figure A.1: Example System Configuration Using Multicast Only to Establish Domain
(nodes EH-2 (HP-UX) and EH-3 (2000), both using Multicast address 226.1.3.4)
Any valid address may be selected and assigned to each of the Information Management
nodes that are required to be in the same Domain. Some companies have the network
administrator maintain control over usage of Multicast addresses. This helps prevent
crossing of Multicast defined Domains, and the problems that may result.
Once the OMF TCP/IP Domain where the Information Management node will reside is
defined, use the Communication Settings dialog to enable the OMF TCP/IP socket, and
assign the Multicast address. Use the following three fields in the lower left corner of this
dialog: TCP/IP Multicast enabled, Multicast address, MulticastTTL. This procedure is
described in Configuring OMF for TCP/IP on page 618.
TCP/IP protocol must be configured before configuring the Multicast address and
enabling the OMF TCP/IP socket. This is described in Configuring TCP/IP Protocol on
page 618.
[Figure: example domain configuration - Domain 1: EH1 - EH2; Domain 2: EH1 - EH3;
Multicast Address = 226.1.3.4; IP Address = 172.28.66.190; nodes EH1, EH2, and EH3
on a TCP/IP network.]
The Socket Configuration Lists on EH1 and EH4 must be configured for point-to-point
communication. For EH4, because all OMF communication to this node should be via
point-to-point, Multicast should be disabled to prevent unwanted OMF communication
on the default Multicast address. Do this by deleting Multicast from the Socket
Configuration List. This is described in Configuring OMF Socket Communication
Parameters on page 621. For EH2 and EH3, leave the Socket Configurations at the default
values.
[Figure: example domain configuration - Domain 1: EH1 - EH2 - EH3; Domain 2:
EH1 - EH4; Multicast Address = 226.1.3.4; IP Address = 172.28.66.190; EH4 reached
through a bridge/gateway with Multicast disabled; all nodes on a TCP/IP network.]
[Figure: example domain configuration - Domain 1: EH1 - EH2 - EH3; IP Address =
172.28.66.190; nodes EH1, EH2, and EH3 on a TCP/IP network.]
A.2 Configuring TCP/IP Protocol
A.3 Configuring OMF for TCP/IP
Use the TCP/IP Configuration section to configure the required parameters and enable
OMF on TCP/IP. Refer to Table A.1 for details.
All processes under PAS supervision must be stopped and then restarted for any
changes to take effect. Refer to Starting and Stopping History on page 441.
The default setting is multicast Send Receive. This means this node can see (receive
the signal from) and be seen by (send the signal to) all other nodes in its domain.
To change access within the domain, edit the socket configuration list for all nodes in
the domain. A summary of the possible entries is provided in Table A.2.
[Figure: example domain configuration - Domain 1: EH1 - EH2; Domain 2: EH1 - EH3;
Multicast Address = 226.1.3.4; IP Address = 172.28.66.190; nodes EH1, EH2, and EH3
on a TCP/IP network.]
This changes the entry to multicast Send. EH1 can now be seen by other nodes in the
domain configured to receive from EH1; however, EH1 can only see nodes from which
it is specifically configured to Receive.
Add an entry for EH2 to specify it as a node from which EH1 can receive:
1. Enter the IP address for EH2, 172.28.66.191, in the IP config field.
2. Enter a check in the Receive check box, and make sure the Send box is unchecked.
This configuration is shown in Figure A.9.
A.4 OMF Shared Memory Size
Repeat the above procedure to modify the socket configurations for EH2 and EH3. Refer
to Figure A.7 for the respective socket configurations.
To remove an entry from the Socket Configuration List, select the entry then click
Remove.
A.5 Shutdown Delay
If distributed History logs are being implemented (this requires the TCP/IP Enabled option
to be checked in the Communication Settings dialog), then increase the OMF shared
memory:
• for History nodes that send History data to a consolidation node, add 5 meg to the
OMF shared memory requirement.
• for a consolidation node that collects from History nodes, add 5 meg to the OMF
shared memory requirement for each node from which it collects data. For example,
if a consolidation node receives from eight History nodes, the consolidation node
will require 8*5 = 40 meg additional OMF shared memory.
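In general (a restatement of the rule above, not an additional requirement), the extra
OMF shared memory can be estimated as:

additional OMF shared memory (meg) = 5 x (number of consolidation links)

where a History node that sends to a consolidation node counts as one link, and a
consolidation node counts one link for each History node it collects from.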
To set the OMF Shared Memory size back to the default, click Set Default. When
finished, click OK.
Processes under PAS must be stopped and restarted for changes to take effect.
Refer to Starting and Stopping History on page 441.
B General Log Property Limits
Table B.1 lists limits on general log properties. Refer to the section related to the specific
log type for more detailed information.
Revision History
This section provides information on the revision history of this User Manual.
The revision index of this User Manual is not related to the 800xA 6.1 System Revision.
The following table lists the revision history of this User Manual.
Revision Index   Description                Date
A                Published for 800xA 6.1    September 2018