
OPTIMA

Operations and
Maintenance Guide
8.0
Confidentiality, Copyright Notice & Disclaimer

Due to a policy of continuous product development and refinement, TEOCO Ltd. (and its affiliates,
together “TEOCO”) reserves the right to alter the specifications, representation, descriptions and all
other matters outlined in this publication without prior notice. No part of this document, taken as a
whole or separately, shall be deemed to be part of any contract for a product or commitment of any
kind. Furthermore, this document is provided “As Is” and without any warranty.

This document is the property of TEOCO, which owns the sole and full rights including copyright.
TEOCO retains the sole property rights to all information contained in this document, and without
the written consent of TEOCO given by contract or otherwise in writing, the document must not be
copied, reprinted or reproduced in any manner or form, nor transmitted in any form or by any
means: electronic, mechanical, magnetic or otherwise, either wholly or in part.

The information herein is designated highly confidential and is subject to all restrictions in any law
regarding such matters and the relevant confidentiality and non-disclosure clauses or agreements
issued with TEOCO prior to or after the disclosure. All the information in this document is to be
safeguarded and all steps must be taken to prevent it from being disclosed to any person or entity
other than the direct entity that received it directly from TEOCO.

TEOCO and Netrac® are trademarks of TEOCO.

All other company, brand or product names are trademarks or service marks of their respective
holders.

This is a legal notice and may not be removed or altered in any way.

COPYRIGHT © 2014 TEOCO LTD.

ALL RIGHTS RESERVED.

Your feedback is important to us: The TEOCO Documentation team takes many measures in
order to ensure that our work is of the highest quality.

If you found errors or feel that information is missing, please send your Documentation-related
feedback to [email protected]

Thank you,

The TEOCO Documentation team


Change History
This table shows the change history of this guide:

Edition Date Reason

1 22 September 2014 First edition.


Table of Contents

Table of Contents
1 Introduction 15
The Data Loading Process 15
The Gather Stats Process 16
Creating a Daily Gather Stats Job 17
Creating a Gather Stats Job per Interface 18
Disabling Oracle's Auto Gather Stats Jobs 19
Gathering Dictionary Statistics 19
Monitoring the Gather Stats Job 19
OPTIMA Default Network Communication Requirements 20
Installation and Upgrade 20
Prerequisites for Installing the OPTIMA Backend 20
Installing the OPTIMA Combined Backend Package 21
Installing Executables 22
Preparing Windows Application Servers 22
Preparing for Mediation 22
Preparing Windows Mediation Servers 23
Preparing Unix Mediation Servers 24
Upgrading OPTIMA Backend 26
Patching OPTIMA Backend 26
System Components 27
About PRIDs 29
About File Locations and Naming 30
Scheduling Programs 31
The Monitoring Process 32
Configuring Programs 32
About Versioning 33
About Log Files 33
About Environment Variables 34
Setting Up Environment Variables on a Windows OS 35
Setting Up Environment Variables on a UNIX OS 37
Encrypting Passwords for OPTIMA Backend Applications 38
Which OPTIMA Backend Applications are Affected by Password Security? 39
Starting and Stopping the Data Loading Process 40
Checking Log Files 41
External Programs 41
Database Programs 41
Maintenance 42
Troubleshooting 43
About the Log File Analyzer (Opxlog Utility) 44
Prerequisites for Using the Log File Analyzer 44
Configuring the Log File Analyzer 45
Example Uses for the Opxlog Utility 46
Example Log File Analyzer Configuration (INI) File 47


Rebooting the Database Server 48


Using OPTIMA Across Different Time Zones 49
Accessing Data from Outside of the OPTIMA Backend Applications 50
Managing Resources Through Consumer Groups 51
Common Error Codes 52

2 About Data Acquisition Tools 57


About the FTP Application 57
About the FTP Modes 59
Installing the FTP Application 60
Configuring the FTP Application 66
Maintenance of the FTP Application 81
FTP Message Log Codes 83
Troubleshooting 83
About the Database Acquisition Tool 84
Installing the Database Acquisition Tool 84
About the Database Acquisition Tool Modes 85
Configuring the Database Acquisition Tool INI File 89
Configuring the Database Acquisition Tool for Sun Solaris Machines and SQL Server Databases 93
Running the Database Acquisition Tool 94
Database Acquisition Tool Message Log Codes 94
About the OPTIMA CORBA Client 96
OPTIMA CORBA Client Parameters 97
Example OPTIMA CORBA Client INI File 99
CORBA Client/Server Message Log Codes 99

3 About SNMP Data Acquisition 101


About SNMP Auto-Collection 103
About Simple Network Management Protocol (SNMP) 103
About Management Information Bases (MIBs) 104
About SNMP Versions 104
Prerequisites to Using the SNMP Poller Configuration Interface 105
Logging into the SNMP Poller Configuration Interface 106
Configuring the SNMP Poller 107
Loading MIBs and Creating Reports 107
Defining Devices (Agents) to be Polled 115
Finding and Loading Devices Automatically 117
Finding and Loading Devices Manually 125
Manually Defining Devices 127
Editing and Deleting Devices 129
Assigning Reports to Device Types 129
Importing and Exporting Device Types and Reports 131
Using the API to Acquire Device Information 134
Assigning Devices (Agents) to Machines 135
Assigning Scan Definitions to Machines 138
Viewing a Summary of the SNMP Poller Configuration 140
Selecting Web Service Settings 140
Generating an INI File of SNMP Poller Settings 141
Manually Tuning the SNMP Poller Settings INI File 144
Ping Testing with the SNMP Poller 146
Using Traceroute with the SNMP Poller 148
Running the SNMP Poller 149


Maintenance of the SNMP Poller 149


SNMP Poller Message Log Codes 150
Troubleshooting 151
About the SNMP Discoverer 152
Installing the SNMP Discoverer 153
About the SNMP Assigner 153
Pings Algorithm Example 155
Normpings Algorithm Example 156
Installing the SNMP Assigner 156
About the Mediation Agent 157
Installing the Mediation Agent 158
Setting up the Web Server 158
Installing OPTIMA Services 159
Setting Permissions on Message Queues 159
Protecting the Web Service 159
Creating a System Environment Variable (Windows) 161

4 About the OPTIMA Parser 163


Parser Quick Start 164
The Parsing Process 166
Installing the OPTIMA Parser 166
Starting the OPTIMA Parser 166
Configuring the OPTIMA Parser 167
Maintenance 172
Checking for Error Files 172
Checking a Log File Message 172
Checking the Parser Statistics Log Messages 173
Checking the Parser Information Log Messages 176
Stopping the OPTIMA Parser 176
Checking the Version of the Parser 177
Checking the Parser is Running 177
Troubleshooting 177
Example Parser Configuration (INI) File 178

5 About Data Validation 181


The Validation Process 182
Installing the Data Validation Application 182
Configuring the Data Validation Application 183
Starting the Data Validation Application 185
Defining Reports 186
Maintenance 187
Checking for Error Files 187
Checking a Log File Message 187
Stopping the Data Validation Application 188
Checking the Version of the Data Validation Application 188
Checking the Application is Running 188
Troubleshooting 188
Data Validation Application Message Log Codes 189

Example Data Validation Configuration (INI) File 190

6 About the File Combiners 193


Combiner Quick Start 193
What is Combining? 195
Installing the File Combiners 195
Starting the File Combiners 196
Configuring the Single Input File Combiner 196
Defining Reports 197
Example Single Input File Combiner Configuration (INI) File 198
Configuring the Multiple Input File Combiner 200
Converting DATEFORMAT to Regular Expressions 204
How File Groups Are Created 205
Example Multiple Input File Combiner Configuration (INI) File 205
Maintaining File Combiners 206
Checking for Error Files 206
Checking a Log File Message 207
Stopping File Combiners 207
Checking the File Combiner Version 207
Checking the Application is Running 208
Troubleshooting File Combiners 208
File Combiner Message Log Codes 208

7 About Loading in OPTIMA 213


Loader Quick Start Section 214
Prerequisites 214
Create Raw Table 214
Add Grants 214
Configure a New Loader Report 214
Run the Loader 215
Installing the Loader 216
Starting the Loader 216
Starting the ETL GUI 216
Selecting the Loader Machine 217
About the ETL Loader Configuration Window 218
Configuring Reports 219
Defining the General Options for the Loader 219
Defining the Files and Directories for the Loader 220
Defining the Database and Processing Settings for the Loader 222
Defining the Table Settings for the Loader 225
Viewing the Log Messages for the Loader 232
Defining the Validator Options for the Loader 232
Saving the Configuration 234
Maintenance of the Loader 234
Checking a Log File Message 235
Checking the Loader Error Log Tables 235
Checking the Version of the Loader 236
Checking that the Loader is Running 236
Tuning the Loader 237
About the Loader Options and Database Values 239


Troubleshooting 241
Loader Error Codes 242
Example Loader Configuration (INI) File 245
About the Loader Configuration (INI) File Parameters 247
About Direct Path Loading 251
Migrating to Direct Path Loading 252
Configuring for Direct Path Loading 252
Tuning Direct Path Loading 254
Error Handling for Direct Path Loading 255
Configuration (INI) File Parameters for Direct Path Loading 256

8 About the Process Monitor 257


How the Process Monitor Works 258
Installing the Process Monitor 259
Starting the Process Monitor 259
Configuring the Process Monitor 260
Defining Monitoring Settings for an Application 261
Maintenance 262
Checking a Log File Message 263
Stopping the Process Monitor 263
Checking the Version of the Process Monitor 263
Checking that the Application is Running 263
Process Monitor Message Log Codes 264
Troubleshooting 265
Example Process Monitor Configuration (INI) File 265

9 About the Directory Maintenance Application 267


The Directory Maintenance Process 268
Installing the Directory Maintenance Application 268
Starting the Directory Maintenance Application 268
Configuring the Directory Maintenance Application 269
Maintenance 274
Checking a Log File Message 275
Stopping the Directory Maintenance Application 275
Checking the Version of the Directory Maintenance Application 275
Checking that the Application is Running 275
Troubleshooting 276
Directory Maintenance Application Message Log Codes 276
Example Directory Maintenance Configuration (INI) File - Report Only 277
Example Directory Maintenance Configuration (INI) File - Report and Maintenance 277

10 About the OPTIMA Summary Application 279


Quick Start 280
Prerequisites 280

Create Raw and Summary Tables 280


Add Grants 280
Configure a New Summary Report 281
Schedule Reports 282
Check the Log Viewer 282
About the DIFFERENCE_ENGINE Package 282
Supported Summary Types 283
Installing the OPTIMA Summary 284
Connecting to the OPTIMA Database 285
About OPTIMA Summary Version 285
Configuring the OPTIMA Summary 286
Processing Multiple Schedules Per Session 288
Configuring Sub-Hourly Summaries 289
About the OPTIMA Summary Dialog Box 289
Adding a New Summary Report 291
The Report Configuration Tab 292
The SQL Query Tab 302
The Column Mappings Tab 305
The Schedules Tab 307
Editing and Deleting a Summary Report 320
Editing Multiple Reports 320
Stopping the OPTIMA Summary Reports 322
Tuning the OPTIMA Summary 323
Troubleshooting the OPTIMA Summary 325
Viewing Log Messages 325
About Oracle DBMS_SCHEDULER Jobs 327
About Scheduling Oracle Jobs 328
Using the Summary for Direct Database Loading 329

11 About the Data Quality Package 331


Installing the Data Quality Package 331
Using the Data Quality Console 331
Configuring the Data Quality Package 332
About Global Configuration 333
About Data Source Configuration 341
About Data Quality Configuration 353
Using the Data Quality Configuration Package 358
About the Data Quality Configuration Package 359
Running the Data Quality Configuration Package 360
Logging for the Data Quality Configuration Package 361
Troubleshooting the Data Quality Configuration Package 361
Saving Configuration Information to Microsoft Excel 362
About the Standard Data Quality Reports 362
About the Enhanced Data Quality Reports 364
About the Raw Table Data Integrity Report 364
About the Summary Table Data Integrity Report 365
About the System Summary Report 365
About the File Availability Report 365
About the Health Check Report 365


Example Workflow for Using the Availability Package 366


Troubleshooting the Data Quality Package 369

12 Scheduling Oracle Jobs 373


About Scheduling 373
About Gather Stats 373
Creating a Daily Gather Stats Job 374
Creating a Gather Stats Job per Interface 375
Disabling Oracle's Auto Gather Stats Jobs 375
Gathering Dictionary Statistics 376
Monitoring the Gather Stats Job 376

13 About the OSS Maintenance Package 377


Installing the OSS Maintenance Package 377
Maintaining Table Partitions 377
How Partition Maintenance Works 378
Configuring Partition Maintenance 381
Scheduling Partition Maintenance 383
Maintaining Tablespaces 383
Configuring Tablespace Maintenance 384
Scheduling Tablespace Maintenance 385
Gathering Statistics 386
Configuring Statistics Gathering 386
Scheduling Statistics Gathering 387
Scheduling the OSS Maintenance Package 388
Turning Off Components in the OSS Maintenance Package 389
Tuning the OSS Maintenance Package 389
Modifying the Estimate Percentage 389
Forcing the Gathering of GLOBAL Schema Stats 390
Modifying the PARTITION_STATS_METHOD 390
Maintenance 391
Troubleshooting the OSS Maintenance Package 391
Performance Reporting 392

14 About the OPTIMA Report Scheduler 395


Installing the Report Scheduler 395
Configuring the Report Scheduler 396
Scheduling Reports Across Different Time Zones 398
Configuring PID File Settings 400
Configuring Log File Settings 401
Starting the Report Scheduler GUI 402
Running Multiple Instances of the Report Scheduler 402
Maintenance of the Report Scheduler 403
Checking a Log File Message 403
Stopping the Report Scheduler Application 404
Checking the Report Scheduler is Running 404
Troubleshooting 404
Troubleshooting Exporting to Email 405


Example OPTIMA Report Scheduler Configuration (INI) File 407

15 About OPTIMA Alarms 409


About the Alarms Processor 409
Starting the Alarms Processor 409
Configuring the Alarms Processor 410
Alarms Processor Message Log Codes 412
About the Alarm Notifier 414
Installing the Alarm Notifier 414
Prerequisites for Using the Alarm Notifier 414
Starting the Alarm Notifier 415
About the Alarm Notifier Dialog Box 415
About the Current Actions Window 425
Configuring the Database for Alarm Notification 426
Maintaining Alarms 429
Troubleshooting the Alarm Notifier 430

16 About the SNMP Agent 433


The SNMP Interface 433
Configuring the SNMP Interface for a Single Agent and Multiple FMSs 434
Configuring the SNMP Interface for Multiple Agents and Multiple FMSs 435
Installing the SNMP Agent 436
Starting the SNMP Agent 436
Configuring the SNMP Agent 437
Summary of the SNMP Agent Modes 441
Detailed Description of the SNMP Agent Modes 441
Configuring Views 444
Maintenance 445
Stopping the SNMP Agent 445
Checking the Version of the SNMP Agent 445
Checking the Application is Running 445
Troubleshooting 446
SNMP Agent Message Log Codes 446
Example SNMP Agent Configuration (INI) File 452
About the SNMP MIB for Alarm Forwarding 454
Example SNMP Agent Traps 456
Example ColdStart Trap 456
Example HeartBeat Trap 457
Example Alarm Trap 459
Example End of Resync Trap 461

17 About the File Splitter 463


Example of Using the File Splitter 463
Installing the File Splitter 463
Starting the File Splitter 463
Configuring the File Splitter 464
Maintenance 466
Checking for Error Files 466

Stopping the File Splitter 466


Checking the Version of the File Splitter 467
Checking the Application is Running 467
Troubleshooting 467
Example File Splitter Configuration (INI) File 468

18 Functions, Procedures and Packages 471


AIRCOM Schema Functions 471
AIRCOM Schema Procedures 472
AIRCOM Schema Packages 472
OSSBACKEND Schema Packages 473
WEBWIZARD Schema Functions 473
WEBWIZARD Schema Packages 473
VENDOR Schema Procedures 474
Scheduling Jobs 474
OPTIMA Backend Jobs and Schedules 476

Glossary of Terms 479

Index 483


1 Introduction

The data loading process collects network performance data, typically from the OMC platforms of
telecommunication networks, and stores the data in the OPTIMA performance management database.
The data extraction, transformation and loading processes are carried out by the OPTIMA backend
programs, which run continuously to ensure that network data is collected and presented in
near real-time.

Note: The extraction, transformation and loading processes are often referred to as ETL.

This guide contains the operation and maintenance (O&M) procedures for the data loading
processes handled by OPTIMA. For more information on SNMP data handled by Netrac mediation,
see the Netrac Server Installation Guide.

Note: This guide does not provide the specific configurations for each customer installation. Please
refer to the implementation plan for customer-specific information such as interfaces deployed,
machine IDs and administrator login details.

The Data Loading Process


This diagram shows the data loading process:

Data Loading Process


This table describes the data loading process:

• Network: Performance Management (PM) files are created in the network.
• Cron/batch files: Schedule/combine program schedules.
• FTP: Sends the PM files to the backend machine or server.
• Parser: Converts the PM files into one or more CSV files.
• Data Validation: Checks the column order of the CSV files and splits large files to match the database table structure.
• Loader: Transfers the CSV files into the database.
• Monitor (PID) files: Ensure that new instances of a program are not started if the previous instance is still running.
• Process Monitor: Removes failed programs.
• Logfile Utility: Combines log messages from all logs into a common file for analysis or loading into the database.
• Directory Maintenance: Removes or archives old files from specified directories.

After data has been loaded, a number of database-related programs are used to further analyze or
manipulate the data:

• OPTIMA Summary: Runs all summary processes, for example Busy Hour analysis or daily summaries. This program can also be used to load data from external Oracle databases. Summaries are organised as reports, and individual reports (or groups of reports) are scheduled by Oracle jobs.
• Data Quality: Used for Data Quality reports.
• Configuration utility: Used to define configuration settings for database programs; these settings are stored in the database.
• Alarms: Monitoring and alerting. Alarm monitoring can be configured based on missing data tables or any of the system logs, to alert administrators by SMS and/or e-mail to problems with any of the data loading programs.

The Gather Stats Process


To retrieve the data requested by an SQL command, information about a database's tables and
indexes is required. This information enables the Oracle Optimizer to calculate the best way to
access the data.

The required information is obtained by the Gather Stats process, which collects estimates of,
among other things:
• the table/index size (in blocks)
• the number of rows
• the low and high values of columns
• distinct values


The configuration of the Gather Stats package depends on four metadata tables:

1. STATS_CONTROL - Controls full execution of the package, and comes with the default settings that are used to populate the metadata tables

2. STATS_SCHEMA_METADATA - Controls utilization at the schema level

3. STATS_TABLE_METADATA - Controls utilization at the table level

4. STATS_EXCLUDE_TABLES - Controls which tables are excluded from the gather stats process

OPTIMA includes the Gather Stats process in a package that is auto configured during the
Installation/Upgrade process. The only subsequent intervention normally necessary is to schedule
Oracle jobs.

Note: The default configuration suits most installations, but you may need to adjust some of the
parameters for one or more tables. If this is the case, contact TEOCO Support.

Important: If you do not use the OIT (OPTIMA Installation Tool) to create vendors but instead create
them manually, you must also insert them manually in the Gather Stats process.

To do this:

1. Exclude the Error tables:


insert into aircom.stats_exclude_tables
values ('VENDOR_NAME','LIKE '''||'ERR%'||'''');
commit;

2. Populate all tables from a single Schema:


EXEC GATHER_STATS.populate_metadata_tables('VENDOR_NAME');

Creating a Daily Gather Stats Job


To create the Gather Stats job that will run daily (03:00) and gather statistics for all objects
(metadata tables) for all schemas:

begin
dbms_scheduler.create_job(
job_name => 'AIRCOM.GATHER_STATS_OPTIMA'
,job_type => 'PLSQL_BLOCK'
,job_action => 'BEGIN GATHER_STATS.collect_schema_stats;
END;'
,start_date => SYSDATE+1
,repeat_interval => 'FREQ=DAILY;BYHOUR=03'
,enabled => TRUE
,comments => 'Gather Stats');
end;
/

Warning: The scheduled job should be created under the schema that the Gather Stats package
has been installed on. Failure to do so may prevent the job from running successfully.


To see if the last run for the scheduled job was successful:

Select log_id, log_date, owner, job_name, job_class, operation,


status
from all_scheduler_job_log
where owner='AIRCOM'
and job_name ='GATHER_STATS_OPTIMA'
order by log_date desc;

If you want to force the job to run immediately after creation:

begin
dbms_scheduler.run_job('GATHER_STATS_OPTIMA',false);
end;
/

To verify that the job has terminated successfully:

Select owner, job_name, session_id
from all_scheduler_running_jobs
where owner='AIRCOM'
and job_name ='GATHER_STATS_OPTIMA';

To stop the job gracefully (force=false):

begin
dbms_scheduler.STOP_JOB('GATHER_STATS_OPTIMA');
end;
/

Creating a Gather Stats Job per Interface


Another option for configuring the Gather Stats job is to create a scheduled job per
Vendor-Interface/Schema. You can use this PL/SQL block to create a job for all interfaces:

DECLARE
  vhour varchar2(50);
begin
  -- rownum (aliased as num) staggers the start hours across 00-03
  for i in (select schema_name, rownum num from stats_schema_metadata) loop
    vhour := 'FREQ=DAILY;BYHOUR=0'||mod(i.num,4);
    begin
      -- drop the gather_stats job if it already exists
      dbms_scheduler.drop_job('GT_'||i.schema_name);
    exception
      when others then
        null;
    end;
    dbms_scheduler.create_job(
      job_name => 'GT_'||i.schema_name
      ,job_type => 'PLSQL_BLOCK'
      ,job_action => 'BEGIN GATHER_STATS.collect_schema_stats(pschema=>'||q'[']'||i.schema_name||q'[']'||'); END;'
      ,start_date => SYSDATE
      ,repeat_interval => vhour
      ,enabled => TRUE
      ,comments => 'Gather Stats');
  end loop;
end;
/

For RAC where node affinity is used, contact TEOCO Support.

Disabling Oracle's Auto Gather Stats Jobs


Although not mandatory, it is recommended that you use a single Gather Statistics procedure to
manage your database statistics. The GATHER_STATS package will lock the schema stats whilst
collecting statistics. If you have chosen not to stop the default Oracle gather statistics jobs, there is
the risk of a concurrency issue.

To disable the default Oracle statistic gather jobs:

BEGIN
DBMS_AUTO_TASK_ADMIN.DISABLE(
client_name => 'auto optimizer stats collection',
operation => NULL,
window_name => NULL);
END;
/

To verify that the Autotask Background Job has been disabled successfully:

Select client_name, operation_name, attributes, status
from DBA_AUTOTASK_OPERATION
where client_name = 'auto optimizer stats collection';

If you are unsure of the Oracle Database version that you have installed:

Select banner from v$version;

Gathering Dictionary Statistics


It is advisable to gather dictionary and fixed object statistics once a month. The following job
enables this functionality. Be sure to consult your Oracle DBA before performing this operation, as
it may already be in place for another job in the database. This PL/SQL block creates the
'GATHER_DICTIONARY_FIXED_STATS' job to run on the 1st day of every month at 01:00 in the
morning:

Begin
dbms_scheduler.create_job(
job_name => 'GATHER_DICTIONARY_FIXED_STATS'
,job_type => 'PLSQL_BLOCK'
,job_action => 'BEGIN gather_stats.collect_dictionary_stats; END;'
,start_date => SYSDATE
,repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=01;BYHOUR=01'
,enabled => TRUE
,comments => 'Gather Stats');
end;
/

Monitoring the Gather Stats Job


There are a number of log tables that can be used to monitor the execution of the Gather Stats job.
Currently there are no public synonyms created for these log tables; other users can access them
only by prefixing the schema name to the table name.

The default retention for these log tables is 30 days.

The log tables to monitor are:

• STATS_EVENT_LOG - Logs all event-level information.
• STATS_TABLES_LOG - Logs table-level information for each table/interface, with execution times and CPU/IO metrics.
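
For example, to review recent Gather Stats activity you can query the event log directly. This is a simple sketch that assumes the package and its log tables are owned by the AIRCOM schema, as in the job examples above:

Select *
from aircom.stats_event_log;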

OPTIMA Default Network Communication Requirements


This table gives details of the communication ports that OPTIMA processes use and expect to be
open.

• IP/TCP: from the OPTIMA Alarm Notifier server (source port >1023) to the SMSC Server/SMSC Service Proxy, destination port 5019
• IP/TCP: from the OPTIMA Alarm Notifier server and OPTIMA Report Servers (source port >1023) to the Mailserver, destination ports 25, 465
• IP/TCP: from the Mediation Servers (source port >1023) to the OMC, destination ports 21 (+ data port >1024), 22
• IP/TCP (UDP): from the OPTIMA Citrix servers (source port >1023) to the AD Server(s), destination ports 53 (UDP), 88, 135, 389, 445, 636, 3268, 3269, 49155, 49156, 49157, 49158
• IP/TCP: from OPTIMA users accessing through Citrix (source port >1023) to the OPTIMA Citrix Server(s), destination ports 80, 8080, 443, 1494, 2598

Installation and Upgrade


The complete OPTIMA backend for an interface will be installed and tested by TEOCO during
installation of the system. Please contact TEOCO Support if you wish to install a new interface or
move the installation to a new machine.

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

For installation of specific applications, see the relevant chapter.

Prerequisites for Installing the OPTIMA Backend


Before installing the OPTIMA backend, ensure that:
• .NET framework 3.5 SP1 and .NET 4.0 have been installed on the Application Server or
any other machine that will be running reports, and that they were installed before MS
Office, the configuration of which is affected by its detection of the .NET installations.
• You have downloaded the latest installation files including:
o The initial backend distribution package
o The latest backend patch distribution package


• When applying a backend patch, the latest compatible database patch has also been
applied (for details, see the patch Release Notes).
• You have installed and patched the OPTIMA Installation Tool (OIT); for more information,
see the OPTIMA Installation Tool User Reference Guide.
• Supported Oracle clients are installed on mediation servers, application servers and clients.
• The OPTIMA database has been upgraded to version 8.0; for more information, see the
Netrac Server Installation Guide.

Installing the OPTIMA Combined Backend Package


If you want to install one or more backend components, it is recommended that you use the
OPTIMA combined backend package. This enables you to install the required components quickly
and easily.

To do this:

1. Double-click the OPTIMA Backend setup.exe that you have downloaded.

2. In the dialog box that appears, type the password and then click OK.

3. On the License Agreement page, read the license agreement, and if you accept its terms,
scroll to the bottom of the agreement, select 'I accept the terms in the license
agreement' and then click Next.

Note: To print out the license agreement for your own reference, click the Print button.

4. In the User Information dialog box, enter your name and the name of your organisation,
then click Next.

5. On the Setup Type page, select the type of installation that you require:
o Complete - This installs all components of the backend features

- or -
o Custom - This enables you to select which components of the backend you want to
install

6. Click Next.

If you have selected a Complete installation, you can go to step 7.

- or -

If you have selected a Custom installation, the Custom Setup page appears.

Ensure that the components to be installed are selected and those to be omitted are not
selected and then click Next.

Tip: If you wish to change the folder where the software is installed, you can click the
Change button. However, it is recommended that you use the default folder.

7. On the Ready to Install the Program page, click Install.


Installing Executables
Install all of the required application server executables from the backend build, for example:
• Alarm Service
• Alarm Notifier
• Alarm SNMP Agent
• Report Scheduler

Preparing Windows Application Servers


To prepare your Windows Application server, perform the following tasks:

1. Create an OPTIMA user with Administration permissions (needed for task scheduling).

2. Set up Remote Desktop access (recommended for remote support).

3. Ensure that .NET 4.0 is installed.

4. Ensure that an appropriate version of Microsoft Excel (Office) is installed.

Note: Microsoft Excel is required for the Report Scheduler to produce Excel reports.

5. Install the OPTIMA Backend distribution package.

6. Copy the programs from the Windows Mediation directory of the OPTIMA Backend
Program Files directory to the OPTDIR/bin directory, and ensure that they are executable
(run with the -v option).

Actions for a later time:


• Configure and schedule the Process Monitor (one instance per application server).
• Configure and schedule the Directory Maintenance process.
• Configure and schedule the Directory Maintenance Loader.
• Configure and schedule the Opxlog process.
• Configure and schedule the Opxlog loader.
• Configure and schedule the Report Scheduler.
• Configure the Alarm Notifier.

Preparing for Mediation


To prepare for mediation:
• Copy the relevant binaries to mediation from the backend installation program directory
• For UNIX operating systems, copy the relevant OS version of AILIB directory to the
directory defined by OPTDIR from the backend installation program directory
• Make InstallLibs.sh executable (chmod 755) on the mediation server
• Run InstallLibs.sh to set up the libraries
• Add the $OPTDIR/AILIB to LD_LIBRARY_PATH environment variable
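
On a UNIX server, these steps translate into shell commands along the following lines. This is only a sketch: the backend build path and the location of InstallLibs.sh are illustrative and depend on your distribution layout.

# Copy the binaries and the OS-specific libraries from the backend build
cp /path/to/backend_build/mediation/* $OPTDIR/bin
cp -r /path/to/backend_build/AILIB/<os_version>/* $OPTDIR/AILIB
# Make the library installer executable and run it
chmod 755 $OPTDIR/AILIB/InstallLibs.sh
(cd $OPTDIR/AILIB && ./InstallLibs.sh)
# Add the library directory to the loader path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$OPTDIR/AILIB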


Preparing Windows Mediation Servers


To prepare your Windows Mediation server, perform the following tasks:

1. Using the root or administrator user, create an oracle user; this user is for the Oracle client.

2. Using the oracle user, install the latest supported Oracle client.

3. Configure the TNSNAMES.ora to point to the OPTIMA instance.

4. Create an OPTIMA user with Administration permissions.

5. Enable all required tasks and job scheduling for the OPTIMA user.

6. Set up Remote Desktop access (optional).

7. Install ActivePerl and the extra modules required for SFTP and SSH.

Note: Perl is required for FTP and Combiner processing. However, SFTP without SSH is
not an option for Windows.

8. Ensure that the PATH environment variable includes the Oracle library directories.

9. Ensure that the PERL5LIB environment variable points to the correct directories.

10. Define the base directory for the OPTIMA installation (for example, C:/Optima).

11. Ensure that the OPTIMA user has access / permissions to read Oracle libraries.

12. Using the OPTIMA user, define the following environment variables:
o OPTDIR (The OPTIMA base directory)
o HOSTNAME (server name)
o ORACLE_HOME
o LD_LIBRARY_PATH

For more information on environment variables, see Setting Up Environment Variables on a
Windows OS on page 35.

13. Verify that the optima user has a TNS connection to the database (sqlplus aircom/<password>@<tnsname>).

14. Using the OPTIMA user, create the following central location directories under the OPTDIR
directory that you have specified:
o bin (for binaries/executables)
o AILIB (for OS specific libraries)
o tmp
o log (for log files produced by each process)
o prids (for process monitor PID files)
o run (for shell or batch scripts)
o maintenance (for common OPTIMA processes, such as MNT, MON and so on)
o extdir (for loader external files)

At a later time you will need to set up and schedule the Alarm Service (opx_ALM_GEN_817) and
Alarm SNMP Agent (opx_ALM_GEN_820).


Preparing Unix Mediation Servers


To prepare your Unix Mediation server, perform the following tasks:

1. Using the root or administrator user, create an oracle user; this user is for the Oracle client.

2. Using the oracle user, install the latest supported Oracle client.

3. Configure the TNSNAMES.ora to point to the OPTIMA instance.

4. Create an OPTIMA user with permissions to:


o Read oracle libraries
o Edit crontab
o Read and Write access to the OPTIMA mountpoint

5. Using the OPTIMA user, create the library directory (AILIB) under the defined OPTDIR
directory.

6. Copy and install the relevant OS-specific library files from backend distribution installation
to the AILIB directory with OPTIMA user.

Note: You may need to install different library file sets for backend programs and parsers.

7. Using the OPTIMA user, define the following environment variables:

o OPTDIR (the OPTIMA base directory)
o HOSTNAME (the server name)
o ORACLE_HOME
o LD_LIBRARY_PATH (to include the Oracle libraries and all relevant OS-specific libraries, set in .profile)

For more information on environment variables, see Setting Up Environment Variables on a
UNIX OS on page 37.

8. With the OPTIMA user, copy the programs from the appropriate OS Mediation directory of
the Backend Program Files directory to the OPTDIR/bin directory, and ensure that they are
executable (run with the -v option).

9. Define the base directory for the OPTIMA installation (for example, /opt/optima).

Note: This may require a specific mountpoint to be defined.

10. Ensure that the OPTIMA user has access / permissions to read Oracle libraries.

11. Using the root user, install Perl (to check whether Perl is installed on a UNIX platform, run
'perl -v').

Note: Perl is required for FTP and Combiner processing.

12. (Optional). If you want to use the FTP in secured mode (SFTP), install the Net::SFTP
CPAN module. You can only use this mode on UNIX.

13. (Optional) If you want to use the FTP in secured mode (SFTP) with SSH Key Authentication
(which is the strongly recommended SFTP option), install the Net::SFTP CPAN module and
the IO::Pty and Expect modules.


14. (Optional) If you also want to use SFTP compression, then you must additionally install the
following CPAN Perl modules:
o Compress::Zlib
o Compress::Raw::Bzip2
o Compress::Raw::Zlib
o Crypt::Blowfish
o Crypt::CBC
o Crypt::DES
o Crypt::DH
o Crypt::DSA
o Crypt::Primes
o Crypt::RSA
o Crypt::Random
o Digest::BubbleBabble
o Digest::HMAC
o Digest::SHA
o IO::Compress
o IO::Zlib
o Net::SSH
o Net::SSH::Perl

15. Verify that the optima user has a TNS connection to the database (sqlplus
aircom/<password>@<tnsname>).

16. Using the OPTIMA user, create the following central location directories under the OPTDIR
directory that you have specified:
o bin (for binaries/executables)
o AILIB (for OS specific libraries)
o tmp
o log (for log files produced by each process)
o prids (for process monitor PID files)
o run (for shell or batch scripts)
o maintenance (for common OPTIMA processes, such as MNT, MON and so on)
o extdir (for loader external files)
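
Assuming that OPTDIR has already been exported, the central location directories can be created in a single step, for example:

cd $OPTDIR
mkdir -p bin AILIB tmp log prids run maintenance extdir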

17. At a later time you will need to set up and schedule the Alarm Service
(opx_ALM_GEN_817) and Alarm SNMP Agent (opx_ALM_GEN_820).


Upgrading OPTIMA Backend


To upgrade OPTIMA backend at a major release (not a patch release), carry out the following
actions.

1. Download the latest backend build and patches.

2. To prepare for mediation:


o Copy the relevant binaries to mediation from the backend installation program directory
o For UNIX operating systems, copy the relevant OS version of AILIB directory to the
directory defined by OPTDIR from the backend installation program directory
o Make InstallLibs.sh executable (chmod 755) on the mediation server
o Run InstallLibs.sh to set up the libraries
o Add the $OPTDIR/AILIB to LD_LIBRARY_PATH environment variable

3. Replace all of the application server executables with new version executables from the
backend distribution package (for example, Alarms, Alarm Notifier, Alarm SNMP Agent and
Report Scheduler).

4. Confirm that the required parsers are compatible by running them with -v initially and then
running them with data files.

5. Test and verify that all of the processes are running, executing the OPTIMA upgrade
acceptance tests.

6. Monitor the installation continually until you consider it to be stable.

Patching OPTIMA Backend


To patch the OPTIMA backend:

1. Download the latest OPTIMA Backend distribution package.

2. Run the setup.exe for the latest backend patch.

3. Click Next.

4. Read the License Agreement, and then select 'I accept the terms in the license agreement'.

5. Click Next.

6. Select the Complete option to install all of the required scripts and templates, and then click
Next.

7. Click Install.

8. When the installation is complete, click Finish.

9. Copy relevant new binaries from backend distribution to the mediation and application
server directories.

10. Copy relevant new libraries from backend distribution to the mediation server library
directory.

11. For UNIX operating systems, copy the relevant OS version of AILIB directory to the
directory defined by OPTDIR from the backend installation program directory

- Make InstallLibs.sh executable (chmod 755) on the mediation server

- Run InstallLibs.sh to set up the libraries


- Add the $OPTDIR/AILIB to LD_LIBRARY_PATH environment variable

12. Replace all of the application server executables with new version executables from the
backend distribution package (for example, Alarms, Alarm Notifier, Alarm SNMP Agent and
Report Scheduler).

System Components
The following list summarizes all the components provided as part of the file data loading
architecture, with the program type and executable name for each. For detailed configuration
options for each component, see the following chapters.

• FTP (External; opx_FTP_GEN_302): Transfers files from an external server to the OPTIMA backend server. Can transfer only new files, using the existing listfile algorithm.
• Parser (External; opx_PAR..., vendor-specific): Converts proprietary file formats to a common CSV format.
• File Combiner (External; opx_CMB_GEN_903 for multiple input, opx_CMB_GEN_900 for single input): Merges the CSV files output by certain parsers into new combined CSV files.
• SNMP Agent (External; opx_ALM_GEN_820): SNMP interface to the OPTIMA database.
• SNMP Poller (External; opx_DAP_GEN_301, plus the SNMP Poller GUI for configuration): Collects information from SNMP Agents.
• SNMP Mediation Agent (External; opx_DAP_GEN_309): Manages the communication between the SNMP Discoverer, SNMP Assigner, SNMP Poller, and the OPTIMA database.
• SNMP Discoverer (External; opx_DAP_GEN_311): Identifies SNMP devices and reports them to the OPTIMA database.
• SNMP Assigner (External; opx_DAP_GEN_312): Assigns discovered devices to poller instances.
• ETL Loader (External and Database; opx_LOD_GEN_110, plus opx_LOD_GEN_110_GUI for configuration): Loads CSV files into an OPTIMA table via an external table.
• Direct Path Loader (External and Database; opx_LOD_GEN_112): Loads files from the input directory directly into a Global Temporary Table.
• OPTIMA Summary (Database; AIRCOM_OPTIMA_Console): Summarizes data within the OPTIMA database.
• Data Quality (Database; opx_DAQ_WIN_420): Reports on the quality of data, for example data that is incomplete or missing.
• Directory Maintenance (External; opx_MNT_GEN_610): Maintains a maximum number of most recent files in a specified set of directories.
• Process Monitor (External; opx_MON_GEN_510): Removes hung or crashed processes.
• Log File Analyzer (Opxlog Utility) (External; opx_SCR_GEN_813): Utility for transferring the latest log information to an external Oracle table.
• Data Validation and Rules Engine (External; opx_DVL_GEN_411): Splits CSV files into subfiles containing a subset of columns. Can also be used to re-order or check the columns in the CSV file.
• Data Acquisition (External; opx_DAP_GEN_333): Transfers the data in one database to another database directly. Queries the database containing the data and stores the result in a CSV file.
• Report Scheduler (External; OptimaReportSchedulerConfig for configuration, OptimaReportSchedulerGUI for scheduling): Configures the OPTIMA Scheduling System.
• Alarms Notifier (External; AlarmNotifier): Polls the database for recently raised alarms and sends alarm notifications via email or SMS.
• Password Encrypter (External; opxcrypt): Encrypts the passwords in existing/legacy *.ini files for a number of applications, including those that are created manually.
• File Splitter (External; opx_SPL_GEN_414): Splits a single file that contains a variety of different objects into a number of files containing similar objects. This is done prior to parsing, and makes the parsing of the data easier.

Note: Database programs are run in Oracle. External programs are run external to the Oracle
database, in UNIX on the server. The executable name may be suffixed with additional identifiers
for a particular installation; for example, a Nortel parser may be renamed opx_PAR_NOR_711.


About PRIDs
All OPTIMA components are given a unique identifier, known as a PRID, which is used to identify
all processes involved with the backend. The PRID is used extensively in configuration, error
logging and process monitoring.

The PRID is a nine-character element that looks like this:

000aaabbb

It is made up of three subfields, as described in this list:

• 000 (Interface ID): Unique identifier for the interface (Nokia GSM, Ericsson UMTS, Siemens LTE and so on). Values 000-999, allocated at design.
• aaa (Program ID): Unique identifier for a type of backend program. Values 000-ZZZ, allocated at design.
• bbb (Instance ID): Unique identifier for an instance of a type of program on a machine. For example, if there are a number of loader programs running (each for a different type of CSV file), then each will be allocated a unique Instance ID. Values 000-ZZZ, allocated on installation.

For example, 000110001 is the first instance of a program of type 110 (a loader) running on
interface 001.

Tip: The Interface ID is entirely numeric, but the Program ID and Instance ID are alphanumeric
(using uppercase characters). This enables them to support a larger number of programs and
interfaces.

The Program ID is set when the program is originally created, but the Instance ID is calculated by
the package 'AIRCOM.OPTIMA_PACKAGE', which generates the Instance IDs in the following
order: 000, 001, …, 009, 00A, 00B, …, 00Z, 010, 011, …, 019, 01A, …, 01Z, …, 09Z, 0A0, …,
0ZZ, 100, …, ZZZ.
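
This ordering is simply base-36 counting on the characters 0-9 and A-Z. As a sketch of the mapping (this is not the actual AIRCOM.OPTIMA_PACKAGE code), a function converting a sequence number to a 3-character Instance ID could look like this:

create or replace function instance_id(p_seq in number) return varchar2 is
  -- each character counts 0-9 then A-Z, so the ID is a base-36 number
  v_digits constant varchar2(36) := '0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ';
  v_n number := p_seq;
  v_id varchar2(3) := '';
begin
  for i in 1 .. 3 loop
    v_id := substr(v_digits, mod(v_n, 36) + 1, 1) || v_id;
    v_n := trunc(v_n / 36);
  end loop;
  return v_id;  -- instance_id(10) = '00A', instance_id(36) = '010'
end;
/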

If you are upgrading from a version older than 6.2, any existing Instance IDs will not be updated but
any new Instance IDs will be calculated by using the next available alphanumeric ID.

Important: It is critical for the correct operation of the backend that all processes have a unique
PRID allocated.

The PRID is set by one of the following methods:


• Assigned by the OPTIMA Installation Tool when adding an interface
• Configured manually in an INI file
• Assigned by a backend GUI when adding a report (Loader/Summary)

For external programs, the PRID will be read from the associated configuration file.

The database contains a table (INSTANCES) that provides a master list of PRIDs for all installed
elements. If any new components are configured, record the PRID in this table.
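
To check which PRIDs are already allocated, you can query this table directly; for example, assuming it resides in the AIRCOM schema like the other metadata tables:

Select * from aircom.instances;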


Note: This table is automatically populated when adding interfaces using the OPTIMA Installation
Tool or adding a report in the Loader/Summary GUI.

About File Locations and Naming


The exact file structure for a given installation is provided in the OPTIMA Implementation Plan.
However, the following table summarizes the default file locations and naming for the data loading
architecture:

• Executables and process scripts: opx_<exename>, in $OPTDIR/bin. All binaries and scripts start with opx for easy identification.
• Scheduling scripts: opx<scriptname>.sh, in $OPTIMA/run. Wrapper scripts for all backend processes, which can be used to start the system.
• Configuration: <exename>_<PRID>.ini, in $OPTIMA/<interface name>/<vendor>/<type>/ftp, $OPTIMA/.../<type>/parsers, $OPTIMA/.../<type>/validate and $OPTIMA/.../<type>/loaders. Each backend program has a configuration file.
• Logs: <hostname>_<exename>_<PRID>_<date>.log, in $OPTIMA/log. A log file is generated for each backend process. By default this happens on a daily basis, but it can be set to another time period; for more information, see Configuring the Process Monitor on page 260.
• Monitor: <hostname>_<exename>_<PRID>.pid, in $OPTIMA/prids. Each backend process generates a PID file, which is monitored by the Process Monitor.
• Backup: same name as the input file, in $OPTIMA/<interface name>/<vendor>/<type>/ftp/backup, $OPTIMA/.../<type>/parsers/backup, $OPTIMA/.../<type>/validate/backup and $OPTIMA/.../<type>/loaders/backup. All input files into the FTP, parser, validate and loader processes can optionally be backed up in the backup directory.
• Error: same name as the input file, in $OPTIMA/<interface name>/<vendor>/<type>/ftp/error, $OPTIMA/.../<type>/parsers/error, $OPTIMA/.../<type>/validate/error and $OPTIMA/.../<type>/loaders/error. Input files that fail to be processed are stored in the error directory.
• Input: input file name format unique to the particular interface, in $OPTIMA/.../<type>/parsers/in, $OPTIMA/.../<type>/validate/in and $OPTIMA/.../<type>/loaders/in. All input files into the parser, validate and loader processes are placed in the input directory.
• Output: program-specific, in $OPTIMA/.../<type>/parsers/out and $OPTIMA/.../<type>/validate/out. If multiple files are output, these should be placed in the output sub-directories.
• Temporary: <hostname>_<exename>_<PRIDn>.tmp, in $OPTIMA/<interface name>/<vendor>/<type>/ftp/tmp, $OPTIMA/.../<type>/parsers/tmp, $OPTIMA/.../<type>/validate/tmp and $OPTIMA/.../<type>/loaders/tmp. Lock files and intermediate files stored locally are placed in the temporary directory.

Note: As the backend uses chained processes, the directories specified in the list above need
not be physical directories but can be symbolic links to the previous or next directory in the data flow chain.

Scheduling Programs
All OPTIMA external programs are run in scheduled mode using the Unix scheduler, crontab. For
example, a parser may be scheduled to run on a periodic basis of every five minutes, in which
case, every five minutes the parser will:
• Be started by crontab
• Process any input files that are available at that instant
• Exit

If a program instance does not complete before the next time it is scheduled, then multiple
instances of that program will occur. This is avoided by the use of a monitor file.

Before a program starts an instance, it checks if an associated monitor file exists. If one does exist,
then this indicates that an instance is already running and so the program immediately exits. If a
monitor file does not exist, the program starts and creates a monitor file. This file is uniquely
associated to the program instance using the PRID and the hostname environment variable in a
common directory. When the program has run, it removes the monitor file.

The Process Monitor ensures that monitor files are removed if programs crash or hang.

Multiple programs may be scheduled from a single cron entry by using a batch file. The programs
may be scheduled to run sequentially or concurrently; the latter is achieved by running the program
in background mode (&) in the batch file, as in the sketch below.
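
For illustration, a single crontab entry can invoke a wrapper script every five minutes; the paths, script name and PRIDs below are hypothetical:

*/5 * * * * /opt/optima/run/opx_parse_nor.sh

#!/bin/sh
# /opt/optima/run/opx_parse_nor.sh - illustrative wrapper script
. $HOME/.profile                                            # pick up OPTDIR, ORACLE_HOME, HOSTNAME
$OPTDIR/bin/opx_PAR_NOR_711 opx_PAR_NOR_711_001711001.ini   # runs first (sequential)
$OPTDIR/bin/opx_DVL_GEN_411 opx_DVL_GEN_411_001411001.ini & # runs concurrently (background mode)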


The Monitoring Process


All external backend processes are monitored to ensure that they have not crashed, run away or
hung. The Process Monitor uses the monitor files to check the health of programs.

Monitor files:
• Are created by all programs when each program starts running.
• Uniquely identify the OPTIMA program instance via the PRID contained in its filename, and
the hostname environment variable (that identifies on which machine it is running). The
program will also write the process identifier (PID) in the file.
• Provide a heartbeat function, which is created when the backend program regularly
updates the timestamp of the monitor file using a touch function. For example, a parser will
touch the file after parsing each file in the input directory.

The Process Monitor regularly scans all monitor files in the monitor directory to check that:
• The PID in each file is still in the current OS process list. If it is not in the list, the associated
program has crashed, and so the Process Monitor removes the monitor file.
• The timestamp of the monitor file is not too old according to the user-specified grace
period. If the grace period has expired, the associated process is stopped. For example, a
parser may have a three-hour grace period: if the parser monitor file has not been touched
in the last three hours, the process is stopped and the monitor file is removed.

Note: As all processes are scheduled, the parser will start again at the next schedule period.

For more information, see About the Process Monitor on page 257.
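
As an illustration of these two checks, the logic amounts to the following shell sketch (this is not the actual Process Monitor implementation, which is the opx_MON_GEN_510 executable):

# For each monitor file: remove it if the PID is dead, or stop the
# process if the heartbeat is older than a 3-hour grace period.
for f in $OPTDIR/prids/*.pid; do
  pid=$(cat "$f")
  if ! kill -0 "$pid" 2>/dev/null; then
    rm -f "$f"                            # program crashed; clean up
  elif [ -n "$(find "$f" -mmin +180)" ]; then
    kill "$pid"; rm -f "$f"               # hung program; stop and clean up
  fi
done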

Configuring Programs
All external OPTIMA programs read their configuration settings on program startup from a local
configuration (INI) file. Database programs read their configuration from configuration tables in the
database such as the OPTIMA_Common table.

For external programs, the configuration file is named using this convention:
ProgramName_PRID.ini.

The configuration file is specified on the command line of the program being run, usually in the
crontab entry or batch script; for example, opx_PAR_NOR_711_001711001.ini.

Configuration changes are made by editing the parameters in the configuration (INI) file with a
suitable text editor.


About Versioning
All backend programs have a unique program ID and a descriptive code and version. The
descriptive code and version should be used when reporting bugs and problems to TEOCO
support. This is an example of the format:

ETL Loader program code LOD-GEN-010-GEN-110, release Version 8.0

You can either obtain the version details of the currently installed program from the log file or you
can print the information by typing the following command at the command prompt (in either
Windows or UNIX):

programname -v

About Log Files


All programs write messages to a log file or table (for database programs) in a standard format,
which is described in the following list:

• Host Name: The name of the machine on which the program/database resides. Format: Text (256 characters).
• PRID: The automatically-assigned PRID uniquely identifies each instance of the application. It is composed of a 9-character identifier, made up of Interface ID, Program ID and Instance ID. The Interface ID is made up of 3 numbers, but the Program ID and Instance ID are both made up of 3 characters, which can be a combination of numbers and uppercase letters. For more information, see About PRIDs on page 29. Format: nnncccccc.
• Date: Date of the message. Format: YYYY-MM-DD.
• Time: Time of the message. Format: HH:MM:SS.
• Severity: Severity classification. Allocated at design time.
• Message Type: Unique identifier within the program for this particular type of message. Format: Integer, allocated at design time.
• Message Text: Explanation of the message. Format: Text (256 characters).
• Patch Number: The patch release number of the version of the software that produced the error. Format: Pn.n.
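
For illustration, a log entry in this format (with hypothetical values) might read:

server1 001110001 2014-09-22 14:05:32 2 100 File loaded successfully P8.0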

The severity levels are defined as follows:

• Debug (1): Debugging message. Only generated for tracing within a program.
• Information (2): For status messages, for example 'parsed file successfully'.
• Warning (3): For an error condition where there is the potential for a service-affecting fault.
• Minor (4): For an error condition where there is the potential for a service-affecting fault and corrective action is required.
• Major (5): For an error condition where there is a service-affecting fault and urgent corrective action is required.
• Critical (6): For an error condition where there is a major service-affecting fault and immediate corrective action is required.

You can assign the severity level that is logged for each program individually. For example, if a
particular parser is assigned Severity Level 3 (Warning), only messages of severity Warning or
above are logged.

For external programs, messages are recorded in a separate log file for each program instance.
The files have the following characteristics:
• All log files are stored in a common directory.
• A new log file is created. By default, this happens on a daily basis but can be set to run on
another time period. For more information, see Configuring the Process Monitor on page
260.
• The filename of the log file identifies the program, for example,
<hostname>_<exename>_PRID_<date>.log.

Log files may be archived using the Directory Maintenance program. For example, using this
program it will be possible to maintain a directory with only today’s log files.

A log file utility is provided to allow the quick analysis of log messages and also filter messages for
loading into the database.

A log file with a title including "WrongQuery" is generated when an error occurs during the
execution of an SQL script; it captures the text of the last failing script.
Notes:
• Log granularity is normally set to one day so that log entries are added to a file for the day,
then at midnight a new file is created for the next day’s log messages.
• The BAFC log files are first put through a log parser which converts tabs to commas, and
log severities from text, such as "critical", to a number, such as 6.
• The log files are then loaded into the LOGS.COMMON_LOGS table of the database by
configuring a standard 110 loader; this configuration can also be created by the OIT.
• The log parser is a perl application called opx_SCR_GEN_813 which is available under the
mediation folder in the backend installer.

About Environment Variables


A number of the OPTIMA backend programs require certain environment variables (for example,
hostname) to be set to specific values.

You can set these up in two ways:


• For the session that you are running
• For the session that you are running and all future sessions

The method is also slightly different depending on whether you are on a Windows OS or a UNIX
OS.


Setting Up Environment Variables on a Windows OS


To set up an environment variable on a Windows OS just for the session that you are
running:

1. Open up the cmd prompt.

2. Run a 'SET' command.

A list of all of the environment variables available to your system appears:

3. To set the value for one of the environment variables, use the command:

set <ENVIRONMENT VARIABLE>=<VALUE>

Where

<ENVIRONMENT VARIABLE> is the name of the environment variable

<VALUE> is the value to which you want to set the environment variable

An example might be

set HOSTNAME=UKDC247DT

Which would set the HOSTNAME environment variable to be UKDC247DT

Tip: It is recommended that HOSTNAME is set to the same value as COMPUTERNAME; the
value of this environment variable should be listed when you run the 'SET' command.

To set up an environment variable on a Windows OS for the session that you are running
and all future sessions:

1. From the Windows Start menu, click Control Panel.

2. In the Control Panel dialog box, double-click System.

3. In the System Properties dialog box, select the Advanced tab.

4. Click the Environment Variables button.


The Environment Variables dialog box appears:

5. In the User variables pane, click New.

6. Type the name and value of the new environment variable, and then click OK:


The new environment variable is added to the list:

7. Click OK.

8. Click OK.

The environment variable has been set up permanently.
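
Tip: On recent Windows versions, you can also persist a user variable from the command prompt using the setx command (a minimal sketch; note that setx only affects new sessions, not the one you are currently running):

setx HOSTNAME UKDC247DT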

Setting Up Environment Variables on a UNIX OS

Note: Before setting up an environment variable, you should check to see if it already exists - for
example, the HOSTNAME is usually set up during the OS installation, run:

echo $HOSTNAME

If no value is returned, then the environment variable can be set up.

To set up an environment variable on a UNIX OS just for the session that you are running:

1. Run:

export <ENVIRONMENT VARIABLE>=<VALUE>

Where

<ENVIRONMENT VARIABLE> is the name of the environment variable

<VALUE> is the value to which you want to set the environment variable

An example might be

export HOSTNAME=server1

Which would set the HOSTNAME environment variable to server1.

2. To check that the environment variable has been created, run:

echo $HOSTNAME


To set up an environment variable on a UNIX OS for the session that you are running and all
future sessions:

1. Open the '.profile' and/or '.bash_profile' file, which can be found in the OPTIMA user's
home directory, and add the following two lines:

<ENVIRONMENT VARIABLE>=<VALUE>

export <ENVIRONMENT VARIABLE>

Where

<ENVIRONMENT VARIABLE> is the name of the environment variable

<VALUE> is the value to which you want to set the environment variable

An example might be

HOSTNAME=server1

export HOSTNAME

Which would set the HOSTNAME environment variable to server1.

2. Save and close the file(s).

3. Log off and then log in again to allow the new environment variable to take effect.

4. To check that the environment variable has been created, run:

echo $HOSTNAME

Encrypting Passwords for OPTIMA Backend Applications


Password security (encryption) is enabled for all programs that have the password stored in the
*.ini file.

For existing/legacy *.ini files, or those that are created manually, you can use the command line
executable 'opxcrypt' to encrypt the passwords. For more information on the components for which
the *.ini file is created manually, see Which OPTIMA Backend Applications are Affected by
Password Security? on page 39.

To use opxcrypt:

1. Click Start on the taskbar, and then click Run.

2. In the Run dialog box, type cmd.

3. In the Command prompt that appears, type:

opxcrypt.exe [[-f File] or [-d Path]] [-r Recursive] [-t Tag] or [-v]

Where:
o -f File is a single, defined file containing the password to be encrypted. The
filename defined can be a wildcard (for Windows) or a regular expression (for UNIX) -
for example, *.ini.
o -d Path is the directory containing the *.ini files containing the passwords to be
encrypted. If the directory is not defined, then the current directory will be used by
default.

-d and -f are mutually exclusive.


o -r Recursive is an optional tag, indicating that opxcrypt should also look for *.ini
files in all sub-directories of the defined directory.
o -t Tag is an optional parameter for the password. If this is not used, the default is
"Password" - however, this will not find passwords where 'Password' is a substring, for
example, SNMPPassword.
o -v is the print version string. This overrides all other command line parameters.

The password in the selected *.ini file is encrypted. It is placed in parentheses (brackets)
and also prefixed and suffixed by 'ENC' in the INI file - for example, password =
ENC(uvwxyz)ENC.

Note: The syntax supports comma-separated values, for cases where there are multiple IP
addresses/passwords. During encryption, the comma is only ever used as a separator - it is
excluded from the character set available for encoding purposes in order to avoid erroneously
splitting whole passwords.
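
For example, the following hypothetical invocations first encrypt every 'Password' entry in all INI files under the current directory and its sub-directories, and then the 'remotePass' entry in a specific FTP INI file (the file name is illustrative):

opxcrypt.exe -r
opxcrypt.exe -f opx_FTP_GEN_302.ini -t remotePass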

Important: For the LDAP single sign-on to work, the user AI_PROXY is required. The password
must never be changed by an engineer except as part of a patch upgrade.

Which OPTIMA Backend Applications are Affected by Password


Security?
This table describes the OPTIMA backend applications that are affected by password security:

Application Name    Encrypted Element in        Manual Encryption   Encryption dll Needed            Configuration UI Name
                    the INI File                Required?           in App Directory?

Alarms Notifier     MailPassword, Password,     No                  Yes                              Alarms Notifier
                    SMSC_Password
Alarms Processor    Password                    Yes                 No                               N/A
FTP                 remotePass                  Yes                 No                               N/A
Loader              Password                    No                  No                               Loader GUI, OIT
Report Scheduler    Password,                   No                  Yes                              Report configuration tool
                    SMTPUserPassword
SNMP Poller         CommunityRead               No                  No                               SNMP Poller configuration tool
OPTIMA              Password                    No                  Yes - Common directory           OPTIMA frontend
WEBWIZARD           Password                    No                  Yes - <WEBWIZARD>/bin directory  WEBWIZARD

The 'Manual Encryption Required' column indicates whether the user needs to use opxcrypt to
encrypt the appropriate element, or whether a configuration UI exists that will do this automatically.
If one does exist, this is shown in the 'Configuration UI Name' column.

For more information on using opxcrypt, see Encrypting Passwords for OPTIMA Backend
Applications on page 38.


The 'Encryption dll Needed in App Directory' column indicates whether or not the encryption .dll
(crypter.dll) must be present in the same directory as the application.

Important: OPTIMA performs internal decryption at the latest point possible prior to connection, in
order to maximise security and ensure that the decrypted password is not available for any longer
than it needs to be.

Note: The OIT encrypts the Oracle connection passwords for OSS_DICTIONARY and AIRCOM in the
project file. If you update the two passwords in the Database Connection section of the Project
Parameters form, they will be encrypted automatically when Loader ini files are created.

Starting and Stopping the Data Loading Process


To start the backend data loading process on a workstation, manually run this command:

$OPTDIR/bin/opxstart.sh opxstartsystem.sh

This table describes the function of each part of the command:

Command Function

opxstart.sh Ensures the backend process is run with the correct environment.
$OPTDIR Sets the root location of the directory tree, that is the location under which
(Environment Variable) ./bin, ./etc, ./parsers, ./loaders, ./validate, and so on can be found.
opxstartsystem.sh Places all of the backend job entries into the cron configuration. Each
process should then start at their next scheduled time.
The cron entries are stored in $OPTDIR/etc/optima.crontab.
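
For illustration, a hypothetical entry in $OPTDIR/etc/optima.crontab might look like the following (the program name, schedule and output redirection are examples only, not the shipped configuration):

# collect files by FTP every 15 minutes
0,15,30,45 * * * * /optima/bin/opxstart.sh opx_FTP_GEN_302.sh >/dev/null 2>&1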

To stop the backend process, manually run this command:

$OPTDIR/bin/opxstart.sh opxstopsystem.sh

The opxstopsystem.sh command removes the crontab configuration and stops all backend
processes based on a pattern match for filenames beginning with the string opx.

Note: During initial configuration, the environment set up by opxstart.sh should be
automatically configured for the OPTIMA user in the profile.


Checking Log Files

External Programs
Status messages for all external programs are located in a common directory. A new log file is
created every day. You can choose the information level required in the log file by selecting a
severity level. The available levels are: Debug, Information, Warning, Minor, Major and Critical. This
restricts low level severity logging in required cases.

The Log file Utility combines log messages from all logs into a common file for analysis or loading
into the database.

Log messages for external programs can be viewed by a number of different methods. The method
used will depend on the specific issue that is being investigated.

Monitor a Particular Log File

In this case the specific log file associated with a program is identified and all messages displayed
to a terminal using the UNIX tail command. For example the following command will cause the
terminal screen to update with new messages as they are appended to the bottom of the log file:

tail –f <logfile name>

This is useful when monitoring, in real time, the operation of a particular program, for example a
loader.

Using the Log File Analyzer (opxlog Utility)

Use the Log File Analyzer (opxlog utility) to search all external program logs and retrieve
particular messages for specific programs or time periods. This is useful when diagnosing
programs across all external programs or searching for historical messages.

Using the Database Log Tables

During initial installation the system will be configured to load specific log messages from external
programs into the database. In general, messages with a severity of "Warning" and above are
loaded every hour into the following table:

LOGS.COMMON_LOGS

Use the Data Explorer, standard reports and modules to display these messages.
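
If you need a quick ad-hoc check outside those tools, a sketch from the shell might look like this (the severity column name and the credentials are assumptions - verify them against your schema before use):

sqlplus -s optima_user/password@OPTIMADB <<EOF
SELECT COUNT(*) FROM LOGS.COMMON_LOGS WHERE severity >= 3;
EOF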

Database Programs
All database programs log messages to Oracle tables. These are detailed for each program in the
following chapters.

Use the Data Explorer, standard reports and modules to display these messages.


Maintenance
In usual operation, the data loading programs should not need any special maintenance.

However, it is recommended that the following basic maintenance checks are carried out for the
OPTIMA backend:

Check                                           Frequency   Reason

Log tables for any messages of Warning level    Daily       This will identify any major problems that have
or above for the previous day.                              occurred with the external programs.
Note: An administration report will be
provided for this. For more information,
see the implementation plan.

For broken Oracle jobs.                         Daily       This will indicate potential problems in the
                                                            Oracle processes.

All application input directories for a         Weekly      Files older than the scheduling interval should
backlog of files.                                           not be in the input directories. If there is a
                                                            backlog, this indicates a problem with a
                                                            particular program.

The error directories for files.                Weekly      In normal operation, files should not be located
                                                            in error directories. Check any files in these
                                                            directories.

System resources on the server and loading      Weekly      If resources, for example disk space, are
workstations.                                               limited, then backup files or data may need to
Note: An administration report will be                      be archived.
provided for this. For more information,
see the implementation plan.

See the relevant chapters for application-specific maintenance tasks.
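
For the daily broken Oracle jobs check, a minimal sketch from the shell might look like this (it assumes the classic DBMS_JOB scheduler and SYSDBA access; adapt it if DBMS_SCHEDULER is used on your system):

sqlplus -s "/ as sysdba" <<EOF
SELECT job, what FROM dba_jobs WHERE broken = 'Y';
EOF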


Troubleshooting
Troubleshooting information is also provided for each application in the following chapters of this
guide.

Symptom: Data not loaded for an interface, or periods of missing data.

Possible causes:
• Raw data has not been received from the network.
• Invalid or corrupt data has been received from the network.
• An application in the data loading process has failed.
• A backend application has not been scheduled.

Solution:
Check error logs for any warning messages for this period. Investigate any messages of
Warning or above severity.
Check Process Monitor to ensure that all processes are scheduled for that interface.
For a file-based interface:
• Check all external directories for file backlogs or error files.
• If files are in an error directory for a particular application, move these into the input
directories and see if they are processed. If necessary, the message severity level of the
application can be lowered to output more debugging information. Investigate any errors
output.
• Check the FTP backup directories to see if files have been received for this time period.
If files have not been received for this time period, investigate whether files have been
produced from the network.
For a database-based interface or summary data:
• Check that data exists in the source database or raw tables for this period.
• Check that the relevant summary report has run.

Symptom: No data is being loaded.

Possible causes:
• The backend has been stopped.
• Database connection problem.
• Network problems.

Solution:
Check cron entries for the OPTIMA user.
Check for any broken Oracle jobs.
Check Process Monitor to ensure that all processes for that interface are scheduled and are
running.
Check for files in the input directories of the FTP application. If these exist, then check the
input directories of the following applications to find at what point the process is failing:
• FTP
• Parser
• Data Validation
• Loader

Symptom: An application cannot be started or scheduled.

Possible cause:
• A monitor file exists that has not been removed by the Process Monitor.

Solution:
Remove the monitor file.
Check the Process Monitor settings.


About the Log File Analyzer (Opxlog Utility)


The Log File Analyzer is provided with the OPTIMA backend to combine, filter or search external
program log files. You can use the utility to perform individual searches for specific problem
analysis or schedule it to run on a regular basis to create concatenated filtered files of log
messages for loading into the database. The output will show all matched log messages.

You run the Log File Analyzer from the command prompt. You specify various options in a separate
INI file. For more information about these options, see Configuring the Log File Analyzer on page
45.

Important: Before running the Log File Analyzer, you should ensure that you have set up all of the
prerequisites. For more information, see Prerequisites for Using the Log File Analyzer on page 44.

To start the Log File Analyzer, type the script name and a configuration file name into the command
prompt. If you are creating a new configuration file, this is when you choose the file name.

In Windows type:

opx_SCR_GEN_813.exe opx_SCR_GEN_813.ini

In Unix type:

opx_SCR_GEN_813 opx_SCR_GEN_813.ini

For more information about the configuration (INI) file, see Example Log File Analyzer
Configuration (INI) File on page 47.

Prerequisites for Using the Log File Analyzer


Before you try to run the Log File Analyzer, you should ensure that:
• Active PERL (v5.8.x) is fully installed.
• The PATH and PERL5LIB environment variables are set correctly.

Checking the PERL Version

To check which version of PERL is installed:

1. Type perl -v in the command prompt.

2. If PERL is not installed, install 5.8.9 build 825 - for Windows, use the 'ActivePerl-5.8.9.825-
MSWin32-x86-288577.msi' file available on the TEOCO Intranet. Ensure that you leave
everything as default.

Checking the Environment Variables

For Windows, the PATH and PERL5LIB environment variables should look similar to the following:
• PATH=C:\Perl\site\bin;C:\Perl\bin;...oracle_path ...
• PERL5LIB=C:\Perl\lib;C:\Perl\site\lib

For more information on setting the environment variables, please see About Environment
Variables on page 34.


Configuring the Log File Analyzer


The Log File Analyzer is configured using a configuration (INI) file. Configuration changes are made
by editing the parameters in the configuration (INI) file with a suitable text editor. The Log File
Analyzer configuration (INI) file is divided into different sections.

The following table describes the parameters in the [DIR] section:

Parameter Description

source The directory containing the source (log) files.


outdir The output directory (use with option 'newf').
mondir The directory used for storing PID-files (use with option 'myprid').
tempdir The directory used for temporary files (use with option 'newf').
logdir The directory used for saving the log file generated for the Log File Analyzer.

The following table describes the parameters in the [OPTIONS] section:

Parameter Description

fileMask By default, the Log File Analyzer will analyze any file with the extension .log.
However, you can use this parameter to filter on log file names containing a
particular string.
For example, if you use the fileMask '.FTP.*' then the application will return
opx_FTP_GEN_302_111302111_20100108.log.
log_severity Sets the level of information required in the log file. The available options are:
1 - Debug
2 - Information (Default)
3 - Warning
4 - Minor
5 - Major
6 - Critical
StandAlone 0 – Run the application without a monitor file. Do not select this option if the
application is scheduled or the OPTIMA Process Monitor is used.
1 – Run the application with a monitor file.
myprid Specify the Program ID of the Log File Analyzer (in order to handle multiple
start-ups).
HeartBeatDeferTime Use this option to defer the HeartBeat function if it was run recently.
Specify a number of seconds - the default is 5.

newf Set the base name for the output file (use with -outdir). (Alternative to 'outf'.)
outf Set the output file name for appending. (Alternative to 'newf'.)
historyfile Set history file name. Match input files (and input lines) only with timestamps
greater than the previous one, as read in from the history file (if available). The
file is updated with the latest timestamp at the end. (In this way, one can
generate a sequence of logs that cover the input files without overlap.)


Parameter Description

ds, de, hs, he Use this parameter instead of the historyfile parameter to set the start and end
times in terms of days, hours or minutes. Only one pair of values (ds/de,
hs/he) can be specified.
For example you can define 'hs' and 'he' to match input files/lines only with
timestamps from 'hs' hours ago to 'he' hours ago.
The default is ds=1, de=0.
sev You can choose to match to a minimum severity level. Possible values are:
1 : Match 1 (DEBUG) and above
2 : Match 2 (INFORMATION) and above (the default)
3 : Match 3 (WARNING) and above
4 : Match 4 (MINOR) and above
5 : Match 5 (MAJOR) and above
6 : Match 6 (CRITICAL) and above
rec 1 - (Default) Recurse into sub-directories.
0 - Do not recurse into sub-directories.
prid By default, the Log File Analyzer will analyze any file with the extension .log.
However, you can use this parameter to filter on log file names matching a
specified Program ID.
pridf By default, the Log File Analyzer will analyze any file with the extension .log.
However, you can use this parameter to filter on log file names matching any
of the IDs contained in the specified file.
quote 1 - Quote the log message in each output line (with "").
0 - (Default) Do not quote the log message.
txtsev 0 - Transform the severity strings into their numeric codes (for example
'INFORMATION' becomes '2').
The numeric codes are described in 'log_severity'.
1 - (Default) Do not transform.
header 1 - (Default) Insert a header with column names as the first line of the output
file.
0 - Do not insert header.
mdgno Filter lines by Message Number (in addition to all other rules).
If the message number field of the input line matches the mdgno string
(regular expression), then proceed with normal filtering rules (including by
severity).

Example Uses for the Opxlog Utility


This table gives some example uses for the utility:

Use This Command                              To Do This

opxlog                                        Print to screen all log messages, for today, from files under /var/aircom optima/log.
opxlog -source=/aircom optima/archive/log     Print all log messages under /aircom optima/archive/log.
opxlog -ds=1                                  Print all log messages from midnight yesterday to now.
opxlog -ds=7                                  Print all log messages for the past week.
opxlog -ds=1 -de=1                            Print all log messages from yesterday only (midnight to midnight).
opxlog -ds=1 -de=1 -sev=3                     Print all Warning, Minor, Major and Critical messages from yesterday.
opxlog -hs=1                                  Print all messages from the previous hour to now. Based on whole hours. For
                                              example, if now is 12:30 then it would print all messages from 11:00 to 12:30.
opxlog -hs=1 -he=1                            Print all messages from the previous hour. For example, if now is 12:30 then it
                                              would print all messages from 11:00 to 12:00.
opxlog -hs=1 -he=1 -prid=000010001            Print all messages from the previous hour for PRID 000010001 only.
opxlog -hs=1 -he=1 -outf=my.log               Append all messages from the previous hour to file my.log in the local directory.
opxlog | grep mytext                          Print to screen all log messages for today containing the text mytext. This is
                                              useful for finding messages with a particular error code or string - Unix only.
opxlog | grep mytext > myfile.log             Create file myfile.log containing all log messages that contain the text
                                              mytext - Unix only.
opxlog | sort -k2                             Print to screen all log messages sorted in date/time order - Unix only.

Example Log File Analyzer Configuration (INI) File


This topic shows an example Log File Analyzer configuration (ini) file:

[DIR]
source=/OPTIMA_DIR/<application_name>/in
outdir=/OPTIMA_DIR/<application_name>/out
mondir=/OPTIMA_DIR/<application_name>/pid
tempdir=/OPTIMA_DIR/<application_name>/temp
logdir=/OPTIMA_DIR/<application_name>/log

[OPTIONS]
fileMask=.PAR.*
fileMask=.FTP.*
fileMask=*.*
log_severity=1
StandAlone=0
myprid=001813001
HeartBeatDeferTime=5
newf=commonlog
outf=test\output.log
historyfile=test\history
ds=2
de=1
hs=3
he=0
sev=1
rec=1
prid=12345678
pridf=PRIDfile
quote=0
txtsev=1
header=1
mdgno=500


Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows

Rebooting the Database Server


To reboot the server, you must follow the shutdown and then start-up sequences.

To shutdown the server:

1. Ensure all users are logged out as you do not want them to perform transactions on the
database.

2. Stop the Report Scheduler using the Windows Report Client.

3. Stop the loaders/cron. Using Edit crontab, comment out the FTP process; this will prevent
any further collection of data.

Important: The other applications should finish processing collected files before you
continue shutting-down the server.

Tip: For a faster shutdown process, export the crontab using crontab -l >
$OPTDIR/scripts/crontab.txt, and then delete it using crontab -r.

4. Shutdown all databases, including Citrix and test databases, using the shutdown
immediate command.

This may take some time as the command waits until all transactions are complete before
committing the database to disk and stopping the RDBMS. (A minimal sketch of this step is
shown after this list.)

5. Shutdown the server.
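
As noted in step 4, a minimal sketch of shutting down one database instance from the shell might look like this (it assumes OS authentication as SYSDBA and that ORACLE_SID already points at the instance you want to stop):

sqlplus -s "/ as sysdba" <<EOF
shutdown immediate
exit
EOF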

To start the server:

1. Start the server.

The Oracle Listener and databases (Citrix, Production, Test) will startup automatically.

2. Start the Report Scheduler.

3. Start the loaders/cron.

4. If you backed up and removed the crontab, restore it with the command crontab
$OPTDIR/scripts/crontab.txt.

5. If you commented out the FTP process, make the process active again.

6. Perform checks as described in this table:

Step   Check This

1      Oracle listeners started.
2      All databases started successfully.
3      Citrix is running via the Citrix Management Console.
4      Users can connect and the main functionality works.
5      Scheduled report runs, both email and file produced.
6      OPTIMA Online working.
7      These backend components have a successful new log entry, that is, with no errors:
       • Parser
       • Data Validation
       • Loader
       • Directory Maintenance
       • Process Monitor
8      New data is in the database, since the reboot.

Using OPTIMA Across Different Time Zones


If your network is spread across more than one time zone, the associated time zone difference can
cause discrepancies in any OPTIMA application which handles data - particularly, report scheduling
and summary.

For example, you may be running a daily network summary that covers a network across multiple
time zones. If the last hour of data from the farthest part of the network is 5 hours behind the rest of
the network, there will be a delay of 5 hours on the summary. This in turn will affect the schedule.

If time zone support is not used and the client and database machines are in different time zones,
there could be ambiguity in scheduled time.

You may also have network elements that have child nodes that span time zones - for example,
MSCs with BSCs in regions that have different time zones. If time zone support is not used, this
could cause problems because there would be data from two different time zones coming in - for
example, 9am ET (Eastern Time) is 8am CT (Central Time). This means that if the BH is
summarized at 9am, it would not be truly representative of the elements in both time zones.

To manage time zone support, there are a number of different time definitions used in OPTIMA,
which are described in this table:

Term Description

Local Time Date and time of data, stored as the date and time of the data.
Also known as consistent.
Natural Time Date and time of data, driven by the local time zone.
Universal Time Date and time of data, driven by the universal time zone.
Also known as the System Time.
Selected Time Date and time of data, driven by the selected time zone.
By default, this is the same as the Universal Time.
User Time Zone The time zone that the connected user/process is within. This is displayed in the
OPTIMA Message Log.
Note: If the client is run over Citrix, the User Time Zone is still regarded as where the
client is located, not where the Citrix server is located.
Universal Time Zone The time zone in which the database is located.
Also known as the System, Global or Database Time Zone.

Important: Currently, time zone support for alarm forwarding is not available.


Accessing Data from Outside of the OPTIMA Backend Applications


It is possible to have direct access to OPTIMA's data and configuration tables from outside
OPTIMA - for example, using SQLPLUS or TOAD.

However, for most users, this is just read-only access - for example, they cannot edit or delete any
data or tables.

Important: A special 'power user', the DBACCESS user, can access the database to create new
objects.

In Oracle, when the OPTIMA database is created, all users are assigned the Oracle role
'OPTIMA_DEFAULTS', which does not require a password.

A separate, dedicated role exists for each OPTIMA backend application, which are described in the
following table:

Application Role

Alarm Handler OPTIMA_ALARMHANDLER_PROCS


Alarms Processor OPTIMA_ALARM_PROCS
Data Quality GUI OPTIMA_DQ_USERS
Loader and Loader GUI OPTIMA_LOADER_PROCS
Report Scheduler OPTIMA_REPSCH_PROCS
SNMP Agent OPTIMA_SNMPAGENT_PROCS
SNMP Poller OPTIMA_SNMPPOLLER_USERS
Summary GUI OPTIMA_SUMMARY_USERS

These application roles are session-based, and only activated when the user logs into the
appropriate application - if the same user tries to use an application outside OPTIMA to access the
data and configuration tables (for example, SQLPLUS or TOAD) they will only have read-only
access again.

Important: Because grants are assigned through roles, users cannot grant themselves other rights.
Therefore, to extend the privileges for a user, the database administrator must:
• Grant the appropriate application role
• Grant the 'OPTIMA_DEFAULTS' role, and make this one the default role for the user
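
For example, a hypothetical grant for a user who needs to run the Loader might look like this from the shell (the username jsmith is illustrative):

sqlplus -s "/ as sysdba" <<EOF
GRANT OPTIMA_LOADER_PROCS TO jsmith;
GRANT OPTIMA_DEFAULTS TO jsmith;
ALTER USER jsmith DEFAULT ROLE OPTIMA_DEFAULTS;
EOF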


Managing Resources Through Consumer Groups


In order to enable you to effectively manage database memory allocation, OPTIMA supports the
use of Oracle consumer groups and resource plans. An Oracle resource consumer group is a
collection of users with similar requirements for resource consumption. For more information on
Oracle consumer groups, see your Oracle documentation.

The OPTIMA consumer groups are defined as follows:


• For the OPTIMA front end, the consumer groups are based on the user type:

User Type Consumer Group

OPTIMA_ADMINISTRATOR OPTIMA_ADMINISTRATORS_CG
OPTIMA_ADVANCED_USER OPTIMA_ADVANCED_USERS_CG
OPTIMA_USER OPTIMA_USERS_CG
OPTIMA_USER_ADMINISTRATOR OPTIMA_USER_ADMINISTRATORS_CG
OPTIMA_ALARM_ADMINISTRATOR OPTIMA_ALARM_ADMINISTRATORS_CG

This means that when a new user is created and assigned to a particular user type, they
will be assigned to the corresponding consumer group at the same time.
• For the OPTIMA back end, the consumer groups are based on the backend application:

Application Consumer Group

Alarm Handler OPTIMA_ALARMHANDLER_PROCS_CG


Alarms Processor OPTIMA_ALARM_PROCS_CG
Data Quality GUI OPTIMA_DQ_USERS_CG
Loader and Loader GUI OPTIMA_LOADER_PROCS_CG
Report Scheduler OPTIMA_REPSCH_PROCS_CG
SNMP Agent OPTIMA_SNMPAGENT_PROCS_CG
SNMP Poller Configuration Interface (GUI) OPTIMA_SNMPPOLLER_USERS_CG
Summary GUI OPTIMA_SUMMARY_USERS_CG

This means that when a user logs into a particular application, they will be assigned to the
corresponding consumer group at the same time. For example, when
OPTIMA_LOADER_PROCS logs into the Loader, they will automatically be assigned to the
OPTIMA_LOADER_PROCS_CG consumer group, and receive the specified allocation of
database memory for a member of that group.

These resource groups must be used in conjunction with resource plans, which define how
resources are balanced across the system (in terms of % share) according to business rules.

Note: This percentage share will only be enforced when resource consumption has reached
capacity (in other words, 100%).

As a simple example, a 'DAYTIME' plan may distribute the resources in one way, while another
'NIGHTTIME' plan distributes them in another way:


Consumer Group                    Plan 1 - 'DAYTIME' (% Share)   Plan 2 - 'NIGHTTIME' (% Share)

OPTIMA_ADMINISTRATORS_CG          20                             50
OPTIMA_ADVANCED_USERS_CG          10                             10
OPTIMA_USERS_CG                   60                             20
OPTIMA_USER_ADMINISTRATORS_CG     5                              10
OPTIMA_ALARM_ADMINISTRATORS_CG    5                              10

OPTIMA has a default resource plan, which is assigned at the start of the deployment of OPTIMA.
This contains a number of subplans associated to the consumer groups for the different
components of OPTIMA - for example, Loader, SNMP, Summary and so on.
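
As a sketch only, a plan like the 'DAYTIME' example above might be created with Oracle's DBMS_RESOURCE_MANAGER package (the plan shown here is illustrative and is not the OPTIMA-supplied default plan; note that Oracle requires every plan to include a directive for OTHER_GROUPS):

sqlplus -s "/ as sysdba" <<EOF
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(plan => 'DAYTIME', comment => 'Daytime plan');
  -- 60% share for interactive users, enforced only at full capacity
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME', group_or_subplan => 'OPTIMA_USERS_CG',
    comment => 'Interactive users', mgmt_p1 => 60);
  -- remaining share for all other sessions
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    plan => 'DAYTIME', group_or_subplan => 'OTHER_GROUPS',
    comment => 'Everything else', mgmt_p1 => 40);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
EOF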

Common Error Codes


This section describes the error codes common to all backend programs:

Error Code   Description                                                          Severity

0 Error Accessing Input Directory. CRITICAL


Error Accessing Output Directory. CRITICAL

Error Accessing Backup Directory. CRITICAL

Error Accessing Error Directory. CRITICAL

Error Accessing Temp Directory. CRITICAL

Error Accessing Pid File Path Directory. CRITICAL

<XMLErrorCode> at line No. <lineNumber>. WARNING

Wrong Granularity period type defined. MAJOR

Error: Empty Input File - <inputFileName> MINOR

Started Logging for <programCode>. INFORMATION

1 Module Initialisation Successful. DEBUG


2 Start Instance: (<SystemPid>). Number of Iterations <MaxIterations> [Run INFORMATION
Continuous On], Sleep Time < m_PollingTime> seconds.
3 End Instance: ( <SystemPid> , TotalProcessTime(s)= <endProcessTime- INFORMATION
StartProcessTime> [, Groups= <TotalFileGroupCount>, TotalFileSize(Kb)=
<m_TotalFileGroupSize/1024>, TotalFileCount= <totalFileCount>] ).
4 Application has been terminated. ( <SystemPid>, Current Iteration= WARNING
<CurrentIteration>, TotalProcessTime(s)= <endProcessTime-
StartProcessTime>).
5 Iteration <CurrentIteration> [of <MaxIterations>] started. DEBUG
6 Iteration <CurrentIteration> [of <MaxIterations>] finished. DEBUG
TotalIterationTime(s)= <endIterationTime-StartIterationTime>.
7 Remaining iterations cancelled. INFORMATION
101 WrongGranularity specified for Age. MAJOR
102 Wrong Granularity type mentioned. MAJOR
103 No FileCount specified for directory maintenance. MAJOR
104 Access denied. Could not delete file. MAJOR

105 Directory to be maintained or temporary directory can't be accessed. MAJOR


106 Can't access time details of a file. MAJOR
107 Wrong Directory Maintenance Type. MAJOR
108 Directory Maintenance Successful for <DirPath> files deleted INFORMATION
Directory Maintenance Successful for <DirPath> files archived [files deleted] INFORMATION

666 Started logging for Nokia 3G XML Parser Version - <ProgramCode>. INFORMATION
1003 Error Accessing Input Directory. CRITICAL
1004 Error Accessing Output Directory. CRITICAL
1005 Error Accessing Backup Directory. CRITICAL
1006 Error Accessing Error Directory. CRITICAL
1008 Error CRITICAL
1009 Started Parsing Input File : <fileName>. INFORMATION
1011 Error Accessing PID Directory. CRITICAL
1012 Ended Parsing Input File : <fileName>. DEBUG
1020 Error Accessing Temp Directory. CRITICAL
1021 File : <FileName> Successfully Copied To Archive Dir <ArchiveDir> INFORMATION
Refreshing list of files in Input Directory. DEBUG

1022 Unable to Move Input File : <fileName> to Backup Directory. WARNING


File: <FileName> Copy Failed To Archive Dir <ArchiveDir> WARNING

Successfully moved Input File : <fileName> to Backup Directory DEBUG


<DirBackup>.
Successfully moved Input File : <fileName> to Error Directory <ErrorDir>. INFORMATION

1024 Unable to Move Input File : <inputFileName> to Error Directory. WARNING


Unable to Move Input File : <objInputFile> to Backup Directory. WARNING

Unable to Move temporary combiner Log File <fileName> to Combiner Log WARNING
Directory <CombinerDir>.
Unable to delete File : <fileName>. WARNING

1031 Error Accessing Combiner Directory. CRITICAL


1032 Start Instance. INFORMATION
1033 End Instance. INFORMATION
4002 Exiting Application. Error : Unable to create the Pid file at specified path. WARNING
4003 Exiting Application. Error : Another instance of the application may be WARNING
running.
4007 Exiting Application. Error : Another instance of the application may be WARNING
running, but could not obtain the elapsed time of process.
4008 Exiting Application. Error : Another instance of the application may be WARNING
running, but could not look up pid on OS task list.
7000 Validating input file <fileName>. DEBUG
7001 Could not create input file stream. MAJOR
Input File : <filename> INFORMATION

7002 Could not open input file stream. MAJOR

Completed Processing File Group (PID=<SysPid>, TotalProcessTime(s)= INFORMATION


<endTime-StartFileGroupTime>) , TotalFileSize(Kb)= <FileGroupSize/1024>,
TotalFileCount= <FileGroupCount>.
7003 Could not find header line in the input file. MAJOR
7004 Error in processing input file header line. MAJOR
7005 Failed to create report output streams. MAJOR
Error Found when processing file, so moved Input File : <filename> to Error INFORMATION
Directory
7006 Failed to create new counters output stream. MAJOR
Unable to Move Input File : <filename> to Error Directory WARNING

7007 Failed to create bad lines output stream. MAJOR


Moved Input File : <filename> to Backup Directory. DEBUG

7008 Unable to Move Input File : <filename> to Backup Directory. WARNING


7009 Unable to Move temporary combiner Log File <combinerLogFile> to WARNING
Combiner Log Directory: <combinerDir>.
7010 <ReportCount> Output file[s] produced. DEBUG
Successfully created output file <filename> INFORMATION

Deleting temp new counter file because no new counters were found DEBUG

7011 Could not create output file <filename>. WARNING


Deleting temp bad line file because no bad lines were found DEBUG

7012 Refreshing List of Files in Input Directory. DEBUG


Closing bad line file stream <BadLineFileName> DEBUG

7013 Error Found when processing file, deleting input file : <filename>. INFORMATION
Closing new line file stream <NewCounterFileName>. DEBUG

7014 Input folder is empty. DEBUG


Closing report file stream <FileName>. DEBUG

7015 Finish validating input file <FileName> DEBUG


7100 Reading header line from input file. DEBUG
7101 Read header line from input file <inputHeaderLine>. DEBUG
7102 Header line is empty. DEBUG
7200 Failed to parse header line. DEBUG
7201 Header line contains no columns. DEBUG
7450 Write New Counter Data Lines. DEBUG
7500 Initialise Reports. DEBUG
7550 Write Report Data Lines. DEBUG
7600 Initialise New Counter Primary Columns. DEBUG
7700 Initialise In Position. DEBUG
Open Reports File Streams. DEBUG

7701 Input file header column <ColumnName> is not in any reports for new DEBUG
counters.
Creating output stream for report <FileName>. DEBUG

7702 Input file header column <ColumnName> is not in any reports. WARNING
Failed to create output stream. DEBUG

7703 Failed to open output stream for report. DEBUG


7800 Write Header For Reports. DEBUG
7900 Open New Counter File Stream. DEBUG
7950 Write New Counter File Header. DEBUG
7970 Open Bad Lines File Stream. DEBUG
40001 Could not open input file. WARNING
40002 Input file is empty. INFORMATION


2 About Data Acquisition Tools

This chapter describes the following data acquisition tools:


• FTP
• DB Parser
• OPTIMA CORBA Client

As well as using these tools, an alternative method of data acquisition is to use the OPTIMA
Summary application to load data from one database directly into another. For more information,
see Using the Summary for Direct Database Loading on page 329.

Important: These acquisition methods apply to file based data acquisition only. Network data
acquisition carried out by OPTIMA using Simple Network Management Protocol (SNMP) is
described in the next chapter. SNMP data acquisition carried out using the new Netrac interface is
handled by Netrac as described in the SNMP Agent User Guide for Netrac.

About the FTP Application


The FTP application transfers data files from a remote server using File Transfer Protocol (FTP).
One instance of the application can be configured for each remote server from which files are to be
extracted.

This diagram shows an overview of the FTP process:

FTP Process

When scheduled, the FTP application regularly monitors a remote directory for new files. When
new files are detected, they are transferred to the local machine. Transfer takes place using a local
temporary file to ensure that the Parser does not start to parse the file before transfer is complete.
Status and progress messages are recorded in a log file.

A local list file ensures that files are not transferred twice. The list file keeps a record of all files that
exist on the remote server that have been downloaded. The list file is refreshed every time the
application is run.


The script can be configured to only look for new files on the remote server for a given number of
previous days. For example, if configured for three days then only directories for the latest three
days are searched. This facility is based on all files on the remote server being located in a new
directory each day.

The full functionality of the FTP application is shown in this diagram:

FTP Functionality

The FTP application supports these common functions:

Function Action

Logging Status and error messages are recorded in a daily log file.
Error Files If the application detects an error in the input file that prevents processing of that
file, then the file is moved to an error directory and processing continues with the
next file.
Monitor Files The application runs in a scheduled mode. A monitor (PID) file, created each time
the application is started, ensures that multiple instances of the application cannot
be run. The PID file is also used by the OPTIMA Process Monitor to ensure that the
application is operating normally.
PRID The automatically-assigned PRID uniquely identifies each instance of the
application. It is composed of a 9-character identifier, made up of Interface ID,
Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance ID are
both made up of 3 characters, which can be a combination of numbers and
uppercase letters.
For more information, see About PRIDs on page 29.
Backup The application can store a copy of each input file in a backup directory.


As well as providing basic transfer of data files from a remote server, the FTP application also
provides the following functions:

Item Description

Archive Storage You can backup any files that are transferred by the FTP, as well as store any
historical error files.
Overload Storage If the FTP input folder reaches its defined limit (in terms of percentage of disk space
usage), to avoid overloading the disk, any files over the limit can be moved to a
separate disk and processed from there after the input folder has been emptied.

Note: When moving files to the archive and/or overload folders, the FTP tars these files to keep the
number of files stored to a minimum.

To use these functions, you must ensure that both your folder mounts and FTP INI file parameters
are configured correctly.

Note: The Parser, Data Validation and Combiner applications also support these functions. For
more information, see the respective chapters for these applications.

About the FTP Modes


The FTP can be run in a number of different modes:

Mode Description

FTP The standard FTP application, which transfers data files from a
remote server using File Transfer Protocol.

FTP PUT The FTP application that transfers data files in the opposite
direction – that is, from the local client to the remote server –
using File Transfer Protocol.
Important: If you are running the FTP application on Windows,
you can only use this mode if you are using the 24-hour time
format.
SFTP with SSH Key Authentication The FTP application in secured mode, using SSH Key
Authentication.
(Windows or UNIX)
This mode uses passwordless key authentication, rather than the
password parameter stored in the INI file, and so is particularly
useful for systems where the security policy requires account
passwords to be changed periodically.
Important: This method is strongly recommended, as it is faster
and more reliable than the regular SFTP method (described
below).


Mode Description

SFTP The FTP application in secured mode, using password


authentication.
(UNIX only)
In this mode, you can also enable data compression during the
SFTP transfer, which can reduce the volume of data transferred
and the time required for the data transfer, provided that the
SFTP server supports this option.
Security authentication is based on the 'RemotePass' password
parameter stored in the FTP Parameters section of the ini file.
Important: This method is supported in 8.0, but is not used by
the OPTIMA backend as a default. It is strongly recommended
that you use the SFTP with SSH Key Authentication method, as
it is faster and more reliable.

Important: If you want to use the FTP in either secured mode or secured mode with SSH Key
Exchange Authentication, then you must install a number of additional modules. For more
information, see:
• Prerequisites for Using the SFTP with SSH Key Exchange Authentication (for Windows
Servers) on page 62
• Prerequisites for Using the SFTP on page 61

Installing the FTP Application


The FTP application is a Perl script and requires Perl v5.8 or later to be installed on the local
workstation.

To install the FTP application, install the following files in the backend binary directory:
• opx_FTP_GEN_302.exe (Windows)
• opx_FTP_GEN_302 (Unix)

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

Note: For Windows, a Perl Interpreter must also be installed. TEOCO recommends using
ActivePerl. You can read more about ActivePerl at this location:

http://www.activestate.com/activeperl/

To start the FTP, type the script name and a configuration file name into the command prompt. If
you are creating a new configuration file, this is when you choose the file name.

In Windows type:

opx_FTP_GEN_302.exe opx_FTP_GEN_302.ini

In Unix type:

opx_FTP_GEN_302 opx_FTP_GEN_302.ini

In usual operation within the data loading process, all applications are scheduled. You should not
need to start the FTP.


Important: If you want to use the FTP in either secured mode or secured mode with SSH Key
Exchange Authentication, then you must install a number of additional modules. For more
information, see:
• Prerequisites for Using the SFTP on page 61
• Prerequisites for Using the SFTP with SSH Key Authentication (for Unix Servers) on page
65

Prerequisites for Using the SFTP


If you want to use the FTP in secured mode (SFTP), then you must install the Net::SFTP CPAN
module.

To do this for non-Windows operating systems:

Important: You cannot use this method with Windows operating systems. Instead, you must use
the method described in Prerequisites for Using the SFTP with SSH Key Exchange Authentication
(for Windows Servers) on page 62. That method is strongly recommended for both Windows and
non-Windows operating systems, as it is faster and more reliable.

1. Install the following components:


o Perl 5.8.x
o OpenSSL
o GMP 4.3.1

You can check if these are already installed, but this is done in different ways, depending
on your OS. This table describes the options:

Check OS Command

If Perl has already been installed, and Sun Solaris pkginfo | grep -i perl
at which version
perl -v
HP-UX swlist | grep -i perl
perl -v
RedHat Linux rpm -qa | grep -i perl
perl -v
If openssl and GMP have been installed Sun Solaris pkginfo | grep -i gmp
pkginfo | grep -i openssl

HP-UX swlist | grep -i gmp


swlist | grep -i openssl

RedHat Linux rpm -qa | grep -i gmp


rpm -qa | grep -i openssl


2. If you are installing the Net::SFTP CPAN module on a server that has HTTP network
access to the Internet, then run:

perl -MCPAN -e 'install Net::SFTP'

- or -

If you are installing the Net::SFTP CPAN module on a server that does not have HTTP
network access to the Internet, then:
o Copy the SFTP CPAN library distribution (provided by TEOCO) for your platform to the
server where the OPTIMA FTP program will run.

This consists of a gzip compressed TAR file containing all of the required perl modules.
o Decompress the file and extract it to the 'perl5' library directory on the server.

Tip: Default locations are usually /usr/lib/perl5 or /usr/local/lib/perl5.

3. Follow the operating system-specific instructions that are included with the distribution.

If you also want to use SFTP compression, you must additionally install the following CPAN Perl
modules:
• Compress::Zlib
• Compress::Raw::Bzip2
• Compress::Raw::Zlib
• Crypt::Blowfish
• Crypt::CBC
• Crypt::DES
• Crypt::DH
• Crypt::DSA
• Crypt::Primes
• Crypt::RSA
• Crypt::Random
• Digest::BubbleBabble
• Digest::HMAC
• Digest::MD2
• Digest::SHA
• IO::Compress
• IO::Zlib
• Net::SSH
• Net::SSH::Perl


Prerequisites for Using the SFTP with SSH Key Exchange Authentication (for Windows Servers)
To use the FTP in secured mode (SFTP) with SSH Key Authentication (which is the strongly
recommended SFTP option) on a Windows server, you must complete the following prerequisites:

Note: If you are a TEOCO installation engineer, all of these files are available on the intranet.
Otherwise, please contact Product Support.

1. If your version of Perl is older or newer than '5.8.9 build 825':


o Uninstall it from your machine
o Install Active Perl using the 'ActivePerl-5.8.9.825-MSWin32-x86-288577.msi' file (leave
everything as default)

2. Install OpenSSH using the 'setupssh.exe'. This is stored in 'setupssh381-20040709.zip'.

3. Reboot the machine if required, to ensure that the PATH and PERL5LIB environment
variables are updated.

4. Check that the environment variables have been updated:


o On the command prompt, type: set p.
o The PATH environment variable needs to look something like:
PATH=C:\Perl\site\bin;C:\Perl\bin;C:\Program Files\OpenSSH\bin;……
o The PERL5LIB environment variable needs to look something like:
PERL5LIB=C:\Perl\lib;C:\Perl\site\lib.
o To update the environment variables (for this session only), in the command prompt,
type set PATH=C:\Perl\site\bin;C:\Perl\bin;C:\Program
Files\OpenSSH\bin;%PATH% and set
PERL5LIB=C:\Perl\lib;C:\Perl\site\lib.

Note: To ensure that the environment variables will always be available:

In Windows, select Start, Control Panel, System.

In the dialog box that appears, click the Advanced tab, and then click the
Environment Variables button.

In the System Variables pane, add or amend the PATH and the PERL5LIB
environment variables.

To see the new/updated environment variables on the command prompt, you need to
open a new one.

5. Check that the correct Perl version has been installed, by typing perl -v in the command
prompt.

6. On the 'C:\' drive, browse to the Perl location, and delete the Perl folder and all of its
contents.

7. Run the 'Perl_5.8.9.825_with_packages.exe' file, which will re-create and re-populate the
original Perl folder on the C:\ drive with all necessary packages.


8. Open a command prompt and:


o Change to the installation directory (the default is 'C:\Program Files\OpenSSH').
o CD into the bin directory, in order to be in 'C:\Program Files\OpenSSH\bin', using the
command C:\>cd Program Files\OpenSSH\bin.
o Use mkgroup to create a group permissions file. For local groups, use the '-l' switch,
and for domain groups, use the '-d' switch:

mkgroup -l >> ..\etc\group OR mkgroup -d >> ..\etc\group


o Use mkpasswd to add authorised users into the passwd file. For local users, use the '-l'
switch. For domain users (for example, the AIRCOMINT domain), use the '-d' switch:

mkpasswd -l [-u <username>] >> ..\etc\passwd OR mkpasswd -d [-u <username>] >> ..\etc\passwd

Notes:
o To add users from a domain that is not the primary domain of the machine, add the
domain name after the user name.
o Omitting the username switch adds ALL users from the machine or domain, including
service accounts and the Guest account.

9. Still using the command prompt, create the SSH authentication private key:
o Start the OpenSSH server, by typing: net start opensshd.
o Create the private and public key by typing: ssh-keygen -t rsa.
o When prompted to enter the file in which to save the key, type id_rsa.
o When prompted to enter the passphrase, press <ENTER>.
o When prompted to enter it again, press <ENTER>.
o The private key included in the 'id_rsa' file should be created in 'C:\Program
Files\OpenSSH\bin'. Ensure that a copy also exists in 'C:\Documents and
Settings\<user_name>\.ssh'.

Tip: If the '.ssh' folder does not exist and Windows does not allow you to create one
manually, then you should:

Run ssh user@server_name or ssh user@IP_address, where 'user' is the username


for connecting to the remote server, and not the user that you use to log on to your
local machine.

When prompted to continue connecting, choose Yes. The .ssh folder is created for
you under 'C:\Documents and Settings\<user_name>\'.
o Make a copy of the public key (file name id_rsa.pub), which should be in the same
location as the private key/file (for example, 'C:\Program Files\OpenSSH\bin') as
id_rsa.pub_username.

10. FTP the 'id_rsa.pub_username' public key/file into the home/.ssh folder on the server where
you will connect to download the files using the FTP application:
o The .ssh folder on the server may need to have the mode set up as: 'chmod 700 .ssh'
o Run cat id_rsa.pub_<username> >> authorized_keys
o It may be that the 'authorized_keys' and 'id_rsa' files in the .ssh folder need to have the
mode set up as 'chmod 600 authorized_keys id_rsa'

11. Check the IP Address and Host Name of your Windows machine, by typing ipconfig
/all in the command prompt.


12. Log in as root on the server, to be able to add your IP address and host name in the
/etc/hosts file.

13. Ensure that you can connect to the server machine by typing ssh user@IP_address in
the command prompt.

Important: If you have followed all of the steps, you should NOT be prompted for a
password. If you are, double check that the private key/file is in your home directory
and that the public key/file is in place on the server machine.

14. You can now run the FTP application in secure mode.

Prerequisites for Using the SFTP with SSH Key Authentication (for Unix Servers)
To use the FTP in secured mode (SFTP) with SSH Key Authentication (which is the strongly
recommended SFTP option) on a Unix server, you must complete the following prerequisites:

Note: If you are a TEOCO installation engineer, all of these files are available on the intranet.
Otherwise, please contact Product Support.

1. Ensure that you have installed:


o The initial modules that are required for all SFTP installations. For more information,
see Prerequisites for Using the SFTP on page 61.
o The IO::Pty and Expect modules

2. Install the Net::SFTP::Foreign CPAN module on Sun Solaris:


o Determine the default Perl 5 Library include path using: perl -le 'print foreach @INC'
o (S)FTP copy the Sun_Solaris_perl5_libs.tar.gz file to the perl5 base directory
identified above, for example /usr/local/lib/perl5
o Change the directory to /usr/local/lib/perl5, using the cd command:
cd /usr/local/lib/perl5
o Unzip and extract the file to /usr/local/lib/perl5 using gunzip
Sun_Solaris_perl5_libs.tar.gz and tar xvf Sun_Solaris_perl5_libs.tar

3. On each host where the FTP Application is installed, as the 'optima' user generate an SSH
key pair using the following command:

ssh-keygen -t rsa

4. Accept all default options.

The RSA public key is written to the ~/.ssh/id_rsa.pub file and the private key to the
~/.ssh/id_rsa file.

5. Make a local copy of the id_rsa.pub file:

cp id_rsa.pub opt_authorized_keys

6. Copy the opt_authorized_keys file to the remote SFTP server, for example, using SFTP,
and append the public key in the opt_authorized_keys file into the authorized_keys file on
the remote server:

cd ~/.ssh

cat opt_authorized_keys >> authorized_keys


Configuring the FTP Application


The FTP is configured using a configuration (INI) file. Configuration changes are made by editing
the parameters in the configuration (INI) file with a suitable text editor. For more information about
configuration (INI) file parameters, see Configuration (INI) File Parameters on page 68.

Using Commenting
The FTP configuration (INI) file supports the following types of commenting:
• Windows, using this symbol (;).
• UNIX, using this symbol (#).

Lines are parsed for the first occurrence of a comment symbol. Once a comment symbol is found,
the rest of the line is ignored. Lines using the [Grouping] notation are also ignored but only if this
symbol ([) is found at the beginning of the line.
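
For example, in a hypothetical INI fragment (the parameter value is illustrative):

; Windows-style comment - this whole line is ignored
# UNIX-style comment - this whole line is ignored
remoteDir=/export/home/optima   # everything from the symbol onwards is ignored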

Using Environment Variables


The FTP configuration (INI) file supports the following methods of environment variable usage:
• Windows, using %ENV_VAR%
• UNIX, using $ENV_VAR
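
For example, a directory parameter (the parameter name localDir is illustrative) could reference a variable instead of a hard-coded path:

localDir=%OPTDIR%\ftp\in    (Windows)
localDir=$OPTDIR/ftp/in     (UNIX)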

To set an environment variable:

In Windows type:

SET ENV_VAR=xyz

Tip: Use echo %ENV_VAR% to check the settings.

In UNIX type:

ENV_VAR=xyz; export ENV_VAR

Tip: Use echo $ENV_VAR to check the settings.

Note: If you are batching the program, then the batch environment may not inherit the user
environment. In this case, it is safer to reset environment variables before running the FTP
application.

Using Regular Expressions


Regular expressions can be used in the FTP configuration (INI) file to define complex search
criteria. For its regular expressions, the FTP configuration (INI) file uses the Perl engine. Perl is
widely documented on the internet. For example, you can read more about Perl regular
expressions at this location:

http://search.cpan.org/dist/perl/pod/perlre.pod


This table gives examples of some regular expressions that you might use in the FTP configuration
(INI) file:

Regular Expression Description

^ Match the beginning of a string. For example, the expression ^CSV will match
CSV at the beginning of a string.
$ Match the end of a string. For example, the expression CSV$ will match CSV at
the end of a string.
. Match any character except newline. For example, the expression C.V will
match a C followed by any single character (except newline) followed by a V.
* Match 0 or more times. For example, the expression CS*V will match a C
followed by zero or more S's followed by a V.
+ Match 1 or more times. For example, the expression CS+V will match a C
followed by one or more S's followed by a V.
? Match 1 or 0 times. For example, the expression CS?V will match a C followed
by an optional S followed by a V.
| Alternation. For example, the expression C|V will match either C or V.
() Grouping. For example, the expression CSV(04|05) will match CSV04 and
CSV05.
[] Set of characters. For example, the expression [CSV] will match any one of C,
S, and V.
{} Repetition modifier. For example, the expression CS{2,4}V will match a C
followed by 2, 3 or 4 S's followed by a V.

\ Quote (escape) the next character. For example, the expression C\.V will match
C.V exactly.
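As an illustration of combining these constructs, the following fileMask setting (using a purely
illustrative file naming convention) would match gzipped CSV files whose names start with PM_
followed by BSC01 or BSC02:

fileMask=^PM_(BSC01|BSC02)_.*\.csv\.gz$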

Using Multiple Virtual IP Addresses


The FTP application can connect to multiple virtual IP addresses, defined as a comma-separated
list of addresses. The remote directory must be the same for all hosts being connected to.

The FTP program connects to the IP addresses in the following way:

1. The application tries to connect to the first IP address in the list.

2. If it successfully connects, it starts downloading files and exits itself after finishing.

Note: If it does not connect to the first IP address, it tries the next IP address in the list and
it logs a message for the failed connection. If all IP addresses fail to connect, then it logs a
message to indicate this.

The FTP downloads from the first host with the following settings in the configuration (INI) file:

remoteHost = 192.168.3.253, 192.168.3.35, 192.168.3.37

remoteUser = optima,optima,optima

remotePass = optima,optima,optima

remoteDir = /export/home/optima


The FTP downloads from the second host if the first host is not valid with the following settings in
the configuration (INI) file:

remoteHost = 192.168.3.39, 192.168.3.253, 192.168.3.35, 192.168.3.37

remoteUser = xxx,optima,optima,optima

remotePass = yyy,optima,optima,optima

remoteDir = /export/home/optima

Configuration (INI) File Parameters


TEOCO uses four logical groupings for the parameters in its FTP configuration (INI) files. The
purpose of these groups is simply to assist you in finding associated configuration (INI) file settings.
The following table describes the parameter groups:

This Contains the Parameters For:


Group:

Processing Specifying how the FTP program will run. For example, it includes the parameters for
setting when the FTP process should start and for how many days it should collect data.
For more information, see Processing Parameters on page 69.
Directory Specifying the directories that are used by the FTP program. For example, it includes the
parameters for setting the directories for processing files and outputting logs. For more
information, see Directory Parameters on page 73.
Filename Matching FTP files and directories. For example, it includes the parameters for setting the
file and directory masks and for prepending directory names to filenames. For more
information, see Filename Parameters on page 74.
FTP Specifying FTP connection details. For example, it includes the parameters for setting the
location and login details of the remote host. For more information, see FTP Parameters on
page 76.

Important: If you are using the FTP PUT option (the 'upload' parameter set to 1), you should read
all references in this chapter (except where that parameter is specifically mentioned) as follows:
• Local/client machine becomes the source
• Server/remote host becomes the destination
• 'Download' becomes 'upload'


Processing Parameters

The following table describes the Processing parameters for the FTP script:

Parameter Description Default Required

SFTP 0 - Use simple FTP. 2 Yes, in all INI files.


1 - Use secured FTP.
2 - Use secured FTP with SSH Key
Exchange Authentication.
3 (UNIX only) - Use the native SFTP
command available in the system.
If you are using this option, you must
also set the 'SFTPcommand' parameter.
4 (UNIX only) - Use secured FTP with
the remotePass=xxx INI parameter
instead of public/private SSH key.
Important: Before you can use any of
the secured FTP options, you must
ensure that you follow the pre-requisites.
For more information, see Installing the
FTP Application on page 60.
SFTPcompression Used if you are using secured FTP 1 Yes, if you are
using SFTP.
1 - Enable compression during SFTP file
transfers, minimising the volume of data
transferred, and time required for the
data transfer.
Important: For this option to function
correctly, the SFTP server from which
the data is received must also
support the compression option.
If this option is not already enabled, this
can be achieved by setting the
"Compression yes" option in the
/etc/ssh/sshd_config file on the SFTP
server, and restarting the sshd daemon.
0 - Do not compress ASCII files.
SFTPcommand The path to the native SFTP command. /usr/bin/sftp Yes, if SFTP=3.
BindAddress Can only be used if you are using - No.
secured FTP.
If the FTP application is installed on a
machine with multiple interfaces or
aliased addresses, this parameter
specifies the interface from which to
transmit - for example:
BindAddress=127.0.0.1
localFile Indicates whether a local directory (1) or 0 No.
the FTP (0) is being used.
If you have chosen to use a local
directory, you must specify the
localFileList parameter as well.
localFileList The file listing command: - Yes, if localFile is
set to 1.
For UNIX and cygwin, it should be 'ls -n',
and for Windows, it should be 'dir'.


Parameter Description Default Required

PRID Uniquely identifies each instance of the - Yes, in all INI files.
application. It is composed of a 9-
character identifier, made up of Interface
ID, Program ID and Instance ID.
The Interface ID is made up of 3
numbers but the Program ID and
Instance ID are both made up of 3
characters, which can be a combination
of numbers and uppercase letters.
For more information, see About PRIDs
on page 29.
startOnDay Number of days back from today to start 0 No.
searching for files to download.
If you set this to a negative value (for
example, startOnDay=-1), it will collect
files with a date in the future. You should
do this if you are working across multiple
timezones.
numberOfDays Number of days counting back from - Yes, in all INI files.
startOnDay to search for new files on the
remote host.
datedDir 0 - Directories are not dated. However, - Yes, in all INI files.
files can be dated if dateFormat is not set
to 0.
n - The directory n levels down from
remoteDir is a dated directory of format
dateFormat.
dateFormat Date format to use for datedDir, where - Yes, in all INI files.
the available options are:
M-D-YYYY
M_D_YYYY
MDYYYY
M-D-YY
M_D_YY
noFilesInList The number of downloaded files that will - Yes, if datedDir and
be maintained in a list file if datedDir and dateFormat are both
dateFormat are both set to 0. For set to 0.
example, if noFilesInList=1000, then
1000 files will be maintained in the list
file.
MaxFilesInDownloadDir The maximum number of files that 10000 No.
should be in the FTP Download Directory
at any time.
If the number of files in the output
directory is greater than this, then no
more files will be downloaded.
verbose 0 - Run silently. 0 No.
1 - Print status messages to the screen.
backup Indicates whether a copy of the input file 1 No.
will be copied to the backup directory (1)
or not (0).
If you do not choose to backup, the input
file is deleted after it has been
processed.
useMonitorFile 0 - Script does not use a monitor file. 1 No.
1 - Script uses a monitor file.


Parameter Description Default Required

unzipCommand The executable file and its path, to use to - No.


unzip downloaded files. For example:
unzipCommand=C:\Programs\gunzip.exe
Notes:
• Only set this parameter if files are to
be unzipped.
• Include all arguments.
zipExtension File extension of zipped files, for Yes, if
example, .zip. unzipCommand is
configured.
unzipAfterTar If you are using arch.tar[.gz], the tar 0 Yes, if using
contains compressed files, so you should arch.tar[.gz].
set this option to 1 in order to unzip
them.
untarCommand The executable file and its path, to use to - No.
untar downloaded files. For example:
untarCommand=C:\Programs\gtar.exe -xf # windows
untarCommand=/usr/local/bin/gtar -xf # unix
tarExtension File extension of tarred files, for example, - Yes, if
.gtar. untarCommand is
configured.
alternativeOutputExtractBeforeArchive 0 - inactive 0 No.
1 - Extracts remote files from tar or zip
files (according to other options such as
untarCommand and unzipAfterTar)
before archiving them to the overload
folder.
logseverity Sets the level of information required in 2 No.
the log file. The available options are:
1 - Debug
2 - Information (Default)
3 - Warning
4 - Minor
5 - Major
6 - Critical
HeartbeatDeferTime If the heartbeat function has been run 5 No.
recently, you can choose to defer
running it again by a defined number of
seconds.
archiveBackup Defines whether the FTP will tar the files 0 No.
it moves to the archive directory (1) or
not (0).
archivePeriodMask The regular expression to define the tar .* No.
file naming/grouping used when
archiving and/or using the overload
functionality.
For more information, see Defining How
Files are Grouped in Tar Files on page
78.


Parameter Description Default Required

archiveMaxFiles The maximum number of files to place 100 No.


inside a tar file.
archiveMaxSize The maximum size of a tar file in bytes. 100000000 No.
(100MB)
archiveCommand The path to use to execute the tarring /bin/tar No.
process.
Tip: If possible, it is recommended that
you use gtar.
maxOutputFilesystemPercent Specifies the FTP output directory usage 90 No.
threshold (in % used), beyond which the
overload directory should be used.
The default is 90%, which means that
when usage reaches 91% the overload
directory will be used.
alternativeOutputFileSystem Defines whether the FTP should use an 0 No.
overload directory (1) or not (0) in the
case of a file system overload.
The definition of an overload is based on
the acceptable threshold for file system
usage, which is set in the
maxOutputFilesystemPercent parameter.
If the file system becomes overloaded,
then files will be tarred and moved to the
specified overload directory.
Important: If you then want the parser to
take files from here, then you must
configure the parser using the
OverloadDir parameter.
For more information, see Configuring
the OPTIMA Parser on page 167.
maxAlternativeFilesystemPercent Specifies the overload directory usage 90 No.
threshold, beyond which the FTP should
stop processing.
The default is 90%, which means that
when overload directory usage reaches
91% the FTP will stop processing.
diskUsageCommand If you are using an overload directory, df -k No.
then use this parameter to specify the $OPTDIR |
command that will report the disk usage tail -1
level in the file system, returning a line
with a % in it.
MaxProcesses The maximum number of additional sub- 0 No.
processes.
Note: If SFTP=1, MaxProcesses cannot
be greater than 0.


Parameter Description Default Required

ProcessLevel The directory level at which FTP will split 0 Yes, if


into multiple processes. MaxProcesses is
greater than 0.
1 is sub-directory level
2 is one level below sub-directory level
3 is two levels below sub-directory level
and so on.
Tip: If there are significantly more
subdirectories than the maximum
number of additional sub-processes, try
to choose the earliest possible level (in
other words, the lowest value).
WindowsDSTFix To be used when connecting to Windows 1 Yes, for Windows
FTP servers and set automatically if FTP servers.
FTPStyle=stdWINDOWS. Adds the
directory to the filename in the listfile and
creates new listfiles for +1 hour and -1
hour. Used in conjunction with
PrependTimestamp=1, this protects
against the timestamp changing on the
server at DST changeover.
DownloadIfFileSizeChanged 0 - Do not record the file size. 0 No.
1 - Record the file size in the
downloaded list file, so that when it
changes, it is seen as a new file. This
only applies if PrependTimestamp = 0 or
2 (otherwise download occurs anyway
because timestamp is updated).

Directory Parameters

The following table describes the Directory parameters for the FTP script:

Parameter Description Default Required?

optimaBase Root path to the backend file OPTIMA Yes, in all


system. INI files.
LogDirectory Location of log directory. <optimaBase>/log No.
ProcDirectory Location of monitor (PID) file <optimaBase>/prid No.
directory.
FTPOutDirectory Location of the output directory. <optimaBase>/ftp/<PRID>/in No.
You can specify multiple output
directories by listing directories
separated by commas. With
multiple directories, files are
rotated between each directory
when they are downloaded.
FTPDownloadDir Location of the download <optimaBase>/ftp/<PRID>/tmp No.
directory.
FTPErrorDir Location of the error directory. <optimaBase>/ftp/<PRID>/error No.
FTPFileListDir Location of the file list directory. <optimaBase>/ftp/<PRID>/list No.
FTPBackupDir Location of the backup directory. <optimaBase>/ftp/<PRID>/backup No.


Parameter Description Default Required?

FTPAltDirectory If you have configured the FTP to <dir> No.


use an overload directory, then
use this parameter to define the
directory path for the overload
directory.
Important: You must create the
overload directory in the required
location before trying to run the
FTP.

Important: If you are using the FTP PUT functionality (in other words, the 'upload' parameter is set
to 1) then all of these parameters except for the FTPOutDirectory will still be used. This means that
all of the log directories, PID file directories and so on will remain on the client machine. However,
instead of the FTPOutDirectory, you should specify the FTPInDirectory, which represents the
source directory for the files that you want to upload. For more information, see FTP Parameters on
page 76.

Filename Parameters

The following table describes the Filename parameters for the FTP script:

Parameter Description Default Required?

dirMask Regular expression mask used for Yes, in all INI files.
directories.
Use ^$ to prevent directory recursion.
fileMask Regular expression mask used for files. Yes, in all INI files.

excludeMask Regular expression mask used to exclude ^$ No.


files.
MAXFileSize 0 - Download all files. 0 No.
n - Download files that are smaller than n
bytes.
MINFileSize 0 - Download all files. 0 No.
n - Download files that are larger than n
bytes.
UseFolderFileLimit Indicates whether the folder file limit should 0 No.
be used (1) or not (0).
FolderFileLimit The maximum number of output files that 10,000 No.
can be created in each output (sub) folder.
This must be in the range of 100-100,000 for
Windows, or 100-500,000 on Sun/UNIX,
otherwise the application will not run.
Warning: Depending on the number of files
that you are processing, the lower the file
limit, the more output sub-folders that will be
created. This can have a significant impact
on performance, so you should ensure that if
you do need to change the default, you do
not set the number too low.
Important: If you are using FTP PUT (in
other words, the 'upload' parameter is set to
1), then the 'FolderFileLimit' parameter is not
used.


Parameter Description Default Required?

PrependSubDir Set depth from remoteDir to use directory No.


name to prepend to filename after
download.
Tip: You can prepend multiple directories to
the file by specifying comma-separated
values.
PrependSubStr Regular expression to use a matched path No.
of the directory name to prepend.
Note: You must set PrependSubDir to use
this option.
PrependString Prepend fixed string to filename. No.

PrependTimestamp 0 - Do not prepend the creation timestamp. 0 No.


1 - add creation time to filename and take
note of it when determining if file has been
downloaded before (include in downloaded
list file).
2 - add creation time to filename but ignore it
when determining if file has been
downloaded before (exclude from
downloaded list file).
PrependHostname 0 - Do not prepend the remote hostname/IP 0 No.
address at the front of the filename.
1 - Prepend the remote hostname/IP
address at the front of the filename.
PrependCollectDateTime 1 - Add a prefix label of 'CDT' (Collect Date 0 No.
Time) and the data collection date/time to
the file name.
This uses the format:
CDT_yyyymmddhhmiss_<filename>.
Tip: This is useful for comparing the
collection time to the loading time.
Note: If you also select to prepend the
hostname, the hostname will appear before
this prefix.
0 - Do not include this prefix.
ReplaceColonWith Replace this symbol (:) in the filename '_' No.
with another value.
Note: Spaces are always removed from
filenames.
ReplacePoundWith Replace this symbol (#) in the filename '_' No.
with another value.
Note: Spaces are always removed from
filenames.
RemoveFromFileName Remove a specified character or series of No.
characters - defined as a regular expression
- from the filename.
For example, to remove a prefix of
'200711200918' from a filename, you would
set this parameter to '[0-9]{12}'.
PrependSeparator Separate filename from prepend strings. '.' No.


Parameter Description Default Required?

AppendSubDir Set depth from remoteDir to use directory No.


name to append to filename after download.
Tip: You can append multiple directories to
the file by specifying comma-separated
values.
AppendSeparator Separate filename from append strings. '.' No.
AppendString Append fixed string to filename. No.

AppendBefore This parameter works in combination with No.


the other Append parameters. For example,
if you set AppendBefore=.txt
AppendSubDir=1 AppendSeparator=_
remote path = 20050503/1300/a
filename of the file to be downloaded =
<file_name>.txt, then the FTP produces the
following output: <file_name>_20050503.txt.
AppendSubStr Regular expression to use a matched path No.
of the directory name to append.
Note: You must set AppendSubDir to use
this option.
removeZipExtBeforeMatch Remove zipExtension before checking if No.
files need to be downloaded. This parameter
also handles archived files on remoteHost.
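As a sketch of how several of these parameters combine, assume PrependHostname=1,
PrependSubDir=1 and PrependSeparator=_, and that the file data.csv is downloaded from host
192.168.3.35 out of the remote sub-directory BSC01 (an illustrative layout). The resulting local
filename would take a form like:

192.168.3.35_BSC01_data.csv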

FTP Parameters

The following table describes the FTP parameters for the FTP script:

Parameter Description Default Required?

remoteHost Remote hostname (or IP address) from Yes, in all INI files.
which to download files.
If you are using SFTP, you can also use
this parameter to specify an alternative
port for the FTP traffic, if you do not want
to use the default.
To do this, add the port number after the
hostname/IP address, separating the two
with a colon (':').
For example:
remoteHost=100.200.300.1:4567
Where
100.200.300.1 is the IP address
4567 is the port number
remoteUser Username for login to the remote host. Yes, in all INI files.

remotePass Password for login to the remote host. Yes, in all INI files.


Parameter Description Default Required?

remoteDir Parent directory on remote host from Yes, in all INI files.
which to download files.
{dir1}[,{dir2}]
Tip: You can specify multiple remote
directories by listing directories
separated by commas.
Important: If you are using FTP PUT (in
other words, the 'upload' parameter is set
to 1), then this will be the destination
folder on the remote host.
remoteArchiveDir Archive directory (flat structure) on No.
remoteHost to which files, when
downloaded, are moved using the same
final filename as used by OPTIMA.
removeOnDownload 0 - Files are not deleted from 0 Yes, in all INI files.
remoteHost.
1 - Files are deleted from remoteHost
when downloading is complete
FTPType The mode of FTP: ASCII Yes, in all INI files.
• ASCII
• BINARY
FTPActive Indicates whether the FTP is in passive 0. No.
mode (0) or active mode (1).
In passive mode, the client initiates both
connections to the server, solving the
problem of firewalls filtering the incoming
data port connection to the client from
the server.
In active mode, the client connects from
a random unprivileged port (N > 1023) to
the FTP server's command port, port 21.
Then, the client starts listening to port
N+1 and sends the FTP command
PORT N+1 to the FTP server. The server
will then connect back to the client's
specified data port from its local data
port, which is port 20.
FTPSafetyPeriod Safety period, in minutes, for files still No.
being written to local machine.
FTPStyle The style of FTP: Yes, in all INI files.
• stdUNIX
• stdWINDOWS
- or -
DIR,X,X,X,SIZE,DATE,TIME,NAME,DATE,TIME,SIZEORDIR,NAME
FTPDateFormat The FTP date format. Yes, if FTPSafetyPeriod
or PrependTimestamp,
and standard FTPStyle
are not used.
FTPTimeFormat The FTP time format. Yes, if FTPSafetyPeriod
or PrependTimestamp,
and standard FTPStyle
are not used.


Parameter Description Default Required?

FTPDirMatch • drwx for Unix drwx Yes, if remoteHost is a
• <DIR> for Windows Windows server.
upload 1 - Use the FTP to upload files (for 0 No
example, reports) from the client to the
server
0 - Use the FTP to download files from
the server to the client
Important: If you are using this FTP
PUT option:
• You should read all references in
this chapter (except where this
parameter is specifically mentioned)
as follows:-local/client machine is the
source, Server/remote host is the
destination, and 'download' is
'upload'
• The 'FolderFileLimit' parameter is
not used
• You must use the 24-hour (not 12-
hour) time format (Windows only)
uploadFileList The file listing command: Yes, if upload=1
For UNIX and cygwin, it should be 'ls -n',
and for Windows, it should be 'dir'.
FTPInDirectory The filepath from which to get the files on Yes, if upload=1
the client for upload.

Defining How Files are Grouped in Tar Files

If you are using the archiving or overload functionality of the FTP application, then you can use a
regular expression in the archivePeriodMask parameter to indicate the file name and which files
should be included in the tar file, subject to the limitations specified by the archiveMaxFiles and
archiveMaxSize parameters.

The default value for archivePeriodMask is .*, which means that all files are matched and nothing is
extracted. In this case, all of the files will go into the same tar file. However, by using bracketed
sections, you can choose to match and group according to a narrower definition.

Consider a simple example, where you have the following files that will be tarred:
• File1: '1320_abc_20100322_1152_qwexoiqwe'
• File2: '2344_abc_20100322_1156_crwercwerw'
• File3: '4234_abc_20100322_1242_wcrwercwer'

Where
• 20100322 represents the creation date
• 1152, 1156 and 1242 all represent the creation time

You may want to group these files in tar files according to the date and hour they were created, and
use this for the file name. To do this, you could use the regular expression abc_(\d{8}_\d{2})\d{2}_.

This would create two tar files:


• 'tarfile_20100322_11', containing File1 and File2
• 'tarfile_20100322_12', containing File3


Example FTP Configuration (INI) File


This section describes an example FTP Configuration (INI) file for Windows:

[Processing parameters]

SFTP=0

SFTPcompression=0

localFile=0

localFileList=dir

PRID=001302011

startOnDay=0

numberOfDays=20

datedDir=0

dateFormat=0

noFilesInList=10000000

MAXFilesInDownloadDir=10000

verbose=1

backup=1

useMonitorFile=1

unzipCommand=C:\Programs\gunzip.exe

zipExtension=.zip

unzipAfterTar=0

untarCommand=C:\Programs\gtar.exe

tarExtension=.gtar

LogSeverity=1

HeartbeatDeferTime=5

archiveBackup=0

archivePeriodMask=.*

archiveMaxFiles=100

archiveMaxSize=10000

archiveCommand= /bin/gtar

maxOutputFilesystemPercent=90

alternativeOutputFilesystem=0

maxAlternativeFilesystemPercent=90

diskUsageCommand=df -k. | tail -1


[Directory parameters]

optimaBase=/OPTIMA_DIR/<application_name>

LogDirectory=/OPTIMA_DIR/<application_name>/log

ProcDirectory=/OPTIMA_DIR/<application_name>/pid

FTPOutDirectory=/OPTIMA_DIR/<application_name>/out

FTPDownloadDir=/OPTIMA_DIR/<application_name>/download

FTPErrorDir=/OPTIMA_DIR/<application_name>/error

FTPFileListDir=/OPTIMA_DIR/<application_name>/list

FTPBackupDir=/OPTIMA_DIR/<application_name>/backup

FTPAltDirectory=/opt/AIoptima/opt_perf/ftp/

[Filename parameters]

dirMask=.*

fileMask=.*

excludeMask=^$

MAXFileSize=2000000000

MINFileSize=0

UseFolderFileLimit=1

FolderFileLimit=100

PrependSubDir=1,2

PrependSubStr=[0-9]{4}.*

PrependString=_BSC1_

PrependTimestamp=0

PrependHostname=1

PrependCollectDateTime=1

ReplaceColonWith=_

ReplacePoundWith=|

RemoveFromFileName=[0-9]{12}.

PrependSeparator=_

AppendSubDir=1

AppendSeparator=_

AppendString=_BSC2_

AppendBefore=.xls

AppendSubStr=[0-9]{4}.*


removeZipExtBeforeMatch=0

[FTP parameters]

remoteHost=192.168.3.35

remoteUser=optima

remotePass=ENC(kknbeX)ENC

remoteDir=/data02/home/optima/upender/ftp/input_files/pound

remoteArchiveDir=<...path...>

removeOnDownload=0

FTPType=ASCII

FTPActive=1

FTPSafetyPeriod=10

FTPStyle=stdWINDOWS

FTPDateFormat=MM-DD-YY

FTPTimeFormat=HH:MIAM

FTPDirMatch=<DIR>

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows

Maintenance of the FTP Application


In usual operation, the FTP application should not need any special maintenance. During
installation, the OPTIMA Directory Maintenance application will be configured to maintain the
backup and log directories automatically.

However, TEOCO recommends the following basic maintenance checks are carried out for the FTP
application:

Check The When Why

Backup directory to ensure Weekly A file not transferring indicates a problem with the
files have been transferred application.
Log messages for error Weekly In particular any Warning, Minor, Major and
messages Critical messages should be investigated.
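For example, on UNIX the log files could be scanned for higher-severity messages with a
command such as the following (assuming $OPTDIR is the OPTIMA home directory, the standard
log location is used, and the severity labels shown match those written to your log files):

grep -E 'WARNING|MINOR|MAJOR|CRITICAL' $OPTDIR/log/*.log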


Checking a Log File Message


The log file for each application is stored in the directory defined in the configuration (INI) file for
that application.

A new log file is created every day. The information level required in the log file is defined in the
General Settings dialog box and will be one of the following:
• Debug
• Information
• Warning
• Minor
• Major
• Critical

These levels help the user to restrict low level severity logging if required. For example, if Minor is
selected then only Minor, Major and Critical logging will occur.

Note: Utilities are provided for searching and filtering log messages and also for loading the
messages into the database. For more information, see Checking Log Files on page 41.

Stopping the FTP Application


The FTP application is designed to be scheduled and will terminate when all required files on the
remote server have been downloaded. For more information, see Starting and Stopping the Data
Loading Process on page 40.

Checking for Error Files


Files categorised as error files by the FTP application are stored in the directory as defined in the
configuration (INI) file.

The log file is expected to have information related to any error files found in the particular
directory. For more information about the log file, see Checking a Log File Message on page 82.

Checking the Version of the FTP Application


If you need to contact TEOCO support regarding any problems with the FTP application, you must
provide the version details.

You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

In Windows:

opx_FTP_GEN_302.exe -v

In Unix:

opx_FTP_GEN_302 -v

For more information about obtaining version details, see About Versioning on page 33.


Checking the Application is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.
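For example, on UNIX you could check for the PRID file and the corresponding process with
commands such as the following (assuming $OPTDIR is the OPTIMA home directory and the
default PRID location is used):

ls $OPTDIR/prid

ps -ef | grep opx_FTP_GEN_302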

FTP Message Log Codes


For more information on FTP message log codes, see Common Error Codes on page 52.

Troubleshooting

FTP Application

Symptom: Application not transferring files.
Possible causes: The application has not been scheduled; the crontab entry has been removed; the
application has crashed and the Process Monitor is not configured; incorrect configuration settings;
the server is not accessible (network problems).
Solutions: Use the Process Monitor to check the last run status. Check the crontab settings. Check
the configuration settings. Check the process list and the monitor file; if there is a monitor file and
no corresponding process with that PID, then remove the monitor file (Note: the Process Monitor
will do this automatically). Check the log for error messages that may indicate the network problem.

Symptom: Application exits immediately.
Possible cause: Another instance is running.
Solution: Use the Process Monitor to check the instances running.

Symptom: Collection of current data cannot be prioritised over older days.
Possible cause: Each individual FTP client has no control over the order in which it collects its data.
It scans a set of directories and subdirectories in the order returned by the FTP server (normally
alphabetical), and INI file settings do not change the order.
Solutions: If the date and time can be read out of the file name, or if data from different days is sent
to different directories, configure instances to collect for particular days. The INI file options
fileMask, excludeMask and dirMask may be useful for this purpose. If you have access to the
machines from which the files are collected, you can temporarily move all files older than a
specified number of days to another directory.


About the Database Acquisition Tool


The Database Acquisition Tool is used when you have to transfer data from one database to
another and a direct link between them is not allowed. The Database Acquisition Tool queries the
source database containing the required data and stores the result in a CSV file.

The Database Acquisition Tool can connect to the following databases:


• Oracle
• SQLServer
• InterBase
• ODBC
• DB2
• Informix
• Sybase
• MySQL
• PostgreSQL

Note: The database client libraries for these different databases should be on the system path in
order for the Database Acquisition Tool to connect to them.
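For example, on UNIX the Oracle client libraries could be added to the library search path before
starting the tool as follows (assuming ORACLE_HOME points to the Oracle client installation):

LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH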

Installing the Database Acquisition Tool


To install the Database Acquisition Tool, install the following files in the backend binary directory:
• opx_DAP_GEN_333.exe (Windows)
• opx_DAP_GEN_333 (Unix)

Important: If you want to run the Database Acquisition Tool on a UNIX (Sun Solaris) machine to
retrieve data from an SQL Server database, there are a number of additional pre-requisites:

1. Install the unixODBC library from http://www.unixodbc.org/, following the online instructions.

2. Install the FreeTDS library from http://www.freetds.org/, following the online instructions.

It is recommended that you use the ODBC-only configuration.

3. Configure two INI files:


o Modify the main Database Acquisition Tool INI file.
o Create an additional ini file, .odbc.ini, in the root location on the client machine that will
be used to run the Database Acquisition Tool. This supplies the interface to the
Windows machine.

For more information, see Configuring the Database Acquisition Tool for Sun Solaris
Machines and SQL Server Databases on page 93.


About the Database Acquisition Tool Modes


The Database Acquisition Tool queries the database using the following two methods:

Method Description

Query Mode The tool queries the database using a static SQL statement and the entire result set
of the query is output to a CSV file. In this mode, the tool does not store a history of
rows that have already been parsed from the database. Hence, no .lst file is created.
Date Query Mode The query is filtered by the date and time field in the query. The tool maintains a
history of rows previously parsed from database in .lst files. This means that the tool
outputs only new rows to the output file.

About the Query Mode


In this mode, the Database Acquisition Tool queries the database and outputs the entire result set
into a CSV file.

To do this, the Database Acquisition Tool needs the following:


• Database connection details
• Valid SQL query statement for the database being used
• Name to use as part of the output file name
• DATETIMEFORMAT for formatting any DATETIME fields in the output file

Example of the Query Mode

The following is an example of a table, data, and an INI file in Query Mode:

An example of table in query mode

An example of data in query mode

INI Example:

[MAIN]

InterfaceID=001

ProgramID=333

InstanceID=001


[DIR]

LogDir=/OPTIMA_DIR/<application_name>/log

TempDir=/OPTIMA_DIR/<application_name>/temp

PIDFileDir=/OPTIMA_DIR/<application_name>/prid

DirTo=/OPTIMA_DIR/<application_name>/out

[DBConfiguration]

DBString=OPT70

UserID=aircom

Password=ENC(Krw'jdep)ENC

DBClient=Oracle

[OPTIONS]

QueryMode=0

[QUERYMODE]

Name=MyQuery2

Query=select * from dbparser3

DateTimeFormat=YYYY/MM/DD HH24:MI:SS

Note: The Program ID, Interface ID and Instance ID make up the PRID. For more information, see
About PRIDs on page 29.

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows


About the Date Query Mode


In this mode, the Database Acquisition Tool queries the database and filters the result set between
a starting date and time and an ending date and time. It outputs to the CSV file only new rows that
have not previously been read from the database.

To do this, the Database Acquisition Tool needs the following information:


• Database connection details.
• A valid SQL query statement for the database being used.
• A name to use as part of the output file name.
• A valid SQL query statement to query the current time on the database machine.
• The DateTimeField column in the query to be used in the where clause.
• The SQL query must have a where clause with a :begindatetime parameter and an
:enddatetime parameter.
• The SQL query should be ordered by the DateTimeField column to increase performance
of the Database Acquisition Tool.
• A list of key columns which make each row in the result set unique.
• The granularity and look back period which will be used by the tool to calculate the
begindatetime and enddatetime parameters.
• The tool keeps track of rows already downloaded by creating LST files in the LST folder.
The list folder will contain sub-folders for each day. Each of these sub-folders will have up
to 24 LST files, one for each hour. Each of these files records which rows have already
been read from the database. Each line in an LST file is in the following format (see the
example after this list):

KeyField1 Value, ..., KeyFieldN Value, minute and second portion of the DateTimeField.
• The tool will adjust the DateTimeField values in the output file if AdjustForDST is set.
• The DateTimeFormat for formatting any DateTime fields in the output file.
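For illustration, assuming NumberKeyFields=2 with KeyField1=BSC and KeyField2=CELL (as in
the example INI file below), a row for cell CELL27 on BSC01 with a DateTimeField value ending in
05:30 might be recorded in the hourly LST file as a line of the form (the exact separator and time
formatting are determined by the tool):

BSC01,CELL27,0530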

Example of Date Query Mode

The following are examples of a table, some data, and an INI file in Date Query Mode:

An example of table in the date query mode

An example of data in the date query mode

INI example:

[MAIN]

InterfaceID=001

ProgramID=333

InstanceID=001

LogSeverity=2

Verbose=1

[DIR]

LogDir=/OPTIMA_DIR/<application_name>/log

TempDir=/OPTIMA_DIR/<application_name>/temp

PIDFileDir=/OPTIMA_DIR/<application_name>/prid

DirTo=/OPTIMA_DIR/<application_name>/out

DirLst=/OPTIMA_DIR/<application_name>/lst

[DBConfiguration]

DBString=OPT70

UserID=aircom

Password=ENC(ZqoT'h/r)ENC

DBClient=Oracle

[OPTIONS]

QueryMode=1

[DATEQUERYMODE]

Name=MyQuery

Query=select * from dbparser3 where DATENTIME between :begindatetime AND :enddatetime order by DATENTIME

CurrentDateTimeQuery=select sysdate from dual

Granularity=3

LookBackPeriod=100

AdjustForDST=0

OffsetWhenDSTActive=0

OffsetWhenDSTInactive=0


DateTimeField=DATENTIME

DateTimeFormat=YYYY/MM/DD HH24:MI:SS

NumberKeyFields=2

KeyField1=BSC

KeyField2=CELL

Note: The Program ID, Interface ID and Instance ID make up the PRID. For more information, see
About PRIDs on page 29.

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows

Configuring the Database Acquisition Tool INI File


The Database Acquisition Tool is configured using a configuration (INI) file. Configuration changes
are made by editing the parameters in the configuration (INI) file with a suitable text editor. The
Database Acquisition Tool configuration (INI) file is divided into different sections.

The following table describes the parameters in the [DIR] section:

Parameter Description

DirLog The location of the log files.


DirLst The location of LST files. This directory is only needed when using
QueryMode=1.
DirTemp The location of temporary files created by the tool.
DirTo The location of output CSV files.
PIDFileDir The location of the monitor (PID) file.

The following table describes the parameters in the [MAIN] section:

Parameter Description

InstanceID The three-character program instance identifier (mandatory).

InterfaceID The three-digit interface identifier (mandatory).


Iterations Used when the application does not run in continuous mode: the
application checks the input folder for input files for this number of
iterations before exiting. Integer values (1, 2, 3, 4 and so on) are
allowed.


Parameter Description

LogGranularity Defines the frequency of logging, the options are:


0 - Continuous
1 - Monthly
2 - Weekly
3 - Daily
LogLevel (or LogSeverity) Sets the level of information required in the log file. The available options
are:
1 - Debug
2 - Information (Default)
3 - Warning
4 - Minor
5 - Major
6 - Critical
PollingTime (or The pause (in seconds) between executions of the main loop when
RefreshTime) running continuously.
ProgramID The three-character program identifier (mandatory).
RunContinuously 0 - Have the Database Acquisition Tool run once.
1 - Have the Database Acquisition Tool continuously query the database.
Verbose 0 - Run silently. No log messages are displayed on the screen.
1 - Display log messages on the screen.

The following table describes the parameters in the [DBConfiguration] section:

Parameter Description

DBClient One of the following:


Oracle - Oracle client
SQLServer - SQL Server client
InterBase - InterBase client
SQLBase - SQLBase client
ODBC - ODBC client
DB2 - DB2 client
Informix - Informix client
Sybase - Sybase client
MySQL - MySQL client
PostgreSQL - PostgreSQL client
DBString Name of database the Database Acquisition Tool will connect to.
Password A string containing a password to use when establishing the connection.
UserID A string containing a user name to use when establishing the connection.

Important: If you want to run the Database Acquisition Tool on a UNIX (Sun Solaris) machine to
retrieve data from an SQL Server database, then you must set these parameters in a specific way.
For more information, see Configuring the Database Acquisition Tool for Sun Solaris Machines and
SQL Server Databases on page 93.


The following table describes the parameters in the [OPTIONS] section:

Parameter Description

QueryMode 0 - The Database Acquisition Tool will read remaining parameters from the
[QUERYMODE] section.
1 - The Database Acquisition Tool will read remaining parameters from the
[DATEQUERYMODE] section.

The following table describes the parameters in the [QUERYMODE] section:

Parameter Description

AdjustforDST 0 - Disable DST adjustment.


1 - Enable DST adjustment in accordance with the offset settings.
DateTimeFormat The format of the outputted date (see the example after this table):
WWW - Replaced by the locale's abbreviated weekday name.
DAY - Replaced by the locale's full weekday name.
DD - Replaced by the day of the month as a decimal number.
MONTH - Replaced by the locale's full month name.
MON - Replaced by the locale's abbreviated month name.
MMM - Replaced by the locale's abbreviated month name.
MM - Replaced by the month as a decimal number.
YYYY - Replaced by the year as a decimal number.
YY - Replaced by the last two digits of the year as a decimal number.
RR - Replaced by the year as a decimal number.
AM - Replaced by the locale's equivalent of either am or pm.
PM - Replaced by the locale's equivalent of either am or pm.
HH24 - Replaced by the hour (24-hour clock) as a decimal number.
HH - Replaced by the hour (12-hour clock) as a decimal number.
MI - Replaced by the minute as a decimal number.
SS - Replaced by the second as a decimal number.
Name A string which will be included in the output file name.
The output file will be written to DirTo and the file name format will be
Name__YYYYMMDDHH24MISSsss.csv.
OffsetWhenDSTActive Define time adjustment in minutes whenever DST is active.
OffsetWhenDSTInactive Define time adjustment in minutes whenever DST is inactive.
Query The SQL used to query the database.
The user should check that the SQL statement is valid before using it in the INI
file. The query can be validated by running it in the database vendor's query tool.
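For illustration, with DateTimeFormat=YYYY/MM/DD HH24:MI:SS (as used in the example INI files
in this chapter), a date and time of 22 September 2014, 14:05:30 would be output as:

2014/09/22 14:05:30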


The following table describes the parameters in the [DATEQUERYMODE] section:

Parameter Description

AdjustforDST 0 - Disable DST adjustment.


1 - Enable DST adjustment in accordance with the offset settings.
CurrentDateTimeQuery The SQL to get the current system date and time on the database machine.
Sample queries for different databases:
Oracle
SELECT sysdate FROM dual
SQL Server
select getdate()
MS Access
select now()
Informix
select current as DATE_AND_TIME from systables where tabid =
1
DateTimeField The field in the query which is the date and time field.
DateTimeFormat The format of the outputted date:
WWW - Replaced by the locale's abbreviated weekday name.
DAY - Replaced by the locale's full weekday name.
DD - Replaced by the day of the month as a decimal number.
MONTH - Replaced by the locale's full month name.
MON - Replaced by the locale's abbreviated month name.
MMM - Replaced by the locale's abbreviated month name.
MM - Replaced by the month as a decimal number.
YYYY - Replaced by the year as a decimal number.
YY - Replaced by the last two digits of the year as a decimal number.
RR - Replaced by the year as a decimal number.
AM - Replaced by the locale's equivalent of either am or pm.
PM - Replaced by the locale's equivalent of either am or pm.
HH24 - Replaced by the hour (24-hour clock) as a decimal number.
HH - Replaced by the hour (12-hour clock) as a decimal number.
MI - Replaced by the minute as a decimal number.
SS - Replaced by the second as a decimal number.
Granularity A number to indicate the time period to use in the SQL:
1 - The last quarterly interval
2 - The last hour
3 - The last day
4 - The last week (weeks start on Sunday)
5 - The last month
KeyField# Key field number # starting from KeyField1, KeyField2 and so on.
LookBackPeriod The number of periods to look back.


Parameter Description

Name A string which will be included in the output file name.


The output file will be written to DirTo and the file name format will be
Name__YYYYMMDDHH24MISSsss.csv.
NumberKeyFields The number of key fields.
OffsetWhenDSTActive Define time adjustment in minutes whenever DST is active.
OffsetWhenDSTInactive Define time adjustment in minutes whenever DST is inactive.
Query The SQL used to query the database.
The user should check that the SQL statement is valid before using it in the INI file.
The query can be validated by running it in the database vendor's query tool.

Important: If you want to run the Database Acquisition Tool on a UNIX (Sun Solaris) machine to
retrieve data from an SQL Server database, then you must also configure a separate .odbc.ini file
as well. For more information, see Configuring the Database Acquisition Tool for Sun Solaris
Machines and SQL Server Databases on page 93.

Configuring the Database Acquisition Tool for Sun Solaris Machines and SQL Server Databases
If you want to run the Database Acquisition Tool on a UNIX (Sun Solaris) machine to retrieve data
from an SQL Server database, then you must:
• Modify the main Database Acquisition Tool INI file
• Configure a separate .odbc.ini file

Modifying the Main Database Acquisition Tool INI File

You should modify the [DBConfiguration] section of the main Database Acquisition Tool INI file as
follows:

Parameter Description

DBClient ODBC
DBString The name of database the Database Acquisition Tool will connect to.
Important: This must match the name of the [DBName] section of the .odbc.ini
file.

For more information, see Configuring the Database Acquisition Tool INI File on page 89.
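For example, if the .odbc.ini file defines a [database1] section (as in the example later in this
section), the corresponding fragment of the main INI file might look like this, where the user name
and encrypted password are placeholders:

[DBConfiguration]
DBString=database1
UserID=admin
Password=ENC(xxxxxx)ENC
DBClient=ODBC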

Configuring a Separate .odbc.ini File

It is recommended that you give the main section of the configuration (INI) file the same name as
the database from which you want to retrieve data. This will make it easier to refer to in the future.


The following table describes the parameters in the [DBName] section:

Parameter Description

Description An optional description explaining, for example, what the INI file contains and
does.
Driver The network path for the driver or client library.
Trace To create logging files for the data retrieval process, specify 'Yes', otherwise set
as 'No'.
Server The IP address used to connect to the database.
Port The port used to connect to the database.
UID The user or schema name for the database.
PWD (Optional) The password for the user/schema name corresponding to the defined
UID.
Database The name of the database.

This is an example INI file:

[database1]
Description=ODBC connection to DB name
Driver=/usr/local/freetds/lib/libtdsodbc.so
Trace=No
Server=127.0.0.1
Port=162
UID=admin
PWD=password
Database=database1

Running the Database Acquisition Tool


To start the Database Acquisition Tool, type the executable name and a configuration file name into
the command prompt. If you are creating a new configuration file, this is when you choose the file
name.

In Windows type:

opx_DAP_GEN_333.exe opx_DAP_GEN_333.ini

In Unix type:

opx_DAP_GEN_333 opx_DAP_GEN_333.ini

Database Acquisition Tool Message Log Codes


This section describes the message log codes for the Database Acquisition Tool:

Message Description Severity


Code

3334 DBClient value is not supported. CRITICAL


7000 Could not create output file <target_file>. WARNING
Application Starts. INFORMATION

Application End. INFORMATION

Successfully created output file <target_file>. DEBUG


Message Description Severity


Code

8000 <exceptionerrorMsg>. CRITICAL


8001 Datefield not valid date type. DEBUG
Index: <index>. DEBUG

Connected to database <DBname>. DEBUG

Database exception when connecting to database {<DBname>} UserID DEBUG


{<UserID>}.
Empty result set for query. DEBUG

Result set return <numberOfColumns> columns. DEBUG

CSV Header: <headerLine>. DEBUG

CSV Line: <csvLine> DEBUG

Result set return <rowNumber> rows. DEBUG

SQL Statement: <SQLStatement>. DEBUG

Parameter begindatetime: <beginDateTime>. DEBUG

Parameter enddatetime: <endDateTime>. DEBUG

Loading .lst file into memory for starting hour: DEBUG


<currResultSetDateHour>.
Index does not exist in lst file <currResultSetDateHour>. DEBUG

Index exist in lst file <currResultSetDateHour>. DEBUG

Database current date & time is <dateTime>. DEBUG

<timePeriodMsg>. DEBUG

Updating lst files. DEBUG

Updating lst file for starting hour: <dateTimeHour>. DEBUG

8002 Empty result set or no new rows found. Deleted temp file. INFORMATION
8003 Empty result set. Deleted temp file. INFORMATION
8800 Failed to open file <fileName>. WARNING
8810 Reading lst file <fileName>. DEBUG
8811 First time seen data for this date and hour. File does not exist. DEBUG


About the OPTIMA CORBA Client


Common Object Request Broker Architecture (CORBA) allows distributed systems to be defined
independently of a specific programming language.

The OPTIMA CORBA Client connects to the CORBA interface for PM data acquisition using the
CORBA Naming Service running on the host or data source, which is usually an Element
Management System (EMS) or a Network Management System (NMS).

This picture shows the process flow for the OPTIMA CORBA Client:

Process flow for the OPTIMA CORBA Client

Note: getHistoryPMData(...) is an asynchronous method, so the CORBA client does not wait for
the server to generate the PM file. It simply requests that the server generate the file; the server
then generates the PM file at the location passed in the getHistoryPMData method.

Equipment vendors publish the details of their specific CORBA interface as Interface Definition
Language (IDL) files. These IDL files are used to create the CORBA client and server applications.

As IDL files use a proprietary file format, a specific CORBA client is required for each specific
CORBA interface. The requested data will be output as CSV files, either directly on the OPTIMA
Mediation Device (MD) or on the NMS or EMS and downloaded to the MD via the OPTIMA FTP
application.

You should refer to specific interface documents for the OPTIMA CORBA client deployed on a
particular network.


The OPTIMA CORBA Client supports these common functions:

Function Action

Logging Status and error messages are recorded in a daily log file.
Error Files If the application detects an error in the input file that prevents processing of
that file, then the file is moved to an error directory and processing continues
with the next file.
Monitor Files The application runs in a scheduled mode. A monitor (PID) file, created each
time the application is started, ensures that multiple instances of the application
cannot be run. The PID file is also used by the OPTIMA Process Monitor to
ensure that the application is operating normally.
PRID The automatically-assigned PRID uniquely identifies each instance of the
application. It is composed of a 9-character identifier, made up of Interface ID,
Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance ID
are both made up of 3 characters, which can be a combination of numbers and
uppercase letters.
For more information, see About PRIDs on page 29.
Backup The application can store a copy of each input file in a backup directory.

For more details on these common functions, see Introduction on page 15.

OPTIMA CORBA Client Parameters


The following table describes the parameters for the OPTIMA CORBA client:

Parameter Description

EMS_SESSION_FACTORY The fixed string for the connection to the server.


This value should be obtained from the server.
AIRCOM_TEST For testing purposes only.
This should be set to 0 for the production environment.
EMS_USER The username for the session created between the client and server.
Important: If you enter this incorrectly, you will receive a connection fail
exception.
EMS_PASS The password for the session created between the client and the server.
Important: If you enter this incorrectly, you will receive a connection fail
exception.
STATUS_DIR The directory path specifying the location of the status file ('ems-pmdata-
last-time.txt').
This file contains the last time for which a request was successfully sent to
the server. This parameter cannot be left blank.
START_TIME If this value is not specified, its value will be taken from 'ems-pmdata-last-
time.txt' file. The time in this file can be modified manually for testing.
The START_TIME should be earlier than the END_TIME.
END_TIME The END_TIME can be any date and time or 'now'.
If the END_TIME is set as 'now', the actual value is taken from the current
UTC time.
The END_TIME should be later than the START_TIME.


Parameter Description

MAX_TIME_MINUTES The maximum period of time in minutes (up to a limit of 30) for which the
data can be requested.
Based on this value, the START_TIME and END_TIME are modified
accordingly.
FTP_ADDR The full link (either the FTP hyperlink or the local drive path) of the file in
which the data from the server is to be stored.
The filename contains a placeholder for END_TIME which is replaced by
the value specified for the END_TIME parameter.
FTP_USER The username for FTP data.
FTP_PASS The password for FTP data.
TP_SELECT_LIST A server parameter for the gethistoryPMdata() method, which must be
one of these forms:
• If the request is for data from a single managed element (ME), for
example: 'TP_SELECT_LIST=
EMS:Huawei/T2000,ManagedElement:4063236'
• If the request is for data from all managed elements of the network to
be fetched. For each ME one file will be produced at FTP_ADDR.
This can be done by using the parameter value
'ManagedElementFromServer' or (on certain supported servers) by
leaving this parameter blank.
WAIT_FOR_UPLOAD A server parameter for the gethistoryPMdata() method.
PM_PARAMETERS Specifies the list of parameter types for which you want to query the
performance.
If this list is null, the performance of all the parameter types is queried.

MANAGED_ELEMENT_FILENAME The name of the file (including the directory path) which contains
the list of managed elements.
If this parameter is not specified, a file called “ManagedElement.txt” will
be created in the current working directory if
TP_SELECT_LIST=ManagedElementFromServer.
ME_COUNT This parameter is used by the getAllManagedElementNames method.
It fetches the specified number (the default is 20) of managed elements in
one iteration.
GRANULARITY The granularity of the data to be fetched from server.
The default value is 15 minutes (15min).
DATA_SAFETY_PERIOD The time in minutes that the client will wait before making a request to
fetch data.
WAIT_BEFORE_REQUESTING_ANOTHER_ME The time in milliseconds that the client will wait
before requesting data for the next Managed Element.
Values can start from 0, and a recommended value would be between 0
and 999.
This is particularly useful for avoiding putting excessive demands on the
server when it is not able to respond quickly.


Example OPTIMA CORBA Client INI File


This section describes an example OPTIMA CORBA Client ini file:

EMS_SESSION_FACTORY=TMF_MTNM.Class/Ericsson.Vendor/Ericsson/SOO-TMFv1_3_anknms.EmsInstance/3.0.Version/Ericsson/SOO-TMFv1_3_anknms.EmsSessionFactory_I

AIRCOM_TEST=0

EMS_USER=corbausr

EMS_PASS=huawei1

STATUS_DIR=/home/optima/TestCorba/execution/status

START_TIME=20100102011203.0Z

END_TIME=20100820040000.0Z

MAX_TIME_MINUTES=20

FTP_ADDR=10.6.104.44:/home/optima/TestCorba/execution/out/data-
END_TIME.csv

FTP_USER=optima

FTP_PASS=optima

TP_SELECT_LIST=ManagedElementFromServer

WAIT_FOR_UPLOAD=0

PM_PARAMETERS=

MANAGED_ELEMENT_FILENAME=/home/optima/TestCorba/execution/status/ME.txt

ME_COUNT=30

GRANULARITY=15min

DATA_SAFETY_PERIOD=30

WAIT_BEFORE_REQUESTING_ANOTHER_ME=1000
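
In this example, the END_TIME placeholder in the FTP_ADDR filename is replaced with the value of the END_TIME parameter, so (assuming the raw timestamp is substituted verbatim) the data would be uploaded to:

10.6.104.44:/home/optima/TestCorba/execution/out/data-20100820040000.0Z.csv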

CORBA Client/Server Message Log Codes


This section describes the message log codes for the CORBA client:

Message Code  Description  Severity

3000  FTP_ADDR not set -- no upload has been triggered.  CRITICAL
3000  Failed to <actionType>.<name> [<exceptionMsg>].  CRITICAL
3000  Failed to <actionType>.<actionName> (<emsName>).  CRITICAL
3000  Failed to <actionType>.<actionName>.  CRITICAL
3000  getHistoryPMData: destination = <destination> startTime = <start_time> endTime = <end_time>.  INFORMATION
3000  <INIfileStartTime> === <systemStartTime>.  DEBUG
3000  <INIfileEndTime> === <systemEndTime>.  DEBUG
3000  pmTPSelectList <listPosition> <emsName>.  DEBUG
3000  pmParameters <parameterPosition> <parameterName>.  DEBUG
3000  Sleep for <time> seconds.  DEBUG
3016  Finished.  INFORMATION

This section describes the message log codes for the CORBA server:

Error Code  Description  Severity

3016  Failed to <actionType>.<actionName>.  CRITICAL
3016  Finished.  INFORMATION


3 About SNMP Data Acquisition

OPTIMA can use SNMP (Simple Network Management Protocol) to detect SNMP devices such as
routers, servers and switches on the network and to collect reports from them. It can be configured
to use SNMP auto-collection or to use collection criteria that are manually provided.

Important: SNMP data acquisition carried out using the new Netrac interface is handled by Netrac
as described in the SNMP Agent User Guide for Netrac.

This table describes which of the SNMP data acquisition components are used under what circumstances by OPTIMA. There are three possible approaches to data acquisition, represented by the actions described in:
• Rows A, B and C
• Rows D, B and C
• Row E

Row A - Automatically discover devices on the network. [Recommended]
You will require: SNMP Poller Configuration Interface, Mediation Agent, SNMP Discoverer.
You will need to:
• Use the SNMP Poller Configuration Interface to assign scan definitions to Discoverer instances (see Assigning Scan Definitions to Machines on page 138) and generate Discoverer INI files (see Selecting Web Service Settings on page 140)
• Set up the Mediation Agent (see About the Mediation Agent on page 157) and Web Services (see Setting up the Web Server on page 158)
• Install the Discoverer instances with their INI files on all collection devices and schedule them to run daily

Row B - Automatically assign devices to SNMP Pollers. [Recommended]
You will require: Mediation Agent, SNMP Assigner.
You will need to:
• Install a single instance of the SNMP Assigner and schedule it to run daily (after the SNMP Discoverers)

Row C - Automatically collect reports from new devices. [Recommended]
You will require: SNMP Poller Configuration Interface, Mediation Agent, SNMP Poller.
You will need to:
• Use the SNMP Poller Configuration Interface to define SNMP Poller instances (see Assigning Devices (Agents) to Machines on page 135) and generate web-based SNMP Poller INI files (see Selecting Web Service Settings on page 140)
• Set up the Mediation Agent (see About the Mediation Agent on page 157) and Web Services (see Setting up the Web Server on page 158)
• Install SNMP Poller instances with INI files and schedule them to run at the required interval

Row D - Add new devices programmatically via the Discoverer API. [If scanning is not permitted or possible]
You will require: SNMP Poller Configuration Interface, Mediation Agent, SNMP Discoverer.
You will need to:
• Use the SNMP Poller Configuration Interface to define Discoverer instances (see Assigning Scan Definitions to Machines on page 138) and generate Discoverer INI files (see Selecting Web Service Settings on page 140)
• Set up the Mediation Agent (see About the Mediation Agent on page 157) and Web Services (see Setting up the Web Server on page 158)
• Install the Discoverer instances with their INI files on all collection devices and schedule them to run daily
• Create a custom script or application to read third party devices and add them via the Discoverer API (see Using the API to Acquire Device Information on page 134)

Row E - Manually configure report collection.
You will require: Mediation Agent, SNMP Poller.
You will need to:
• Use the SNMP Poller Configuration Interface to define devices (see Manually Defining Devices on page 127), assign devices to SNMP Poller instances (see Assigning Devices (Agents) to Machines on page 135) and generate non web-based SNMP Poller INI files (see Selecting Web Service Settings on page 140)
• Install SNMP Poller instances with INI files and schedule them to run at the required interval


About SNMP Auto-Collection


SNMP auto-collection involves the following components acting in conjunction with the OPTIMA
database:
• SNMP Poller Configuration Interface
• SNMP Poller
• SNMP Discoverer
• SNMP Assigner
• Mediation Agent

This picture illustrates how these components interact:

About Simple Network Management Protocol (SNMP)


In network management systems, Simple Network Management Protocol (SNMP) is used to
monitor network-attached devices for conditions that require the attention of the administrator.

An SNMP-managed network is made up of three main components:

Component  Description

Managed device (also known as a network element)  A network node that contains an SNMP agent.
Managed devices reside on a managed network, and collect/store management information, which is then made available to Network Management Systems using SNMP.
Examples of managed devices are routers, bridges, hubs and printers.

Agent  A network-management software module that resides in managed devices.
An agent translates localised management information into a form that is compatible with SNMP.

Network Management System (NMS)  Uses different applications to monitor and control managed devices, and provides the bulk of the processing and memory resources required for network management.
One or more NMSs can exist on any managed network.

About Management Information Bases (MIBs)


SNMP uses an extensible design, where the available information is defined by Management
Information Bases (MIBs).

MIBs use the notation defined by ASN.1, and describe the structure of the management data of a device subsystem, using a hierarchical namespace containing object identifiers (OIDs). An example OID could be 1.3.6.1.4.1.XXXX.1.2.102.

The MIB hierarchy can be depicted as a tree with a nameless root, the levels of which are assigned
by different organizations:
• The top-level MIB OIDs belong to different standards organizations
• Lower-level OIDs are allocated by associated organizations

The original MIB for managing a TCP/IP Internet was called MIB-I. MIB-II, published later, added a
number of useful variables missing from MIB-I.

Each OID identifies a variable that can be read or set using SNMP. The OIDs describe a tree
structure, where each number separated by a decimal point represents a branch on that tree. Each
OID begins at the root level of the OID domain and gradually becomes more specific.
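
For example, the standard MIB-II object sysName sits under the 'system' group, and each number in its OID names one branch of the tree:

1.3.6.1.2.1.1.5.0
= iso(1) . org(3) . dod(6) . internet(1) . mgmt(2) . mib-2(1) . system(1) . sysName(5) . 0 (scalar instance)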

About SNMP Versions


In practice, SNMP implementations often support multiple versions - typically SNMPv1, SNMPv2c,
and SNMPv3. This table describes these versions:

Version Description

SNMPv1 The initial implementation of the SNMP protocol.


Has been criticised for its poor security.
SNMPv2 Revises version 1 and includes improvements in the areas of performance, security,
confidentiality, and manager-to-manager communications.
Introduced GETBULK, an alternative to iterative GETNEXTs for retrieving large amounts of
management data in a single request.
SNMP v2c comprises SNMP v2 with the simple community-based security scheme of
SNMP v1 and is widely considered the de facto SNMP v2 standard.
SNMPv3 Primarily added security and remote configuration enhancements to SNMP.
SNMPv3 is the current standard version of SNMP.

Typically, SNMP uses UDP ports 161 for the Agent and 162 for the Manager.

The Manager may send requests from any available ports (source port) to port 161 in the agent
(destination port). The agent response will be given back to the source port. The Manager will
receive traps on port 162. The agent may generate traps from any available port.
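
This port usage can be summarised as follows:

Manager (any source port)  --- GET/SET request --->  Agent (UDP port 161)
Manager (same source port) <--- response ---------- Agent (UDP port 161)
Manager (UDP port 162)     <--- trap -------------- Agent (any source port)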


You can use the SNMP Poller Configuration Interface to:


• Define vendor interfaces
• Configure the SNMP Poller
• Configure the SNMP Discoverer

The SNMP Poller Configuration Interface enables you to establish configuration details for SNMP
Auto-Collection on the OPTIMA database:

Prerequisites to Using the SNMP Poller Configuration Interface


Before you can use the SNMP Poller Configuration Interface, ensure that:

1. You have installed:


o .NET 3.5 (and the associated Service Pack)
o Oracle Client for 10g or 11g (as applicable)

2. You have an OPTIMA 8.0 database, with a completely up-to-date schema.

3. You have run the 'create_SNMP_tables.sql' script on the database to which you want to
connect.

4. You have installed the SNMP Poller Interface, using the setup.exe provided with the
installation package.

5. You have a pre-defined set of MIBs (Management Information Bases), stored as CSV files.
These contain data for the managed objects (that is, the characteristics of the managed
device that you want to manage).
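
As an illustration, the 'create_SNMP_tables.sql' script (step 3) can be run using SQL*Plus; the username, password and connect string below are placeholders for your own environment:

sqlplus aircom/password@OPTIMADB @create_SNMP_tables.sql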


Logging into the SNMP Poller Configuration Interface


To log into the SNMP Poller Configuration Interface:

1. Select Start>All Programs>Aircom International>AIRCOM OPTIMA Backend 8.0>Bin


Folder and double-click SNMP Poller GUI.exe.

The SNMP Poller Configuration database connection dialog box appears:

2. Type in the connection details (database name, username and password).

Note: The user must have been granted the OPTIMA_SNMPPOLLER_USERS role.
OPTIMA_SNMPPOLLER_USER is the default user provided.

3. Click Load Configuration.

The SNMP Poller Configuration Interface appears:

In this dialog box, you can configure the SNMP Poller. The current configuration details are
loaded and kept in memory, until you save them.

Configuring the SNMP Poller


To configure the SNMP Poller, it is recommended that you follow these steps, working through
the tabs in the SNMP Poller Configuration Interface:

1. On the Reports-Managed Objects tab, load the managed objects and create reports in
report groups.
For more information, see Loading MIBs and Creating Reports on page 107.

2. On the Devices tab, define the devices on which you want to run the reports.
For more information, see Defining Devices (Agents) to be Polled on page 115.

3. On the Reports-Devices tab, set which reports you want to run on which devices.
For more information, see Assigning Reports to Device Types on page 129.

4. On the Pollers-Devices tab, define the poller machines, and set which devices they will
poll.
For more information, see Assigning Devices (Agents) to Machines on page 135.

5. On the Summary tab, view the details that you have configured.
For more information, see Viewing a Summary of the SNMP Poller Configuration on page
140.

6. From the Actions menu, select Webservice settings.


For more information, see Selecting Web Service Settings on page 140.

7. Generate an INI file containing the settings that you have configured.
For more information, see Generating an INI File of SNMP Poller Settings.

8. Manually tune the INI file with a number of additional parameters, if required.
For more information, see Manually Tuning the SNMP Poller Settings INI File on page 144.

Important: As you complete the details on each tab, it is recommended that you save your
configuration using the 'Save to database' button.

Loading MIBs and Creating Reports


On the Reports-Managed Objects tab of the SNMP Poller GUI, you can load the MIBs and define
which ones you want to report on by creating reports. You must then add these reports to report
groups so that you can assign them to devices later.

Important: To do this, you must have a pre-defined set of MIBs (Management Information Bases),
stored as CSV files. If they are not stored as CSV files, then you can convert them using the MIB to
CSV option. For more information, see Converting MIB Files to CSV Files on page 111.

To load managed objects:

1. From the MIBs menu, click Load Managed Objects from CSV file.

2. In the dialog box that appears, locate the CSV file containing the MIBs that you want to
load, and then click Open.


The required managed objects are loaded into the Managed Objects pane:

Tip: You can remove a managed object (or an entire branch of managed objects) from the
Managed Objects pane by right-clicking it, and then clicking Remove Managed Object
(or Remove Managed Object Branch) from the menu that appears.

You can now create reports including these managed objects.

To create a report:

1. Right-click in the Reports pane, and from the menu that appears, click Add Report:

2. In the Report Details dialog box, type the name of the report and a description of what it
contains:


3. Click OK.

The (empty) report is created.

4. To add a managed object to the report, in the Managed Objects pane, select the required
managed object and then either:
o Drag it into the Reports pane, and drop it onto the report name

- or -
o Drag it into the Report Managed Objects pane, and drop it into the white space

The managed object appears in the Report Managed Objects pane:

Tip: You can assign an entire group of managed objects to a report, by dragging and
dropping the folder that contains them.

This picture shows an example report, which will return management data on the 'System' group of
managed objects (for example, sysName and sysLocation):

SNMP Report

After you have created a report, you must add it to a report group, so that it can be assigned to a
device later on.


To create a report group:

1. Ensure that you have created at least one report.

2. Right-click in the Report Groups pane, and from the menu that appears, click Add Report
Group:

3. In the Report Groups dialog box, type the name of the report group and a description of
what it contains:

4. Click OK.

The (empty) report group is created.

5. To add a report to the report group, in the Reports pane, select the required report and
then either:
o Drag the report into the Report Groups pane, and then drop it onto the report name

- or -
o Drag the report into the Reports in group pane, and then drop it into the white space

Tip: You can select multiple reports by holding down the Ctrl key and clicking each one.


The report appears in the Reports in group pane:

6. Click the 'Save to database' button to save the report groups and reports.

Converting MIB Files to CSV Files


When you are loading SNMP MIBs into the SNMP Poller GUI, you can only load them as CSV files.
If you do not already have the MIB files saved in CSV format, then you can convert them.

Important: To use this conversion option, you must have the Java Runtime Environment (JRE)
installed, and the Java command should be set on the system PATH. The conversion option uses a
number of jar files (MibToCSV.jar and MIB parser library jar files), which are installed automatically
the first time that you open the MIB to CSV dialog box, and are stored in C:\Program Files
(x86)\Aircom International\Optima Backend 8.0\Bin\MIBToCSV.

To do this:

1. From the MIBs menu, click Run MIBToCSV.

The MIB to CSV dialog box appears:


2. In the Location of MIB files pane:


o Click the Browse button
o Locate the folder containing the MIB files that you want to convert
o Click OK

3. In the Location to place CSV files pane:


o Click the Browse button
o Locate the folder into which you want to save the converted CSV files
o Click OK

Note: The Command to run MIBToCSV appears automatically, and is read-only.

This picture shows an example of MIB to CSV dialog box:

4. Click the Run MIBToCSV button.

The selected MIB files are converted into CSV files.


The progress of the conversion is displayed in the Log pane:

5. If any errors have occurred during the conversion process, you can save the error log as a
separate file, for example to distribute it to the relevant groups. To do this:
o Click the Save Log to file button
o In the dialog box that appears, choose a suitable location and type an appropriate
filename
o Click Save

Importing and Exporting MIB Files


To load MIB files into the SNMP Poller GUI for creating reports, you can import MIB files using an
*.smc file.

To do this:

1. In the SNMP Poller Configuration dialog box, from the MIBs menu, click Import.

2. In the dialog box that appears, locate the required *.smc file and then click Open.

The MIBs are imported.

You can also export all of the loaded MIB files in the same format, and merge the MIBs within
different systems.

To do this:

1. In the SNMP Poller Configuration dialog box, from the MIBs menu, click Export.

2. In the dialog box that appears, locate the required folder, type a name for the *.smc file and
then click Save.

The MIBs are exported.


Editing and Deleting Reports and Report Groups


On the Reports-Managed Objects tab on the SNMP Poller GUI, you can edit and delete reports
and report groups.

Important: Before deleting a report or report group, ensure that it is not in use, otherwise you may
affect the rest of your configuration.

To edit the details of a report:

1. In the Reports pane, right-click the report that you want to edit.

2. From the menu that appears, click Edit Report.

The Report Details dialog box appears, in which you can edit the name and description of
the report.

3. Click OK.

To remove a managed object from a report:

1. In the Report Managed Objects pane, right-click the managed object that you want to
remove from the report.

2. From the menu that appears, click Remove.

Tip: To remove all managed objects from a report, right-click in the Report Managed
Objects pane, and from the menu that appears, click Remove All.

To delete a report:

1. In the Reports pane, right-click the report that you want to delete.

2. From the menu that appears, click Delete Report.

3. Click Yes to confirm the deletion.

To edit the details of a report group:

1. In the Report Groups pane, right-click the report group that you want to edit.

2. From the menu that appears, click Edit Report Group.

The Report Group Details dialog box appears, in which you can edit the name and
description of the report group.

3. Click OK.

To remove a report from a report group:

1. In the Reports in group pane, right-click the report that you want to remove from the report
group.

2. From the menu that appears, click Remove report from group.

Tip: To remove all reports from a report group, right-click in the Reports in group pane,
and from the menu that appears, click Remove All.


To delete a report group:

1. In the Report Groups pane, right-click the report group that you want to delete.

2. From the menu that appears, click Delete Report Group.

3. Click Yes to confirm the deletion.

Defining Devices (Agents) to be Polled


On the Devices tab of the SNMP Poller Configuration Interface, you can define the devices
(agents) on which you want to run the reports. Devices can be organised by vendor or by function.

To define a device to be polled:

1. Select the required view to determine whether devices are organised by vendor or by
function.

2. Right-click in the left pane, and from the menu that appears, click Add Vendor or Add
Function as appropriate.

3. In the dialog box that appears, type the vendor or function name and then click OK, for
example:

4. Right-click the vendor name or function name, and from the menu that appears, click Add
Type Group.

5. In the dialog box that appears, type the name of the type group and then click OK:

6. Right-click the type group name, and from the menu that appears, click Add Type.

7. In the dialog box that appears, type the device type and then click OK:

You now have the correct structure for organising your individual devices and you can
define the individual devices to be polled in two ways:
o Find and load existing devices, either automatically or manually

- or -
o Add devices manually

When you have defined your devices, click the 'Save to database' button to save them.

Adding and Removing Recognition Rules


In the SNMP Poller Configuration Interface, on the Devices tab, in the Rules pane, you can define
the recognition rules to be used when devices identified by the SNMP Discoverer are automatically
assigned to items in the left-hand pane. This picture shows an example where three rules have been
created to identify an HP device, Cisco devices, and SCE or NZR devices:

The first rule identifies any devices that include a description beginning with HP.

The second rule uses a regular expression to find any device that includes Cisco anywhere in the
description.

The third rule uses a regular expression to find any device that includes SCE or NZR anywhere in
the description.

For any function defined in the Devices tab of the SNMP Poller Configuration Interface GUI, it is
possible to combine multiple rules to implement the AND operation. However, a single rule is
required to implement the OR operation, using the example Regex (third) rule format.
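
For example, the OR behaviour of the third rule can be expressed in a single Regex rule with a pattern such as (a minimal sketch):

SCE|NZR

The alternation operator '|' makes this match any device whose description contains either SCE or NZR.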

To add a recognition rule for a function that you have selected in the left-hand pane:

1. From the Rule Type drop-down list, select the required rule type from:
o Begins with
o Ends with
o Contains
o Does not contain
o Equals
o Does not equal
o Regex (Regular Expression)

Note: If, in the left-hand pane, you select All device type groups/All device types/Any
functions, the Rule Type drop-down list is unavailable.

2. If applicable, select from the OID drop-down list the required object identifier.

3. If applicable, type a search parameter in the Content/Regex/Keyword column.

4. If required, type a description of the rule in the Description column.

5. Press the Return key. The rule is listed below the entry row and you can add further rules if
required.

To remove a recognition rule:

1. Right-click the rule.

2. From the menu that appears, click Delete. The rule is removed.


Finding and Loading Devices Automatically


When defining the devices (or agents) to be polled by the SNMP Poller, you can configure the
SNMP Discoverer (via a Mediation Agent) to find and load devices automatically.

Note: If you want to find and load devices manually, using the SNMP Poller Configuration Interface,
see Finding and Loading Devices Manually on page 125.

To find and load devices automatically:

1. Configure one or more scan definitions, which specify the search criteria for the devices.

You can do this:


o Automatically, using the SNMP Scan Definition Editor which stores scan definitions on
the OPTIMA database. For more information, see Creating the Scan Definition File
Automatically on page 119.
o Manually, by creating your own scan definition file in XML format, or editing an existing
one. For more information about the file structure, see About Scan Definition Files on
page 121.

2. Open the SNMP Poller Configuration Interface. The SNMP Discoverer scans for devices
based on the parameters defined in the Scan Definition file. Any devices that are found are
saved in the database, and displayed on the Already Discovered subtab of the Devices
tab:

Devices that are found automatically are also automatically assigned to a device type
according to the recognition rules specified in the Rules pane on the Devices tab if
possible.

3. Choose the Devices assigned for a selected item option and then an item in the left-hand
pane to see the devices assigned for that item.

- or -

Select the Unassigned devices option to see a list of devices that have been found
automatically but could not be assigned automatically because they do not comply with any
of the recognition rules.

4. Click on each of the unassigned devices and drag it onto the required item in the left-hand
pane. The devices are removed from the unassigned devices list and appear when the
Devices assigned for a selected item option is chosen.


Tips: To automate the search for devices even further, you can:
• Schedule the automatic device scan via Discoverer as a service using Cron.
• Configure a system alarm to be raised whenever a new device is discovered. For more
information on how to do this, see Creating Alarms for Discovered Devices on page 124.
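
For example, a Cron entry along these lines (the installation path and INI filename are hypothetical) would launch a Discoverer instance at 02:00 every day:

0 2 * * * /opt/optima/bin/discoverer /opt/optima/config/discoverer.ini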

Configuring Timeouts for Automatically Discovered Devices


Once a device has been discovered and assigned, you can configure a time-out value for it. To do
this:

1. In the left pane of the Devices tab, select the item to which the device is assigned.

2. In the SNMP Devices pane, on the Already discovered tab, select the Devices assigned
for a selected item option.

3. Right-click on the required device and from the menu that appears, select Edit Device. The
Device Details dialog box appears:

4. Edit the time-out value as required and click OK.

5. In the SNMP Poller Configuration Interface, click Save to database.


Creating the Scan Definition File Automatically


The quickest and easiest way to create a scan definition that will be stored on the OPTIMA
database is to use the SNMP Scan Definition Editor.

To do this:

1. In the SNMP Poller Configuration Interface, from the Actions menu, select Run Scan
Definition Editor. The SNMP Scan Definition Editor appears:

2. Click New. A new name in the format Scan000 is generated and can be selected from the
Scan Definitions drop-down list.

3. Select the newly generated name. It appears in the Name field and you can change it if you
wish.

4. Specify the port number to be used for the scan, the community string that identifies the
logical group of devices to be included in the scan, and the SNMP version to be used.

5. To add a single IP address:


o Click the Add IPs (Single) button
o In the Single IPs pane that appears, type the IP address on which you want to search:


Tip: If you want to delete an IP address, click the Delete button that appears when you
hover to the right of the required address:

6. To add a range of IP addresses:


o Click the Add IPs (Range) button
o In the IP Address Range pane that appears, type the start and end IP addresses to
define the required range:

Tip: The number of addresses that fall within this range is displayed in brackets to the right
of the range.
If you want to delete the range, click the Delete button that appears when you hover to the
right of the end address:

7. If you have defined an IP address range, you can add an exclusion, which will ignore a
particular IP address (or addresses) within this range. To do this:
o Click the Add Exclusion button to the right of the range:


o In the Exclusions pane that appears, if you want to exclude a range of addresses,
then type the addresses that you want to exclude:

- or -

If you want to exclude a single address, then click the green left arrow button to
change to a single address, and then type the address that you want to exclude:

8. Repeat steps 5-7 to add all of the IP addresses, ranges and exclusions that you require.

9. Click Save.

You can also export the scan definition to an XML file using the Export button, and you can import
scan definition XML files that you have previously exported or have created manually, by using the
Import button. For more information about the XML file structure, see About Scan Definition Files
on page 121.

About Scan Definition Files


Scan definition files specify the IP addresses to be interrogated.

If you want to create or edit a scan definition file manually, then this topic describes the structure for
the scan definition XML file.

All Scan Definition Files

All XML files containing scan definitions begin with the header:
<?xml version="1.0" encoding="utf-8" ?>
More than one scan definition can be included in the XML file, which is delimited by <ScanDefinitions></ScanDefinitions> tags.

Individual scan definitions are enclosed by a pair of <ScanDefinition> tags, and made up of:
• Four variables - Name, Port, Version, Community
• If required, a series of IP addresses, IP address ranges and IP address range exclusions,
all enclosed within their own tags

The following examples demonstrate this.


Scan Definition File Including Particular IP Addresses, Address Ranges and Ranges with
Exclusions

In this example:
• Each individual IP address is specified in its own pair of <StartIp> tags
• A range of IP addresses is specified using <StartIp> and <EndIp> tags
• A range of exclusions is specified using <ScanExclusions><ScanExclusion> and
</ScanExclusion></ScanExclusions> tags
<?xml version="1.0" encoding="utf-8"?>
<ScanDefinitions
xmlns="http://schemas.datacontract.org/2004/07/Aircom.Optima.SNMP.ScanDef
initionEditor" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<ScanDefinition>
<Name>192.168.13.20-28</Name>
<Port>161</Port>
<Version>2</Version>
<Community>public</Community>
<ScanAddress>
<ScanAddress>
<StartIp>192.168.13.21</StartIp>
<EndIp i:nil="true" />
</ScanAddress>
<ScanAddress>
<StartIp>192.168.13.22</StartIp>
<EndIp>192.168.13.31</EndIp>
<ScanExclusions>
<ScanExclusion>
<StartIp>192.168.13.30</StartIp>
<EndIp>192.168.13.31</EndIp>
</ScanExclusion>
<ScanExclusion>
<StartIp>192.168.13.29</StartIp>
<EndIp i:nil="true" />
</ScanExclusion>
</ScanExclusions>
</ScanAddress>
<ScanAddress>
<StartIp>192.168.13.20</StartIp>
<EndIp i:nil="true" />
</ScanAddress>
</ScanAddress>
</ScanDefinition>
</ScanDefinitions>


Scan Definition File Including All IP Addresses

In this example, all the IP addresses within each Name are included:

<?xml version="1.0" encoding="utf-8"?>


<ScanDefinitions
xmlns="http://schemas.datacontract.org/2004/07/Aircom.Optima.SNMP.ScanDef
initionEditor" xmlns:i="http://www.w3.org/2001/XMLSchema-instance">
<ScanDefinition>
<Name>192.168.13.20-28</Name>
<Port>161</Port>
<Version>2</Version>
<Community>public</Community>
</ScanDefinition>
<ScanDefinition>
<Name>Scan001</Name>
<Port>161</Port>
<Version>1</Version>
<Community>public</Community>
</ScanDefinition>
<ScanDefinition>
<Name>Scan002</Name>
<Port>161</Port>
<Version>1</Version>
<Community>public</Community>
</ScanDefinition>
<ScanDefinition>
<Name>Scan003</Name>
<Port>161</Port>
<Version>1</Version>
<Community>public</Community>
</ScanDefinition>
<ScanDefinition>
<Name>Scan004</Name>
<Port>161</Port>
<Version>1</Version>
<Community>public</Community>
</ScanDefinition>
<ScanDefinition>
<Name>Scan005</Name>
<Port>161</Port>
<Version>1</Version>
<Community>public</Community>
</ScanDefinition>
<ScanDefinition>
<Name>Scan006</Name>
<Port>161</Port>
<Version>1</Version>
<Community>public</Community>
</ScanDefinition>
</ScanDefinitions>


Exporting and Importing Scan Definition Files


Having created a scan definition, as well as saving it to the OPTIMA database, you can, if you wish,
export it as an XML file. It can then subsequently be imported if necessary.

To export scan definition files:

1. In the SNMP Poller Configuration Interface, from the Actions menu, select Run Scan
Definition Editor. The SNMP Scan Definition Editor appears.

2. To export all scan definitions to a file, click Export.

- or -

To export an individual scan definition to a file, select the required scan definition from the
drop-down list in the Scan Definitions field and click Export.

If you have selected an individual file in the Scan Definitions field, a message appears.
Click No to save an individual file or Yes to save all files.

3. In the Save As dialog box, select the folder in which your exported file is to be stored, type
a file name for the file and click Save.

To import scan definition files:

1. In the SNMP Poller Configuration Interface, from the Actions menu, select Run Scan
Definition Editor. The SNMP Scan Definition Editor appears.

2. Click Import.

3. In the Open dialog box, select the required file and click Open. If scan definitions with the
same names already exist, you will be asked if you wish to overwrite them.

4. An import summary appears showing how many scan definitions have been added and
how many replaced. Click OK.

Creating Alarms for Discovered Devices


You can configure OPTIMA to raise a system alarm each time that a new device is found.

To do this, create a system alarm using the Alarms Editor, in which the Set Alarm SQL and Clear
Alarm SQL query the DISCOVERED_DATETIME parameter from the AIRCOM.SNMP_DEVICE
table.

For example:

SELECT * FROM AIRCOM.SNMP_DEVICE WHERE %PARAM_DISCOVERED_DATETIME
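
As an illustration only, if the %PARAM_DISCOVERED_DATETIME placeholder expands to a time-window condition on the DISCOVERED_DATETIME column, the resulting query might look like this (hypothetical expansion, checking for devices discovered in the last hour):

SELECT * FROM AIRCOM.SNMP_DEVICE
WHERE DISCOVERED_DATETIME > SYSDATE - 1/24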


Finding and Loading Devices Manually


On the Devices tab of the SNMP Poller Configuration Interface, you can define the individual
devices that will be polled. One way to do this is to search for existing devices, load their details
into the SNMP Poller Configuration Interface, and then assign them to the corresponding device
type.

To do this:

1. In the SNMP Devices pane, on the Discover now tab, click the Find Devices button.

The Discover Devices dialog box appears:

Tip: The Discover Devices dialog box can be used to scan the network for any existing
SNMP devices in the network, and identify any SNMP ALIVE devices.

2. Define the criteria that you want to use to search for existing devices. The criteria are
described in this table:

Item Description

IP Address Begin Range and IP Address End Range  Set the start and finish range for the IP addresses of the required devices. If you know the IP address of the device that you are looking for, type the same value in the start and finish range.
Exclude IP Begin and Exclude IP End  If required, exclude certain IP addresses. This is particularly useful for eliminating a portion of the IP addresses within the range that you already know are not applicable (for example, because they are assigned to servers).
For example, rather than scan a whole network from (begin range) 192.168.0.0 to (end range) 192.168.255.255, you could choose to exclude a group of servers in between, from (exclude IP begin) 192.168.10.10 to (exclude IP end) 192.168.120.150.


Port Choose the port from which the device transmits the information.
Read Community Define the Read Community, which is the community string to use in all
poller requests.

3. When you have set all of the criteria, click the Run Discovery button. The Discover
Devices dialog box appears.

4. In the Discover Devices dialog box, select the folder to which you want to save the results,
and then click OK.

All of the found devices that meet the chosen criteria and are SNMP-compliant are
displayed in the Discovered Devices pane:

The results are saved automatically in a time-stamped CSV file in the folder that you
specified, for example:


Warning: The details of the devices identified are not yet saved to the database even if
you click the Save to database button.

5. Click OK.

The discovered devices appear on the Discover now sub-tab on the Devices tab of the
SNMP Poller Configuration Interface:

6. To assign them to their corresponding type, drag and drop them onto the type name in the
left-hand pane. The selected devices appear on the Already discovered sub-tab when the
Devices assigned for a selected item option is selected.

7. Click the Save to database button. The device details are saved to the SNMP_DEVICE,
SNMP_DEVICE_TUPLES and SNMP_TUPLES database tables.
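
As a quick sanity check (a sketch; it assumes you have SELECT access to the AIRCOM schema), you can confirm that the device rows have been written:

SELECT COUNT(*) FROM AIRCOM.SNMP_DEVICE;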

Tip: If you have run a device discovery before, rather than re-scanning the network again (and
creating superfluous data in the network), you can load the previous results. To do this:
o In the Discover Devices dialog box, click the 'Load discovered devices' button
o Locate the required file
o Click Open

The devices discovered at that time and date are loaded into the Discovered Devices
pane.

Manually Defining Devices


On the Devices tab of the SNMP Poller Configuration Interface, as well as searching for existing
devices, you can also manually define devices for a particular device type. To do this:

1. In the left-hand pane, if required, select the By vendors option.

2. If required, select the device type:

3. In the SNMP Devices pane, on the Already discovered sub-tab, select the Unassigned
devices option.


4. Right-click in the SNMP Devices pane and from the menu that appears, click Add Device.

The Device Details dialog box appears:

5. Define the required details for the device:

Item Description

IP Address The IP address of the device that the SNMP Poller will connect to.
Port The number of the port from which the device transmits the information.
Read Community The community string used in all poller requests.
Note: The community string is encrypted when it appears in the Selected Type
Devices pane, but is decrypted if you choose to edit the device details.
Hostname The host name of the device.
Time Out The period of time after which the device will be considered as 'timed out'.
The unit of measure used for this value is 10 milliseconds. For example, if you
set the value to 100, the timeout will be 1 second (100*10 milliseconds = 1000
milliseconds).
Retry The number of retries that the SNMP Poller will attempt after a 'timeout' when
polling data from this device.
SNMP Version The version of SNMP to use.

6. Click OK.

The device is listed in the SNMP Devices pane when the Unassigned devices option is
selected.

Note: If you create a device using the Devices Details dialog box and click the Save to database
button, the device details are saved to the appropriate table in the database (SNMP_DEVICE
table). The device only qualifies as a partially discovered device when you drag-and-drop it to a
device type in the Device Types frame of the Devices tab and you click the Save to database
button (updating the SNMP_DEVICE_TUPLES and SNMP_TUPLES database tables).


Once you have dragged and dropped a device to assign it to a device type, it is removed from
the Unassigned devices list and appears under the Devices assigned for a selected item list
when the associated device type is selected:

Editing and Deleting Devices


On the Device tab of the SNMP Poller Configuration Interface, you can:
• Rename vendors, device type groups and device types
• Edit and delete devices

Tip: Before deleting a device, ensure that it is not in use, otherwise you may affect the rest of your configuration.

To rename a vendor, device type group or device type:

1. Right-click the item that you want to rename.

2. From the menu that appears, click Edit Vendor, Type Group or Type as required.

3. In the dialog box that appears, rename the item, and then click OK.

To edit a device:

1. In the Selected Type Devices pane, right-click the device that you want to edit.

2. From the menu that appears, click Edit Device.

3. In the Device Details dialog box, edit the device parameters as required.

4. Click OK.

To delete a device:

1. In the Selected Type Devices pane, right-click the device that you want to delete.

2. From the menu that appears, click Delete Device.

3. Click Yes to confirm the deletion.

Assigning Reports to Device Types


On the Reports-Devices tab of the SNMP Poller Configuration Interface, if you have created:
• Reports (in report groups)
• The devices (agents) that you want to use to poll data

You can map the two together, assigning the required reports to the device types that will run them.
To do this:


1. In the All Report Groups pane, select the required report group:

2. Select the required view to determine whether devices are organised by vendor or by
function.

3. Drag the report group into the left pane, and then drop it onto the required device.
Depending on the tree level onto which you drop it, the report group will be assigned at a
different level. For example:
o If you drop it onto the vendor name, the report group will be assigned to all device
types associated with this vendor:

o If you drop it onto the device type group, the report group will be assigned to all device
types associated with this group:


o If you drop it onto the device type, the report group will be assigned to all devices of
that device type:

Tip: To undo an assignment, right-click the report group name, and from the menu that
appears, click Remove Report Group. Then click Yes to confirm the deletion.

4. Click the 'Save to database' button to save the assignments that you have configured.

Tip: You can import and export the vendors, device type groups, device types, report
groups, reports and associated OIDs that are saved on this tab, in order to share them
across different systems. For more information, see Importing and Exporting Device Types
and Reports on page 131.

Importing and Exporting Device Types and Reports


In the SNMP Poller Configuration Interface, on the Reports-Devices tab you can import device
types and their associated reports using an *.asd file.

To do this:

1. In the left-hand pane, right-click in the white space.

2. From the menu that appears, click Import.

3. In the dialog box that appears, locate the required file, and then click Open.

4. If there are potential conflicts between the report groups/reports that already exist and
those that are being imported, then you will be asked if you want to overwrite the
duplicates:

Important:
o Duplicate vendors and device type groups will always be merged
o Identical report groups (those with the same name and the same reports as a report
group already present in the SNMP Poller) and/or identical reports (those with the
same name and same OIDs as a report already present in the SNMP Poller) are
ignored, and are not included in the import messages


The results of the import are displayed:

As device types are unique per vendor, if you import a device type group which includes
device types that already exist in another device type group under the same vendor, the
duplicate device types are ignored:

5. Click OK.

You can also export device types and reports in the same format, and merge them within different
systems. To do this:

1. Select the By vendors option.

2. In the left-hand pane, right-click the appropriate element, depending on what you want to
export:
o Right-click in white space to export all details of all vendors:


o Right-click a vendor to export all of its subitems (device type groups, device types and
reports and so on):

o Right-click a device type group to export the vendor, device type group, device types,
report groups, reports and OIDs for that device type group only:

o Right-click a device type to export the vendor, device type group, device type, report
groups, reports and OIDs for that device type only:

3. In the dialog box that appears, locate the required folder, type a name for the *.asd file and
then click Save.

The selected data is exported.


Using the API to Acquire Device Information


You can use the Application Program Interface in conjunction with or instead of the Discoverer to
pass device information to the OPTIMA database.

To call the API:

Use an http POST to send an XML file containing details of devices to the database.

The command must include the path to the public Add function. Typically, the URL will end
with:

.../aircom/optima/discovery-api/2011/02/Add

Here is an example XML file:

<SnmpDevice>
<IpAddress>12.1.1.0</IpAddress>
<Port>777</Port>
<ReadCommunity>comm</ReadCommunity>
<Hostname>hosta</Hostname>
<Vendor>Motorola</Vendor>
<Type></Type>
<Functions>
<Function>RNC</Function>
</Functions>
<Timeout>100</Timeout>
<Retry>1</Retry>
<SnmpVersion>2</SnmpVersion>
<Discovered>04-05-11 13:49:31</Discovered>
<CollectionDeviceProximity>
<CollectionDevice MachineId="101" roundtrip="29"/>
<CollectionDevice MachineId="102" roundtrip="25"/>
<CollectionDevice MachineId="103" roundtrip="55"/>
<CollectionDevice MachineId="104" roundtrip="150"/>
</CollectionDeviceProximity>
<PollerInstance></PollerInstance>
<OidWeight>10</OidWeight>
</SnmpDevice>

This table describes the conditions that apply to the parameters used:

For This Parameter This Applies

<IpAddress> Must be valid and unique.


<Port> Must be unique.
<ReadCommunity> A string that is not encrypted will be encrypted.
An encrypted string will not be unencrypted.
<Vendor> Only mandatory if <Type> specified. Must exist in database if specified.
<Type> Not mandatory. Must exist in database if specified.
<Function> Not mandatory. Must exist in database if specified.
<Timeout> Must be an integer.
<Retry> Must be an integer.
<SnmpVersion> Must be an integer.
<Discovered> Must be in the format shown in the example.


<CollectionDeviceProximity> MachineId must be correct.


<PollerInstance> Must be PRID (assigned) or blank (unassigned).
<OidWeight> Mandatory. Use 0. The value is subsequently calculated so will probably
change.
Note: If the calculated OidWeight is 0 (meaning that there are no SNMP
attributes to be reported) or the CollectionDeviceProximity has no pings, then
the SNMP Assigner will not assign the device.
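
Putting this together, the call is a standard HTTP POST with the XML document as its body. A minimal sketch, assuming a hypothetical mediation server host and port, and an XML content type:

POST /aircom/optima/discovery-api/2011/02/Add HTTP/1.1
Host: mediation-server:12345
Content-Type: application/xml

<SnmpDevice>
...
</SnmpDevice>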

Assigning Devices (Agents) to Machines


On the Pollers-Devices tab of the SNMP Poller Configuration Interface, you can assign devices
(agents) to the machines from which you want to poll the data.

To do this:

1. In the SNMP Poller Configuration Interface, select the Pollers-Devices tab.

2. Select the device type containing the device(s) that you want to assign. The associated
PRIDS are listed in the right-hand pane. Click on the plus symbol beside a PRID to see
the PRID details:

Note: If a device has not yet been assigned to a machine, it will not have a PRID.

3. To assign a particular device, drag it into the Poller Instances pane and drop it either onto
the required machine or onto an existing instance.

If you choose an existing instance, the PRID is added to the list of PRIDs already existing
for that instance in the Devices assigned to the selected Discoverer pane.


If you choose a machine so that a new instance is created, the Instance Parameters
dialog box appears:

Complete the details for the polling device on this machine. This table describes the
editable parameters:

Item Description

LOG The folder for the log files created by the SNMP Poller.
TEMP The folder for the temporary files created by the SNMP Poller.
PRID The folder for the monitor (PRID) file.
OUT The folder for the generated reports.
CONFIG The folder for a copy of the SNMP Poller configuration file PollerConfig_[PRID].xml. The file is normally retrieved from the web service; this local copy is used if the file cannot be retrieved from the web service.
ERROR The folder for incomplete reports.
LOG SEVERITY The extent of logging required. Choose from: Debug (default),
Information, Warning, Minor, Major and Critical.
LOG GRANULARITY The frequency of logging required. Choose from Continuous, Daily,
Weekly and Monthly.
ITERATION GAP The delay before polling the next device.
THREADS The number of processes to be run in parallel.


FOLDER FILE LIMIT The maximum number of output files that can be created in the output
directory.
NO. OF ITERATIONS The number of times that the process is to be run.
VERBOSE Determines whether or not the log file information is displayed on screen.
STANDALONE Determines whether or not a monitor file is run with this process.
EnableDebugCSV Determines whether or not CSV file debugging is performed.
RemoveSubFolder Indicates whether the subfolder level identifying the device host name
should be removed from the output folder structure (1) or not (0).
AddHostName Indicates whether the host name of the device should be added as the
first column in the output CSV files (1) or not (0).
DeviceCheckThreads Defines the number of devices that will be checked for availability in
parallel.
MaxRepetition Indicates the number of instances of each column object that the SNMP
Poller will try to get in each report.
OutputType Indicates the format used to report the octet string OIDs.
TerminatingASCIINumber If the OutputType parameter is set to 2, this indicates the number of
the character with which to terminate the string.
NonPrintableCharacter If the OutputType parameter is set to 2 or 3, this indicates the character
with which to replace non-printable characters.

Tip: For fields requiring a path, you can use the Browse button to locate the required
destination.

You can increase or decrease the number of threads (simultaneous running processes).

4. Click OK.

The device appears in the Poller Instances pane and you can click on it to see the details
in the Devices assigned to the selected Discoverer pane:

It is also shaded in the top pane, and has a PRID assigned to it:

Tip: To remove an assignment:

o In the Devices assigned to the selected Discoverer pane, right-click the required
device name, and from the menu that appears, click Remove Device. Then click Yes
to confirm the removal.

- or -
o Right-click in the Devices assigned to the selected Discoverer pane, and from the
menu that appears, click Remove All Devices. Then click Yes to confirm the removal.

5. Click the 'Save to database' button to save the assignments that you have configured.


Assigning Scan Definitions to Machines


On the Discoverers-Scan Definitions tab of the SNMP Poller Configuration Interface, you can
assign scan definitions to the machines that will perform the scans.

To do this:

1. In the SNMP Poller Configuration Interface, select the Discoverers-Scan Definitions


tab.

2. To ensure that the details shown on the tab are up to date (if for example you have been
using the SNMP Scan Definition Editor), click the Reload data button.

3. Select the scan definition that you want to assign:

4. To assign a particular scan definition, drag it into the Discoverer Instances pane and drop
it either onto the required machine or onto an existing instance.

If you choose an existing instance, the PRID is added to the list of PRIDs already existing
for that instance in the Scan definitions assigned to the selected Discoverer pane.

If you choose a machine so that a new instance is created, the Instance Parameters
dialog box appears:


Complete the details for the Discoverer instance on this machine, either by typing them or by
clicking the Browse button to find the appropriate location:

Item Description

LOG The folder for the log files created by the SNMP Poller.
TEMP The folder for the temporary files created by the SNMP Poller.
PRID The folder for the monitor (PRID) file.
CONFIG The folder where the snmpdiscoveryrules.xml and scanndef.xml files are
stored for local use if they are not to be retrieved by the web service.
LOG SEVERITY The extent of logging required. Choose from: Debug, Information (default),
Warning, Minor, Major and Critical.
LOG GRANULARITY The frequency of logging required. Choose from Continuous, Daily, Weekly
and Monthly.
ITERATION GAP The delay before polling the next device.
THREADS The number of processes to be run in parallel.
FOLDER FILE LIMIT The maximum number of output files that can be created in the output
directory.
NO. OF ITERATIONS The number of times that the process is to be run.
VERBOSE Determines whether or not the log file information is displayed on screen.
STANDALONE Determines whether or not a monitor file is run with this process.
Timeout The time in seconds that the Discoverer will spend attempting to identify a
device.
Retry The number of times that the Discoverer will try to identify a device again
after failing once.

Tip: For fields requiring a path, you can use the Browse button to locate the required
destination.

5. Click OK.

The scan definition is added to the Discoverer Instances pane and you can click on it to
see the details in the Scan Definitions assigned to the selected Discoverer pane:

Tip: To remove an assignment:


o In the Scan Definitions assigned to the selected Discoverer pane, right-click the
required scan definition, and from the menu that appears, click Remove Scan
Definition. Then click Yes to confirm the removal.

- or -
o Right-click in the Scan Definitions assigned to the selected Discoverer pane, and
from the menu that appears, click Remove All Scan definitions. Then click Yes to
confirm the removal.

6. Click the 'Save to database' button to save the assignments that you have configured.

Viewing a Summary of the SNMP Poller Configuration


On the Summary tab, you can view a summary of the details that you have configured for the
SNMP Poller.

This picture shows an example:

SNMP Poller Summary tab

The Summary tab lists:


• The devices attached to each machine
• The reports attached to each device
• The reports that are not in use

If you select an item in the Pollers pane, you can view more information in the Details pane.

Selecting Web Service Settings


Having configured the SNMP Poller, you must decide whether polling instances are to be controlled:
• Centrally, using a Web Service and Mediation Agent (for more information, see About the
Mediation Agent on page 157)

- or -
• Locally (for more information, see About the SNMP Agent on page 433).

This decision determines the content of the INI files generated at the next step.

To control polling instances centrally:

1. In the SNMP Poller Configuration Interface, from the Actions menu, select WebService
settings. The WebService Settings dialog box appears:


2. Select the Control Instances centrally option.

3. Type the required web address into the WebService URL field.

4. Click OK.

Generating an INI File of SNMP Poller Settings


When you have finished configuring the SNMP Poller, you can generate an INI file containing all of
the settings that you have configured. To do this:

1. In the SNMP Poller Configuration Interface, click Create INI files:

2. In the Browse For Folder dialog box that appears, select the required location for the new
INI file:

Tip: You can create a new folder by selecting the required folder level and then clicking the
Make New Folder button.

3. Click OK.

A new INI is created.

For more information, see:


o Example Poller INI File for Local Control on page 142.
o Example Poller INI File for Central Control on page 143.
o Example Discoverer INI File for Central Control on page 143.


Example Poller INI File for Local Control


This example shows a Poller INI file for local control that is generated by the SNMP Poller
Configuration Interface:

[DIR]
LogDir=log3
TempDir=tmp
PIDFileDir=pid
DirTo=out
DirError=err

[MAIN]
MachineID=105
ProgramID=301
InstanceID=m01
LogGranularity=3
LogSeverity=1
PollingTime=300
StandAlone=0
RunContinuous=0
Iterations=1
Verbose=0
UseFolderFileLimit=1
FolderFileLimit=10000

[OPTIONS]
Threads=10
EnableDebugCSV=0
RemoveSubFolder=1
AddHostName=1
DeviceCheckThreads=20
MaxRepetition=5
OutputType=1
TerminatingASCIINumber=0
NonPrintableCharacter=.
OutputIncompleteReports=1

[SNMP_DEVICES]
NumberOfDevices=1
Device1=1

[1]
IPAddress=111.111.111.111
Port=100
Hostname=none
SNMPVersion=1
CommunityRead=ENC(lp\eaZ)ENC
RetryNo=2
Timeout=3
NumberOfReportsUsed=2
Report1=R1
Report2=R2

[REPORTS]
NumberOfReports=2
Report1=R1
Report2=R2

[R1]
Name=R1
NumberOfManagedObjects=4
mo1=sysDescr,.1.3.6.1.2.1.1.1.0,0
mo2=sysObjectID,.1.3.6.1.2.1.1.2.0,0
mo3=sysUpTime,.1.3.6.1.2.1.1.3.0,0
mo4=syslocation,.1.3.6.1.2.1.1.6.0,0

[R2]
Name=R2
NumberOfManagedObjects=2
mo1=sysName,.1.3.6.1.2.1.1.5.0,0
mo2=sysServices,.1.3.6.1.2.1.1.7.0,0

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows

Example Poller INI File for Central Control


This example shows a Poller INI file for central control that is generated by the SNMP Poller
Configuration Interface:

[DIR]
LogDir=log3
ConfigDir=asd

[MAIN]
MachineID=105
ProgramID=301
InstanceID=m01
WebServiceUrl=http://centralurl:12345

Example Discoverer INI File for Central Control


This example shows a Discoverer INI file for central control that is generated by the SNMP Poller
Configuration Interface:

[DIR]
LogDir=a
TempDir=a
PIDFileDir=a
ConfigDir=a

[MAIN]
MachineID=000
ProgramID=311
InstanceID=001
LogGranularity=3
LogSeverity=1
PollingTime=300
StandAlone=0
RunContinuous=0
Iterations=1
Verbose=0
WebServiceUrl=http://centralurl:12345

[OPTIONS]
Threads=10

TimeOut=100
RetryNo=1

Manually Tuning the SNMP Poller Settings INI File


In addition to the settings that you can make using the SNMP Poller Configuration Interface, you
can set a number of parameters in the INI file directly, if required, to fine-tune your SNMP Poller
settings.

You can define the following details in the generated INI file, by adding the appropriate
parameter(s):

INI File Section Parameter Description


[DIR] DirError Specifies the error directory, which will be used for
any output files that fail to process correctly.
This parameter can be used in conjunction with the
OutputIncompleteReports parameter to store
incomplete reports (that is, reports in which some of
the required data has not been polled successfully).
[MAIN] FolderFileLimit Defines the maximum number of output files that
can be created in each output (sub) folder, up to a
limit of 100,000 on Windows and 500,000 on
Sun/UNIX.
Depending on the number of files that you are
processing, a lower file limit can cause more output
subfolders to be created. This can have a significant
impact on performance, so you should ensure that if
you do need to change the default, you do not set
the number too low.
UseFolderFileLimit Indicates whether the FolderFileLimit
parameter should be used (1) or not (0).
OptimaPingPath Indicates the path to the optimaping program used
for ping testing in UNIX environments. It need only
be specified if the program is not in the system path
or working folder.
[OPTIONS] addHostName Indicates whether the host name of the device
should be added as the first column in the output
CSV files (1) or not (0).
If it is possible that any of the CSV files may be
taken out of the default output folder structure, you
should set this parameter to 1, so that the hostname
can still be identified.
removeSubFolder Indicates whether the subfolder level identifying the
device host name should be removed from the
output folder structure (1) or not (0).
If this parameter is set to 1, then the output folder
structure will be:
$outDir/<ReportClassName>/<hostname>_Y
YYYMMDDHHMISS.csv
rather than:
$outDir/<ReportClassName>/<Device
Hostname>/<hostname>_YYYYMMDDHHMISS.cs
v
DeviceCheckThreads Defines the number of devices that will be checked
for availability in parallel.
The default is 10, so the SNMP Poller will check 10
devices in parallel.

MaxRepetition Used as the default value for the Max Repetition
parameter of the SNMP GETBULK message,
indicating the number of instances of each column
object that the SNMP Poller will try to get in each
report.
The default is 5, so the SNMP Poller will try to get 5
instances of each column object in each report.
Tip: If you set MaxRepetition to 0, then the SNMP
Poller will use the SNMP GETNEXT request, rather
than SNMP GETBULK request.
Note: If the PDU size is too large, then the SNMP
Poller will reduce the Max Repetition during run time
for each device.
OutputType Indicates the format used to report the octet string
OIDs:
0 (Default) - The SNMP Poller will use the SNMP
API formatting, as follows:
• If the octet string contains any non-printable
values, then the CSV output will look similar to
this:
09 4F 63 74 65 74 53 74 72 3A 3A 67 65 74 5F 70 .OctetStr::get_p
72 69 6E 74 61 62 6C 65 5F 68 65 78 28 29 rintable_hex()
• If the octet string contains no non-printable
values, then the CSV output will be plain text.
1 - Outputs the octet string as plain text, with any
non-printable values replaced with the character
specified in the NonPrintableCharacter
parameter.
2 - Outputs the octet string as plain text up to the
terminating character specified in the
TerminatingASCIINumber parameter. Any non-
printable values are replaced with the character
specified in the NonPrintableCharacter
parameter.
3 - Outputs the octet string as HEX.
TerminatingASCIINumber If the OutputType parameter is set to 2, this indicates the ASCII number of the
character with which to terminate the string. The default is 0.
NonPrintableCharacter If the OutputType parameter is set to 2 or 3, this indicates the character with
which to replace non-printable characters.
The default is '.' (full stop or period).
OutputIncompleteReports Defines where incomplete reports (that is, reports in which some of the
required data has not been polled successfully) will be stored.
0 - Incomplete files are moved to the error directory specified in the
DirError parameter in the [DIR] section of the INI file.
1 - Reports are sent to the directory specified in the DirTo parameter in
the [DIR] section of the INI file.

EnableDebugCSV Defines how the *.csv data is output.
0 - CSV data is output in the format: value
1 - CSV data is output in the format: SNMP_TYPE
(value) (OID).
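
As a sketch, combining two of the parameters described above so that incomplete reports are
moved to a dedicated error directory (the directory name is taken from the earlier example INI file):

[DIR]
DirError=err

[OPTIONS]
OutputIncompleteReports=0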

Typically, the OutputType parameter defined in the OPTIONS section of the INI file influences the
default output type in use for all managed objects defined in the INI file. However, you can override
the output type for any managed object by manually defining a CUSTOM_OUTPUT section in the
INI file with the appropriate managed object(s) and output type parameter.

For example, you can add a CUSTOM_OUTPUT section to the example Poller INI file as shown
here:

[CUSTOM_OUTPUT]
Number=2
Output1=.1.3.6.1.2.1.1.1.0,3
Output2=.1.3.6.1.2.1.1.6.0,2

This means that managed object .1.3.6.1.2.1.1.1.0 will override the default OutputType value 1
(plain text, with non-printable characters replaced by the NonPrintableCharacter value) and use
OutputType value 3 (HEX string).

Also, managed object .1.3.6.1.2.1.1.6.0 will override the default OutputType value 1 and use
OutputType value 2 (plain text up to the TerminatingASCIINumber value, with non-printable
characters replaced by the NonPrintableCharacter value). All other managed objects will use the
OutputType value 1.

This feature is particularly useful in overcoming issues related to the SNMP Poller returning
required strings alongside junk strings because the default output type (for example, plain text) is
different from the polled string output type (for example, Hex string).

Ping Testing with the SNMP Poller


You can configure the SNMP Poller to perform ping tests on the network. To do this you must add
some parameters to the OPTIONS section of the SNMP Poller INI file. This table describes the
required parameters:

Parameter Indicates

ping_run Whether ping test is run (1) or not (0). The default value is 1.
ping_timeout The time in milliseconds to wait for each reply. Values in the range 1 to 10000 are
valid. The default value is 5000.
ping_requests The number of echo requests to send. Values in the range 1 to 5 are valid. The
default value is 5.

When configured to do so, the SNMP poller will use ICMP echo requests to ping each device in the
device availability threads.
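
For example, a minimal sketch of the [OPTIONS] entries that enable ping testing, using the
documented default values:

ping_run=1
ping_timeout=5000
ping_requests=5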

The results of the ping tests are written to a csv file named with the convention
HOST_PORT_PINGSTATS_DATETIME.csv. For example:

Windows2000432_8010_PINGSTATS_20111222153633134.csv

The file contains the following columns:


• DateTime
• IP
• HostName

• Packets_Transmitted
• Packets_Received
• Min_Resp_Time
• Avg_Resp_Time
• Max_Resp_Time

Whether or not the ping test was successful is shown in the PingStatus column of the performance
CSV file. This table describes the file:

Column Indicates

Datetime The date and time when the report started.


Prid The poller instance PRID value.
PingStatus Whether or not the ping test was successful. If it was not, the
exception message is shown.
SysUpTime How long the device has been operational after the last reboot.
BulkSuccess The number of successful SNMP GETBULK requests.
BulkSuccessTotalTime The total time for successful SNMP GETBULK requests.
BulkSuccessMinTime The minimum time for a successful SNMP GETBULK request.
BulkSuccessMaxTime The maximum time for a successful SNMP GETBULK request.
BulkSuccessMeanTime The mean time for a successful SNMP GETBULK request.
BulkSuccessMedianTime The median time for a successful SNMP GETBULK request.
BulkFailed The number of timed out SNMP GETBULK requests.
BulkFailedTotalTime The total time taken by timed out SNMP GETBULK requests.
GetSuccess The number of successful SNMP GET(OID) requests.
GetSuccessTotalTime The total time for successful SNMP GET(OID) requests.
GetSuccessMinTime The minimum time for a successful SNMP GET(OID) request.
GetSuccessMaxTime The maximum time for a successful SNMP GET(OID) request.
GetSuccessMeanTime The mean time for a successful SNMP GET(OID) request.
GetSuccessMedianTime The median time for a successful SNMP GET(OID) request.
GetFailed The number of timed out SNMP GET(OID) requests.
GetFailedTotalTime The total time taken by timed out SNMP GET(OID) requests.
NextMultSuccess The number of successful SNMP GETNEXT (list of OIDS) requests.
NextMultSuccessTotalTime The total time for successful SNMP GETNEXT (list of OIDS) requests.
NextMultSuccessMinTime The minimum time for a successful SNMP GETNEXT (list of OIDS)
request.
NextMultSuccessMaxTime The maximum time for a successful SNMP GETNEXT (list of OIDS)
request.
NextMultSuccessMeanTime The mean time for a successful SNMP GETNEXT (list of OIDS) request.
NextMultSuccessMedianTime The median time for a successful SNMP GETNEXT (list of OIDS)
request.
NextMultFailed The number of timed out SNMP GETNEXT (list of OIDS) requests.
NextMultFailedTotalTime The total time taken by timed out SNMP GETNEXT (list of OIDS)
requests.
NextSuccess The number of successful SNMP GETNEXT (OIDS) requests.
NextSuccessTotalTime The total time for successful SNMP GETNEXT (OIDS) requests.

NextSuccessMinTime The minimum time for a successful SNMP GETNEXT (OIDS) request.
NextSuccessMaxTime The maximum time for a successful SNMP GETNEXT (OIDS) request.
NextSuccessMeanTime The mean time for a successful SNMP GETNEXT (OIDS) request.
NextSuccessMedianTime The median time for a successful SNMP GETNEXT (OIDS) request.
NextFailed The number of timed out SNMP GETNEXT (OIDS) requests.
NextFailedTotalTime The total time taken by timed out SNMP GETNEXT (OIDS) requests.

To perform a ping test, the Windows version of the SNMP Poller uses the Windows system ping
program. UNIX versions use the appropriate optimaping program (for HP Itanium, Linux Redhat or
Sun Solaris). The optimaping programs are installed in the same directory as the SNMP Poller
program (opx_DAP_GEN_301).

The optimaping program needs to run as setuid root:

su
chown root optimaping
chmod a+x optimaping
chmod u+s optimaping

If the optimaping program is not in the system path or working folder, you must specify the path to it
using the parameter OptimaPingPath in the MAIN section of the INI file.
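
For example (a sketch; the path shown is hypothetical):

[MAIN]
OptimaPingPath=/opt/optima/bin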

Using Traceroute with the SNMP Poller


Traceroute is a diagnostic test that tracks the path taken by Internet Control Message Packets
(ICMP) to a destination address. You can configure the SNMP Poller to perform a traceroute test
when a ping test fails. To do this you must add some parameters to the OPTIONS section of the
SNMP Poller INI file. This table describes the required parameters:

Parameter Description

traceroute_run Indicates whether a traceroute test is run when a ping test fails (1) or not
(0). The default value is 1.
traceroute_timeout Indicates the time in milliseconds to wait for each reply. Values in the range
1 to 10000 are valid. The default value is 5000.
traceroute_maximum_hops This value, also known as Time to Live (TTL), indicates the number of
routers through which Internet Control Message Packets will pass before
returning an ICMP Time Exceeded error message. Values in the range 1 to
31 are valid. The default value is 31.
traceroute_numberOfPackets The number of packets to be sent. Values in the range 1 to 5 are valid. The
default value is 5.
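
As a sketch, enabling traceroute with the documented default values would add these entries to
the [OPTIONS] section:

traceroute_run=1
traceroute_timeout=5000
traceroute_maximum_hops=31
traceroute_numberOfPackets=5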

When configured to do so, the SNMP poller will generate a text file with the traceroute output in a
sub folder of [DirTo] called TRACEROUTE. The filename of this file will be in this format:

x_x_x_x_port_YYYYMMDDHHMMSSmmm_traceroute_txt.csv


Running the SNMP Poller


After you have configured the SNMP Poller, you can then start it.

To do this, type the executable name and a configuration file name into the command prompt.

In Windows type:

opx_DAP_GEN_301.exe opx_DAP_GEN_301_001301001.ini

In Unix type:

opx_DAP_GEN_301 opx_DAP_GEN_301_001301001.ini

Configuration file names should contain the PRID name, to make them unique.

After the SNMP Poller has collected information, the data is loaded into the database using the
Loader.

Tip: You can view the performance details for the SNMP Poller in a csv file, which can help with
troubleshooting problems.

Maintenance of the SNMP Poller


In usual operation, the SNMP Poller should not need any special maintenance. During installation,
the OPTIMA Directory Maintenance application will be configured to maintain the backup and log
directories automatically.

However, TEOCO recommends the following basic maintenance checks are carried out for the
SNMP Poller:

Check The When Why

Backup directory to ensure Weekly Files not transferring indicates a problem with
CSV files have been the application.
transferred.
Log file for error messages. Weekly In particular any Warning, Minor, Major and
Critical messages should be investigated.

Stopping the SNMP Poller


The SNMP Poller will terminate when all reports have been collected from the SNMP Agent. You
can stop the SNMP Poller before it has finished collecting reports by pressing CTRL-C or by closing
the console window.

Checking the Version of the SNMP Poller


If you need to contact TEOCO support regarding any problems with the SNMP Poller, you must
provide the version details.

You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

In Windows:

opx_DAP_GEN_301.exe -v


In Unix:

opx_DAP_GEN_301 -v

For more information about obtaining version details, see About Versioning on page 33.

Checking the Application is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.

SNMP Poller Message Log Codes


This section describes the message log codes for the SNMP Poller:

Message Code Description Severity

1003 Requesting Report Output workers to stop when file group queue is DEBUG
empty.
3001 ThreadID: <threadID> OID: <oid> Value: <VBValueString>. DEBUG
3002 Start Verify Device Availability for device IP <ipAddress> Port <port>. DEBUG
End Verify Device Availability for device IP <ipAddress> Port <port>. DEBUG

3003 Invalid address for device IP <ipAddress> Port <port>. WARNING


3004 SNMP++ Session Create Fail: <errorMsgStatus> for device IP WARNING
<ipAddress> Port <port>.
3005 SNMP++ Session Create: <errorMsgStatus> for device IP DEBUG
<ipAddress> Port <port>.
3006 Started Log Message Worker. DEBUG
Let the report worker process to stop. DEBUG

Let the log worker to stop. DEBUG

Starting Device Worker Thread Number - <deviceWorkerNo>. DEBUG

Waiting for all device workers to finish. DEBUG

All device workers finished. DEBUG

Adding Device - <ipAddress> Port - <port> to the device worker queue. DEBUG
Waiting for report worker threads for IP - <ipAddress> Port - <port> to DEBUG
finish.
Report worker threads for IP - <ipAddress> Port - <port> finished. DEBUG

Starting Report Output Worker Thread Number - <threadNumber>. DEBUG

Waiting for all Report Output workers to finish. DEBUG

All Report Output workers finished. DEBUG

Starting report worker threads for IP - <ipAddress> Port - <port>. DEBUG

3007 IP - <ipAddress> Port - <port> Report: <reportName> SNMP++ WARNING


GetNext Error, <ErrorMsgStatus> (<status>).
3008 ThreadID <threadID> Address/Port: <ipAddress>/<port> Report: WARNING
<reportName> SNMP++ GetNext Error, <errorMsgStatus> (<status>).
3009 Could not open file: TARGET. WARNING

3016 Application Started. INFORMATION


Application Finished. INFORMATION

3017 Community string Decryption Error : <errorMessage>. INFORMATION


4100 Successfully created report file <target_file>. INFORMATION
4101 Could not create report file <target_file>. WARNING
5000 Log Worker Started. DEBUG
5000 Log Worker Finish. DEBUG
5000 Report Output Worker Started. DEBUG
5000 Report Output Worker Finish. DEBUG
5001 Device worker thread started. DEBUG
5002 Device worker thread finished. DEBUG
5100 Device worker thread processing IP - <ipAddress> Port - <port>. DEBUG
5101 Device name: <hostname> with IP address: <ipAddress> is NOT WARNING
available at the moment.
5102 Device worker thread finish processing IP - <ipAddress> Port - <port>. DEBUG
6001 Started report worker thread for IP - <ipAddress> Port - <port> Report DEBUG
- <reportName>.
6002 Finished report worker thread for IP - <ipAddress> Port - <port> DEBUG
Report - <reportName>.
7501 Invalid address for device IP <ipAddress> Port <port>. WARNING
7502 SNMP++ Session Create Fail: <errorMsgStatus> for device IP WARNING
<ipAddress> Port <port>.
7502 SNMP++ Session Create: <errorMsgStatus> for device IP DEBUG
<ipAddress> Port <port>.

Troubleshooting
The following table shows troubleshooting tips for the SNMP Poller:

Problem Cause Solution

Cannot save configuration (INI) file
  User has insufficient privileges on the configuration (INI) file or directory. Enable permissions, or make the file writable.
  The file is read only or is being used by another application. Close the other application to release the configuration (INI) file.
Application exits immediately
  Another instance is running, or the configuration (INI) file is invalid or corrupt. Use Process Monitor to check the instances running.
SNMP session not created
  Network problem. Report to the system administrator.
Report not created
  Error in the MIBReportINI file. Check that the OIDs are valid.


About the SNMP Discoverer


The main purposes of the SNMP Discoverer are to:
• Identify SNMP devices on the network
• Report the existence of discovered devices to the OPTIMA database

The SNMP Discoverer reports SNMP devices to the OPTIMA database via a mediation agent:

Once devices have been discovered they become visible on the Devices tab of the SNMP Poller
Configuration Interface. For more information, see Finding and Loading Devices Automatically on
page 117.

You can configure the SNMP Discoverer by editing the parameters in the Discoverer INI file. This
file is created if you have opted to control polling instances centrally and you click the Create INI
files button in the SNMP Poller Configuration Interface. For more information see Selecting Web
Service Settings on page 140 and Generating an INI File of SNMP Poller Settings on page 141. To
see what the INI file looks like, see Example Discoverer INI File for Central Control on page 143.

A device identified by the SNMP discovery process can be:


• Partially Discovered, such that it is available in the SNMP Poller Configuration Interface
(the IP address and community string are known), assigned to a device type, and details
are saved in the appropriate database tables (SNMP_DEVICE, SNMP_DEVICE_TUPLES
& SNMP_TUPLES)

- or -
• Fully Discovered, such that it is available in the SNMP Poller Configuration Interface (the IP
address and community string are known), the ping process is successful (the round trip
time (RTT) is assigned), it is assigned to a device type, and details are saved in the
appropriate database tables (SNMP_DEVICE, SNMP_DEVICE_TUPLES &
SNMP_TUPLES, SNMP_DEVICE_MACHINE)

The SNMP Assigner does not automatically assign a device without a corresponding RTT attribute
in the SNMP_DEVICE_MACHINE table.

The manual discover devices process fulfils only the partial discovery definition. You can only
assign a manually discovered device to a poller instance by manually dragging-and-dropping it on
the Pollers-Devices tab of the SNMP Poller Configuration Interface. For more information, see
Finding and Loading Devices Manually on page 125. Even at this stage the device is not fully
discovered. However, it is expected that the device can be polled when the SNMP Poller runs as
scheduled.


To ensure that a device is fully discovered and ready for the SNMP Assigner process, the SNMP
Discoverer must be run as scheduled or triggered in an ad-hoc manner.

Installing the SNMP Discoverer


Before you can use the SNMP Discoverer, install the following file to the backend binary directory.
• opx_DAP_GEN_311.exe (Windows)
• opx_DAP_GEN_311 (Unix)

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

About the SNMP Assigner


The SNMP Assigner recognises newly discovered devices and allocates them to SNMP poller
instances, based on user configurable choices. The poller instances then collect reports from the
devices. The SNMP Assigner receives its configuration data from the Mediation Agent:

There are three stages to the processing carried out by the SNMP Assigner. These are:
• Gathering information already found by the SNMP Discoverer
• Performing calculations based on configuration data and performance information
• Updating the information for poller assignment

Note: The SNMP Assigner does not assign devices that have been manually created or assigned.

You can configure the SNMP Assigner by editing its INI file. This table describes the editable
parameters in the file:

Parameter Description

WebServiceUrl The full http reference for the web service, for example:
WebServiceUrl=http://www.foo.com/cgi-bin/aircom/optima
Rebuild Indicates whether the assigner ignores existing device assignments and recalculates
(1) or not (0).
Usually, without a change of algorithm, a rebuild will produce the same results as
before, and since it only updates devices which have changed, it will do nothing.
On a change of algorithm however, it should change significantly.

Algorithm The name of the device assignment algorithm. Select from:


• equals. Assigns to the next poller with fewer devices than the last. Suitable for a
geographically concentrated network without load issues.
• pings. Assigns to the poller with the lowest round trip value between the server
where the SNMP Discoverer resides and the device. Suitable for a
geographically widespread network without load issues. For more information,
see Pings Algorithm Example on page 155.
Note: Where there are a number of instances of the SNMP Discoverer on a
number of servers so that a device may be discovered more than once, the
SNMP Assigner uses the lowest round trip value to decide which poller on which
server the device is assigned to.
• normpings. Assigns according to the NormaliseRange setting (see below).
Devices returning ping values within the specified range are grouped and
assigned as for the equals algorithm. Devices returning ping values outside the
specified range are assigned as for the pings algorithm. For more information,
see Normpings Algorithm Example on page 156.
• weights. Assigns according to OID weight. A query is used to work out the OID
weight of a Poller and a Device, to estimate how loaded a poller is. Devices are
then assigned to the instance with the lowest OID weight.
For example Algorithm=weights
If this parameter is not specified, the equals algorithm is used by default.
NormaliseRange If the normpings algorithm is used, this parameter normalizes all pings within the
millisecond range to act in the same way.
For example: NormaliseRange=20, causes pings on devices within 20ms of each
other to be treated the same.
AdjustPrids Allows you to change the ratio of usage of machines based on their prid.
For example:
AdjustPrids=101301002,1.2 102301002,0.8 103301003,0
In this case the first machine will get 20% more devices than normal, the second
20% less, and the third machine will not be used for assignment.
Important: If a ping algorithm is in use and there is only one instance on the machine
with the lowest ping, there is no alternative to using that poller so it can't be locked
out.

This example shows an Assigner INI file:

[MAIN]

MachineID=123
ProgramID=312
InstanceID=001
Iterations=1
Verbose=1
LogSeverity=1

WebServiceUrl=http://www.foo.com/cgi-bin/aircom/optima

[DIR]

TempDir=./tmp
LogDir=./log
PIDFileDir=./pid

[ASSIGNER]

Rebuild=1
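
The example above relies on the default equals algorithm. As a sketch, selecting the normpings
algorithm with a 20 ms normalisation range and the adjustment ratios shown in the table above
would extend the [ASSIGNER] section like this:

[ASSIGNER]

Rebuild=1
Algorithm=normpings
NormaliseRange=20
AdjustPrids=101301002,1.2 102301002,0.8 103301003,0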


Pings Algorithm Example


This example shows servers and devices in the USA, the UK and Australia. If the pings algorithm is
used, the round trip time in milliseconds is used to determine which devices are assigned to which
pollers. You can see from the diagram that this will result in each device being assigned to a poller
in its own country. The particular poller instance to which the device is assigned within the country
will be that with the least devices already assigned to it.


Normpings Algorithm Example


This example shows servers on the East and West coasts of the USA, and a device nearer the
East than the West. Under the Pings algorithm, the device would be assigned to the poller instance
with the least devices assigned to it in Florida. Under the Normpings algorithm, if the
NormaliseRange parameter is set to 15, then the device would be assigned to the poller instance
with the least devices assigned to it in either Florida or California.

Installing the SNMP Assigner


Before you can use the SNMP Assigner, install the following file to the backend binary directory.
• opx_DAP_GEN_312.exe (Windows)
• opx_DAP_GEN_312 (Unix)

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.


About the Mediation Agent


The Mediation Agent comprises:
• A DAL (Data Access Layer) library, providing access to the database
• A standalone application, providing a configuration service and a data service
• Web service stubs, handling network requests and replies

Together, these configurable components manage the communication between the SNMP
Discoverer, the SNMP Assigner, the SNMP Poller and the OPTIMA database:

The configuration service provided by the Mediation Agent standalone application allows the SNMP
Discoverer, the SNMP Assigner, and the SNMP Poller to retrieve their configuration details.

The data service provided by the Mediation Agent standalone application allows Create, Read,
Update and Delete type operations to be carried out on entities in the OPTIMA database.

You can configure the Mediation Agent standalone application by editing its INI file. This table
describes the editable parameters in the file:

Parameter Description

DBString The name of the database.


UserID The user name for accessing the database.
Password The password for accessing the database.
DBClient The type of database in use.
Agent The name of the mediation agent.
ReconnectDelay The delay before attempting to reconnect after an interruption.


This example shows a Mediation Agent INI file:

[MAIN]

MachineID=123
ProgramID=309
InstanceID=001
Iterations=1
Verbose=1
LogSeverity=1

[DIR]

TempDir=${OPTDIR}/tmp
LogDir=${OPTDIR}/log
PIDFileDir=${OPTDIR}/pid

[DBConfiguration]

DBString=VM148DB1
UserID=aircom
Password=ENC(l\mlofhY)ENC
DBClient=oracle
ReconnectDelay=2

[Other]
Agent=MATTHEW

Installing the Mediation Agent


Before you can use the Mediation Agent, install the following file to the backend binary directory.
• opx_DAP_GEN_309.exe (Windows)
• opx_DAP_GEN_309 (Unix)

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

Setting up the Web Server


Having installed the Apache web server, you can set up the web server by:
• Installing OPTIMA services
• Setting Permissions on Message Queues
• Protecting the Web Service
• Creating a system environment variable (Windows)


Installing OPTIMA Services


Assuming that Apache is installed in /var/www/, to install OPTIMA services:

1. Execute the following commands:

mkdir -p /var/www/cgi-bin/aircom/optima/dataservice/

mkdir -p /var/www/cgi-bin/aircom/optima/configservice/

2. From the webservice executables, copy Create, Add, Fetch, Update, Save and Delete to:
/var/www/cgi-bin/aircom/optima/dataservice/

3. From the webservice executables, copy FetchConfiguration to:


/var/www/cgi-bin/aircom/optima/configservice/

Setting Permissions on Message Queues


Write permissions are required for both Apache and the Mediation Agent when using message
queues. You can provide these using the umask command. To do this:

1. Open /etc/sysconfig/httpd in a text editor:

vi /etc/sysconfig/httpd

2. Add the following line at the end of the file:

umask 0000

3. Restart apache:

/etc/init.d/httpd restart

When opx_DAP_GEN_309 is run, type "umask 0000" before the run to set the permissions on the
message queues it creates.

To test that permissions are working correctly:

1. Add a printenv script to the cgi-bin.

2. Call the script.

3. Check the /var/log/httpd/error_log for mentions of permissions.

Protecting the Web Service


To set up Apache so that a user name and password are required by the browser when accessing
web services:

1. Create a digest file:

htdigest -c /etc/httpd/passwords/optima_digest "Optima Restricted" optima_snmp

2. When prompted for a new password, type the password gooptimago, and then re-type it
when prompted to confirm.

3. Ensure the cgi-bins are protected with the digest file:

vi /etc/httpd/conf/httpd.conf


4. Ensure that the following exists in the httpd.conf file:

LoadModule auth_digest_module modules/mod_auth_digest.so

5. Ensure that the mod_auth_digest.so Apache module exists in the Apache modules
directory.

Failure to do this will result in an invalid user error being returned when a request is made
to the web service.

6. Ensure that the following exists in the httpd.conf file:

ScriptAlias /cgi-bin/ "/var/www/cgi-bin/"

And that this is inserted:


<Directory "/var/www/cgi-bin">
AllowOverride None
Options None
Order allow,deny
Allow from all
AuthType Digest
AuthName "Optima Restricted"
AuthDigestDomain /cgi-bin/
AuthUserFile /etc/httpd/passwords/optima_digest
Require valid-user
</Directory>

7. To get a finer level of password protection, change the above to be:

<Directory "/var/www/cgi-bin">
AllowOverride All
Options None
Order allow,deny
Allow from all
</Directory>

8. In the directory to be protected, create a file called .htaccess

This will protect all files in that directory and its subdirectories:

AuthType Digest
AuthName "Optima Restricted"
AuthUserFile /etc/httpd/passwords/optima_digest
Require valid-user

Important: This procedure works on Apache 2.2. Other versions of Apache require the
AuthUserFile to be replaced with an AuthDigestFile, both in httpd.conf and .htaccess.

Note: It is also possible to name specific files in .htaccess. For example:

<Files "printenv">

Require valid-user

</Files>

Then restart httpd:

/etc/init.d/httpd restart


Creating a System Environment Variable (Windows)


In a Windows environment, so that message queues can communicate between the Mediation
Agent and the web services (found in the cgi-bin under Apache), you must create this system
environment variable:

OptimaTMPDIR

and set it to a directory suitable for containing temporary files (for example C:\temp).
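
For example, from an elevated command prompt (a sketch; adjust the directory to suit your
installation):

setx OptimaTMPDIR C:\temp /M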

You must also add this (PassEnv line) to the Apache configuration httpd.conf file:

<Directory "/var/www/cgi-bin">
...
PassEnv OptimaTMPDIR
</Directory>


4 About the OPTIMA Parser

The OPTIMA Parser converts the raw network files from proprietary file format into comma
separated values (CSV) format.

Specific parsers are provided for each interface. This section describes settings that are common to
all parsers. You should refer to specific interface documents for the parsers deployed on a
particular network.

The parser's interface enables you to configure the required information for the CSV files, for
example directory settings, common settings and specific reports settings. This means that only the
required data is loaded into the database.

When the CSV files are created, the files are sent to the input directory for data validation. Once
validated, the CSV files are moved to the input directory for the loader. The data is then loaded into
the database table by the loader application.

The OPTIMA Parser supports these common functions:

Function Action

Logging Status and error messages are recorded in a daily log file.
Error Files If the application detects an error in the input file that prevents processing of that file, then
the file is moved to an error directory and processing continues with the next file.
Monitor Files The application runs in a scheduled mode. A monitor (PID) file, created each time the
application is started, ensures that multiple instances of the application cannot be run. The
PID file is also used by the OPTIMA Process Monitor to ensure that the application is
operating normally.
PRID The automatically-assigned PRID uniquely identifies each instance of the application. It is
composed of a 9-character identifier, made up of Interface ID, Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance ID are both
made up of 3 characters, which can be a combination of numbers and uppercase letters.
For more information, see About PRIDs on page 29.
Backup The application can store a copy of each input file in a backup directory.

For more details on these common functions, see Introduction on page 15.


Parser Quick Start


This section is intended to indicate the steps you must take to get the OPTIMA Parser running for
demonstration purposes.

If raw data input files are already available in CSV format, OPTIMA can be run without the Parser,
but in order to use the Parser, you must install at least one Parser configuration file and an
associated executable file. A Parser configuration file contains parameters that determine how a
proprietary file format is converted into CSV format. There is a Parser configuration file for each
proprietary file format that you want OPTIMA to interface with. You can install many, but this
section describes how to install one.

To install and run the Parser:

1. Choose which proprietary file format you want OPTIMA to interface with.

2. If you have access to the vendor interfaces on Plone, locate your chosen format from the
list shown at:

http://plone:9080/intranet/projects/vendor-interfaces/vendor-interfaces-optima-interface-repository

Click on the link and download the associated zip file:

If you do not have access to Plone, and you are a TEOCO employee, you can contact the
VI team, who will supply you with the required file. If you are not a TEOCO employee, you
can obtain the file from Product Support.

3. Extract the contents of the downloaded zip file and any further zip files within it. Among the
folders and files extracted, find the Parser configuration file (.ini) and the executable file
(.exe for Windows, no suffix for Unix) required.


4. Move the executable file to the OPTIMA Backend Bin folder on the mediation server, for
example in Windows:

5. Move the Parser configuration file (.ini) to the appropriate sub folder under the OPTIMA
Backend Interface folder on the mediation server, for example:

OPTIMA Backend\Interface\ERI\UTRAN\Parser

6. Start the Parser by typing the executable file name followed by the configuration file name
at the command prompt. For example:

opx_PAR_ALC_635.exe opx_PAR_ALC_635_prid.ini in Windows

- or -

opx_PAR_ALC_635 opx_PAR_ALC_635_prid.ini in Unix.

Note: For more information on .ini files, see Example Parser Configuration (INI) File on page 177.


The Parsing Process


On start up, the OPTIMA Parser loads all the configuration settings from the INI file into memory. In
some cases, the settings could contain information on how data will need to be processed.

The Parser checks for any file(s) in the input folder that match the file mask. The Parser opens the
file and starts processing the data.

By default, the Parser extracts all available measurement objects from the raw input file. The output
file will be in comma separated value (CSV) format. While a file is being processed, it is stored in a
temporary folder. This is to prevent incomplete files being sent to the Data Validation application.

When the parsing process has finished successfully, the processed file is moved from the
temporary folder to the output folder. If any problem is encountered, the file will be moved to the
error directory and a message will be added to the log file.

Installing the OPTIMA Parser


Before you can use the OPTIMA Parser, you need to install a file to the backend binary directory.

This file is parser-specific, for example, the Nortel XML Parser requires the following file:
• opx_PAR_NOR_711.exe (Windows)
• opx_PAR_NOR_711 (Unix)

Tip: A full list of the latest parser files for each vendor is available from Product Support.

Starting the OPTIMA Parser


To start the OPTIMA Parser:

Type the executable file name and a configuration file name into the command prompt.

For example, the Nortel XML Parser requires the following to be typed in:

In Windows:

opx_PAR_NOR_711.exe opx_PAR_NOR_711.ini

In Unix:

opx_PAR_NOR_711 opx_PAR_NOR_711.ini

Note: In usual operation within the data loading architecture, all applications are scheduled. In
normal circumstances, you should not need to start the program manually. For more information,
see Starting and Stopping the Data Loading Process on page 40.


Configuring the OPTIMA Parser


The OPTIMA Parser is configured using a configuration (INI) file. Configuration changes are made
by editing the parameters in the configuration (INI) file with a suitable text editor. The commonly
available parameters are described in this section, others may be available to particular vendors for
the processing of their own files. The OPTIMA Parser configuration (INI) file is divided into sections.

The following table describes the common parameters in the [DIR] section:

Parameter Description

CombinerDir Location for logs generated by the combiner.


DirFrom The location of the input files.
DirTo Where the files are output after parsing.
DirBackup The location of the backup files.

ErrorDir Where files with errors are sent.


LogDir The location of the log files.
OverloadDir The location of the overload directory used by the FTP (if applicable).
This location should use the format '<... path ...>\overload'.
If this parameter is not found or is blank, then the overload functionality is not used.
PIDFilePath The location of the monitor files.
TarDirExe The location of the gtar executable, which is used to untar files.
This location should use the format '<... path ...>\gtar'.
Note: This is only applicable if the overload functionality is being used.
TempDir The location of temporary files created during the parsing process.

The following table describes the common parameters in the [MAIN] section:

Parameter Description

AdjustforDST 1 – Enable DST adjustment in accordance to the offset settings.


0 – Disable DST adjustment.

CheckOutputFileSystemUsage Indicates whether you want to monitor the output file system usage (1) or not
(0).
If this option is selected, then if the usage exceeds the threshold that you
have defined in the MaxOutputFilesystemPercent parameter, the parser will
stop.
ColumnsCaseSensitive 0 (default) - The parser will convert output header columns and validation
report column names into upper case when comparing.
1 - The parser will not convert output header columns and validation report
column names into upper case when comparing.
DiskUsageExe If you are using an overload directory, then this parameter specifies the
command that will report the disk usage level (free disk) in the file system,
returning the percentage.
The default value is '/bin/df -k'.
Important: This is used for Sun Solaris and other UNIX OS.

EnableBackup Indicates whether a copy of the input file will be copied to the backup
directory (1) or not (0).
If you do not choose to backup, the input file is deleted after it has been
processed.
EnableCombiner 1 – Create combiner history files.
0 – Do not create combiner history files.
EnableValidation 1 - The parser will perform validation, and read the other validation
parameters contained in the INI file - ColumnsCaseSensitive, MissingValue,
RemoveHeader, SafeMode, SeparatorOut, TrimData and TrimHeader.
The parser also looks for a 'CounterGroups' section and all of its related
measurements with the counters lists.
0 (default) - The parser will not perform validation.
FolderFileLimit The maximum number of output files that can be created in each output (sub)
folder.
This must be in the range of 100-100,000 for Windows, or 100-500,000 on
Sun/UNIX, otherwise the application will not run.
Warning: Depending on the number of files that you are processing, the
lower the file limit, the more output sub-folders that will be created. This can
have a significant impact on performance, so you should ensure that if you
do need to change the default, you do not set the number too low.
IncludeSubDirectories Used to process input files in the directory specified in the 'DirFrom'
parameter, and all of its sub-directories:
0 (default) - Do not search sub-directories for input files
1 - Search sub-directories for input files
InputFileMask Filter for input file to process, for example, *C*.*
InputFileNameAsColumn Option for including the file name (excluding the path) as the first column in
the output CSV file.
0 - False. Will not create the first column as that of the input file name.
1 - True. This is the value normally set by the VI team. It creates the first
column as that of the input file name.
If the value is other than 0 or 1, or the option is commented, the program
default value of 0 is applied.
InputFileSortOrder Specifies the order in which input files are processed:
0 (default) - no sort order
1 - Ascending order
2 - Descending order
InstanceID The three-character program instance identifier (mandatory).
InterfaceID The three-digit interface identifier (mandatory).
Iterations When the application does not run in continuous mode, this parameter sets
the number of times that the application checks the input folder for input files
before it exits. Positive integer values are allowed, for example 1, 2, 3 and so
on.
LogGranularity Defines the frequency of logging, the options are:
0 - Continuous
1 - Monthly
2 - Weekly
3 - Daily

LogLevel (or LogSeverity) Sets the level of information required in the log file. The available options are:
1 - Debug
2 - Information (Default)
3 - Warning
4 - Minor
5 - Major
6 - Critical
MaxOutputFilesystemPercent Specifies the parser output directory usage threshold (in % used), beyond
which the parser should stop.
The default is 96%, which means that when usage reaches 97% the parser
will stop.
MissingValue This string value will be used for counters missing from the input file.
The default is an empty string.
NumberOfInputFilesPerBatch If you have configured the parser to perform validation, then use this
parameter to batch the input files. The batch size indicates the number of
files in each batch.
NumberOfMoveInputFileQueueWorkers The number of threads used to move input files to the error or backup
directories.
The default (and minimum) value is 1.
NumberOfParserObjects The total number of parser object instances, which perform the parsing,
generate reports, post process report temp files and input files.
The parser objects are recycled from the Main thread -> Parsing threads ->
Report Generation threads -> Main thread.
The default value is NumberOfParsingInputFileQueueWorkers +
NumberOfParsingOutputFileQueueWorkers instances, but for small input files
it is recommended that you set a value greater than this.
The minimum value for this parameter is 1.
NumberOfParsingInputFileQueueWorkers The number of threads used to parse input files into memory.
The default (and minimum) value is 1.
NumberOfParsingOutputFileQueueWorkers The number of threads used to generate reports and post-process the report
temp files to the report output folders.
The default (and minimum) value is 1.
OffsetWhenDSTActive Define time adjustment in minutes whenever DST is active, for example,
OffsetWhenDSTActive=+60.
OffsetWhenDSTInactive Define time adjustment in minutes whenever DST is inactive, for example,
OffsetWhenDSTInactive=-120.
Pollingtime If you have selected to run the parser continuously, type the number of
seconds that must pass between each check for input files. If the option is
commented, the default value is 5.
ParserOutputToInstanceSubdirectory 0 (default) - Do not allow parsers to share output directories when
FolderFileLimit is in use.
1 - Allow an additional folder to be appended to a subfolder so that multiple
parsers can share output directories without loss of data.
ProgramID The three-character program identifier (mandatory).
RefreshTime The pause (in seconds) between executions of the main loop when running
continuously.
RemoveHeader 0 (default) - The parser will write the header to the counter group output file.
1 - The parser will not write the header to the counter group output file.

ReportGenWorkers The number of threads that each parser object instance will create in order to
validate/output their reports.
These threads enable each parser object instance to validate/output more
than one report at a time.
The default value is 1, but a value of 3 is recommended.
The number of actual threads that this will generate can be calculated as
follows:
• Total number of ReportGenWorkers threads created =
NumberOfParserObjects*ReportGenWorkers
• Total active number of ReportGenWorkers threads =
NumberOfParsingOutputFileQueueWorkers*ReportGenWorkers
• Total idle number of ReportGenWorkers threads =
NumberOfParsingInputFileQueueWorkers*ReportGenWorkers
RunContinuous 0 - Have the Parser run once.
1 - Have the Parser continuously monitor for input files.
SafeMode 1 - Run the parser in Safe Mode. The parser will log any new counters found
in the input file that are not in the validation reports. These will be logged in
warning messages.
0 (default) - Do not run the parser in Safe Mode. The parser will not log any
new counters found in the input file that are not in the validation reports.
SeparatorOut This specifies the separator that is used in the output file.
For a space-separated file, use SeparatorOut=SPACE.
For a tab-separated file, use SeparatorOut=TAB.
The default is a comma (,).
StandAlone 0 – Run the application without a monitor file. Do not select this option if the
application is scheduled or the OPTIMA Process Monitor is used.
1 – Run the application with a monitor file.
StatsLogSeverity The severity level for log messages related to statistics.
If this value is equal to or greater than LogSeverity, then statistics will be
logged.
The minimum (and default) value is 1 - DEBUG.
The maximum value is 2 - INFORMATION.
TrimData 1 - The parser will trim the white space from the beginning and end of a
column data value after the data line has been split.
0 (default) - The parser will not trim the column data value.
TrimHeader 1 - The parser will trim the white space from the beginning and end of a
header column after the data line has been split.
0 (default) - The parser will not trim the header column.
TruncateHeader 0 - Do not truncate header column names.
1 - Truncate any header column name which is more than 30 characters
long.
UseFolderFileLimit Indicates whether the folder file limit should be used (1) or not (0).
The default value is 0 ('OFF').
Verbose 0 - Run silently. No log messages are displayed on the screen.
1 - Display log messages on the screen.
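
As a sketch, enabling DST adjustment in the [MAIN] section using three of the parameters above,
with the offsets taken from the table's own examples:

AdjustforDST=1
OffsetWhenDSTActive=+60
OffsetWhenDSTInactive=-120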


The following table describes common parameters in the [OPTIONS] section:

Parameter Description

TrimStringFields 1 (default) - The parser will trim the white space from the beginning and end of a
string of data.
0 - The parser will not trim the string.
QuoteFields 1 - The quote character is added before and after the data.
0 (default) - The quote character is not added.
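
As a sketch, an [OPTIONS] section that simply restates the documented defaults:

[OPTIONS]
TrimStringFields=1
QuoteFields=0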

The following table describes common parameters in a [PAGEDATEQUERYMODE] section:

Parameter Description

IndexCheck 1 (default) - Checks the index file to see if the database row has been written to
CSV in an older instance of the parser.
0 - Does not check the index file to see if the database row has been written to
CSV in an older instance of the parser.
FormatDateTime 1 (default) - Uses the format in the INI parameter DateTimeFormat to format any
date time fields read from the database.
0 - Does not use the format in the INI parameter DateTimeFormat to format any
date time fields read from the database.

If you are using validation, you should also define a [CounterGroups] section, using the following
parameters:

Parameter Description

NumberOfCounterGroups The expected number of counter groups in the report.
CounterGroupn The name of each counter group, where n is the counter group number.

For each counter group mentioned there must be a corresponding section in the INI file that shows
the associated column mapping.
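
As a sketch (the group names shown are hypothetical, and the contents of each corresponding
mapping section are interface-specific):

[CounterGroups]
NumberOfCounterGroups=2
CounterGroup1=Cell
CounterGroup2=Adjacency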

If you do not want to generate reports for any measurement objects that are currently inactive, then
you should define a [SUPPRESS_REPORTS] section, using the following parameters:

Parameter Description

NoOfReports The total number of reports that you want to suppress.


Reportn The name of each report that you do not want to generate, where n is the
sequential report number.
For example:
NoOfReports=3
Report1=Aal2PathVccTp
Report2=IpAccessHostGpb
Report3=NodeBFunction


Maintenance
In usual operation, the Parser should not need any special maintenance. During installation, the
OPTIMA Directory Maintenance application will be configured to maintain the backup and log
directories automatically.

However, TEOCO recommends the following basic maintenance checks are carried out for the Parser:
Check The When Why

Input directory for a backlog of Weekly Files older than the scheduling interval should
files not be in the input directory. A backlog
indicates a problem with the program.
Error directory for files Weekly Files should not be rejected. If there are files in
the error directory analyze them to identify why
they have been rejected.
Log messages for error Weekly In particular any Warning, Minor, Major and
messages Critical messages should be investigated.

Checking for Error Files


Files categorised as error files by the Parser are stored in the directory as defined in the
configuration (INI) file.

The log file is expected to have information related to any error files found in the particular
directory. For more information about the log file, see Checking a Log File Message on page 172.

Checking a Log File Message


The log file for each application is stored in the directory defined in the configuration (INI) file for
that application.

A new log file is created every day. The information level required in the log file is defined in the
General Settings dialog box and will be one of the following:
• Debug
• Information
• Warning
• Minor
• Major
• Critical

These levels help the user to restrict low level severity logging if required. For example, if Minor is
selected then only Minor, Major and Critical logging will occur.

Note: Utilities are provided for searching and filtering log messages and also for loading the
messages into the database. For more information, see Checking Log Files on page 41.


Checking the Parser Statistics Log Messages


In the parser INI file, if you have set the StatsLogSeverity to be equal to or greater than
LogSeverity, then statistics log messages will also be generated.
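
As a sketch, the following [MAIN] settings would cause statistics messages to be logged, because
StatsLogSeverity is equal to LogSeverity:

LogSeverity=1
StatsLogSeverity=1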

This table describes the statistics log messages available:

Message Code Name Parameter Description

3 End of instance TotalProcessTime The total time taken processing


the instance.
TotalFileSize(Kb) The total size of all input files
parsed.
TotalFileCount The total number input files
parsed.
NumberOfCounterGroups The total number of counter
groups.
OutputFileCount The total number of output files
generated.
TotalRowCount The total number of data lines in
all of the output files generated.
50003 Stats for Parser Object Instance
Number The ID number of the parser object instance.
NumberOfCounterGroups The number of counter groups
that the parser object instance
found in input files.
TotalNumberOfCounterGroupsOutputFiles The total number of counter
groups output files generated by
the parser object instance.
TotalNumberRowsOutputed The total number of data lines in
the counter groups output files
generated by the parser object
instance.
TotalNumberOfParsingInputFileCalls The number of times that the
'parse file' method was called
by a parser worker thread on
this parser object.
TotalTimeInParsingInputFileCalls The total amount of time that
the parser object was in the
'parse file' method.
MeanTimeParsingInputFileCalls The mean time that the parser
object was in the 'parse file'
method.
TotalNumberOfCreateReportsCalls The number of times that the
'report generate' method was
called by a parser out worker
thread on this parser object.
TotalTimeInCreateReportsCalls The total amount of time that
the parser object was in the
'report generate' method.
MeanTimeInCreateReportsCalls The mean time that the parser
object was in the 'report
generate' method.

173
OPTIMA 8.0 Operations and Maintenance Guide

Mess Name Parameter Description


age
Code

TotalNumberOfPostProcessingReportsCalls The number of times that the


'parse the post process output
files' method was called.
TotalTimeInPostProcessingReportsCalls The total amount of time that
the parser object was in the
'post process output files'
method.
MeanTimeInPostProcessingReportsCalls The mean time that the parser
object was in the 'post process
output files' method.
50004, Summary for all NumberOfCounterGroups The number of counter groups
50006 parser object from all parser objects.
instances
TotalNumberOfCounterGroupsOutputFiles The total number of files
outputted from all parser
objects.
TotalNumberRowsOutputed The total number of data lines in
output files from all parser
objects.
TotalNumberOfParsingInputFileCalls The total number of times that
the parser object's 'parser file'
method was called.
MeanTimeParsingInputFileCalls The mean time spent in the
parser object's 'parse file'
method.
TotalNumberOfCreateReportsCalls The total number of times that
the parser object's 'report
generate' method was called.
MeanTimeInCreateReportsCalls The mean time spent in the
parser object's 'report generate'
method.
TotalNumberOfPostProcessingReportsCalls The total number of times that
the parser object's 'post process
report' method was called.
MeanTimeInPostProcessingReportsCalls The mean time spent in the
parser object's 'post process
report' method.
50005 Summary for CounterGroup The counter group name.
each counter
group
InFileCount The number of input files
parsed which included this
counter group.
OutFileCount The number of output files
created for this counter group.
RowCount The total number of data rows
outputted for this counter group.
50010 Statistics for the ID The parsing worker thread ID.
parsing worker
TotalThreadTime The total time that the parser
thread was running.

174
About the OPTIMA Parser

Mess Name Parameter Description


age
Code

TotalTaskTime The total time that the parser


thread spent in the parser
object's 'parsing' method.
MinTaskTime The minimum amount of time
that the parser thread spent in
the parser object's 'parsing'
method.
MaxTaskTime The maximum amount of time
that the parser thread spent in
the parser object's 'parsing'
method.
MeanTaskTime The mean time that the parser
thread spent in a parser object's
'parsing' method.
MedianTaskTime The median time that the parser
thread spent in a parser object's
'parsing' method.
50011 Statistics for the ID The parsing worker thread ID.
parser output
worker
TotalThreadTime The total amount of time that
the parser output thread was
running
TotalTaskTime The total amount of time that
the parser output thread spent
in the parser object's 'report
generation' method.
MinTaskTime The minimum amount of time
that the parser output thread
spent in the parser object's
'report generation' method.
MaxTaskTime The maximum amount of time
that the parser output thread
spent in the parser object's
'report generation' method.
MeanTaskTime The mean time that the parser
output thread spent in the
parser object's 'report
generation' method.
MedianTaskTime The median time that the parser
output thread spent in the
parser object's 'report
generation' method.
50012 Statistics for the ID The parsing worker thread ID.
post process
worker
TotalThreadTime The total amount of time that
the post process thread was
running.
TotalTaskTime The total amount of time that
the post process thread spent
moving input files.
MinTaskTime The minimum amount of time
that the post process thread
spent moving an input file.

175
OPTIMA 8.0 Operations and Maintenance Guide

Mess Name Parameter Description


age
Code

MaxTaskTime The maximum amount of time


that the post process thread
spent moving an input file.
MeanTaskTime The mean time that the post
process thread spent moving
input files.
MedianTaskTime The median time that the post
process thread spent moving
input files.

Checking the Parser Information Log Messages


This table describes the information log messages available:

Message Code | Description
2 | The instance start message.
50 | The common backend parameters set in the INI file.
7001 | Input file removed from the input file queue. [File:FILENAME]
7010 | When batching is on: "Successfully created x report output files from y input files." When batching is off: "Successfully created x report output files from input file. [File:FILENAME]"
50000 | The backend application name and version with revision.
70003 | The parser's INI parameters.

Stopping the OPTIMA Parser


If the Parser is scheduled, then it will terminate when all files in the input directory have been
processed.

If the Parser is run continuously, then the input directory is monitored continuously. In this case, the
Parser can be terminated. For more information, see Starting and Stopping the Data Loading
Process on page 40.


Checking the Version of the Parser


If you need to contact TEOCO support regarding any problems with the Parser, you must provide
the version details.

You can either obtain the version details from the log file or you can print them by running the
executable with the -v option at the command prompt.

For example, for the Nortel XML Parser:

In Windows:

opx_PAR_NOR_711.exe -v

In Unix:

opx_PAR_NOR_711 -v

For more information about obtaining version details, see About Versioning on page 33.

Checking the Parser is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.
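
For example, on Unix you could list the PRID directory configured in the application's INI file (the
path shown is illustrative):

ls /OPTIMA_DIR/<application_name>/pid

If the command lists a PRID file for the instance, the application is running.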

Troubleshooting
The following table shows troubleshooting tips for the OPTIMA Parser:

Symptom | Possible Cause | Solution
Application not processing input files. | Application has not been scheduled. Crontab entry removed. Application has crashed and Process Monitor is not configured. Incorrect configuration settings. File(s) do not match the input mask(s). | Use Process Monitor to check last run status. Check crontab settings. Check configuration settings. Check process list and monitor file. If there is a monitor file and no corresponding process with that PID, then remove the monitor file. Note: The process monitor will do this automatically. Change the input masks.
Application exits immediately. | Another instance is running. Invalid or corrupt configuration (INI) file. | Use Process Monitor to check instances running.
Files in Error Directory. | Incorrect configuration settings. Invalid input files. | Check the log file for more information on the problems. Check the error file format.
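
When checking crontab settings, a scheduled Parser would typically appear as an entry along these
lines (the binary name, paths and timing shown are illustrative):

*/15 * * * * /OPTIMA_DIR/bin/opx_PAR_NOR_711 /OPTIMA_DIR/opx_PAR_NOR_711/opx_PAR_NOR_711.ini

If no such entry exists, or it has been commented out, the Parser will not be started.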


Example Parser Configuration (INI) File


[DIR]
DirFrom=/OPTIMA_DIR/<application_name>/in
DirTo=/OPTIMA_DIR/<application_name>/out
DirBackup=/OPTIMA_DIR/<application_name>/backup
ErrorDir=/OPTIMA_DIR/<application_name>/error
LogDir=/OPTIMA_DIR/<application_name>/log
TempDir=/OPTIMA_DIR/<application_name>/temp
PIDFilePath=/OPTIMA_DIR/<application_name>/pid
CombinerDir=/OPTIMA_DIR/<application_name>/combiner

[MAIN]
InterfaceID=002
ProgramID=722
InstanceID=001
LogGranularity=3
LogSeverity=1
RunContinuously=0
PollingTime=1
StandAlone=0
InputFileNameAsColumn=1
TruncateHeader=0
InputFileSortOrder=0
IncludeSubDirectories=0
AdjustForDST=0
OffsetWhenDSTActive=+60
OffsetWhenDSTInactive=-60
InputFileMask=*
EnableBackup=1
EnableCombiner=0
NumberOfParsingInputFileQueueWorkers=1
NumberOfParserOutputFileQueueWorkers=1
NumberOfMoveInputFileQueueWorkers=1
NumberOfParserObjects=1
UseFolderFileLimit=1
FolderFileLimit=1000
Verbose=0
ParserOutputToInstanceSubdirectory=1
EnableValidation=0
TrimHeader=1
TrimData=0
separatorOut=,
MissingValue=
RemoveHeader=0
ColumnsCaseSensitive=0
SafeMode=0

[CounterGroups]
NumberOfCounterGroups=24
CounterGroup1=IubDataStreams
CounterGroup2=NbapCommon
CounterGroup3=PlugInUnit
...
CounterGroup23=DownlinkBaseBandPool
CounterGroup24=Sccpch

[IubDataStreams]
ColumNumber=105

Column1=DATETIME
Column2=DATETIMEZONE
Column3=DATETIMEUTC
Column4=EMS_NAME
Column5=NE_VERSION
...
Column100=pmCapAllocIubHsLimitingRatioSpi07
Column101=pmCapAllocIubHsLimitingRatioSpi06
Column102=pmCapAllocIubHsLimitingRatioSpi05
Column103=pmCapAllocIubHsLimitingRatioSpi04
Column104=pmHsDataFramesReceivedSpi15
Column105=pmHsDataFramesReceivedSpi14

[NbapCommon]
ColumNumber=28

Column1=DATETIME
Column2=DATETIMEZONE
Column3=DATETIMEUTC
Column4=EMS_NAME
...
Column25=NodeBFunction
Column26=Iub
Column27=NbapCommon
Column28=pmNoOfDiscardedMsg

[PlugInUnit]
ColumNumber=29

Column1=DATETIME
Column2=DATETIMEZONE
Column3=DATETIMEUTC
Column4=EMS_NAME
...
Column26=Subrack
Column27=Slot
Column28=PlugInUnit
Column29=pmProcessorLoad

[DownlinkBaseBandPool]
ColumNumber=67

Column1=DATETIME
Column2=DATETIMEZONE
Column3=DATETIMEUTC
Column4=EMS_NAME
Column5=NE_VERSION
...
Column63=pmCapacityDlCe
Column64=pmSamplesCapacityDlCe
Column65=pmSumCapacityDlCe
Column66=pmSumSqrCapacityDlCe
Column67=pmUsedADch

[Sccpch]
ColumNumber=32
Column1=DATETIME
Column2=DATETIMEZONE
Column3=DATETIMEUTC
Column4=EMS_NAME
...
Column29=pmNoOfTfc1OnFach1
Column30=pmNoOfTfc2OnFach1
Column31=pmNoOfTfc3OnFach2
Column32=pmMbmsSccpchTransmittedTfc


Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows
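
For instance, on a Windows server the DirFrom entry above might become (illustrative path):

DirFrom=C:\OPTIMA\opx_PAR_NOR_711\in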


5 About Data Validation

Data validation checks the CSV files created by the parser, ensuring column order, defaulting
missing data values and splitting files if required. Once validated, the files are loaded into the
database.

Important: The loader contains a number of validation options, which you can use instead of the
separate Data Validation application. For more information, see About Loading in OPTIMA on page
213.

The data validation application uses a configuration file (INI) to store information about processing
the files. The configuration file can be edited using a suitable text editor. For more information, see
Configuring the Data Validation Application on page 183.

This diagram shows the data validation process:

[Figure: Data Validation Process — raw file(s) are parsed into CSV file(s); the Data Validation
application (DVL), configured by an INI file, produces validated CSV file(s), which the OPTIMA
Loader loads into the database.]

The content of each output file from the data validation application is defined in a report, which is
stored in the configuration file. Within a report, you can specify which columns of data from the
input file will be included in the output file and the order in which they are required. You can define
multiple reports to create multiple output files from a single input file.

If the data validation application is running when you make changes to the configuration (INI) file,
you must restart the application for the changes to be effective.


The data validation program supports these common functions:

Function | Action
Logging | Status and error messages are recorded in a daily log file.
Error Files | If the application detects an error in the input file that prevents processing of that file, then the file is moved to an error directory and processing continues with the next file.
Monitor Files | The application runs in a scheduled mode. A monitor (PID) file, created each time the application is started, ensures that multiple instances of the application cannot be run. The PID file is also used by the OPTIMA Process Monitor to ensure that the application is operating normally.
PRID | The automatically-assigned PRID uniquely identifies each instance of the application. It is composed of a 9-character identifier, made up of Interface ID, Program ID and Instance ID. The Interface ID is made up of 3 numbers but the Program ID and Instance ID are both made up of 3 characters, which can be a combination of numbers and uppercase letters. For more information, see About PRIDs on page 29.
Backup | The application can store a copy of each input file in a backup directory.

For more details on these common functions, see Introduction on page 15.

The Validation Process


On startup, the data validation application loads the report(s) from the configuration file into
memory. The report(s) contain the expected column(s) and the column(s) order for each output file.

The application checks the input folder for any file(s) that match the file mask for the report and
opens these files. The first row (header) of each file is split into columns to obtain the actual
column order, which is compared with the column order listed in the report(s). If the column order
comparison matches successfully, the file is validated and moved into the correct output folder;
the output file may also be renamed, based on the report settings. If the column order does not
match, the file needs further processing and the column order for the whole file is updated.
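
As a rough illustration only (not the application's actual implementation), the column-order check
and reordering step can be sketched in Python, assuming comma-separated files and a report column
list:

import csv

def validate_file(in_path, out_path, report_columns, missing_value=""):
    """Check a CSV header against the report and reorder columns if needed."""
    with open(in_path, newline="") as fin:
        reader = csv.reader(fin)
        header = next(reader)
        if header == report_columns:
            return "match"  # file can simply be moved to the output folder
        index = {name: i for i, name in enumerate(header)}
        with open(out_path, "w", newline="") as fout:
            writer = csv.writer(fout)
            writer.writerow(report_columns)
            for row in reader:
                # Counters missing from the input are defaulted with missing_value
                writer.writerow([row[index[c]] if c in index else missing_value
                                 for c in report_columns])
        return "reordered"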

While a file is being processed by the data validation application, it is stored in a temporary folder.
This is to prevent incomplete files being sent to the loader. When the validation process has
finished successfully, the processed file is moved from the temporary folder to the output folder.

The output filename may also have a text value attached depending on the settings in the report.
For more information about report settings, see Defining Reports on page 186.

Installing the Data Validation Application


Before you can use the data validation application, install the following file to the backend binary
directory:
• opx_DVL_GEN_411.exe (Windows)
• opx_DVL_GEN_411 (Unix)

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.


Configuring the Data Validation Application


The data validation application is configured using a configuration (INI) file. Configuration changes
are made by editing the parameters in the configuration (INI) file with a suitable text editor. The
data validation configuration (INI) file is divided into different sections.

The following table describes the parameters in the [DIR] section:

Parameter | Description
DirBackup | The location of the backup files.
DirFrom | The location of the input files.
DirTo | Where the files are output after validation.
ErrorDir | Where files with errors are sent.
LogDir | The location of the log files.
MonFilePath | The location of the monitor (PID) files.
TempDir | The location of temporary files created during the validation process.

The following table describes the parameters in the [MAIN] section:

Parameter | Description
DoCpyToErr | Indicates whether the input file will be moved to the error folder when the validation fails (1) or not (0).
EnableBackup | Indicates whether a copy of the input file will be copied to the backup directory (1) or not (0). If you do not choose to back up, the input file is deleted after it has been processed. The default value is 0.
FolderFileLimit | The maximum number of output files that can be created in each output (sub) folder. This must be in the range of 100-100,000 for Windows, or 100-500,000 on Sun/UNIX, otherwise the application will not run. Warning: Depending on the number of files that you are processing, the lower the file limit, the more output sub-folders will be created. This can have a significant impact on performance, so if you do need to change the default, ensure that you do not set the number too low. The default is 10,000.
InputFileMask | Filter for input files to process, for example, *C*.*
InputFileNameAsColumn | If this is set to 1, it adds an INPUT_FILE_NAME_CMB column (with its data values underneath) to the output file. By default (0), this is not done.
InstanceID | The three-character program instance identifier (mandatory).
InterfaceID | The three-digit interface identifier (mandatory).
Iterations | Used when the application does not run in continuous mode, so that it checks for input files in the input folder for the required number of iterations before exiting. Integer values are allowed, such as 1, 2, 3, 4 and so on. The default is 5. Note: If, during an iteration, no input files are found in the input folder, the program will exit.
LogGranularity | Defines the frequency of logging. The options are: 0 - Continuous; 1 - Monthly; 2 - Weekly; 3 - Daily (default).
LogLevel (or LogSeverity) | Sets the level of information required in the log file. The available options are: 1 - Debug; 2 - Information (default); 3 - Warning; 4 - Minor; 5 - Major; 6 - Critical.
PollingTime (or RefreshTime) | The pause (in seconds) between executions of the main loop when running continuously.
ProgramID | The three-character program identifier (mandatory).
RunContinuous | 0 - Run the data validation application once (default). 1 - Have the data validation application continuously monitor for input files.
Standalone | 0 - Run the application without a monitor file. Do not select this option if the application is scheduled or the OPTIMA Process Monitor is used. 1 - Run the application with a monitor file.
UseFolderFileLimit | Indicates whether the folder file limit should be used (1) or not (0). The default value is 0 ('OFF').
Verbose | 0 - Run silently; no log messages are displayed on the screen. 1 - Display log messages on the screen.

The following table describes the parameters in the [OPTIONS] section:

Note: These settings are not mandatory and the user can decide to use them.

Parameter | Description
AvoidLineWithSubStrings | Do not process any line in which the specified string is found.
CheckAllRows | 0 - Do not check if any data is missing in any line. 1 - Check if any data is missing in any line.
CheckLastRow | 0 - Do not check for missing data in the original file. 1 - Check for missing data in the original file and log missing data in the log file.
ColumnsCaseSensitive | 0 (default) - The Data Validation Application will convert output header columns and validation report column names into upper case when comparing. 1 - The Data Validation Application will not convert output header columns and validation report column names into upper case when comparing.
HeaderLineNumber | The line number of the header line in the input file. Default = 1.
MissingValue | This string value will be used for counters missing from the input file. The default is an empty string.
OutputFileMask | Type the extension to give to output file names, for example, .csv.
RemoveHeader | 0 (default) - The Data Validation Application will write the header to the counter group output file. 1 - The Data Validation Application will not write the header to the counter group output file.
SeparatorIn | Separator character for input files. The possible characters are: comma ",", pipe "|", tab "TAB", space "SPACE".
SeparatorOut | Separator character for output files. The possible characters are: comma ",", semicolon ";", pipe "|", tab "TAB", space "SPACE".
TrimData | Remove any spaces found around the data values. Values 0-1.
TrimHeader | Remove any spaces found around the header columns. Values 0-1.
UseETLLoader | 0 - The end-of-line of output files depends on the operating system: WIN32 \r\n, UNIX \n. 1 - The end-of-line of output files is always UNIX format (\n).
WindowsInputFiles | Use this parameter when the input files are in Windows format (lines end with \r\n). 0 (default) - Input files are not Windows. 1 - Input files are Windows. Important: If this parameter is not set correctly for the input files that are used, the data is still processed, but because of the extra character added while transferring, the last column is ignored and its value is filled in using the MissingValue parameter.

Starting the Data Validation Application


To start the data validation application, type the executable file name and a configuration file name
into the command prompt.

In Windows, type:

opx_DVL_GEN_411.exe opx_DVL_GEN_411.ini

In Unix, type:

opx_DVL_GEN_411 opx_DVL_GEN_411.ini


Defining Reports
Reports specify what information will be validated by the data validation application. You define
reports by editing parameters in the configuration (INI) file with a suitable text editor.

The following table describes the parameters in the [REPORTS] section:

Parameter | Description
Column | Type the column name.
ColumNumber | Type the number of header columns in the report.
Number | Type the number of reports to be validated.
Reportn | Type the unique name of the report, where n is the execution order position of the report; for example, Report1 will be executed before Report2.
ReportActive | 0 - Set the report to be non-active. Non-active reports are ignored by the data validation application. 1 - Set the report to be active.

The following example shows the definitions for two reports called UTRANCELL_A and
UTRANCELL_B:

[REPORTS]
Number=2
Report1=UTRANCELL_A
Report2=UTRANCELL_B

[UTRANCELL_A]
ReportActive=1
ColumNumber=8
Column1=subNetwork
Column2=subNetwork_1
Column3=ManagedElement
Column4=Start_Date
Column5=End_Date
Column6=RncFunction
Column7=UtranCell
Column8=VS.RadioLinkDeletionUnsuccess

[UTRANCELL_B]
ReportActive=1
ColumNumber=9
Column1=subNetwork
Column2=subNetwork_1
Column3=ManagedElement
Column4=Start_Date
Column5=End_Date
Column6=RncFunction
Column7=UtranCell
Column8=VS.3gto2gHoDetectionFromFddcell.RescueCs
Column9=VS.3gto2gHoDetectionFromFddcell.RescuePs

For more information, see Example Data Validation Configuration (INI) File on page 190.


Maintenance
In usual operation, the data validation application should not need any special maintenance. During
installation, the OPTIMA Directory Maintenance application will be configured to maintain the
backup and log directories automatically.

However, TEOCO recommends the following basic maintenance checks are carried out for the data
validation application:

Check The | When | Why
Input directory for a backlog of files | Weekly | Files older than the scheduling interval should not be in the input directory. A backlog indicates a problem with the program.
Error directory for files | Weekly | Files should not be rejected. If there are files in the error directory, analyze them to identify why they have been rejected.
Log messages for error messages | Weekly | In particular, any Warning, Minor, Major and Critical messages should be investigated.

Checking for Error Files


Files categorised as error files by the data validation application are stored in the error directory
defined in the configuration (INI) file.

The log file records information about any files moved to the error directory. For more information
about the log file, see Checking a Log File Message on page 187.

Checking a Log File Message


The log file for each application is stored in the directory defined in the configuration (INI) file for
that application.

A new log file is created every day. The information level required in the log file is defined in the
General Settings dialog box and will be one of the following:
• Debug
• Information
• Warning
• Minor
• Major
• Critical

These levels help the user to restrict low level severity logging if required. For example, if Minor is
selected then only Minor, Major and Critical logging will occur.

Note: Utilities are provided for searching and filtering log messages and also for loading the
messages into the database. For more information, see Checking Log Files on page 41.


Stopping the Data Validation Application


If the data validation application is scheduled, then it will terminate when all files in the input
directory have been processed.

If the application is run continuously, then the input directory is monitored continuously. In this case,
the application can be terminated. For more information, see Starting and Stopping the Data
Loading Process on page 40.

Checking the Version of the Data Validation Application


If you need to contact TEOCO support regarding any problems with the Data Validation application,
you must provide the version details.

You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

In Windows:

opx_DVL_GEN_411.exe -v

In Unix:

opx_DVL_GEN_411 -v

For more information about obtaining version details, see About Versioning on page 33.

Checking the Application is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.

Troubleshooting
The following table shows troubleshooting tips for the data validation application:

Symptom | Possible Cause | Solution
Cannot save configuration (INI) file. | User has insufficient privileges on the configuration (INI) file or directory. The file is read-only or is being used by another application. | Enable permissions. Make the file writable. Close the data validation application to release the configuration (INI) file.
New configuration settings are not being used by the application. | Settings are not saved to the configuration (INI) file. File created in the wrong location. Data Validation application has not restarted to pick up the new settings. | Check the settings and the location of the file. Restart the data validation application.
Application not processing input files. | Application has not been scheduled. Crontab entry removed. Application has crashed and Process Monitor is not configured. Incorrect configuration settings. | Use Process Monitor to check last run status. Check crontab settings. Check configuration settings. Check process list and monitor file. If there is a monitor file and no corresponding process with that PID, then remove the monitor file. Note: The process monitor will do this automatically.
Application exits immediately. | Another instance is running. Invalid or corrupt configuration (INI) file. | Use Process Monitor to check instances running.
Files in Error Directory. | Incorrect configuration settings. Invalid input files. | Check the log file for more information on the problems. Check the error file format.

Data Validation Application Message Log Codes


This section describes the message log codes for the Data Validation Application:

Message Code | Description | Severity
1000 | Validation instance started. Creating list of files in the Input Directory | DEBUG
 | Validation instance finish processing Input Directory. | DEBUG
1001 | Input folder is empty. | DEBUG
1002 | Processing input file: <file_name_and_path>. | DEBUG
1003 | Requesting Validation workers to stop when file group queue is empty. | DEBUG
2100 | Error Found when processing file, so moved Input File : <filename> to Error Directory. | WARNING
2101 | Unable to Move Input File : <filename> to Error Directory. | WARNING
2102 | Error Found when processing file, deleting input file : <filename>. | INFORMATION
2103 | Moved Input File : <filename> to Backup Directory. | DEBUG
2104 | Unable to Move Input File : <filename> to Backup Directory. | WARNING
3200 | Waiting for all Validation workers to finish. | DEBUG
3201 | All file Validation workers finished. | DEBUG
4100 | Successfully created report file <final_report_filename>. | INFORMATION
4100 | Successfully created skinny file <final_skinny_filename>. | INFORMATION
4100 | Successfully created bad file <finalBadLinesFileName>. | INFORMATION
4101 | Could not create report file <final_report_filename>. | WARNING
4101 | Could not create skinny file <final_skinny_filename>. | WARNING
4101 | Could not create bad file <finalBadLinesFileName>. | WARNING
5000 | Processing input file : <file_name_and_path>. | DEBUG
5000 | Validation Worker Started (<threadID>). | DEBUG
5000 | Started processing file <InputFileName>. | DEBUG
5000 | Finish processing file <InputFileName>. | DEBUG
5000 | Validation Worker Finished (<threadID>). | DEBUG

Example Data Validation Configuration (INI) File


[DIR]
LogDir=/OPTIMA_DIR/<application_name>/log
TempDir=/OPTIMA_DIR/<application_name>/temp
MonFilePath=/OPTIMA_DIR/<application_name>/pid
DirFrom=/OPTIMA_DIR/<application_name>/in
DirTo=/OPTIMA_DIR/<application_name>/out
DirBackup=/OPTIMA_DIR/<application_name>/backup
ErrorDir=/OPTIMA_DIR/<application_name>/error

[MAIN]
EnableBackup=1
LogGranularity=3
LogSeverity=2
PollingTime=10
RunContinuous=0
StandAlone=1
InterfaceID=000
ProgramID=222
InstanceID=ABC
Iterations=1
InputFileMask=*.csv
InputFileNameAsColumn=1
verbose=1
DoCpyToErr=1

UseFolderFileLimit=1
FolderFileLimit=100

[OPTIONS]
SeparatorIn=,
SeparatorOut=,
HeaderLineNumber=1
AvoidLineWithSubStrings=ignore
TrimHeader=1
TrimData=1
ColumnsCaseSensitive=0
MissingValue=NULL
RemoveHeader=0
WindowsInputFiles=1

[REPORTS]
Number=2
Report1=CL
Report2=CL2

[CL]
ColumNumber=12
Column1=Header_1
Column2=HEADER_2
Column3=HEADER_5
Column4=HEADER_4
Column5=HEADER_3
Column6=HEADER_6
Column7=HEADER_13
Column8=HEADER_8
Column9=HEADER_9
Column10=HEADER_10
Column11=HEADER_11
Column12=Header_222

[CL2]
ColumNumber=2
Column1=HEADER_26
Column2=HEADER_27

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows


6 About the File Combiners

The File Combiners enable you to merge the CSV files output by certain parsers into new
combined CSV files. CSV files can be combined before data validation or as part of the data
validation process. For more information on data validation, see About Data Validation on page
181.

Note: The File Combiners only work with specific parsers. For more information, contact TEOCO
support.

There are two File Combiners:


• Single input File Combiner
• Multiple input File Combiner (recommended)

The File Combiners use a configuration file (INI) to store information about combining the files. The
configuration file can be edited using a suitable text editor.

The content of each combined file is defined in a report, which is stored in the configuration file.
Within a report, you specify which type of input files will be combined and which common columns
of data from the input files will be included in the output file. For more information, see Defining
Reports on page 197.

The File Combiners support these common functions:

Function | Action
Logging | Status and error messages are recorded in a daily log file.
Error Files | If the application detects an error in the input file that prevents processing of that file, then the file is moved to an error directory and processing continues with the next file.
Monitor Files | The application runs in a scheduled mode. A monitor (PID) file, created each time the application is started, ensures that multiple instances of the application cannot be run. The PID file is also used by the OPTIMA Process Monitor to ensure that the application is operating normally.
PRID | The automatically-assigned PRID uniquely identifies each instance of the application. It is composed of a 9-character identifier, made up of Interface ID, Program ID and Instance ID. The Interface ID is made up of 3 numbers but the Program ID and Instance ID are both made up of 3 characters, which can be a combination of numbers and uppercase letters. For more information, see About PRIDs.
Backup | The application can store a copy of each input file in a backup directory.

For more details on these common functions, see Introduction on page 15.

Combiner Quick Start


This section is intended to indicate the steps you must take to get the OPTIMA multiple input File
Combiner running for demonstration purposes. It covers the essential parameters that must be
configured. Where more parameters exist but are not mentioned, the default settings will suffice.
For more information on the use of all the parameters that determine the behavior of the OPTIMA
multiple input File Combiner, see the remainder of this chapter.

OPTIMA provides a single input File Combiner and a Multiple input File Combiner. This section
describes how to set up the multiple input File Combiner, which is recommended. In order to use
the multiple input File Combiner, you must install a configuration file and an associated executable
file.

To configure and run the multiple input File Combiner:

1. Create a multiple input File Combiner configuration (.ini) file called
opx_CMB_GEN_903_prid.ini, setting the parameters shown in this example to values that suit
your system (the values need not be the ones shown):

NAME=CELLHANDOVERS_CMB
PRID=001903001
IN_DIR=/OPTDIR/Interfaces/ERI/UTRAN/CMB/in
OUT_DIR=/OPTDIR/Interfaces/ERI/UTRAN/CMB/out
ERROR_DIR=/OPTDIR/Interfaces/ERI/UTRAN/CMB/error
BACKUP_DIR=/OPTDIR/Interfaces/ERI/UTRAN/CMB/backup
TEMP_DIR=/OPTDIR/Interfaces/ERI/UTRAN/CMB/tmp
LOG_DIR=/OPTDIR/log/
PRID_DIR=/OPTDIR/prid
NUM_REPORTS=2
REMOVEFROMFILENAME=_1_[_\dA-z]+
FILEFORMAT=.*
DATEFORMAT=CYYYYMMDD
KEYHEADERS=SENDERNAME,MEASTIMESTAMP,GRANULARITYPERIOD,MEASOBJINSTID
KEEPUNIQUEHEADERS=HOVERCNT,HOVERSUC,HORTTOCH,HOASBCL,HOASWCL,HOSUCB
CL,HOSUCWCL,HOTOLCL,HOTOKCL,HOUPLQA,HODWNQA,HOEXCTA,HODUPFT,HOTOHCS
,HOATTLSS,HOATTHSS,HOATTHR,HOSUCHR

[REPORTS]
REPORT1=NCELLREL
REPORT2=NECELASS

2. Move the configuration file (.ini) to the appropriate sub folder under the OPTIMA Backend
Interface folder on the mediation server, for example:

OPTIMA Backend\Interface\ERI\UTRAN\CMB

3. Create the required directories. For example in UNIX:

mkdir in out error backup

4. Add some input data in CSV file format to the input folder.

5. Copy the appropriate opx_CMB_GEN_903 executable file for your chosen platform from
the folder to which it has been extracted by the OPTIMA backend installation, normally
under:

C:\Program Files(x86)\AIRCOM International\OPTIMA Backend 8.0\Mediation

to the OPTIMA Backend Bin folder on the mediation server.

6. Run the Combiner. For example:

opx_CMB_GEN_903.exe opx_CMB_GEN_903_prid.ini in Windows

- or -

opx_CMB_GEN_903 opx_CMB_GEN_903_prid.ini in Unix.


What is Combining?
The following process describes how files are combined by the File Combiners:

1. During parsing, the File Combiner-specific parser extracts data from raw input files and
stores it in CSV files. The file name of each CSV file contains the object type that the File
Combiner will use in the combining process, in the format:

<raw file name>_<object type>__<process date time stamp>.csv

2. On start up, the File Combiner loads the report(s) from the configuration file into memory.
The report(s) contain the types of the files that are to be combined and their expected
common columns. If there are multiple reports, the application will process them one at a
time.

3. The File Combiner application loads the CSV files from the combined log file(s) and checks
that the CSV files contain the types specified in the report.

4. The File Combiner opens the first CSV file and stores the number of rows it has in memory.
This row count is used as a reference value to ensure that all files to be combined have the
same number of rows. Files cannot be combined if they have different numbers of rows.

5. The File Combiner checks that the header columns of the CSV file match the common
columns specified in the report. If the column comparison matches successfully, the other
columns of the CSV file are stored in memory and the next CSV file is processed. When all
CSV files have been processed, the File Combiner combines all of the stored columns into
a new CSV file with a single common header. The new combined file is moved into the
correct output folder.

6. While a file is being processed by the File Combiner, it is stored in a temporary folder. This
is to prevent incomplete files being sent to the Loader input directory. When the combining
process has finished successfully, the processed file is moved from the temporary folder to
the Loader input directory.

If you are using the single input File Combiner, once a CSV file has been successfully
created and saved, its file name and directory path are logged by the parser in a combined
log file. For more information on parsing, see The Parsing Process on page 166.
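
As a simplified illustration of steps 4 and 5 (not the combiner's actual code), a column-wise merge
with a row-count check might look like this in Python:

import csv

def combine_csv_files(paths, common_columns):
    """Column-wise merge of CSV files that share a set of common columns."""
    combined_header = list(common_columns)
    combined_rows = None
    row_count = None
    for path in paths:
        with open(path, newline="") as f:
            rows = list(csv.reader(f))
        header, data = rows[0], rows[1:]
        if row_count is None:
            # The first file's row count is the reference value
            row_count = len(data)
            combined_rows = [[row[header.index(c)] for c in common_columns]
                             for row in data]
        elif len(data) != row_count:
            # Files with different numbers of rows cannot be combined
            raise ValueError(path + ": row count differs, cannot combine")
        # Each file contributes its non-common columns to the combined output
        extra = [i for i, c in enumerate(header) if c not in common_columns]
        combined_header += [header[i] for i in extra]
        for out_row, row in zip(combined_rows, data):
            out_row += [row[i] for i in extra]
    return [combined_header] + combined_rows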

Installing the File Combiners


Before you can use one of the File Combiners, you must install the appropriate file to the backend
binary directory. This table describes the available options:

For this File Combiner | For this Operating System | Install this file
Multiple Input | Windows (with ActivePerl) | opx_CMB_GEN_903.exe
Multiple Input | Unix | opx_CMB_GEN_903
Single Input | Windows | opx_CMB_GEN_900.exe
Single Input | Unix | opx_CMB_GEN_900

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.


Starting the File Combiners


To start the required File Combiner, type the appropriate executable file name and a configuration
file name into the command prompt.

This table describes the available options:

For this File Combiner | For this Operating System | Type the following into the command prompt
Multiple Input | Windows (with ActivePerl) | C:\opx_CMB_GEN_903.exe opx_CMB_GEN_903.ini
Multiple Input | Unix | $ ./opx_CMB_GEN_903 opx_CMB_GEN_903.ini
Single Input | Windows | C:\opx_CMB_GEN_900.exe opx_CMB_GEN_900.ini
Single Input | Unix | $ ./opx_CMB_GEN_900 opx_CMB_GEN_900.ini

Configuring the Single Input File Combiner


The File Combiners are configured using a configuration (INI) file. Configuration changes are made
by editing the parameters in the configuration (INI) file with a suitable text editor. The File Combiner
configuration (INI) file is divided into different sections.

The following table describes the parameters in the [DIR] section:

Single Input File Combiner Parameter | Description
DirBackup | The location of the backup files.
DirFrom | The location of the log files output by the Parser.
DirIncomplete | Where files that are incomplete are sent after the combining process.
DirTo | Where the files are output after combining.
ErrorDir | Where files with errors are sent.
LogDir | The location of the log files created by the File Combiner.
PIDFileDir | The location of the monitor (PID) files.
TempDir | The location of temporary files created during the combining process.

The following table describes the parameters in the [MAIN] section:

Single Input File Combiner Parameter | Description
EnableBackup | Indicates whether a copy of the input file will be copied to the backup directory (1) or not (0). If you do not choose to back up, the input file is deleted after it has been processed.
InputFileMask | Filter for input files to process, for example, *C*.*
InstanceID | The three-character program instance identifier (mandatory).
InterfaceID | The three-digit interface identifier (mandatory).
Iterations | Used when the application does not run in continuous mode, so that it checks for input files in the input folder for the required number of iterations before exiting. Integer values are allowed, such as 1, 2, 3, 4 and so on.
LogGranularity | Defines the frequency of logging. The options are: 0 - Continuous; 1 - Monthly; 2 - Weekly; 3 - Daily.
LogLevel (or LogSeverity) | Sets the level of information required in the log file. The available options are: 1 - Debug; 2 - Information (default); 3 - Warning; 4 - Minor; 5 - Major; 6 - Critical.
ProgramID | The three-character program identifier (mandatory).
RefreshTime | The pause (in seconds) between executions of the main loop when running continuously.

The following table describes the parameters in the [OPTIONS] section:

Parameter | Description
MoveUncombinedFiles | Specifies what the File Combiner should do with uncombined files. The available options are: 0 - Do not move the file; 1 - Move the file to the directory specified in the DirIncomplete parameter; 2 - Delete the file; 3 - Move the file to the directory specified in the DirTo parameter plus the last sub-path of the original path.

For more information, see Example Single Input File Combiner Configuration (INI) File on page
198.

Defining Reports
Reports specify which input files will be combined and which common columns of data from the
input files will be included in the output file. You define reports by editing parameters in the
configuration (INI) file with a suitable text editor.


The following table describes the parameters in the [REPORTS] section:

Parameter | Description
ExcludeFields | Type the columns you want to exclude from the report.
KeyFields | Type the common columns that you want to use in the report.
Number | Type the number of reports to be combined.
Reportn | Type the unique name of the report, where n is the execution order position of the report; for example, Report1 will be executed before Report2.
Types | Type the names of the files you want to combine.

The following example shows the definitions for two reports called UtranCell and UeRc:

[REPORTS]
Number=2
Report1=UtranCell
Report2=UeRc

[UtranCell]
Types=pmSamplesCs12RabEstablish,pmNoDirRetrySuccess,pmUlTrafficVolumePsSt
r64Ps8
KeyFields=ffv,SubNetwork,SubNetwork1,MeContext,st,vn,cbt,mff,NewVnNode,ne
un,nedn_SubNetwork,nedn_SubNetwork1,nedn_MeContext,mts,gp,ManagedElement,
RncFunction,UtranCell
ExcludeFields=DUMP

[UeRc]
Types=pmTransportBlocksAcUl,pmUlRachTrafficVolume
KeyFields=ffv,SubNetwork,SubNetwork1,MeContext,st,vn,cbt,mff,NewVnNode,ne
un,nedn_SubNetwork,nedn_SubNetwork1,nedn_MeContext,mts,gp,ManagedElement,
RncFunction,UeRc
ExcludeFields=DUMP

For more information, see Example Single Input File Combiner Configuration (INI) File on page
198.

Example Single Input File Combiner Configuration (INI) File


Here is an example Single Input File Combiner configuration file:

[COMMON]
DirFrom=/OPTIMA_DIR/<parser_name>/combine_log
DirTo=/OPTIMA_DIR/<application_name>/out
DirBackup=/OPTIMA_DIR/<application_name>/backup
ErrorDir=/OPTIMA_DIR/<application_name>/error
TempDir=/OPTIMA_DIR/<application_name>/temp
PIDFileDir=/OPTIMA_DIR/<application_name>/pid
LogDir=/OPTIMA_DIR/<application_name>/log
DirIncomplete=/OPTIMA_DIR/<application_name>/incomplete

[MAIN]
LogGranularity=3
LogLevel=1
StandAlone=0
InterfaceID=001
ProgramID=900
InstanceID=050
InputFileMask=*.log


[REPORTS]
Number=5
Report1=RNC_STATS
Report2=CELLRRCRABCONNSTATS
Report3=CELLTRANSCODES
Report4=CELLSHOSTATS
Report5=CPSTATS

[RNC_STATS]
Types=rnc_paging1UraUtran,rnc_dhtAllocAtt
KeyFields=nEUserName,ElementFromFileName,fileFormatVersion,senderName,sen
derTypePadded,senderType,vendorName,collectionBeginTime,measFileFooter,me
asTimeStamp,granularityPeriod,measObjInstId,measObjInstIdPadded,measObjIn
stIdSenderName,suspectFlag
ReportActive=1
ExcludeFields=DUMP

[CELLRRCRABCONNSTATS]
Types=rrcEstabAtt,pchUsageRate
KeyFields=nEUserName,ElementFromFileName,fileFormatVersion,senderName,sen
derTypePadded,senderType,vendorName,collectionBeginTime,measFileFooter,me
asTimeStamp,granularityPeriod,measObjInstId,measObjInstIdPadded,measObjIn
stIdSenderName,suspectFlag
ReportActive=1
ExcludeFields=DUMP

[CELLTRANSCODES]
Types=transFromCellDchAtt,rabsPerQosClass
KeyFields=nEUserName,ElementFromFileName,fileFormatVersion,senderName,sen
derTypePadded,senderType,vendorName,collectionBeginTime,measFileFooter,me
asTimeStamp,granularityPeriod,measObjInstId,measObjInstIdPadded,measObjIn
stIdSenderName,suspectFlag
ReportActive=1
ExcludeFields=DUMP

[CELLSHOSTATS]
Types=hhoAllOutAtt,hhoAllOutAtt
KeyFields=nEUserName,ElementFromFileName,fileFormatVersion,senderName,sen
derTypePadded,senderType,vendorName,collectionBeginTime,measFileFooter,me
asTimeStamp,granularityPeriod,measObjInstId,measObjInstIdPadded,measObjIn
stIdSenderName,suspectFlag
ReportActive=1
ExcludeFields=DUMP

[CPSTATS]
Types=rncUsageRatio,rncUsageRatio
KeyFields=nEUserName,ElementFromFileName,fileFormatVersion,senderName,sen
derTypePadded,senderType,vendorName,collectionBeginTime,measFileFooter,me
asTimeStamp,granularityPeriod,measObjInstId,measObjInstIdPadded,measObjIn
stIdSenderName,suspectFlag
ReportActive=1
ExcludeFields=DUMP

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows


Configuring the Multiple Input File Combiner


The Multiple Input File Combiner is configured using a configuration (INI) file. Configuration
changes are made by editing the parameters in the configuration (INI) file with a suitable text editor.
The Multiple Input File Combiner configuration (INI) file is divided into different sections.

The following table describes the parameters in the configuration (INI) file:

Parameter | Description | Default | Required?
BACKUP_DIR | The directory to which the input files are backed up. This uses the same subdirectory structure as IN_DIR. | - | Yes
CASESENSITIVE | Indicates whether case-sensitivity will be used for matching columns/validation (1) or not (0). | 0 | No
DATEFORMAT | The date format to use to match the date part of the filename, in the form YYYYMMDD. This is converted into a series of regular expressions. For more information, see Converting DATEFORMAT to Regular Expressions on page 204. | - | Yes
DELIMITER | The delimiter used between fields in the input file. This is most commonly a comma. | , | No
DO_BACKUP | Indicates whether a copy of the input file will be copied to the backup directory (1) or not (0). If you do not choose to back up, the input file is deleted after it has been processed. | 0 | No
DUPLICATED_KEY_MODE | 0 - If duplicated keys are found, the input file will be moved to the error folder. 1 - When a duplicated key is found in an input file, only the first instance of the duplicated key will be used in the output. 2 - When a duplicated key is found in an input file, only the last instance of the duplicated key will be used in the output. | 0 | Yes
ERROR_DIR | The directory to which files with errors are sent. | - | Yes
EXCLUDEHEADERS | The headers from any input report file that are to be removed from the output file. These should appear in the INI file as a comma-separated string of exclude header columns, which the File Combiner divides into a list of strings. | - | No
FILEFORMAT | A regular expression to determine the combined name from the input file name. | - | Yes
FOLDERFILELIMIT | The maximum number of output files that can be created in each output (sub) folder. This must be in the range of 100-100,000 for Windows, or 100-500,000 on Sun/UNIX, otherwise the application will not run. Warning: Depending on the number of files that you are processing, the lower the file limit, the more output sub-folders will be created. This can have a significant impact on performance, so if you do need to change the default, ensure that you do not set the number too low. | 10000 | No
IN_DIR | The directory in which the input files are found. If SINGLE_DIRECTORY=1 then input files are to be found in this directory, otherwise input files are expected in subdirectories based on the name of the REPORTS. | - | Yes
INPUTFILENAMEASCOLUMN | If this is set to 1, it adds an INPUT_FILE_NAME_CMB column (with its data values underneath) to the output file. By default (0), this is not done. | 0 | Yes
KEEPUNIQUEHEADERS | The headers that are in more than one input report file that will have the REPORTNAME prefixed to them to keep them unique after combination. These should appear in the INI file as a comma-separated string of keep unique header columns, which the File Combiner divides into a list of strings. | - | No
KEYHEADERS | The headers in all input report files that are to be used as a 'primary' key to join 'rows' of files. These should appear in the INI file as a comma-separated string of key header columns, which the File Combiner divides into a list of strings. | - | No
LOG_DIR | The directory for the log files created by the File Combiner application. Log files are named according to the following format: LOG_DIR + Path Separator + Program Name + _ + PRID + _ + current local date + .log | - | Yes
LIST_DIR | Not currently used. A list of filegroups that have already been processed. | - | Yes, but not used at this time
LOG_SEVERITY | Sets the level of information required in the log file. The available options are: 1 - Debug; 2 - Information; 3 - Warning; 4 - Minor; 5 - Major; 6 - Critical. | 1 | No
MAX_DELAY_ON_FILE | The number of minutes to wait for MIN_FILES_TO_COMBINE to be met before combining anyway. | 1 | No
MAX_PROCESSES | If you want to improve performance, specify the maximum number of additional sub-processes that you want to run. Tip: Typically one sub-process is sufficient per output file group. | 0 | No
MIN_FILES_TO_COMBINE | The number of files (or elements of reports) which must be present before files are combined. For example, if you have 5 reports but only 4 of them consistently get files and MIN_FILES_TO_COMBINE=4, then the 4 files will be combined even if the fifth one never arrives. | NUM_REPORTS | No
NAME | The value given to the output measurement or the directory in which the resulting combined files are placed. | - | Yes
NUM_ALIASES | The number of aliases. Each alias is an alternative name (or series of alternative names) for a particular column. Each alias can be specified in an optional section called [ALIASES] - for more information see below. When the Combiner finds the alias, it replaces it with the required name that has been specified. | - | No
NUM_REPORTS | The number of measurement groups to be combined. The report names themselves should correspond either to the sub-directory or to a component of the filename. This is so that the expected filename is matched to the group of files to be combined. | - | Yes
OUT_DIR | The directory to which the files are output after combining; that is, the directory under which the NAME directory is created and to which the combined files are output. | - | Yes
PRID | The automatically-assigned PRID uniquely identifies each instance of the application. It is composed of a 9-character identifier, made up of Interface ID, Program ID and Instance ID. The Interface ID is made up of 3 numbers but the Program ID and Instance ID are both made up of 3 characters, which can be a combination of numbers and uppercase letters. For more information, see About PRIDs on page 29. | - | Yes
PRID_DIR | The directory where the PRID files are kept. PRID files are named according to the following format: PRID_DIR + Path Separator + Program Name + _ + PRID + .pid | - | Yes
REMOVEFROMFILENAME | This parameter takes comma-separated regular expressions to remove sections from the value matched by FILEFORMAT, so that more filenames can be grouped together as a filegroup. This is useful if there are sections in the filename that cannot be excluded by one regular expression. | - | Yes
REPORTn={REPORTNAME} | The name of the measurement groups. | - | Yes
SAFETYPERIOD | How old a file must be (in minutes) before it is processed. | 1 | No
SINGLE_DIRECTORY | Distinguishes between reports by filename rather than by the sub-directory they are in. 0 = Off; 1 = On. | 0 | No
STALEAFTER | Not currently used. The number of days to look back. Previously used in correlation with LIST_DIR. | - | Yes, but not used at this time
USEFOLDERFILELIMIT | Indicates whether the folder file limit should be used (1) or not (0). | 0 | No
VALIDATE_COLUMNS | Indicates whether the combiner is also configured to perform the data validation on the output file (1) or not (0). | 0 | No
VERBOSE | 0 - Run silently; no log messages are displayed on the screen. 1 - Display log messages on the screen. | 0 | No

For more information, see Example Multiple Input File Combiner Configuration (INI) File on page
205.

If you have chosen to perform validation on the output file, then the ini file should also contain a
[VALIDATION] section with the following entries:


Parameter | Description
Column n | The name of each column (new counter), where n is the column number.
ColumnNumber | The total number of columns (new counters) in the report.
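
For example, a [VALIDATION] section defining three output columns might look like this (the
column names are illustrative):

[VALIDATION]
ColumnNumber=3
Column1=DATETIME
Column2=OBJECTID
Column3=HOVERCNT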

If you have chosen to use aliases when combining, then the ini file should contain a [ALIASES]
section, with a row for each column name for which you want to specify aliases. Each row should
follow this format:

ALIASn=columnname,aliasname1,aliasname2,...,aliasnamen

Where:
• columnname is the column name with which you want to replace any aliases.
• aliasname1, 2 and so on are the aliases for the column name. The Combiner will find and
replace these with the specified column name.
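
For instance, to replace two alternative spellings with a single OBJECTID column (all names are
illustrative), you would set NUM_ALIASES=1 in the main section and add:

[ALIASES]
ALIAS1=OBJECTID,OBJECT_ID,OBJ_ID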

Converting DATEFORMAT to Regular Expressions


The Multiple Input File Combiner converts the DATEFORMAT pattern string in the INI file into a
regular expression.

This table describes the rules that it uses:

Item | Rules
Month | Replaces pattern MM with pattern [0-9]{2}. Replaces pattern M[tT][hH]([0-9]+) with pattern \w{[0-9]+}.
Year | Replaces pattern YYYY with pattern [0-9]{4}. Replaces pattern YY with pattern [0-9]{2}.
Day | Replaces pattern DD with pattern [0-9]{2}. Replaces pattern D with pattern \s?[0-9]{1,2}.
Hour | Replaces pattern HH24 with pattern [0-9]{2}. Replaces pattern HH with pattern [0-9]{2}. Replaces pattern [AP]M with pattern [AP]M.
Minutes | Replaces pattern MI with pattern [0-9]{2}.
Seconds | Replaces pattern SS with pattern [0-9]{2}.
Date Separators | Replaces \ or / with /.

To see an example of how the DATEFORMAT is processed, see Example of Converting
DATEFORMAT to Regular Expressions on page 204.

Example of Converting DATEFORMAT to Regular Expressions


In this example, the parser has generated a CSV file, which is named based on the following
format:

A20051023_1100_20051023_1400_xmlpd_total_number_of_successful_account_che
cks_20060329111650.csv

This means that the DATEFORMAT was set to AYYYYMMDD in the INI file.


Therefore, based on the rules of the Perl File Combiner, the DATEFORMAT is converted as
follows:

1. The month check converts AYYYYMMDD to AYYYY[0-9]{2}DD

2. The year check converts this to A[0-9]{4}[0-9]{2}DD

3. The day check converts this to A[0-9]{4}[0-9]{2}[0-9]{2}

4. The format is unchanged by the remaining checks.
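
These substitutions can be sketched in Python as follows. This is only an approximation of the rules
above (the month-name and AM/PM rules from the table are omitted), not the combiner's actual Perl
code:

import re

def dateformat_to_regex(pattern):
    # Apply the replacement rules in the documented order:
    # month, year, day, hour, minutes, seconds, separators.
    rules = [
        ("MM", "[0-9]{2}"),
        ("YYYY", "[0-9]{4}"),
        ("YY", "[0-9]{2}"),
        ("DD", "[0-9]{2}"),
        ("HH24", "[0-9]{2}"),
        ("HH", "[0-9]{2}"),
        ("MI", "[0-9]{2}"),
        ("SS", "[0-9]{2}"),
        (r"[\\/]", "/"),
    ]
    for old, new in rules:
        pattern = re.sub(old, new, pattern)
    return pattern

print(dateformat_to_regex("AYYYYMMDD"))   # A[0-9]{4}[0-9]{2}[0-9]{2}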

How File Groups Are Created


The Multiple Input File Combiner creates file groups according to the following process:

1. The input filename is compared with the DATEFORMAT regular expression, to ensure that
the DATEFORMAT regular expression matches part of the filename.

If it does not match, then the input file is ignored.

2. The input filename is compared with the FILEFORMAT regular expression, to find which file
group the input file belongs to.

If the filename does not match any part of the FILEFORMAT regular expression, it does not
belong to any current file group and is ignored.

3. After the file group has been found, the File Combiner removes the list of
REMOVEFROMFILENAME patterns from the file group name.

For example, in a scenario where:


• The input filename is
A20051023_1100_20051023_1400_xmlpd_total_number_of_successful_accou
nt_checks_20060329111650.csv
• The FILEFORMAT regular expression is: A[0-9]{8}_[0-9]{4}_[0-9]{8}_[0-
9]{4}_xmlpd

The File Combiner will identify the file group as:

A20051023_1100_20051023_1400_xmlpd_
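
Continuing this example, if the REMOVEFROMFILENAME list contained the pattern _xmlpd (a
hypothetical value, for illustration only), the File Combiner would then strip that pattern from the
group name, leaving A20051023_1100_20051023_1400_ as the final file group name.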

Example Multiple Input File Combiner Configuration (INI) File


Here is an example Multiple Input File Combiner configuration file:

NAME=CELLGPRS_CMB
PRID=001411028
IN_DIR=/OPTIMA_DIR/<application_name>/in
OUT_DIR=/OPTIMA_DIR/<application_name>/out
ERROR_DIR=/OPTIMA_DIR/<application_name>/error
BACKUP_DIR=/OPTIMA_DIR/<application_name>/backup
LOG_DIR=/OPTIMA_DIR/<application_name>/log
LIST_DIR=/OPTIMA_DIR/<application_name>/tmp
PRID_DIR=/OPTIMA_DIR/<application_name>/pid
DO_BACKUP=1
REMOVEFROMFILENAME=GPR[S]*[0-9]
FILEFORMAT=[A-Z]+[0-9]-[A-Z]{2}[0-9]{6}
DATEFORMAT=MTH2DD
STALEAFTER=360
KEYHEADERS=SDATE,STARTTIME,OBJECTID
EXCLUDEHEADERS=INPUT_FILE_NAME,EXID2,EXID3,EXID4,ELEMENT
SAFETYPERIOD=5

SINGLE_DIRECTORY=0
VERBOSE=1
LOG_SEVERITY=0
NUM_REPORTS=3
UseFolderFileLimit=0
FolderFileLimit=10000
VALIDATE_COLUMNS=0
CASESENSITIVE=1

[REPORTS]
REPORT1=CELLGPRS
REPORT2=TRAFDLGPRS
REPORT3=TRAFULGPRS

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows
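
For example, on a UNIX mediation server where the OPTIMA home directory resolves to
/opt/optima (a hypothetical path) and the application is called CELLGPRS_CMB, the first directory
entry would read:

IN_DIR=/opt/optima/CELLGPRS_CMB/in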

Maintaining File Combiners


Usually the File Combiners should not need any special maintenance. During installation, the
OPTIMA Directory Maintenance application will be configured to maintain the backup and log
directories automatically.

However, TEOCO recommends the following basic maintenance checks are carried out for the File
Combiners:

Check The                      When    Why

Input directory for a backlog  Weekly  Files older than the scheduling interval should not be in
of files                               the input directory. A backlog indicates a problem with the
                                       program.
Error directory for files      Weekly  Files should not be rejected. If there are files in the error
                                       directory, analyze them to identify why they have been
                                       rejected.
Log messages for error         Weekly  In particular, any Warning, Minor, Major and Critical
messages                               messages should be investigated.

Checking for Error Files


Files categorised as error files by the File Combiners are stored in the directory as defined in the
configuration (INI) file.

The log file records information about any error files found in this directory. For more information
about the log file, see Checking a Log File Message on page 207.


Checking a Log File Message


The log file for each application is stored in the directory defined in the configuration (INI) file for
that application.

A new log file is created every day. The information level required in the log file is defined in the
General Settings dialog box and will be one of the following:
• Debug
• Information
• Warning
• Minor
• Major
• Critical

These levels help the user to restrict low level severity logging if required. For example, if Minor is
selected then only Minor, Major and Critical logging will occur.

Note: Utilities are provided for searching and filtering log messages and also for loading the
messages into the database. For more information, see Checking Log Files on page 41.

Stopping File Combiners


If the File Combiner is scheduled, it terminates only when all files in the input directory have been
processed.

However, if the File Combiner is run continuously, the input directory is monitored continuously; in
this case, the process can be terminated at any time.

Checking the File Combiner Version


If you need to contact TEOCO support regarding any problems with the File Combiners, you must
provide the version details.

You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

For This File Combiner  For This Operating System  Type the Following at the Command Prompt

Multiple Input          Windows                    C:\opx_CMB_GEN_903.exe perl.ini -v
                        UNIX                       $./opx_CMB_GEN_903 perl.ini -v
Single Input            Windows                    C:\opx_CMB_GEN_900.exe opx_CMB_GEN_900.ini -v
                        UNIX                       $./opx_CMB_GEN_900 opx_CMB_GEN_900.ini -v

For more information about obtaining version details, see About Versioning on page 33.


Checking the Application is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.

Troubleshooting File Combiners


The following troubleshooting tips apply to the File Combiners:

Symptom: Cannot save configuration (INI) file.
Possible Causes: The user has insufficient privileges on the configuration (INI) file or directory; the
file is read-only or is being used by another application.
Solutions: Enable permissions. Make the file writable. Close the File Combiner to release the
configuration (INI) file.

Symptom: New configuration settings are not being used by the application.
Possible Causes: The settings are not saved to the configuration (INI) file; the file was created in
the wrong location; the File Combiner has not restarted to pick up the new settings.
Solutions: Check the settings and the location of the file. Restart the File Combiner.

Symptom: Application not processing input files.
Possible Causes: The application has not been scheduled; the crontab entry was removed; the
application has crashed and the Process Monitor is not configured; incorrect configuration settings.
Solutions: Use the Process Monitor to check the last run status. Check the crontab settings (Unix
only). Check the configuration settings. Check the process list and monitor file: if there is a monitor
file and no corresponding process with that PID, then remove the monitor file.
Note: The Process Monitor will do this automatically.

Symptom: Application exits immediately.
Possible Causes: Another instance is running; the configuration (INI) file is invalid or corrupt.
Solution: Use the Process Monitor to check the instances running.

Symptom: Files in the error directory.
Possible Causes: Incorrect configuration settings; invalid input files.
Solutions: Check the log file for more information on the problems. Check the error file format.

File Combiner Message Log Codes


This section describes the cpp message log codes common to the File Combiner:

Message Code  Description                                                         Severity

9001          Failed to open combiner log file - <combinerFileName>.              WARNING
9002          Started reading combiner log file - <combinerFileName>.             DEBUG
9003          Blank line found in combiner log file.                              DEBUG
9004          Read CSV file name from combiner log - <fileName>.                  DEBUG
9005          CSV file moved to no matching report folder - <fileName>.           INFORMATION
9006          CSV could not be moved to no matching report folder - <fileName>.   WARNING
9007          Finish reading combiner log file - <combinerFileName>.              DEBUG
9008          Start processing report - <reportName>.                             DEBUG
9009          No processing needed for this report, no matching CSV files.        DEBUG
9010          Finish processing report - <reportName>.                            DEBUG
9020          CSV file moved to folder - <filename>.                              INFORMATION
9021          CSV could not be moved to <filename>.                               WARNING
9065          Key Column <keyField> from report <reportName> not found in         WARNING
              header of CSV file - <fileName>.
9101          MoveFiles option for report <reportName> could not be used          WARNING
              because the number of report types is not the same.
9102          MoveFiles option for report <reportName> could not be used          WARNING
              because more than one CSV file matches the report type.
9121          Could not open CSV file <fileName> for reading data section from    WARNING
              file when processing report <reportName>.
9122          Number of columns in data row does not match number of header      WARNING
              columns for CSV file <fileName> processing report <reportName>.
              Data line will be ignored.
9123          Could not open CSV file <fileName> for reading header section of    WARNING
              file when processing report <reportName>.
9130          The generated combined file for report <reportName> had rows        WARNING
              with duplicate key field value, please check your keys.
9132          CreateReportFileForDuplicateKeyRows=0 - No additional report        INFORMATION
              file created for rows with duplicate key field value.
9140          OutputCombineFileBeforeMerge=1 - Creating before merge              INFORMATION
              output file for report [<reportName>].

This section describes the Perl message log codes common to the File Combiner:

Error Code  Description                                                           Severity

1           Started Program. <opxProgramName> <opxVersion>.                       INFORMATION
2           Ending Program: Complete.                                             DEBUG
3           Time: <timerun> Files Processed: <filesInInput> Files Output:         INFORMATION
            <filesOutput>.
4           Ending Program: Signal interrupt.                                     INFORMATION
5           Ending Program: processDir <reportname>: Signal interrupt.            INFORMATION
6           <programName> already running.                                        CRITICAL
7           Ending Program: Signal interrupt.                                     INFORMATION
8           Ending Program: Signal interrupt.                                     INFORMATION
10          Ending Program: DATEFORMAT is not defined in INI file.                CRITICAL
10          Ending Program: KEYHEADERS is not defined in INI file.                CRITICAL
50          Error trying to create directory: <Dir>.                              MAJOR
100         processSingleFolder(<input_dir>).                                     DEBUG
            processDir(<reportname>).                                             DEBUG
101         processDir(<reportname>): Filename (<filename>) does not match        WARNING
            Format (<fileformat>).
102         processDir(<reportname>): Filename (<filename>) does not match        WARNING
            DateFormat (<dateformat>).
103         processDir(<reportname>): Filename (<filename>) processed as          INFORMATION
            group <fileGroupName>.
104         Group (<groupName>) ignored as it is not in the safetyperiod          INFORMATION
            (<safetyperiod>).
105         Group (<groupName>) ignored as it does not have                       INFORMATION
            <MinNumberOfFiles> Files and the oldest file is not older than
            <maxDelay>.
106         Group (<groupName>) has more files than                               INFORMATION
            MIN_FILES_TO_COMBINE.
107         Group (<filegroup>) must be processed as a file is older than         INFORMATION
            <maxDelay>.
108         Group (<filegroup>) will continue processing.                         INFORMATION
109         Group (<filegroup>) will NOT continue processing.                     INFORMATION
110         Processing <fullFilename>.                                            INFORMATION
111         Deleting <processedFileName>.                                         DEBUG
112         processDir(<reportname>): <directoryName>.                            DEBUG
114         processDir(<reportname>): Directory <directoryName> - does not        DEBUG
            exist.
115         processDir(<reportname>): Filename (<filename>) not identified by     INFORMATION
            REPORTS by the filename.
116         processDir(<reportname>): Files found in directory: <fileCountIn>.    DEBUG
117         processDir(<reportname>): Files matching filemask: <fileCountOut>.    DEBUG
118         processDir(<reportname>): Percentage processing:                      DEBUG
            <PercentageValue>.
300         analyzeFile(<reportname>, <filename>).                                DEBUG
301         analyzeFile(<reportname>, <filename>): Cannot open <filename>.        MAJOR
302         analyzeFile(<reportname>, <filename>): Missing KEYHEADER:             MAJOR
            <keyValue>.
303         analyzeFile(<reportname>, <filename>): Index Key: <indexKey>.         DEBUG
304         analyzeFile(<reportname>, <filename>): Duplicate Index Key:           MAJOR
            <indexKey>.
305         analyzeFile(<reportname>, <filename>): Empty File.                    MAJOR
306         analyzeFile(<reportname>, <filename>): File is Header only.           MAJOR
308         File <filename>: Duplicate Header/Alias <header> (already seen        MAJOR
            <seenHeader>{<useHeader>}).
400         outputFile(<filegroup>).                                              DEBUG
401         outputFile(<filegroup>): TempFile: <tmpFileName>.                     DEBUG
402         outputFile(<filegroup>): Tempfile cannot be written: <tmpFileName>.   MAJOR
404         outputFile(<filegroup>): Output File: <outFile>.                      DEBUG
405         outputFile(<filegroup>): File move failed: <fileName>.                CRITICAL
406         outputFile(<filegroup>): Moved <tmpFileName> to <outFileName>.        INFORMATION
407         outputFile(<filegroup>): File group contains no combine data.         DEBUG
601         processFile(<directory>, <filename>, <reportname>).                   DEBUG
1000        moveStraight(<reportname>, <fullfilename>).                           DEBUG
1001        moveStraight(<reportname>, <fullfilename>): File move failed:         CRITICAL
            <fileName>.
1002        moveStraight(<reportname>, <fullfilename>): Moved <fullfilename>      INFORMATION
            to <destFileName>.
1100        moveFileToErrorOrBackup(<fullfilename>).                              DEBUG
1101        moveFileToErrorOrBackup(<fullfilename>): Backed up <fullfilename>     INFORMATION
            to <destFileName>.
1102        moveFileToErrorOrBackup(<fullfilename>): File backup failed:          MINOR
            <fileName>.
1300        returnFileGroup(<filename>).                                          DEBUG
1301        returnFileGroup(<filename>): Processing FILE GROUP - <filegroup>.     DEBUG
2001        moveFileToErrorOrBackup(<fullfilename>, <destFileName>): File         CRITICAL
            move failed: <fileName>.
2002        moveFileToErrorOrBackup(<fullfilename>, <destFileName>): Moved        INFORMATION
            <fullfilename> to <destFileName>.


7 About Loading in OPTIMA

The ETL (Extract/Transform/Load) Loader Package primarily loads transformed performance data
into the database.

The Loader Package is configured via a Windows-based configuration utility with database
connectivity. Multiple loaders can be configured and the necessary configuration information is
written to both a configuration file and the loader configuration tables stored within the database.

This diagram shows the basic loader process using an external table:

Loader Process Using External Table

In this case the ETL Loader reads the current Loader report (configuration) information and then
sends the data to a temporary Oracle external table. The data in this temporary table is then
mapped to the destination table as specified in the loader configuration. The mapping of the raw
data to the temporary table and the mapping of the temporary table to the destination table are
defined using the ETL Loader Configuration window, identified in the diagram as Loader GUI.

In normal operation, the Loader requires:


• A Loader report to be configured.
• Data files in its input folder.

The loader can also be configured to use direct path loading rather than an external table. For more
information see About Direct Path Loading on page 251.

The Loader is invoked manually on the command line or automatically via a scheduler program,
such as the Unix Crontab functionality.

As well as loading data, the Loader also contains validation options, which enable you to check the
CSV files created by the parser, ensuring column order, defaulting missing data values and splitting
files if required.


Important: If you use the validation options, you do not need to use the separate Data Validation
application. However, for more useful information on the data validation process, see About Data
Validation on page 181.

Loader Quick Start Section


This section is intended to indicate the steps you must take to get the OPTIMA ETL (Extract,
Transform, Load) Loader running, using an external table (rather than direct path loading) for
demonstration purposes. It covers the essential parameters that must be configured. Where more
parameters exist but are not mentioned, the default settings will suffice. For more information on
the use of all the parameters that determine the behaviour of the OPTIMA Loader Application, see
the remainder of this chapter.

Prerequisites
To run the OPTIMA ETL Loader Application you will need to have:
• Created an OPTIMA database
• Run the OPTIMA Backend Installer

Create Raw Table


Create a correctly partitioned raw table. You can do this manually or using the OPTIMA Installation
Tool.

If you use the OPTIMA Installation Tool you do not need to add grants as this is done automatically.
For more information see the OPTIMA Installation Tool User Reference Guide.

Add Grants

Make these grants to the AIRCOM user:
• SELECT, INSERT and UPDATE on the destination table

To add these, in TOAD, SQL*Plus or a similar editor, type:

GRANT SELECT, INSERT, UPDATE ON <SCHEMA>.<DST_TABLE> TO AIRCOM;

Configure a New Loader Report


To create a new Loader Report and configure its essential parameters:

1. From the Start menu, select All Programs, Aircom International, AIRCOM OPTIMA
Backend 8.0, Loader.

2. In the Connect to Optima database dialog box, type the required log on details and click
OK.

3. In the Machine Filter dialog box, select the machine on which the ETL Loader client will be
run and click OK.

4. In the ETL Loader Configuration window, click Add. The Configure Report window
appears.


5. On the General tab, type a report name.

6. On the Files and Directories tab, specify:


o The file mask for the files in the input directory, for example *.csv
o The paths to these directories: Input, Input temporary, Log file, Error, Backup, Monitor
file, INI file.
o The Field delimiter
o The platform, based on the format of the Loader input files
o Debug as the severity level (you can change this to Information once the Loader is
working correctly)

7. On the DB and Processing tab, complete the DB connectivity details, selecting External
Table as the Staging Option.

8. On the Table Settings tab, specify:


o The External table directory for the data File
o The External table directory for the database
o The Schema name for the Destination table
o The Destination table name

9. Click the Configure aliases button. The Configure loader file mappings dialog box
appears.

10. Click the Load headers from file button and browse to the file from which the first row will
be used to configure the header columns.

11. In the Configure loader file mappings dialog box, right-click and from the menu that
appears select Auto Create Aliases.

12. In the Configure loader file mappings dialog box, right-click and from the menu that
appears select default types to number.

13. Manually adjust individual entries in the Type and Data Format columns as necessary and
then click OK.

14. On the Table Settings tab of the Configure Report window, click the Configure
mappings button.

15. In the Configure loader table mappings dialog box, click the button that loads column data
for the destination table from the database.

16. Click the button that matches aliases to the destination table columns (where a match exists).

17. Click OK.

18. In the Configure Report window, click Apply then OK.

Run the Loader


An ini file is now available in the directory specified on the Files and Directories tab of the ETL
Loader Configuration window.

To run the Loader:

1. Move the input files into the input directory.

2. Run the loader as opx_LOD_GEN_110.exe opx_LOD_GEN_110_00011000M.ini

where opx_LOD_GEN_110_00011000M.ini is the name of the INI file.



Important: In a Unix environment, omit .exe from the above command line.

Once the Loader has finished, check the log file and the LOADER_LOG table.

Installing the Loader


Before you can use the Loader and the Loader Configuration window, install the following files
(from the appropriate folder depending on your Oracle version) to the backend binary directory.

For the External Table Loader client:


• opx_LOD_GEN_110 (Unix)
• opx_LOD_GEN_110.exe (Windows)

For the Direct Path Loader client:


• opx_LOD_GEN_112 (Unix)
• opx_LOD_GEN_112.exe (Windows)

For the Loader Configuration window (which can be used to configure both the External Table
Loader and the Direct Path Loader):
• opx_LOD_GEN_110_GUI.exe (Windows)

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

You must also ensure that you have installed or upgraded your AIRCOM OPTIMA database and all
of the required packages.

Starting the Loader


The Loader is a console application. Before you can start the Loader, you must ensure you have a
valid configuration file and an OPTIMA database correctly configured and accessible.

To run the Loader, type in the executable file name and the configuration (INI) file name into the
command prompt. For example:

$OPTDIR/bin/opxLoad opxLoad_000000001.ini

Starting the ETL GUI


To start the ETL GUI:

1. Double-click opx_LOD_GEN_110_GUI.exe.

The Connect to OPTIMA database dialog box appears.

2. Type a username and password. You can see a list of recently used usernames by clicking
the Browse button.

3. From the list, select the database to which the Loader will send the data.

Note: The database name must match the database alias on the local machine for the
remote database, which is normally configured in the tnsnames.ora file.
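
For illustration, this is a minimal sketch of a tnsnames.ora entry; the alias, host, port and service
name shown here are hypothetical:

OPTIMA_DB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbserver.example.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = OPTIMA))
  )

In this case, OPTIMA_DB is the alias that would appear in the database list.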

4. Click OK.

The Machine Filter dialog box appears.

Selecting the Loader Machine


Once you have connected to the appropriate database, you can select the machine on which the
Loader will be run by using the Machine Filter dialog box. This picture shows an example:

Machine Filter dialog box

To select the machine:

1. From the Machine list, select the name of the machine on which the Loader will be run.

2. Click OK.


About the ETL Loader Configuration Window


The ETL Loader Configuration window shows a list of reports, which are used to configure the
Loader. This picture shows an example:

ETL Loader Configuration window

This table describes the menu options:

Menu Option Description

File Exit Closes the ETL Loader Configuration window.


Report Add Adds a new report.
Modify Edits the current report.
Delete Deletes the current report.
Machine Select Machine Opens the Machine Filter dialog box which enables you to select the
name of the machine on which the Loader will be run.

View Refresh Refreshes the contents of the window.


Help About Provides useful information about the ETL Loader, for example, the
version number.

This table describes the information that is shown for each report:

Column       Description

PRID         The automatically-assigned PRID uniquely identifies each instance of the application.
             It is composed of a 9-character identifier, made up of Interface ID, Program ID and
             Instance ID. The Interface ID is made up of 3 numbers, but the Program ID and
             Instance ID are both made up of 3 characters, which can be a combination of
             numbers and uppercase letters. For more information, see About PRIDs on page 29.
Report Name  The user-assigned name for the report.
Table Name   The name of the table to be loaded.
Load Type    Internal use only.
Log Severity The severity level of the log.

From the ETL Loader Configuration window, you can add, modify or delete loader configuration
reports.


Configuring Reports
When you select to add or modify a loader configuration in the ETL Loader Configuration window,
the Configure Report window appears. This window has the following tabs, which you can use to
configure the loader:
• General
• Files and Directories
• DB and Processing
• Table Settings
• Log Messages
• Validator Options

Important: When configuring reports in the Loader, it is recommended that you also read Tuning
the Loader on page 237, which suggests how to get the best results from it.

Defining the General Options for the Loader


The General tab allows you to set and modify the report name, how often new files are looked for
and the name of the executable file. This picture shows an example:

General tab


This table describes the information to complete the General tab:

Field                        Description

Report Name                  The name of the report. If you are modifying a report, you can
                             change the name here.
Time Interval                The polling time, in seconds. This is how often the Loader will
                             check the input file.
Loader EXE Name (read-only)  The name of the Loader binary executable file.
PRID (read-only)             The automatically-assigned PRID uniquely identifies each instance
                             of the application. It is composed of a 9-character identifier, made
                             up of Interface ID, Program ID and Instance ID. The Interface ID is
                             made up of 3 numbers, but the Program ID and Instance ID are
                             both made up of 3 characters, which can be a combination of
                             numbers and uppercase letters. For more information, see About
                             PRIDs on page 29.

Defining the Files and Directories for the Loader


The Files and Directories tab allows you to specify the file settings and directories used in the
Loader process. This picture shows an example:

Files and Directories tab

This table describes the information to complete the Files and Directories tab:

In This Field                         Do This

Input File Mask                       Type the file mask that the Loader will match when selecting
                                      files to process.
Input Directory                       Type the location of the input directory from which the raw
                                      data files will be processed.
Input Temp Directory                  Type the location of the temporary directory.
Field Delimiter                       Type the value you want to configure the Loader to use as a
                                      delimiter. If you want tab or space to be used as a delimiter,
                                      select the appropriate checkbox instead.
Tab                                   Select this checkbox if you want to configure the Loader to
                                      use tab as a delimiter.
Space                                 Select this checkbox if you want to configure the Loader to
                                      use space as a delimiter.
UNIX Platform                         Select this radio button if the input files use a Unix-style end
                                      of line character (Hex = 0A).
Windows Platform                      Select this radio button if the input files use a Windows-style
                                      end of line character (Hex = 0D 0A).
Missing Field Values are Null         Select this checkbox if you want a null value to be used when
                                      a value is missing from a record in the Loader input file. If you
                                      do not select this checkbox and there are missing field values,
                                      then an error will occur.
Header Lines to Skip                  Select the number of lines from the top of the input file that
                                      you do not want to be loaded. For example, you can use this
                                      option to skip lines that contain headers or bad data.
                                      Important: When you are deciding how many lines to skip,
                                      consider whether you are going to be using the setting that
                                      causes the header line to be removed. For more information
                                      on removing the header line, see Defining the Validator
                                      Options for the Loader on page 232. If the header line is to be
                                      removed, you will have one less line to skip.
                                      The direct path loader requires the header line. For more
                                      information on using this setting for direct path loading, see
                                      Configuring for Direct Path Loading on page 252.
Input Threshold (BYTE)                Type the value of the input threshold in bytes. The Loader can
                                      load several CSV files at once by combining them into a single
                                      external file. The input threshold is the maximum size of the
                                      single external file.
Copy File to Backup on Successful     Select this checkbox if you want a copy of the input file to be
Load                                  stored in the backup directory when the Loader process is
                                      successful.
Copy File to Error Directory when     Select this checkbox if you want a copy of the input file to be
Load Unsuccessful                     stored in the error directory when the Loader process is
                                      unsuccessful.
Log File Directory                    Type the location of the log file directory.
Severity Level                        Select the severity level for logging information when
                                      processing this report.
Move File to Error Directory if Less  Set the minimum percentage of records to be loaded. If the
than % Successfully Processed         percentage of records loaded is less than this number, the
                                      input file will be moved to the error directory.
Error Directory                       Type the location of the error directory.
Backup Directory                      Type the location of the backup directory.
Monitor File Directory                Type the location of the directory where the Loader instance's
                                      PID will be stored.
INI File Directory                    Type the location of the directory where the initial configuration
                                      will be written.


Defining the Database and Processing Settings for the Loader


The DB and Processing tab allows you to determine how files are processed and loaded into the
database. This picture shows an example:

DB and Processing tab

To define the Database and Processing Settings:

1. Select the appropriate load and error logging options, depending on your requirements:

Load Option: Insert, with Error Logging On (also known as Insert Only with Error Logging Tables)
Description: This option uses the error log table to store the PK violations, but will not update the
raw table from the error table. It is slightly slower than Bulk Load if there is no PK violation, but if
there is, it will be faster than Bulk Load then Single Insert.
Recommendation: Recommended. This option is recommended when Combiners are not used
(and therefore no updates are required), and PK violations are expected in most files. In other
words, this option replaces single inserts for when bulk insert is expected to fail on most files.
Preferred Alternative: If the Bulk Insert will succeed in 50% or more of the concatenated files, then
use Bulk then Error Log Insert. If PK or other errors are expected in 50% or more of the
concatenated files, use Insert Only with Error Logging Tables.

Load Option: Insert, with Error Logging Off (also known as Bulk Load)
Description: This is the fastest method of loading data. However, if loading fails, then none of the
records in the external file will be loaded. In other words, this option requires 100% of the data to
be loaded each time, otherwise 0% is loaded.
Recommendation: Not recommended. Any PK violations will cause the file not to be loaded. If the
file is guaranteed to be clean (in other words, with no PK errors), even when concatenated, then
this will be the fastest method of inserting data.
Preferred Alternative: It is recommended to change loaders currently on Bulk Load onto Bulk then
Error Log Insert. This will still do the Bulk Load, but will fail over to Insert Only with Error Logging
Tables.

Load Option: Insert, with Error Logging set to Failover (also known as Bulk then Error Log Insert)
Description: This option performs a bulk insert without error logging (equivalent to Bulk Load).
However, if this fails, then it performs an Insert Only with Error Logging Tables.
Recommendation: Recommended. This is the best option to use when no updates are required (in
other words, no Combiner is being used) and PK violations will occur on less than 50% of the
concatenated files loaded.
Preferred Alternative: None. This is normally the best method for Insert.

Load Option: Upsert, with Error Logging On (also known as Bulk Upsert using Error Logging
Tables)
Description: This option will perform a bulk insert with error logging. It will then bulk update the
PK violations in the error log table into the raw table.
Recommendation: Recommended. This option is needed when updates are required (in other
words, a Combiner is being used), or when there are more than 500 columns in the raw table. It is
also the recommended load option whenever updates are required.
Preferred Alternative: None. This is normally the best method for Upsert.

Load Option: Upsert, with Error Logging Off (also known as Merge, previously called Upsert)
Description: This option will run a MERGE query to insert and update the records. It will fail if the
raw table has over 500 columns.
Recommendation: Not recommended. If updates are required (in other words, a Combiner is
being used), this option is an alternative to Bulk Upsert using Error Logging Tables for tables with
fewer than 500 columns. However, Bulk Upsert using Error Logging Tables is normally expected
to be the best method for upserting data.
Preferred Alternative: Bulk Upsert using Error Logging Tables is normally expected to perform
better than Merge.
Important: There are a number of other load options, which are not available on the
Loader GUI. You can only set these values by editing the database. For more information,
see About the Loader Options and Database Values on page 239.


2. Select the required staging option. This determines whether loading is performed with
external tables or with a direct path array using a Global Temporary Table. For more
information, see About Direct Path Loading on page 251.

3. Complete the remaining fields on the DB and Processing tab, as described in this table:

In This Field          Do This

APPEND hint            Use the APPEND hint option when loading the database with data that
                       will be appended to the end of a table. This could provide increased
                       performance under certain circumstances. Contact TEOCO Support for
                       more information, or consult your Oracle documentation.
PARALLEL hint          Use the PARALLEL hint option when loading the database with data that
                       will be divided between multiple threads. This could provide increased
                       performance under certain circumstances. Contact TEOCO Support for
                       more information, or consult your Oracle documentation.
Degree of Parallelism  Use the up and down buttons to set the degree of parallelism.
                       Important: If you set the degree of parallelism to a value greater than
                       one, a warning appears and the APPEND hint and PARALLEL hint
                       options are disabled.
DB Name                Type the name of the database, as defined on the Unix loader client
                       machine, containing the performance data table.
Username               Type the username for the loader configuration instance.
Password               Type the password for the loader configuration instance.
Loader Error Codes     Use this list to specify error codes that the Loader should ignore during
                       loading. If the Loader encounters any of the error codes in this list during
                       loading, it will ignore them and behave as if the loading process was
                       successful. For more information, see Adding and Removing Loader
                       Error Codes on page 225.
                       Important: You cannot use this option to ignore loading errors for the
                       'bulk load with error tables' or 'bulk then upsert with error tables'
                       methods.

Important: You should not select either of the hint options if there is more than one loader
report for a raw table, or you are using any of the error logging load options. For more
information, see Tuning the Loader on page 237.

Notes:
o You can also set the hint options directly in the database. For more information, see
About the Loader Options and Database Values on page 239.
o For the "Insert Only with Error Logging Tables" and "Bulk then Error Log Insert" logging
options, rows generating error codes which are in the "Loader Error Codes" list will be
counted as successful. This impacts the ErrThreshold parameter (for more information,
see About the Loader Configuration (INI) File Parameters on page 247).
For example, to avoid files being sent unnecessarily to the error directories when some
rows have duplicate data, add "ORA-00001" (unique constraint violation) to the list. A
message will be generated in the LOADER_LOG table:
"144: Counting <nnn> rows from error log table as successful."


Adding and Removing Loader Error Codes


To add an error code to the Loader Error Code list:

1. On the DB and Processing tab, click the Add Error Code button.

The Add Oracle Error Code dialog box appears.

2. Type the error code you want to add.

Note: You must type a valid Oracle Error Code otherwise an error message will be
displayed.

3. Click OK.

To remove an error code from the Loader Error Code list:

1. On the DB and Processing tab in the Loader Error Code list, select the error code that
you want to remove.

2. Click the Remove Error Code button.

Defining the Table Settings for the Loader


The Table Settings tab contains the settings for the temporary and permanent tables used within
the OPTIMA schema and for the location on the file system of the temporary external table and
logs. This picture shows an example:

Table Settings tab


This table describes the information to complete the Table Settings tab:

In This Field                    Do This

External Table Directory (Data   Provided that you have selected the External Table staging option
File)                            on the DB and Processing tab (if you have not, this field and the
                                 next two fields will not be enabled), type the location of the directory
                                 where the data file will be copied to. This will usually be a mapped
                                 drive pointing to the directory specified in the External Table
                                 Directory (DB) field.
                                 Note: If the loader client is running on the database server, then this
                                 location will be the same as the External Table Directory (DB)
                                 location.
External Table Directory (DB)    Type the location of the external table directory on the database
                                 server.
                                 Note: TEOCO recommends that this directory is always a local
                                 directory on the database server and not a mapped drive pointing to
                                 a directory on another machine.
Data File Name                   Type the name of the external table.
                                 Click the Configure Aliases button to map column positions to
                                 meaningful aliases in the temporary table on a one-to-one basis. For
                                 more information, see Configuring Loader File Mappings on
                                 page 227.
Log File Setting                 No Log File - Select this option to prevent the creation of a log file.
                                 Default Location - Select this option to create a log file in the default
                                 location.
                                 Specify Location - Select this option to create a log file in the
                                 specified location.
Log File Location                If you have selected Specify Location, type the location of the log
                                 file.
                                 Notes:
                                 • The log file location must be a local directory on the database
                                   server.
                                 • The log file is produced by Oracle when accessing external
                                   tables.
Bad File Setting                 No Bad File - Select this option to prevent the creation of a bad file
                                 log.
                                 Default Location - Select this option if you want to generate a bad
                                 lines file in the default location. The bad lines file is produced when
                                 the number of columns in a data row is different from the header
                                 row of that same file. The 'bad' lines are removed from the output
                                 (external) file and added to the bad lines file.
                                 Specify Location - Select this option to generate a bad lines file in a
                                 different specified location.
Bad File Location                If you have selected Specify Location, type the location of the bad
                                 file.
                                 Notes:
                                 • The bad file location must be a local directory on the database
                                   server.
                                 • The bad file is produced by Oracle when accessing external
                                   tables.
Schema Name                      Type the name of the schema in which the destination table can be
                                 found.
Destination Table Name           Type the name of the database target table.
                                 Click the Configure Mappings button to:
                                 • Define the one-to-one or counter expression mappings for raw
                                   data held in the external table to columns held in the destination
                                   table.
                                 • Define Threshold Crossing Alerts (TCAs), which are
                                   loader-specific alarms raised on the data as it is loaded into
                                   OPTIMA.
                                 For more information, see Configuring Loader Table Mappings on
                                 page 228.
Threshold Crossing Alerts        Select the Alarms enabled option if you want to enable any TCAs
                                 that you have defined during the mapping configuration. From the
                                 Alarms Severity drop-down list, choose the severity level for any
                                 TCAs that are raised.
SNMP                             If you have enabled TCAs - or want to use them in the future - select
                                 the Forward SNMP traps option to send TCA notifications by SNMP.
                                 Select the type of event and probable cause for the TCA from the
                                 available lists.

Configuring Loader File Mappings


When you click the Configure Aliases button on the Table Settings tab, the Configure Loader
File Mappings dialog box appears. This picture shows an example:

Configure Loader File Mappings dialog box

This table describes the information in the dialog box:

Column           Description

Header           The unique label given to the data position in the record. These are placeholder
                 strings, which are redefined to meaningful names by the Alias mapping.
Alias            Meaningful name given to the column position, which can be used in loadmap
                 expressions.
Type             Oracle data type.
Size             Oracle data size.
Date Format      If the data type is specified as Date, then the PL/SQL format string for the
                 expected date format is shown here.
Header Position  The position of the header in the input file.
To configure the aliases, click one of the buttons as described in the following table:

Click This Button       To Do This

Load Headers From File  Populate the header column from the first row of a data file. If the file
                        does not contain a header row, then the first row of data is used.
Auto Assign Alias       Map the loaded headers directly to the alias column. Use this where the
                        input file will provide meaningful headers.
Define Alias            Open the Assign Alias dialog box. Use this to modify an alias definition.
Import Alias From File  Read alias definitions from a file.
Export Alias From File  Write alias definitions to a file.

Configuring Loader Table Mappings


When you click the Configure Mappings button on the Table Settings tab, the Configure loader
table mappings dialog box appears. This picture shows an example:

Configure Loader Table Mappings dialog box

This table describes the information in the dialog box:

Column                 Description

Alias Name             Name of the defined aliases representing the data that is to be mapped.
Column Name            Name of the target column when loading to the database.
Data Type              The data type of the database column.
Position               The column position in the table.
Formula                The PL/SQL formula used to map aliased data to a column in the database.
Load                   States if the column in the database is to be loaded and under what
                       circumstances. Right-click a value in the Load column to access these
                       options, which are applied to the whole column:
                       • Replace All - Load to Load if not null - This changes all instances of "Yes"
                         in the Load column to "Yes if not null". "Yes if not null" means load the
                         value from the input file provided it is not null.
                       • Replace All - Load if not null to Load - This changes all instances of "Yes
                         if not null" in the Load column to "Yes". "Yes" means load the value from
                         the input file irrespective of whether it is null or not.
PK                     States if the column in the database is a primary key column.
Operator, Alarm Value  Define these values if you want a Threshold Crossing Alert (TCA) to monitor
                       the value of this column when it is loaded into the database, and signal if any
                       of the loaded values are incorrect.
                       TCAs are loader-specific alarms, which are raised as data is loaded into the
                       OPTIMA database using the Loader. They indicate a discrepancy between
                       the expected values according to the defined thresholds and the data loaded
                       into the database after any modification during the loading process.
                       A potential standard use may be to report on NULL values being inserted at
                       load for faster reporting. This needs evaluation against Data Quality Nullness
                       reports.
                       • Set the operator, such as =, >, < or BETWEEN.
                         Note: If you select BETWEEN or NOT BETWEEN as the operator, you
                         must enter two values separated by a comma, representing the limits of
                         the range.
                       • Set the value (used in conjunction with the operator) for which an alarm
                         will be raised.
                       In the example picture above, TCAs have been set to trigger if the loaded
                       value of COUNTER1 is greater than 10, and/or the value of COUNTER2 is
                       greater than 56.
                       Note: Like performance and system alarms, raised TCAs are written to the
                       ALARMS table, and can be forwarded using SNMP. For more information,
                       see Defining the Table Settings for the Loader on page 225.
                       These criteria are only available if the primary key of the destination table
                       contains a date.
                       Important: When you set the Operator and Alarm Values criteria, ensure that
                       you specify alarm values rather than acceptable values.

To configure the table mappings, click one of the buttons as described in the following table:

Click This Button          To Do This

Remove Alias When Matched  Remove entries from the alias name column when a one-to-one
                           match is found in the formula.
(toolbar button)           Load column data for the destination table from the database, for
                           example, the Column name, Data type, Position, Load and PK fields.
(toolbar button)           Match aliases to the destination table columns where a match exists.
(toolbar button)           Clear the loader report configuration.
(toolbar button)           Reload the loader report information from the database.
Run                        Open the Locate Records dialog box. In the Locate Records dialog
                           box, you can specify WHERE conditions to use when loading data
                           between the external table and the destination table.
Clear                      Clear the Where Condition pane.


Adding a WHERE Condition

To add a WHERE condition:

1. In the Configure Loader Table Mappings dialog box, click Run. The Locate Records
dialog box appears. This picture shows an example:

2. In the Locate Records dialog box, complete the following information:

In This Field  Do This

Field          Select a field from the drop-down list.
Operator       Select an operator from the drop-down list.
Value          Type a value in the field.
Null Value     Select this checkbox to use a null value instead of specifying an operator and
               value.

3. Click the Add button. The new condition is added to the Expression Builder pane.

Tip: To remove a condition from the Expression Builder pane, click the Clear button.

4. If you want to add an AND clause to your WHERE condition, repeat steps 2 to 3.

5. If you want to add an OR clause to your WHERE condition, click the OR tab at the bottom
of the Expression Builder pane and then repeat steps 2 to 3.

Tip: To remove all of the conditions you have added, click Reset.


6. When you have finished, click OK to save your changes and close the Locate Records
dialog box.

Your WHERE condition is added to the Where Condition pane in the Configure Loader
Table Mappings dialog box.
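
For example, a WHERE condition built in this way (using hypothetical column names) might read:

OBJECTID IS NOT NULL AND COUNTER1 > 0

Only rows in the external table that satisfy this condition are then loaded into the destination table.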

Defining TCAs for the Loader


When configuring the loader report, you can define TCAs (Threshold Crossing Alerts).

TCAs are loader-specific alarms, which are raised as data is loaded into the OPTIMA database
using the Loader. They indicate a discrepancy between the expected values according to the
defined thresholds and the data loaded into the database after any modification during the loading
process.

TCAs are based on columns loaded into raw tables.

To define TCAs:

1. On the Configure report dialog box, click the Table Settings tab.

2. Click the Configure Mappings button.

3. In the Configure loader table mappings dialog box, define the TCA threshold for each
column for which you want to raise TCAs.

Select the operator and the corresponding alarm value - for example, '>' and '10' to raise a
TCA if the column value is greater than 10:

4. Click OK.

5. On the Table Settings tab, in the Threshold crossing alerts pane:


o Select the Alarms enabled option to enable the TCAs that you have defined
o From the Alarms Severity drop-down list, choose the appropriate severity level for the
TCAs when they are raised

This picture shows an example:

6. Click OK to save the TCA.


Viewing the Log Messages for the Loader


Use the Log Messages tab to view the log messages produced for a report. To view a report's log
message history, type the number of days history required in the 'Number of days from current date
to go back' field and click Refresh. This picture shows an example:

Log Messages tab

This table describes the information that is shown for each log message:

Column Description

DATETIME The date and time when the message was logged.
PRID The automatically-assigned PRID uniquely identifies each instance of the
application. It is composed of a 9-character identifier, made up of Interface ID,
Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance ID
are both made up of 3 characters, which can be a combination of numbers
and uppercase letters.
For more information, see About PRIDs on page 29.
MESSAGE_TYPE The type of message that was logged.
SEVERITY The severity level of the log message.
MESSAGE The message that was logged.

Defining the Validator Options for the Loader


You can use the Validator Options tab to configure the Loader to perform the validation of the
data before it is loaded.

Note: This tab is not applicable for direct path loading.


This picture shows an example:

Example Validator Options tab

To configure the validation options:

1. Select the Validate Input File option.

2. Choose any trimming options that you want to use when validating the data. This table
describes the options:

Option Description

Trim Header Removes any spaces found around the header columns.
Trim Data Removes any spaces found around the data values.

3. Select the required separator for input files - comma, SPACE, TAB or another character.

4. Choose any additional options that you want to use when validating the data. This table
describes these options:

Option Description

Windows Input Files Select this option if the files that are to be loaded/validated are in
Windows format (where the lines end with \r\n), and you want to convert
them to UNIX.
Important: If you have already set the Platform to be Windows on the
Files and Directories tab, then you do not need to set this value here as
well.
Remove Header Does not include the header in the output file.
Important: If you remove the header line, do not count it among the
Header Lines to Skip. For more information on skipping header lines, see
Defining the Files and Directories for the Loader on page 220.
Columns Case Sensitive Compares the header columns to ensure that they are the same case.


5. In the Missing Value box, type the value to be used for any columns which are not in the file
and are to be added to the database.

6. In the Header Line Number box, specify the number of lines that need to be skipped in
order to process the data.

7. You can choose to use Safe Mode.

Safe Mode enables you to generate a file containing the data for any new counters (or
columns in the parser file header) that the parser outputs but that were not expected based on the
configuration of the original report.

If you want to use Safe Mode:


o Select the Safe Mode option
o Define an appropriate directory for the generated new counter file to be stored.
o Select the primary and ignore columns for the new counter file:

Column Description

Primary Primary columns are those which will be needed to load the new counter file.
To add a primary column, click the Add Primary column button, type the name of
the column and then click OK.
Ignore Ignore columns are columns for any new counters that you know have been
added since the validation report was created, but are not interested in, and want
to exclude from the file.
To add an ignore column, click the Add Ignore column button, type the name of
the column and then click OK.

Saving the Configuration


Once you have configured the loader report, you must save the configuration information to the
loader database configuration.

To do this:

1. Click Apply from any of the Configure Report tabs.

2. In the Confirm dialog box, click OK to create an INI configuration file locally. The file is
created in the location specified in the Loader report configuration.

If you are loading on a Unix platform, then the INI file must be transferred to the
OPTIMA Unix platform and passed as a parameter to the Loader.

3. In the next Confirm dialog box, click OK. This creates Oracle directory objects in the
database that the Loader uses during processing.

Maintenance of the Loader


In usual operation, the Loader application should not need any special maintenance. During
installation, the OPTIMA Directory Maintenance application will be configured to maintain the
backup and log directories automatically.

However, TEOCO recommends the following basic maintenance checks are carried out for the
Loader:


Check The                      When    Why

Input directory for a backlog  Weekly  Files older than the scheduling interval should not be in
of files                               the input directory. A backlog indicates a problem with
                                       the program.
Error directory for files      Weekly  Files should not be rejected. If there are files in the error
                                       directory, analyze them to identify why they have been
                                       rejected.
Log messages for error         Weekly  In particular, any Warning, Minor, Major and Critical
messages                               messages should be investigated.

Checking a Log File Message


The log file for the Loader is stored in the directory as defined in the Directory settings dialog box.

A new log file is created every day. The information level required in the log file is defined in the
General Settings dialog box and will be one of the following:
• Debug
• Information
• Warning
• Minor
• Major
• Critical

These levels help the user to restrict low level severity logging if required. For example, if Minor is
selected then only Minor, Major and Critical logging will occur.

Note: Utilities are provided for searching and filtering log messages and also for loading the
messages into the database. For more information, see Checking Log Files on page 41.

Checking the Loader Error Log Tables


If you selected the Enable Error Tables option when you configured the database and processing
settings for the loader, you can view loader error information in two places within the database:
• The LOADER_LOG table provides a brief description of the number of errors for a
particular file (using the pre-batched filename) and the cause of the error. This picture
shows an example, as seen in TOAD:

Important: The filename will only be given in the LOADER_LOG table if the
INPUT_FILE_NAME has been defined as one of the aliases in the external table settings
(Loader File Mappings and Loader Table Mappings).


Also, the function test_for_filename will not log errors per file if a column name other than
INPUT_FILE_NAME is used.
• The ERROR_LOG table (called ERR_PRID, where PRID is the PRID value for the
instance) gives a detailed description of the load failures for each offending row. This
picture shows an example, as seen in TOAD:
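
As an illustration, assuming a loader instance with the hypothetical PRID 001411028, and
assuming the error table follows the standard Oracle error logging layout, you could inspect the
failed rows with a query such as:

SELECT ora_err_number$, ora_err_mesg$
FROM ERR_001411028;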

Checking the Version of the Loader


If you need to contact TEOCO support regarding any problems with the Loader, you must provide
the version details.

You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

In Windows:

opx_LOD_GEN_110.exe -v

- or -

opx_LOD_GEN_112.exe -v

In Unix:

opx_LOD_GEN_110 -v

- or -

opx_LOD_GEN_112 -v

For more information about obtaining version details, see About Versioning on page 33.

Checking that the Loader is Running


The OPTIMA Process Monitor allows the status of all OPTIMA backend processes to be
determined. Once launched, the Loader should have an entry in the process table for a given
program instance. For more information, see About the Process Monitor on page 257.

To ensure a program is running, the Process Monitor will examine the PIDs directory for a PID file
matching the PRID. If the PRID file exists, the PID is tested to check that it exists in the process
table. If it does not, the PID is removed and the next scheduled Loader invocation should restart the
Loader process. The Process Monitor can also be configured to periodically terminate the Loader
instance to ensure that a zombie process does not run unchecked for an extended period of time.

The Process Monitor functionality can also be performed at the command line using standard Unix
commands, for example (using external table loading):
• To identify the PID of the running opx_LOD_GEN_110, go into the PIDs directory:
cd $OPTDIR/pids
• To display the file contents:
cat <hostname>_opx_LOD_GEN_110_000000001.pid

This should display the following information:

[PID]
PID = 9191
• To identify the PID in the process table:
ps -ef | grep 9191

If the process cannot be found in the process table, the program has terminated. The PID file
should be removed from the monitor directory as other attempts to invoke the Loader will fail whilst
the PID file exists.

Note: For more information, see About PRIDs on page 29.

Tuning the Loader


In order to get the best results when using the Loader, you should bear in mind the following:

Ensure that you choose the most appropriate load option

The load option that you choose depends on whether or not you are using combiners:

Configuration    Loader Requirements    Recommended Load Option    Alternative Load Option
                                        (Database Value)           (Database Value)

Loader with      Upsert (Update and     Bulk Upsert using Error    Bulk Load then Upsert (5)
Combiner         Insert)                Logging Tables (12)
Loader without   Insert                 Bulk Load then Error Log   Insert Only with Error
Combiner                                Insert (15)                Logging Tables (11)

Note: The specified database values indicate the LOAD_TYPE value stored in the
LOADER_PARAMETERS table. If you cannot access the Loader GUI, you can set the load type by
updating the LOADER_PARAMETERS table with the required LOAD_TYPE value. For more
information, see About the Loader Options and Database Values on page 239.
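
For example, a minimal sketch of such an update; the schema and table names in the WHERE
clause are hypothetical, and 12 selects Bulk Upsert using Error Logging Tables:

UPDATE loader_parameters
SET load_type = 12
WHERE schema = 'OPTIMA'
AND dest_table_name = 'CELLGPRS';

COMMIT;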


Set the hint option(s) correctly

When setting the hint option(s) on the DB and Processing tab of the Configure Report dialog box in
the Loader GUI, then you should follow these guidelines:
• If there is more than one loader report for a raw table, then none of the loader reports loading into the same raw table should have any hint options selected (in other words, SQL_HINT_OPTION should be 0 for these reports). You can use this query to check which reports this applies to:

select schema, dest_table_name,
count(*) "Loaders for a table > 1",
sum(case when sql_hint_option in (1,2,3) then 1 else 0 end) "Append/Parallel hints"
from loader_parameters
group by schema, dest_table_name
having count(*) > 1
and sum(case when sql_hint_option in (1,2,3) then 1 else 0 end) > 0
order by count(*) desc

You should then go through the list of loader reports that are returned and manually update the SQL_HINT_OPTION (a sketch of such an update follows the note below).
• For tables with one loader that have performance problems, you should select the APPEND hint (in other words, SQL_HINT_OPTION should be 1 for these tables). You can use this query to check which reports this applies to:

select schema, dest_table_name,
count(*) "Loaders for a table = 1",
sum(case when sql_hint_option = 0 then 1 else 0 end) "Append recommended"
from loader_parameters
group by schema, dest_table_name
having count(*) = 1
and sum(case when sql_hint_option = 0 then 1 else 0 end) = 0

You should then go through the list of loader reports that are returned and manually update the SQL_HINT_OPTION.

Note: If any of the reports use an Error Logging load option, then hints will be switched off
automatically. This is because an APPEND or PARALLEL hint may cause an insert using error
logging to fail.
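
In either case, the manual update itself is a single statement against the LOADER_PARAMETERS table. This is a minimal sketch, assuming illustrative schema and raw table names (substitute the values returned by the queries above):

-- Sketch only: switch off hints (SQL_HINT_OPTION = 0) for the loader reports
-- feeding one raw table. 'ERICSSON_UTRAN' and 'SOME_RAW_TABLE' are illustrative.
UPDATE AIRCOM.LOADER_PARAMETERS
SET SQL_HINT_OPTION = 0
WHERE SCHEMA = 'ERICSSON_UTRAN'
AND DEST_TABLE_NAME = 'SOME_RAW_TABLE';
COMMIT;

To apply the APPEND hint to a table with a single loader instead, set SQL_HINT_OPTION = 1 in the same statement.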


Set an appropriate Input Threshold

On the Files and Directories tab of the Configure Report dialog box, it is recommended that you set the input threshold to 10,000,000 bytes (10 MB). This value is stored as the InputThresHold parameter in the configuration (INI) file.

Ensure that you choose the right level of Logging Severity

Logging severity (which is stored in the LOADER_PARAMETERS table as the LOG_SEVERITY parameter) should normally be left as the default value of 2 (in other words, Information level).

You should only use Debug mode (1) as a temporary value in order to diagnose errors.

Use the Validation Options in the Loader, rather than using a separate Validator

It is recommended that you configure validation by using the Validation Options tab of the
Configure Report dialog box, rather than using a separate Validator.

You should only use a separate Validator if a parser output file needs to be split into two loader input files, to be loaded into two different raw tables.

Set up the correct ratio of Loaders to Mediation Devices

Loaders that are used for loading mediation machine-level files (for example, the common_logs log
loader and the maintain_dir loader) should have one loader per mediation device rather than one
loader per interface.

The OPTIMA Installation Tool currently creates one loader per interface; only one of these loaders
should be deployed per mediation machine per type.

About the Loader Options and Database Values


It is possible to set a number of the Loader options directly in the database, rather than using the
Loader GUI. This is particularly useful for fine-tuning the Loader.

Load Types

In the Configure Report dialog box, on the DB and Processing tab, each combination of Load Option and Error Logging Option has a unique LOAD_TYPE value, which is stored in the LOADER_PARAMETERS table. If you cannot access the Loader GUI, you can change the Load Option and/or Error Logging Option by editing the LOAD_TYPE value.

This table describes the available combinations and corresponding values:

Load Option | Error Logging Option | Also Known As | LOAD_TYPE Value

Insert | On | Insert Only with Error Logging Tables | 11
Insert | Off | Bulk Load | 1
Insert | Failover | Bulk then Error Log Insert | 15
Upsert | On | Bulk Upsert using Error Logging Tables | 12
Upsert | Off | Merge (previously called Upsert) | 4


In addition, there are a number of load options not available on the DB and Processing tab. They
are described in the following table:

Bulk load then single insert (Database Value: 2)
Description: This option will use Bulk Load but, if the bulk load fails, the same data will be loaded using single inserts.
Recommendation: Not recommended. This option was the previously recommended way of inserting data, where Combiners were not being used and the data was clean most of the time. However, it has now been replaced by Bulk then Error Log Insert, which uses error logging inserts instead of single inserts.
Preferred Alternative: If the bulk insert will succeed for 50% or more of the files loaded, use Bulk then Error Log Insert; otherwise use Insert Only with Error Logging Tables.

Single insert only (Database Value: 3)
Description: This option will insert only one record at a time. This method of loading is significantly slower than Bulk Load.
Recommendation: Not recommended. This option is very slow; it is now much faster to use Insert Only with Error Logging Tables when PK errors are expected.
Preferred Alternative: Insert Only with Error Logging Tables will ignore PK and other violations, and will perform much faster than single inserts.

Bulk then Upsert (Database Value: 5)
Description: This option will initially use Bulk Insert, but if this fails it will then run the MERGE query.
Recommendation: Recommended in some circumstances. This option can be used if the Combiners have all counter groups available when combining for the majority of the time. In other words, this option should be used if each record is only updated once. Do not use this option if the raw table contains more than 500 columns.
Preferred Alternative: If updates are required in at least half of the files, and/or the raw table contains more than 500 columns, use Bulk Upsert using Error Logging Tables instead.

If you want to use any of these options, you can do so by setting the appropriate LOAD_TYPE
value.
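
For example, this is a minimal sketch of switching a single raw table's loader to Bulk then Upsert; the schema and table names are illustrative:

-- Sketch only: set LOAD_TYPE 5 (Bulk then Upsert) for one raw table.
UPDATE AIRCOM.LOADER_PARAMETERS
SET LOAD_TYPE = 5
WHERE SCHEMA = 'ERICSSON_UTRAN'
AND DEST_TABLE_NAME = 'SOME_RAW_TABLE';
COMMIT;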

Hint Options

In the Configure Report dialog box, on the DB and Processing tab, each hint option has a unique SQL_HINT_OPTION value, which is stored in the LOADER_PARAMETERS table. If you cannot access the Loader GUI, you can change the hint option by editing the SQL_HINT_OPTION value.

This table describes the available options and corresponding values:

Description | SQL_HINT_OPTION Value

No hint selected | 0
APPEND hint selected | 1
PARALLEL hint selected | 2
Both hints selected | 3

Note: If you select a value greater than one for the Degree of parallelism option on the DB and
Processing tab, an SQL_HINT_OPTION value of 0 is stored.


Troubleshooting

Loader Configuration Utility

Symptom: Cannot save configuration (INI) file.
Possible Cause: User has insufficient privileges on the configuration (INI) file or directory; or the file is read-only or is being used by another application.
Solution: Enable permissions and make the file writable. Close the Loader application to release the configuration (INI) file.

Symptom: New configuration settings are not being used by the Loader.
Possible Cause: Settings are not saved to the configuration (INI) file; the INI file has not been moved to the correct directory on the Unix machine; or the Loader has not restarted to pick up the new settings.
Solution: Check the settings and the location of the file. Restart the Loader application.

Symptom: DB External Table Creation Error or 3120 Error.
Possible Cause: External table not created properly.
Solution: Check whether an external (.ext) file exists on the DB machine in the external table directory. If not, try to manually copy a file from the Parser/Loader machine to this directory, and check the directory paths and permissions.
If a file does exist, check that the external file has the same number of columns as the external table definition, and the same data types. Try defaulting the external table columns to VARCHAR2(500) and loading all data types as VARCHAR2 into the external table.
Check whether the external table exists inside TOAD (or a similar program); if it does not, run the external table script that is contained within the Loader Parameters table for this report. If it does exist, try doing a select * from the external table.
Check the external file for bad end-of-line characters, and make sure that the correct file end-of-line format (Unix or Windows) is selected in the GUI.
Check that the Oracle external table directory exists by using the following SQL:
select * from dba_directories

Loader Application

Symptom: Application not processing input files.
Possible Cause: Application has not been scheduled; crontab entry removed; application has crashed and Process Monitor is not configured; or incorrect configuration settings.
Solution: Use Process Monitor to check the last run status. Check crontab settings. Check configuration settings. Check the process list and monitor file: if there is a monitor file and no corresponding process with that PID, then remove the monitor file. (Note: The Process Monitor will do this automatically.)

Symptom: Application exits immediately.
Possible Cause: Another instance is running; or an invalid or corrupt (INI) file.
Solution: Use Process Monitor to check the instances running. Check that the INI file is configured correctly, and recreate it if corrupted.

Symptom: Files in Error Directory.
Possible Cause: Incorrect configuration settings, or invalid input files.
Solution: Check the log file for more information on the problems. Check the error file format.

Loader Error Codes


This section describes the C++ error codes common to the Loader:

Message Code Description Severity

1000 Loader instance started. Creating list of files in the Input Directory. DEBUG
Loader instance finished processing the Input Directory. DEBUG

1001 Input folder is empty. Disconnecting from DB. DEBUG


1002 Processing input file : <file_name_and_path>. DEBUG

1003 Skipping input file because no longer exist or is a core dump file. DEBUG
Requesting database workers to stop when move temp external file queue is empty. DEBUG
Requesting backup and error workers to stop when backup and error queue is empty. DEBUG
1004 Input file is an empty file. DEBUG
1005 Input file is an empty file. DEBUG
1006 File group started: <fileGroupId>. DEBUG
1007 File group ready for processing: <fileGroupId>. DEBUG
1008 Still below file group threshold: <fileGroupId>. DEBUG
1009 Above file group threshold so start processing file group: <fileGroupId>. DEBUG
1010 Main mediation thread finished processing input directory. DEBUG
Processing last file. DEBUG

1011 Adding last file to file group: <fileGroupId>. DEBUG


1012 File group ready for processing: <fileGroupId>. DEBUG
1050 Processing file group <fileGroupId> - size <fileGroupSize>. DEBUG
1051 Resetting values for next file group. DEBUG
1052 Adding file to next file group because was too large for last file group. DEBUG
1053 File group started: <fileGroupId>. DEBUG
1054 Resetting file group started flag: <fileGroupId>. DEBUG
1111 Terminating - stop processing input folder file. DEBUG
1112 Terminating - will not concatenate/validate file group. DEBUG
1113 Terminating - stopping move temp worker. DEBUG
1114 Terminating - stopping move temp worker. DEBUG


1117 Terminating - stopping database worker. DEBUG


1119 Terminating - stopping backup and error worker. DEBUG
2100 Error Found when processing file, so moved Input File: <fileName> to Error Directory. WARNING
2101 Unable to Move Input File : <fileName> to Error Directory. WARNING
2102 Error Found when processing file, deleting input file: <filename>. INFORMATION
2103 Moved Input File: <fileName> to Backup Directory. DEBUG
2104 Unable to Move Input File: <fileName> to Backup Directory. WARNING
3001 LoadData: Known error code encountered. MAJOR
3008 The percentage loaded <PercentageLoaded> % is lower than the threshold (<ErrThreshold> %) so files will be moved to the error directory. MINOR
3009 LoadFile: Termination signal from Loader Package with code: <ErrorCode>. Loader instance will be terminated. Error: <errorDetails>. CRITICAL
3010 LoadFile: <errorDetails>. CRITICAL
3051 Optima_Loader package successfully prepared. DEBUG
3052 Executing OPTIMA_LOADER.LOAD_DATA procedure. DEBUG
3053 Finished executing procedure OPTIMA_LOADER.LOAD_DATA. DEBUG
3054 OPTIMA_ADMIN package successfully prepared. DEBUG
3055 Executing OPTIMA_ADMIN.AUTHENTICATE_ROLE procedure. DEBUG
3056 Finished executing procedure OPTIMA_ADMIN.AUTHENTICATE_ROLE. DEBUG
3060 DatabaseLogout method. DEBUG
3070 Connecting to db method. DEBUG
3071 Connecting to database: <DatabaseName> as user <UserName>. DEBUG
Database authentication method. DEBUG

3072 Successfully connected to database. DEBUG


3100 Could not connect to database - terminating. CRITICAL
3101 Could not authenticate user - terminating. CRITICAL
3200 Waiting for all database workers to finish. DEBUG
Waiting for all backup and error workers to finish. DEBUG

3201 All database workers finished. DEBUG


All backup and error workers finished. DEBUG

4100 Successfully created skinny file <final_skinny_filename>. INFORMATION


Successfully created bad file <finalBadLinesFileName>. INFORMATION

4101 Could not create skinny file <final_skinny_filename>. WARNING


Could not create bad file <finalBadLinesFileName>. WARNING

5000 Could not move temp file to external directory. MAJOR


Error calling loader package for external file. MAJOR

Could not rename temp external file to external file, is external file locked? MAJOR
Expected to find a temp external file. MAJOR


Started processing file group <fileGroupId>. DEBUG

Processing file: <FileName>. DEBUG

Finished processing file: <FileName>. DEBUG

Created temp file <TempFileName> for file group <fileGroupDId>. DEBUG

Started processing temp file for file group <fileGroupId>. DEBUG

Successfully moved temp file <TempFileName> to external directory <TempExternalFile>. DEBUG
Finishing MoveTmpFileToExterDir for file group <fileGroupId>. DEBUG

Database worker started (<threadId>). DEBUG

Started processing temp external file for file group <fileGroupId>. DEBUG

Finishing processing temp external file for file group <fileGroupId>. DEBUG

Database worker finished (<threadId>). DEBUG

Starting database worker Number - <workerID>. DEBUG

BackupErr worker started (<threadId>). DEBUG

Moving file group files to error folder. DEBUG

File group successful. DEBUG

Started processing backup and error for file group <fileGroupId>. DEBUG

Finishing processing backup and error for file group <fileGroupId>. DEBUG

Starting error and backup worker Number - <workerId>. DEBUG

The percentage loaded is <fileGroupPercentageLoaded> % (<fileGroupRecordsLoaded> rows processed). INFORMATION
5211 Wait success - waited <elapsedTime> seconds for the database worker to rename the temp external file. DEBUG
5212 Wait fail - waited <elapsedTime> seconds for the database worker to rename the temp external file. MAJOR
7000 Password Decryption Error : <errorMessage>. CRITICAL
7003 DB Connection Error : <errorDetails>. CRITICAL
7031 DB Role Authentication Error: <errorDetails>. CRITICAL
7777 When using validation mode one report should be defined in the INI file. CRITICAL
8666 AppendToFile successful for file: <FileName>. DEBUG
8777 Validation successful for input file: <FileName>. DEBUG
8888 DB disconnection Error: <errorDetails>. CRITICAL
AppendToFile fails From : <FileName> To : <TempFileName>. WARNING

8999 Validation failed for input file: <FileName>. WARNING


106021 Existing Target file deleted. WARNING


Example Loader Configuration (INI) File


This shows an example configuration (INI) file for external table loading:

[MAIN]
InterfaceId=001
ProgramId=110
InstanceId=009
PRID=001110009
LogGranularity=3
LogSeverity=2
Verbose=0
RunContinuous=0
Pollingtime=10
Standalone=0
Iterations=5
UseFolderFileLimit=1
FolderFileLimit=10000

[LoaderConfiguration]
Database=OPTPROD62
UserName=AIRCOM
Password=ENC(l\mlofhY)ENC
ExtFileName=opx_LOD_GEN_110_001110009.ext
DoCpyToErr=1
DoBackup=0
FileMask=*.csv
ErrThreshold=100
NumberOfHeaderLines=1
InputThresHold=0
ValidateInputFile=1
TimeInterval=10
ExeName=opx_LOD_GEN_110
ReportName=CSV_NA_swFCPort_DC

[DIR]
LogDir=/OPTIMA_DIR/<application_name>/log
TempDir=/OPTIMA_DIR/<application_name>/tmp
PIDFileDir=/OPTIMA_DIR/<application_name>/prids
InputDir=/OPTIMA_DIR/<application_name>/out
BackupDir=/OPTIMA_DIR/<application_name>/backup
ErrDir=/OPTIMA_DIR/<application_name>/error
ExtTblDir=/OPTIMA_DIR/<application_name>/extdir

[VALIDATECONFIGURATION]
TrimHeader=1
TrimData=0
SeparatorIn=,
separatorOut=,
HeaderLineNumber=1
WindowsInputFiles=0
AvoidLineWithSubStrings=
InputFileNameAsColumn=0
MissingValue=
RemoveHeader=0
ColumnsCaseSensitive=0
SafeMode=1


[SAFE]

SafeDir=C:\Development\Test\opx_LOD_GEN_110\newCounters
IgnoreColumns=0
PrimaryColumns=1
PrimaryColumn1=DateTime

[REPORTS]
Number=1
Report1=VALIDATION_REPORT

[VALIDATION_REPORT]
ColumnNumber=34
Column1=DateTime
Column2=IPADDRESS
Column3=PORT
Column4=Index
Column5=swFCPortCapacity
Column6=swFCPortIndex
Column7=swFCPortTxWords
Column8=swFCPortRxWords
Column9=swFCPortTxFrames
Column10=swFCPortRxFrames
Column11=swFCPortTxC2Frames
Column12=swFCPortRxC3Frames
Column13=swFCPortRxLCs
Column14=swFCPortRxMcasts
Column15=swFCPortTooManyRdys
Column16=swFCPortType
Column17=swFCPortNoTxCredits
Column18=swFCPortRxEncInFrs
Column19=swFCPortRxCrcs
Column20=swFCPortRxTruncs
Column21=swFCPortRxTooLongs
Column22=swFCPortRxBadEofs
Column23=swFCPortRxEncOutFrs
Column24=swFCPortRxBadOs
Column25=swFCPortC3Discards
Column26=swFCPortMcastTimedOuts
Column27=swFCPortPhyState
Column28=swFCPortTxMcasts
Column29=swFCPortLipIns
Column30=swFCPortLipOuts
Column31=swFCPortOpStatus
Column32=swFCPortAdmStatus
Column33=swFCPortLinkState
Column34=swFCPortTxType

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows


About the Loader Configuration (INI) File Parameters

Note: Environment variables can be used within the directory specification, except for the External
Table Directory - ExtTblDir.

Warning: Directly modifying the GUI-generated configuration file is not recommended.

This table describes the entries found in the [MAIN] section of the ETL GUI generated INI
configuration file:

Parameter Description

FolderFileLimit The maximum number of output files that can be created in each output (sub)
folder.
This must be in the range of 100-100,000 for Windows, or 100-500,000 on
Sun/UNIX, otherwise the application will not run.
Warning: Depending on the number of files that you are processing, the lower
the file limit, the more output sub-folders that will be created. This can have a
significant impact on performance, so you should ensure that if you do need to
change the default, you do not set the number too low.
The default value is 10,000.
Instance ID The three-character program instance identifier (mandatory).
Interface ID The three-digit interface identifier (mandatory).
Iterations This parameter is used when the application does not run in continuous mode
so that it will be able to check for input files in the input folder for the number of
required iterations before an exit. Integer values are allowed, like 1,2,3,4 and so
on.
LogGranularity Defines the frequency of logging, the options are:
0 - Continuous
1 - Monthly
2 - Weekly
3 - Daily
LogLevel (or Sets the level of information required in the log file. The available options are:
LogSeverity)
1 - Debug
2 - Information (Default)
3 - Warning
4 - Minor
5 - Major
6 - Critical
PollingTime (or The pause (in seconds) between executions of the main loop when running
RefreshTime) continuously.
PRID The automatically-assigned PRID uniquely identifies each instance of the
application. It is composed of a 9-character identifier, made up of Interface ID,
Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance ID
are both made up of 3 characters, which can be a combination of numbers and
uppercase letters.
For more information, see About PRIDs on page 29.
Program ID The three-character program identifier (mandatory).


RunContinuous 0 - Have the loader run once.


1 - Have the loader continuously monitor for input files.
Standalone 0 - Run the loader with a monitor file.
1 - Run the loader without a monitor file. Use this option if the Parser is
scheduled or the OPTIMA Process Monitor is used.

UseFolderFileLimit Indicates whether the folder file limit should be used (1) or not (0).
The default value is 0 ('OFF').
Verbose 0 - Run silently. No log messages are displayed on the screen.
1 - Display log messages on the screen.

This table describes the entries found in the [Loader Configuration] section:

Parameter Description

Database The database name as defined on the Loader server.


DoBackup Indicates whether a copy of the input file will be copied to the backup
directory (1) or not (0).
If you do not choose to backup, the input file is deleted after it has been
processed.
DoCpyToErr Indicates whether the input file will be moved to the error folder when the load
fails (1) or not (0).
ExeName The name of the Loader Executable.
ExtFileName The external table file name.
ExtTblDir The location of the directory where the data file will be copied to.
This will usually be a mapped drive pointing to a directory specified in the
External Table Directory (DB) field.
If the loader client is running on the database server, then this location will be
the same as the External Table Directory (DB) location.
FileMask The file mask that the loader will match when selecting files to process.
Password The encrypted password for the Loader configuration instance.

ReportName An instance-unique report name.


TimeInterval The pause (in seconds) between the executions of the main loop when running continuously.
UserName The username for the specified database.

This table describes the entries found in the [DIR] section:

Parameter Description

LogDir The location of the directory where log files will be stored.

TempDir The location of the directory where temporary files will be stored.
PIDFileDir The location of the directory where PID files will be created.
InputDir The location of the input directory from where the raw data files will be
processed.
BackupDir The location of the raw file backup directory.
ErrDir The location of the error directory.
ErrThreshold The error threshold.


NumberOfHeaderLines The number of header lines to skip.


InputThresHold The input threshold in bytes.
The loader can load several CSV files at once, by combining them into a
single external file. The input threshold is the maximum size of the single
external file.
ValidateInputFile Indicates whether the loader is also configured to perform the data validation
(1) or not (0).

This table describes the entries found in the [VALIDATECONFIGURATION] section:

Parameter Description

AvoidLineWithSubStrings Do not process the line where a specific string is found.


ColumnCaseSensitive If this is set to 1, the header columns are compared to ensure that they are
the same case. 0 indicates that they will not be compared.
FirstDataLine Indicates the line at which the data begins in the data file, after any header
lines.
By default this is set to 2 (in other words, the 2nd line of the file).
HeaderLineNumber The number of lines that need to be skipped in order to process the data.
InputFileNameAsColumn If this is set to 1, it adds an INPUT_FILE_NAME_CMB column (with its
data values underneath) to the output file.
By default (0), this is not done.
MissingValue The value used for any columns that are not in the file and are to be added
to the database.
RemoveHeader If this is set to 1, it does not include the header in the output file. 0 indicates
that the header will be included.
SafeMode Indicates whether the Safe Mode option has been selected (1) or not (0).
SeparatorIn Separator character for input files.
The possible characters are:
Comma ","
Pipe "|"
Tab "TAB"
Spaces "SPACE"
SeparatorOut Separator character for output files.
The possible characters are:
Comma ","
Semicolon ";"
Pipe "|"
Tab "TAB"
Spaces "SPACE"
TrimData If this is set to 1, any spaces found around the data values will be removed.
0 indicates that these spaces will not be removed.
TrimHeader If this is set to 1, any spaces found around the header columns will be
removed. 0 indicates that these spaces will not be removed.


WindowsInputFiles This parameter should be used when the input files are in the Windows
format (the lines end with \r\n), and you want the Validator to convert the
line endings for UNIX.
0 (Default) - Input files are not Windows
1 - Input files are Windows
Important:
• If you have already set the Platform to be Windows, then you do not
need to set this value here as well.
• If this parameter is not set correctly for the input files that are used,
then the data is still processed, but because of the extra character
added while transferring, the last column is ignored and the value for
this is filled up using the Missing Value parameter.

This table describes the entries found in the [SAFE] section, which is only produced if the
SafeMode option is selected on the Validator Options tab of the Loader:

Parameter Description

IgnoreColumn n The name of each ignore column, where n is the column number.
IgnoreColumns The total number of ignore columns, which are columns for any new counters
that you know have been added since the validation report.
PrimaryColumn n The name of each primary column, where n is the column number.
PrimaryColumns The total number of primary columns, which are those which will be needed
to load the new counter report.
SafeDir The location of the directory of the new counter report generated in safe
mode.

If you have chosen to use validation, then this table describes the entries found in the [REPORTS]
section:

Parameter Description

Number The total number of validation reports.


Report n The name of each report, where n is the report number.

Each validation report will have its own section, containing the following entries:

Parameter Description

Column n The name of each column (new counter), where n is the column number.
ColumnNumber The total number of columns (new counters) in the report.


About Direct Path Loading


In an environment where no operating system user or file access is allowed, the use of an external table for data loading is not possible. In this case you can use SQL*Net connections to perform direct path loading, which is more secure.

With direct path loading, data is loaded into a Global Temporary Table (GTT). The GTT is loaded
with files directly from the input directory until the input threshold has been met or exceeded. No file
append is needed as the data is loaded directly from the files in the input directory.

Note: No other Oracle Session can see the data that is loaded into the GTT by the direct path
loader client.

This picture shows a simplified representation of the direct path loading process:

[Figure: Direct path loading process]
The direct path loader client:


• Produces log files to be loaded by the log parser/loader.
• Uses standard log severities from Debug to Critical.
• Allows crash recovery with the Process Monitor using a PID file.
• Has a unique PRID with a Program ID of 112 for each instance.
• Supports Backend Application Framework INI options.
• Supports TCA alarms in the same way as for external loading.
• Runs on Windows, HP, Sun, and Linux.
• Can load data using external loader (110) INI files provided that the database configuration
has been updated.
• Supports the loading of headers in the input file which are invalid Oracle identifiers, using the alias configuration in the same way as for external loading.

Direct path loading does not require validation configuration, as loading is performed by matching the header names in the input file using the configuration in the LOADER_FILE_MAPPINGS table. It may therefore be faster than external loading in situations where the order of the columns in the input file varies between files. Direct path loading automatically trims any spaces in the header column names. If there are additional columns in the input file, they are ignored (no bad file will be generated). If there are missing columns in the input file, then they are loaded into the staging table as NULL and the package does not load these values into the raw table.
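
As a sketch, the mappings themselves can be reviewed directly; a plain SELECT * is shown because this guide does not list the table's individual columns, and the AIRCOM schema qualifier is assumed to match LOADER_PARAMETERS:

-- Sketch only: review the header-to-column mappings used by direct path loading.
-- The AIRCOM qualifier is an assumption, matching AIRCOM.LOADER_PARAMETERS.
SELECT * FROM AIRCOM.LOADER_FILE_MAPPINGS;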

Migrating to Direct Path Loading


Individual loaders can be changed to use Direct Path Loading by using the ETL Loader
Configuration window. This section describes how to carry out a bulk migration of loaders.

Important: When migrating loaders that are based on combined measurement objects, ensure that
no column headings in a combined file have the same name. If there is more than one column with
the same name, the Direct Path Loading process will fail with the error
‘DIRPATHSETCOLUMNS_ERROR’.

To enable direct path loading:

1. Stop all loaders in the customer database.

2. Add additional columns to the LOADER_PARAMETERS table


(@upgrade_loader_parameters.sql).

3. Install the updated OPTIMA_LOADER package.

4. Update configuration to use direct path loading, for example to migrate all loaders for the
ERICSSON_UTRAN schema to work with the direct path loader client, execute the
following SQL:

UPDATE AIRCOM.LOADER_PARAMETERS SET STAGING_OPTION=2 WHERE SCHEMA='ERICSSON_UTRAN';
COMMIT;

5. Call the loader package to generate the SQLs and staging table:

EXEC AIRCOM.OPTIMA_LOADER.GENERATE_SCHEMA_SQLS ('ERICSSON_UTRAN');

6. Install the new opx_LOD_GEN_112 binary in the bin folder.

7. Update the run scripts to use the new loader binary.
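
As a sketch, you can then confirm which loaders are now configured for direct path staging, using the same illustrative schema as in step 4:

-- STAGING_OPTION = 2 indicates the direct path loader client, as set above.
SELECT SCHEMA, DEST_TABLE_NAME, STAGING_OPTION
FROM AIRCOM.LOADER_PARAMETERS
WHERE SCHEMA = 'ERICSSON_UTRAN';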

Configuring for Direct Path Loading


The steps under Migrating to Direct Path Loading describe how to migrate the configuration for a
whole schema to direct path loading. The ETL GUI allows you to convert a single PRID to direct
path loading, and to configure some other direct path loading options. To do this:

On the DB and Processing tab of the Configure Report window, select Direct Path as the Staging Option.


The direct path configuration options become available. This table describes them:

Option: Oracle Connections (Default: 1)
The number of Oracle connections used for this loader PRID. This equates to the number of threads used in the load file worker. Each separate thread populates its own copy of the staging Global Temporary Table with data until the input threshold is met or exceeded. It then calls the Loader Package to transfer the data.
For small tables use a value of 1. For large tables where performance is critical you can specify up to 5 threads. Increasing the Oracle connections significantly decreases loading times, but uses additional resources on the database and mediation machines.

Option: Rows per Load (Default: 1000)
Important: Under normal circumstances this parameter should be left NULL, and you cannot change it here. The loader will then insert 1000 records into each array.
During direct path load into the staging table, the loader input file is read by line and inserted into an array. This parameter determines how many rows are added to the array before the array is sent to the database to be loaded into the Global Temporary Table.
Note: A commit is NOT done at the end of inserting the array. The commit frequency is determined by the input threshold.

Option: Buffer Size (bytes) (Default: MAX_ROWS_PER_LOAD multiplied by Max Record Size)
Important: Under normal circumstances this parameter should be left NULL, and you cannot change it here. It determines the buffer size required to store the array and is calculated automatically. If the buffer size is not big enough, the direct path load will fail with an OCI_DPR_FULL error, for example:
WARNING 112106
"Staging loader direct path convert error with file C:\optdir\loader\in\950_201007220000_RNC_0.csv - OCI_DPR_FULL - the internal stream is full - (Row=0,Column=11) OCI_Error:"
The automatic calculation reads the value of the rows per load (MAX_ROWS_PER_LOAD) and multiplies this by the record size in bytes. The record size is calculated from the LOADER_FILE_MAPPINGS configuration. For more information, see Configuring Loader File Mappings.
Important: When configuring loader file mappings, keep size values to a minimum. For example, use VARCHAR2(50) rather than VARCHAR2(500) where possible.

Notes:
• The settings on the Validator Options tab of the ETL GUI are not applicable to direct path
loading and you will not be able to access them if you have selected the Direct Path
Staging Option.
• Unlike external table loading, direct path loading requires a header line which must be read
even if it is not loaded. On the Files and Directories tab of the ETL GUI you can specify
Header lines to skip.


This table gives examples of what the Header lines to skip setting does in each case:

A Header lines to skip For external table loading For direct path loading means
setting of means

1 Skip line 1, load line 2 onwards. Read line 1, load line 2 onwards.
2 Skip lines 1 and 2, load line 3 Skip line 1, read line 2, load line 3
onwards. onwards.
3 Skip lines 1, 2 and 3, load line 4 Skip lines 1 and 2, read line 3, load line
onwards. 4 onwards.

• With direct path loading, header names in input files are processed as case insensitive. This allows the case to be different for the same column between input files, but does not support two unique column headers which differ only in their case.

Tuning Direct Path Loading


You should use the ETL GUI to set the direct path options rather than updating the Oracle
LOADER_PARAMETERS table directly, unless you use provided scripts to do so. This table
describes the columns in the LOADER_PARAMETERS table so that you can interpret it if
necessary:

LOADER_PARAMETERS table column | Default | Equivalent Loader GUI Option | Description

STAGING_OPTION | External Table | Staging Option | Select External Table to use the External Table Loader client, or Direct Path to use the Direct Path Loader client.
LOADER_THREADS | 1 | Oracle Connections | The number of Oracle connections used by the direct path loader client, that is, the number of loader threads.
MAX_ROWS_PER_LOAD | 1000 | Rows per Load | The number of records loaded into each direct path array before it is sent to the database. This should normally be left as NULL.
DIRECT_PATH_BUFFER_SIZE | Calculated automatically as Rows per Load multiplied by Row Size | Buffer Size (bytes) | The size of the direct path array. This should normally be left as NULL.
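
For example, a read-only sketch for inspecting these settings for one schema (the schema name is illustrative):

SELECT SCHEMA, DEST_TABLE_NAME, STAGING_OPTION, LOADER_THREADS,
MAX_ROWS_PER_LOAD, DIRECT_PATH_BUFFER_SIZE
FROM AIRCOM.LOADER_PARAMETERS
WHERE SCHEMA = 'ERICSSON_UTRAN';
-- NULL values for MAX_ROWS_PER_LOAD and DIRECT_PATH_BUFFER_SIZE mean the
-- defaults described above are used.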


Error Handling for Direct Path Loading


There are some differences between the direct path loader client and the external loader client
when dealing with invalid data.

Direct Path Convert Errors

The Global Temporary Table used as the staging table in the direct path loader client cannot show data from Oracle sessions other than the current session. This means that the data loaded into the GTT by the loader is not visible to an OPTIMA Administrator.

If there is a data error such that a non-numeric character is loaded into a NUMBER column, then
the following message will be displayed:

Staging loader direct path convert error with file
C:\Releases\Backend_8.0\Test\opx_LOD_GEN_112\in\201007220000_RNC_4.csv -
OCI_DPR_ERROR - an error happened while loading data -
(FileCount=5,Row=3,Column=4) OCI_Error:

Note: This error is only displayed if there is a datatype mismatch between the CSV data and the
GTT (loader file mappings). If there is a datatype mismatch between the GTT and the raw table
then the error is not displayed in the log.

The direct path loader normally loads 1000 records into an in-memory array. It then converts this to the correct data types and sends it to the database to be loaded. If there is an error in the conversion, the direct path loader fails to load more than just the invalid record: all the input files whose records are in the 1000-record array are sent to the error directory. In the above log message, 5 files will be sent to the error directory. These include the file listed, which was the file loaded into record 1000 of the array, but not necessarily the file containing the error.

Note: The GTT column which has the invalid data is listed in the error message. The first column is column 0. In the above message, the GTT column at index 4 has the invalid data, which is not necessarily at the same position in the input file. The LOADER_FILE_MAPPINGS table shows which input file column is loaded into the GTT column that has caused the error.

If the direct path convert error is reported on row 0, column 0, then the error could be caused by a missing INSERT grant on the GTT table to the OPTIMA_LOADER_PROC user (through the OPTIMA_LOADER_PROCS role). The GTT table name will be ETL_<PRID> and it will exist in the same schema as the raw table being loaded.

Type Info Get Errors

If a TYPEINFOGET_ERROR is found in the log file, check that the:

• Database Connection is valid
• GTT table specified in LOADER_PARAMETERS.EXT_TABLE_NAME exists
• GTT table has a SELECT grant to the OPTIMA_LOADER_PROC user. Ideally this will be through the OPTIMA_LOADER_PROCS role.
• OPTIMA_LOADER_PROC user has EXECUTE permission on the OPTIMA_LOADER package. Without it, an "Error calling loader package" message is logged because the identifier 'AIRCOM.OPTIMA_LOADER' cannot be resolved.
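
A minimal sketch for checking the grants held on a staging GTT; the ETL_<PRID> naming convention is described in the previous section, and the PRID shown is illustrative:

-- Sketch only: list grantees and privileges on the staging GTT.
-- 'ETL_001112009' is an illustrative ETL_<PRID> table name.
SELECT GRANTEE, PRIVILEGE
FROM DBA_TAB_PRIVS
WHERE TABLE_NAME = 'ETL_001112009';
-- Expect SELECT (and INSERT) held by OPTIMA_LOADER_PROC, ideally via the
-- OPTIMA_LOADER_PROCS role.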


Input File Error

If the log file contains the error "<inputfile> has no matching columns of the expected header columns", check that the "Field Delimiter" has been defined correctly in the Loader Configuration dialog box.

Configuration (INI) File Parameters for Direct Path Loading


While the direct path loader client can load data using external loader (110) INI files, some
parameters are ignored. This example shows the essential parameters used by the direct path
loader client:

[MAIN]
[LoaderConfiguration]
Database=OPTPROD62
UserName=AIRCOM
Password=ENC(l\mlofhY)ENC
DoCpyToErr=1
DoBackup=0
FileMask=*.csv
ErrThreshold=100
NumberOfHeaderLines=1
InputThresHold=0

[DIR]
InputDir=/OPTIMA_DIR/<application_name>/out
BackupDir=/OPTIMA_DIR/<application_name>/backup
ErrDir=/OPTIMA_DIR/<application_name>/error

Note: For more information on these parameters, see About the Loader Configuration (INI) File Parameters on page 247.


8 About the Process Monitor

The Process Monitor continuously checks the running of OPTIMA backend applications on a
particular machine to ensure that they have not crashed, run away or hung.

Each backend application creates a monitor file, which is used by the Process Monitor to identify
and check the health of these applications. If a process crashes, runs away or hangs, the Process
Monitor will remove that instance of the application to ensure the smooth running of the data
loading process.

The Process Monitor uses a configuration file (INI) to store its settings. The configuration file can be
edited using a suitable text editor.

The Process Monitor uses global settings to monitor applications but you can also specify
monitoring requirements for individual applications by defining reports. For more information about
reports, see Defining Monitoring Settings for an Application on page 261.

This diagram shows the Process Monitor process:

[Figure: Process Monitor Process. Each monitored process regularly updates the timestamp of its PID file in the PID File Directory, providing a 'heartbeat'. For each process in its list of processes to monitor (for example Loader, Combiner, FTP and Parser), the Process Monitor:
1. Checks if the process exists in the PID File Directory.
2. Checks if the process exists in the machine's process list.
3a. If the process does not exist, deletes it from the PID File Directory.
3b. If the process exists, checks the heartbeat of the process.
4a. If the last timestamp is older than the acceptable grace period, issues a SIGTERM to the program and adds it to the SIGTERM list.
4b. If the last timestamp is within the acceptable grace period, returns to Step 1 for the next process.
5. Checks the heartbeat of processes in the SIGTERM list.
6a. If the process has stopped, deletes the PID file and returns to Step 1 for the next process.
6b. If the process is still running, issues a SIGKILL to the OS.
7. Checks that the SIGKILL has succeeded, deletes the PID file and returns to Step 1 for the next process.]

If the configuration (INI) file is modified while the Process Monitor application is running, it must be restarted for the changes to take effect.


The Process Monitor program supports these common functions:

Function Action

Logging Status and error messages are recorded in a daily log file.
Monitor Files The application runs in a scheduled mode. A monitor (PID) file, created
each time the application is started, ensures that multiple instances of the
application cannot be run. The PID file is also used by the OPTIMA
Process Monitor to ensure that the application is operating normally.
PRID The automatically-assigned PRID uniquely identifies each instance of the
application. It is composed of a 9-character identifier, made up of Interface
ID, Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance
ID are both made up of 3 characters, which can be a combination of
numbers and uppercase letters.
For more information, see About PRIDs on page 29.

For more details on these common functions, see Introduction.

How the Process Monitor Works


The Process Monitor program is used to monitor all current running backend programs to detect
crashes and program hangs. The Process Monitor clears monitor (PRID) files and rogue processes
to allow the scheduled backend program to run again.

Important: An instance of the Process Monitor will need to be created for each distinct machine
that you are using. This is because the Process Monitor uses the hostname (the environment
variable that identifies the machine on which the backend application is running) to filter the monitor
directory and only monitors instances running on the same machine. The hostname environment
variable should be defined in the .profile (or equivalent, depending on the UNIX shell) running the
backend application.

Tip: To check that the hostname environment variable has been defined:
o Run the hostname command on any console (WIN/UNIX) and a value should be
returned (for example, server1).
o On UNIX check the .profile and/or .bash_profile file(s) for the HOSTNAME environment
variable (shown in capital letters, unlike the command). This should be equal to the
value returned by the command, for example HOSTNAME=server1.

The Process Monitor functions as follows:

1. On start up, it loads all the configuration settings from the Process Monitor INI file into
memory. The settings contain information on all the backend processes to be monitored.

2. The monitor files, created by each backend application, uniquely identify the application
instance using the PRID contained in its filename and the hostname. The Operating
System process identifier (PID), which identifies the unique process ID of a backend
application, is also written to the file. Each backend application regularly updates the
timestamp of the monitor file, which works as a 'heartbeat' for the process.


3. In the Process Monitor INI file, a GlobalTimeOut period (also known as the 'grace period') is
specified. This is the maximum amount of time that the Process Monitor will allow between
'heartbeats' of the monitored process. As it runs, the Process Monitor regularly checks all
monitor files in the common monitor directory to ensure the grace period has not been
exceeded. Then:
o If the grace period has expired, then the Process Monitor issues a SIGTERM request,
which requests that the program cleanly shuts down the process.
o If the grace period has not expired, the Process Monitor checks that the PID in each file
is still in the current OS process list. If it is, then everything is working as it should, and
the Process Monitor moves on to check the next process. If not, this means that the
associated program has crashed, in which case the Process Monitor program removes
the monitor file.

4. The Process Monitor stores a list of SIGTERM requests that have been sent out, and
during the next iteration of monitoring, it checks if the process is still running. If it is, and the
elapsed time since the SIGTERM request is greater than the GlobalTimeOut period, then
the Process Monitor issues a SIGKILL, which forces the OS to terminate the process.

5. After it has issued a SIGKILL, the Process Monitor will wait a period of time for this to
succeed, as determined by the KillProcessDelay parameter.

When this time period has been exceeded, if the process has been terminated then the
monitor file is deleted. If it has not been terminated, then an error message is returned,
because this indicates a problem with the termination.

Installing the Process Monitor


Before you can use the Process Monitor, install the following file in the backend binary directory:
• opx_MON_GEN_510.exe (Windows)
• opx_MON_GEN_510 (UNIX)

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

Starting the Process Monitor


To start the Process Monitor:

Type the executable file name and the configuration (INI) file name at the command prompt:

In Windows:

opx_MON_GEN_510.exe opx_MON_GEN_510.ini

In Unix:

opx_MON_GEN_510 opx_MON_GEN_510.ini

Note: In usual operation within the data loading architecture, all applications are scheduled. In
usual circumstances, you should not need to start the program. For more information, see Starting
and Stopping the Data Loading Process on page 40.


Configuring the Process Monitor


The Process Monitor is configured using a configuration (INI) file. Configuration changes are made
by editing the parameters in the configuration (INI) file with a suitable text editor. The Process
Monitor configuration (INI) file is divided into different sections.

The following table describes the parameters in the [DIR] section:

Parameter Description

LogDir The location of the directory where log files will be stored.
PIDFileDir The location of the directory where monitor (PID) files will be created.
TempDir The location of the directory where temporary files will be stored.

The following table describes the parameters in the [MAIN] section:

Parameter Description

InstanceID The three-character program instance identifier (mandatory).


InterfaceID The three-digit interface identifier (mandatory).
Iterations This parameter is used when the application does not run in continuous mode so
that it will be able to check for input files in the input folder for the number of
required iterations before an exit. Integer values are allowed, like 1,2,3,4...
LogGranularity Defines the frequency of logging. The options are:
0 - Continuous
1 - Monthly
2 - Weekly
3 - Daily
LogLevel (or Sets the level of information required in the log file. The available options are:
LogSeverity)
1 - Debug
2 - Information
3 - Warning
4 - Minor
5 - Major
6 - Critical
PollingTime (or The pause (in seconds) between executions of the main loop when running
RefreshTime) continuously.
ProgramID The three-character program identifier (mandatory).
RunContinuously 0 - Have the data validation application run once.
1 - Have the data validation application continuously monitor for input files.
StandAlone 0 - Run the application without a monitor file. Do not select this option if the application is scheduled or the OPTIMA Process Monitor is used.
1 - Run the application with a monitor file.


The following table describes the parameters in the [OPTIONS] section:

Parameter Description

GlobalTimeout Type the maximum time the Process Monitor should allow for the process being
monitored to update its timestamp.
This is known as the grace period. For information on how this is used, see How
the Process Monitor Works on page 258.
KillProcessDelay Type the maximum time the Process Monitor should wait after a SIGKILL signal
before deleting the monitor file.
TimeScale The time scale for the GlobalTimeout parameter. The available options are:
SEC - seconds
MIN - minutes
HOUR - hours
DAY - days
MONTH - months
YEAR - years

Defining Monitoring Settings for an Application


You can override the global monitoring settings defined in the [OPTIONS] section of the
configuration (INI) file by defining reports to monitor individual applications. For example, if the
Process Monitor is set to check all monitor files every 60 minutes but you want to check the Parser
every 30 minutes, you can define a report to do this.

You define reports by editing parameters in the configuration (INI) file with a suitable text editor.
The following table describes the parameters in the [REPORTS] section:

Parameter Description

NoOfReports The number of reports to create, one for each application being monitored.
Reportn Type the unique name of the report, where n is the execution order position
of the report, for example, Report1 will be executed before Report2.

Then each separate report has its own section:

Parameter Description

InterfaceID The three-digit interface identifier (mandatory).


ProgramID The three-character program identifier (mandatory).
InstanceID The three-character program instance identifier (mandatory).
EXEName Type the executable name of the application to be monitored.
UseHostname Indicates whether the hostname will be used in the PID filename when
monitoring the application (1) or not (0). By default this is set to 1, but if the
application is earlier than 6.2, then the hostname is not included, so it
should be set to 0.
This ensures that the Process Monitor application uses the correct PID file
name for comparing scheduled processes with actual running processes.
Comments Add any comments about the application that is being monitored.
Monitor 0 - Do not monitor the application.
1 - Monitor the application.
MaximumRunningTime Type the maximum time the application should take to do its job.


Timescale The time scale for the MaximumRunningTime parameter. The available
options are:
SEC - seconds
MIN - minutes
HOUR - hours
DAY - days
MONTH - months
YEAR - years

The following example shows the definitions for two reports called XMLParser and PAR_ERI720:

[Reports]
NoOfReports=2
Report1=XMLParser
Report2=PAR_ERI720

[XMLParser]
InterfaceID=000
ProgramID=711
InstanceID=001
EXEname=opxNorXML
UseHostname=0
Comments=XML parser
Monitor=1
MaximumRunningTime=10
TimeScale=SEC

[PAR_ERI720]
InterfaceID=001
ProgramID=720
InstanceID=001
EXEname=opx_PAR_ERI_720
UseHostname=0
Comments=parser ericsson 720
Monitor=1
MaximumRunningTime=30
TimeScale=SEC

For more information, see Example Process Monitor Configuration (INI) File on page 265.

Maintenance
In usual operation the Process Monitor should not need any special maintenance. During
installation the OPTIMA Process Monitor will be configured to maintain the backup and log
directories automatically.

However, TEOCO recommends that the following basic maintenance check is carried out for the Process Monitor:

Check the | When | Why

Log messages for error messages | Weekly | In particular, any Warning, Minor, Major and Critical messages should be investigated.


Checking a Log File Message


The log file for each application is stored in the directory defined in the configuration (INI) file for
that application.

A new log file is created every day. The information level required in the log file is defined in the
General Settings dialog box and will be one of the following:
• Debug
• Information
• Warning
• Minor
• Major
• Critical

These levels help the user to restrict low level severity logging if required. For example, if Minor is
selected then only Minor, Major and Critical logging will occur.

Note: Utilities are provided for searching and filtering log messages and also for loading the
messages into the database. For more information, see Checking Log Files on page 41.

Stopping the Process Monitor


If the Process Monitor application is scheduled, then the application will terminate once it has
finished monitoring the working of all the programs which are scheduled to be monitored.

If run continuously, then the Process Monitor process will monitor the working of all the programs
continuously. In this case, the application can be terminated. For more information, see Starting
and Stopping the Data Loading Process on page 40.

Checking the Version of the Process Monitor


If you need to contact TEOCO support regarding any problems with the Process Monitor, you must
provide the version details.

You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

In Windows:

opx_MON_GEN_510.exe -v

In Unix:

opx_MON_GEN_510 -v

For more information about obtaining version details, see About Versioning on page 33.

Checking that the Application is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.


Process Monitor Message Log Codes


This section describes the message log codes for the Process Monitor:

Message Code Description Severity

5101 Checking PRID Files. INFORMATION


5103 PID file does not have match MachineID of process monitor. DEBUG
5104 Processing PRID file <PRIDFile>. DEBUG
5105 PRID file contains PID <Pid>. DEBUG
5106 Could not read process ID from PID file. The PID is missing from PRID file. File: <PridFile>. WARNING
Could not read process ID from PID file. File not found or wrong access permission. File: <PridFile>. WARNING
5107 Pid value <Pid> is in the OS process list. DEBUG
5108 Process crashed: <Pid> File: <PridFile>. WARNING
PRID file contained PID <sPid> which is not in the OS process list. File: <PridFile>. DEBUG
5109 (Pid value <sPid> still running.). DEBUG
5110 Process hanged: <Pid> File: <PridFile>. WARNING
Killed process with PID <Pid> from the OS process list. DEBUG

5111 (Could not remove PID <Pid> from OS process list. File: <PridFile>). WARNING
5112 PidNotWithinTimeAllowed: Last update for prid file was <timeSinceLastTouch> seconds. DEBUG
IsPidTimeAllowedToRunUp: Last update for prid file was <nTimeSinceLastUpdate> seconds. DEBUG
5113 Using MaximumRunningTime time <seconds> seconds. DEBUG
5114 Using GlobalTimeout time <GlobalTimeout> seconds. DEBUG
5115 Deleted Prid file File: <PridFile>. DEBUG
5116 Could not delete PRID file. File: <PridFile>. WARNING
5117 Processing complete. INFORMATION
5119 Pid file <PRIDFile> no longer exist, no processing needed. DEBUG
5120 Could not delete PRID file. PRID file no longer exist. File: <PridFile>. WARNING
5129 (Pid value <Pid> not been updated since maximum time allowed.). DEBUG
5130 Pid been running <timeSincePidStarted> seconds. DEBUG
5131 PID value <Pid> is being used by a new process, because the number of seconds since the PID process started is less than the number of seconds since the PRID file was last updated. DEBUG
5140 Ignoring own PRID file <PRIDFile>. DEBUG
5141 Error in checking to see if <Pid> was on OS task list from file <PridFile>. WARNING
5142 Error in checking elapsed time of <Pid> on OS task list. File: <PridFile>. WARNING
5143 Error when trying to kill <Pid> on OS task list. File: <PridFile>. WARNING


Troubleshooting
The following table shows troubleshooting tips for the Process Monitor:

Symptom: Cannot save configuration (INI) file.
Possible Cause: The user has insufficient privileges on the configuration (INI) file or directory; or the file is read-only or is being used by another application.
Solution: Enable permissions and make the file writable. Close the Process Monitor to release the configuration (INI) file.

Symptom: Process Monitor does not use new settings.
Possible Cause: Settings are not saved to the configuration (INI) file; the file was created in the wrong location; or the Process Monitor has not restarted to pick up the new settings.
Solution: Check the settings in the file and the (INI) file location. Restart the Process Monitor backend application.

Symptom: Application not monitoring programs.
Possible Cause: Application has not been scheduled; crontab entry removed; application has crashed and Process Monitor is not configured; or incorrect configuration settings.
Solution: Use Process Monitor to check the last run status. Check crontab settings. Check configuration settings. Check the process list and monitor file: if there is a monitor file and no corresponding process with that PID, then remove the monitor file. (Note: The Process Monitor will do this automatically.)

Symptom: Application exits immediately.
Possible Cause: Invalid or corrupt (INI) file.
Solution: Use Process Monitor to check the instances running.

Example Process Monitor Configuration (INI) File


[DIR]
PIFileDir=/OPTIMA_DIR/<application_name>/Pids
LogDir= /OPTIMA_DIR/<application_name>/PMLog
TempDIR=/OPTIMA_DIR/<application_name>/PMTemp

[MAIN]
InterfaceID=001
ProgramID=510
InstanceID=001
PollingTime=5
LogGranularity=3
LogSeverity=1
UseFolderFileLimit=0
FolderFileLimit=10000
StandAlone=0
RunContinuously=0

[OPTIONS]
TimeScale=SEC
GlobalTimeOut=60


[Reports]
NoOfReports=3
Report1=NortelXMLParser
Report2=CellStat
Report3=EricssonParser

[NortelXMLParser]
InterfaceID=000
ProgramID=711
InstanceID=001
EXEname=opxNorXML
Comments=nortel XML parser
Monitor=1
MaximumRunningTime=10
TimeScale=SEC

[CellStat]
InterfaceID=001
ProgramID=110
InstanceID=001
EXEname=CellStat
Comments=CellStat loader
Monitor=1
MaximumRunningTime=20
TimeScale=SEC

[EricssonParser]
InterfaceID=001
ProgramID=712
InstanceID=001
EXEname=EricssonParser
Comments=Ericsson Parser
Monitor=1
MaximumRunningTime=40
TimeScale=SEC

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows
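For example, on a Windows application server the [DIR] section of the same configuration (INI) file might look like the following sketch, where C:\OPTIMA stands in for the OPTIMA home directory:

[DIR]
PIFileDir=C:\OPTIMA\<application_name>\Pids
LogDir=C:\OPTIMA\<application_name>\PMLog
TempDIR=C:\OPTIMA\<application_name>\PMTemp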


9 About the Directory Maintenance Application

During the data extraction and loading process, a large number of directories are used for various
purposes. These directories need maintenance on a regular basis to ensure smooth running and
good performance for the whole system.

The Directory Maintenance application reports on and maintains user-specified directories based
on user-defined maintenance parameters.

The Directory Maintenance application uses a configuration file (INI) to store information about the
maintenance parameters. The configuration file can be edited using a suitable text editor.

This diagram shows the Directory Maintenance process:

Directory Maintenance Process

The Directory Maintenance application supports these common functions:

Function Action

Logging Status and error messages are recorded in a daily log file.
Monitor Files The application runs in a scheduled mode. A monitor (PID) file, created each
time the application is started, ensures that multiple instances of the
application cannot be run. The PID file is also used by the OPTIMA Process
Monitor to ensure that the application is operating normally.
PRID The automatically-assigned PRID uniquely identifies each instance of the
application. It is composed of a 9-character identifier, made up of Interface
ID, Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance
ID are both made up of 3 characters, which can be a combination of numbers
and uppercase letters.
For more information, see About PRIDs on page 29.

For more details on these common functions, see Introduction.


The Directory Maintenance Process


On start up, the Directory Maintenance application loads all the configuration settings from the INI
file into memory. The settings contain information on all the directories to be maintained.

The application polls each configured directory at user-defined polling intervals to check if the files
have met the maintenance criteria, which includes maintenance by age and by file count. If a file
mask is specified in the settings, only those types of files are considered in the maintenance
process. Sub directories are also maintained if that particular option is chosen.

If the selected criterion is age, then files older than the specified age will be deleted or archived, depending on the selected option.

If the selected criterion is file count, the number of files in the particular directory is considered for
maintaining the directory. If the file count is greater than the value specified, the excess files will be
archived or deleted according to the selected option.

The Directory Maintenance application displays the results of maintenance in a maintenance report.

Installing the Directory Maintenance Application


Before you can use the Directory Maintenance application, install the following file in the backend
binary directory:
• opx_MNT_GEN_610.exe (Windows)
• opx_MNT_GEN_610 (Unix)

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

Starting the Directory Maintenance Application


To start the Directory Maintenance application, type in the executable file name and the
configuration (INI) file name into the command prompt:

In Windows:

opx_MNT_GEN_610.exe opx_MNT_GEN_610.ini

In Unix:

opx_MNT_GEN_610 opx_MNT_GEN_610.ini

Note: In usual operation within the data loading architecture, all applications are scheduled. In
usual circumstances you should not need to start the program. For more information, see Starting
and Stopping the Data Loading Process on page 40.
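If you do need to schedule the application yourself on Unix, a typical crontab entry might look like the following sketch; the installation paths are placeholders:

# Run the Directory Maintenance application once an hour, on the hour.
0 * * * * /OPTIMA_DIR/<application_name>/opx_MNT_GEN_610 /OPTIMA_DIR/<application_name>/opx_MNT_GEN_610.ini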


Configuring the Directory Maintenance Application


The Directory Maintenance application is configured using a configuration (INI) file. Configuration
changes are made by editing the parameters in the configuration (INI) file with a suitable text editor.
The Directory Maintenance configuration (INI) file is divided into different sections.

The following table describes the parameters in the [DIR] section:

Parameter Description

RootDir Type the root of the directory tree that the Directory Maintenance application
will report on and maintain.
ReportDir Type the location where Directory Maintenance report will be stored.
LogDir Type the name of the directory in which log files will be created.
TempDir Type the name of the directory in which temporary files will be created. The temporary file is deleted once the directory is maintained.
PIDFileDir Type the name of the directory in which the program monitor file will be
created.
DefaultArchiveRootDir Type the default root of the archive directory tree. Maintained directories will
be backed up here if the archive option is on for these directories.
The Directory Maintenance application uses the tree structure of the
directory maintained. For example, if RootDir=/dev/optima/, the folder
/dev/optima/parser is archived to DefaultArchiveRootDir/optima/parser.
Notes:
• The Directory Maintenance application will not maintain any folder
matching path mask DefaultArchiveRootDir/*
• The program will append a path separator to end of directory path if
missing.
• The folder must be created before the application runs.
• This parameter is required if NumberOfDir is not zero.
TarDirExe The location of the gtar executable that is used to tar the files before they
are moved to the archive directory.
This location should take the format '.../<... path ...>/gtar'.
If this parameter is blank or missing, the files cannot be tarred.
GzipDirExe The location of the gzip executable that is used to gzip the files before they
are moved to the archive directory.
Important: This can only be done after the files are tarred, so if the
TarDirExe parameter is not set, then this will not be done.

The following table describes the parameters in the [MAIN] section:

Parameter Description

InterfaceID The three-digit interface identifier (mandatory).


ProgramID The three-character program identifier (mandatory).
InstanceID The three-character program instance identifier (mandatory).
LogGranularity Defines the frequency of logging, the options are:
0 - Continuous
1 - Monthly
2 - Weekly
3 - Daily (default)


LogLevel or LogSeverity Sets the level of information required in the log file. The available options
are:
1 - Debug
2 - Information (Default)
3 - Warning
4 - Minor
5 - Major
6 - Critical
RunContinuously 0 - Have the Directory Maintenance application run once.
1 - Have the Directory Maintenance application continuously monitor for
input files.
PollingTime (or The pause (in seconds) between executions of the main loop when running
RefreshTime) continuously.
StandAlone 0 - Run the application without a monitor file. Do not select this option if the application is scheduled or the OPTIMA Process Monitor is used.
1 - Run the application with a monitor file.
Iterations This parameter is used when the application does not run in continuous mode, so that it can check for input files in the input folder for the required number of iterations before exiting. Integer values are allowed (1, 2, 3, and so on).
MaxFilesInArchive If the location of the gtar executable has been set, then this should be used
to define the maximum number of files to place inside a tar file.
Important: By default, this is set to 100, but if the backup has already been
tarred using the FTP, then this should be set to 1 here.

The following table describes the parameters in the [OPTIONS] section:

Parameter Description

DefaultFileMask Type the file mask of files to be reported on and maintained. For example, DefaultFileMask=*.csv will report on and maintain all CSV files.
MainThreadSleepMilliSeconds The time in milliseconds the main thread of the application will
sleep for in its main logic loop.
MaxNumberOfThreads The maximum number of threads the application can use while
running.
On UNIX this cannot be greater than 255 threads. On Windows
the maximum is slightly higher.


ThreadScope 0 - PTHREAD_SCOPE_PROCESS. The system scheduling attributes of a thread created with PTHREAD_SCOPE_PROCESS scheduling contention scope are the implementation-defined mapping into system attribute space of the scheduling attributes with which the thread was created. Threads created with PTHREAD_SCOPE_PROCESS scheduling contention scope contend directly with other threads within their process that were created with PTHREAD_SCOPE_PROCESS scheduling contention scope. The contention is resolved based on the threads' scheduling attributes and policies. It is unspecified how such threads are scheduled relative to threads in other processes or threads with PTHREAD_SCOPE_SYSTEM scheduling contention scope.
Note: PTHREAD_SCOPE_PROCESS is less resource intensive.
1 - PTHREAD_SCOPE_SYSTEM. A thread created with PTHREAD_SCOPE_SYSTEM scheduling contention scope contends for resources with all other threads in the same scheduling allocation domain relative to their system scheduling attributes. The system scheduling attributes of a thread created with PTHREAD_SCOPE_SYSTEM scheduling contention scope are the scheduling attributes with which the thread was created.
DefaultMaxThreadRunningSeconds The maximum time in seconds a reported or maintained directory should take to finish. This is the default value for the MaxThreadRunningTimeSeconds parameter in maintained directory sections.
If a directory being reported or maintained takes longer than this value, then the thread processing that directory will be killed.
DoNotProcessPathMasks Type a comma-separated list of path masks you do not want the application to report on or maintain. Any sub directories of the root folder which match any of these path masks will not be reported or maintained.
For example:
RootDir=/dev/optima
DoNotProcessPathMasks=bin/*
The program will ignore the /dev/optima/bin/ directory and all its sub folders.
RootDir=/dev/optima
DoNotProcessPathMasks=bin
The program will ignore just the /dev/optima/bin/ directory.
Notes:
• The program will ignore blank fields in the comma-separated list.
• The program will append a path separator to the start of a field if missing.
• The program will append a path separator to the end of a field if the field does not end with a path separator or *.
DefaultArchive Indicates whether the Directory Maintenance application will
archive any files matching the DefaultFileMask (1) or just create
a report without performing any archiving (0).
DefaultMaxFilesToKeep The maximum number of files to keep in a directory. This is the
default value for the MaxFilesToKeep parameter in maintained
directory sections.
Note: This parameter is required when NumberOfDir is not zero.


DefaultMaxFileAgeToKeep The maximum age of files to keep in a directory. This is the default value for the MaxFileAgeToKeep parameter in maintained directory sections.
Note: This parameter is required when NumberOfDir is not zero.
DefaultMaxFileAgeTimeScale The timescale of the maximum age of files to keep in a directory. This is the default value for the MaxFileAgeTimeScale parameter in maintained directory sections. The available options are:
0 - Seconds
1 - Minutes
2 - Hours
3 - Days
4 - Weeks
Note: This parameter is required when NumberOfDir is not zero.
NumberOfDir The number of directory maintenance sections defined in the INI file.
If this is not zero, then there must be parameters in the format Dir1=SectionName, Dir2=SectionName, and so on.
If zero, then the Directory Maintenance application will only report on all directories found and perform no maintenance.

The following table describes the parameters in each [REPORT] section. There is one of these for
each directory that you want to maintain or monitor using settings different to the defaults defined in
the [MAIN] section:


Item Description

PathMask This parameter specifies the path mask for this maintenance
section.
The program will append a path separator to the start of the
field if missing.
The program will append a path separator to the end of the field
if the field does not end with a path separator or *.
Example 1:
RootDir=/dev/optima
PathMask=parser/abc/*
Any directory matching path mask /dev/optima/parser/abc/* will
be maintained recursively using these settings.
Example 2:
RootDir=/dev/optima
PathMask=parser/abc
Only the directory matching the path mask
/dev/optima/parser/abc/ will be maintained using these settings.
Example 3:
RootDir=/dev/optima
PathMask=/*
Any directory matching path mask /dev/optima/* will be
maintained using these settings. Every directory found will use
these settings if the directory does not match a path mask in
another section.
If a directory matches more than one section path mask then
the least general path mask will be used. For example:
[Section1]
RootDir=/dev/optima
PathMask=/parser/*
[Section2]
RootDir=/dev/optima
PathMask=/parser/tmp/
In this case, directory /dev/optima/parser/tmp/a/ will use Section
2 settings.
ExcludePathMasks Type a comma-separated list of path masks to use to exclude
directories which match the PathMask parameter and also
match ExcludePathMasks.
Notes:
• The program will ignore blank fields in the comma
separated list.
• The program will append a path separator to start of the
field if it is missing.
• The program will append a path separator to end of the
field if the field does not end with a path separator or *.
FileMask Type the file mask for this maintenance section.


MaintenanceType 0 - Report only.
Note: TEOCO recommends using this setting when running the Directory Maintenance application with a newly created configuration (INI) file. This allows you to run the application to check that the correct directories in your directory tree are matching the correct path masks without having any files deleted from the directories.
1 - Maintain directory by maximum number of files
2 - Maintain directory by maximum file age
MaxFileAgeToKeep The maximum age of files to keep in the directory.
MaxFileAgeTimeScale The timescale of the maximum age of files to keep in a
directory. The available options are:
0 - Seconds
1 - Minutes
2 - Hours
3 - Days
4 - Weeks
MaxFilesToKeep The maximum number of files to keep in the directory.
MaxThreadRunningTimeSeconds The maximum time in seconds a reported or maintained
directory should take to finish.
If a directory being reported or maintained takes longer than
this value then the thread processing that directory will be killed.
Archive 1 - Archive the report for which it has been defined.
0 - Do not archive; report only.

Maintenance
In usual operation, the Directory Maintenance application should not need any special
maintenance. During installation the OPTIMA Directory Maintenance application will be configured
to maintain the backup and log directories automatically.

However, TEOCO recommends that the following basic maintenance checks are carried out for the Directory Maintenance application:

Check: The input directory, for a backlog of files meeting the maintenance criteria.
When: Weekly.
Why: Files meeting the maintenance criteria should not be in the input directory. A backlog indicates a problem with the program.

Check: Log messages, for error messages.
When: Weekly.
Why: In particular, any Warning, Minor, Major and Critical messages should be investigated.


Checking a Log File Message


The log file for each application is stored in the directory defined in the configuration (INI) file for
that application.

A new log file is created every day. The information level required in the log file is defined in the
General Settings dialog box and will be one of the following:
• Debug
• Information
• Warning
• Minor
• Major
• Critical

These levels help the user to restrict low level severity logging if required. For example, if Minor is
selected then only Minor, Major and Critical logging will occur.

Note: Utilities are provided for searching and filtering log messages and also for loading the
messages into the database. For more information, see Checking Log Files on page 41.

Stopping the Directory Maintenance Application


If the Directory Maintenance application is scheduled, then it will terminate when all the
maintenance work for directories is finished.

If run continuously, then the Directory Maintenance application will monitor the directories
continuously. In this case, the application can be terminated. For more information, see Starting
and Stopping the Data Loading Process on page 40.

Checking the Version of the Directory Maintenance Application


If you need to contact TEOCO support regarding any problems with the Directory Maintenance
application, you must provide the version details.

You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

In Windows:

opx_MNT_GEN_610.exe -v

In Unix:

opx_MNT_GEN_610 -v

For more information about obtaining version details, see About Versioning on page 33.

Checking that the Application is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.


Troubleshooting
The following troubleshooting tips apply to the Directory Maintenance application:

Symptom: Cannot save configuration (INI) file.
• Possible causes: The user has insufficient privileges on the configuration (INI) file or directory; the file is read-only or is being used by another application.
• Solutions: Enable permissions and make the file writable. Close the application to release the configuration (INI) file.

Symptom: New configuration settings are not being used by the application.
• Possible causes: Settings are not saved to the configuration (INI) file; the file was created in the wrong location; the application has not been restarted to pick up the new settings.
• Solutions: Check the settings and the location of the file. Restart the application.

Symptom: Application not maintaining directories.
• Possible causes: The application has not been scheduled; the crontab entry was removed; the application has crashed and the Process Monitor is not configured; incorrect configuration settings.
• Solutions: Use the Process Monitor to check the last run status. Check the crontab settings. Check the configuration settings. Check the process list and the monitor file: if there is a monitor file and no corresponding process with that PID, then remove the monitor file (Note: the Process Monitor will do this automatically).

Symptom: Application exits immediately.
• Possible causes: Another instance is running; the (INI) file is invalid or corrupt.
• Solutions: Use the Process Monitor to check the instances running. Check that the INI file is configured correctly - recreate it if corrupted.

Directory Maintenance Application Message Log Codes


This section describes the message log codes for the Directory Maintenance application:

Message Code Description Severity

6400 File age <fileAge> seconds for file Max(<MaxFileAgeToKeepSeconds>) File <name>. DEBUG
6478 Delete error: <error> for file <filename>. DEBUG
6576 File Name <filename>. DEBUG
6577 Archive File Name <newFileName>. DEBUG
6578 Archive error: <error> for file <filename>. DEBUG
6579 Archive Success. DEBUG
6580 Archive folder with NO write access: <strNewFolder>. WARNING
6581 Cannot create archive folder path <newFileName>. WARNING
     Cannot create archive folder because of no write permission, path <strNewFolder>. WARNING


Example Directory Maintenance Configuration (INI) File - Report Only

[DIR]
RootDir=/OPTIMA_DIR/<application_name>/
ReportDir=/OPTIMA_DIR/<application_name>/Report
LogDir=/OPTIMA_DIR/<application_name>/Log
TempDir=/OPTIMA_DIR/<application_name>/Temp
PIDFileDir=/OPTIMA_DIR/<application_name>/PID

[MAIN]
InterfaceID=001
ProgramID=610
InstanceID=001
PollingTime=5
StandAlone=1
RunContinuously=0
LogGranularity=3
LogSeverity=2

[OPTIONS]
DefaultFileMask=*
MainThreadSleepMilliSeconds=500
MaxNumberOfThreads=50
ThreadScope=0
DefaultMaxThreadRunningSeconds=300

Example Directory Maintenance Configuration (INI) File - Report and Maintenance

[DIR]
RootDir=/OPTIMA_DIR/<application_name>/root
ReportDir=/OPTIMA_DIR/<application_name>/report
LogDir=/OPTIMA_DIR/<application_name>/log
TempDir=/OPTIMA_DIR/<application_name>/temp
PIDFileDir=/OPTIMA_DIR/<application_name>/pid
DefaultArchiveRootDir=/OPTIMA_DIR/<application_name>/archive
TarDirExe=/OPTIMA_DIR/<application_name>/gtar -rf
GzipDirExe=/OPTIMA_DIR/<application_name>/gzip.exe -9

[MAIN]
InterfaceID=101
ProgramID=610
InstanceID=001
LogGranularity=3
LogSeverity=2
RunContinuously=0
PollingTime=5
StandAlone=0
Verbose=1
Iterations=1


[OPTIONS]
DefaultFileMask=*.csv
MainThreadSleepMilliSeconds=500
MaxNumberOfThreads=10
ThreadScope=0
DefaultMaxThreadRunningSeconds=300
#DoNotProcessPathMasks=/bin,/run,/lib

DefaultArchive=1
DefaultMaxFilesToKeep=2
DefaultMaxFileAgeToKeep=5
DefaultMaxFileAgeTimeScale=0

NumberOfDir=2
Dir1=interface
Dir2=backup

[interface]
#PathMask=*\interface\*
#PathMask=*\root\*
PathMask=\*
MaintenanceType=2
MaxFileAgeToKeep=2
MaxFileAgeTimeScale=0

#MaxFilesToKeep=1000000000000
Archive=1

[backup]
#PathMask=/backup/*
#PathMask=*/backup/*
#PathMask=backup/*
PathMask=*\backup\*
MaintenanceType=2
MaxFileAgeToKeep=2
MaxFileAgeTimeScale=0
#MaxFileAgeTimeScale=3
Archive=1


10 About the OPTIMA Summary Application

The OPTIMA Summary application summarizes data within the OPTIMA database.

The OPTIMA Summary application is a database-based program that runs within the Oracle server.
It uses configuration tables in the database to store information about aggregating data. These can
be modified using the configuration utility.

Note: In this document, each configuration is referred to as a report.

The OPTIMA Summary process can be used for the following:
• Time and Element Aggregation: Aggregates data from a primary table over time and/or element and inserts this data into a secondary table.
• Busy Hour Calculation: Calculates a busy hour from data in a primary table and stores it in a secondary table.
• Busy Hour Summarization: Populates the busy hour summary tables using the data in the busy hour tables and a specified raw table.
• Direct Database Loading: Loads data from any other third-party database directly into OPTIMA over a direct database link.

This picture shows an overview of the OPTIMA Summary:

[Figure: Overview of the OPTIMA Summary - the SUMMARY GUI configures the SUMMARY_REPORTS and SUMMARY_SCHEDULES tables and reads the SUMMARY_LOG table; the OPTIMA_SUMMARY package calls the DIFFERENCE_ENGINE package, reads the source table, and inserts/updates the summary table; both packages log to the SUMMARY_LOG table.]


The OPTIMA_SUMMARY package reads its configuration from the SUMMARY_REPORTS and the
SUMMARY_SCHEDULES tables. It then calls the DIFFERENCE_ENGINE package to compare
the source table with the destination table. The OPTIMA_SUMMARY package then inserts and
updates the summary table with the new data from the DIFFERENCE_ENGINE comparison.

Note: For more information on the DIFFERENCE_ENGINE package, see About the
DIFFERENCE_ENGINE Package on page 282.

The packages log their messages to the SUMMARY_LOG table. The SUMMARY GUI is a
Windows application which is used to configure the SUMMARY_REPORTS and
SUMMARY_SCHEDULES tables and monitor the processing of the OPTIMA_SUMMARY package.

Quick Start
This section is intended to indicate the steps you must take to get the OPTIMA Summary
Application running for demonstration purposes. It covers the essential parameters that must be
configured. Where more parameters exist but are not mentioned, the default settings will suffice.
For more information on the use of all the parameters that determine the behavior of the OPTIMA
Summary Application, see the remainder of this chapter.

Prerequisites
To run the OPTIMA Summary Application you will need to have:
• Created an OPTIMA database
• Run the OPTIMA Backend Installer
• Run Create_Optima_Summary.sql
• Run the OPTIMA Console
• Checked that the LOGS.SUMMARY_LOG table has a partition for the current date
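To check the last prerequisite, you can query the Oracle data dictionary. This is a minimal sketch, and assumes your database user can see the LOGS schema:

-- List the SUMMARY_LOG partitions; one should cover the current date.
SELECT PARTITION_NAME, HIGH_VALUE
FROM ALL_TAB_PARTITIONS
WHERE TABLE_OWNER = 'LOGS'
AND TABLE_NAME = 'SUMMARY_LOG'
ORDER BY PARTITION_POSITION;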

Create Raw and Summary Tables


Create correctly partitioned raw and summary tables. You can do this manually or by using the OPTIMA Installation Tool.

If you use the OPTIMA Installation Tool, you do not need to add grants, as this is done automatically. For more information, see the OPTIMA Installation Tool User Reference Guide.
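For illustration only, a minimal range-partitioned summary table might look like the following sketch. The schema, table and counter column names are hypothetical; in practice the OPTIMA Installation Tool generates the real definitions:

-- Hypothetical daily summary table, range-partitioned on the PK date column.
CREATE TABLE ERICSSON_GSM.CELLSTATS_DY (
  DATETIME       DATE NOT NULL,
  BSC            VARCHAR2(50) NOT NULL,
  CELL           VARCHAR2(50) NOT NULL,
  CALL_ATTEMPTS  NUMBER,
  ENTRIES        NUMBER,  -- summary tables need an ENTRIES column for resummarization
  CONSTRAINT PK_CELLSTATS_DY PRIMARY KEY (DATETIME, BSC, CELL)
)
PARTITION BY RANGE (DATETIME) (
  PARTITION P20140922 VALUES LESS THAN (TO_DATE('2014-09-23','YYYY-MM-DD'))
);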

Add Grants
Make these grants to the AIRCOM user:
• SELECT on the source table
• SELECT, INSERT and UPDATE on the destination table


To add these, in TOAD, sqlplus or a similar editor, type:

GRANT SELECT ON <SCHEMA>.<SRC_TABLE> TO AIRCOM;

GRANT SELECT, INSERT, UPDATE ON <SCHEMA>.<DST_TABLE> TO AIRCOM;

Configure a New Summary Report


To create a new Summary Report and configure its essential parameters:

1. From the OPTIMA Console, click the New Summary Report button. The Select Interface/Machine window appears. This picture shows an example:

2. Click the row representing the interface and machine to be used in the PRID for the new
report, then click OK. The Summary Report window appears with the Report
Configuration tab showing.

3. Ensure that the Report Enabled option at the top left of the dialog box is selected.

4. In the Report Configuration pane, select the Summary Time Aggregation type. If you
select Element Aggregation Only then you must also specify an amount and units
(minutes, hours, days or weeks) for the Summary Table Granularity.

5. In the Source Table(s) pane:


o from the Schema drop-down, select the schema in which the source table exists
o from the Table drop-down, select the source table for the Summary report
o from the Datetime Column, select the Oracle date column that forms part of the
primary key of the source table
o in the Entries Formula box, type the Entries Formula used for the CRC check. For
example, for a simple summary, type COUNT(*)
o in the Aggregated Elements box, type the non-date part of the logical primary key of
the data in the SQL query

6. In the Summary Table pane:


o from the Schema drop-down, select the schema in which the summary table exists
o from the Table drop-down, select the summary table


7. Click the SQL Query tab and type in an SQL query defining what you want your report to
summarize. This should be a SELECT clause which contains the following filter:

WHERE DATETIME BETWEEN :STARTDATE AND :ENDDATE

(assuming DATETIME is the name of the Primary Key date column in the source table). A worked sketch of such a query follows this procedure.

8. Click the Column Mappings tab and map the columns of the SQL query on the left, to the
columns of the summary table on the right, using the Match Highlighted button.
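As referenced in step 7, the following is a sketch of a possible SQL Query for a hypothetical hourly CELLSTATS summary (source primary key DATETIME, BSC, CELL, as in the examples later in this chapter); the counter, schema and table names are assumptions:

-- Aggregate raw CELLSTATS rows to hourly granularity.
SELECT TRUNC(DATETIME, 'HH24') AS DATETIME,
       BSC,
       CELL,
       SUM(CALL_ATTEMPTS) AS CALL_ATTEMPTS,  -- hypothetical counter
       COUNT(*) AS ENTRIES
FROM ERICSSON_GSM.CELLSTATS                  -- hypothetical schema and table
WHERE DATETIME BETWEEN :STARTDATE AND :ENDDATE
GROUP BY TRUNC(DATETIME, 'HH24'), BSC, CELL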

Schedule Reports
To run a schedule:

1. Open the schedule from the report edit page or from the schedule explorer.

2. Check that the schedule configuration is correct.

3. Click the Set to SYSDATE button to allow the schedule to run.

4. Ensure that the summary_log table has a partition for today's date, and that the summary
destination table has a partition for the date of the data it should contain.

5. Set up a DBMS_SCHEDULER job to run:

begin
optima_summary.do_work;
end;

6. Check:
o the SUMMARY_LOG table for any errors
o the summary destination table to see if the data has been summarized

Check the Log Viewer


Check the log messages to ensure that a Summary Report has been produced:

From the Optima Console toolbar, click the Log Viewer button.

The Log Viewer is displayed. You can filter on the report that you have just created, using the
PRID.

About the DIFFERENCE_ENGINE Package


When the DIFFERENCE_ENGINE package compares the data in the source and destination
tables, it stores the data in temporary, non-partitioned DIFFERENCE_OUTPUT tables. The
DIFFERENCE_ENGINE package automatically creates one DIFFERENCE_OUTPUT table for
each concurrent DBMS_SCHEDULER job.

These tables are named using the format 'DIFFERENCE_OUTPUT_*', and are stored in the
dedicated OPS schema.


By default, the DIFFERENCE_OUTPUT tables are created in the CODESD tablespace, but you can choose a different tablespace if required. To do this:

In the AIRCOM.OPTIMA_COMMON table, set the DIFFERENCE_OUTPUT_TABLESPACE parameter value to the required tablespace.

Any new DIFFERENCE_OUTPUT tables will be created in this tablespace. Tables already
created in other tablespaces are not moved.
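For example, assuming the parameter row already exists, pointing new DIFFERENCE_OUTPUT tables at a hypothetical SUMMARY_WORK tablespace might look like this:

UPDATE AIRCOM.OPTIMA_COMMON
SET PVALUE = 'SUMMARY_WORK'  -- hypothetical tablespace name
WHERE PARAMETER = 'DIFFERENCE_OUTPUT_TABLESPACE';

COMMIT;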

When the DBMS_SCHEDULER job runs for the first time, it locates the current JOB_NAME using the current SID, and records the DIFFERENCE_OUTPUT table to use in the SUMMARY_JOBS table. A record of the mapping between the DBMS_SCHEDULER jobs and DIFFERENCE_OUTPUT tables is stored in the OPS.DIFFERENCE_JOBS table.

Each time the job is subsequently run, the same DIFFERENCE_OUTPUT table is used, based on
the OPS.DIFFERENCE_JOBS table.

Supported Summary Types


This table describes all of the (non-managed) summary types that are supported. Summarization itself is Insert Only; the resummarization behavior for each type is shown below:

Summary Type Resummarization

Time Aggregation - Hourly: Insert and Update/Date Insert with Delete
Time Aggregation - Daily: Insert and Update/Date Insert with Delete
Time Aggregation - Weekly: Insert and Update/Date Insert with Delete
Time Aggregation - Monthly: Insert and Update/Date Insert with Delete
Time Aggregation with Time Filter (for example, Working Week): Insert and Update/Date Insert with Delete
Element Aggregation With Time Aggregation: Insert and Update/Date Insert with Delete
Element Aggregation Without Time Aggregation: Insert and Update/Date Insert with Delete
Time/Element with Element Filter: Insert and Update/Date Insert with Delete
Basic Busy Hour: Insert and Update/Date Insert with Delete
Busy Hour with Multi Rank (for example, Top 3 BH): Insert and Update/Date Insert with Delete
Rolling Busy Hour: Insert and Update/Date Insert with Delete
Busy Hour Summary Standard: Date Insert with Delete
Busy Hour Summary Rolling: Date Insert with Delete
RollUp Busy Hour: Date Insert with Delete
Rolling Rollup Busy Hour: Date Insert with Delete
Rollup Rolling Busy Hour Summary: Date Insert with Delete


Installing the OPTIMA Summary


To install the OPTIMA Summary:

1. Double-click AIRCOM OPTIMA Backend.msi.

2. Follow the on-screen instructions to install the products and options that you require,
including:
o Entering your user name and company name
o Choosing the Setup Type you require

Complete: If you choose Complete setup, then the installer will install the following:
o OPTIMA Summary Application
o Mediation Device Binaries

Custom: If you choose Custom setup, then you will have the option to select which
application you would like to install

3. When the InstallShield Wizard Completed dialog box appears, click Finish.

4. In TOAD, sqlplus or a similar editor, log into the database as a SYS user, and grant the
following permission to the AIRCOM user:

grant execute on dbms_lock to aircom

After you have finished installing the OPTIMA Summary application, you will need to connect to the
database to get it running.


Connecting to the OPTIMA Database


When you open the OPTIMA Summary, you must connect to the database.

To connect to a database:

1. From the Start menu, point to Programs, select Aircom International, AIRCOM OPTIMA
Backend 8.0, AIRCOM OPTIMA Summary.

2. In the Database Login dialog box:


o From the Oracle Home drop-down list, select the appropriate Oracle version
o From the Database drop-down list, select the required database
o Type the user name and password

This picture shows an example:

3. Click Connect. If there is any error, you will need to ensure that the major, minor, and
interim version numbers for the tool and packages are the same. For more information, see
About OPTIMA Summary Version on page 285.

4. The OPTIMA Summary Configuration dialog box appears, in which you can configure the
OPTIMA Summary process.

Tip: If the Summary Configuration dialog box is not displayed, from the Tools menu, click Summary.

About OPTIMA Summary Version


To check which release of the OPTIMA Summary packages you are using:

1. In the OPTIMA Summary Configuration dialog box, click the About Summary button.

2. In the dialog box that appears, click the Version Info button.

A basic compatibility check is made to check whether the tool and packages in the relevant
schemas have the same major, minor and interim version numbers.

The modules involved in the compatibility check are:


o Application (Summary)
o All PL/SQL packages relevant to the summary process


The results of the check, including any compatibility errors, are displayed in the Summary
Version Information dialog box:

To check which patch version of the OPTIMA Summary packages you are using:

Run the following command from an SQL window, depending on which package you want to check:
• For the Summary package, run:
SELECT AIRCOM.OPTIMA_SUMMARY.GET_VERSION FROM DUAL;
• For the Difference Engine package, run:
SELECT OPS.DIFFERENCE_ENGINE.GET_VERSION FROM DUAL;

Configuring the OPTIMA Summary


Configuration of the OPTIMA Summary process consists of the following steps:

1. Check the LOGS.SUMMARY_LOG table.

Ensure that the LOGS.SUMMARY_LOG table has a partition for the current date. The
partitions for this table should be created by the OSS Maintenance package.

2. Create Raw and Summary Tables.

Ensure that raw and summary tables are created with correct partitions. You can do this
manually or by using the OPTIMA Installation Tool (OIT).


3. Configure the Report.

After the raw and summary tables have been created, configure the report by setting the
parameters for Source and Summary tables. For more information, see The Report
Configuration Tab.

Important: If you want to use sub-hourly summaries, then you must first configure your
system to allow them. For more information, see Configuring Sub-Hourly Summaries on
page 289.

4. Create Schedule(s).

After configuring a report, schedules are created by default. These schedules decide when
a particular report will run. You can edit these schedules to change the run time
parameters. For more information, see Viewing and Editing Report Schedules on page
314.

As well as these default report schedules, you can also create your own to correspond to
different time zones, for example. For more information, see Adding Report Schedules on
page 311.

5. Enable the Report and Schedule(s).

Ensure that the report and schedules are enabled. A schedule or a report will not run if it is
not enabled.

In addition, ensure that the Next Run Date in the schedule(s) is set to SYSDATE. For more information, see Adding Report Schedules on page 311.

6. Configure DBMS_SCHEDULER to run the Summary package.

An Oracle DBMS_SCHEDULER job decides which schedule to run depending on the priority. If it does not already exist, then create a job that calls the OPTIMA_Summary.DO_WORK() procedure, which in turn runs a query on the SUMMARY_SCHEDULES table to find the most urgent schedule to process.

You can create a job to run the following:

begin
  aircom.optima_summary.do_work('SCHEMA', 'TABLE');
end;

Where 'SCHEMA' and 'TABLE' are optional filters defining the schema and table on which
the job should be run.

Tip: You can specify more than one schema in this filter, by separating each with a comma - for example, 'ERICSSON_UTRAN,NOKIA_GPRS'.

The interval for this job should be set to SYSDATE+(1/24/60). This means that the
DBMS_Scheduler will wait one minute before the next job runs.

It is recommended that you create a number of jobs equal to 4*CPU_Count, where CPU_Count is an ORACLE parameter. The recommended tool to create the jobs is TOAD; a minimal job-creation sketch is shown after the flowchart below.

Tip: You can also configure the OPTIMA Summary to process more than one schedule
each time it runs. For more information, see Processing Multiple Schedules Per Session
on page 288.


7. Check the Log Message.

When a report is run, log messages are generated. You can check the log messages to
make sure that the data has arrived.

This flowchart explains the OPTIMA Summary Process:

Create Raw + Summary Tables with correct partitions

Add/Configure Report

Edit Default Schedules/Create New Schedules (if required)

Ensure that the reports and schedules are enabled

Ensure that the Oracle Job exists to run OPTIMA_SUMMARY.Do_Work

Check log messages to see whether data has arrived

Flowchart of the OPTIMA Summary Process
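As mentioned in step 6 above, the following is a minimal sketch of one such job created with DBMS_SCHEDULER; the job name is hypothetical, and the one-minute repeat interval mirrors the SYSDATE+(1/24/60) recommendation:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'OPTIMA_SUMMARY_JOB_1',  -- hypothetical; create 4*CPU_Count of these
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin aircom.optima_summary.do_work; end;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=1',  -- wait one minute between runs
    enabled         => TRUE);
END;
/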

Processing Multiple Schedules Per Session


When configuring the OPTIMA Summary, you can configure it to process a number of schedules
each time it runs, rather than just processing a single schedule and then closing.

The OPTIMA_COMMON table contains a parameter called "SUMMARY_SCHEDULES_TO_PROCESS".

To set this parameter, insert a new record into the OPTIMA_COMMON table as follows:

INSERT INTO AIRCOM.OPTIMA_COMMON (PARAMETER, PVALUE) VALUES ('SUMMARY_SCHEDULES_TO_PROCESS', n);

COMMIT;

Where n is the number of schedules to run before closing.
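To confirm the value afterwards, you can read it back:

SELECT PVALUE FROM AIRCOM.OPTIMA_COMMON WHERE PARAMETER = 'SUMMARY_SCHEDULES_TO_PROCESS';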


Configuring Sub-Hourly Summaries


If you want to run summaries to a granularity of less than an hour (for example, 15 minutes), then
you must additionally perform the following tasks before configuring your summary report:

1. Install the TRUNC_MINS function for the AIRCOM schema. This should have a public
synonym created for it. This returns the date truncated to a defined number of minutes.

To calculate the truncated date for SYSDATE, run the following:

SELECT TRUNC_MINS(SYSDATE, n) FROM DUAL

where n is the number of minutes for the granularity.

2. Grant EXECUTE privileges for this function to OPTIMA_SUMMARY_USERS, so that it can be used in the SQL that defines the summary report.

To do this, run the following statement:

GRANT EXECUTE ON AIRCOM.TRUNC_MINS TO OPTIMA_SUMMARY_USERS
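For example, with a 15-minute granularity, a time of 10:37 is truncated to 10:30 (assuming TRUNC_MINS truncates down to the nearest multiple of n minutes, as described above):

SELECT TRUNC_MINS(SYSDATE, 15) FROM DUAL;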

About the OPTIMA Summary Dialog Box


This picture shows the OPTIMA Summary dialog box:

OPTIMA Summary dialog box


This table describes the parameters that are displayed:

Parameter Description

PRID The automatically-assigned PRID uniquely identifies each instance of the application. It is composed of a 9-character identifier, made up of Interface ID, Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance ID are both made up of 3 characters, which can be a combination of numbers and uppercase letters.
For more information, see About PRIDs on page 29.
Note: The program ID of the OPTIMA Summary is always 211.
Interface Interface ID for the summary
Enabled Indicates whether the report is enabled
Summary Table Table to store the aggregated data
Source Table Source table for the summary
Secondary Source Table Secondary Source Table for the summary
Time Agg Time truncation for aggregates
Description Description for the summary report

From the OPTIMA Summary toolbar, you can select these options:

Summary Configuration toolbar

This table describes the various toolbar options:

Button Toolbar Option Enables You To

Exit Close the application and exit.

New Summary Report Create a new summary report. For more information,
see Adding a New Summary Report on page 291.
Edit Report Make changes to the configuration of a single report.
For more information, see Editing and Deleting a
Summary Report on page 320.
Edit Multiple Reports Make changes to a number of reports simultaneously.
For more information, see Editing Multiple Reports on
page 320.
Delete Report Delete an existing report. For more information, see
Editing and Deleting a Summary Report on page 320.
Log Viewer View the list of log messages. For more information,
see Viewing Log Messages on page 325.
Schedule Explorer View and edit the list of report schedules. For more
information, see Viewing and Editing Report
Schedules on page 314.
Job Explorer View the list of Oracle DBMS_SCHEDULER jobs. For
more information, see About Oracle
DBMS_SCHEDULER Jobs on page 327.
Note: This tab is for use with Oracle versions prior to
10G. For subsequent versions use Schedule Explorer
which identifies running summaries with "Current
Session ID".
About Summary View version information of the OPTIMA Summary
Process. For more information, see About OPTIMA
Summary Version on page 285.


Adding a New Summary Report


Before adding a new summary report, ensure that the following prerequisites are met:
• You have created the source tables and the destination tables.
• A SELECT grant on the source table to the AIRCOM user, and SELECT, INSERT, UPDATE and DELETE grants on the destination table to the AIRCOM user, exist:

GRANT SELECT ON <SCHEMA>.<SRC_TABLE> TO AIRCOM;

GRANT SELECT, INSERT, UPDATE, DELETE ON <SCHEMA>.<DST_TABLE> TO AIRCOM;

• The following tables must have a valid partition for the current day:
o Summary_Log
o Destination Table

Note: It is important for all summary tables to have an ENTRIES column to enable
resummarization.

To add a new summary report in the OPTIMA Summary Configuration dialog box:

1. Click the New Summary Report button.

-or-

Right-click in the OPTIMA Summary Configuration dialog box and from the menu that
appears, click New Report.

2. In the dialog box that appears, click a particular column to select the machine/interface to
be used in the PRID for the new report.


This picture shows an example:

Note: For more information on PRIDs, see About PRIDs on page 29.

3. Click OK. The Create New Summary Report dialog box appears.

You can configure your report across a number of tabs:

This Tab Enables you to

Report Configuration Configure the report, source table and summary table.

SQL Query Type the SQL Query.


Column Mappings Map the columns in the SQL query to the columns in the summary table.
Schedules Create/Edit schedules.

The Report Configuration Tab


The Report Configuration tab of the Summary Report dialog box enables you to configure:
• General summary report information
• Information on the source table(s) to be summarized
• Information on the summary table that will be used


This picture shows an example of the Report Configuration tab:

The Report Configuration Tab

Configuring the Report


To configure the report:

1. If you want the report to be run by the OPTIMA Summary processing package, select the
Report Enabled option.

2. In the Report Configuration pane, from the Summary Time Aggregation drop-down list,
select the aggregation type. This will be applied to the PK date column in the source table
to aggregate the data to the granularity of the summary table. This means that the
granularity of the summary table will be based on the value in the Summary Time
Aggregation field.

For example, if you select Daily, then the summary table will have data for each day from
the source table.

Important:
o If you select Element Aggregation value, then the granularity of the source table will be
same as the granularity of the summary table. Granularity is required for element
aggregation to ensure that the scheduling is configured correctly.
o If you specify element aggregation without time aggregation, then you cannot use the
Managed Element Insert or Managed Element with Delete load options. For more
information, see Setting Advanced Options on page 296.


o If you select Element Aggregation Only, then you will need to specifically select the
Summary Table Granularity.
o If you want to use the 'Minutes' time aggregation granularity, then you must ensure that
you have defined the TRUNC_MINS function correctly. For more information, see
Configuring Sub-Hourly Summaries on page 289.
o If you select IW-Weekly or D-Weekly you can use the Weekly Offset drop-down list to
specify the start of a non-standard week. For example, for IW-Weekly the default start
of week is Monday so selecting an offset of 2 would result in a week starting on
Wednesday. For D-Weekly, the default start of week is determined by the Oracle NLS
setting and the offset is applied to that default.

3. If you have selected Element Aggregation as the aggregation type, in the Summary Table
Granularity field, define the granularity of the summary table in terms of minutes, hours,
days or weeks.

For example, if the granularity is specified as 1 day, then the summary table will have data
for each day.

4. In the Description text box, you can optionally type a text description for the report for
identification purposes.

Configuring the Source Table


To configure the source table, in the Source Table pane:

1. If the summary has two source tables, select the Enable Second Source Table option.

For example, you may require a join of two raw tables, or a raw table with a busy hour
definition table. The two tables will be joined by the OPTIMA Summary PL/SQL package.

If you select this option, then you will have to set the configuration for two source tables:
o Source 1 - Primary
o Source 2 - Secondary

If you do not select this option, then you will only have to set the configuration for the
primary source table.

Using a second source table enables the OPTIMA Summary to check that data is present
in both tables before summarizing/re-summarizing.

Important:
o You must still specify both tables in the SQL Query with the correct join.
o If you are using the Managed Element Insert or Managed Element with Delete load
options, you cannot specify a second source table.

2. From the Source # drop-down list, select which source table you want to configure - either
primary or (if you have selected the Enable Second Source Table option) secondary.

3. From the Schema drop-down list, select the schema in which the source table exists.

4. From the Table drop-down list, select the source table for the summary report.

5. From the Datetime Column drop-down list, select the Oracle date column that is used as
the primary key of the source table.


Note: If your secondary source is a CFG table with no date field in the primary key, then
this can be left empty.

6. If you are generating summaries across multiple time zones, and want to aggregate the
correct data using timestamp aggregation:
o Select the Enable Timestamp Aggregation option.
o From the Timestamp Column drop-down list, select the source table's timestamp
column name. This will be the column that is read across all of the raw tables in order
to ensure time zone consistency.

Important:
o When configuring the summary table, you should ensure that you choose the correct
time zone option, either Natural or Selected. For a description of these options, see
Configuring the Summary Table below.
o If you select the Enable Timestamp Aggregation option as well as the Enable
Second Source Table option, you must use the 'Override SQL' option in the
Advanced Options to specify the SQL that will be used to join the tables. For more
information, see Setting Advanced Options on page 296.

7. In the Source Join Elements box, use comma-separated values to define which common
elements join the two source tables.

The first column in the primary list should match the first column in the secondary list, and
so on.

For example, BSC in the primary source table may correspond to BSC_NAME in the
secondary source table, CELL may correspond to CELL_NAME and so on.

8. In the Entries Formula box, specify the formula used for the CRC check.

The Entries Formula is used to load the ENTRIES column, whose column name is
specified in the Summary Table Configuration. In normal usage for daily, weekly, and
monthly summaries, you need to specify COUNT(*).

When the table is loaded from multiple count groups, each counter group in the raw table
can be checked by selecting a column from each counter group. For example, NVL(Col
A,0) + NVL(Col B,0) + NVL(Col C,0) where Col A, Col B and Col C are single columns from
each counter group.

If the source table for the summary is a summary table in another report, for example, a
daily summary can be a source for a weekly summary, then the ENTRIES formula should
be SUM (ENTRIES), where ENTRIES is the ENTRIES column in the daily summary report.

Note: If you are defining two source tables, the entries formula will be defined once for
both. You should differentiate any columns that have the same name in the primary source
and secondary source by using the appropriate prefix, either 's1' or 's2' respectively.

9. In the Filter box, type the filter. This filter applies to the source table selected in the
Source# drop-down list. The filter enables you to restrict the number of rows in the source
table to be summarized.

This field can have either a date filter or an element filter or both.

Important: If you are using either of the Managed Element load options, then you do not
need to define a report filter.

An example of the Element filter is BSC = 'BSC1'. In this example, only BSC1 will be
summarized.


An example of the Time filter is a working week. A working week will summarize only the
working days and not Sunday.

To test a working week, you can type the following filter:

TO_CHAR(DATETIME,'D') IN (1,2,3,4,5,6)

where:

DATETIME is the date PK column name and (1,6) means Monday-Saturday, that is,
exclude Sunday. If you want to exclude Saturday as well, you will need (1,5).

Note: If you are defining two source tables, the filter will be defined once for both. You
should differentiate any columns that have the same name in the primary source and
secondary source by using the appropriate prefix, either 's1' or 's2' respectively.

10. In the Aggregated Elements box, specify the remaining non-date part of the logical
primary key of the data in the SQL query:
o If element aggregation is not being used, this is the primary key of the source table
minus the date column.

- or -
o If element aggregation is being used, this is the primary key of the source table query
minus the date column and any columns which are at a lower level to the aggregated
level.

- or -
o If the Managed Element Insert or Managed Element with Delete load option is being used, this is a single column representing the managed element - for example, BSC for a 2g network or RNC for a 3g network. This column must exist in the source and summary tables, and for optimum performance it should be defined as the second column in the primary key (after DATETIME).

For example, to aggregate CELLSTATS with primary key (DATETIME, BSC, CELL) to a
BSC level, you would enter BSC whereas for CELLSTATS hourly summary without
element aggregation, you would enter BSC, CELL.

If you want to use element aggregation by CFG table, these columns do not have to be the
same as the Source Join Elements.

Note: If you are defining two source tables, the aggregated elements will be defined once
for both. You should differentiate any columns that have the same name in the primary
source and secondary source by using the appropriate prefix, either 's1' or 's2'
respectively.

Important: The number of the columns and the data inside the Aggregated Element box for both the source table and summary table must match. The column names can be different, but the rest must be identical, because the source aggregated elements columns will be joined to their equivalent summary aggregated elements columns (first to first, second to second, and so on).


Setting Advanced Options

The Advanced Options dialog box enables you to tune your summary report configuration further,
across a number of tabs.

Advanced Options Tab

This picture shows an example of the Advanced Options tab:

This table describes the options:

Item Description

Log Severity Set the severity of the log message that will be logged by the OPTIMA Summary
package.
The various log severity levels are:
• Debug
• Information
• Warning
• Minor
• Major
• Critical
Difference Engine Hint Use hint for the difference engine.
Tip: It is recommended to use the Hash hint.
Load Option Select one of the load options, which determines how data is handled in the
summary tables.
For more information, see About the Load Options for Summary Reports on page
299.


Override SQL to join Source 1 and Source 2 tables
Select this option if:
• You are configuring a report for CFG Historical tables, and want to join dates within a day of each other
• You have selected to use timestamp aggregation with two source tables
You must define the SQL that will be used to join the tables.
Important: In the SQL, specify any columns that have the same name in both source tables by using 's1' (for source table 1) or 's2' (for source table 2) as appropriate.

Pre-process user SQL Tab:

Before running a particular report, it is possible for the package to run a user defined SQL to tune
the database session before running the main SQL Query in the SQL Query tab.

This picture shows an example of the Pre-process user SQL tab:

To tune the database session:

1. Select the checkbox.

2. Click SQL Example to insert a sample SQL

-or-

In the User-defined SQL text box, enter the SQL that should be executed before the
report is run. You can enter SQL to change a database parameter for the session. This will
only apply to the current report and the session will close after the report has run. Hence,
any changes to session parameters will be reset.

The following gives an example of this:

ALTER SESSION SET db_file_multiblock_read_count=16

It is also possible to call a PL/SQL procedure before the report is run. For example, to call a
SUMMARY_EXAMPLE procedure passing the parameter 100, type the following:

begin

aircom.summary_example(100);

end;


Important: If you change a database parameter for the session and have also chosen to
process multiple schedules per session, then this session parameter change is applied to
all schedules for this session, not just the one for which it is defined.

For more information, see Processing Multiple Schedules Per Session on page 288.

3. Click Validate SQL to check whether the specified SQL is correct.

Tip: If you need to clear the SQL, you can use the Clear SQL button .

Date Insert Query/Insert Query/Update Query Tabs:

These tabs show the query that will be run. This is dependent on the option selected in the Load
Options drop-down list in the Advanced Options tab.

Note: Minor changes are made to the SQL before it is run - for example, the schedule filter
placeholder is replaced with the filter defined for the current schedule.

About the Load Options for Summary Reports

When adding a new summary report, on the Advanced Options tab of the Report Advanced
Options dialog box you can specify the load option. This option is used to determine how data is
handled in the summary tables.

Important: Some load options also require you to define the aggregated elements fields in the
report configuration in a particular way. You must complete this configuration correctly for the
summary report to work as required.

This table describes the load options:

Insert with Delete (Level: Date; Aggregated Element field configuration: not required)
Inserts new summary periods into the Summary table. If re-summary is required, then it deletes the entire period and inserts again.

Insert then Update (Level: Primary Key; Aggregated Element field configuration: all primary key columns except for date, in a comma-separated list)
Inserts data of new periods into the Summary table. If re-summary is required, then it inserts the missing rows and updates incomplete or incorrect rows using the primary key.

Insert Only (Level: Primary Key; Aggregated Element field configuration: all primary key columns except for date, in a comma-separated list)
Inserts data of new periods into the Summary table, with no re-summary.

Update Only (Level: Primary Key; Aggregated Element field configuration: all primary key columns except for date, in a comma-separated list)
Only updates the primary keys that have changed.


Date Insert Only (Level: Date; Aggregated Element field configuration: not required)
Inserts new periods into the Summary table and does not re-insert, update or delete the data after it has been inserted.

Managed Element Insert - recommended for Recent schedules (Level: Managed Element; Aggregated Element field configuration: a single managed element column - for example, RNC)
Inserts data on a per managed element basis into the Summary table. The managed element is the element producing the files which are loading the table - for example, BSC for 2G data or RNC for 3G data. If a managed element has been partially summarized for a period, it will not be re-summarized.
Note: The managed element corresponds to the GPI (Grouping Primary Identifier) column in an OIT interface template.

Managed Element with Delete - recommended for Historic schedules (Level: Managed Element; Aggregated Element field configuration: a single managed element column - for example, RNC)
Inserts data on a per managed element basis into the Summary table. If a managed element has been partially summarized for a period and requires re-summarizing, the data for that managed element for that period will be deleted and re-inserted.

Note: For a more detailed description of the load types and their equivalent database values, see
Tuning the OPTIMA Summary on page 323.

Important:
• The Managed Element Insert load option enables you to summarize managed elements
that exist in different timezones, and therefore avoid the need to create schedules for each
timezone. This is because it excludes the most recent period per managed element,
which means that you must have some data in the time period after the one you want to
summarize. For example, for a daily summary, there must be some data for Tuesday in the
source table in order to summarize Monday, and for a weekly summary, there must be some
data for Week 2 in the source table in order to summarize Week 1.
• You can override the load type at the schedule level; for example, you could set a recent
schedule to process managed element insert whereas the historic schedule could process
managed element with delete. However, you cannot have both a Managed Element-level
and a Primary Key-level schedule for the same summary report.


About Safety Periods


The load option used determines the role of safety periods as follows.

For the Managed Element Insert load option:

To support summarization of data in time zones that are in the future with respect to the database's
SYSDATE:
• If the Summary PRID is based on a daily summary (that is, a weekly or monthly summary)
and the 'DIFF_ENGINE_SAFETYPERIOD_HOURS' parameter has been set in
OPTIMA_COMMON, then this parameter defines the safety period in hours. This allows
weekly/monthly summaries to summarize the previous week/month on the first day of the
next week/month.
• Otherwise, for all standard managed element insert summaries, the difference engine will
exclude the latest period for each managed element, and summarize all remaining periods.
This means that if there is 15-minute data for 15:00, then the 14:00-14:59 hourly summary
period can be summarized, even if the SYSDATE is 11:00.
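For example, a minimal sketch of defining this parameter, assuming that it is stored in the AIRCOM.OPTIMA_COMMON table in the same way as the other OPTIMA_COMMON parameters shown later in this guide (the 6-hour value is purely illustrative):

INSERT INTO AIRCOM.OPTIMA_COMMON (PARAMETER, PVALUE)
VALUES ('DIFF_ENGINE_SAFETYPERIOD_HOURS', 6);

COMMIT;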

Configuring the Summary Table


To configure the summary table:

1. From the Schema drop-down list, select the schema in which the summary table exists.

2. From the Table drop-down list, select the summary table. When you select a summary
table, the Datetime Column, Aggregated Elements, and the Entries Formula fields
acquire default values:

Datetime Column
The Oracle Date column in the Summary Table that is used in the primary key.

Aggregated Elements
The Summary Table's primary key minus the date column. The Aggregated Elements column names should be specified as a comma-separated list.
Important: The number of columns and the data inside the Aggregated Element box for both the source table and summary table must match. The column names must be different, but the rest must be identical, because the source aggregated elements columns will be joined to their equivalent summary aggregated elements columns (first to first, second to second, and so on).

Entries Column
The column used to store the Entries Formula value.

3. If you have chosen to enable timestamp aggregation, select the required time zone option.
This table describes these options:

Natural Timezone
If you select this option (the default), the time zone value will be ignored - in other words, 10:00 in time zone 1 will be the same as 10:00 in time zone 2, time zone 3 and so on. The timestamp is returned as a date with the time zone information ignored, and the data is then aggregated to daily, weekly, monthly and so on based on this.

Selected Timezone
If you select this option (and then select a time zone from the drop-down list), the time zone value will be used to aggregate the data at the correct time across multiple time zones.
For example, you have three time zones: West (-1 hour), Central (the meridian) and East (+1 hour). If you choose Central as the Selected Timezone, then a summary report configured to summarize the data across all three time zones at 10:00 Central time will aggregate the 09:00 data from West, the 10:00 data from Central and the 11:00 data from East.

This picture shows an example of the Summary Table Configuration:

An example of Summary Table Configuration

The SQL Query Tab


The SQL Query tab enables you to write a SQL query to define what you want to summarize.

This picture shows an example:

An example of the SQL Query tab of the Summary Report dialog box

Note: This example SQL query does not use time zones. To view a query that includes time zones,
see Example SQL Query Using Time Zones on page 304.

On the Report Configuration tab, you can choose a filter to apply to the source table selected in
the Source# drop-down list. These filters enable you to restrict the number of rows in the source
table to be summarized.


To use these filters, click the appropriate filter button. For example, click Source1 Filter to retrieve
the value for the Element filter that has been set for Source 1.

Tip: To retrieve the Date filter, click Source1 Date Filter.

When you click any one of these buttons, placeholders are inserted and values are picked based
on the ones that you have specified in the Report Configuration tab.

Note: It is important to type AND or WHERE in the SQL statement where applicable, as per SQL
rules.

To write the SQL:

1. Click the SQL Query tab.

2. Type the SQL to select the data to summarize from the Source Table.

You need to keep in mind the following rules for the Select query:
o It must include either the clause WHERE %DATE1 or WHERE DATETIME BETWEEN
:STARTDATE AND :ENDDATE.

Click Source1 Date Filter to insert the %DATE1 placeholder.

If you have a second source table, click Source2 Date Filter to insert the %DATE2
placeholder.
o If a date truncation is applied, the query must alias the truncated date column with the
column's own name - for example, SELECT TRUNC(DATETIME,'DD') DATETIME.

Notes:

If the date column is to have a different name in the summary table, it must still have
the alias for the date column name in the source table (it will still be mapped by
position to the correct column, as defined on the Column Mappings tab).

If you are using sub-hourly summaries, then you should use the TRUNC_MINS function
instead of TRUNC.
o An alias should not be applied to the columns in the aggregated elements PK list. Only
the date column should be aliased in the primary key.
o An alias should be given to the remaining columns (counters).
o The query should not use the :GROUPELEMENT bind variable. The only bind variables in
the query should be :STARTDATE and :ENDDATE.

The following is a sample SQL query:

SELECT TRUNC(DATETIME,'DD') DATETIME, BSC, CELL,
       SUM(COL1) COL1,
       SUM(COL2) COL2,
       SUM(COL3) COL3,
       SUM(COL4) COL4,
       SUM(COL5) COL5,
       COUNT(1) ENTRIES
FROM SUMTEST.CELLSTATS
WHERE DATETIME BETWEEN :STARTDATE AND :ENDDATE
GROUP BY TRUNC(DATETIME,'DD'), BSC, CELL

3. Click Validate to validate the SQL query.

Tip: You can click Clear to delete the SQL query.

Example SQL Query Using Time Zones


Here is an example SQL for an hourly summary, where the source timestamp column is converted
to be at timezone 'US/Mountain'.

SELECT FROM_TZ(CAST(TRUNC(TIME_STAMP AT TIME ZONE 'US/Mountain','HH24') AS TIMESTAMP), 'US/Mountain') TIME_STAMP,
       TRUNC(TIME_STAMP AT TIME ZONE 'US/Mountain','HH24') DATETIME,
       BSC, CELL,
       SUM(COL1) COL1,
       SUM(COL2) COL2,
       SUM(COL3) COL3,
       COUNT(*) ENTRIES
FROM ERICSSON2G.SUM_CELLSTATS_TZ
WHERE DATETIME BETWEEN :STARTDATE-1 AND :ENDDATE+1
  AND TRUNC(TIME_STAMP AT TIME ZONE 'US/Mountain','HH24') BETWEEN :STARTDATE AND :ENDDATE
GROUP BY TRUNC(TIME_STAMP AT TIME ZONE 'US/Mountain','HH24'),
         TRUNC(TIME_STAMP AT TIME ZONE 'US/Mountain','HH24'),
         BSC, CELL

This picture shows this SQL query on the SQL Query tab:

SQL Query including time zones


The Column Mappings Tab


The Column Mappings tab enables you to map the columns in the SQL query to the columns in
the summary table.

This picture shows an example:

An example of the Column Mappings Tab of the Summary Report dialog box

The Column Mappings tab is divided into two panes:


• Left-hand pane: Indicates the following details of the SQL query:

Query Alias Name - Name of the column in the SQL query
Query Alias Data Type - Data type of the column in the SQL query
Query Alias Position - Position of the column in the SQL query

• Right-hand pane: Indicates the following details of the summary table:

Query Alias Name - Name of the column in the SQL query.
Query Alias Data Type - Data type of the column in the SQL query.
Query Alias Position - Position of the column in the SQL query.
Note: These three fields are empty at first. They are populated later with the SQL query column that matches the corresponding summary table column.

Summary Col Name - Name of the column in the summary table.
Summary Col Data Type - Data type of the column in the summary table.
Summary Col Position - Position of the column in the summary table.
Load Column - Indicates whether a particular column of the SQL query is mapped to the corresponding column of the summary table. It takes the value 1 to indicate that the column is mapped; otherwise its value is 0.
Is PK - Indicates whether the column is a primary key of the summary table.

To map the columns of the SQL query and summary table:

1. Click the Column Mappings tab. You will see that the left hand side pane has the values
for the SQL query columns and the right hand side has the values for the summary table
columns. The first three columns will be empty.

2. Select a particular row on the right hand side and click Match Highlighted->. The system
will map a column from the left hand side to the selected column on the right hand side,
delete the column from the left hand side and populate the row on the right hand side. After
you click Match Highlighted, the value of Load Column changes to 1 as the column has
been mapped.

Note: The color of the selected row on the right hand side indicates the following:
o No Color: The SQL query column is correctly mapped to the summary table column.
o Green: There is no mapping between the SQL query column and the summary table
column.
o Yellow: The SQL query column is mapped to the summary table column but the
Name/Data Type/Position is different between the SQL query column and summary
table column. However, as the column is not a primary key of the summary table, the
summary will work despite the mismatch.
o Red: This means that the Summary application will not work and it will result in an error
due to any one of the following reasons:

First, an SQL query column is mapped to a summary table column, but the Query
Alias Name of the SQL query column is different from the Query Alias Name of the
summary table column, and this column forms part of the primary key of the summary
table. To rectify this error, ensure that the query alias name of the summary table
column is the same as the query alias name of the SQL query column for the primary key.

Second, the Query Alias Name is not displayed in the 'Datetime Column' or
'Aggregated Elements' fields in the source table configuration in the Report
Configuration tab and the field forms part of the Primary Key of the summary table. To
rectify this error, ensure that the Query Alias Name is displayed in the 'Datetime
Column' or 'Aggregated Elements' fields.

3. Click Clear and Load Summary Columns to remove all the mappings, reload the column
list from the summary table and repopulate the query columns in the left hand grid.

- or -

Select a particular row and click Remove Current Match to remove the mapping for that
row.

Tips:
o Click Sync By Name to map the columns with the same name.
o Click Sync By Position to map the columns with the same position.


The Schedules Tab


The Schedules tab for a report lists the schedules that specify when the report will be run.

When you click the Schedules tab, two default schedules for that report are automatically created.
These are:
• Recent Schedule:

A recent report checks the recent periods. The date period for a recent schedule is
between SYSDATE-RECENT and SYSDATE.

The following table lists the recent periods and next schedule date formulas for various
granularity levels:

Element Agg Only (date truncation: none)
Period to look back every time: 3 days to 0 days (TRUNC(SYSDATE - 3) to SYSDATE)
Next schedule date formula: TRUNC(SYSDATE+1)+(3/24)

MI (date truncation: MI)
Period to look back every time: 4 hours to 0 hours (SYSDATE - (4/24) to SYSDATE)
Next schedule date formula: SYSDATE+(30/24/60)

HR (date truncation: HH24)
Period to look back every time: 4 hours to 0 hours (SYSDATE - (4/24) to SYSDATE)
Next schedule date formula: SYSDATE+(30/24/60)

DY (date truncation: DD)
Period to look back every time: 1 day to 0 days (TRUNC(SYSDATE - 1) to TRUNC(SYSDATE))
Next schedule date formula: TRUNC(SYSDATE+1)+(2/24)

WK (date truncation: IW or D)
Period to look back every time: 7 days to 0 days (TRUNC(SYSDATE - 7) to SYSDATE); will be truncated to the start of the week
Next schedule date formula: CASE WHEN TRUNC(SYSDATE,'IW')=TRUNC(SYSDATE) THEN TRUNC(SYSDATE+1)+(4/24) ELSE TRUNC(SYSDATE+7,'IW')+(4/24) END

MO (date truncation: MM)
Period to look back every time: 28 days to 0 days (TRUNC(SYSDATE - 28) to SYSDATE); will be truncated to the start of the month
Next schedule date formula: CASE WHEN TRUNC(SYSDATE,'MM')=TRUNC(SYSDATE) THEN TRUNC(SYSDATE+1)+(5/24) ELSE TRUNC(SYSDATE+31,'MM')+(5/24) END

Important: Weekly and monthly recent summaries will run on the first two days of the
week/month, at the same time of day. The summaries work as follows:
o If the summary is running on the first day of the week/month, then it will only
summarize the previous week/month if the last day in the previous week/month is
present for each managed element
o If the summary is running on the second day of the week/month, then it will always
summarize the previous week/month if it has not already been summarized on the
previous day for each managed element


• Historic Schedule:

When it first runs, a historic report will process a set of past data equal to the default period
defined in the report, to ensure that the table has a set of historic data to start with. If you
want more historic data than this, then you must temporarily change the start date. The
date period for a historic schedule is between SYSDATE-HISTORIC and SYSDATE-
RECENT.

The following table lists the historic periods for various granularity levels:

Element Agg Only (date truncation: none)
Period to look back every time: 15 days to 3 days (TRUNC(SYSDATE - 15) to TRUNC(SYSDATE - 3))
Next schedule date formula: TRUNC(SYSDATE+1)+(19/24)

MI (date truncation: MI)
Period to look back every time: 2 days to 0.5 days (TRUNC(SYSDATE - 2) to TRUNC(SYSDATE - 0.5))
Next schedule date formula: TRUNC(SYSDATE+1)+(22/24)

HR (date truncation: HH24)
Period to look back every time: 2 days to 0.5 days (TRUNC(SYSDATE - 2) to TRUNC(SYSDATE - 0.5))
Next schedule date formula: TRUNC(SYSDATE+1)+(22/24)

DY (date truncation: DD)
Period to look back every time: 3 days to 1 day (TRUNC(SYSDATE - 3) to TRUNC(SYSDATE - 1))
Next schedule date formula: TRUNC(SYSDATE+1)+(4/24)

WK (date truncation: IW or D)
Period to look back every time: 14 days to 7 days (TRUNC(SYSDATE - 14) to TRUNC(SYSDATE - 7)); will be truncated to the start of the week
Next schedule date formula: TRUNC(SYSDATE,'IW')+8+(21/24)

MO (date truncation: MM)
Period to look back every time: 56 days to 28 days (TRUNC(SYSDATE - 56) to TRUNC(SYSDATE - 28)); will be truncated to the start of the month
Next schedule date formula: TRUNC(SYSDATE+31,'MM')+1+(20/24)

For more information on how a period of a schedule is processed, see Processing of a Period by a Schedule on page 316.


This picture shows an example of the Schedules Tab:

Note: The recommended load options for schedules using standard time-based aggregations (HR,
DY, WK or MO summaries) are:
• Recent Schedule - Managed Element Insert
• Historic Schedule - Managed Element with Delete

These are the defaults created by the OPTIMA Installation Tool.

To configure the default schedules for the report:

1. Click the Schedules tab.

2. Click Yes in the confirmation dialog box that appears.

The schedule for a report consists of the following parameters:

Option Description

Schedule_ID The identifier of the schedule.

Report_Type Indicates whether a report has a recent or historic schedule.


Dependencies Indicates whether this schedule is dependent on a parent
schedule. Being dependent on a parent schedule means that a
child schedule can run only after it has received data from the
parent schedule.
For example, a daily schedule will be dependent on an hourly
schedule as it will be executed only after data for all the hours in a
day has come in after executing an hourly schedule.
When the parent schedule has processed, and if the current
schedule should be run, then the NEXT_RUN_DATE of the current
schedule is set to SYSDATE where SYSDATE is the current date.
For more information on how dependencies are executed, see
About Dependencies on page 318.
Parent_Schedule_Report This is used in case there are dependencies. It is the schedule
which if updated, will require the current schedule to run. A historic
schedule is normally the parent of another historic schedule while a
recent schedule is normally the parent of another recent schedule.


Parent_Schedule_ID Indicates the ID of the schedule which if updated will require the
current schedule to be run.
Note: A historic schedule will normally be the parent schedule of
another historic schedule and a recent schedule will be the parent
schedule of another recent schedule.
Start_Period The start date of the period. This is relative to the current date, also
known as the SYSDATE and is shown as (as SYSDATE - x) where
x is in days.
For example, if it is specified as (SYSDATE-1), it means that the
start period for the data will be from yesterday as SYSDATE is the
current date.
End_Period The end date of the period. This is also relative to the current date
and is specified as (SYSDATE - x) where x is in days.
Note: A second is always subtracted from the end date. So, in
order to process a daily summary of the 24th, you should pass start
date as 24/11/2008 00:00:00 and end date as 25/11/2008 00:00:00
which is then converted to process the 24th.
For example, if the end period is specified as (SYSDATE-0), then
only the data till today will be picked up. Hence, if start period is
(SYSDATE-1) and end period is (SYSDATE-0), then the data will
be processed only for yesterday.
Priority The priority of the schedule. The most urgent schedule should be
given a 1, while a lower priority schedule should be given a higher
number. The Oracle job will process a higher priority schedule
before the lower priority schedule.
Note: More than one schedule can have the same priority number.
Enabled Indicates whether the schedule will run.
Current_Process_Start_Date Indicates the date and time when the current schedule started
processing.
Next_Run_Date Indicates the date when the schedule is next scheduled to run.
Last_Run_Date Indicates the date when the schedule was last run.
Next_Schedule_Date_Formula An ORACLE formula that is used to calculate when the schedule
should next be run after it has finished processing.
For example, a value of SYSDATE + (15/24/60) will mean the
schedule will run 15 minutes after it has completed processing and
a value of TRUNC (SYSDATE+1) +(20/24) will run the next day at
8pm.

Note: Schedules are also run according to the Run Order. The run order lists the order in
which the schedules will be run, and is determined by an algorithm that takes into account
how long the schedule has been waiting to run. This algorithm increases the priority of a
schedule for each hour that the schedule is delayed from running (by subtracting one from
the priority value), meaning that it will be higher in the run order. This means that a lower
priority schedule that has been waiting a longer time than a higher priority schedule could
be run first.

For example, consider two schedules A and B; Schedule A has a priority of 3 while
Schedule B has a priority of 5. Schedule A has been delayed from running by 1 hour
meaning that its run order is 2 (Priority minus 1 [hour]). However, Schedule B has been
delayed from running by 4 hours, so its run order is 1 (Priority minus 4 [hours]). Therefore
Schedule B will be run first.

3. Click the Close button .

4. In the Close Report dialog box that appears, click Yes to save the report.


Tip: You also have the option to edit these schedules. For more information, see Viewing
and Editing Report Schedules on page 314.

As well as the two default schedules, you can create your own additional recent report schedules.
This is particularly useful if your network spans multiple timezones, because it means that you can
create separate schedules for each timezone. For more information, see Adding Report Schedules
on page 311.

Important: If you use the Managed Element Insert load option, then you do not have to create separate
schedules per timezone. For more information, see About the Load Options for Summary Reports
on page 299.

Adding Report Schedules


As well as the two default schedules (see The Schedules Tab on page 307), on the Schedules tab
you can create your own additional report schedules, both historic and recent.

This is particularly useful if your network spans multiple timezones, because it means that you can
create separate recent schedules for each timezone. Different timezones need to be processed at
different times, based on when the data has loaded into the raw table - for example, based on your
own location, each other time zone used will have a different 00:00, which could be before or after
your own, and therefore the point at which a day's worth of data is collected will be different as well.
Different schedules will be needed to compensate for this.

Important: If you use the Managed Element Insert load option, then you do not have to create separate
schedules per timezone. For more information, see About the Load Options for Summary Reports
on page 299.

To create a new report schedule:

1. On the Schedules tab, click the Add Recent Schedule button or Add Historic

Schedule button as appropriate.

- or -

Right-click in the Schedules pane, and from the menu that appears, click either Add
Recent Schedule or Add Historic Schedule as appropriate.


The Summary Schedule dialog box appears. This picture shows an example recent
schedule:

2. To enable the schedule to run, select the Schedule Enabled option.

3. Define the details for the new schedule, as described in the following table:

In This Pane Do This

Schedule Period Change the Start Date and the End Date of the schedule, calculated
based on the SYSDATE. To allow for time zone differences, you can
specify start points before or after the SYSDATE in terms of days and
hours.
If you want the start date and the end date to be calculated by truncating
to the midnight of the day that has been selected, ensure that the
Truncate to Day checkbox is enabled.
Schedule Configuration Set the priority of the schedule. A lower number indicates higher priority.
From the Next Schedule Date Formula drop-down list, select the Oracle
formula that will be used to calculate when the schedule should next be
run after it has finished processing.
Tip: You can include a CASE statement in the Next Schedule Date
Formula, to enable more advanced scheduling.


Schedule Dependencies Select the Dependencies checkbox if you want the schedule to be
dependent on a parent schedule. This means that after the parent
schedule has run, and if it is determined that the current schedule can be
run, then the Next_RUN_DATE of the current schedule is set to the
current date.
From the Parent Schedule ID drop-down list, select the schedule ID for
this schedule.
Notes:
• This option is ignored if either of the Managed Element load options is
being used.
• You can select a summary report which populates either the source1
table or the source2 table in the Report Configuration tab. If the
source tables are not populated by the summary, then dependencies
cannot be used for the current schedule. The Parent Schedule ID is
automatically set by the Summary GUI. If the current schedule is a
recent schedule, then the recent schedule for the parent report is
selected as the Parent Schedule ID. If the current schedule is a
historic schedule, then the historic schedule for the parent report is
selected as the Parent Schedule ID.
Load Options Select the Override Report Load Option if you want this particular report
schedule to use a different load option to the one defined for the report.
Select the required load option for the schedule from the drop-down list.
This means that you can have different load options for Recent and
Historic schedules - for example, you may just want to use Insert Only for
the Recent schedule to load data only, but then use Insert with Delete for
the Historic schedule to 'clean up' the summary tables.
For more information on the load options, see Setting Advanced Options
on page 296.
Important: You cannot have schedules for the same PRID using a
combination of primary key load options (Insert Only, Insert then Update,
Update Only) and managed element load options (Managed Element
Insert, Managed Element with Delete). For example, you cannot have a
Recent schedule that uses Managed Element Insert and a Historic
schedule that uses Insert then Update.
This is because the Aggregated Elements field on the Report
Configuration tab is defined differently for each of these load options. For
more information, see The Report Configuration Tab.
This does not apply to Insert with Delete and Date Insert Only, because
these do not use the difference engine.
Note: The recommended load options for schedules using standard time-
based aggregations (HR, DY, WK or MO summaries) are:
• Recent Schedule - Managed Element Insert
• Historic Schedule - Managed Element with Delete
These are the defaults created by the OPTIMA Installation Tool.
Schedule Filter SQL Select the Schedule Filter option, and then define the SQL query that you
want to use to filter the data.
For example, if you are creating separate schedules for each timezone
within your network, you should use this to filter on timezone, in order to
filter out data that has not been completely loaded yet because it is in a
different timezone that begins loading later than the other timezones.
Note: This filter is used in addition to the report-level filter. Both of these
filters are optional and are not required for any of the load options.


Schedule Information Click the Set to SYSDATE button to set the Next_RUN_DATE to the
current date. In this case, the schedule will run immediately.
Important: This needs to be done when the schedule is first created,
otherwise it will never run. The next run time will then be calculated based
on the Next Schedule Date Formula.
Click the Reset Currently Processing button to reset the data.
You should click this button if the DBMS_SCHEDULER job session
crashes while this schedule is running, in order to run the schedule again.
Important: You should first investigate the cause of the crash before
resetting the data.

4. Click Save.

Viewing and Editing Report Schedules


To view report schedules that have been created:

1. In the OPTIMA Summary Configuration dialog box, click the Schedule Explorer button
.

The Schedule Explorer appears, displaying the report schedules for all the reports.

This picture shows an example of the Schedule Explorer:

This dialog box explains all the schedule parameters. For more information on schedule
parameters, see The Schedules Tab on page 307.

You can edit individual report schedules, or a group of schedules simultaneously.


To edit a single report schedule:

1. Double-click a schedule that you want to edit.

- or -

While creating a report, in the Schedules tab of the OPTIMA Summary Configuration
dialog box, double-click a schedule.

- or -

Select the required schedule, and then click the Edit Single Schedule button .

The Summary Schedule dialog box appears.

2. Edit the schedule details as required. For more information on these, see Adding Report
Schedules on page 311.

3. Click Save.

Tip: For information on how to edit several schedules at once, see Editing Multiple Report
Schedules on page 315.

Editing Multiple Report Schedules

As well as editing individual report schedules, you can also edit multiple report schedules
simultaneously.

To do this:

1. Select the report schedules that you want to edit.

2. Click the Edit Multiple Schedules button .

The Edit Multiple Schedules dialog box appears:


3. Change the details of the schedules as required. The parameters are the same as those for
individual schedules, although a few have slightly different names; for example, the 'Set all
Selected Schedule's Next Run Dates to SYSDATE' option is a checkbox rather than a
'Set To SYSDATE' button.

4. When you have made the required changes, click Save.

Processing of a Period by a Schedule


This picture shows an example of the periods that a schedule processes. It compares the recent schedule period, the historic schedule period in normal operation, and the historic schedule period on first execution against the raw table data over time, relative to 'now':

An example of how a period of a schedule is processed

Processing of a period of a schedule consists of the following steps:

1. When a summary table is configured by the OPTIMA Summary, a report is created which
will have a PRID. The OPTIMA Summary Process will then generate the following two
schedules for the report:
o Recent Schedule
o Historic Schedule

Note: For more information on PRIDs, see About PRIDs on page 29.

2. The recent schedule will summarize and resummarize the recent data in the raw table, for
example, from SYSDATE-3 to SYSDATE.

3. The historic schedule will run less often and will resummarize any late data that has loaded
into the raw table. The historic schedule will therefore process an older period, for example,
from SYSDATE-15 to SYSDATE-3.

The end period of the historic schedule should match the start period of the recent
schedule; the summary will subtract one second from the end period so that the data queried
will not overlap. When the historic schedule executes for the first time, it will process a
much longer period, to allow all the data in the raw table to be summarized.


Creating Regional Schedules for the OPTIMA Summary


In the OPTIMA Summary, you can create regional schedules - that is, schedules specific to a
particular time zone.

You can do this for all reports within a particular schema, to ensure that data from each time zone
is processed by a separate schedule.

To do this:

1. Ensure that:
o You have uploaded and successfully activated the required interface with summaries
using the OPTIMA Installation Tool.
o You have installed the Summary table SUMMARY_GLOBAL_FILTERS, using the
SUMMARY_GLOBAL_FILTERS.SQL file.
o The SUMMARY_LOG table is partitioned, and has an INSERT grant assigned to you
o The SUMMARY_REPORTS table has INSERT and UPDATE grants assigned to you

2. In the SUMMARY_GLOBAL_FILTERS table of your database, add a new record for each
time zone/schedule required.

3. For each record:


o Set the IS_PARENT_SCHEDULE column to be 0 (NULL), except for the last time
zone/schedule that will be run, which should be set to 1.
o In the GLOBAL_SCHEDULE_FILTER column, specify the schedule filter which will be
used in all reports to filter on the time zone. For example, for three time zones offset by
-2, 0 and +2 hours from the database time zone:

EXTRACT(timezone_hour FROM datetimezone)-EXTRACT(timezone_hour FROM systimestamp) = -2

EXTRACT(timezone_hour FROM datetimezone)-EXTRACT(timezone_hour FROM systimestamp) = 0

EXTRACT(timezone_hour FROM datetimezone)-EXTRACT(timezone_hour FROM systimestamp) = 2
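As a hedged sketch, a record for the -2 hour time zone could be added as follows, assuming that the SUMMARY_GLOBAL_FILTERS table is installed in the AIRCOM schema and has no other mandatory columns:

INSERT INTO AIRCOM.SUMMARY_GLOBAL_FILTERS
  (IS_PARENT_SCHEDULE, GLOBAL_SCHEDULE_FILTER)
VALUES
  (0, 'EXTRACT(timezone_hour FROM datetimezone)-EXTRACT(timezone_hour FROM systimestamp) = -2');

COMMIT;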

4. In the OPTIMA Summary Configuration dialog box, click the Schedule Explorer button
.

5. In the Schedule Explorer, click the Create Global Schedules button .


The Global Schedule Creation dialog box appears:

6. From the Schema drop-down list, select the schema that you have activated, and then click
the Create Schedules button.

Note: You should check the SUMMARY_LOG table for any errors.

In the Schedule Explorer, you should now see the following:


o For all hourly and daily reports, a new recent schedule is added for each record in the
SUMMARY_GLOBAL_FILTERS table.

This replaces the previous recent schedule, and differs in two ways - the dependencies
will be switched off and the SCHEDULE_FILTER_SQL will be populated from the
values in the SUMMARY_GLOBAL_FILTERS table.
o The dependencies for the weekly and monthly schedules are set to the daily schedule
that has the SCHEDULE_FILTER_SQL in which the IS_PARENT_SCHEDULE value
in the SUMMARY_GLOBAL_FILTERS table is set to 1. This means that the weekly
and monthly schedules will be dependent on the last daily schedule to run.
o For all hourly and daily reports, the SQL queries (Select, Date Insert, Pk Insert and Pk
Update) are updated to use the %SFILTER schedule filter.

About Dependencies
A dependency is a situation when one schedule depends on data to arrive from another schedule
before it executes. Hence, dependencies have parent and child schedules. The schedule that is
dependent is the child schedule while the one for which the child schedule waits is the parent
schedule. A child schedule will execute only after the parent schedule has executed and data from
the parent schedule has arrived.

For example, a daily schedule will be dependent on an hourly schedule as it will be executed only
after data for all the hours in a day has come in after executing an hourly schedule. In this case, the
daily schedule is the child schedule and the hourly schedule is the parent schedule.

There are two types of dependency that exist:


• Dependencies for Historic Schedules
• Dependencies for Recent Schedules


The two types are processed differently. The Schedule_Type column determines the type of
dependency:

Schedule Type                   Identifier
Recent with dependencies        1
Historic with dependencies      2
Recent no dependencies          3
Historic no dependencies        4

If the schedule type is 3 or 4, it means that such schedules do not have any dependent schedules.

Note: Dependencies are not used for schedules that use the Managed Element Insert or
Managed Element with Delete load options.

Dependencies for Recent Schedules

If the schedule type is 1, then any child schedules will be scheduled to process immediately if the
summary process has processed the last period in the parent schedule.

For example, if the 11pm data is processed, then a daily (child) recent schedule is set to run
immediately. Similarly, if a running daily summary schedule has processed the last day of the
month, then the monthly schedule will be set to run.

To improve efficiency, before a schedule processes a set of periods, it will generate a list of child
schedule IDs together with the date and time that must be processed to cause the child schedule to
be set. Every time it has completed processing a period, it will check the date and time of the period
with the list and if it finds any matches it will set the NEXT_RUN_DATE of the child schedule to
SYSDATE.

Dependencies for Historic Schedules

If the schedule type is 2, and any data - even a single row - changes in the parent schedule, then the child
historic schedule is set to run immediately. There is no check on the date period being processed, and the
assumption is that the processing period of the child schedule is the same as that of the parent schedule.

The dependencies check is therefore done when a schedule has finished processing a period. If
any rows have been updated or inserted then all child schedules with PARENT_SCHEDULE_ID
equal to the current SCHEDULE_ID are set to run immediately (NEXT_RUN_DATE is set to
SYSDATE).
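As an illustrative sketch (the schema is an assumption and the real logic lives inside the summary package), the dependency check behaves like the following update, run when rows have changed in the parent schedule's period:

UPDATE AIRCOM.SUMMARY_SCHEDULES
SET NEXT_RUN_DATE = SYSDATE
WHERE PARENT_SCHEDULE_ID = :current_schedule_id;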


Editing and Deleting a Summary Report


To edit a summary report in the OPTIMA Summary Configuration dialog box:

1. Select the report that you want to edit and click the Edit Report button .

-or-

Right-click and from the menu that appears, click Edit Report.

2. In the dialog box that appears, make changes to the report.

3. Click the Close button .

4. In the dialog box that appears, click Yes to save your changes.

To delete a summary report in the OPTIMA Summary Configuration dialog box:

1. Select the report that you want to delete and click the Delete Report button .

-or-

Right-click and from the menu that appears, click Delete Report.

2. In the dialog box that appears, click Yes to delete the report. The selected report is
removed from the summary reports table.

Editing Multiple Reports


The OPTIMA Summary Application enables you to edit the following attributes of multiple reports at
the same time:
• Report Enabled Status
• Entries Formula for Source1 Table, Source2 Table, and Destination Table
• Load Option
• Log Severity

To do this:

1. In the OPTIMA Summary Configuration dialog box, select multiple reports that you want
to edit.

2. Click the Edit Multiple Reports button .

- or -

Right-click and from the menu that appears, click Edit Multiple Reports.

3. The Edit Multiple Reports dialog box opens.


This picture shows an example of the Edit Multiple Reports dialog box:

The different PRIDs that will be affected by your changes are displayed in the first pane.
These PRIDs belong to the reports that you have selected.

This table shows the different options:

Report Enabled
Click Update Report Enabled Status to activate the Enabled/Disabled options. Select the desired option depending on whether you want to enable or disable the selected reports.

Entries Formula (Source 1 Table)
Select the Update Source 1 Entries Formula checkbox to activate the New Value text box. In the New Value text box, type the new entries formula for the Source 1 table.

Entries Formula (Source 2 Table)
Select the Update Source 2 Entries Formula checkbox to activate the New Value text box. In the New Value text box, type the new entries formula for the Source 2 table.

Entries Formula (Destination Table)
Select the Update Destination Entries Column checkbox to activate the New Value text box. In the New Value text box, type the new entries formula for the Destination table.

Load Option
Click the Update Load Option checkbox to activate the New Value drop-down list. From the New Value drop-down list, select the new load option.

Log Severity
Click the Update Log Severity option to activate the New Value drop-down list. From the New Value drop-down list, select the new log severity.

4. Click Save to apply these changes to all the selected reports.

Stopping the OPTIMA Summary Reports


In normal operation, a report should not need to be stopped. If it is necessary to remove a report,
then this can be achieved using an Oracle job.

However, you may in very rare circumstances need to force the OPTIMA Summary to stop
processing a report. To do this, an entry in the OPTIMA_COMMON table needs to be updated.
OPTIMA_COMMON is a general purpose table used by many OPTIMA applications.

To do this:

Ensure that the terminate parameter has been defined in the OPTIMA_COMMON table as follows:

INSERT INTO AIRCOM.OPTIMA_COMMON (PARAMETER, PVALUE)
VALUES ('OPTIMA_SUMMARY_TERMINATE', 0);

COMMIT;

To stop the OPTIMA Summary package processing any further reports and halt as soon as
the current period has been processed:

Update the terminate parameter to 1 using the following command:

UPDATE AIRCOM.OPTIMA_COMMON SET PVALUE=1
WHERE PARAMETER='OPTIMA_SUMMARY_TERMINATE';

COMMIT;

To restart the summaries:

Set the parameter back to 0 using the following command:

UPDATE AIRCOM.OPTIMA_COMMON SET PVALUE=0
WHERE PARAMETER='OPTIMA_SUMMARY_TERMINATE';

COMMIT;


Tuning the OPTIMA Summary


In order to get the best results when using the OPTIMA Summary, you should bear in mind the
following:
• Use a Managed Element load type for most Time-based Aggregations: For hourly,
daily, weekly and monthly time-based aggregations (not Busy Hour summary types), it is
recommended that you use the Managed Element approach.

A recent schedule should be configured to use Managed Element Insert to summarize all
of the managed elements that have no data in the Summary table. A historic schedule
should also be set up to use Managed Element with Delete to resummarize older data, in
order to capture late-arriving data.

In the Summary Schedule dialog box, you should select the Override Report Load Option
checkbox to configure the recent and historic schedules to use different load options. For
more information, see Adding Report Schedules on page 311.
• Use Other Load Types for Element Aggregation, BH and BH Summary
Configurations: For more information, see Supported Summary Types on page 283.
• Select the most appropriate load type according to your needs: The specified
database values indicate the LOAD_METHOD value stored in the SUMMARY_REPORTS
table. If you cannot access the OPTIMA Summary GUI, you can specify the load type by
updating the SUMMARY_REPORTS table with the required LOAD_METHOD value.

Insert with Delete (LOAD_METHOD = 0)
• Run date comparison cursor
• For periods with no data in the summary table, run the Date Insert query
• For periods where data exists but requires updating, delete the entire period and re-insert the period using the Date Insert query

Insert then Update (LOAD_METHOD = 1)
• Run date comparison cursor
• For periods with no data in the summary table, run the Date Insert query
• For periods where data exists but requires updating, call the difference engine to calculate primary keys that have changed and run the PK Insert query and PK Update query to upsert the data

Insert Only (LOAD_METHOD = 2)
• Run date comparison cursor
• For periods with no data in the summary table, run the Date Insert query
• For periods where data exists but requires updating, call the difference engine to calculate primary keys that have changed and run the PK Insert query to insert the data
• Does not update PKs that have changed

Update Only (LOAD_METHOD = 3)
• Run date comparison cursor
• For periods with no data in the summary table, run the Date Insert query
• For periods where data exists but requires updating, call the difference engine to calculate primary keys that have changed and run the PK Update query to update the rows that have changed
• Does not run the PK Insert query to insert PKs that have changed; however, completely new periods will still be inserted

Date Insert Only (LOAD_METHOD = 4)
• Fastest load option
• Runs a more efficient date comparison cursor that only looks at periods that exist in the source table but have no data at all in the summary table
• Runs the Date Insert query to insert these periods
• However, if any data arrives in the raw table for a period after it has been summarized, it will not be resummarized

Managed Element Insert (LOAD_METHOD = 5)
• Runs an efficient difference engine query to calculate managed elements/periods that have data in the source table but no data in the summary table for that managed element/period combination
• Inserts these managed elements per period
• If data arrives for a managed element after it has been summarized, it will not be resummarized

Managed Element with Delete (LOAD_METHOD = 6)
• Runs an efficient difference engine query to calculate managed elements/periods that are completely new or that require updating
• Loops through all summary periods one by one, deletes managed elements that have been marked for update, then inserts all managed elements that are marked for insert or update

Tip: For a general overview of the load types, see About the Load Options for Summary
Reports on page 299.

• Schedule the DO_WORK Oracle Job: The DO_WORK Oracle Job should be scheduled
to run using the DBMS_SCHEDULER. Multiple jobs should be set up to run the summary
concurrently. This table describes the four methods of calling DO_WORK:

OPTIMA_SUMMARY.DO_WORK;
Run the most urgent schedule(s) to process from any schema.

OPTIMA_SUMMARY.DO_WORK('VENDOR_SCHEMA')
Run the most urgent schedules to process from VENDOR_SCHEMA.

OPTIMA_SUMMARY.DO_WORK(NULL, 'CELL%')
Run the most urgent schedules to process from any schema whose summary table name begins with CELL.

OPTIMA_SUMMARY.DO_WORK('VENDOR_SCHEMA', 'CELL%')
Run the most urgent schedules to process from VENDOR_SCHEMA whose summary table name begins with CELL.


Troubleshooting the OPTIMA Summary


The following table shows troubleshooting tips for the OPTIMA Summary:

Problem: The sessions for summary schedules occasionally terminate abnormally.

Solution: Create a single daily job to run the following PL/SQL:

begin
  optima_summary.reset_expired_schedules;
end;

This will log a warning message for each updated schedule, for example: 'Warning - Schedule ID 297 will be reset, as this schedule started processing over 24 hours ago and the session no longer exists'.

Viewing Log Messages


The Log Viewer enables you to view the log messages.

Important: It is essential that the Summary_Log table is partitioned for the current date and time
for the log messages to appear in the Log Viewer.

To view the log messages in the OPTIMA Summary Configuration dialog box:

1. Click the Log Viewer button .

This picture shows an example of the Log Viewer:

2. From the Select the minimum date and time to display drop-down options, select the
date and time after which you want to view the log messages.

Tip: The log messages that will be displayed will be for the time period between the
selected date and time and the present.

3. From the Select prid to display drop-down list, select a particular PRID for which the log
messages will be displayed.

Note: The default value is midnight until now.

4. Click the Refresh button to get the list of log messages.

The Log Viewer dialog box displays the log messages.

The following table lists the information that you can view for log messages:

This Parameter Indicates

Datetime Date and time of the log message


Schedule ID Identifier of the schedule
PRID The automatically-assigned PRID uniquely identifies each instance of the
application. It is composed of a 9-character identifier, made up of Interface ID,
Program ID and Instance ID.
The Interface ID is made up of 3 numbers but the Program ID and Instance ID are
both made up of 3 characters, which can be a combination of numbers and
uppercase letters.
For more information, see About PRIDs on page 29.
Severity Severity level of the log message
Message # Identifier for the message
Message Log message details from the PL/SQL package


About Oracle DBMS_SCHEDULER Jobs


An Oracle DBMS_SCHEDULER job decides which schedule to run depending on the priority.

This picture shows an example of how schedules are executed. Multiple Oracle jobs each call optima_summary.do_work(), which picks schedules from the SUMMARY_SCHEDULES table where the next run date has passed, the schedule is not currently processing, and the schedule is enabled, ordered by priority and then by next_run_date:

Execution of a schedule

Execution of a schedule consists of the following steps:

1. Oracle Jobs are set up within the database to run every minute and call
OPTIMA_SUMMARY.DO_WORK().

2. The schedules are selected from the list in the SUMMARY_SCHEDULES table based on the
following criteria:
o Schedule’s next run date is before the current date (NEXT_RUN_DATE < SYSDATE)
o Schedule is not currently processing (current_process_start_date IS NULL)
o Schedule is enabled (ENABLED=1)

3. The schedule list is then ordered by


o PRIORITY (lowest number first)
o NEXT_RUN_DATE (oldest date first)

4. The job will process the highest priority schedule and then terminate. If there are more
schedules to process, they will be picked up by the next available job.

Each job therefore represents a concurrent execution of the summary. If there are five jobs,
then the five schedules can be processed at the same time.

Tip: The recommended number of jobs is 4*CPU_Count, where CPU_Count is an Oracle parameter.
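For example, to check this parameter (requires read access to V$PARAMETER):

SELECT value FROM v$parameter WHERE name = 'cpu_count';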


Note: As the list of schedules ages, the Run Order value - stored as a parameter in the
Scheduler Explorer of the OPTIMA Summary GUI - will be taken into account. The run
order is determined by an algorithm that takes into account how long the schedule has
been waiting to run. This algorithm increases the priority of a schedule for each hour that
the schedule is delayed from running (by subtracting one from the priority value), meaning
that it will be higher in the run order. This means that a lower priority schedule that has
been waiting a longer time than a higher priority schedule could be run first.

For example, consider two schedules A and B; Schedule A has a priority of 3 while
Schedule B has a priority of 5. Schedule A has been delayed from running by 1 hour
meaning that its run order is 2 (Priority minus 1 [hour]). However, Schedule B has been
delayed from running by 4 hours, so its run order is 1 (Priority minus 4 [hours]). Therefore
Schedule B will be run first.
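The following query is a sketch of that ordering, assuming that the SUMMARY_SCHEDULES table is in the AIRCOM schema (the package's actual internal query may differ):

SELECT schedule_id,
       priority - FLOOR((SYSDATE - next_run_date) * 24) AS run_order
FROM AIRCOM.SUMMARY_SCHEDULES
WHERE next_run_date < SYSDATE
  AND current_process_start_date IS NULL
  AND enabled = 1
ORDER BY run_order, next_run_date;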

About Scheduling Oracle Jobs


In OPTIMA, you must schedule Oracle jobs using DBMS_SCHEDULER (DBMS_JOBS is not
supported).

DBMS_Scheduler enables you to perform resource plan management, so that you are able to
control:
• The number of concurrent jobs for a particular job_class
• The order of execution of a job or groups of jobs
• Switching jobs from one resource plan to another during the day
• And much more

With DBMS Scheduler:


• A task can be scheduled to run at a particular date and time
• A task can be scheduled to run only once, or multiple times
• A task can be turned off temporarily or removed completely from the schedule

The DBMS Scheduler is configurable from SQL*Plus or TOAD.

Scheduler Components

The Scheduler uses three basic components to handle the execution of scheduled tasks. An
instance of each component is stored as a separate object in the database when it is created:

Component Description

Programs A program defines what the Scheduler will execute.


Schedules A schedule defines when and at what frequency the Scheduler will execute a particular set
of tasks.
Jobs A job assigns a specific task to a specific schedule. A job therefore tells the schedule which
tasks - either one-time tasks created 'on the fly', or pre-defined programs - are to be run. A
specific program can be assigned to one, multiple, or no schedule(s); likewise, a schedule
may be connected to one, multiple, or no program(s).
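As an illustration of how the three components fit together, the following sketch creates one of each and links them (all object names and the placeholder task are hypothetical):

BEGIN
  -- What to run
  DBMS_SCHEDULER.CREATE_PROGRAM (
    program_name   => 'MY_PROGRAM',
    program_type   => 'PLSQL_BLOCK',
    program_action => 'BEGIN NULL; END;',   -- placeholder task
    enabled        => TRUE);

  -- When, and at what frequency, to run it
  DBMS_SCHEDULER.CREATE_SCHEDULE (
    schedule_name   => 'MY_SCHEDULE',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2');

  -- The job assigns the program to the schedule
  DBMS_SCHEDULER.CREATE_JOB (
    job_name      => 'MY_JOB',
    program_name  => 'MY_PROGRAM',
    schedule_name => 'MY_SCHEDULE',
    enabled       => TRUE);
END;
/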


Using the Summary for Direct Database Loading


As well as using the OPTIMA Summary application to summarize data within the OPTIMA
database, you can also use it to load data from any other third-party database directly into OPTIMA
over a direct database link.

To do this:

1. Create a database link that enables you to access the source database from OPTIMA using
the login details for the source database. A database link is a schema object in one
database that enables you to access objects on another database.

For more information on how to create database links, see http://otn.oracle.com.

You can run the following script within TOAD to create the database link while logged into
OPTIMA as the DBA.

An example database link could be:

CREATE PUBLIC DATABASE LINK OMCDBLINK

CONNECT TO OMC_USER IDENTIFIED BY password

USING 'OMCDB';

Where:
o OMCDBLINK is the name of the link you want to create
o OMC_USER is the name of the user on the source database (OMCDB)
o password is the password string
o OMCDB is the name of the source database from where the data is to be loaded into
OPTIMA.

2. In the OPTIMA database, define your summary report configuration in the usual way, using
the following guidelines:
o The Summary Time Aggregation option should be set as 'Element Aggregation Only'
o The Summary Table Granularity should match the granularity of the raw and
destination tables
o For the Source Table, the schema should be the one in the source database that you
want to load, and the table should be written as 'tablename@database link name'
o The Entries Formula should be COUNT(*) and the Entries Column should be 1 in order
to ensure a 1:1 mapping of primary key records
o The Summary Table that you define should be the table in the destination database
into which you want to load the data


This picture shows an example:

3. Define the corresponding query on the SQL Query tab.

An example query could be:

SELECT * FROM ERICSSON_GERAN.BSCQOS@OMCDB
WHERE %DATE1

This query will load all rows from the BSCQOS table in the ERICSSON_GERAN schema of
the source database referenced by the OMCDB database link.

Note: The query should not have a 'group by' clause, as it will be comparing individual
rows rather than groups.

4. Define the column mappings and schedules as normal.

5. Resummarize, using 'Insert and Update'.


11 About the Data Quality Package

The Data Quality package enables you to configure reports on the quality of data, for example, data
that is incomplete or missing. By default, the Data Quality package processes the data on a daily
basis, and calculates all its results at a daily level (although it can be configured to process at
different periods; for more information, see Configuring Period Processing on page 348).

The results are stored in specially-designated tables in the OPTIMA database, and you can use
several OPTIMA Excel reports to present these results in a more useful way to users. OPTIMA
supports Office 2010.

Installing the Data Quality Package


Before you can use the Data Quality package, install the following files to a local directory:
• opx_DAQ_WIN_420.exe
• OSSFrameWork.dll
• OSSDataQuality.dll

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

Using the Data Quality Console


You use the Data Quality Console to configure the Data Quality package. When you start the Data
Quality Console, you must connect to the database.

To start the Data Quality Console:

1. Double-click the opx_DAQ_WIN_420.exe.

2. In the Login dialog box:


o From the Oracle Home drop-down list, select the appropriate Oracle version
o From the Database drop-down list, select the required database
o Type the user name and password

This picture shows an example:

3. Click Connect.


The Data Quality Console appears. This picture shows an example:

Configuring the Data Quality Package


You can configure the Data Quality package in three stages:

1. Global Configuration - this includes column set configuration, data source attribute
configuration and granularity configuration. For more information, see About Global
Configuration on page 333.

2. Data Source - this involves adding and configuring the data sources for the Data Quality
package. For more information, see About Data Source Configuration on page 341.

3. Data Quality - this determines the processes and groups with which the Data Quality
package will run. For more information, see About Data Quality Configuration on page 353.

Each of these configurations corresponds to a folder in the first level of the Data Quality Console tree.

Data Quality Console Tree

Important: Although you can configure the Data Quality package manually (as described in this
chapter), it is recommended that you use the OPTIMA Installation Tool to ensure that the correct
options are chosen and therefore the reports include all of the statistics.

For more information, see OPTIMA Installation Tool User Reference Guide.


About Global Configuration


To complete the Global Configuration, you need to configure:
• Column Sets - for more information, see Configuring Column Sets on page 333
• Data Source Attributes - for more information, see Configuring Data Source Attributes on
page 336
• Granularities - for more information, see Configuring Granularities on page 338

Each of these configurations corresponds to a folder in the Global Configuration tree:

Global Configuration Tree

Configuring Column Sets


You use the Column Sets configuration to set the levels that the Data Quality package will process.
By default, you can run the Data Quality package on the following column set levels:

Level Description

Table Refers to the table as a whole. For example, a last load report on the table level will find the maximum date of the whole table.

Managed Element Refers to the parent element for the table, that is, the element that generates the data for this table which is then loaded into OPTIMA. For example, for GSM cell data, the managed element will normally be the BSC. Reporting on this level allows you to see files that have not loaded and obtain information which is much more aggregated than the Reported Element level.

Reported Element Refers to the level that the data is reported on, for example, for cell data, the cell level. The whole element hierarchy is included, so a cell data table may be BSC + CELL. It is automatically defined as the primary key minus the date column, but its contents can be changed. Reported Element Data Quality processing:
• Takes the longest time to run
• Returns large amounts of data
• Is normally only viewed by drilling down from the Managed Element level for a particular managed element

Important: It is strongly recommended that you run the Data Quality package only on the Table
and Managed Element column set levels. In the majority of circumstances, the Data Quality
package should not be run on the Reported Element level, as this places too great a load on the
OPTIMA database.

There are also other system column sets which help you describe the data. For example, you must
use the Reported DateTime column set to store the table’s DATE field. The DATE field contains the
date of the data, for example, the DATETIME column.

You can also define your own user-defined (non-system) column sets. This enables you to check
for data quality on user-defined levels, for example, for regions of a country. For information about
how to create column sets, see Creating Column Sets on page 334.


Creating Column Sets

As part of the Column Sets configuration, you can also create user-defined column sets:

Column sets defined in the Data Quality Console

To do this:

1. In the Data Quality Console, in the tree view, select the Column Sets folder.

2. In the right hand pane, right-click and, from the menu that appears, click Create Column
Set.

The Create Column Set dialog box appears:


3. In the dialog box that appears, complete the following information:

In this field Do this

Name Type a name for the column set.


Alias Type an alias for the column set.
Abbreviation Type an abbreviation for the column set.
Enable for all Data Sources Select this option if you want to enable the column set for all data
sources.
Order Set the order for the column set.

4. When you have finished, click Create.

This picture shows an example of a completed column set:

Editing and Deleting Column Sets

To edit a column set:

1. In the Data Quality Console, in the right hand pane:

Right-click the column set you want to edit and, from the menu that appears, click
Properties.

- or -

Double-click the column set you want to edit.

2. In the Column Set Properties dialog box that appears, make the required changes.

3. Click Apply to save your changes.

4. Click OK to close the Column Set Properties dialog box and return to the Data Quality
Console.


To delete a column set:

1. In the Data Quality Console, in the right hand pane, select the column set you want to
delete.

2. Right-click and, from the menu that appears, click Delete.

- or -

Click the Delete button.

3. In the message box that appears, click Yes to confirm.

Configuring Data Source Attributes


You can assign attributes and attribute values to each data source. An attribute for a data source
could be, for example, the Vendor the table represents, with assigned values such as Ericsson or
Nortel. This configuration option is included solely for the purpose of reporting and enables you to
show different attributes with the Data Quality data.

Data Source Attributes defined in the Data Quality Console

Creating Attributes

To create an attribute for a data source:

1. In the Data Quality Console, in the tree view, select the Data Source Attributes folder.

2. In the right hand pane, right-click and, from the menu that appears, click Create New
Attribute.


The Create Attribute dialog box appears:

3. In the dialog box that appears, complete the following information:

In this field Do this

Name Type a name for the attribute.


Alias Type an alias for the attribute.
Abbreviation Type an abbreviation for the attribute.
Mandatory Select this option if you want this attribute to be mandatory.
Order Set the order for the attribute.

4. When you have finished, click Create.

The picture shows an example of a completed attribute:


Editing and Deleting Attributes

To edit an attribute:

1. In the Data Quality Console, in the right hand pane:

Right-click the attribute you want to edit and, from the menu that appears, click Properties.

- or -

Double-click the attribute you want to edit.

2. In the Attribute Properties dialog box that appears, on the General tab, make the required
changes.

3. On the Pick List tab, you can type the range of values the attribute can have. For example,
the System Key attribute has the values Yes and No to determine if the column is a key
field.

4. Click Apply to save your changes.

5. Click OK to close the Attribute Properties dialog box and return to the Data Quality
Console.

To delete an attribute:

1. In the Data Quality Console, in the right hand pane, select the attribute you want to delete.

2. Right-click and, from the menu that appears, click Delete.

- or -

Click the Delete button.

3. In the message box that appears, click Yes to confirm.

Configuring Granularities
OPTIMA performance tables have different granularities of data. For example, raw data tables often
have hourly data, whereas summary tables often have daily or weekly data. There are a large
number of system granularities which come pre-configured with the Data Quality package, for
example, 15 minutes, daily or weekly.

Granularities defined in the Data Quality Console


You can also create new granularities as required. For more information, see Creating Granularities
on page 339.

Creating Granularities

To create a new granularity:

1. In the Data Quality Console, in the tree view, select the Granularities folder.

2. In the right hand pane, right-click and, from the menu that appears, click Create New
Granularity.

3. In the dialog box that appears, complete the following information:

In this field Do this

Name Type a name for the granularity.


Alias Type an alias for the granularity.
Abbreviation Type an abbreviation for the granularity.
Mandatory Select this option if you want this granularity to be mandatory.
Order Set the order for the granularity.

4. When you have finished, click Create.

5. To set the times of day when the granularity appears, you must edit the granularity. For
information about how to do this, see Editing and Deleting Granularities on page 339.

This picture shows an example granularity:


Editing and Deleting Granularities

To edit a granularity:

1. In the Data Quality Console, in the right hand pane:

Right-click the granularity set you want to edit and, from the menu that appears, click
Properties.

- or -

Double-click the granularity you want to edit.

2. In the Granularity Properties dialog box that appears, on the General tab, make the
required changes.

3. On the Times of Day tab, type the times of day when the Granularity occurs. This setting is
used to check that data is present at the specified times.

Note: Times must be entered in HH24MI format as a 4 digit 24 hour time with no
punctuation. For example, for a 6 hour granularity you would type: 0000, 0600, 1200, 1800.
For granularities of 1 day and less, simply type 0000 or the time of day when the data is
present. This picture shows an example:

4. Click Apply to save your changes.

5. Click OK to close the Granularity Properties dialog box and return to the Data Quality
Console.

To delete a granularity:

1. In the Data Quality Console, in the right hand pane, select the granularity you want to
delete.

Note: You cannot delete system granularities.

2. Right-click and, from the menu that appears, click Delete.

- or -

Click the Delete button.

3. In the message box that appears, click Yes to confirm.


About Data Source Configuration


Once you have completed the Global Configuration, you need to add the data sources. The data
sources you add are tables inside the database.

You can add any of the following tables as data sources:


• Raw data performance tables loaded by the ETL Loader
• Summary tables summarized by the OPTIMA Summary
• CFG tables containing lists of elements, for example, CELLCFG or BSCCFG

You cannot add the following tables as data sources:


• External tables created and used by the ETL Loader
• Tables used to configure the OPTIMA front end or back end

Important: You should not generally add the AIRCOM, GLOBAL, and OSSBACKEND schema
tables used for configuring OPTIMA as data sources.

The data sources you configure for Data Quality can be viewed in the Data Source tree in the Data
Quality Console. This picture shows an example:

Data Source Tree

You can view the data sources in three different categories:

Category Description

All Shows a list of all data sources added to the Data Quality package.
Schemas Shows the data sources by the schema they are in. Each schema appears as a sub-item.
Attributes Shows the data sources by attribute. Each attribute appears as a sub-item.

Note: By default, when you first install the Data Quality package, no data sources are configured.

Adding Data Sources


To add a data source:

1. In the Data Quality Console, in the tree view, select the All folder.

2. In the right hand pane, right-click and, from the menu that appears, click Add Data Source.

The Add Data Source Wizard appears.


3. On the first page of the Wizard, choose a method to generate a list of available data
sources. The options are described in the following table:

Select this option To

Oracle Data Dictionary Search the Oracle Data Dictionary tables for all schemas in the database, with the exception of the AIRCOM, SYS and SYSTEM schemas. This allows you to add tables that have not been configured before for any OPTIMA application.

OSS Data Dictionary Generate a list of tables used in the OPTIMA Data Dictionary, which is populated by the OPTIMA Interface Template. If the OPTIMA Interface Template has been used to install the database, then this will provide a quick method to select only tables relevant to Data Quality.

Configuration Tables Search for tables configured in OPTIMA backend applications such as the OPTIMA Summary and the ETL Loader. This may be the best option when the Summary and Loader have been configured and Data Quality is needed for the tables used by these applications.

4. Click Next.

5. On the next page of the Wizard, in the right hand pane, select the data sources you want to
add and use the right arrow button to move them to the left hand pane.

Tip: You can add schemas, tables and/or views.

This picture shows an example:

6. When you have finished, click Finish.

The selected data sources are added to the Data Source tree in the Data Quality
Console.

Once you have added the data sources, you can configure them by either setting their properties or
by using the Add Columns Wizard. For more information, see Setting Data Source Properties on
page 343 and Configuring the Data Quality Package Using the Add Columns Wizard on page 350.


Setting Data Source Properties


You can further configure a data source by setting its properties. You can set properties for a single
data source or globally for multiple data sources.

Note: Not all properties are available when setting data source properties globally.

To set data source properties:

1. In the Data Quality Console, in the right hand pane, select the data source(s) whose
properties you want to set.

Tip: Use the Shift and Ctrl keys to select more than one data source at a time.

2. Right-click and, from the menu that appears, click Properties.

The Data Source Properties dialog box appears.

3. On the General tab, complete the following information:

In this field Do this

Loading Type Select the type of data the data source contains from the drop-down list, either raw or summary data.

Granularity Select the granularity of the data source from the drop-down list, for example, 15 minute data.

Use Topology Table Select this option if you want the Data Quality package to obtain a list of elements which should be present at each granularity from a topology configuration (CFG) table. If you enable this option, you must also select the Owner (schema) and Name of the topology table you want to use from the drop-down lists.

4. On the Attributes tab, you set values for any attributes created during Data Source
Attributes configuration. For more information, see Configuring Data Source Attributes on
page 336.

To set an attribute value, click the Value field you want to set and select the value from the
drop-down list:

Note: The attributes are used only for reporting purposes and are not required by the Data
Quality package.

5. On the Columns tab, you can view the column sets created during Column Sets
configuration by selecting a column set from the drop-down list. For more information, see
Configuring Column Sets on page 333.


To add a column to a column set:


o Select All from the Filter By Sets drop-down list.
o Select the column you want to add.
o Right-click and, from the menu that appears, select the column set.

The default values for the system column sets are then populated by the Data Source
Wizard.

Note: If you are setting properties for multiple data sources, the Columns tab is disabled.

6. On the Data Quality Configuration tab, you configure the Data Quality processes. For
more information about Data Quality processes, see About Data Quality Configuration on
page 353. The following table describes the options:

On this sub-tab Do this

Completeness Select the levels you want the Data Quality package to process for Completeness.
Completeness is the percentage of available data for the period loaded.
Tip: It is recommended that you enable the Table and Managed Element levels.
To change a threshold value for a column set:
• Right-click the column set and, from the menu that appears, click Change
Threshold.
• In the dialog box that appears, set the required threshold and click OK.
To view the column details for a column set, right-click the column set and, from the
menu that appears, click Column Details.
To add a column set:
• Click the Add button.
• In the dialog box that appears, select the column set from the drop-down list, set a
threshold value and click OK.
Note: You can only add column sets that contain one or more columns.
To remove a column set, select the column set and click Remove.
To enable Completeness processing for the data source, ensure that the Enable
Process option is selected.
Note: If you are setting properties for multiple data sources, then only the Enable
Process option is available.
Tip: It is recommended that you configure the Data Quality package to have
Completeness enabled for all tables in each interface. However, it is not required for
tables with a granularity of a day or more.


Availability Select the levels you want the Data Quality package to process for Availability.
Availability is the percentage of elements which are completely missing for a day.
The options for Availability are the same as those on the Completeness sub-tab, as
described above.
Tip: It is recommended that you enable the Table and Managed Element levels. You
should also enable the Reported Element level for:
• For tables with a granularity of a day or more
• For tables with a granularity of less than a day that require troubleshooting
Note: There is no need to select these levels for CFG tables, as you only need to
define these for the performance management tables which are being reported on.
Tip: It is recommended that you configure the Data Quality package to have Availability
enabled for all tables in each interface.
Nullness Select the column set(s) you want the Data Quality package to process for Nullness.
Nullness is the number of null entries in the table for a day for a specified list of
columns.
The options for Nullness are the same as those on the Completeness sub-tab, as
described above.
In addition, you can choose which columns in the data source to check for null values.
To do this:
• Click the Choose Columns button.
• In the dialog box that appears, select the column(s) you require in the list and click
OK.
Tip: It is recommended that Nullness is only used when you need to perform
troubleshooting on specific tables.
Last Load Select the column set(s) you want the Data Quality package to process for Last Load.
Last Load is the last date a table loaded, that is, the maximum date of the table.
The options for Last Load are the same as those on the Completeness sub-tab, as
described above.
Tip: It is recommended that Last Load is only used when you need to perform
troubleshooting on specific tables.

Important: In order for a table to be included in a report, Completeness and Availability
must be configured, enabled and scheduled at the Table and Managed Element levels. If
you are using the OPTIMA Installation Tool, this is the default configuration for the Data
Quality package, but you must still execute the scheduling.


This picture shows an example of the Completeness sub-tab:

7. On the DQ Period Processing tab, you can configure the period processing options.

For more information on how period processing works, see Configuring Period Processing
on page 348.

To configure period processing for Availability:


o Select the required processing levels:

For more information on these, see Configuring Column Sets on page 333.
o Click the View Config/Scheduling Info button to check that the tables are configured
correctly.


The following dialog box appears:

Tip: Click the Current button to see the information for the current datasource, or click the
All button to see the information for all datasources.

To use Availability reporting, the following conditions must apply:

• A CFG table is configured, with the Managed and Reported columns set for the CFG table as well as the raw table
• The data in the CFG columns must match the data in the raw table columns
• The order of the CFG columns must match that of the raw table columns

Important: You cannot directly edit the data displayed in this dialog box; instead, you must
edit the data on the other tabs of the Data Source Properties dialog box.

To configure period processing for Nullness:


o Select the required processing levels:


o To choose the columns that you want to report on, click the Choose Columns button,
and in the dialog box that appears, select the required columns:

Note: The available columns for period processing are the same as those available for
daily processing.

o Click OK.

8. Click Apply to save your changes.

9. Click OK to close the Data Source Properties dialog box and return to the Data Quality
Console.

Configuring Period Processing

By default, all of the data quality reports produce daily results. This means that although the
granularity of the raw data may be based on a period of minutes (for example, 10 minutes or 30
minutes), the data quality reports show the results for a whole day.

This can cause limitations, depending on your requirements, for example:


• You cannot see the current day's quality results until after midnight, when the day has
finished
• If you run the reports for a three-day period, then you will just get three separate reports,
one for each day


However, if you require the daily quality reports to be processed for a different period, for example
hourly, you can do this as well. There are two stages to this:

1. On the DQ Period Processing tab of the Data Source Properties dialog box:
o Specify the levels that you want to process
o Check that the tables are configured correctly.

2. When you run the DQ_PERIOD_PROCESSING package, set the PERIOD_START and
PERIOD_END to be the required period.

Note: A second is subtracted from the end date when you run the package in a job or
script. So, for example, to process Report 5 for 02/07/2009 for the 6pm hour, run the
package with the following parameters:

(5, to_date('020720091800','ddmmyyyyhh24mi'), to_date('020720091900','ddmmyyyyhh24mi'))
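
As a minimal sketch, the call could be wrapped in an anonymous block as shown below. The procedure name RUN_PERIOD_PROCESSING is an assumption, so check the actual entry point of the DQ_PERIOD_PROCESSING package in your installation:

BEGIN
  -- Hypothetical procedure name; arguments follow the example above:
  -- report ID, period start, period end.
  OSSBACKEND.DQ_PERIOD_PROCESSING.RUN_PERIOD_PROCESSING (
    5,
    TO_DATE('020720091800', 'ddmmyyyyhh24mi'),
    TO_DATE('020720091900', 'ddmmyyyyhh24mi'));
END;
/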

Note: You can only configure period processing for availability and nullness reporting. This is
because:
• Completeness reporting uses a variable number of loading periods - it may report based on
a single period, or on a number of periods
• For last load reporting, the maximum date of the data in the table cannot be applied to sub-periods

Important: Period processing reports on the period as a whole. For example, if you run the
availability report for 3 hours and 1 hour has a missing cell, then this will not be picked up.
However, if you run each hour individually, the missing cell will be detected. Similarly, the nullness
report reports on the total rows and null rows for the period it is run as a whole.

Editing and Deleting Data Sources


To edit a data source:

1. In the Data Quality Console, in the right hand pane:

Right-click the data source you want to edit and, from the menu that appears, click
Properties.

- or -

Double-click the data source you want to edit.

2. In the Data Source Properties dialog box that appears, make the required changes. For
more information, see Setting Data Source Properties on page 343.

3. Click Apply to save your changes.

4. Click OK to close the Data Source Properties dialog box and return to the Data Quality
Console.

To delete a data source:

1. In the Data Quality Console, in the right hand pane, select the data source(s) you want to
delete.

Tip: Use the Shift and Ctrl keys to select more than one data source at a time.


2. Right-click and, from the menu that appears, click Delete.

- or -

Click the Delete button.

3. In the message box that appears, click Yes to confirm.

Configuring the Data Quality Package Using the Add Columns Wizard
You can use the Add Columns Wizard as an alternative method of configuring the Data Quality
package. The main benefit of using the Add Columns Wizard is that it reduces the time taken to
configure the columns contained in the column sets of each data source.

Important: Before using the Add Columns Wizard, first ensure you have:
o Added the data sources. For more information, see Adding Data Sources on page 341.
o Configured the column sets. For more information, see Configuring Column Sets on
page 333.

To configure the Data Quality package using the Add Columns Wizard:

1. In the Data Quality Console, in the right hand pane, right-click and, from the menu that
appears, click Add Columns.

The Add Columns Wizard appears.

2. On the first page of the Wizard, select the data source(s) you want to configure and use the
double right arrow button to move them to the left hand pane.

Tip: Use the Shift and Ctrl keys to select more than one data source at a time.

3. Click Next.

4. On the next page of the Wizard, select the column set(s) you want to configure and use the
double right arrow button to move them to the left hand pane.

Tip: If you do not need to select the columns for each column set and only need to be able
to enable and disable levels, click Skip Column Sets Mapping and proceed to step 10.

5. Click Next.


6. On the next page of the Wizard, select the columns to include in the column sets. If you
only want to include columns that form the primary key of the data source, select the PK
Cols Only option.

Tip: You can change the sort order for each column alphabetically by clicking the column
headings.

7. Click Next.

8. On the next page of the Wizard, set the order of the columns in the column set by selecting
columns and using the Up and Down buttons to position them.

Note: The column order you set applies to all column sets and all data sources selected.
You should choose an order for all columns, for example, the BSC column always above
the Cell column, the MSC column always above the BSC column.

9. Click Next.

10. On the last page of the Wizard, select the Data Quality processes that you want to be
enabled for the column sets that you selected in step 4. For a detailed description of the
processes, see About Data Quality Configuration on page 353.


Tips:

It is recommended that:
o You enable Completeness and Availability at the Managed Element level for all data
tables in each interface (although Completeness is not required for tables with a
granularity of a day or more)
o You enable Completeness and Availability at the Reported Element level for all data
tables with a granularity of a day or more, and for data tables with a granularity of less
than a day that you want to troubleshoot
o Last Load and Nullness are only used for troubleshooting on particular tables

Important: In order for a table to be included in a report, Completeness and Availability
must be configured, enabled and scheduled at the Table and Managed Element levels.

11. Click Finish to save your configuration.

Quickly Enabling Processing Types


You can quickly enable processing types for a data source using the popup menu. For more
information about processing types, see About Data Quality Configuration on page 353.

To enable a processing type:

In the Data Quality Console, in the right hand pane, right-click the data source for which
you want to enable processing and, from the menu that appears, point to Data Quality and
then click the processing type you require.

Any processes that are already enabled are shown with a selected checkbox. This picture
shows an example:


About Data Quality Configuration


Data Quality configuration enables you to further configure the Data Quality processes configured
during Data Source configuration. For more information about configuring Data Quality processes,
see Setting Data Source Properties on page 343 and Quickly Enabling Processing Types on page
352.

You can also create new processes after you have completed Data Source configuration. For
information about how to do this, see Creating Processes on page 354.

In the Data Quality Console, in the Data Quality tree, you can view the Data Quality processes by
type and by group:

Data Quality Tree

The Type folder contains the different processing types, which are:

Processing Type Description

Availability The percentage of elements which do not appear at least once for a period. This type shows which elements are missing in the defined period, and also any new elements that have appeared, with statistics for this. In order to know which elements should be appearing, the process looks at the configuration table for the data table, and compares this against the elements listed. If a configuration table is not available, then it will look back in the data table for a configurable number of days, defaulted to 7. For more information, see About the Standard Data Quality Reports on page 362.

Completeness Provides statistics to show how complete the available data is. The process calculates how many records are expected in the table for the day, based on the available elements and periods in the table. If an element is incomplete (in other words, does not have results for all periods), the element and the missing period are stored, and listed in the reports. Important: In order to fully understand the data quality of any table, the Availability and Completeness statistics need to be presented together, using the reports provided.

Last Load Status The last date a table loaded, that is, the maximum date of the table. Important: This process is only relevant for troubleshooting specific problems, so it is recommended that this process is initially disabled, and only used for troubleshooting specific tables.

Nullness The number of null, or missing, entries in the table for a specified list of columns, for a period. For more information, see About the Standard Data Quality Reports on page 362. Warning: This process is particularly intensive and, if executed for a large number of tables, may impact the database performance. Therefore it is recommended that this process is initially disabled, and only used for troubleshooting specific tables.


The Groups folder contains the processing groups. For more information about processing groups,
see Scheduling Data Quality with Process Groups on page 357.

Creating Processes
To create a new process:

1. In the Data Quality Console, in the tree view, select the folder for the type of process you
want to create:

2. In the right hand pane, right-click and, from the menu that appears, click Create Process.

The Create New Process dialog box appears. This picture shows an example:

3. In the Create New Process dialog box, complete the following information:

In this Do this
field

Data Source Select the data source for this process from the drop-down list.

Days Back Set the number of days back to process. For example, if the Data Quality package runs
daily and Days Back is set to 3, then it will process 3 days of data each day.
Tip: Alternatively, if you just want to process this package for today's data only, select the
Today Only checkbox.
Group Set the number of the processing group that this process runs under.
For more information about processing groups, see Scheduling Data Quality with Process
Groups on page 357.


Active Enable the process by selecting the Active option. If the process is enabled, it will run if it
is scheduled.

Severity Log Indicates the information level of the messages that will be sent to the log file.
Anything below the selected level will not be reported - for example, if Minor is selected
then only Minor, Major and Critical logging will occur.
Comments Type any comments you want to add for the process. This option is for information only
and is not used in processing.
Note: This option is not available if you are setting properties for multiple processes.

4. When you have finished, click Create.

Setting Process Properties


You can further configure a process by setting its properties. You can set properties for a single
process or globally for multiple processes.

Note: Not all properties are available when setting process properties globally.

To set process properties:

1. In the Data Quality Console, in the right hand pane, select the process(es) whose
properties you want to set.

Tip: Use the Shift and Ctrl keys to select more than one process at a time.

2. Right-click and, from the menu that appears, click Properties.

The Process Properties dialog box appears.

3. On the General tab, complete the following information:

In this field Do this

Data Source Select the data source for this process from the drop-down list.
Tip: If you are defining an Availability process, you can also filter out old
elements present in a CFG table. For more information, see Filtering Data for
Availability Reports on page 357.
Days Back Set the number of days back to process. For example, if the Data Quality
package runs daily and Days Back is set to 3, then it will process 3 days of
data each day.
Tip: Alternatively, if you just want to process this package for today's data
only, select the Today Only checkbox.
Group Set the number of the processing group that this process runs under.
For more information about processing groups, see Scheduling Data Quality
with Process Groups on page 357.
Active Enable the process by selecting the Active option. If the process is enabled, it
will run if it is scheduled.

Severity Log Indicates the information level of the messages that will be sent to the log file.
Anything below the selected level will not be reported - for example, if Minor is
selected then only Minor, Major and Critical logging will occur.


Comments Type any comments you want to add for the process. This option is for
information only and is not used in processing.
Note: This option is not available if you are setting properties for multiple
processes.

This picture shows an example:

4. On the Stats tab, you can view information about the running of the process:

5. Click Apply to save your changes.

6. Click OK to close the Process Properties dialog box and return to the Data Quality
Console.


Filtering Data for Availability Reports

When defining the data source for an Availability process, you can also choose to filter out
elements from a CFG table based on their LAST_STAT value, which is a field storing the date of
the most recent data that has been loaded for that element.

To do this:

In the AIRCOM.OPTIMA_COMMON table, set the DQ_CFG_ELEMENT_EXPIRY parameter value
as follows:
• The default value is 7, which means that the Availability report will only include elements
where the LAST_STAT value is within the last 7 days (SYSDATE - 7).
• If you want to set a different number of days, edit the parameter value as required.

Tip: To turn off the filter, set the parameter to -1.
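
As a sketch only, the change could be made with an UPDATE such as the one below. The column names PARAMETER_NAME and PARAMETER_VALUE are assumptions, so check the actual structure of AIRCOM.OPTIMA_COMMON before running this:

-- Assumed column names; verify against your OPTIMA_COMMON table first.
UPDATE AIRCOM.OPTIMA_COMMON
   SET PARAMETER_VALUE = '14'                  -- for example, a 14-day window
 WHERE PARAMETER_NAME  = 'DQ_CFG_ELEMENT_EXPIRY';

COMMIT;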

Editing and Deleting Processes


To edit a process:

1. In the Data Quality Console, in the right hand pane:

Right-click the process you want to edit and, from the menu that appears, click Properties.

- or -

Double-click the process you want to edit.

2. In the Process Properties dialog box that appears, on the General tab, make the required
changes. For more information, see Setting Process Properties on page 355.

3. Click Apply to save your changes.

4. Click OK to close the Process Properties dialog box and return to the Data Quality
Console.

To delete a process:

1. In the Data Quality Console, in the right hand pane, select the process(es) you want to
delete.

Tip: Use the Shift and Ctrl keys to select more than one process at a time.

2. Right-click and, from the menu that appears, click Delete.

- or -

Click the Delete button.

3. In the message box that appears, click Yes to confirm.

Scheduling Data Quality with Process Groups


To run the Data Quality package and execute the processing types in the database, you must
schedule the processes using process groups. The Data Quality package treats each of the four
processing types (Availability, Completeness, Nullness and Last Load) separately for scheduling
purposes, which means that there are four distinct processes for each table, which you can group
together and schedule accordingly.


Tip: It is recommended that you schedule each process type into different groups, and also divide
the tables to process raw tables and summary tables separately. For example, you could schedule
the following groups:
• Group 1 - Availability for all raw tables
• Group 2 - Availability for all summary tables
• Group 3 - Completeness for all raw tables
• Group 4 - Completeness for all summary tables

... and so on

Including the other two processing types, this will lead to eight process groups per interface.

Note: If the interface has no sub-daily summary tables, then Completeness does not need to be
scheduled for summary tables. This is how the OPTIMA Installation Tool will configure the
processes if it is used to configure the Data Quality package.

You can define the groupings by specifying the group numbers in the Process Properties dialog
box. The Group option can be configured for multiple processes at once. For more information, see
Setting Process Properties on page 355.

In order to run a process group, you must schedule it in the database using the Oracle Scheduler.
You should use the following procedure:

OSSBACKEND.DQ_PROCESSING.RUN_PROCESS_GROUP (n);

Where n is the number of the process group to execute.

Note: If you have installed the Data Quality package in a schema other than OSSBACKEND, then
substitute that schema name at the beginning of the example above.

To start with, you only need to configure the groups for Availability and Completeness. If Nullness
or Last Load processes are required later for troubleshooting, they can be scheduled as required.

Important: If OPTIMA is deployed in an Oracle RAC environment, you must ensure that the
schedules are created to support node affinity. Node affinity is used to reduce interconnect traffic
between the database nodes by ensuring that all processing on a particular set of data (usually an
interface) is performed on the same node.

To do this, you should configure the Oracle resource plans accordingly and use the scheduled job
classes to map a schedule to a particular resource plan.
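
For example, a hedged sketch of scheduling process group 1 to run nightly at 01:00 using DBMS_SCHEDULER; the job name is illustrative, and the commented-out job_class is where you would map the job to a resource plan for node affinity:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'DQ_GROUP_1_JOB',       -- illustrative job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN OSSBACKEND.DQ_PROCESSING.RUN_PROCESS_GROUP(1); END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=1; BYMINUTE=0',
    -- job_class    => 'DQ_JOB_CLASS',         -- optional: map to a resource plan
    enabled         => TRUE,
    comments        => 'Runs Data Quality process group 1 nightly');
END;
/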

Using the Data Quality Configuration Package


The Data Quality Configuration (DQ_OIT_CONFIG) package is installed as part of the standard
Data Quality backend module, and can be used to configure Data Quality using the OPTIMA
Installation Tool data dictionary in the OSS_DICTIONARY schema.

It currently configures data quality for raw tables to enable DQ reports to be run on the loaded data.
It will add OIT CFG tables to the Data Quality data dictionary as well as raw tables, so that the CFG
tables can be used to calculate the expected elements for a period. Data Quality is enabled on the
Table and Managed Element levels for every raw table found in the OIT interface. The Data Quality
Configuration package also configures data quality reports for Summary data with hourly, daily or
weekly periods ('HR', 'DY' or 'WK').


Important: After the package has been run, the installer should check the COMMON_LOGS table
for any processing errors. These may require some additional manual DQ configuration when the
OIT template used does not contain all of the required configuration.

The Data Quality Configuration package uses the following information from the OIT data
dictionary:
• Schema name
• CFG table names
• Raw table names
• Raw table granularities
• Raw table's default CFG table
• Raw table's counter groups and columns
• Summary table names
• Summary periods

The OIT interface must have been activated (not only uploaded) and the raw, CFG, or summary
tables must exist in order to use the Data Quality package. The Data Quality Configuration package
will then use the Oracle data dictionary to determine column information and so on.
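
For illustration, the column information that the package reads from the Oracle data dictionary can also be inspected manually; the owner and table name below are taken from the troubleshooting example later in this chapter:

SELECT column_name, data_type
FROM   all_tab_columns
WHERE  owner = 'ERICSSON_GSM'
AND    table_name = 'RPPLOAD'
ORDER  BY column_id;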

About the Data Quality Configuration Package


When using the Data Quality Configuration package, you should note the following.

Setting the 'Process Days Back'

The 'process days back' setting will be set to 3 (days) for all processes (completeness, availability,
null and last load daily processes).

You can modify this by changing the value of the 'c_process_days_back' constant in the
'DQ_OIT_CONFIG.pks' package header. However, it is recommended that this is left at the default
of 3 days.

The Nullness Column Set

Data Quality NULL reporting checks individual columns for NULL values. These columns can be
configured in the Data Quality Console.

The Data Quality Configuration package will add the first counter (NUMBER) column from each
table to this column set.

If the table’s counters come from more than one input file (for example, when the OPTIMA
Combiner is used to combine the files before loading), then the Data Quality Configuration package
will also add the first counter from each counter group/input file to the NULL column set.

Note: This will not include columns configured as the REMOVE or KEY combiner types in the
Microsoft Excel OIT Interface Template.

Sub-daily Availability Processing

The Data Quality Configuration package will enable sub-daily availability processing on the Table
and Managed Element levels on all raw tables that have a CFG table defined in the OIT.

Summary Completeness Processing

You can configure Completeness reporting for Summary tables, but only on Hourly ('HR')
Summaries.


Running the Data Quality Configuration Package


This section describes how to run the Data Quality Configuration package for raw and summary
tables:

Running the Data Quality Configuration package for Raw Tables

To do this:

1. Upload and activate an OIT interface.

2. Retrieve the reference to this interface from the OIT data dictionary by running the following
query:

SELECT * FROM OSS_DICTIONARY.DD_INTRFC

Retrieve the value of INTRFC_ID for the OIT interface that you would like to configure in
Data Quality.

3. Log on as OSSBACKEND and run the following script, replacing 1 with the value of
INTRFC_ID:

BEGIN

OSSBACKEND.DQ_OIT_CONFIG.CONFIGURE_DQ_FROM_OIT(1);

END;

4. After the package has been run, check the COMMON_LOGS table for errors.

Note: If the auto-populate chooses the wrong column, remember to configure the managed
element columns correctly. For more information, see Troubleshooting the Data Quality
Configuration Package on page 361.

Running the Data Quality Configuration package for Summary Tables

After you have configured Data Quality for raw tables from the OIT data, you can then configure
Data Quality for summary tables from the OIT data.

A new view, DQ_OIT_SUMMARY_TABLES, lists the available summary tables with their respective
summary periods and GPI_COL identifiers. The GPI_COL identifier is the managed element on
which the summary is aggregated. This is not necessarily the same element that the summary
reports on, because the summary aggregation reporting element may be at a finer granularity and
would generate too much data in the DQ Report.

To run the Data Quality Configuration package, run the following script, replacing 1 with the value
of INTRFC_ID and specifying the required summary period as 'HR' (hourly), 'DY' (daily) or 'WK'
(weekly):

BEGIN

OSSBACKEND.DQ_OIT_CONFIG.CONFIGURE_DQ_SUMMARY_FROM_OIT(1, 'HR');

END;


Logging for the Data Quality Configuration Package


The DQ_OIT_CONFIG package logs to the COMMON_LOGS table (normally found in the LOGS
schema).

By default, debug messages will not be logged but an information level message will be logged for
every raw, summary, and CFG table added. Any other messages should be carefully read and may
require a response.

Troubleshooting the Data Quality Configuration Package


This section describes how to respond to problematic log messages in the DQ_OIT_CONFIG
package:

Example

Log message Action required from Installer: Raw table RPPLOAD has managed element column BSC but its CFG table PCU_CFG has managed element column PCU. The data in these columns must match otherwise the managed columns must be corrected manually. {Owner=ERICSSON_GSM}.

Cause This states that the managed element configured for the raw table RPPLOAD is different from the managed element defined for its CFG table PCU_CFG. Data Quality will allow these column names to be different, but they must represent the same data. In the example above, the BSC column represents different data to the PCU column.

Solution Change the managed element column defined for the raw or CFG table (or both) in the Data Quality GUI. To do this:
1. In the Data Quality Console, in the tree view, click Data Source, Schemas and then the required schema.
2. Right-click the raw/CFG table and, from the menu that appears, click Properties.
3. Click the Columns tab.
4. Filter by Managed Element to see the columns currently defined for managed element.
5. Right-click a column and, from the menu that appears, click Managed Element. The managed element column should be the parent element of the data (for example, the BSC for cell data in a 2G network).
6. To remove a column from the managed element list, right-click the column and deselect the Managed Element option. The managed element of the raw table must match the managed element of its CFG table (defined in the General tab) for Data Quality to work correctly.

The Data Quality Configuration package will attempt to use the raw table's GPI configuration in the Loaded Counters sheet in order to determine the managed element column. The CFG table's managed element will be set to the most commonly used GPI column for the raw tables associated with it. If this GPI column does not exist in the CFG table, it will use the next most commonly used GPI column, and so on. The raw table's managed element column will be set to the GPI column if defined.

If the GPI columns are not defined, then the configuration package will use the first PK column which is not of the date type (for raw and CFG tables).

Tip: The DQ_OIT_CONFIG package will work best (and require minimum manual configuration) if:
• The GPI column is defined for every raw table in the OIT Interface template Excel spreadsheet
• The GPI column also exists with the same name in the CFG table that has been defined for that raw table


Saving Configuration Information to Microsoft Excel


In the Data Quality Console, you can save your configuration information to an Excel file. To do
this:

1. In the Data Quality Console, in the tree view, select the folder that contains the
configuration information that you want to save to Excel.

2. In the left-hand pane, right-click and, from the menu that appears, click Save to Excel.

3. In the Save As dialog box that appears, browse to the appropriate folder, type a name, and
click Save.

The information is saved as an Excel file.

About the Standard Data Quality Reports


This section describes the reports that are produced for the different types of data quality reporting.

Availability

There are two reports for availability:

Report name Database table Description

Statistics DQP_AVAIL_STATS Provides overall statistics related to availability; for example, the number of new data elements (in other words, in the raw table but not in the CFG table).

Elements DQP_AVAIL_ELEMENT Describes each missing or new element.

The Statistics report provides the following information:

Database Field Description

DQP_CONFIG_ID The configuration ID, corresponding to the DQP_CONFIG table or the DQP_AVAILABILITY_CONFIG view (also available in the GUI).
LEVEL_CITM The data level - table, managed or reported.
DATETIME_INS The date and time of the report.
PERIOD_START The start of the processing period, defined by the user when they called the package.
PERIOD_END The end of the processing period, defined by the user when they called the package.
TOTAL_CFG_ELEMENTS The total number of elements at this level in the CFG table.
TOTAL_DATA_ELEMENTS The total number of elements at this level in the raw table.


CFG_ELEMENTS_NOT_IN_DATA The total number of missing elements - in other words, the number of elements defined in the CFG table that were not found in the raw table.
DATA_ELEMENTS_NOT_IN_CFG The total number of new elements - in other words, the number of elements found in the raw table that were not defined in the CFG table.

The Elements report provides the following information:

Database Field Description

DQP_CONFIG_ID The configuration ID, corresponding to the DQP_CONFIG table or the DQP_AVAILABILITY_CONFIG view (also available in the GUI).
LEVEL_CITM The data level - table, managed or reported.
DATETIME_INS The date and time of the report.
ELEMENT_NAME The name of the element.
PERIOD_START The start of the processing period, defined by the user when they called the package.
PERIOD_END The end of the processing period, defined by the user when they called the package.
ELEMENT_TYPE Indicates whether the element is missing (M) or new (N).
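
As a hedged example, the missing elements recorded today could be listed with a query such as the following; the OSSBACKEND schema is assumed, so adjust it if the Data Quality tables are installed elsewhere:

SELECT element_name, period_start, period_end
FROM   OSSBACKEND.DQP_AVAIL_ELEMENT
WHERE  element_type = 'M'                 -- 'M' = missing, 'N' = new
AND    datetime_ins >= TRUNC(SYSDATE);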

Nullness

There is a single report for nullness, which corresponds to the DQP_NULL_STATS database table,
and reports on the total number of rows and the number of null rows, per table, per level and per
column.

Database Field Description

DQP_CONFIG_ID The configuration ID, corresponding to the DQP_CONFIG table or the DQP_AVAILABILITY_CONFIG view (also available in the GUI).
LEVEL_CITM The data level - table, managed or reported.
PERIOD_START The start of the processing period, defined by the user when they called the package.
PERIOD_END The end of the processing period, defined by the user when they called the package.
COLUMN_NAME The name of the column being reported on.
ELEMENT_NAME The name of the element being reported on. If this consists of more than one element (depending on the level that is being reported), the ELEMENT_NAME will be in the format BSC1-CELL1.
TOTAL_ROWS The total number of rows being reported on.
NULL_ROWS The total number of null rows found.
DATETIME_INS The date and time of the report.
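
Similarly, a hedged example of querying yesterday's nullness results, again assuming the OSSBACKEND schema:

SELECT column_name, element_name, total_rows, null_rows
FROM   OSSBACKEND.DQP_NULL_STATS
WHERE  period_start >= TRUNC(SYSDATE) - 1
ORDER  BY null_rows DESC;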


About the Enhanced Data Quality Reports


This section describes the reports that are produced for the different types of data quality reporting.

About the Raw Table Data Integrity Report


This report provides Data Quality statistics as calculated by the Data Quality package.

This report is divided into 6 tabs, described in the following table:

Tab Contents

1 An introduction to the report and each tab. The report has filters on the interface (or schema) name
and the date range.
2 and 3 The Data Quality statistics:
• At table level
• At managed element level
4 The missing elements.
5 The incomplete elements.
6 A summary of the processing for the tables.

You should investigate any tables where the data quality is poor. This could be caused by files not
being made available on the OSS for OPTIMA to collect, or by errors within the OPTIMA mediation
process.

The Managed Element listed will indicate the files missing, depending on its status:
• If the managed elements are completely missing, no files have been loaded into that table
for the day.
• If the managed elements are incomplete, not all of the files have been loaded - the period
will indicate which ones.
• If a managed element is missing from all the tables within a particular interface, it suggests
that the files were never provided to OPTIMA. However you should confirm this by looking
at the FTP logs and the OSS server.
• If managed elements are loaded on some tables and not others, this would indicate that
there are problems with the mediation process for that table, or that the measurement
object(s) associated with that table are no longer being collected.

Note: In order for tables to be displayed fully in the reports, you must have executed both
Availability and Completeness for that table. If a table is missing, then you should check the last tab
to verify if the processes have run for that table. The Health Check report (described in About the
Health Check Report on page 365) can then be used for further troubleshooting.


About the Summary Table Data Integrity Report


This report provides Data Quality statistics as calculated by the Data Quality package.

This report is divided into 7 tabs, described in the following table:

Tab Contents

1 An introduction to the report and each tab. The report has filters on the interface (or
schema) name and the date range.
2-4 The Data Quality statistics.
5 The missing elements.
6 The incomplete elements.
7 A summary of the processing for the tables.

You should investigate any tables where the data quality is poor. This could be caused by files not
being made available on the OSS for OPTIMA to collect, by errors within the OPTIMA mediation
process or by errors in the OPTIMA Summary process.

The statistics also show the number of records available in the source table for the summaries. If
the number of records found in the summary table matches this value, this indicates that the
Summary process is OK, and that any missing records have not been loaded. Therefore, you
should focus your investigations on the raw tables.

Note: In order for tables to be displayed fully in the reports, you must have executed both
Availability and Completeness for that table. If a table is missing, then you should check the last tab
to verify if the processes have run for that table. The Health Check report (described in About the
Health Check Report on page 365) can then be used for further troubleshooting.

About the System Summary Report


This report provides an interface-level view of the Data Quality statistics.

The results show the various statistics rolled up for each interface per day, by simply aggregating
the results from all the tables within that interface.

It also shows the percentage of tables on which the result is based.

About the File Availability Report


This report shows the managed elements for the selected interface and the percentage of periods
received for the day.

The managed element is the element level at which the files are produced, and therefore gives a
direct indication of the files that have been received and loaded into the OPTIMA database.

About the Health Check Report


This report provides detail on the processing and scheduling of the Data Quality package. This
information can be used to help troubleshooting, performance analysis or just to monitor the
processes.


The report is divided into 7 tabs, described in the following table:

Tab Contents

1 An introduction to the report and each tab.

2 An overview of the processes at the interface level, showing:


• The number of tables processed successfully for each Data Quality process for the day
• The number of messages and errors found
• Some performance statistics
3 A similar overview of the processes at the table level.
4 Any errors encountered when running the Data Quality process.
For more information on commonly-found errors and their resolutions, see Troubleshooting
the Data Quality Package on page 369.
5 The Data Quality configuration for each process for each table.
Important: It is recommended that you enable the Availability and Completeness processes
at the Table and Managed Element level, and disable the Nullness and Last Load processes.
If this minimum enabled state is not met, then tables will not have the necessary statistics
collected and so will not appear in the Data Quality reports.
6 The Oracle schedules configured to run the Data Quality processes.
These schedules must have been executed successfully in order for the statistics to be
collected.
7 Any data tables that are not configured in the Data Quality package.

Example Workflow for Using the Availability Package


This topic describes the suggested steps that you could follow to use the Availability package. In
this example, period (sub-daily) processing is used, rather than the default daily processing.

The rest of this section describes these steps, and the rest of the Data Quality package, in more
detail.

1. Ensure that you have installed your OPTIMA database.

2. Check the log file for errors and ensure that all package bodies in the OSSBACKEND schema
have compiled correctly.

3. Run the following package procedure to create the partitions, and check afterwards that they
have been created:

AIRCOM.OSS_MAINTENANCE.MAINTAIN_TABLE_PARTITIONS

4. Run opx_DAQ_WIN_420.exe.

5. Log in using the username 'OPTIMA_DQ_USER' and the corresponding password.


6. In the Data Quality Console dialog box, select the Data Source folder, and then the All folder.

7. In the right-hand pane, right click, and from the menu that appears, click Add DataSource.

8. In the Data Quality Console Wizard:


o Select the Oracle Data Dictionary and click Next
o Select the required data tables and CFG tables, and add them to the right-hand pane
using the right arrow button
o Click Finish

9. In the Data Quality Console dialog box, select the Schemas folder, and then the folder
corresponding to the schema name.

10. In the right-hand pane, right-click the CFG table that you want to use for data quality
processing, and from the menu that appears, click Properties.

11. In the dialog box that appears, click the Columns tab and configure the Managed Element
and Reported Element columns correctly.

12. In the Filter By Sets drop-down list, select Reported Element.

Ensure the column order is the same as will be defined for the raw table.

Tip: Use the Up and Down arrows to re-order the columns.


13. From the Filter By Sets drop-down list, select Managed Element, and ensure the column
order is the same as will be defined for the raw table.

14. Click Apply and then click OK.

15. Right click the raw table, and from the menu that appears, click Properties.

16. In the dialog box that appears, on the General tab, set the loading type and granularity.

17. Select the Use Topology Table option, and select the CFG table that you have chosen to
use for data quality processing.
18. On the Columns tab, configure the Managed Element, Reported Element and Reported
Datetime columns correctly.

19. In the Filter By Sets drop-down list, select each of the three groups in turn, and ensure the
column order matches that used in the CFG table for the same level.

20. On the DQ Period Processing tab, in the Availability pane, select the levels on which you
want to report.

21. Click the View Config/Scheduling info button to check the configuration and retrieve the
DQP_CONFIG_ID.

22. Click Apply to save the configuration, and then click OK.

23. On all datasources (CFG and data/raw) grant the select privilege to ossbackend:

grant select on <schema>.<table> to ossbackend;

24. Check that the data quality tables DQP_AVAIL_ELEMENT and DQP_AVAIL_STATS have
partitions for the start date of the period you wish to check.

25. In an SQL script, run Data Quality Period Processing for availability, using the following
command as the OSSBACKEND user:

execute OSSBACKEND.DQ_PERIOD_PROCESSING.PROCESS_AVAIL (
:DQP_CONFIG_ID, :PERIOD_START, :PERIOD_END );

Where

DQP_CONFIG_ID is the configuration ID, specifying the data source configuration that you
are using (as defined in the Data Source Properties dialog box)

PERIOD_START and PERIOD_END define the range of the time period you want to report
on.
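
For example, to illustrate steps 24 and 25 - a sketch, assuming the Data Quality tables are owned
by the OSSBACKEND schema, a DQP_CONFIG_ID of 101, and a reporting day of 1 June 2014:

-- Step 24 (sketch): confirm that partitions exist for the reporting day
select table_name, partition_name
from dba_tab_partitions
where table_owner = 'OSSBACKEND'
and table_name in ('DQP_AVAIL_ELEMENT', 'DQP_AVAIL_STATS');

-- Step 25 (sketch): run availability processing for that day as the ossbackend user
execute OSSBACKEND.DQ_PERIOD_PROCESSING.PROCESS_AVAIL ( 101, -
TO_DATE('01/06/2014 00:00', 'DD/MM/YYYY HH24:MI'), -
TO_DATE('02/06/2014 00:00', 'DD/MM/YYYY HH24:MI') );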


Troubleshooting the Data Quality Package


Problems with the Data Quality package are often due to configuration errors - typically, tables are
missing from reports because processes have not been enabled correctly.

To ensure that tables appear in reports, the following must be enabled:


• Availability process at the Table and Managed Element levels

• Completeness process at the Table and Managed Element levels


For future troubleshooting, you could also configure, but not enable, the following:
• Nullness at the Table and Managed Element levels

• Last Load at the Table and Managed Element levels


This table shows an example of the expected configuration that would be displayed in the Health
Check report:

Interface Name   Table      Process Group                           Process State   Table Level   Managed Element Level   Reported Element Level

ALU_CSCORE       CDRSTATS   Availability - 47 ALU_CSCORE.CDRSTATS   Enabled         Enabled       Enabled                 Disabled
ALU_CSCORE       CDRSTATS   Completeness - 46 ALU_CSCORE.CDRSTATS   Enabled         Enabled       Enabled                 Disabled
ALU_CSCORE       CDRSTATS   Last Load - 49 ALU_CSCORE.CDRSTATS      Disabled        Disabled      Disabled                Disabled
ALU_CSCORE       CDRSTATS   Nullness - 48 ALU_CSCORE.CDRSTATS       Disabled        Disabled      Disabled                Disabled


12 Scheduling Oracle Jobs

This chapter describes how Oracle jobs are scheduled.

About Scheduling
For OPTIMA to be fully effective, you must create Oracle DBMS_SCHEDULER jobs and run them
in the background.

The common processes that are always required and need to be scheduled are:
• AIRCOM.OSS_MAINTENANCE.RUN_OSS_MAINTENANCE_DAILY ('LOGS')
• AIRCOM.OPTIMA_SUMMARY.DO_WORK

Multiple versions of the DO_WORK process may be needed (with unique names); the
number required is determined by the overall number of summaries and the resources
available to process them.

The parameter 'SUMMARY_SCHEDULES_TO_PROCESS' can also be added into the
AIRCOM.OPTIMA_COMMON table to control the number of summaries processed by each
DO_WORK job before closing down.

These processes may also be needed dependent on the individual customer setup:
• AIRCOM.OPTIMA_SANDBOX.REMOVE_EXPIRED_OBJECTS
• AIRCOM.OSS_MAINTENANCE.MAINTAIN_TABLESPACES
• AIRCOM.OPTIMA_ALARMS.MAINTAIN_ALARMS_TABLE

For each Vendor Interface the following processes will need to be scheduled:
• AIRCOM.OSS_MAINTENANCE.RUN_OSS_MAINTENANCE_DAILY ('<schema>')
• All CFG population procedures for that schema.
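
As an illustration, the common daily maintenance process could be scheduled with a
DBMS_SCHEDULER job such as the following (a sketch - the job name and start hour are
illustrative, not prescribed):

begin
  dbms_scheduler.create_job(
    job_name        => 'AIRCOM.OSS_MAINT_LOGS_DAILY'  -- illustrative name
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'BEGIN AIRCOM.OSS_MAINTENANCE.RUN_OSS_MAINTENANCE_DAILY(''LOGS''); END;'
   ,start_date      => SYSDATE
   ,repeat_interval => 'FREQ=DAILY;BYHOUR=01'
   ,enabled         => TRUE
   ,comments        => 'Daily OSS maintenance for the LOGS schema');
end;
/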

About Gather Stats


The configuration of the Gather Stats package depends on four metadata tables:

1. STATS_CONTROL - Controls full execution of the package, and comes with the default
settings that will be used to populate the metadata tables

2. STATS_SCHEMA_METADATA - Controls utilization at the schema level

3. STATS_TABLE_METADATA - Controls utilization at the table level

4. STATS_EXCLUDE_TABLES - Controls which tables will not be included in the gather stats
process

These metadata tables must be populated for the gather stats process to work. This is
automatically done when you run the Installation/Upgrade scripts for OPTIMA 8.0.

When new Interfaces are added via the OIT, they are automatically added to the gather stats
process.

The STATS_CONTROL table comes with pre-defined settings for gathering your statistics. For
most AIRCOM OPTIMA installations these default settings are appropriate.
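
To review these defaults and the current configuration, you can query the metadata tables directly -
for example (a sketch, assuming the package is installed in the AIRCOM schema):

select * from aircom.stats_control;
select * from aircom.stats_schema_metadata;
select * from aircom.stats_exclude_tables;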


Creating a Daily Gather Stats Job


To create the Gather Stats job that will run daily (at 03:00) and gather statistics for all objects
configured in the metadata tables, for all schemas:

begin
dbms_scheduler.create_job(
job_name => 'AIRCOM.GATHER_STATS_OPTIMA'
,job_type => 'PLSQL_BLOCK'
,job_action => 'BEGIN GATHER_STATS.collect_schema_stats;
END;'
,start_date => SYSDATE+1
,repeat_interval => 'FREQ=DAILY;BYHOUR=03'
,enabled => TRUE
,comments => 'Gather Stats');
end;
/

Warning: The scheduled job should be created under the schema that the Gather Stats package
has been installed on. Failure to do so may prevent the job from running successfully.

To see if the last run for the scheduled job was successful:

select log_id, log_date, owner, job_name, job_class, operation, status
from all_scheduler_job_log
where owner='AIRCOM'
and job_name='GATHER_STATS_OPTIMA'
order by log_date desc;

If you want to force the job to run immediately after creation:

begin
dbms_scheduler.run_job('GATHER_STATS_OPTIMA',false);
end;
/

To verify that the job has terminated successfully:

select owner, job_name, session_id
from all_scheduler_running_jobs
where owner='AIRCOM'
and job_name='GATHER_STATS_OPTIMA';

To stop the job gracefully (force=false):

begin
dbms_scheduler.STOP_JOB('GATHER_STATS_OPTIMA');
end;
/


Creating a Gather Stats Job per Interface


Another option for configuring the Gather Stats job is to create a scheduled job per Vendor
Interface/Schema. You can use this PL/SQL block to create a job for all interfaces:

DECLARE
  vhour varchar2(50);
BEGIN
  for i in (select schema_name, rownum num from stats_schema_metadata) loop
    -- stagger the daily start hour (00:00-03:00) across the schemas
    vhour := 'FREQ=DAILY;BYHOUR=0'||mod(i.num,4);
    begin
      -- drop the gather_stats job if it already exists
      dbms_scheduler.drop_job('GT_'||i.schema_name);
    exception
      when others then
        null;
    end;
    dbms_scheduler.create_job(
      job_name        => 'GT_'||i.schema_name
     ,job_type        => 'PLSQL_BLOCK'
     ,job_action      => 'BEGIN GATHER_STATS.collect_schema_stats(pschema=>'||q'[']'||i.schema_name||q'[']'||'); END;'
     ,start_date      => SYSDATE
     ,repeat_interval => vhour
     ,enabled         => TRUE
     ,comments        => 'Gather Stats');
  end loop;
END;
/

For RAC where node affinity is used, contact Product Support.

Disabling Oracle's Auto Gather Stats Jobs


Although not mandatory, it is recommended that you use a single Gather Statistics procedure to
manage your database statistics. The GATHER_STATS package will lock the schema stats whilst
collecting statistics. If you have chosen not to stop the default Oracle gather statistics jobs, there is
the risk of a concurrency issue.

To disable the default Oracle statistic gather jobs:

BEGIN
DBMS_AUTO_TASK_ADMIN.DISABLE(
client_name => 'auto optimizer stats collection',
operation => NULL,
window_name => NULL);
END;
/

To verify that the Autotask Background Job has been disabled successfully:

select client_name, operation_name, attributes, status
from DBA_AUTOTASK_OPERATION
where client_name = 'auto optimizer stats collection';

If you are unsure of the Oracle Database version that you have installed:

Select banner from v$version;


Gathering Dictionary Statistics


It is advisable to gather dictionary and fixed object statistics once a month. The following job
enables this functionality. Be sure to consult your Oracle DBA before performing this operation, as
it may already be in place for another job in the database. This PL/SQL block creates the
'GATHER_DICTIONARY_FIXED_STATS' job to run on the 1st day of every month at 01:00 in the
morning:

Begin
dbms_scheduler.create_job(
job_name => 'GATHER_DICTIONARY_FIXED_STATS'
,job_type => 'PLSQL_BLOCK'
,job_action => 'BEGIN
gather_stats.collect_dictionary_stats; END;'
,start_date => SYSDATE
,repeat_interval => 'FREQ=MONTHLY;BYMONTHDAY=01;BYHOUR=01'
,enabled => TRUE
,comments => 'Gather Stats');
end;
/

Monitoring the Gather Stats Job


There are a number of log tables that can be used to monitor the execution of the Gather_Stats job.
Currently there are no public synonyms created for these log tables, so other users can only access
them by prefixing the owning schema to the table name.

The default retention for these log tables is 30 days.

The log tables to monitor are:

STATS_EVENT_LOG

Purpose: Log all event level information.

STATS_TABLES_LOG

Purpose: Log table level information for each table/interface, with execution times and CPU/IO
metrics.
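
For example, assuming the package is installed in the AIRCOM schema (remember that there are
no public synonyms, so the owning schema must be prefixed), you could review recent activity with:

select * from aircom.stats_event_log;
select * from aircom.stats_tables_log;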


13 About the OSS Maintenance Package

The OSS Maintenance package enables you to maintain your database. You can use the package
to:
• Maintain partitions
• Maintain tablespaces
• Gather statistics

Warning: OSS Maintenance should not be used to manage the data files for a database using
ASM.

Installing the OSS Maintenance Package


To install the OSS Maintenance package for the first time in an OPTIMA database:

1. Ensure that you have installed your OPTIMA database.

2. Configure the following tables:

o The DD_LUNS table (see Configuring the DD_LUNS Table on page 384)
o The MAINTAIN_TABLE table (see Configuring Partition Maintenance on page 381)
o The MAINTAIN_TABLESPACE table (see Configuring the MAINTAIN_TABLESPACE Table on page 385)

3. Create jobs to run the key maintenance procedures.

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

Maintaining Table Partitions


TEOCO recommends partitioning the majority of the tables in the database, including vendor raw
data tables, summary tables and tables used to store log messages.

Partitioning involves dividing table data across data partitions according to values held in one or
more table columns. This facilitates the writing and retrieval of data, administration, index
placement and query processing.

The size of the partitions will vary, depending upon the size and quantity of the data that is loaded
into the tables. For example, raw data tables storing hourly data could be partitioned into daily
partitions, daily summary tables could be partitioned into weekly partitions and monthly summary
tables could be partitioned into monthly or yearly partitions. The OSS Maintenance package
enables you to maintain the table partitions in your database.

As a guideline, raw interface data is typically stored for 1 to 3 months, whereas summary and busy
hour data is stored for 2 to 5 years. The physical storage of the database server needs to be able
to accommodate the desired retention periods.


Partition maintenance is implemented in the following ways:


• New partitions are created in advance, so that they will be available as the tables grow. As
the number of future partitions that can be created during the installation is limited by
available disk space, new partitions must be automatically generated.
• Old partitions are dropped when they are no longer required. As data is only needed for a
certain amount of time, for example 1 year, partitions that contain obsolete data need to be
removed.

To use partition maintenance, tables must be created with a single partition in the very distant past
using the naming format PYYYYMMDD, for example P19700101. By default, the first partition created
by the OPTIMA Installation Tool uses this date. This is because Oracle cannot move data between
partitions, so the first partition must be dated at a time when there is no possibility of data existing.

Note: Sub-partition templates will be gathered by the CASCADE option. However, the OSS
Maintenance package does not currently support sub-partition by interval.

How Partition Maintenance Works


Partition maintenance involves the following steps:

1. Add new partitions.

2. Delete old partitions.

Important: Partition maintenance uses the following basic principles:


• For all PARTTYPEs, when the OSS Maintenance application is run, the partitions will be
truncated back to the last whole period - for example, if you are using monthly partitioning
and start the OSS Maintenance application on the 4th day of the month, then the first
partition will start on the 1st day of the month. If you are using daily partitioning and start
the OSS Maintenance application at 08:00, then the first partition will start at 00:00 the
same day.
• Monthly partitioning uses calendar months - that is, the interval between two dates (19/2 to
19/3, 22/9 to 22/10 and so on) - rather than a particular number of days or weeks.
• Yearly partitioning uses calendar years - that is, the interval between two dates (19/2/2009
to 19/2/2010, 22/9/2009 to 22/9/2010 and so on) - rather than a particular number of weeks
or months.

Adding New Partitions

Based on the PARTTYPE that has been set, the OSS Maintenance application will add partitions
until it reaches the PARTADVANCE threshold.

For example, if the PARTTYPE is 1 (Daily) and the PARTADVANCE is 6, then the OSS Maintenance
application will create 6 daily partitions in advance of the sysdate.

Deleting Old Partitions

Based on the PARTTYPE that has been set, the OSS Maintenance application will delete any
partitions that are older than the PARTRETENTION threshold, in relation to the sysdate.

For example, if the PARTTYPE is 1 (Daily) and the PARTRETENTIONPERIOD is 6, then the OSS
Maintenance application will delete any partitions that are more than 6 days old (in other words,
more than 6 partitions behind the sysdate).
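
To verify the partitions that currently exist for a maintained table, you can query the Oracle data
dictionary - for example, for the AIRCOM.CELLSTATS table used as an example elsewhere in this
chapter:

select partition_name, high_value
from dba_tab_partitions
where table_owner = 'AIRCOM'
and table_name = 'CELLSTATS'
order by partition_position;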


Changing from one PARTTYPE to another

When using OPTIMA, you may find that you need to change from one PARTTYPE to another (for
example, from hourly partitioning to daily, or vice versa). If you intend to do this, you should first
consider the following:
• Because each existing partition may contain data, the OSS Maintenance application cannot
create new partitions within an existing range.

Therefore, it will not create any new partitions using the new PARTTYPE until all of the
partitions for the old PARTTYPE have been wholly aged out - that is, until the sysdate has
reached the last of the future partitions determined by the PARTADVANCE.

As a basic example, consider the following scenario where you are currently using monthly
partitioning:

Month               Feb   Mar      Apr         May      June     July     Aug

Monthly Partition         P01/03   P01/04      P01/05   P01/06   P01/07   P01/08
                                   (sysdate)                              (PARTADVANCE)

In this scenario:
o The sysdate is 01/04/2010
o The PARTADVANCE is 4

Therefore, the partitions that will be created for the future are P20100501, P20100601,
P20100701 and P20100801.

If you then switch to weekly partitioning, the OSS Maintenance application will not
start to create weekly partitions until the sysdate has reached 08/07/2010. At this point,
the period covered by the last monthly partition in the range (P20100801) has been reached,
and because the PARTADVANCE is 4 (weeks), the first weekly partition (P20100808) can be
created.

Month               Feb   Mar      Apr      May      June     July                       Aug
                                                              Wk1   Wk2      Wk3   Wk4

Monthly Partition         P01/03   P01/04   P01/05   P01/06   P01/07                     P01/08
                                                                                         (final monthly partition)
                                                                    sysdate
                                                                    (08/07)

Weekly Partition                                                                         P08/08   P15/08   P22/08
                                                                                         (P08/08 = first weekly partition)

Note: It is often the case that if the PARTTYPE is changed, then the PARTADVANCE and
PARTRETENTIONPERIOD are changed in order to keep the ranges for advance
partitioning and partition retention the same. In other words, a change from monthly to
weekly would mean multiplying by 4 (as a PARTADVANCE of 4 months = 16 weeks), or a
change from hourly to daily would mean dividing by 24 (as a PARTRETENTION of 48
hours = 2 days).

• The OSS Maintenance application cannot delete partitions until they have wholly expired
and moved outside the PARTRETENTIONPERIOD date.

Looking at the same example scenario above, where:


o The sysdate is 01/04/2010
o The PARTRETENTIONPERIOD is 3

If you are using monthly partitioning, the old partitions that are kept are P20100301,
P20100201 and P20100101.

Month               Dec   Jan                       Feb      Mar      Apr

Monthly Partition         P01/01                    P01/02   P01/03   P01/04
                          (PARTRETENTIONPERIOD)                       (sysdate)

However, if you have changed from monthly partitioning to weekly partitioning, then the
OSS Maintenance application cannot start to delete partitions until the periods no longer
overlap - in other words, only complete partitions can be deleted.

Month               Dec   Jan      Feb      Mar      Apr                          May
                                                     Wk1   Wk2       Wk3   Wk4

Monthly Partition         P01/01   P01/02   P01/03   P01/04
                                                           sysdate
                                                           (08/04)                (PARTRETENTIONPERIOD)
                                                                                  (01/05)

Therefore, if you keep the same PARTRETENTIONPERIOD of 3, then P20100101, P20100201
and P20100301 are no longer kept. Because part of the data for the next partition P20100401
falls within the 3-week threshold, this partition is kept, and will not be deleted until all of its data
falls outside the PARTRETENTIONPERIOD - in other words, 4 weeks later (01/05/2010). The
application will not be able to delete data in weekly partitions until weekly partitioning begins
after the range has completed, and 3 weeks of weekly partitions have elapsed - P20100808 will
be the first one to be deleted, on 01/09/2010.


What Happens When Partition Maintenance is Run Intermittently


If you do not run partition maintenance regularly, 'gaps' within the data can be created where one
set of partitions ends and another set of partitions begins:

Month        Jan   Feb      Mar      April    May      June     July   Aug   Sept                     Oct
                                                                             Wk1   Wk2   Wk3   Wk4

Partition          P01/02   P01/03   P01/04   P01/05   P01/06
                                                                             (sysdate)

In this example, monthly partitioning has created a number of partitions from February to June, but
then the OSS maintenance application has not been run for three months. If the next time that it is
run is the start of September (using weekly partitions), then there will be an unpartitioned 'gap' of
data between the start of June and the end of the first week in September that has to be filled.

To do this, it starts to create filler partitions using the current PARTTYPE, starting from the date of
the last partition created and ending at the date of the last partition that will be created according to
the PARTADVANCE. In our example above, if the PARTADVANCE is 3, when the OSS
Maintenance application is restarted on Sept 1st, weekly filler partitions will be created from 1st
June to 22nd September.

Configuring Partition Maintenance


You configure partition maintenance by editing the MAINTAIN_TABLE table with a suitable SQL
Editor, for example TOAD.

This table should contain one row for every partitioned PM table (raw or summary) in the OPTIMA
database.

Important:
• Before configuring partition maintenance, ensure that the COMMON_LOGS table is added to
the MAINTAIN_TABLE table
• If a partitioned table is not configured in MAINTAIN_TABLE then it will not have statistics
gathered for it

The following table describes the configuration options:

Column                   Description                                                          Example Value         Populated by OIT

SCHEMA                   The schema (user) of the table to maintain.                          AIRCOM                YES

TABLE_NAME               The name of the table to maintain.                                   CELLSTATS             YES

PARTRETENTION            The number of partitions before the sysdate to retain.               14                    YES
                         Earlier partitions will be deleted.

PARTADVANCE              The number of partitions after the sysdate to create.                7                     YES
                         TEOCO recommends creating several future partitions to
                         avoid any loss of data if the OSS Maintenance package
                         should fail to run.

PARTTYPE                 The type of partitions to create. The available options are:         1                     YES
                         1 = Daily
                         2 = Weekly
                         3 = Monthly
                         4 = Yearly
                         5 = Hourly

LASTRUNDATE              The sysdate when the table partition maintenance last ran.           28/06/2009 11:31:00   YES
                         This column is updated automatically by the OSS
                         Maintenance package. This should be set to NULL if the
                         maintenance has never been run before.

NEXTRUNDATE              The sysdate when the table partition maintenance should              29/06/2009 00:00:00   YES
                         next run. This column is updated automatically by the OSS
                         Maintenance package. A date before SYSDATE indicates
                         that this table should be processed.

SCHEDULEPERIOD           Updates the NEXTRUNDATE with the correct date after the              1                     YES
                         partition maintenance has processed the table. The
                         available options are:
                         1 = Daily (NEXTRUNDATE is midnight the following day)
                         2 = Weekly (NEXTRUNDATE is Monday the following week)
                         3 = Monthly (NEXTRUNDATE is on the first day of the
                         following month)
                         4 = Yearly (NEXTRUNDATE is on the first day of the
                         following year)

PRIORITY                 The priority in which the tables are maintained. The                 1                     YES
                         COMMON_LOGS table is always partitioned first, then the
                         tables are partitioned in order of priority, in ascending order.

PARTITION_STATS_METHOD   This column should generally be left NULL or set to                  NULL                  NO
                         STALE - both of these values will gather stale partition
                         stats on the table. For tables that take a long time to
                         process, set to COPY.

MIN_DATE                 The minimum partition for the table.                                                       NO
                         Important: This is used internally by the OSS Maintenance
                         package, so do not edit this column.

REBUILD_INDEXES          Indicates whether the indexes will be rebuilt or not.                NULL                  NO
                         Important: It is currently not recommended to configure
                         rebuild indexes, so leave this column NULL.


Important: When you run the OSS maintenance package for the first time, then the columns not
populated automatically by the OIT should be left NULL. You can then use these columns later for
tuning - for more information, see Tuning the OSS Maintenance Package on page 389.
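
For example, a row for the AIRCOM.CELLSTATS table could be added as follows - a sketch,
assuming the column names shown in the table above, and leaving the non-OIT columns NULL:

insert into aircom.maintain_table
  (schema, table_name, partretention, partadvance, parttype,
   lastrundate, nextrundate, scheduleperiod, priority)
values
  ('AIRCOM', 'CELLSTATS', 14, 7, 1, NULL, NULL, 1, 1);
commit;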

Scheduling Partition Maintenance


To schedule partition maintenance:

Create an Oracle job to run the following:

AIRCOM.OSS_MAINTENANCE.MAINTAIN_TABLE_PARTITIONS;

Notes:
• This is normally done inside the AIRCOM schema
• No parameters are passed to the procedure
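
For example, such a job could be created as follows (the job name and start time are illustrative):

begin
  dbms_scheduler.create_job(
    job_name        => 'AIRCOM.MAINTAIN_PARTITIONS_JOB'  -- illustrative name
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'BEGIN AIRCOM.OSS_MAINTENANCE.MAINTAIN_TABLE_PARTITIONS; END;'
   ,start_date      => SYSDATE
   ,repeat_interval => 'FREQ=DAILY;BYHOUR=00'
   ,enabled         => TRUE
   ,comments        => 'Daily partition maintenance');
end;
/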

Tip: If you want to run partition maintenance as part of the entire OSS Maintenance package, see
Scheduling the OSS Maintenance Package on page 388.

Maintaining Tablespaces
You can use the OSS Maintenance package to maintain the database’s tablespaces.

The tablespace maintenance procedures add new datafiles to tablespaces whose space is running
out. Datafiles are created using the following parameters:

SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 2000M

This means that datafiles are created with an initial size of 100MB and extend by 100MB at a time
to a maximum size of 2000MB. New datafiles are added before the maximum datafile size of
2000MB is reached. The location of the directories where the datafiles are added is stored in the
DD_LUNS table.
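
For reference, this is equivalent to a datafile being added with a statement of the following form
(the tablespace and file names here are illustrative):

ALTER TABLESPACE ERI_CELLSTATS_D
  ADD DATAFILE 'C:\optima_data\lun1\eri_cellstats_d_02.dbf'
  SIZE 100M AUTOEXTEND ON NEXT 100M MAXSIZE 2000M;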

There are four different tablespace maintenance procedures, which are described in the following
table:

Procedure Description
Maintain_Tablespaces_AllLUNS Adds a datafile for every LUN defined when a tablespace is
running out of space (Oracle striping).
Maintain_Tablespaces_By_Size Adds one new datafile on the least occupied LUN of the correct
type.
Maintain_Tablespaces_SingleLUN Creates one datafile at a time on a single LUN. Future datafiles
for the tablespace will remain on the same LUN.

Maintain_Tablespaces Processes all tablespaces using the method configured for each
tablespace. The above three procedures are then called within
this procedure.

Important: The OSS Maintenance package is only required for OPTIMA databases using non-ASM
(LUNs) storage. For OPTIMA databases using ASM, it is important to ensure that the datafiles for
your tablespaces are created as ‘max size unlimited’, so that there is no need to manage the
tablespaces.


When using ASM, the ASM Disk group views (V$ASM_DISKGROUP/V$ASM_DISK) should be
regularly monitored by an Oracle DBA to view all available storage.

Configuring Tablespace Maintenance


You configure tablespace maintenance by manually populating the DD_LUNS and
MAINTAIN_TABLESPACE tables with a suitable SQL Editor, for example TOAD.

Configuring the DD_LUNS Table


The following table describes the DD_LUNS table configuration options:

Note: Type one row per location on disk where a tablespace can be created.

In This Column   Do This                                                       For Example

LUN_TYPE Type a single letter defining the type of data which the D
LUN will store. The available LUN types are described
in a separate table below.
LUN_PATH Type the path (normally a location on disk) where the C:\optima_data\lun1\
datafiles will be put.
Important: Ensure the last character in the path is the
directory separator, that is, "\" in Windows and "/" in
UNIX.
FULL This column is reserved for future use. Type "N" for this N
column.
CAPACITY_MB Type the amount of space (in MB) on the LUN disk 8000
location which is available to be used by datafiles.
Notes:
• Do not include the entire disk space.
• Ensure that the specified amount of disk space will
be available and will not be used by other
processes.
Tip: As datafiles have a maximum size of 2000 MB,
TEOCO recommends defining this capacity as a
multiple of 2000.

The following LUN types are available:

LUN_TYPE   Description             Maintain Tablespaces?

S          System tablespaces      N
D          Data tablespaces        Y
I          Index tablespaces       Y
T          Temporary tablespaces   N

Important: Only LUN types D and I are maintained by the OSS Maintenance package.
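
For example, a data LUN could be registered as follows - a sketch, assuming that the DD_LUNS
table is owned by the AIRCOM schema:

insert into aircom.dd_luns (lun_type, lun_path, full, capacity_mb)
values ('D', 'C:\optima_data\lun1\', 'N', 8000);
commit;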


Configuring the MAINTAIN_TABLESPACE Table


The following table describes the MAINTAIN_TABLESPACE table configuration options:

Note: Type one row per tablespace in the database.

In This Column Do This Example Value

TABLESPACE_NAME Type the tablespace name. ERI_CELLSTATS_I


Important: The tablespace name must end
with the letter of its LUN_TYPE, that is, "D" for
data tablespaces and "I" for index tablespaces.
MAINTENANCE_TYPE Type a maintenance option for each 1
tablespace. The available values are:
0 = Do not maintain this tablespace
1 = Maintain by adding a datafile on the LUN of
the correct LUN_TYPE with the most space
remaining when the tablespace needs
extending.
2 = Maintain tablespace on a single LUN, all
data files will be in the LUN defined in the
SINGLE_LUN column.
3 = Maintain tablespace by adding a data file to
all the LUNs available when the tablespace is
running out of space. Only LUNS that have the
same LUN_TYPE as the tablespace will be
included.
SINGLE_LUN Type the LUN that is always used to add C:\optima_data\lun1\
datafiles to the tablespace.
Note: This column is only used when
MAINTENANCE_TYPE is set to 2
(SINGLE_LUN).

Tip: You can use the following query to populate the TABLESPACE_NAME column from the data
dictionary:

INSERT INTO MAINTAIN_TABLESPACE (TABLESPACE_NAME)
SELECT TABLESPACE_NAME FROM DBA_TABLESPACES
WHERE TABLESPACE_NAME NOT IN ('SYSTEM','RBS','USR','TEMP');
COMMIT;

TEOCO recommends that system and temporary tablespaces are not included in the
MAINTAIN_TABLESPACE table and that their rows are deleted after they have been populated.
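
After populating the table, set the maintenance method for each remaining tablespace - for
example, to maintain the ERI_CELLSTATS_I tablespace by adding datafiles on the least occupied
LUN (MAINTENANCE_TYPE 1), assuming the table is owned by the AIRCOM schema:

update aircom.maintain_tablespace
set maintenance_type = 1
where tablespace_name = 'ERI_CELLSTATS_I';
commit;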

Scheduling Tablespace Maintenance


TEOCO recommends scheduling the Maintain_Tablespaces procedure to run regularly and
frequently, for example, at hourly intervals. This is because if a tablespace becomes full, then no
data for any tables within that tablespace can be inserted.

To schedule tablespace maintenance, create an Oracle job to run the following:

AIRCOM.OSS_MAINTENANCE.MAINTAIN_TABLESPACES;


Notes:
• This is normally done inside the AIRCOM schema
• No parameters are passed to the procedure
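
For example, an hourly job could be created as follows (the job name is illustrative):

begin
  dbms_scheduler.create_job(
    job_name        => 'AIRCOM.MAINTAIN_TABLESPACES_JOB'  -- illustrative name
   ,job_type        => 'PLSQL_BLOCK'
   ,job_action      => 'BEGIN AIRCOM.OSS_MAINTENANCE.MAINTAIN_TABLESPACES; END;'
   ,start_date      => SYSDATE
   ,repeat_interval => 'FREQ=HOURLY'
   ,enabled         => TRUE
   ,comments        => 'Hourly tablespace maintenance');
end;
/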

Tip: If you want to run tablespace maintenance as part of the entire OSS Maintenance package,
see Scheduling the OSS Maintenance Package on page 388.

Gathering Statistics
Gathering schema statistics in the database is important in ensuring that queries run quickly and
efficiently.

Statistics are gathered using Oracle’s DBMS_STATS package, via the following OSS Maintenance
procedures:

Item Description

GATHER_SC_STATS Gathers statistics for the schema name that is passed as a parameter to the
procedure.
GATHER_TB_STATS Gathers statistics for the table name that is passed as a parameter to the
procedure.

Note: All gather schema parameters can be altered, to allow flexibility in the schema statistics that
are gathered. The defaults allow Oracle to determine what to gather and how much.

The following options are used when gathering statistics using
DBMS_STATS.GATHER_SCHEMA_STATS:

Option Value

ESTIMATE_PERCENT 15
METHOD_OPT FOR ALL INDEXED COLUMNS SIZE 254
DEGREE 2
CASCADE TRUE
OPTIONS GATHER AUTO

Configuring Statistics Gathering


In a standard database set up, statistics gathering does not require any configuration; however,
you need to pass certain details, as follows:
For this procedure Pass the following


GATHER_SC_STATS The name of the schema from which you are gathering statistics.
GATHER_TB_STATS The names of the schema and table from which you are gathering
statistics.


To enhance the flexibility of the GATHER_SC_STATS and GATHER_TB_STATS procedures, you
can also pass the following as parameters:

Item                               Description                            Default

PSCHEMA                            The name of the schema.                MANDATORY
PTABLE (GATHER_TB_STATS only)      The name of the table.                 MANDATORY
PPARTNAME (GATHER_TB_STATS only)   The name of the partition.             NULL
PESTIMATE                          The percentage of rows to estimate.    dbms_stats.auto_sample_size
PBLOCK_SAMPLE                      Random block or random row.            FALSE
PMETHOD_OPT                        Histogram collection.                  FOR ALL COLUMNS SIZE AUTO
PDEGREE                            Degree of parallelism.                 4
PGRANULARITY                       Level to collect if partitioned.       AUTO
PCASCADE                           Gather stats on indexes as well.       TRUE
POPTIONS (GATHER_SC_STATS only)    Which objects to collect on.           GATHER AUTO
PNO_INVALIDATE                     Invalidate dependent cursors.          TRUE
PFORCE                             Gather on locked stats.                TRUE

Scheduling Statistics Gathering


You can schedule the procedures for statistics gathering by setting up an Oracle job, normally
inside the AIRCOM schema, to run the appropriate procedure as follows:

For this procedure Set up the Oracle Job to run this

GATHER_ALL_STATS AIRCOM.OSS_MAINTENANCE.GATHER_ALL_STATS;
GATHER_SCHEMA_STATS AIRCOM.OSS_MAINTENANCE.GATHER_SCHEMA_STATS('AIRCOM');
GATHER_TABLE_STATS AIRCOM.OSS_MAINTENANCE.GATHER_TABLE_STATS('AIRCOM',
'TABLENAME');
COPY_TABLE_STATS AIRCOM.OSS_MAINTENANCE.COPY_TABLE_STATS('AIRCOM',
'TABLENAME');
GATHER_SC_STATS AIRCOM.OSS_MAINTENANCE.GATHER_SC_STATS('AIRCOM');
GATHER_TB_STATS AIRCOM.OSS_MAINTENANCE.GATHER_TB_STATS('AIRCOM',
'TABLENAME');
GATHER_DICTIONARY_STATS AIRCOM.OSS_MAINTENANCE.GATHER_DICTIONARY_STATS;
Note: This should be scheduled to run once per month, unless advised
otherwise.
GATHER_SYSTEM_STATS Does not need to be scheduled to run regularly. This procedure is for
use by DBAs and System Engineers only.

Important: If you need to change the defaults for GATHER_SC_STATS or GATHER_TB_STATS, then
you must use the relevant parameters.
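
For example, to gather statistics for a single table with a higher degree of parallelism, overriding
the PDEGREE default (a sketch using the parameters listed above; the table name is illustrative):

begin
  AIRCOM.OSS_MAINTENANCE.GATHER_TB_STATS(
    pschema => 'AIRCOM',
    ptable  => 'CELLSTATS',
    pdegree => 8);
end;
/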


A job needs to be set up for each schema to gather statistics for, replacing 'AIRCOM' with the
schema name. In most cases it is expected that GATHER_ALL_STATS will be used instead of
GATHER_SCHEMA_STATS.

Tip: If you want to run statistics gathering as part of the entire OSS Maintenance package, see
Scheduling the OSS Maintenance Package on page 388.

Scheduling the OSS Maintenance Package


As well as scheduling the separate components of the OSS Maintenance package on an individual
basis, you can also schedule to run the entire package as a whole, using the
run_oss_maintenance_daily function. This function should include the name of the schema to
be maintained; for example, for the NOKIA_GPRS schema, the function would be as follows:

begin

oss_maintenance.run_oss_maintenance_daily('NOKIA_GPRS');

end;

You should schedule this function to run on the following schemas:


• LOGS

Important: This schema must be maintained before any of the others, because it contains
the COMMON_LOGS table. This requires a valid partition for the current day to exist
before the OSS Maintenance package can run, as it needs to be able to log messages to
the COMMON_LOGS table.

• AIRCOM
• GLOBAL
• OSSBACKEND (if you are using the Data Quality module)
• All vendor schemas

Tip: You can create several concurrent DBMS_SCHEDULER jobs, one for each schema that you
want to maintain.
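
For example, a set of per-schema jobs could be created as follows - a sketch, with an illustrative
list of schemas and illustrative job names:

begin
  -- LOGS must be maintained first (it contains the COMMON_LOGS table),
  -- so it is scheduled an hour before the other schemas
  for s in (select column_value as schema_name
            from table(sys.odcivarchar2list('LOGS', 'AIRCOM', 'GLOBAL', 'NOKIA_GPRS'))) loop
    dbms_scheduler.create_job(
      job_name        => 'OSS_MAINT_' || s.schema_name
     ,job_type        => 'PLSQL_BLOCK'
     ,job_action      => 'BEGIN oss_maintenance.run_oss_maintenance_daily(''' || s.schema_name || '''); END;'
     ,start_date      => SYSDATE
     ,repeat_interval => case when s.schema_name = 'LOGS'
                              then 'FREQ=DAILY;BYHOUR=01'
                              else 'FREQ=DAILY;BYHOUR=02' end
     ,enabled         => TRUE
     ,comments        => 'Daily OSS maintenance for ' || s.schema_name);
  end loop;
end;
/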

Scheduling the OSS Maintenance Package for Very Small Databases

If your database is very small, it is possible (but not recommended) to maintain all of the vendor
schemas contained in the GLOBAL.VENDOR table at once. To do this, use the following function:

begin

oss_maintenance.run_oss_maintenance_daily('ALL');

end;


Turning Off Components in the OSS Maintenance Package


If you have scheduled to run the OSS Maintenance package as a whole, rather than as separate
components, it is possible to turn off particular components. You can choose to turn off the
rebuilding indexes and statistics gathering tasks.

To turn off the rebuilding of indexes:

In the OPTIMA_COMMON table, set the REBUILD_INDEXES parameter to N.

When the OSS Maintenance package is run, a message will appear explaining that the
rebuilding indexes component will not run due to this setting.

To turn off the gathering of statistics:

In the OPTIMA_COMMON table, set the STOP_GATHER_STATS parameter to 1.

When the OSS Maintenance package is run, a message will appear explaining that the
statistics gathering component will not run due to this setting.

Tuning the OSS Maintenance Package


After you have run the OSS Maintenance package for the first time, you can fine-tune it in order to
improve its effectiveness.

You can use the following methods:


• Modify the estimate percentage
• Force the gathering of GLOBAL schema stats
• Modify the PARTITION_STATS_METHOD

Modifying the Estimate Percentage


To enable Oracle to tune the percentage of rows to estimate when gathering statistics, the OSS
Maintenance package uses a recommended estimate percentage of
dbms_stats.auto_sample_size.

It is recommended that you usually run the package with this default, but you can override this
value by specifying a value for this parameter of the run_oss_maintenance_daily function.

For example, to force the OSS Maintenance package to use an estimate percentage of 1% for the
ERICSSON_UTRAN schema, you should schedule the following:

begin

oss_maintenance.run_oss_maintenance_daily('ERICSSON_UTRAN',1);

end;


Forcing the Gathering of GLOBAL Schema Stats


By default, the gathering of stale GLOBAL schema statistics will occur on the first day of the first
week of the month, based on your regional settings. For example, in the US this will run on the first
Sunday of the month, and in the UK it will be the first Monday of the month.

However, if you want to do this at any other time, you should force the OSS Maintenance package
to execute it. To do this, pass 'FORCE' as the third parameter to the
run_oss_maintenance_daily function, for example:

begin

oss_maintenance.run_oss_maintenance_daily(p_schema
=>'ERICSSON_UTRAN', p_processing_type => 'FORCE');

end;

Modifying the PARTITION_STATS_METHOD


When you first run the OSS Maintenance package, the PARTITION_STATS_METHOD parameter
of the MAINTAIN_TABLE table should be set to NULL.

However, after this initial run, you can modify this value.

Note: For more information on the MAINTAIN_TABLE table, see Configuring Partition Maintenance
on page 381.

Setting this parameter to either NULL or STALE will produce the same result - the table statistics
will be calculated by gather_stale_partition_stats. However, if you set the parameter to COPY, then
the table statistics will be calculated by copy_partition_stats and gather_copy_partition_stats.

Important: NULL or STALE should be used for as many tables as possible. In particular, monthly
and yearly tables will always use these values.
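
For example, to switch a slow table to the COPY method - a sketch, assuming the
MAINTAIN_TABLE table is owned by the AIRCOM schema and using the SCHEDULE_DAILY table
from the example messages below:

update aircom.maintain_table
set partition_stats_method = 'COPY'
where schema = 'AIRCOM'
and table_name = 'SCHEDULE_DAILY';
commit;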

The processing times are logged by this procedure in two different messages:

Message Number   Description                                Example

827470           The time taken to process the entire       gather_stale_partition_stats function: maintained schema
                 schema.                                    LOGS - processing time: 00:00:13
                 You should check this message for the
                 processing time of each schema. If a
                 schema is taking an unacceptable time
                 to process, then you should refer to
                 message 827491.

827491           The time taken to process each table or    gather_stale_partition_stats debug: Called
                 partition.                                 dbms_stats.gather_table_stats at table level
                 Any tables taking an unacceptable time     (Schema=AIRCOM,Table=SCHEDULE_DAILY) - processing
                 to process should be updated in the        time: 00:00:01
                 AIRCOM.MAINTAIN_TABLE table, with
                 the PARTITION_STATS_METHOD set             gather_stale_partition_stats debug: Called
                 to COPY.                                   dbms_stats.gather_table_stats at partition level
                                                            (Schema=LOGS,Table=COMMON_LOGS,Partition=P20101008)
                                                            - processing time: 00:00:01


Maintenance
TEOCO recommends the following basic maintenance checks are carried out to ensure the OSS
Maintenance package is functioning correctly:

Check                                       When             Why

For broken / failed OSS Maintenance         Daily            Checking jobs are executing correctly ensures that
package jobs.                                                maintenance is processing and that partitions and
                                                             tablespaces are allowing new data to be inserted.

The COMMON_LOGS tables for errors.          Daily / Weekly   Any errors in the COMMON_LOGS table relating to the
You can use the following query to                           OSS Maintenance package (PRID 000827001) must be
retrieve all important (Warning level                        investigated.
and above) messages logged in the
last week:
  select * from common_logs
  where severity>2
  and PRID='000827001'
  and datetime > sysdate-7
  order by datetime desc;

The Loader and summary loading for          Weekly           If partitions or tablespaces are not maintained, errors
tablespace and partition errors, and                         will be found in the Loader and OPTIMA Summary when
check partitions are available for                           data is inserted into the database.
tables.                                                      Check that there are partitions available and that the
                                                             tablespace has space remaining.
                                                             Check for any "Unable to extend tablespace/datafile"
                                                             errors in the loading / summarizing, which indicate that
                                                             tablespace maintenance is not working correctly.
                                                             Checking the max dates of raw and summary tables
                                                             ensures that the tables are still loading.

That the LAST_ANALYSED date for             Monthly          Checking that statistics have been recently generated
tables and indexes is recent and that                        ensures that queries on the tables are optimized
table statistics have been gathered. If                      correctly.
you are using TOAD, you can find this
information in the Schema Browser for
Tables/Indexes on the Stats/Size tab.

Troubleshooting the OSS Maintenance Package


The following table shows troubleshooting tips for the OSS Maintenance package:

Symptom                                        Solution(s)

OSS Maintenance procedures not running.        The Oracle job may not be running the scheduled jobs; check that the
                                               Oracle job is configured correctly and has not broken.
                                               The COMMON_LOGS and OSS_LOGGING package may not be installed
                                               correctly; check that the COMMON_LOGS has a public synonym, and the
                                               OSS_LOGGING package is installed (this is a requirement for the OSS
                                               Maintenance package).
                                               The DD_LUNS, MAINTAIN_TABLESPACE and MAINTAIN_TABLE tables
                                               may not be installed or configured correctly; check that these tables
                                               contain the correct configuration.

COMMON_LOGS table log entries                  There are a number of possible causes, which should be logged in the
contain error messages.                        error message. However, if you are unable to resolve the problem from
                                               the information in the error message, please contact TEOCO Support.

Raw or Summary tables fail to insert           The partition maintenance is not creating new partitions correctly. To
data with "ORA-14400: inserted                 resolve this:
partition key does not map to any              • Check that partitions exist for the table
partition" error.                              • Check that the table is included in the MAINTAIN_TABLE table
                                               • Check that MAINTAIN_TABLE_PARTITIONS is running correctly in
                                                 the job

Raw or Summary tables fail to insert           The tablespace maintenance is not creating new datafiles correctly. To
data with "Unable to extend                    resolve this:
tablespace/data file" error.                   • Check if the tablespace is full
                                               • Check if the tablespace is configured correctly in the
                                                 MAINTAIN_TABLESPACE table
                                               • Check that the Maintain_Tablespaces procedure is running
                                                 correctly in the job

No statistics produced for any                 If all of the partitions for a table have no statistics, then the
partitions for a table.                        copy_partition_stats procedure will not be able to copy valid
                                               statistics.
                                               To solve this, run the OSS Maintenance package with all of the tables
                                               initially configured for gather_stale_partition_stats.

The statistics for partitioned tables          Check that the table is configured in the AIRCOM.MAINTAIN_TABLE
are not being gathered.                        table. The OSS Maintenance package cannot gather statistics for any
                                               partitioned tables which are not in the MAINTAIN_TABLE table.

The COPY option is not gathering               Check that the table has 3 days worth of partitions retained (or 2 weeks
statistics.                                    for weekly partitioned tables). This is required for
                                               gather_copy_partition_stats to run correctly.
                                               If this is not the case, configure the table for STALE statistics gathering.

Performance Reporting
The procedures listed in the following table give timing statistics, which enable the administrator to
monitor the time taken to process the various OSS maintenance procedures for the current
schema:

Procedure                              Message Number   Message String

add_partitions                         827479           upper(p_schema) || '.' || upper(p_table) ||
                                                        ' - Added partitions from ' || v_new_min_partition_date ||
                                                        ' to ' || v_new_max_partition_date

drop_partitions                        827415           upper(p_schema) || '.' || upper(p_table) ||
                                                        ' - Dropped ' || v_no_of_partitions_dropped || ' partitions'

maintain_schema_partitions             827474           'maintain_schema_partitions function - processed ' ||
                                                        v_no_of_tables_maintained || ' tables in schema ' ||
                                                        p_schema || get_time_taken(v_start_time_function)

gather_global_schema_stats             827467           'gather_global_schema_stats function for schema ' ||
                                                        p_schema || get_time_taken(v_start_time_function)

gather_copy_partition_stats            827468           'gather_copy_partition_stats function: maintained ' ||
                                                        v_no_of_tables_maintained || ' tables in schema ' ||
                                                        p_schema || get_time_taken(v_start_time_function)

copy_partition_stats                   827469           'copy_partition_stats function: maintained ' ||
                                                        v_no_of_tables_maintained || ' tables in schema ' ||
                                                        p_schema || get_time_taken(v_start_time_function)

gather_stale_partition_stats           827470           'gather_stale_partition_stats function: maintained ' ||
                                                        v_no_of_tables_maintained || ' tables in schema ' ||
                                                        p_schema || get_time_taken(v_start_time_function)

gather_stale_partition_stats (debug)   827491           'gather_stale_partition_stats debug: Called
                                                        dbms_stats.gather_table_stats at partition level
                                                        (Schema=' || p_schema || ',Table=' ||
                                                        rec_stale_stats_nopart.TABLE_NAME || ',Partition=' ||
                                                        rec_stale_stats_nopart.PARTITION_NAME || ')' ||
                                                        get_time_taken(v_start_time_per_call)

rebuild_part_schema_indexes            827472           'rebuild_part_schema_indexes function: maintained ' ||
                                                        v_no_of_indexes_maintained || ' indexes in schema ' ||
                                                        p_schema || get_time_taken(v_start_time_function)

run_oss_maintenance_daily_schm         827473           'OSS Maintenance daily for schema ' || p_schema ||
                                                        ' complete' || get_time_taken(v_start_time_schema)

gather_system_stats                    827486           'gather_system_stats function: completed successfully
                                                        (interval = ' || p_interval || ')' ||
                                                        get_time_taken(v_start_time_function)


14 About the OPTIMA Report Scheduler

The Report Scheduler enables the Administrator to configure the OPTIMA Report Scheduling
System. The Report Scheduler comprises a configuration utility and an executable application
(Windows NT service or stand-alone).

Using the configuration utility, Administrators can:


• Choose how configuration settings are stored
• Configure the database connection for the Report Scheduler
• Indicate whether time zones are being used
• Configure the email connection for reports
• Configure settings for debugging
• Configure log file settings
• Enable and disable the scheduling of reports

Note: The OPTIMA Report Scheduler will be withdrawn at the next release.

Installing the Report Scheduler


To prepare to use the Report Scheduler:

1. Install the Report Scheduler configuration utility (OptimaReportSchedulerConfig.exe) to the


backend binary directory.

2. Install one of the following Report Scheduler applications to the backend binary directory:
o Windows NT service application (OptimaReportScheduler.exe)

- or -
o Stand-alone application (OptimaReportSchedulerGUI.exe)

Warning: If you install the Report Scheduler to a different directory, you must also copy
the crypter.dll to that directory.

Note: TEOCO recommends using the Report Scheduler as a stand-alone executable
application, for the following reasons:
o It is more memory and CPU efficient
o It is easier to set up
o It has error logging, file maintenance and crash-monitor functionality
o It is less prone to errors as it does not interfere with Windows services

3. If you are scheduling Microsoft Excel 2003 reports, ensure that you have the following
folder on the machine(s) on which the Report Scheduler is running:

C:\Windows\SysWOW64\config\systemprofile\Desktop

This folder is required for Excel 2003 to function properly when interacting with OPTIMA.


Configuring the Report Scheduler


Before you can use the Report Scheduler, you must configure the application using the
configuration utility.

To configure the Report Scheduler:

1. Type the executable filename into the command prompt:

OptimaReportSchedulerConfig.exe

The Report Scheduler configuration utility appears. This picture shows an example:

2. From the Settings menu, click Edit.

3. On the Storage Type page that appears, choose the storage type you require by selecting
the appropriate option, and then click Next.

4. On the Database page, add the following details:

In This Box: Do This:

Username Type the username the Report Scheduler will use to connect to the database.
Password Type the password the Report Scheduler will use to connect to the database.

Service Type the name of the database.


Use Time Zone If your network is spread across more than one time zone, time zone support is
required in order to manage the difference between the User Time Zone where
the scheduler is located and the Universal Time Zone where the database is
located:
• Select the Use Time Zone option
• From the list of available time zones that appears, select the one that
represents the time zone for the scheduler. The run time of the reports on the
remote database is adjusted accordingly.
For more information on time zones, see Using OPTIMA Across Different Time
Zones on page 49.
For more information on how the Report Scheduler uses time zones, see
Scheduling Reports Across Different Time Zones on page 398.

Tip: Click Test Connection to test the database connection before proceeding.


5. Click Next.

6. On the Email page, add the following details:

In This Field: Do This:

Allow Export to Select this checkbox if you want to export reports to email.
email
SMTP Server Type the name of the SMTP Server.
Port Number Type the port number of the SMTP Server.
SMTP authentication   Select this checkbox if the SMTP Server that you have defined requires
required              authentication.
SMTP User             If you have selected the 'SMTP authentication' option, type the SMTP
Name/Password         username and password.
Report "From" Type the email address of the report sender.
address field

Tip: Click Test Connection to test the email connection before proceeding.

7. Click Next.

8. On the Debug page:

Select The: To Do This:

Event Log checkbox     If you want debug level information to be included in the Event Viewer
                       Application log.
                       Note: This option only applies if you are running the Report Scheduler
                       Windows NT service application.
Backup Email           If you want to backup email attachments in a separate directory. Select
Attachments checkbox   the location of the backup directory in the Debug Directory field.

9. Click Next.

10. If you want the Report Scheduler to run continuously, select the Run Continuous
checkbox. Otherwise, you will need to schedule the Report Scheduler in the Windows
scheduler.

11. Click Next.

12. On the PID Settings page, click the PID File Settings button to configure PID File settings.
For more information, see Configuring PID File Settings on page 400.

13. Click Next.

14. On the Log Settings page, click the Log Settings button to configure log file settings. For
more information, see Configuring Log File Settings on page 401.

15. Click Next.

16. On the Temp Directory page you can specify the Temp Directory to be used by the Report
Scheduler. If you do not specify a path, the Windows Temp directory is used by default.

17. Click Finish to save your configuration.


Scheduling Reports Across Different Time Zones


The Report Scheduler works differently depending on whether or not you have configured the
Report Scheduler to use a specific time zone.

The Report Scheduler Uses a Time Zone

If your network spans across multiple time zones, and you have configured the Report Scheduler to
use a specific time zone, when the scheduler is started, it will search for and run:
• All report schedules set on the same time zone as the Report Scheduler, where the next
run date is equal to or less than the database local time (for example, the Oracle
SYSDATE) adjusted by the time zone
• Any other schedules without a specified time zone, where the next run date is equal to or
less than the database local time (SYSDATE)

Consider the following network example:

[Figure: A network across multiple time zones - the database is in the UK (12:00 GMT), the client
is in Greece (+2 hr, 14:00), and the Report Scheduler is in a third country.]

The example network is spread across different time zones:


• The database is in the UK
• The client is in Greece
• The report scheduler is in a third country

The following report schedules have been created:

Schedule Number Time Zone Next Run Time

1 GREECE 14:00

2 GREECE 12:00
3 GMT 15:00
4 None Set 12:00


The Report Scheduler has been configured to use the GREECE time zone.

If the Report Scheduler is set running, then the database time (SYSDATE) is converted according
to the GREECE time zone, giving an actual run time of 14:00. Therefore, the Scheduler will run:
• All report schedules set on the GREECE time zone, where the next run date is equal to or
less than 14:00
• Any other schedules without a specified time zone, where the next run date is equal to or
less than 12:00

This means that it deals with each example schedule record as follows:

Schedule Record   Runs?   Reason Why

1                 Y       Next run time = 1400, SYSDATE + GREECE TIME ZONE = 1400
2                 Y       Next run time = 1200, SYSDATE + GREECE TIME ZONE = 1400
3                 N       Has a different time zone set, and so is ignored
4                 Y       Next run time = 1200, SYSDATE = 1200

The Report Scheduler Does Not Use A Time Zone

If your network spans across multiple time zones, and you have not configured the Report Scheduler to use a specific time zone, when the scheduler is started, it will search for and run all report schedules where the next run date is equal to or less than the database local time (for example, the Oracle SYSDATE) adjusted by the specific time zone for each schedule.

Note: If no time zone has been set for a schedule, it will just compare the database local time with
the next run time.

Consider an example network in which the following report schedules have been created:

Schedule Number | Time Zone | Next Run Time

1 | GREECE | 14:00
2 | GREECE | 12:00
3 | GMT | 15:00
4 | None Set | 12:00

When the Report Scheduler runs, it treats each schedule record as follows:

Schedule Number | Time Zone | Next Run Time | SYSDATE | Adjusted SYSDATE | Run?

1 | GREECE | 14:00 | 12:00 | 14:00 | Y
2 | GREECE | 12:00 | 12:00 | 14:00 | Y
3 | GMT | 15:00 | 12:00 | 12:00 | N
4 | None Set | 12:00 | 12:00 | 12:00 | Y


Configuring PID File Settings


When you click the PID File Settings button on the PID Settings page in the OPTIMA Report
Scheduler configuration utility, the PID File Settings dialog box appears. This picture shows an
example:

PID File Settings dialog box

This table describes the information to complete the PID File Settings dialog box:

Item Description

Interface ID The three-digit interface identifier (mandatory).


Program ID The three-character program identifier (mandatory).
Instance ID The three-character program instance identifier (mandatory).
Exename Type the name of the executable file.
PID File Dir Select the location of the PID File directory.


Configuring Log File Settings


When you click the Log Settings button on the Log Settings page in the OPTIMA Report
Scheduler configuration utility, the Log Settings Wizard appears. This picture shows an example:

Log Settings Wizard

To configure log file settings:

1. On the Log Settings page, add the following details:

In This Field: Do This:

Enable Logging checkbox | Select this checkbox to enable logging.
Generate Log File Daily checkbox | Select this checkbox if you want to generate log files on a daily basis.
Log File Directory | Select the location of the Log File directory.
Log Successful Events Only checkbox | Select this checkbox if you only want to add operation successful messages to the log file.
Log Unsuccessful Events Only checkbox | Select this checkbox if you only want to add operation unsuccessful messages to the log file.
Filter Log Events Above and Including Level drop-down list | Select the level of information required in the log file. The available options are: Debug, Information, Warning, Minor, Major, Critical.

For more information on log files, see About Log Files on page 33.

2. Click Next.

3. On the Completing the Wizard page, check your settings in the Settings Summary pane.

Tip: Click the More Info button, to view Program information.

4. Click Finish to save your settings and close the Log Settings Wizard.


Starting the Report Scheduler GUI


The Report Scheduler GUI is a standalone application that can be launched from anywhere on a
client machine. Before you can start the Report Scheduler, you must ensure you have a valid
configuration file and a database correctly configured and accessible. For more information, see
Configuring the Report Scheduler on page 396.

To start the Report Scheduler GUI:

At the command prompt, type the executable filename:

OptimaReportSchedulerGUI.exe

The Report Scheduler window appears:

Report Scheduler window

This table describes the Report Scheduler window menu options:

Menu: Option: Description:

File Exit Closes the Report Scheduler.


Process Enable Enables the processing of report schedules.

Running Multiple Instances of the Report Scheduler


The Report Scheduler processes one report at a time. If you need to process a lot of reports by a
certain time, for example overnight, you can achieve this by running multiple Report Scheduler
instances.

If you want to run multiple instances on the same machine, then you must ensure that each
instance uses different INI files, PID files and log files, otherwise the PID file will prevent any other
instance from running.

To run multiple instances on the same machine:

1. Using the Configuration Wizard, create a separate ini file for each instance, storing them in
different directories. For example:
o C:\Optima Backend\Optima Report Scheduler\Instance1\OptRepSchedulerConfig.ini
o C:\Optima Backend\Optima Report Scheduler\Instance2\OptRepSchedulerConfig.ini

Important: You must use the same ini file name - OptRepSchedulerConfig.ini - for each
instance that you define.

2. Set a different backup and log directory for each instance.


3. Set a different PRID and PID file directory for each instance, using the Program ID for the
Report Scheduler, 816. For example:
o Instance1: PRID = 001816001; PID file directory = C:\Optima Backend\Optima Report
Scheduler\Instance1\PID
o Instance2: PRID = 001816002; PID file directory = C:\Optima Backend\Optima Report
Scheduler\Instance2\PID

4. Re-configure the Process Monitor to monitor each instance.

5. Schedule each instance in the Windows Scheduler, including the ini file location as a
parameter:

Important: This will override the default ini file location set using the Configuration Wizard.
o "C:\Optima Backend\Optima Report Scheduler\OptimaReportSchedulerGUI.exe"
ini="C:\Optima Backend\Optima Report Scheduler\Instance1"
o "C:\Optima Backend\Optima Report Scheduler\OptimaReportSchedulerGUI.exe"
ini="C:\Optima Backend\Optima Report Scheduler\Instance2"

Maintenance of the Report Scheduler


In normal operation, the Report Scheduler should not need any special maintenance. During installation, the OPTIMA Directory Maintenance application is configured to maintain the backup and log directories automatically.

However, TEOCO recommends that the following basic maintenance checks are carried out for the Report Scheduler:

Check The | When | Why

Log messages for error messages | Weekly | In particular, any Warning, Minor, Major and Critical messages should be investigated.

Checking a Log File Message


The log file for the Report Scheduler is stored in the directory as defined in the Log Settings
Wizard. For more information, see Configuring Log File Settings on page 401.

You can choose to create a new log file every day. The information level required in the log file is
also defined in the Log Settings Wizard and will be one of the following:
• Debug
• Information
• Warning
• Minor
• Major
• Critical

These levels help the user to restrict low-severity logging if required. For example, if Minor is selected, then only Minor, Major and Critical messages will be logged.

Note: Utilities are provided for searching and filtering log messages and also for loading the
messages into the database. For more information, see Checking Log Files on page 41.
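As an illustration, high-severity messages in the daily log files could also be checked from the command prompt (the log path below is hypothetical):

findstr /C:"CRITICAL" /C:"MAJOR" "C:\Optima Backend\Optima Report Scheduler\Log\*.log"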


Stopping the Report Scheduler Application


If the Report Scheduler application is scheduled, then it will terminate when all outstanding
schedules have been processed.

If the application is run continuously, then it monitors for schedules indefinitely. In this case, you must terminate the application manually.

Checking the Report Scheduler is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.

Troubleshooting
The following table shows troubleshooting tips for the Report Scheduler:

Problem: Application exits immediately.
Cause: Another instance is running, or the configuration (INI) file is invalid or corrupt.
Solution: Use the Process Monitor to check which instances are running.

Problem: New configuration settings are not being used by the application.
Cause: Settings were not saved to the configuration (INI) file; the file was created in the wrong location; or the Report Scheduler processing (service and standalone) applications have not been restarted to pick up the new settings.
Solution: Check the settings and the location of the file, then restart the Report Scheduler processing (service and standalone) applications.

Problem: Error emailing reports: 10053: Software caused connection abort.
Cause: Anti-virus software is running.
Solution: Deactivate the anti-virus software.

Problem: SMTP Authentication Error.
Cause: Invalid SMTP Username setting, or port 25 is being blocked.
Solution: Use the configuration utility to delete the SMTP Username setting on the Email page of the configuration. For information about how to do this, see Configuring the Report Scheduler on page 396. If the problem persists, for example users outside the domain are still unable to receive emails, then request that the customer's IT department enable relaying without authentication for the OPTIMA Server. Check that no anti-virus software is blocking port 25.

Problem: Failure to export report to email address due to the following error: 421 4.3.5 Unable to create data file: Input/Output error (Processing time is…)
Cause: The user has insufficient privileges on the Debug Directory, or the Debug Directory doesn't exist.
Solution: Enable the permissions, or create the Debug Directory.

Problem: Printer is unavailable.
Cause: The default or specified printer has not been installed on the Report Scheduler server.
Solution: Assign the user a logon account for the Report Scheduler service and then add the assigned user to the printer.
To assign a logon account:
1. In the Services window, right-click the Report Scheduler service and, from the menu that appears, click Properties.
2. In the Report Scheduler Properties dialog box, click the Log On tab and select the This Account radio button.
3. Complete the user account and password fields.
4. Click Apply and then close the Services window.
To add a user for an installed printer:
1. In the Printers and Faxes window, select the printer that you want to use to send reports.
2. In the Printer Tasks list, click Share This Printer.
3. In the Printer Properties dialog box, select the Security tab and click Add.
4. In the Select Users, Computers or Groups dialog box, type the username and click OK.
5. Click Apply and then close the Printer Properties dialog box.

Problem: The Report Scheduler creates ghost sessions in the database, requiring DBA intervention to kill inactive sessions and leaving Excel processes behind.
Cause: Ghost sessions are not mapped to OS processes.
Solution:
1. Locate the Oracle sqlnet.ora file on the machine on which the OPTIMA database is installed. This is normally in ..Oracle\product\(version number)\dbhome_1\NETWORK\ADMIN
2. Open the sqlnet.ora file and check that this line exists in it: SQLNET.EXPIRE_TIME=10
3. If the above line is not present, add it and save the file.
4. Restart the Oracle listener and restart the OPTIMA database.

Troubleshooting Exporting to Email


The following list contains tips to help you if you are experiencing problems exporting to email with the Report Scheduler, for example, if you are receiving an "Unable to export to email address" error message. In this case, you should check that:
• The Report Scheduler machine can send an email from a client such as Outlook Express
by connecting to the customer mail server. Normally, if Outlook Express can send an
email, then the Report Scheduler can too.
• The Alarm Notifier can send an email. For more information, see About the Alarm Notifier
on page 414.
• The Report Scheduler can export a report to file on a shared directory. This confirms that
the Report Scheduler is working and confirms that you only have a problem with sending
email.


Note: You need to run the Report Scheduler with a network user so it can access all
network file locations and the customer's email server.

• The email From address you are using is a valid email address

If "Invalid email address…" appears in the schedule history in the Schedule Explorer, it can
mean that the From email address (rather than the email address of the user to whom the
email is sent) is invalid. Ensure that you use a valid form of email address, for example,
[email protected].
• All anti-virus software has been disabled on the Report Scheduler machine.

If "10053: Software caused connection abort" occurs, it means anti-virus software is


blocking port 25. Normally, you only need to stop virus checking for port 25 but it can be
useful to disable all anti-virus software when testing the Report Scheduler.

Note: If reports are to be emailed externally, you may need to ask the customer's IT
department to open port 25 on the firewall for the Report Scheduler application.

• The version numbers of the Report Scheduler configuration utility and GUI are compatible.
• The Report Scheduler GUI is scheduled, for example, at 10 minute intervals, in the
Windows Task Scheduler.

Ensure the Report Scheduler is not set to run in continuous mode. The Report Scheduler
should be launched from the Windows Scheduler. In this way, it will connect to the
database, process the reports, then close down until re-launched by the Windows
Scheduler.
• The PID file and log file settings are correctly set.

To test the Process Monitor, kill the Report Scheduler during testing and check that the
Process Monitor removes the PID after the specified time.

To receive more detailed log messages (for testing purposes), set the severity level of the
log file to Debug.
• The database user can connect to the database.
• You are using an appropriate storage type. TEOCO recommends using the configuration
(INI) file rather than saving to registry.
• You are using the Report Scheduler stand-alone executable application and not the older
Windows NT service application.

You can check this in the Windows Services window. If the service is installed, uninstall it
by typing the following at the command prompt:

OptimaReportScheduler /uninstall

You should additionally check the following requirements:


• If the IP address of the Report Scheduler machine needs to be registered to access the
mail server machine. The Report Scheduler normally connects to the mail server on port
25. You can test this by typing the following command at the command prompt:
telnet <ip_address_of_mailserver> 25

You specify the port number using the Report Scheduler configuration utility. For more
information, see Configuring the Report Scheduler on page 396.
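For example, a successful manual test of the connection might look like this (the server name and banner text are illustrative):

telnet mail.customer.example 25
220 mail.customer.example ESMTP Service ready
QUIT
221 Closing connection

If no banner appears, the port is probably blocked between the Report Scheduler machine and the mail server.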
• If an authenticated SMTP username and password are required for the Report Scheduler
to be able to send email.


If you are having problems, try typing a valid username into the Report Scheduler
configuration utility or try leaving it blank. If "SMTP Authentication Error" occurs, it means
the Report Scheduler is using an invalid username. In this case, you should use the
configuration utility to either remove the SMTP username or add a valid one. For more
information, see Configuring the Report Scheduler on page 396.

Note: If you remove the username, users outside the domain may not be able to receive
the emails. In this case, request that the Customer’s IT department enable relaying without
authentication for the OPTIMA server.

When using the Report Scheduler, you can also check the following sources of information:
• The mail server, for additional security restrictions. For example, an error message of "Connection closed gracefully" when using the telnet command indicates that the mail server is closing the connection. Contact the customer's IT department for more information about security restrictions on the mail server.
• The log files in the common log directory, for additional error messages. To receive more detailed log messages, set the severity level of the log file to Debug. TEOCO recommends reducing the logging level again, once the Report Scheduler is working correctly.

Example OPTIMA Report Scheduler Configuration (INI) File


[PIDCompIniSection]
InterfaceId=789
ProgramId=456
InstanceId=123
Exename=opx_OPT_SCH.exe
PIDFileDir=/OPTIMA_DIR/<application_name>/pid

[LogCompIniSection]
LogMaint_SuccessFul=0
LogMaint_UnSuccessFul=1
LogMaint_LogSize=512
LogMaint_OverrideEventsAuto=0
LogMaint_ClearManually=0
LogMaint_ClearLogbySize=1
LogMaint_Days=7
EnableLogMaintenance=0
LogMaint_TimeInterval=60
GenerateDailyLogs=1
PRID=789456123
LogLevel=WARNING
EnableLogging=1
LogDir=/OPTIMA_DIR/<application_name>/log

[Database]
UserName=test1
Password=ENC(Gev\wPrn)ENC
DBService=TEST

[Email]
SMTPServer=test.server
SMTPUserName=test.user
PortNumber=25
[email protected]
AllowExportToEmail=1

[Debug]
EnableDebugInEventViewer=1
EnableAttachmentDir=1
AttachmentDir=/OPTIMA_DIR/<application_name>/Debug


[StandAlone]
RunContinuous=1

[LogSettings]
LogDirectory=/OPTIMA_DIR/<application_name>/Log

[TIMEZONE]

UseTimeZone=1
OptimaAbbrev=opt_1014
TimeZoneName=Australia/NSW
TimeZoneAbbrev=LMT
OptimaDescription=Test
SystemBias=0
SystemStandardName=GMT Standard Time
SystemStandardBias=0
SystemDaylightName=GMT Standard Time
SystemDaylightBias=-60

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows


15 About OPTIMA Alarms

The alarms defined in the OPTIMA front end are processed by two backend programs:
• The Alarms Processor checks the next schedule date of each alarm and then processes
and updates any alarm whose schedule date is due. For more information, see About the
Alarms Processor on page 409.
• The Alarm Notifier polls the database for recently raised alarms and sends alarm
notifications via email or SMS. For more information, see About the Alarm Notifier on page
414.

Important: When using OPTIMA alarms, it is important to run the Alarms Maintenance
scheduled job periodically, in order to ensure that performance is kept at an optimum level.
For more information, see Maintaining Alarms on page 429.

About the Alarms Processor


The Alarms Processor (opx_ALM_GEN_817.exe) connects to the database and polls the alarms at
a specified polling interval. It checks the next schedule date of each alarm and then processes and
updates any alarm whose schedule date is due.

Important: You can run more than one instance of the Alarms Processor, but to avoid locking
records, you should use the Filter parameter to enable the Alarms Processors to process different
alarm definitions. For more information, see Configuring the Alarms Processor on page 410.
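For example, two instances might be configured with complementary filters in their respective configuration (INI) files (the file names and filter values below are illustrative):

; Instance 1 configuration (opx_ALM_GEN_817_1.ini)
[OPTIONS]
Filter=VENDOR=201

; Instance 2 configuration (opx_ALM_GEN_817_2.ini)
[OPTIONS]
Filter=VENDOR<>201

With complementary filters like these, each alarm definition matches exactly one instance, so the two processors never attempt to lock the same records.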

Starting the Alarms Processor


Before you can use the Alarms Processor, install the following file in the backend binary directory:
• opx_ALM_GEN_817.exe (Windows)
• opx_ALM_GEN_817 (UNIX)

To start the Alarms Processor:

In Windows, type:

opx_ALM_GEN_817.exe opx_ALM_GEN_817.ini

In UNIX, type:

opx_ALM_GEN_817 opx_ALM_GEN_817.ini

Note: In normal operation, all applications are scheduled within the data loading architecture.


Configuring the Alarms Processor


The Alarms Processor is configured using a configuration (INI) file. Configuration changes are
made by editing the parameters in the configuration (INI) file with a suitable text editor. The Alarms
Processor configuration (INI) file is divided into different sections.

The following table describes the parameters in the [DIR] section:

Parameter Description

LogDir The location of the directory where log files will be stored.
PIDFileDir The location of the directory where monitor (PID) files will be created.
TempDir The location of the directory where temporary files will be stored.

The following table describes the parameters in the [MAIN] section:

Parameter Description

InstanceID The three-character program instance identifier (mandatory).


MachineID The three-digit interface identifier (mandatory).
LogGranularity Defines the frequency of logging, the options are:
0 - Continuous
1 - Monthly
2 - Weekly
3 - Daily
LogLevel (or Sets the level of information required in the log file. The available options are:
LogSeverity)
1 - Debug
2 - Information (Default)
3 - Warning
4 - Minor
5 - Major
6 - Critical
ProgramID The three-character program identifier (mandatory).
RefreshTime The pause (in seconds) between executions of the main loop when running
continuously.
RunContinuous 0 - Run the application once.
1 - Run the application continuously.
StandAlone 0 – Run the application without a monitor file. Do not select this option if the
application is scheduled or the OPTIMA Process Monitor is used.
1 – Run the application with a monitor file.
Verbose 0 - Run silently. No log messages are displayed on the screen.
1 - Display log messages on the screen.


The following table describes the parameters in the [OPTIONS] section:

Parameter Description

Database Oracle database TNS name


Password Password
UserName User name
Filter Define an SQL filter on the alarm definition table.
The current instance of the alarm processor will only process alarm definitions
that match this filter. You should use this parameter to enable different alarm
processors to process different alarm definitions.
Important: If you do not specify this parameter, the alarm processor will process
all enabled definitions.

This is an example of an Alarms Processor configuration INI file:

[DIR]

LogDir=/OPTIMA_DIR/<application_name>/log

TempDir=/OPTIMA_DIR/<application_name>/temp

PIDFileDir=/OPTIMA_DIR/<application_name>/prid

[MAIN]

LogGranularity=3

LogLevel=1

RefreshTime=0

RunContinuous=0

StandAlone=0

MachineID=000

ProgramID=817

InstanceID=001

Verbose=1

[OPTIONS]

Database=OPTRAC_VM

UserName=OPTIMA_ALARM_PROC

Password=ENC(l\mlofhY)ENC

Filter=VENDOR=201


Alarms Processor Message Log Codes


This section describes the message log codes for the Alarms Processor:

Message Code | Description | Severity

3055 | Executing OPTIMA_ADMIN.AUTHENTICATE_ROLE procedure. | DEBUG
3056 | Finished executing procedure OPTIMA_ADMIN.AUTHENTICATE_ROLE. | DEBUG
7000 | Calling the connection COMMIT Method. | DEBUG
7001 | Calling the connection COMMIT Method. | DEBUG
7002 | Calling the connection COMMIT Method. | DEBUG
7010 | Calling the connection ROLLBACK Method. | DEBUG
7011 | Calling the connection ROLLBACK Method. | DEBUG
7012 | Calling the connection ROLLBACK Method. | DEBUG
7013 | Calling the connection ROLLBACK Method. | DEBUG
7031 | DB Role Authentication Error : <errorCode>. | CRITICAL
8000 | <ExceptionErrorMessage>. | CRITICAL
8001 | Connected to database <databaseName>. | DEBUG
8002 | <ExceptionErrorMessage>. | CRITICAL
8003 | No Alarm Definitions table empty. | INFORMATION
8004 | Read <numberOfRows> rows from Alarm Definitions table. | DEBUG
8005 | Failed to read current time from database. | INFORMATION
8006 | <ExceptionErrorMessage>. | CRITICAL
8007 | Definition ID: <definitionID> Status=<statusValue>. | DEBUG
8008 | Definition ID: <definitionID> Status=1. | DEBUG
8009 | Failed to read NEXT_POLLING_DATE_TIME from ALARM_DEFINITION where DEFINITION_ID=<definitionID>. | DEBUG
8010 | Error decrypting password in INI file: <ErrorMessage>. | CRITICAL
<DateTime> | DEBUG
8011 | Definition ID: <definitionID> current time < next polling time. | DEBUG
8012 | Definition ID: <definitionID> current time > next polling time. | DEBUG
8013 | problemTextSet for query set: <problemText>. | DEBUG
8014 | Error: <currentDateTime>. | DEBUG
8015 | Procedure Type: <type>. | DEBUG
8016 | Oracle Error: <oracleErrorCode>. | DEBUG
8017 | Definition ID: <definitionID>. | DEBUG
8018 | SQL Query <sqlQuery>. | DEBUG
8019 | Problem Text <problemText>. | DEBUG
8020 | MaxRows <numOfRows>. | DEBUG
8021 | <ExceptionErrorMessage>. | CRITICAL
8022 | <ExceptionErrorMessage>. | CRITICAL
8023 | problemTextSet for query clear: <problemText>. | DEBUG
8024 | <ExceptionErrorMessage>. | CRITICAL
8025 | <ExceptionErrorMessage>. | CRITICAL
8026 | problemTextSet: <problemText>. | DEBUG
8027 | <ExceptionErrorMessage>. | CRITICAL
8028 | <ExceptionErrorMessage>. | CRITICAL
8029 | <ExceptionErrorMessage>. | CRITICAL
8030 | Next Polling DateTime: <NextPollingDate>. | DEBUG
8031 | DefinitionName: <definitionName>, Definition ID: <definition ID>, Status: <alarmStatus>, RippleSet: <rippleSetValue>, RippleClear: <rippleClearvalue>, <ProcessType>. | DEBUG
8034 | <ExceptionErrorMessage>. | CRITICAL
8035 | <ExceptionErrorMessage>. | CRITICAL
8037 | <textMessage>. | DEBUG
8038 | Definition ID: <definitionID>. | DEBUG
8039 | SQL Query: <sqlQuery>. | DEBUG
8040 | Problem Text: <problemText>. | DEBUG
8041 | Override SQL: <overrideSql>. | DEBUG
8050 | Calling DB procedure OPTIMA_ALARMS.PROCESSSETALARM. | DEBUG
8051 | Calling DB procedure OPTIMA_ALARMS.PROCESSCLEARALARM. | DEBUG
8052 | Parameter def_id{<definitionID>}. | DEBUG
8053 | Parameter sstrsql{<sqlCommand>}. | DEBUG
8054 | Parameter strproblem_text_set{<logProblemText>}. | DEBUG
8055 | Parameter nbindings {<maxRows>}. | DEBUG
8056 | Out Parameter result{<result>}. | DEBUG
8057 | DB procedure run for {<endTime-startTime>} seconds. | DEBUG
8400 | Querying Database SQL{<sqlCommand>}. | DEBUG
8401 | <ExceptionErrorMessage>. | CRITICAL
8410 | Querying Database SQL{<sqlCommand>}. | DEBUG
8411 | <ExceptionErrorMessage>. | CRITICAL
8450 | Calling DB Procedure OPTIMA_ALARMS.UPDATENEXTPOLLINGDATATIME. | DEBUG
8451 | Parameter definition id{ <definitionID> }. | DEBUG
8452 | Out parameter NextPollingDateTime{<NextPollingDate>}. | DEBUG
8500 | Querying Database SQL{ <sqlQuery> }. | DEBUG
8550 | Querying Database SQL{<sqlCommand>}. | DEBUG
8551 | <ExceptionErrorMessage>. | CRITICAL
8888 | Definition ID: <definitionID> Status=-1. | DEBUG


About the Alarm Notifier


The Alarm Notifier is a standalone application that polls the Alarms table in the database for
recently raised alarms and sends alarm notifications via email or SMS to users or groups of users.

Installing the Alarm Notifier


Before you can use the Alarm Notifier, install the following file to the backend binary directory:

AlarmNotifier.exe

Warning: If you install the Alarm Notifier to a different directory, you must also copy the crypter.dll
to that directory.

Important: You must install the backend components using the OPTIMA combined backend
package, which will install the required components quickly and easily. For more information on
how to use this, see Installing the OPTIMA Combined Backend Package on page 21.

Prerequisites for Using the Alarm Notifier


Before using the Alarm Notifier, you should ensure that:
• You have installed the OPTIMA client, including the WINDOWS_CALL_INTERFACES
option.

This option is installed by default with a full ENTERPRISE install.


• Your database contains all of the required tables and grants.

For more information, see Required Tables for the Alarm Notifier.

Required Tables for the Alarm Notifier


You should ensure that your database contains all of the required tables:
• ALARMS
• ALARMS_INTERACTIVE
• ALARMS_LOG
• ALARM_HANDLER
• ALARM_HANDLER_HISTORY_LOG
• ALARM_SEVERITY
• ALARM_SEVERITY_NOT_RULES
• OPTIMA_CONTACTS
• OPTIMA_CONTACT_GROUPS
• OPTIMA_CONTACT_USER_TO_GROUP

In addition, the following grants are required:

GRANT SELECT ON ALARMS TO OPTIMA_USERS;

GRANT DELETE, INSERT, UPDATE ON ALARMS TO OPTIMA_ADMINISTRATORS;

GRANT SELECT ON ALARMS_INTERACTIVE TO PUBLIC;

GRANT DELETE, INSERT, SELECT, UPDATE ON ALARMS_INTERACTIVE TO OPTIMA_ADMINISTRATORS;

GRANT DELETE, INSERT, SELECT, UPDATE ON ALARM_HANDLER TO OPTIMA_ADMINISTRATORS;

GRANT DELETE, INSERT, SELECT, UPDATE ON ALARM_HANDLER_HISTORY_LOG TO OPTIMA_ADMINISTRATORS;

GRANT DELETE, INSERT, SELECT, UPDATE ON ALARM_SEVERITY TO OPTIMA_ADMINISTRATORS;

GRANT DELETE, INSERT, SELECT, UPDATE ON ALARM_SEVERITY_NOT_RULES TO OPTIMA_ADMINISTRATORS;

GRANT DELETE, INSERT, SELECT, UPDATE ON OPTIMA_CONTACTS TO OPTIMA_ADMINISTRATORS;

GRANT DELETE, INSERT, SELECT, UPDATE ON OPTIMA_CONTACT_GROUPS TO OPTIMA_ADMINISTRATORS;

GRANT DELETE, INSERT, SELECT, UPDATE ON OPTIMA_CONTACT_USER_TO_GROUP TO OPTIMA_ADMINISTRATORS;

If you have used the OPTIMA Database Installer, these tables and grants should be generated
automatically.

If you have not used the Database Installer, please contact Product Support to obtain the required
scripts.

Starting the Alarm Notifier


To start using the Alarm Notifier:

1. From the Start menu, select OPTIMA Alarm Notifier:

The application is started, with the Alarm Notifier dialog box minimised.

2. Double-click the Alarm Notifier icon in your system tray.

- or -

Right-click the Alarm Notifier icon in your system tray and, from the menu that
appears, click AIRCOM OPTIMA Alarm Notifier.

The Alarm Notifier dialog box appears.

About the Alarm Notifier Dialog Box


In the Alarm Notifier dialog box, you can:
• View the Alarm Notifier's current status
• Control how the Alarm Notifier is executed
• Configure modem, email, database and SMSC settings


This picture shows an example of the Alarm Notifier dialog box:

Alarm Notifier dialog box

Viewing the Status of the Alarm Notifier


In the Status pane of the Alarm Notifier dialog box, you can view the Alarm Notifier's current
status. The following table describes the information shown in the Status pane:

This field: Shows this information:

Status Whether the Alarm Notifier is active or disabled.

When the Alarm Notifier is disabled, the Disabled Alarm Notifier icon is
displayed in your system tray.
Last Execution The date and time that the processing of alarms was last completed.
Current Action The action that is currently being performed. The possible actions are:
• Waiting
• Executing
• Updating Log
• Error

This picture shows an example:

Status pane


Executing the Alarm Notifier


You control the automatic execution of the Alarm Notifier in the Execution pane of the Alarm
Notifier dialog box. The following table describes the Execution pane:

In this field: Do this:

EXECUTE NOW Click this button to force an immediate execution of the Alarm
Notifier.
Note: The EXECUTE NOW button is disabled when the Automatic
Execution Enabled checkbox is selected.
Automatic Execution Enabled Select this checkbox if you want the Alarm Notifier to automatically
execute at a specified polling interval. You set the polling interval on
the Database tab. See Configuring Database Settings on page 421
for more information.
When Automatic Execution is enabled, the time remaining (in
seconds) until the next scheduled execution is shown in a progress
bar.

This picture shows an example:

Execution pane

Configuring Modem Settings


On the Modem configuration tab, you configure the settings for using the Alarm Notifier with an
attached GSM modem or handset. Modems are usually connected to the host machine via an
RS232 COM port or a COM port emulator such as a USB or PCI device. The Alarm Notifier has
been tested successfully with several types of mobile handset and should work with any handset
that supports Protocol Description Unit (PDU) format SMS. For more information about supported
handsets, please contact TEOCO Support.


To configure modem settings:

1. In the Alarm Notifier dialog box, on the Modem Configuration tab, complete the following
information:

In this field: Do this:

Comm Port From the drop-down list, select the port number on the host
machine to which the handset is connected.
Comm Settings This field displays the default baud rate, parity, data bit, and stop
bit settings for the majority of handsets.
Tip: For newer handsets, you may be able to improve performance
by increasing the baud rate.
Note: TEOCO recommends that you:
• Do not change the Comm settings if the Alarm Notifier is
working correctly
• Consult the modem manufacturer for advice if you do need to
change the Comm settings
Use DTR (Data Terminal Ready) Select this checkbox if you are using a modem that is configured to
use DTR.
The RS-232 Data Terminal Ready signal is lowered when the computer wants the modem to hang up, which happens when a user's login session ends or when the user fails to respond to the login prompt.
Tip: Try using this option if your Comm port is set correctly but
your modem is not responding correctly.
Note: This option is not supported by all handsets.
Allow Priority SMS's to be sent Select this checkbox if you want to send alarm notifications as 16-
(Flash SMS) bit text messages of class 0 (Flash SMS). On phones that support
this feature, alarm notifications will appear as Flash SMS (also
called blinking SMS or alert SMS) messages.
The user will not have to delete this message, and it will appear immediately on the handset without the user having to open it.
Note: The flash SMS feature is not supported by all handsets, so
test before using this as a standard.
Enable Interactive SMS Select this checkbox if you want to use interactive SMS. Enabling
interactive SMS means that the Alarm Notifier can perform certain
actions, such as returning information, by responding to specified
keywords received via SMS. For information about using
interactive SMS, please contact TEOCO Support.
Set Message Centre Number Click this button to set the Short Message Service Centre (SMSC)
number on the attached modem or handset. In the dialog box that
appears, type the Message Service Centre number and click OK.
Your network operator can provide you with this information.
Note: The Message Centre Number can be a maximum of 11 digits.
Check Phone PIN State Click this button to check if a phone has a pin code set and unlock
its SIM card for use.
You should use this if you are using a modem that has no other
interface.

Test Interactive SMS Click this button if you have configured interactive SMS and you
want to test the response without having to send the modem an
SMS message.
When prompted, enter the phone number that will be used as the
incoming SMS number, and then enter a keyword.
The Alarm Notifier dialog box will return the response that would be
sent to the user, had they SMS'd that keyword from that number to
the modem or mobile handset attached to the host PC.
The results of the test are displayed in the Modem Test Response
window.
Test Modem Settings Click this button to test that you have correctly configured your
modem settings. The results of the test are displayed in the Modem
Test Response window.

This picture shows an example:

2. Click Apply to save your changes.

3. Click OK to minimise the Alarm Notifier dialog box.

Configuring Email Settings


On the Mail Configuration tab, you set the required parameters to allow the Alarm Notifier to send
emails.

To configure email settings:

1. In the Alarm Notifier dialog box, on the Mail Configuration tab, complete the following
information:

In this field: Do this:

Mail Server IP Address / Name Type the hostname or IP address of the mail server to connect
to.
Mail Sent From (Alarm User Alias) Type the sender's name that will appear in the From field of
received email notifications.
Note: This field is not used for authentication and can be set to
anything.

Type Select the type of authentication required from the drop-down


list. The available options are:
• None
• POP
• Login
• Plain
Contact the customer's mail server administrator for information
about the type of authentication required.
Username Type the username required to connect to the mail server.
Note: This field is only required if the authentication type is set
to something other than None.
Password Type the password required by the mail server for the specified
username.
This password will be encrypted when written to the INI file.
Note: This field is only required if the authentication type is set
to something other than None.
POP Server Type the hostname or IP address of the POP server.
Note: This field is only required when the POP authentication
type is selected.
Test Email Settings Click this button to test that the Alarm Notifier can successfully
send email notifications. In the dialog box that appears, type the
required email address and click OK. The results of the test are
displayed in the Server Test Response window.
Adjust for DST on Sent email Select or deselect this checkbox, and use the Test Email Settings button again, until the email sent time is correctly synchronized.

This picture shows an example:

2. Click Apply to save your changes.

3. Click OK to minimise the Alarm Notifier dialog box.


Configuring Database Settings


On the Database Configuration tab, you set the database, polling and logging options for the
Alarm Notifier.

To configure database settings:

1. In the Alarm Notifier dialog box, on the Database Configuration tab, complete the
following information:

In this field: Do this:

User Name Type the user name you want to use to connect to the database.
It is recommended that you connect as the same user that owns
the required tables described in Prerequisites for Using the Alarm
Notifier on page 414.
Password Type the password required to connect to the database.
This password is encrypted when it is written to the INI file.
Database Select the SID that identifies the database to connect to from the
drop-down list. You can also type the name into the same box.
Tip: You can find the SID listed in the Oracle tnsnames.ora file.
Database Alarm Polling Interval Type the interval (in seconds) that the program will wait before
checking for new alarms when in automatic execution mode. See
Executing the Alarm Notifier on page 417 for more information.
Verbose Logging (For Debug Select this checkbox if you want to show more detailed information
Purposes) about actions performed and errors encountered in the Current
Actions window. See About the Current Actions Window on page
425 for more information.
Warning: This option is useful for debugging the application, but
can cause slower performance if many alarms are being
processed.
Do not process cleared alarms Select this checkbox if you do not want the Alarm Notifier to send
notifications for cleared alarms (in other words, notifications that
state an alarm condition no longer exists).
Test Database Settings Click this button to test that the Alarm Notifier can successfully
connect to the database with the parameters you have set. The
results of the test are displayed in the Database Test Response
window.

This picture shows an example:

2. Click Apply to save your changes.



3. Click OK to minimise the Alarm Notifier dialog box.

Configuring SMSC Settings


On the SMSC configuration tab, you set the parameters that allow the Alarm Notifier to connect
directly to an operator's Short Message Service Centre (SMSC) via Short Message Peer to Peer
protocol (SMPP).

To configure SMSC Settings:

1. In the Alarm Notifier dialog box, on the SMSC Configuration tab, complete the following
information:

In this field: Do this:

Address Type the socket network address of the SMSC (either TCP/IP or X.25),
for example, 192.168.88.1 for TCP/IP connections.
Port Type the port which is used for TCP/IP connections only.
Single port connectivity only Select this checkbox if you are using single port connectivity.
Normally, an SMPP connection requires two socket connections: one for
transmitting and one for receiving SMS messages. However, some
SMSC operators only provide a port for transmitting messages or handle
both operations on a single socket connection.
Important: Only change this setting if instructed to by your network
operator.
System ID Type the System ID provided by your network operator. This setting is
used to identify you or your application.
System Type Type the System Type provided by your network operator.
Notes:
• This setting is used as additional information to identify your
application.
• This setting is optional.
Password Type the password required to connect to the SMSC.
This is encrypted when it is written to the log file.
TON Type the short value representing the Type of Number (TON) of the
address for your application, for example, this could be a TCP/IP
address. If you have not been provided with this information, type 1 in
this field.
This information is often used by the SMSC for internal billing.
NPI Type the short value representing the Numbering Plan of the address for
your application, for example this could be a TCP/IP address. If you have
not been provided with this information, type 1 in this field.
This information is often used by the SMSC for internal billing.

SMPP Version Type the long value representing the SMPP Interface version that your
application supports.
Notes:
• Some older SMSC implementations require a one digit value, for
example, 3, whereas more recent implementations expect a two-
digit value, for example, 33 or 34.
• The SMPP Interface version must be sent in hexadecimal format. If
you are using the 3.4 version, a hexadecimal 0x34 value must be
sent. To achieve this, set the SMPP Interface version to 52, which
corresponds to the required 0x34 hexadecimal value. You do not
need to do this if you are using a 3.3 or lower version, as
hexadecimal values from 0x0 to 0x33 are allowed.
Transceiver Select this checkbox if the transmitting and receiving of SMS messages
is to be handled via a single port.
Notes:
• When using this option, ensure you also select the Single port
connectivity only checkbox.
• This option is not required for standard SMPP links.
Important: Only change this setting if instructed to by your network
operator.
Send from Type the sender information which is shown when the message arrives
at the mobile. Usually this is a mobile number in international format or a
short number identifier. Request this information from your network
operator, if you are unsure.
Note: This setting can be an alphanumeric string but TEOCO
recommends testing whether your SMSC operator supports
alphanumeric senders.
Dest TON Type the short value representing the TON for the bstrDestination value.
If you have not been provided with this information, type 1 in this field.
Dest NPI Type the short value representing the NPI for the bstrDestination value.
If you have not been provided with this information, type 1 in this field.
Validity(hr) Type the long value containing the validity period of the SMS message in
hours. The validity period determines how long a message is stored by
the SMSC and how long it tries to deliver it to the mobile if the mobile is
not reachable. The maximum Validity value depends on the SMSC
operator but the range is between 48 and 72 hours.
Important: If your SMSC does not support this setting, type 0 in this
field.
Source TON Type the short value representing the TON for the bstrOriginator value. If
you have not been provided with this information, type 1 in this field.
Source NPI Type the short value representing the NPI for the bstrOriginator value. If
you have not been provided with this information, type 1 in this field.

Option Type the long value representing the SMS message option. The
following options are available:
0 - Normal SMS messages
2 - Delivery notification
4 - Direct display messages
8 - 8bit encoded messages
16 - User Data Header (logo or ringing tone)
32 - Virtual SMSC
64 - Unicode messages
128 - EMS messages
Warning: Do not change this setting unless instructed to by your
network operator. Incorrect use of this option can cause the Alarm
Notifier to fail when attempting to send alarm notifications.
Use SMSC as primary send Select this checkbox if you want notifications to be sent via the SMSC.
mechanism
The Alarm Notifier will first attempt to send a notification via the SMSC. If
this fails, it will then attempt to send the notification via an attached
modem or handset. This method provides a backup send mechanism in
the event of a LAN failure.
Use SMSC Keep Alives Select this checkbox if you want an Enquire Link request to be sent to
the SMSC every thirty seconds to ensure that the connection to the
SMSC does not time out during periods of inactivity.
Tip: Try using this option if errors occur after periods of inactivity but the
connection worked correctly initially.
Note: This setting is not required by all SMSCs.
Test SMSC Settings Click this button to test that your SMSC configuration is set up correctly.
The results of the test are displayed in the Server Test Response
window.

This picture shows an example:

2. Click Apply to save your changes.

3. Click OK to minimise the Alarm Notifier dialog box.


About the Current Actions Window


The Current Actions window enables you to view information about:
• What actions the Alarm Notifier is currently performing
• Any errors that the Alarm Notifier has encountered

Tip: You can display more detailed information in the Current Actions window by selecting the
Verbose Logging checkbox on the Database Configuration tab. For more information, see
Configuring Database Settings on page 421.

To open the Current Actions window:

Right-click the Alarm Notifier icon in your system tray and, from the menu that
appears, click Show Status Window.

- or -

In the Alarm Notifier dialog box, click the Show Status Window button

This picture shows an example of the Current Actions window:

Current Actions Window

Note: When you open the Current Actions window for the first time, it appears minimized in the
top left-hand corner of your screen:

Current Actions Window Minimized


You should position and resize the Current Actions window as you require, and it will then open with the same location and dimensions in the future.

Configuring the Database for Alarm Notification


As well as using the OPTIMA front end and the Alarm Notifier dialog box, you must also configure
alarm notification directly within the database.

In the database, you must configure:


• Alarm severity
• Interactive SMS

Configuring Alarm Severity in the Database


Alarm Severity enables the Alarm Notifier to control:
• When SMS notifications are sent
• How long they are valid for, if they are not sent immediately

Alarm Validity

To set how long a notification will be valid for:

You need to manually define values in the ALARM_SEVERITY table in the OPTIMA database:

Column Name | ID | Pk | Null | Data Type | Description

ID | 1 | 1 | N | NUMBER | This ID maps to a severity ID when you create an alarm and set the alarm severity. You only need one entry per severity in this table, regardless of how many configured alarms map to the severity type.
SEVERITY | 2 | | N | VARCHAR2(32) | Describes the severity of the alarm. For example, MINOR, CRITICAL, INFORMATION and so on.
VALIDITY_PERIOD_TYPE | 3 | | Y | NUMBER | The time interval unit for the message validity: 0 - Never expires; 1 - Weeks; 2 - Days; 3 - Hours; 4 - Minutes; 5 - Seconds.
VALIDITY_NO_OF_PERIODS | 4 | | Y | NUMBER | The number of the corresponding units defined in VALIDITY_PERIOD_TYPE that defines the alarm validity. For example, if the VALIDITY_PERIOD_TYPE is 1 (weeks) and VALIDITY_NO_OF_PERIODS is 2, then the validity period is 2 weeks.


Important: If these options are missing or invalid, the Notifier will assume all notifications are valid
forever, and all notifications will always be sent immediately.

To understand how these parameters work together, consider the following examples:

ID | SEVERITY | VALIDITY_PERIOD_TYPE | VALIDITY_NO_OF_PERIODS

1 | MAJOR | 1 | 2
2 | MINOR | 3 | 5
3 | CRITICAL | 0 | 0

This means that alarms raised as:


• MAJOR have a validity of two weeks
• MINOR have a validity of five hours
• CRITICAL are permanently valid (in other words, never expires)
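As an illustration, the example rows above could be created with SQL similar to the following (the schema prefix is omitted and may differ in your installation):

INSERT INTO ALARM_SEVERITY (ID, SEVERITY, VALIDITY_PERIOD_TYPE, VALIDITY_NO_OF_PERIODS)
VALUES (1, 'MAJOR', 1, 2);    -- valid for 2 weeks
INSERT INTO ALARM_SEVERITY (ID, SEVERITY, VALIDITY_PERIOD_TYPE, VALIDITY_NO_OF_PERIODS)
VALUES (2, 'MINOR', 3, 5);    -- valid for 5 hours
INSERT INTO ALARM_SEVERITY (ID, SEVERITY, VALIDITY_PERIOD_TYPE, VALIDITY_NO_OF_PERIODS)
VALUES (3, 'CRITICAL', 0, 0); -- never expires
COMMIT;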

If for any reason a notification cannot be sent within the specified period (or if the Notifier is disabled and later re-enabled), then when the Notifier processes alarms that have exceeded their validity period, it will not send any notification, and the event will be logged as Expired.

Permissible Send Times for Alarms

You can also define periods when notifications cannot be sent, depending on their severity type.

For example, you may want to avoid sending non-critical notifications to users in the middle of the
night.

To do this, you need to manually define values in the ALARM_SEVERITY_NOT_RULES table in the OPTIMA database:

Column Name | ID | Pk | Null | Data Type | Description

ID | 1 | 1 | N | NUMBER | Unique identifier for the table.
SEVERITY_ID | 2 | | N | NUMBER | The ID of the severity type (as defined in the ALARM_SEVERITY table).
DAY_OF_WEEK | 3 | | N | NUMBER(1) | The day of the week: 1 - Sunday; 2 - Monday; 3 - Tuesday; 4 - Wednesday; 5 - Thursday; 6 - Friday; 7 - Saturday.
START_TIME | 4 | | N | DATE | The start time on the specified day that messages may not be sent. The date is ignored; only the time portion is used.
END_TIME | 5 | | N | DATE | The end time on the specified day that messages may not be sent. The date is ignored; only the time portion is used.

427
OPTIMA 8.0 Operations and Maintenance Guide

The time between the START_TIME and END_TIME entries on the specified day is a 'blackout' period, during which notifications are put on hold and sent at the next available time that falls outside that period. If a notification expires during this waiting period, it will never be sent.

You must add as many entries per severity as are needed to create all of the blackout periods required by the customer. While a notification is on hold in this state, the Alarm Notifier logs it as being 'On Hold Due to Severity Time'.

To understand how these parameters work together, consider the following examples:

ID | SEVERITY_ID | DAY_OF_WEEK | START_TIME | END_TIME

1 | 2 | 7 | 1899/01/01 14:00 | 1899/01/01 18:00
2 | 1 | 4 | 1899/01/01 09:00 | 1899/01/01 15:00

This means that the following blackout periods will be used:


• Notifications of MINOR severity will not be sent on Saturday from 14:00 to 18:00
• Notifications of MAJOR severity will not be sent on Wednesday from 09:00 to 15:00
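A sketch of how the example rows might be inserted; TO_DATE is used because only the time portion of the DATE columns is significant (schema prefix omitted):

INSERT INTO ALARM_SEVERITY_NOT_RULES (ID, SEVERITY_ID, DAY_OF_WEEK, START_TIME, END_TIME)
VALUES (1, 2, 7, TO_DATE('14:00', 'HH24:MI'), TO_DATE('18:00', 'HH24:MI')); -- MINOR, Saturday
INSERT INTO ALARM_SEVERITY_NOT_RULES (ID, SEVERITY_ID, DAY_OF_WEEK, START_TIME, END_TIME)
VALUES (2, 1, 4, TO_DATE('09:00', 'HH24:MI'), TO_DATE('15:00', 'HH24:MI')); -- MAJOR, Wednesday
COMMIT;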

Configuring Interactive SMS in the Database


If you enable interactive SMS, then the Alarm Notifier will respond to specific user-defined keywords that are sent to it via SMS, and perform actions or return information as appropriate.

To configure interactive SMS:

You need to manually define values in the ALARMS_INTERACTIVE table in the OPTIMA
database:

Column Name | ID | Null | Data Type | Description

SQL | 1 | Y | VARCHAR2(4000) | The SQL statement that provides the information you want the Alarm Notifier to return when the specified keyword is received from the user.
Tip: You can create the SQL statement in an SQL editor, and paste it into this field.
When defining this field, you should remember the following:
• You can only send a limited amount of characters via SMS, so your statement should only return a small amount of information
• Ensure that you name the returned data fields something appropriate, as the headers will be sent with the information to identify it
Tip: You can use placeholders in your SQL query, which will be substituted at runtime by parameters sent by the user in the SMS request. Placeholders are defined using the pipe character "|" and then a number, starting from 2, and going up to as many as required. Each word that the user sends after the keyword is considered a new placeholder and associated with its relative place. For example, if a user sends the text 'TEST CAT DOG', the Alarm Notifier reads TEST as the keyword, CAT as placeholder |2 and DOG as placeholder |3.

DESCRIPTION | 2 | Y | VARCHAR2(200) | The information that starts the message sent back to the user, and so should be short and descriptive.

KEYWORD | 3 | Y | VARCHAR2(20) | The word the user must send via SMS to the Alarm Notifier to have it perform a particular query. It must be stored in the database in uppercase, but the user can send it to the Alarm Notifier in any format.

To understand how these parameters work together, consider the following examples:

SQL | DESCRIPTION | KEYWORD

SELECT distinct(s.USERNAME)||'='||s.STATUS USR FROM V$SESSION s WHERE ( (s.USERNAME is not null) and (NVL(s.osuser,'x') <> 'SYSTEM') and (s.type <> 'BACKGROUND') ) | LOGGED IN: | USERS

SELECT last_date, next_date, broken, failures FROM DBA_JOBS WHERE job='|2' | JOB STATUS: | JOB
Note: In the second example, the placeholder '|2' is included, to be substituted at runtime.

This would mean that:


• If a user SMS'd the word 'USERS' to the Alarm Notifier, it would return a list of users logged on to the database, and indicate whether they were active or not. A sample message may look like:

LOGGED IN: USR: AIRCOM=ACTIVE USR: SCOTT=INACTIVE USR: HAROLD_S=INACTIVE

• If a user SMS'd the text 'JOB 4' to the Alarm Notifier, it would return the status of Oracle job number 4. A sample message may look like:

JOB STATUS: LAST_DATE: 18/03/2006 16:03:26 NEXT_DATE: 19/03/2006 16:00:00 BROKEN: N FAILURES: 0
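A minimal sketch of how the first example keyword could be registered (note the doubled single quotes inside the SQL literal; schema prefix omitted):

INSERT INTO ALARMS_INTERACTIVE (SQL, DESCRIPTION, KEYWORD)
VALUES ('SELECT distinct(s.USERNAME)||''=''||s.STATUS USR FROM V$SESSION s WHERE (s.USERNAME is not null) and (NVL(s.osuser,''x'') <> ''SYSTEM'') and (s.type <> ''BACKGROUND'')',
'LOGGED IN:', 'USERS');
COMMIT;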

Maintaining Alarms
When using OPTIMA alarms, it is important to periodically run the Alarms Maintenance scheduled
job (AIRCOM.OPTIMA_ALARMS.Maintain_Alarms_Table), using DBMS_Scheduler. This job will:
• Delete all of the old alarms
• Reduce the size of the AIRCOM.ALARMS table and its primary key to reduce the space
used following the delete
• Gather statistics on the AIRCOM.ALARMS table and its primary key

Tip: It is recommended that this is run once daily per schema, at night-time.
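For example, a nightly run could be created with DBMS_Scheduler along the following lines; the job name and run time are illustrative, and the sketch assumes the maintenance procedure takes no arguments:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'OPTIMA_ALARMS_MAINTENANCE',  -- illustrative name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN AIRCOM.OPTIMA_ALARMS.Maintain_Alarms_Table; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY;BYHOUR=2',        -- 02:00 every night
    enabled         => TRUE);
END;
/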


To configure this scheduled job, ensure that the following parameters are set correctly in the
OPTIMA_Common table:

1. If you are using the Web-based USER ALARM Viewer, then set the
ALARMS_USEACKNOWLEDGE parameter to 1. By default this is 0.

2. Define the number of days for which to keep alarms after they have been cleared,
acknowledged, forwarded or notified (as appropriate), using the
ALARMS_DELETEAFTERDAYS parameter. After this number of days has been exceeded
(the default is 1 day), the next time the Alarms Maintenance scheduled job is run, then the
following rules will be followed:
o If Definition is set for SNMP Forward, then the alarm will only be deleted if both the
SET and CLEAR events have been forwarded
o If an Alarm Handler is Active for an alarm, then the alarm will only be deleted if both the
SET and CLEAR events have been processed by the Alarm Handler
o If the Web-based USER ALARM Viewer is being used, then the alarm will only be
deleted if both the SET and CLEAR events have been acknowledged by the user
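As an illustration only, these parameters might be set with SQL along these lines; the NAME and VALUE column names are assumptions, so check the actual OPTIMA_Common table definition first:

UPDATE OPTIMA_COMMON SET VALUE = '1' WHERE NAME = 'ALARMS_USEACKNOWLEDGE';  -- assumed column names
UPDATE OPTIMA_COMMON SET VALUE = '3' WHERE NAME = 'ALARMS_DELETEAFTERDAYS'; -- keep alarms for 3 days
COMMIT;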

Troubleshooting the Alarm Notifier


The following table shows troubleshooting tips for the Alarm Notifier:

Symptom: When trying to test email settings using the Alarm Notifier, the following message is received: "Authenticated message sent. SMTP session closed". However, the tested email address is not receiving the email.
Cause: If the Notifier says the mail has been sent, there can be a problem with the SMTP server or with the client receiving the mail.
Solution: Trace the event through the SMTP server logs.
Cause: There can be a problem with the 'From' email address: check if there is a space in the email address you have provided in the 'Mail Sent From' data. If there is a space, it will not work.
Solution: Make sure that there are no spaces in the 'Mail Sent From' data.
Cause: Check if there is an '@' symbol in the 'Mail Sent From' data. It will not work without it.
Solution: Make sure your From address includes an '@' symbol; the relay servers require an email address with the symbol and will not forward the mail without it. For example, make the From address [email protected] or something similar.

Symptom: Email sent by the Notifier does not reach the recipient.
Cause: The Exchange Server might block the emails for different reasons.
Solution: Talk to the IT department and make sure that the emails sent from the Notifier are not blocked.
Cause: If there is third-party anti-spam software installed on the Exchange server, it might scan the contents of every mail and delete the mail if it is recognised as spam.
Solution: Ensure that you get the anti-spam software to exclude OPTIMA emails from being blocked.

Symptom: When running the Alarm Notifier, an error message pops up: 'NOTIFIED invalid identifier. No further processing will be done'.
Cause: The database was not upgraded properly, or the ALARMS table doesn't have all the columns used by the Alarm Notifier.
Solution: Make sure that the database is upgraded properly and that the ALARMS table contains all the necessary columns for the Alarm Notifier.

Symptom: The user doesn't want a user login on any of their servers; they don't want to have an open session of Windows NT server.
Cause: Not using the appropriate Login option.
Solution: The Alarm Notifier provides different authentication types, such as None, POP, Login and Plain (Mail Configuration tab - Authentication type). If the user chooses the 'None' option, they don't need to provide a username and password.
430
About OPTIMA Alarms

Symptom Possible Causes Solution

Only ‘set’ Alarms are notified. No Option to send notification for Make sure that in the Alarm Handler
notification made for clear alarms. Clear Alarms not selected when definition the ‘Apply Handler on Clear
configuring the handler. Alarms’ option is checked.
Make sure that the ‘Do not process
cleared alarms’ checkbox on the
Database Configuration tab in the
Alarms Notifier is not selected.
The Version of Alarm Notifier Install Alarms Notifier version 3.2.
used might not have this feature.
Only from Alarm Notifier V3.2 the
notification is made for Clear
Alarms
Once the option “Use SMSC Keep The reason that multiple You should configure a dedicated
Alives” is selected; the SMS programs cannot use the same account for OPTIMA.
notifier will disconnect another account is because the Alarms
application developed by the Notifier keeps its session to the
customer that is also connecting SMSC open permanently.
to the SMSC server. Both
applications are using same
account to connect to the SMSC.
Delay in receiving SMS The Alarms and Log tables may Implement some jobs to clean up the
notification have grown too large, which log and the alarms tables for older
cause a delay in the processing events.
of the alarms.
Investigation needs to be done by the
In this case, the Notifier may: operator to resolve the delays in the
SMSC
• Send the SMS late.
• Send the SMS on time, but
with a delay in the SMSC
before the notification is
delivered
The error 'Send via SMSC failed: This is a standard status SMSC Check the Send From mobile phone
CIMD2 Error Code {11}' appears. error, which means 'Teleservice number and clarify it is numeric.
not provisioned'.
Note: This can be alphabetic although
you must check with your SMSC.
The Alarms Notifier is sending the The Alarms Notifier queries all If you want to stop this you will have to
total backlog for alarms, not just alarms where the NOTIFIED field manually update the ALARMS table by
the latest. in the ALARMS table is either setting the NOTIFIED field value to 1.
NULL, 0 or 2, and then sends the
notification accordingly.
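
For example, a minimal sketch of that manual update, assuming the ALARMS table resides in the AIRCOM schema as elsewhere in this guide:

UPDATE AIRCOM.ALARMS
SET    NOTIFIED = 1
WHERE  NOTIFIED IS NULL OR NOTIFIED IN (0, 2);
COMMIT;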


16 About the SNMP Agent

OPTIMA uses the SNMP Agent to provide an outgoing interface for X.733-compliant alarms
through SNMP protocols. SNMP clients can request information from the SNMP Agent about
alarms in the database. The SNMP Agent can also send SNMP traps to these SNMP clients.

The SNMP Agent uses a MIB (Management Information Base), which is a virtual database used for
managing the SNMP entities. If you install the SNMP Agent using the OPTIMA Combined Backend
Package (recommended), then the required MIB can be found in \Program Files (x86)\AIRCOM
International\AIRCOM OPTIMA Backend 8.0\Documentation.

The SNMP Interface


This diagram shows the basic functionality of the SNMP interface:

SNMP Interface Functionality

Fault management systems can integrate with OPTIMA's SNMP interface which provides SNMP
trap forwarding to named IP addresses and an SNMP Agent for more granular interaction by the
FM system.

The SNMP Interface supports the following functionality:


• The SNMP Agent has the ability to accept and respond to SNMP GET, GETNEXT,
GETBULK, WALK, SET commands from the Fault Management System (FMS) based on
the MIB.
• Generation of an alarm TRAP for both SET and CLEAR events in the alarm module. The
generation of TRAPS can be set on an individual alarm definition. The TRAP format is
based on the MIB. For more information about alarms, see the OPTIMA User Reference
Guide.
• The ability to generate a heartbeat TRAP. The format of the heartbeat TRAP is
configurable. Heartbeat TRAPS are sent at an interval defined by the heartbeat interval
parameter in the SNMP Agent. This is writable in the MIB via an SNMP SET command and
so can be set by the FMS. For more information about configuring heartbeat TRAPs, see
Configuring the SNMP Agent on page 437.


• A coldstart trap is sent to the FMS when the system is first initialised to notify the FMS that
the Agent is active.
• The ability to perform an FMS-initiated re-synchronization. The FMS can set a writable re-
synchronization flag in the MIB via an SNMP SET command. While this flag is set, no
TRAPs are sent; they are stored until the re-synchronization flag is reset.

As well as the basic configuration of one agent and one FMS shown above, you can configure the
SNMP interface using a number of other combinations. For more information on configuring
possible scenarios with different numbers of agents and FMSs, see:
• Configuring the SNMP Interface for a Single Agent and Multiple FMSs on page 434
• Configuring the SNMP Interface for Multiple Agents and Multiple FMSs on page 435

Configuring the SNMP Interface for a Single Agent and Multiple FMSs
It is possible to configure the SNMP interface for a variety of scenarios, including a single agent
and different numbers of FMSs.

This diagram shows an example scenario, with one agent and two FMSs:

SNMP Interface Scenario with one Agent and two FMSs

To define this sort of configuration, you should use the 'IPAddress', 'Port' and 'Community'
parameters in the [TRAP-LISTENERS] section of the INI file to specify the location of each FMS
that you want to use.

For more information, see Configuring the SNMP Agent on page 437.
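
For example, a minimal [TRAP-LISTENERS] fragment for two FMSs might look like this (the IP addresses, ports and community strings are illustrative assumptions):

[TRAP-LISTENERS]
IPAddress=10.0.0.1
Port=162
Community=public

IPAddress1=10.0.0.2
Port1=162
Community1=public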

If you are using this sort of configuration, you should consider the following points:
• The tables in the database for alarms are mapped to the X.733-compliant MIB views, which
have dependencies on the columns and the type of data in the column, including size limits
• There is no restriction on the SNMP GET and SET requests that are managed by the agent


Configuring the SNMP Interface for Multiple Agents and Multiple FMSs
It is possible to configure the SNMP interface for a variety of scenarios, including different numbers
of agents and different numbers of FMSs.

This diagram shows an example scenario, with three agents and five FMSs:

SNMP Interface Scenario with Three Agents and Five FMSs


To define this sort of configuration, you should:


• Create separate INI files for each agent. For more information on how to do this, see
Configuring the SNMP Agent on page 437.

Important: Ensure that you:


o Use the 'ExtEnterpriseOid' parameter in the INI file to add a differentiator for each
Agent, so that the FMS knows which Agent sent the alarm (see the example fragment
after this list).
o Use the 'Port' parameter in the INI file to enable each Agent running on the same
server to bind to a different port to receive SNMP GET/SET instructions from the FMS
(SNMP Manager).
• Configure your views to be associated with each of these separate agents, according to the
alarm types and your requirements. For example, you may choose to associate the
PERFORMANCE views with an agent used for troubleticketing, or the SYSTEM views with
an agent used by your system administrator.

For more information, see Configuring Views on page 444.


• Use the 'IPAddress', 'Port' and 'Community' parameters in the [TRAP-LISTENERS] section
of each agent INI file to specify the location of each FMS that you want to use.
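
For example, the [SNMP-AGENT] sections of two agent INI files might be differentiated as follows. This is a minimal sketch - the OID suffixes and port numbers are illustrative assumptions, and the view names follow the standard deployment described in Configuring Views on page 444:

Agent 1 (performance alarms):

[SNMP-AGENT]
Port=161
ExtEnterpriseOid=23322.1
AlarmTableView=AIRCOM.SNMP_ALARM_MIB_PERFORMANCE

Agent 2 (system alarms):

[SNMP-AGENT]
Port=1161
ExtEnterpriseOid=23322.2
AlarmTableView=AIRCOM.SNMP_ALARM_MIB_SYSTEM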

If you are using a configuration with multiple agents and multiple FMSs, you should consider the
following points:
• When configuring different Alarm events to go to different FMSs, the Alarm SET events
have to be mutually exclusive. This is because of the method used to record events sent
using SNMP TRAP events.
• The FMSs do not need to map to the Alarm SET events, and an FMS may receive different
alarms for different agents.

In the example scenario above, three Alarm SET events have been created - A, B and C. These
events have their own CUSTOMIZABLE CONTENT MIB views, which are configured in three
separate INI files, one for each Agent.

Note: The standard deployment scripts provide three MIB views for PERFORMANCE, SYSTEM
and TCA alarms respectively.

Installing the SNMP Agent


Before you can use the SNMP Agent, install the following file to the backend binary directory:
• opx_ALM_GEN_820.exe (Windows)
• opx_ALM_GEN_820 (Unix)

Starting the SNMP Agent


To start the SNMP Agent:

Type the executable name and a configuration file name into the command prompt. If you are
creating a new configuration file, this is when you choose the file name.

In Windows type:

opx_ALM_GEN_820.exe opx_ALM_GEN_820.ini

In Unix type:

opx_ALM_GEN_820 opx_ALM_GEN_820.ini


Note: The SNMP Agent should be run continuously.

Configuring the SNMP Agent


The SNMP Agent is configured using a configuration (INI) file. Configuration changes are made by
editing the parameters in the configuration (INI) file with a suitable text editor.

Important: If you are creating an SNMP interface with multiple agents, you should create a
separate INI file for each agent, using the 'ExtEnterpriseOid' parameter to differentiate between
agents.

The SNMP Agent configuration (INI) file is divided into eight sections.

The following table describes the parameters in the [DIR] section:

Parameter Description

EXEName The executable name.


LogDir The location of the log files.
PersistentPath The location of persistent files.
PidFilePath The location of the monitor files.
TempDir The location of temporary files created while the SNMP Agent is running.

The following table describes the parameters in the [MAIN] section:

Parameter Description

FolderFileLimit The maximum number of output files that can be created in each output (sub)
folder.
This must be in the range of 100-100,000 for Windows, or 100-500,000 on
Sun/UNIX, otherwise the application will not run.
Warning: Depending on the number of files that you are processing, the lower
the file limit, the more output sub-folders that will be created. This can have a
significant impact on performance, so you should ensure that if you do need to
change the default, you do not set the number too low.
The default value is 10,000.
InstanceID The three-character program instance identifier (mandatory).
InterfaceID The three-digit interface identifier (mandatory).
LogGranularity Defines the frequency of logging, the options are:
0 - Continuous
1 - Monthly
2 - Weekly
3 - Daily

437
OPTIMA 8.0 Operations and Maintenance Guide

Parameter Description

LogLevel (or LogSeverity) Sets the level of information required in the log file. The available options are:
1 - Debug
2 - Information (Default)
3 - Warning
4 - Minor
5 - Major
6 - Critical
LogOptions 0 - Do not generate a log file.
1 - Generate a log file to the specified directory.
ProgramID The three-character program identifier (mandatory).
UseFolderFileLimit Indicates whether the folder file limit should be used (1) or not (0).
The default value is 0 ('OFF').
Verbose 0 - Run silently. No log messages are displayed on the screen.
1 - Display log messages on the screen.
TestConnectionDelay When attempting to recover from a loss of database connection, the SNMP
Agent will test the database connection each time it queries the database to get
the latest alarms data.
If the connection is lost, the SNMP Agent will attempt to reconnect three times; if
the connection is not restored after this, the SNMP Agent will terminate.
This parameter specifies the number of seconds to delay before each
reconnection attempt. The default value is 30.

The following table describes the parameters in the [SNMP-AGENT] section:

Parameter Description

AlarmTableView The database view to query when populating the Alarm table in the MIB. For
more information, see Configuring Views on page 444.
DbPollInterval The database polling interval in minutes.
EnterpriseOid The Enterprise OID used in the MIB.
Note: The Enterprise OID is 23322.
ExtEnterpriseOid The Enterprise OID used when sending traps.
By default this is 0 (not used), but can be specified if you do not want to use the
Enterprise OID when sending traps.
HeartbeatTrapInterval The time in minutes between sending the heartbeat trap.
ObjectTableView The database view to query when populating the Object table in the MIB. For
more information, see Configuring Views on page 444.
Port The port number on which the SNMP Agent listens for incoming requests.
ReadCommunity The community string used in the GET, GETNEXT request.
ResyncTable The database view to query when sending alarm traps due to a
resynchronization. For more information, see Configuring Views on page 444.
ResyncType The resynchronization type:
0 - Agent.
1 - Manager.
SendEndOfResyncTrap 0 - Do not send an end of resynchronization trap.
1 - Send an end of resynchronization trap.


Parameter Description

SysLocation The location where the SNMP Agent is running, for example, a physical location
or a machine name.
SysName The name of the SNMP Agent. The default setting for this parameter is OPTIMA
SNMP Agent.
TrapGuardPeriod The delay time (in milliseconds) after each trap is sent, for example,
TrapGuardPeriod=1000 means a 1 second delay after each trap is sent.
TrapView The database view to query when sending alarm traps. For more information,
see Configuring Views on page 444.
WaitForRequestTimeoutSeconds The time in seconds to wait for incoming requests.
WriteCommunity The community string used in the SET request.

The following table describes the parameters in the [DATABASE] section:

Parameter Description

Database The database name.


Password The password required to connect to the database.
UserName The user name to connect to the database.

The following table describes the parameters in the [TRAP-LISTENERS] section:

Parameter Description

Community The community string to use when sending traps.


IPAddress The IP address to send traps to.
Port The port number used in sending traps.
ResyncTraps 1 - The MANAGER at the specified IP address wants to be sent resync traps and
is allowed to change the resyncFlag value using a SET message.
0 - The MANAGER at the given IP address does not want to be sent resync
traps and is not allowed to change the resyncFlag value using a SET message.

Important: If you want to send traps to multiple destinations, you should specify the Community,
IPAddress and Port for each destination as separate entities. The first set of parameters should
have no suffix, but the parameters for each additional destination should be suffixed by a number,
starting with 1 for the second destination (Community1, IPAddress1, Port1), 2 for the third
destination (Community2, IPAddress2, Port2) and so on.

The following table describes the parameters in the [HEARTBEAT_TRAP] section:

Parameter Description

AdditionalText The default value for this trap value is HEARTBEAT_TRAP.


EventTypeID The default value for this trap value is 2.
FirstOccurence This trap value is optional.
NotificationID The default value for this trap value is 99999.
ObjectId This trap value is optional.
Occurrence This trap value is optional.
PerceivedSeverity The default value for this trap value is 4.

439
OPTIMA 8.0 Operations and Maintenance Guide

Parameter Description

ProbableCauseID The default value for this trap value is 0.


ProposedRepairAction This trap value is optional.
SpecificProblem This trap value is optional.
TrendIndicator This trap value is optional.

The following table describes the parameters in the [END_OF_RESYNC] section:

Parameter Description

AdditionalText The default value for this trap value is END_OF_RESYNC.


EventTypeID The default value for this trap value is 2.
FirstOccurence This trap value is optional.
NotificationID The default value for this trap value is 99998.
ObjectId This trap value is optional.

Occurrence This trap value is optional.


PerceivedSeverity The default value for this trap value is 4.
ProbableCauseID The default value for this trap value is 0.
ProposedRepairAction This trap value is optional.
SpecificProblem This trap value is optional.
TrendIndicator This trap value is optional.

The following table describes the parameters in the [OPTIMA-SEVERITY-MAPPING] section:

Parameter Description

Admin_clear, Clear, Critical, Information_only, Intermediate, Minor, Major, Warning
These parameters map the OPTIMA Severity levels onto corresponding MIB perceivedSeverity values. The available MIB perceivedSeverity options are:
0 - Indeterminate
1 - Critical
2 - Major
3 - Minor
4 - Warning
5 - Cleared
One of these options can be mapped to each parameter, which represents an OPTIMA Severity level. For example, 'Clear=5' indicates that the OPTIMA Severity level 'Clear' corresponds to the MIB perceivedSeverity value of 5 (Cleared).


Summary of the SNMP Agent Modes


You can configure the SNMP Agent's response behavior to a resync, using the ResyncType. There
are two possible modes:
• Agent mode (ResyncType = Agent)
• Manager mode (ResyncType = Manager)

This topic summarizes the process for each mode.

Process for Agent Mode

1. The SNMP Manager (at the customer end) starts a resync by issuing an SNMP SET
request that sets the resyncFlag OID to 1.

2. The Agent stops listening for requests from clients.

3. The Agent queries the database view defined in the ResyncTable parameter in the [SNMP-
AGENT] section of the INI file.

4. The Agent sends a trap for each row from this database view.

5. If the SendEndOfResyncTrap parameter in the [SNMP-AGENT] section of the INI file is set
to 1, then the Agent sends the endOfResyncTrap.

6. The Agent sets the resyncFlag to 0.

Process for Manager Mode

1. The SNMP Manager (at the customer end) starts a resync by issuing an SNMP SET
request that sets the resyncFlag OID to 1.

2. The Agent stops sending new events as traps, and waits (it does not update the MIB table
with new events).

3. The SNMP Manager uses an SNMP GET or SNMP WALK request to obtain all of the trap
information from the TRAP MIB.

4. After the SNMP Manager has synchronized, then the SNMP Manager uses an SNMP SET
request to set the resyncFlag OID to 0.

5. The Agent sends all of the new events (that occurred when the resyncFlag OID was set to
1) as traps.
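
For example, using the net-snmp command-line tools, a Manager-mode resync could be driven as in this sketch. The host, community string and placeholder OIDs are illustrative assumptions - check the MIB supplied with the backend package for the exact resyncFlag and alarmsTable OIDs:

snmpset -v2c -c public agenthost:161 <resyncFlagOid> i 1
snmpwalk -v2c -c public agenthost:161 <alarmsTableOid>
snmpset -v2c -c public agenthost:161 <resyncFlagOid> i 0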

Detailed Description of the SNMP Agent Modes


This section provides a detailed description of the SNMP agent behavior for the following
ResyncType modes:
• Agent
• Manager

When ResyncType = Agent:

The SNMP agent reads the INI file, creates the SNMP session, connects to the database, creates
the log file, creates the PRID file, and builds the MIB in memory. The agent then queries the
AlarmTableView database view and populates the alarmsTable MIB table in the agent memory
based on the result set returned from the database.


The agent sets the following MIB scalars in memory:


• resyncFlag to 0
• trapSequenceNumber to 0 or its last value when the agent was cleanly shut down
• pollHeartBeat to HeartbeatTrapInterval or its last value when agent was cleanly shut down

The agent then enters into a main loop and performs the following steps:

1. The agent listens on Port for a PDU request for a period of WaitForRequestTimeoutSeconds.

2. The agent responds to a PDU GET message by searching for the value of the OID in the
MIB message stored in memory. The following are the options:
o The agent responds to a PDU GETNEXT message by returning the next OID after OID
received in the message
o The agent will respond to a PDU SET message by setting the OID value from the
message in the MIB memory
o No database connections are made when responding to these requests

Once the agent has finished responding to a request, or the wait for a request has timed out, it
performs the following actions:

1. The agent checks to see if a Heartbeat trap should be sent by comparing the pollHeartBeat
value and the last time a Heartbeat trap was sent. When it is ready to send the heartbeat
trap, the agent builds the heartbeat trap PDU using the values listed in the
HEARTBEAT_TRAP section of the INI file. For more information on the Heartbeat_Trap
section, see Configuring the SNMP Agent on page 437. It then sleeps for period
TrapGuardPeriod milliseconds if the trap was sent without error and then resets the last
heartbeat time.

2. The agent checks to see if the resyncFlag value is 1. If yes, it performs the following
actions:
o Queries the resync database view (defined in the ResyncTable parameter in the INI file)
o For each row in the result set, the agent builds the alarm trap PDU by reading the 11
column values in the row, sends the alarm trap, sleeps for TrapGuardPeriod
milliseconds if the trap was sent without error, and then inserts a row into the
SNMP_UPDATE database table
o Once it finishes processing the result set, the agent calls the
SNMP_PKG.SET_FWD_IN_ALL_ALARMS_TBL database procedure (which updates
the ALARM table fields related to the SNMP Agent), resets the resyncFlag to 0 and, if
SendEndOfResyncTrap is set, sends an EndOfResyncTrap

3. The agent then checks the database polling time and sends any new alarms traps. The
agent compares the last database polling time to the DbPollInterval:
o When database polling is due, the agent queries the database view TrapView
o For each row in the result set, the agent builds the alarm trap PDU by reading the 11
column values in the row, sends the alarm trap, sleeps for TrapGuardPeriod
milliseconds if the trap was sent without error, and inserts a row into the database
table SNMP_UPDATE

4. Once it finishes processing the result set, the agent calls the
SNMP_PKG.SET_FWD_IN_ALL_ALARMS_TBL database procedure which updates the
ALARM table fields related to the SNMP AGENT. If the TrapView database view is not
empty:
o Clears the current MIB tables alarmsTable and objectsTable
o Queries the AlarmTableView database view and populates the alarmsTable MIB table
in the agent memory


o Queries the ObjectTableView database view and populates the objectsTable MIB table
in the agent memory
o Resets the last db polling time

When ResyncType = Manager:

The SNMP agent reads the INI file, creates the SNMP session, connects to the database, creates
the log file, creates the PRID file and then builds the MIB in memory. The agent then queries the
AlarmTableView database view and populates the alarmsTable MIB table in the agent memory
based on the result set returned from the database.

The agent queries the ObjectTableView database view and populates the objectsTable MIB table in
the agent memory based on the result set that is returned from the database.

The agent sets the following MIB scalars in memory:


• resyncFlag to 0
• trapSequenceNumber to 0 or its last value when the agent was cleanly shut down
• pollHeartBeat to HeartbeatTrapInterval or its last value when agent was cleanly shut down

The agent then enters into a main loop and performs the following steps:

1. The agent listens on Port for a PDU request for a period of WaitForRequestTimeoutSeconds.

2. The agent responds to a PDU GET message by searching for the value for the OID in the
MIB message stored in memory. The following are the options:
o The agent responds to a PDU GETNEXT message by returning the next OID after OID
received in the message
o The agent responds to a PDU SET message by setting the OID value from the
message in the MIB memory
o No database connections are made when responding to these requests

Once the agent has finished responding to a request, or the wait for a request has timed out, it
performs the following actions:

1. The agent checks to see if a Heartbeat trap should be sent by comparing the pollHeartBeat
value and the last time a Heartbeat trap was sent

2. When it is time to send a heart beat:


o When resyncFlag = 1, no heartbeat trap is sent
o When resyncFlag = 0, a heartbeat trap is sent

3. The Agent builds the heartbeat trap PDU using the values listed under the
HEARTBEAT_TRAP section of the INI file. For more information on the Heartbeat_Trap
section, see Configuring the SNMP Agent on page 437. It then sleeps for a period of
TrapGuardPeriod milliseconds if the trap was sent without error and then resets the last
heartbeat time.

4. The agent then checks the database polling time and sends any new alarms traps. The
agent compares the last database polling time to the DbPollInterval:
o When database polling is due, the agent sends traps if resyncFlag value = 0
o The agent queries the TrapView database view
o For each row in the result set, the agent builds the alarm trap PDU by reading the 11
column values in the row, sends the alarm trap, sleeps for TrapGuardPeriod
milliseconds if the trap was sent without error, and inserts a row into the database
table SNMP_UPDATE


5. Once it finishes processing the result set, the agent calls the
SNMP_PKG.SET_FWD_IN_ALL_ALARMS_TBL database procedure which updates the
ALARM table fields related to the SNMP AGENT. If the TrapView database view was not
empty and traps were sent, the agent:
o Clears the current alarmsTable and objectsTable MIB tables
o Queries the AlarmTableView database view and populates the alarmsTable MIB table
in the agent memory
o Queries the ObjectTableView database view and populates the objectsTable MIB table
in the agent memory
o Resets the last db polling time

Configuring Views
In the [SNMP-AGENT] section of the configuration (INI) file, there are some parameters that query
views in the database. You can configure these views to control the behavior of the SNMP Agent.

Important: You cannot change the names or number of columns in these views, but you can
change the formulas that provide the data in them.

The following describes how to configure each view:

Alarm Table view
Used for: Active events, that is, SET events that have no corresponding CLEAR event.
Uses: X.733 alarm columns.
Must have this header:
CREATE OR REPLACE FORCE VIEW AIRCOM.SNMP_ALARM_MIB_<ALARM TYPE>
(NOTIFICATIONID, ALARM_DATETIME, PERCIEVEDSEVERITY, FIRSTOCCURENCE, OCCURENCE,
DEFINITION_ID, ELEMENT_ID, MANAGEDOBJECT, IDEVENTTYPE, IDPROBABLECAUSE,
SPECIFICPROBLEM, PROPOSEDREPAIRACTION, ADDITIONALTEXT, TRENDINDICATOR)

Object Table view
Used for: Objects for which alarms are valid.
Uses: Object columns.
Must have this header:
CREATE OR REPLACE FORCE VIEW AIRCOM.SNMP_OBJECTS_<ALARM TYPE>
(DEFINITION_ID, ELEMENT_ID, ELEMENT_NAME)

Trap view
Used for: Unforwarded events, that is, events that have not been forwarded by the SNMP Agent.
Uses: X.733 alarm columns.
Must have this header:
CREATE OR REPLACE FORCE VIEW AIRCOM.SNMP_TRAP_MIB_<ALARM TYPE>
(NOTIFICATIONID, ALARM_DATETIME, PERCIEVEDSEVERITY, FIRSTOCCURENCE, OCCURENCE,
DEFINITION_ID, ELEMENT_ID, MANAGEDOBJECT, IDEVENTTYPE, IDPROBABLECAUSE,
SPECIFICPROBLEM, PROPOSEDREPAIRACTION, ADDITIONALTEXT, TRENDINDICATOR)

There are three different alarm types, and the alarm type must be specified at the end of the view
name, as follows:
• For performance alarms, use PERFORMANCE
• For system alarms, use SYSTEM
• For TCAs, use TCA


For example, the Alarm Table view for a system alarm is AIRCOM.SNMP_ALARM_MIB_SYSTEM,
and the Trap view for a TCA is AIRCOM.SNMP_TRAP_MIB_TCA.
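
As a hedged illustration, a custom Trap view that forwards only critical performance alarms might look like the following sketch. The view name and column list follow the header convention above; the SELECT list and WHERE clause are assumptions, because the source columns depend on your ALARMS schema:

CREATE OR REPLACE FORCE VIEW AIRCOM.SNMP_TRAP_MIB_PERFORMANCE
(NOTIFICATIONID, ALARM_DATETIME, PERCIEVEDSEVERITY, FIRSTOCCURENCE, OCCURENCE,
DEFINITION_ID, ELEMENT_ID, MANAGEDOBJECT, IDEVENTTYPE, IDPROBABLECAUSE,
SPECIFICPROBLEM, PROPOSEDREPAIRACTION, ADDITIONALTEXT, TRENDINDICATOR)
AS
SELECT a.NOTIFICATIONID, a.ALARM_DATETIME, a.PERCIEVEDSEVERITY, a.FIRSTOCCURENCE,
       a.OCCURENCE, a.DEFINITION_ID, a.ELEMENT_ID, a.MANAGEDOBJECT, a.IDEVENTTYPE,
       a.IDPROBABLECAUSE, a.SPECIFICPROBLEM, a.PROPOSEDREPAIRACTION,
       a.ADDITIONALTEXT, a.TRENDINDICATOR
FROM   AIRCOM.ALARMS a                -- hypothetical source table/columns
WHERE  a.PERCIEVEDSEVERITY = 1;       -- 1 = Critical in the MIB severity mapping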

Maintenance
In usual operation, the SNMP Agent should not need any special maintenance. However, TEOCO
recommends that the following basic maintenance check is carried out for the SNMP Agent:

Check: The log file for error messages.
When: Weekly.
Why: In particular, any Warning, Minor, Major and Critical messages should be investigated.

Stopping the SNMP Agent


The SNMP Agent runs continuously. You can stop the SNMP Agent by pressing CTRL-C or by
closing the console window.

Checking the Version of the SNMP Agent


If you need to contact TEOCO support regarding any problems with the SNMP Agent, you must
provide the version details.

You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

In Windows:

opx_ALM_GEN_820.exe -v

In Unix:

opx_ALM_GEN_820 -v

For more information about versioning, see About Versioning on page 33.

Checking the Application is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.


Troubleshooting
The following table shows troubleshooting tips for the SNMP Agent:

Problem: Cannot save configuration (INI) file.
Cause: The user has insufficient privileges on the configuration (INI) file or directory, or the file is read-only or is being used by another application.
Solution: Enable permissions, make the file writable, or close the application that is using the configuration (INI) file.

Problem: Application exits immediately.
Cause: Another instance is running, or the configuration (INI) file is invalid or corrupt.
Solution: Use the Process Monitor to check which instances are running.

Problem: SNMP session not created.
Cause: Network problem.
Solution: Report to the system administrator.

SNMP Agent Message Log Codes


This section describes the message log codes for the SNMP Agent:

Message Code    Description    Severity

1111 IP Address of request is not allowed to set ResyncFlag. WARNING


3000 SNMP Agent instance started. <releaseVersion>. INFORMATION
3001 Trap sent: <status>. DEBUG
Trap failed: <status> . DEBUG

DB Connection Error : <errorDetails>. DEBUG

Reading <DIRCounterSection> section of INI file. Failed! DEBUG

Reading <DIRCounterSection> section of INI file. Success! DEBUG

Reading <MainCounterSection> section of INI file. Failed! DEBUG

Reading <MainCounterSection> section of INI file. Success! DEBUG

Reading <SnmpAgentCounterSection> section of INI file. Failed! DEBUG

Reading <SnmpAgentCounterSection> section of INI file. Success! DEBUG

Reading <DatabaseCounterSection> section of INI file. Failed! DEBUG

Reading <DatabaseCounterSection> section of INI file. Success! DEBUG

Reading <HeartBeatTrapCounterSection> section of INI file. Failed! DEBUG

Reading <HeartBeatTrapCounterSection> section of INI file. Success! DEBUG

Reading <EndOfResyncTrapCounterSection> section of INI file. Failed! DEBUG

Reading <EndOfResyncTrapCounterSection> section of INI file. Success! DEBUG

Reading <SeverityMappingCounterSection> section of INI file. Failed! DEBUG

Reading <SeverityMappingCounterSection> section of INI file. Success! DEBUG

Reading <TrapListenersCounterSection> section of INI file. Failed! DEBUG

Reading <TrapListenersCounterSection> section of INI file. Success! DEBUG

Reading INI file. DEBUG


Successfully read INI file <argv[1]>. DEBUG

Resync active alarms selected in the INI file. DEBUG

Resync active alarms not selected in the INI file. DEBUG

Column missing or db error: <NotificationID>. DEBUG

Column missing or db error: <ColumnNameAlarmDate>. DEBUG

Column missing or db error: <ColumnPerceivedSeverity>. DEBUG

Column missing or db error: <ColumnFirstOccurrence>. DEBUG

Column missing or db error: <ColumnNameOccurrence>. DEBUG

Column missing or db error: <ColumnNameManagedObject>. DEBUG

Column missing or db error: <ColumnNameIdEventType>. DEBUG

Column missing or db error: <ColumnNameIdProbableCause>. DEBUG

Column missing or db error: <ColumnNameSpecificProblem>. DEBUG

Column missing or db error: <ColumnNameProposedRepairAction>. DEBUG

Column missing or db error: <ColumnNameAdditionalText>. DEBUG

Column missing or db error: <ColumnNameTrendIndicator>. DEBUG

Column missing or db error: <ColumnNameDefinitionID>. DEBUG

Column missing or db error: <ColumnNameElementID>. DEBUG

Column missing or db error: <ColumnNameElementName>. DEBUG

Database error: <errorDetails>. DEBUG

IP Address of request : <ipAddress>. DEBUG

Resync not 0 or 1. DEBUG

resync value change to <ResyncActiveAlarmsValue>. DEBUG

pollHeartBeat not between 1 and 2880. DEBUG

pollHeartBeat value change to <HeartbeatTrapIntervalValue>. DEBUG

3005 Successfully read INI file. INFORMATION


3010 Successfully started logging. INFORMATION
3015 Successfully created PRID monitor. INFORMATION
3020 Signal handler started. INFORMATION
3025 MIB OID's created. INFORMATION
3030 Successfully created connection to Optima database: <DBname>. INFORMATION
Successfully created SNMP session. INFORMATION

3035 Error creating SNMP session: <errorStatus>. CRITICAL


MIB object created. INFORMATION

3040 Read Community and Write Community set. INFORMATION


3045 Added sysGroup to MIB. INFORMATION
3050 Added sysContact to MIB. INFORMATION
3054 OPTIMA_ADMIN package successfully prepared. DEBUG
3055 Executing OPTIMA_ADMIN.AUTHENTICATE_ROLE procedure. DEBUG


Added sysName to MIB. INFORMATION

3056 Finished executing procedure OPTIMA_ADMIN.AUTHENTICATE_ROLE. DEBUG


3060 Added sysLocation to MIB. INFORMATION
3065 Added Alarm tree to MIB. INFORMATION
3070 Added Object tree to MIB. INFORMATION
3071 Database authentication method DEBUG
3075 Successfully created MIB objects. INFORMATION
3080 ResyncType-> Manager. INFORMATION
3085 ResyncType-> Agent. INFORMATION
3086 DbPollInterval-> <DbPollIntervalValue> minutes. INFORMATION
3087 HeartbeatTrapInterval-> <HeartbeatTrapIntervalValue> minutes. INFORMATION
3090 Last pollHeartBeat check set to <value>. DEBUG
3091 Last database polling check set to <value>. DEBUG
3100 Starting main loop, checking for requests, checking heartbeat interval, checking DB polling interval. INFORMATION
3202 Could not access MIB object trapSequenceNumber. WARNING
3900 Exited main loop, no longer checking for messages, heartbeat interval or DB polling interval. INFORMATION
3980 Could not access MIB object PollHeartBeat. DEBUG
3981 Could not access MIB object TrapSequenceNumber. DEBUG
3992 Failed to save Trap SequenceNumber to <SeqNoFileValue>. WARNING
3993 Saved Trap SequenceNumber to <SeqNoFileValue>. INFORMATION
3994 Failed to save Heartbeat interval to <HeartBeatFileValue>. WARNING
3995 Saved Heartbeat interval to <HeartBeatFileValue>. INFORMATION
3996 Removed MIB from memory. INFORMATION
3997 Database connection closed. INFORMATION
3998 SNMP socket cleanup done. INFORMATION
3999 SNMP Agent instance ended. INFORMATION
4001 Exiting Agent due to critical error. CRITICAL
4002 Exiting Application. Error : Unable to create the Pid file at specified path. WARNING
4003 Exiting Application. Error : Another instance of the application may be running. WARNING
4005 Querying database for alarms. INFORMATION
4010 SQL->select * from <alarmTableName>. INFORMATION
4016 Empty result set, have no Alarms to populate the alarmsTable. INFORMATION
4017 Processing result set and populating alarmsTable. INFORMATION
4020 Query failed. Error-><errorDetails>. MAJOR
4025 Populated alarmsTable in MIB with <rowsCount> rows. IMPORTANT
4505 Querying database for objects. INFORMATION
4510 SQL->select distinct <elementName> from <agentTable>. INFORMATION
4516 Empty result set, have no objects to populate the objectsTable. WARNING


4517 Processing result set and populating objectTable. INFORMATION


4520 Query failed. Error-> <errorDetails>. MAJOR
4525 Populated objectsTable in MIB with <rowCount> rows. INFORMATION
5000 Could not access MIB object TrapSequenceNumber. WARNING
5001 Could not read or create <SeqNoFileValue>. MAJOR
Could not read or create <HeartBeatFileValue>. MAJOR

Password Decryption Error : <errorMsg>. MAJOR

Errors found in INI file <argv[1]>. MAJOR

Failed to create log file. MAJOR

Failed to create PRID Monitor. MAJOR

Failed to authenticate user: <userName>. MAJOR

Failed to create connection to Optima database: <DBname>. MAJOR

Failed to create SNMP session. MAJOR

Failed to populate alarm table. MAJOR

Failed to populate object table. MAJOR

5005 Read Trap Sequence number from persistent file <SeqNoFileValue>. INFORMATION
5010 Could not access MIB object TrapSequenceNumber. WARNING
5015 Created <SeqNoFileValue> and set Trap Sequence value to 0. INFORMATION
5500 Could not access MIB object PollHeartBeat. WARNING
Querying database for traps. INFORMATION

5505 Overriding INI HeartbeatTrapInterval with value from persistent file <HeartBeatFileValue>. INFORMATION
5510 Could not access MIB object PollHeartBeat. WARNING
SQL->select * from <tableName> order by <alarmDate>. INFORMATION

5515 Created file <HeartBeatFileValue> and set PollHeartBeat to INI file value. DEBUG
5516 Empty result set, no traps to send. INFORMATION
5520 Query failed. Error-> <errorDetails>. MAJOR
5530 Sent alarm trap: Seq{<TrapSequenceNumber>} <ColoumName_ManagedObject> {<ManagedObjectValue>} <ColoumName_AlarmDate> {<AlarmDateValue>} to <TrapListenersAddress> / <TrapListenersPort>. DEBUG
5535 Failed to send alarm trap to <TrapListenersAddress> / <TrapListenerPort>. DEBUG
5540 Error executing SQL <sqlStatement> Error <errorDetails>. WARNING
5541 Inserted row into SNMP_UPDATE table. DEBUG
5550 Successfully updated database. INFORMATION
5555 Error occurred when updating database. WARNING
5560 Error occurred when updating database <errorDetails>. WARNING
6000 Sent coldstart trap to <TrapListenersAddress> / <TrapListenersPort>. INFORMATION
6005 Failed to send coldstart trap to <TrapListenersAddress> / <TrapListenersPort>. WARNING
6100 Sent Heartbeat trap Seq{ <TrapSequenceNumber> } to <TrapListenersAddress> / <TrapListenersPort>. INFORMATION


6105 Failed to send Heartbeat trap to <TrapListenersAddress> / <TrapListenersPort>. WARNING
6200 Sent EndOfResync trap Seq{ <TrapSequenceNumber> } to <TrapListenersAddress> / <TrapListenersPort>. INFORMATION
6205 Failed to send EndOfResync trap to <TrapListenersAddress> / <TrapListenersPort>. WARNING
7000 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: GET} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7031 DB Role Authentication Error : <errorDetails>. CRITICAL
7100 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: GETNEXT} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7200 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: RESPONSE} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7300 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: SET} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7400 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: V1TRAP} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7500 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: GETBULK} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7600 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: INFORM} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7700 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: TRAP} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7800 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: REPORT} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
7900 Request received. {FROM: <get_printable>} {ID: <get_request_id>} {PDU TYPE: UNKNOWN} {OID: <oidPrintable>} [{VALUE: <vbPrintable>}]. DEBUG
8011 Error Accessing PID Directory. CRITICAL
8101 Agent read resyncFlag as <ReysncValue>. INFORMATION
8102 Could not access MIB object resyncFlag. WARNING
8201 Agent setting resyncFlag to <value>. INFORMATION
8202 Could not access MIB object resyncFlag. WARNING
8500 pollHeartBeat interval period is over. DEBUG
8501 ResyncType=manager and resyncFlag=0 so sending heartbeat traps. DEBUG
8502 ResyncType=manager and resyncFlag=1 so NOT sending heartbeat traps. DEBUG
8503 ResyncType=agent so sending heartbeat traps. DEBUG
8504 Last pollHeartBeat check set to <value>. DEBUG
Last database polling check set to <value>. DEBUG

8600 Database polling interval period is over. DEBUG


8601 ResyncType=manager and resynFlag=0, so sending traps. DEBUG
ResyncType=manager and resynFlag=1, cannot send traps or update the alarmsTable or update the objectsTable until resyncFlag is set 0. DEBUG
8602 Successfully sent <numberOf> traps. DEBUG
8701 ResyncType=agent and resyncFlag=1, so sending traps. INFORMATION
8702 Successfully sent <numberOf> traps. DEBUG
8703 Database polling interval period is over. DEBUG


8704 ResyncType=agent, so sending traps. DEBUG


8705 Successfully sent <numberOf> traps. DEBUG
8706 Last database polling check set to <value>. DEBUG
9980 Ending the SNMP Agent instance - CTRL_C_EVENT. WARNING
9981 Ending the SNMP Agent instance - CTRL_BREAK_EVENT. WARNING
9982 Ending the SNMP Agent instance - CTRL_CLOSE_EVENT. WARNING
9983 Ending the SNMP Agent instance - CTRL_LOGOFF_EVENT. WARNING
9984 Ending the SNMP Agent instance - CTRL_SHUTDOWN_EVENT. WARNING
9985 Ending the SNMP Agent instance - UNKDOWN_EVENT. WARNING
9997 Illegal storage access - Ending Agent. INFORMATION
9998 Termination Request - Ending Agent. INFORMATION
9999 Ctrl-C - Ending Agent. INFORMATION


Example SNMP Agent Configuration (INI) File


[DIR]
TempDir=/OPTIMA_DIR/<application_name>/temp
LogDir=/OPTIMA_DIR/<application_name>/log
PidFilePath=/OPTIMA_DIR/<application_name>/prid
EXEName=<application_name>
PersistentPath=/OPTIMA_DIR/<application_name>

[MAIN]
LogGranularity=0
LogLevel=1
LogOptions=1
TrapsOnly=0
FileFolderLimit=0
InterfaceID=001
ProgramID=002
InstanceID=003
Verbose=1
TestConnectionDelay=30

[SNMP-AGENT]
Port=161
ResyncTable=AIRCOM.SNMP_ALARM_MIB_PERFORMANCE
ReadCommunity=public
WriteCommunity=public
DbPollInterval=1
HeartbeatTrapInterval=10
EnterpriseOid=23322
ExtEnterpriseOid=23322.3
AlarmTableView=AIRCOM.SNMP_ALARM_MIB_PERFORMANCE
ObjectTableView=AIRCOM.SNMP_OBJECTS_PERFORMANCE
TrapView=AIRCOM.SNMP_TRAP_MIB_PERFORMANCE
TrapsOnly=0
WaitForRequestTimeoutSeconds=1
ResyncType=manager
TrapGuardPeriod=0
SysName=SNMPAgent
SysLocation=London
SendEndOfResyncTrap=1

[DATABASE]
Database=OPTRAC_VM
UserName=optima_snmpagent_proc
Password=ENC(l|mlofhY)ENC

[TRAP-LISTENERS]
IPAddress=127.0.0.1
Port=162
Community=public

IPAddress1=127.0.0.1
Port1=163
Community1=public

IPAddress2=127.0.0.1
Port2=164
Community2=public

[HEARTBEAT_TRAP]
NotificationID=99999
PerceivedSeverity=4
ProbableCauseID=0

452
About the SNMP Agent

EventTypeID=2
AdditionalText=HEARTBEAT_TRAP

[END_OF_RESYNC]
NotificationID=99998
PerceivedSeverity=4
ProbableCauseID=0
EventTypeID=2
AdditionalText=END_OF_RESYNC

[OPTIMA-SEVERITY-MAPPING]
Intermediate=1
Warning=4
Minor=3
Clear=5
Major=2
Critical=1
Information_only=0
Admin_Clear=5

Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows
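
For example, on Windows the [DIR] section might look like this minimal sketch (the drive and folder layout are illustrative assumptions):

[DIR]
TempDir=D:\OPTIMA\opx_ALM_GEN_820\temp
LogDir=D:\OPTIMA\opx_ALM_GEN_820\log
PidFilePath=D:\OPTIMA\opx_ALM_GEN_820\prid
EXEName=opx_ALM_GEN_820
PersistentPath=D:\OPTIMA\opx_ALM_GEN_820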


About the SNMP MIB for Alarm Forwarding


The SNMP MIB for alarm forwarding contains two tables: alarmstable and objectstable.

This picture shows an example of the MIB tree structure:

Example MIB tree

The alarmstable contains the following objects:

OBJECT-TYPE    SYNTAX    DESCRIPTION    CUSTOM

notificationID    INTEGER (1..2147483647)    Alarm notification ID.    SYSTEM
perceivedSeverity    INTEGER {indeterminate(0), critical(1), major(2), minor(3), warning(4), cleared(5)}    Severity.    STANDARD
firstOccurance    DisplayString    Date and time of first occurrence: Format YYYYMMDD HH:MM:SS.    CUSTOM
eventTime    DisplayString    Date and time of alarm event: Format YYYYMMDD HH:MM:SS.    CUSTOM
managedObject    DisplayString    Element object ID.    REFERENCE
ideventType    INTEGER    Identifier of type of event.    STANDARD
idprobableCause    INTEGER    Identifier of probable cause.    STANDARD
specificProblem    DisplayString    Alarm description.    CUSTOM
proposedRepairAction    DisplayString    Not used.    CUSTOM
additionalText    DisplayString    Additional information - problem text.    CUSTOM
trendIndicator    DisplayString    Not currently used.    STANDARD

Notes:
• The standards for ideventType and idprobableCause can be found in the IANA-ITU-ALARM-TC-MIB
• The standards for perceivedSeverity and trendIndicator (not used) can be found in the ITU-ALARM-TC-MIB

The objectstable contains the following objects:

OBJECT-TYPE SYNTAX DESCRIPTION

objectsentry ObjectsEntry Definition of the alarm object table entry.

The MIB also contains an alarmtrap, which contains the following objects (which correlate to the
definitions in the table above):

trapSequenceNumber, notificationID, perceivedSeverity, firstOccurance, eventTime, objectId,
ideventType, idprobableCause, specificProblem, proposedRepairAction, additionalText,
trendIndicator


Example SNMP Agent Traps


This section describes some example SNMP Agent traps - the required configuration and a sample
of the results that it may return.

The following traps are described:


• coldStart
• Heartbeat
• Alarm
• End of Resync

Example ColdStart Trap


The coldStart trap is a standard trap, defined using the conventions described in 'RFC 1215 -
Convention for defining traps for use with the SNMP', available at
http://www.faqs.org/rfcs/rfc1215.html.

A coldStart trap signifies that the SNMP entity, supporting a notification originator application, is
reinitialising itself and that its configuration may have been altered.

Here is an example of the coldStart trap received by the trap receiver software:

Source: 127.0.0.1

Timestamp: 4348 hours 43 minutes 25 seconds

SNMP Version: 2

Trap OID: .iso.org.dod.internet.snmpV2.snmpModules.snmpMIB.snmpMIBObjects.snmpTraps.coldStart

Variable Bindings:

Name: .iso.org.dod.internet.mgmt.mib-2.system.sysUpTime.0

Value: [TimeTicks] 4348 hours 43 minutes 25 seconds (1565540573)

Name: snmpTrapOID

Value: [OID] coldStart


Example HeartBeat Trap


The HeartBeat trap is defined using the alarmTrap definition described in About the SNMP MIB for
Alarm Forwarding on page 454.

The values for the objects in the alarmTrap are read from the INI values in the
[HEARTBEAT_TRAP] section. For more information, see Configuring the SNMP Agent on page
437.

Here is an example of the HeartBeat trap received by the trap receiver software:

Source: 127.0.0.1

Timestamp: 1 minute 1 second

SNMP Version: 2

Trap OID: .iso.org.dod.internet.private.enterprises.aircom.traps.trapsPrefix.alarmTrap

Variable Bindings:

Name: .iso.org.dod.internet.mgmt.mib-2.system.sysUpTime.0

Value: [TimeTicks] 1 minute 1 second (6123)

Name: snmpTrapOID

Value: [OID] alarmTrap

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.trapSequenceNumber.0

Value: [Integer] 20126

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.notificationID

Value: [Integer] 99999

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.perceivedSeverity

Value: [Integer] warning (4)

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.firstOccurance

Value: [Null] null

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.eventTime

Value: [OctetString] 20101201 14:51:40

Name: .iso.org.dod.internet.private.enterprises.aircom.objects.objectsTable.objectsEntry.objectId

Value: [Null] null

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.ideventType

Value: [Integer] 2

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.idprobableCause

Value: [Integer] 0

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.specificProblem

Value: [Null] null

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.proposedRepairAction

Value: [Null] null

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.additionalText

Value: [OctetString] HEARTBEAT_TRAP

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.trendIndicator

Value: [Null] null


Example Alarm Trap


The Alarm trap is defined using the alarmTrap definition described in About the SNMP MIB for
Alarm Forwarding on page 454.

The values for the objects in the alarmTrap are read from the database view taken from the INI
parameter 'TrapView', defined in the [SNMP-AGENT]. For more information, see Configuring the
SNMP Agent on page 437.

Here is an example of the Alarm trap received by the trap receiver software:

Source: 127.0.0.1

Timestamp: 6 minutes 43 seconds

SNMP Version: 2

Trap OID: .iso.org.dod.internet.private.enterprises.aircom.traps.trapsPrefix.alarmTrap

Variable Bindings:

Name: .iso.org.dod.internet.mgmt.mib-2.system.sysUpTime.0

Value: [TimeTicks] 6 minutes 43 seconds (40310)

Name: snmpTrapOID

Value: [OID] alarmTrap

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.trapSequenceNumber.0

Value: [Integer] 20157

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.notificationID.7878

Value: [Integer] 7878

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.perceivedSeverity.7878

Value: [Integer] minor (3)

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.firstOccurance.7878

Value: [OctetString]

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.eventTime.7878

Value: [OctetString] 20101126 10:30:25

Name: .iso.org.dod.internet.private.enterprises.aircom.objects.objectsTable.objectsEntry.objectId.4.77.83.67.54

Value: [OctetString] MSC6

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.ideventType.7878

Value: [Integer] 2

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.idprobableCause.7878

Value: [Integer] 3

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.specificProblem.7878

Value: [OctetString] 255

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.proposedRepairAction.7878

Value: [OctetString]

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.additionalText.7878

Value: [OctetString] MINOR ALARM RAISED (ALARM DEF: 1203): IP MSC6

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.trendIndicator.7878

Value: [OctetString]


Example End of Resync Trap


The End of Resync trap is defined using the alarmTrap definition described in About the SNMP MIB
for Alarm Forwarding on page 454.

The values for the objects in the alarmTrap are read from the INI values in the [END_OF_RESYNC]
section. For more information, see Configuring the SNMP Agent on page 437.

Important: End of Resync traps are only sent if the 'ResyncType' parameter in the [SNMP-AGENT]
section of the INI file is set to 'AGENT'.

Here is an example of the End of Resync trap received by the trap receiver software:

Source: 127.0.0.1

Timestamp: 6 minutes 43 seconds

SNMP Version: 2

Trap OID: .iso.org.dod.internet.private.enterprises.aircom.traps.trapsPrefix.alarmTrap

Variable Bindings:

Name: .iso.org.dod.internet.mgmt.mib-2.system.sysUpTime.0

Value: [TimeTicks] 6 minutes 43 seconds (40341)

Name: snmpTrapOID

Value: [OID] alarmTrap

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.trapSequenceNumber.0

Value: [Integer] 20158

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.notificationID

Value: [Integer] 99998

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.perceivedSeverity

Value: [Integer] warning (4)

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.firstOccurance

Value: [Null] null

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.eventTime

Value: [OctetString] 20101201 15:04:36

Name: .iso.org.dod.internet.private.enterprises.aircom.objects.objectsTable.objectsEntry.objectId

Value: [Null] null

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.ideventType

Value: [Integer] 2

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.idprobableCause

Value: [Integer] 0

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.specificProblem

Value: [Null] null

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.proposedRepairAction

Value: [Null] null

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.additionalText

Value: [OctetString] END_OF_RESYNC

Name: .iso.org.dod.internet.private.enterprises.aircom.alarms.alarmsTable.alarmsEntry.trendIndicator

Value: [Null] null


17 About the File Splitter

The File Splitter splits a single file that contains a variety of different objects into a number of files
containing similar objects. This is done prior to parsing, and makes parsing the data easier -
rather than trying to parse a variety of objects in a single file, the Parser can more effectively parse
similar objects in separate files, one file at a time.

In this way, the File Splitter splits data 'horizontally' (that is row/object-by-row/object), compared to
the Data Validation application, which splits data 'vertically' (that is column-by-column).

Important: The File Splitter is generally only used for Siemens files.

Example of Using the File Splitter


This topic describes an example of using the File Splitter.

In this scenario, you have a file, MF_USMM_CY4_MSC1_FR00000.spf:

01929G053 153ZAGA
MF.USMM.CY4

07-09-2801:45 ****2432 0 0 0 0 0 0
0 0 0 11144…

07-09-2801:45 ****3088 0 0 0 0 0 0
0 0 0 0 0 0 547 698 0 253
534…

You can split this into two files, 2432_MF_USMM_CY4_MSC1_FR00000.spf:

01929G053 153ZAGA
MF.USMM.CY4

07-09-2801:45 ****2432 0 0 0 0 0 0
0 0 0 11144…

And 3088_MF_USMM_CY4_MSC1_FR00000.spf:

153ZAGA MF.USMM.CY4

07-09-2801:45 ****3088 0 0 0 0 0 0 0 0 0 0
0 0 547 698 0 253 534…

Installing the File Splitter


Before you can use the File Splitter, install the following file in the backend binary directory:
• opx_SPL_GEN_414.exe (Windows)
• opx_SPL_GEN_414 (UNIX)

Starting the File Splitter


To start the File Splitter application:

Type in the executable file name and the configuration (INI) file name into the command prompt:


In Windows:

opx_SPL_GEN_414.exe opx_SPL_GEN_414.ini

In Unix:

opx_SPL_GEN_414 opx_SPL_GEN_414.ini

Configuring the File Splitter


The File Splitter application is configured using a configuration (INI) file. Configuration changes are
made by editing the parameters in the configuration (INI) file with a suitable text editor. The File
Splitter configuration (INI) file is divided into different sections.

The following table describes the relevant parameters in the [DIR] section:

Parameter Description

DirFrom The location of the input files.


DirTo Where the files are output after splitting.
DirBackup The location of the backup files.
ErrorDir Where files with errors are sent.
LogDir The location of the log files.
TempDir The location of temporary files created during the splitting process.
PidFilePath The location of the monitor files.

The following table describes the parameters in the [MAIN] section:

Parameter Description

LogGranularity Defines the frequency of logging, the options are:


0 - Continuous
1 - Monthly
2 - Weekly
3 - Daily
LogLevel Sets the level of information required in the log file. The available options
are:
1 - Debug
2 - Information
3 - Warning
4 - Minor
5 - Major
6 - Critical
RefreshTime The pause (in seconds) between execution of the main loop when running
continuously.
Iterations This parameter is used when the application does not run in continuous
mode so that it will be able to check for input files in the input folder for the
number of required iterations before an exit. Integer values are allowed,
like 1,2,3,4 and so on.
RunContinuous 0 - Have the Splitter run once.
1 - Have the Splitter continuously monitor for input files.

464
About the File Splitter

Parameter Description

StandAlone 0 – Run the application without a monitor file. Do not select this option if
the application is scheduled or the OPTIMA Process Monitor is used.
1 – Run the application with a monitor file.
InterfaceID The three-digit interface identifier (mandatory).
ProgramID The three-character program identifier (mandatory).
InstanceID The three-character program instance identifier (mandatory).
EnableBackup 1 – Enable to backup original input raw file(s) after successfully being
processed.
0 – Do not enable backup therefore delete the input raw file(s) after
successfully being processed.
InputFileMask Filter for input file to process, for example, *C*.*
UseFolderFileLimit Indicates whether the folder file limit should be used (1) or not (0).
FolderFileLimit The maximum number of output files that can be created in each output
(sub) folder.
This must be in the range of 100-100,000 for Windows, or 100-500,000 on
Sun/UNIX, otherwise the application will not run.
Warning: Depending on the number of files that you are processing, the
lower the file limit, the more output sub-folders that will be created. This
can have a significant impact on performance, so you should ensure that
if you do need to change the default, you do not set the number too low.
The default is 10,000.

The following table describes the parameters in the [REPORTS] section:

Parameter Description

Number The number of separate reports you want to create for the file that you are splitting. Each report section defines the criteria that will be used to split the file (for example, the search strings and/or termination strings).
REPORT1, REPORT2 and so on The name of the first report, second report and so on.

Each report that you define will also have a separate section in the INI file. The following table
describes the parameters for each of these sections:

Parameter Description

TerminationStr Indicates whether you want to split the data into files based on a start string and termination string (1) or not (0).
  Important: If you set this to 1, you cannot also set the SearchStr parameter to 1 and use that method as well.
StartString, TerminationString Together, the StartString and TerminationString define the range of data that will be extracted into each split file for each object.
  For example, if the StartString is OBJTYPE and the TerminationString is END, then the data between each instance of these two strings will be extracted into a separate file.
  If only TerminationString is defined, then the range of data between each TerminationString is extracted.
SearchStr Indicates whether you want to split the data into files based on one or more search strings (1) or not (0).
  Important: If you set this to 1, you cannot also set the TerminationStr parameter to 1 and use that method as well.
AddHeader Indicates whether the header will be added to each split file (1) or not (0) when SearchStr is used.
SearchStringNumber Stores the number of search strings that you want to use. There should be a corresponding number of search strings defined - for example, if SearchStringNumber = 2, then there should be 2 search strings (SearchString1 and SearchString2) defined.
SearchString1, SearchString2 and so on The string on which you want to search, followed by a pipe (|) and the output directory for matching rows (see the example configuration file at the end of this chapter).
  For example, if the SearchString value is *2368, then each row containing the value 2368 will be collected into a single file and stored in the C:\Temp\414\out\2368 sub-directory.
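For example, based on the parameter descriptions above, a report section that splits using the start/termination string method (rather than search strings) might look like this - the section name is illustrative only:

[split_objtype]

TerminationStr=1

StartString=OBJTYPE

TerminationString=END

SearchStr=0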

Maintenance
In normal operation, the File Splitter application should not need any special maintenance. During installation, the File Splitter application is configured to maintain the backup and log directories automatically.

However, TEOCO recommends that the following basic maintenance checks are carried out for the File Splitter application:

Check When Why

Input directory for a backlog of files meeting the maintenance criteria. Weekly Files meeting the maintenance criteria should not be in the input directory. A backlog indicates a problem with the program.
Log messages for error messages. Weekly In particular, any Warning, Minor, Major and Critical messages should be investigated.

Checking for Error Files


Files categorised as error files by the File Splitter are stored in the directory defined in the configuration (INI) file.

The log file contains information about any error files found in this directory. For more information about the log file, see Checking a Log File Message on page 172.

Stopping the File Splitter


If the File Splitter is scheduled, it terminates automatically once all files in the input directory have been processed.

However, if the File Splitter is run continuously, the input directory is monitored indefinitely, and in this case the application must be stopped explicitly.


Checking the Version of the File Splitter


You can either obtain the version details from the log file or you can print the information by typing
the following command at the command prompt:

In Windows:

opx_SPL_GEN_414.exe -v

In Unix:

opx_SPL_GEN_414 -v

For more information about obtaining version details, see About Versioning on page 33.

Checking the Application is Running


To check that the application is running, check that there is a PRID file in the application's PRID
folder. For more information about PRIDs, see About PRIDs on page 29.
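For example, on UNIX you could list the PRID folder defined by PidFilePath in the [DIR] section - the path below matches the example configuration later in this chapter:

ls /OPTIMA_DIR/<application_name>/prid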

Troubleshooting
The following table shows troubleshooting tips for the File Splitter:

Symptom Possible Cause Solution

Application not processing input files. Application has not been scheduled. Use Process Monitor to check the last run status.
  Crontab entry removed. Check crontab settings.
  Application has crashed and Process Monitor is not configured. Check the process list and monitor file. If there is a monitor file and no corresponding process with that PID, then remove the monitor file. Note: The Process Monitor will do this automatically.
  Incorrect configuration settings. Check configuration settings.
  Files do not match the input mask(s). Change the input masks.
Application exits immediately. Another instance is running. Use Process Monitor to check the instances running.
  Invalid or corrupt (INI) file.
Files in error directory. Incorrect configuration settings. Check the log file for more information on the problems.
  Invalid input files. Check the error file format.
Files are not being split. The search/termination strings are not found. Check that the strings are defined correctly.
  The output mask is incorrect. Change the output masks.

Example File Splitter Configuration (INI) File


[DIR]

DirFrom=/OPTIMA_DIR/<application_name>/in

DirTo=/OPTIMA_DIR/<application_name>/out

DirBackup=/OPTIMA_DIR/<application_name>/backup

ErrorDir=/OPTIMA_DIR/<application_name>/error

LogDir=/OPTIMA_DIR/<application_name>/log

TempDir=/OPTIMA_DIR/<application_name>/temp

PidFilePath=/OPTIMA_DIR/<application_name>/prid

CombinerDir=/OPTIMA_DIR/<application_name>/combiner

EXEName=<application_name>

EnableBackup=1

EnableCombiner=1

[MAIN]

InputFileNameAsColumn=0

LogGranularity=3

LogLevel=1

RefreshTime=1

TruncateHeader=0

RunContinuous=0

StandAlone=0

UseFolderFileLimit=0

FolderFileLimit=10000

InterfaceID=001

ProgramID=414

InstanceID=001

[REPORTS]

Number=1

REPORT1=split_2352_2496

;REPORT2=split_2368_2512


[split_2352_2496]

OutputDirectory=/OPTIMA_DIR/<application_name>/out

UseInputFileMask=1

InputFileMask=*.spf

UseOutputFileMask=0

OutputFileMask=.csv

AddHeader=1

TerminationStr=0

StartString=OBJTYPE

TerminationString=END

SearchStr=1

SearchStringNumber=4

SearchString1=*2456|/OPTIMA_DIR/<application_name>/out/2456

SearchString2=*3184|/OPTIMA_DIR/<application_name>/out/3184

SearchString3=* 128|/OPTIMA_DIR/<application_name>/out/128

SearchString4=*2512|/OPTIMA_DIR/<application_name>/out/2512

[split_2368_2512]

OutputDirectory=/OPTIMA_DIR/<application_name>/out

UseInputFileMask=1

InputFileMask=*.spf

UseOutputFileMask=0

OutputFileMask=.csv

AddHeader=1

TerminationStr=0

StartString=OBJTYPE

TerminationString=END

SearchStr=1

SearchStringNumber=2

SearchString1=*2368|/OPTIMA_DIR/<application_name>/out/2368

SearchString2=*2512|/OPTIMA_DIR/<application_name>/out/2512


Important: In the [DIR] section (also called [Directory Parameters] in some INI files), you should
replace:
• OPTIMA_DIR with the OPTIMA home directory, which is set as an environment variable,
and will be different depending on whether you are using Windows or UNIX
• <application_name> with the name of the application (minus any file extension)
• Forward slashes with back slashes, if you are using Windows


18 Functions, Procedures and Packages

This appendix lists the functions, procedures and packages associated with the various OPTIMA schemas. It also describes the job schedulers and the Frontend and Backend jobs and schedules.

AIRCOM Schema Functions


This table describes the functions stored in the AIRCOM Schema:

Function Description

AC_SPLIT Returns a single counter value from an array counter.
AC_SUM Aggregates array counter values and returns an array type value. This function supersedes AC_SPLIT.
CHANGE_DATE_BY_PERIOD Returns a date +/- a set period.
CHECK_DUPLICATE_EH_NAME Checks for existing element hierarchies.
CHECK_DUPLICATE_FILTERNAME Checks for existing filters.
CONVERT_DATE_TO_TZTIME Returns a date converted to a different time zone.
CREATE_VIEWS Creates KPI views.
DELTA Returns a static counter value for a delta counter.
DIV Performs a division of two values, handling 0s.
EH_ACCESS_RIGHTS Checks access for element hierarchies.
ETL_ALERTS_ADD_PROBLEM_TEXT Generates the problem text for a Threshold Crossing Alert (TCA).
FILTERACCESS_RIGHTS Checks access for filters.
GET_ALARMS_VIEW_ALL_SCMODE
GET_TABLE_OPERATIONS
GETCCROWS Returns the rows from custom columns for each vendor.
GETFOLDERPATH Returns the folder path for reporting objects.
GETTABLETYPE Returns the OPTIMA table type, dependent on the name of the table.
HAS_RIGHTS Checks access for a folder's parents; use PROCESSRIGHTS.
KPI_ACCESS_RIGHTS Checks access for Key Performance Indicators (KPIs).
MAX_DATE Returns the maximum date value for a table.
NUM_ELEMENTS Returns the number of elements for the maximum date for a table.
PARSE_SQL Parses an SQL string.
PARSE_SQL_FOR_EXEC Parses a passed string for SQL execution.
PARSE_STRING Parses a passed string for SQL execution.
PERCENT Performs a percentage calculation of two values, handling 0s.
PROCESSRIGHTS Checks access for a folder.
SEVNUM_TO_SEVTEXT Returns the severity level, from the number used for logging.
USER_TABLE_ACCESS_RIGHTS Checks access to data tables.
VALIDATE_ALIAS Checks if a KPI alias is valid.
VALIDATE_EQUATION Checks if a KPI equation is valid.
WEEK_BEGINING Returns the date of the beginning of the week.
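For example, assuming that DIV and PERCENT each take a numerator and a denominator (the table and column names below are hypothetical), they could be used in a query such as:

SELECT DIV(dropped_calls, call_attempts) AS drop_ratio,
       PERCENT(dropped_calls, call_attempts) AS drop_rate
FROM   my_stats_table;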

AIRCOM Schema Procedures


This table describes the procedures stored in the AIRCOM Schema:

Procedure Description

ASSIGN_USER_TO_CONSUMERGROUP Assigns a user to a consumer group, dependent on their user role.
DROP_EXPIRED_MAT_VIEWS Handles expired sandbox materialized views.
DROP_VIEW Deletes a view.
EXECUTE_SQL Executes an SQL string.
POPULATE_REPORT_TABLES Regenerates or updates the tables in the data dictionary.
SET_NLS_TERRITORY Alters the National Language Support (NLS) settings for the current session.
UPDATEELEMENTANDDATECOL Regenerates or updates the element and date columns in the data dictionary.

AIRCOM Schema Packages


This table describes the packages stored in the AIRCOM Schema:

Package Description

CELL_PROFILER Used for capacity planning.
DIFFERENCE_ENGINE Calculates the difference between a source and destination table.
ERLANGB_PACKAGE Calculates the Grade Of Service (GOS), Traffic or Capacity based on the other two parameters.
OPTIMA_ADMIN Administers user access to the OPTIMA application.
OPTIMA_ALARMS Processes and raises alarms.
OPTIMA_KPI Used in the creation of KPIs.
OPTIMA_LOADER Called by the loader process; loads data.
OPTIMA_MAINTENANCE Resets all OPTIMA sequences to reset the system.
OPTIMA_PACKAGE Administers user access levels within the OPTIMA application.
OPTIMA_SUMMARY Performs aggregation (element, time and busy hour).
OPTIMA_USER_VIEWS Creates Sandbox materialized views.
OPTIMA_USER_VIEWS_TEST A copy of OPTIMA_USER_VIEWS.
OSS_LOGGING Logs records into the OPTIMA log table; called by all processes.
OSS_MAINTENANCE Maintains the OPTIMA database: collects stats, maintains partitions and tablespaces, and rebuilds indexes.
POPULATE_REPORTS Regenerates or updates the data dictionary.
SNMP_PKG Maintains alarms that have been forwarded by Simple Network Management Protocol (SNMP).

OSSBACKEND Schema Packages


This table describes the packages stored in the OSSBACKEND Schema:

Package Description

DQ_MAIN Calculates data quality (Completeness, Availability, Nulls and Last Load).
DQ_PERIOD_PROCESSING Performs sub-daily data quality processing.
DQ_PROCESSING Performs data quality processing, calling DQ_MAIN.

WEBWIZARD Schema Functions


This table describes the functions stored in the WEBWIZARD Schema:

Function Description

GETLAYERKEY Returns the primary key used for a layer, given a layer name.
GETUSERKEY Returns the primary key used for a user, given a user name.
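For example, assuming each function takes the relevant name as its single argument (the user name below is hypothetical), GETUSERKEY could be called as:

SELECT GETUSERKEY('jsmith') FROM DUAL;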

WEBWIZARD Schema Packages


This table describes the packages stored in the WEBWIZARD Schema:

Package Description

FILES_PKG Procedures for WEBWIZARD Explorer file access and favorites.
FILTER_PKG Procedures for saving and removing GIS filters.
FOLDER_PKG Procedures for WEBWIZARD Explorer folder access.
GRP_PKG Procedures for adding, editing and removing groups.
LAYER_PKG GIS layer storage procedures.
REGION_PKG GIS region storage procedures.
USR_PKG Procedures for adding, editing and removing users.


VENDOR Schema Procedures


This table describes the procedures stored in each Vendor Schema:

Procedure Description

CUSTOM_GRANT Called by the application to grant access to an object in its schema.
REVOKE_GRANT Called by the application to revoke access to an object in its schema.

Scheduling Jobs
There are a number of different ways to schedule jobs used in OPTIMA:
• Cron (used to schedule UNIX backend programs)
• SCHEDTASK (used to schedule the Windows backend programs)
• The Oracle DBMS Scheduler (DBMS_SCHEDULER) (Recommended for scheduling
Oracle jobs)

- or -
• The Oracle Job Scheduler (DBMS_JOBS) (can also be used for scheduling Oracle jobs)

UNIX

The Unix Scheduler is known as cron. The key features are:


• * * * * * runme.sh # runs a script (the five leading fields set the minute, hour, day of month, month and day of week on which it runs)
• crontab -l # lists the crontab entries
• crontab <cronfile> # loads cronfile into the crontab

UNIX backend programs are scheduled using this functionality. This is an example of a crontab
configuration:

0,15,30,45 * * * * /opt/AIoptima/run/RunProcessMonitor_205.sh
0 1 * * * /opt/AIoptima/run/RunDirectoryMaintenance.sh
0 5 * * * /opt/AIoptima/run/RunLoaders_DirMaintenance.sh
0 * * * * /opt/AIoptima/run/RunOpxLog.sh
20 * * * * /opt/AIoptima/run/RunLoaders_LOGS.sh

WIN OS

The Windows Job Scheduler has the following key features:


• Can be configured from the command line (see the example below) or from the GUI
• Windows backend programs are scheduled using this functionality
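For example, a backend program could be scheduled from the command line with the built-in schtasks utility - the task name, script path and 15-minute interval below are illustrative only:

schtasks /Create /TN "OPTIMA File Splitter" /SC MINUTE /MO 15 /TR "D:\OPTIMA\run\RunFileSplitter.bat"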


DBMS_SCHEDULER

DBMS_SCHEDULER enables users to perform resource plan management, which allows them to control:
• The number of concurrent jobs for a particular job_class
• The order of execution of a job or groups of jobs
• Switching jobs from one resource plan to another during the day
• And much more

With DBMS Scheduler:


• A task can be scheduled to run at a particular date and time
• A task can be scheduled to run only once, or multiple times
• A task can be turned off temporarily or removed completely from the schedule

The DBMS Scheduler is configurable from sqlplus or TOAD.

Important: The Oracle DBMS_SCHEDULER package is the recommended choice over DBMS_JOBS, because it offers significant improvements over DBMS_JOB for scheduling jobs and tasks.

Scheduler Components

The Scheduler uses three basic components to handle the execution of scheduled tasks. An
instance of each component is stored as a separate object in the database when it is created:

Component Description

Programs A program defines what the Scheduler will execute.
Schedules A schedule defines when and at what frequency the Scheduler will execute a particular set of tasks.
Jobs A job assigns a specific task to a specific schedule. A job therefore tells the schedule which tasks - either one-time tasks created "on the fly", or predefined programs - are to be run. A specific program can be assigned to one, multiple, or no schedule(s); likewise, a schedule may be connected to one, multiple, or no program(s).
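As an illustration only - the job name is a placeholder, and the fully-qualified procedure name is an assumption based on the OSS Maintenance schedule listed later in this appendix - a daily DBMS_SCHEDULER job could be created from sqlplus like this:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB (
    job_name        => 'MAINTAIN_TABLE_PARTITIONS_JOB',              -- placeholder job name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'OSS_MAINTENANCE.MAINTAINTABLEPARTITIONS',    -- assumed procedure name
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=21',                      -- once daily, at 2100
    enabled         => TRUE);
END;
/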

DBMS_JOBS

DBMS_JOBS has the following key features:


• Can be used to schedule Oracle procedures such as the OPTIMA Summary and Data Quality processes.
• An OPTIMA Oracle Job Scheduler GUI application is available to configure the Oracle jobs.
• Also configurable from sqlplus or TOAD.
• Use Oracle jobs to schedule anything running in the database. Make sure that Oracle jobs are enabled by setting the JOB_QUEUE_PROCESSES Oracle parameter to a value greater than 0 (see the example below).
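For example - a sketch only, with an assumed fully-qualified name for the Summary Do_Work procedure described in the next section - a job running every minute could be submitted from sqlplus like this:

VARIABLE jobno NUMBER;
BEGIN
  DBMS_JOB.SUBMIT (
    job       => :jobno,
    what      => 'OPTIMA_SUMMARY.DO_WORK;',   -- assumed procedure name
    next_date => SYSDATE,
    interval  => 'SYSDATE + 1/1440');         -- re-run every minute
  COMMIT;
END;
/
PRINT jobno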


OPTIMA Backend Jobs and Schedules


This table describes OPTIMA Backend Jobs and Schedules:

No Module/Area Schedule Recommended Frequency Comments Scheduler Type

1 OSS Maintenance Maintaintablepartitions Once daily, at 2100. Maintains partitions. Oracle job scheduler
2 OSS Maintenance Maintain_Tablespaces Once daily, at 2330. Maintains tablespaces. Not required for ASM. Oracle job scheduler
3 OSS Maintenance DROP_EXPIRED_MAT_VIEWS Once daily, at 0500. Sandbox views maintenance. Oracle job scheduler
4 OSS Maintenance TRUNC_ALARM_ERRORS_TABLE Once daily, at 0200. Truncates error tables. Oracle job scheduler
5 Summary Do_Work Every minute; however, the next schedule runs only after the previous one finishes. Only one concurrent summary, and a maximum of 2 Do_Work jobs per CPU. The procedure runs a query on SUMMARY_SCHEDULES to find the most urgent schedule to process. Oracle job scheduler
6 Data Quality DQ_Period_Processing Configured at installation. Procedure for processing data quality reports of any period length. Oracle job scheduler
7 Data Quality Process_Avail Every hour. Set to sysdate + 15 min to allow data to have been loaded for the past hour. For sub-daily availability. Procedure to process data quality period processing for availability. Oracle job scheduler
8 Data Quality Run_process_group Once daily, at 0100. Procedure to process all DQ reports in the group number passed. Oracle job scheduler
9 Process Monitor - 0, 15, 30, 45 mins past the hour. The Process Monitor continuously checks the OPTIMA backend applications running on a particular machine, to ensure that they have not crashed, run away or hung. Cron
10 Directory Maintenance - - The Directory Maintenance application reports on and maintains user-specified directories, based on user-defined maintenance parameters. Cron
11 OpxLog - 5 mins past the hour. The Opxlog utility searches for and filters the log messages in the backend application log files. It is used to produce concatenated, filtered CSV files of log messages for loading into the OPTIMA database. Administration reports are then written to report on the loaded log messages, and to alert the OPTIMA Administrator to any problems with the backend applications. Cron
14 Loader_Logs - 15 mins past the hour. This loads the output of the Opxlog utility. Cron

Glossary of Terms

A
Agent

In the context of SNMP, this is a software module that performs the network management functions
requested by the network management stations.

An agent module may be implemented in any network element that is to be managed, such as host,
bridge or router.

Agents and network management stations communicate by means of SNMP.

B
BSC

Base Station Controller. A piece of equipment that controls one or more BTSs.

BTS

Base Transceiver Station.

C
Columnar Object

An object that is part of an SMP table. There is one instance of the columnar object for each row in the table.

CSV

Comma-Separated Values. A type of data format in which each piece of data is separated by a
comma.

F
FTP

File Transfer Protocol. The standard protocol for exchanging files across the Internet.

I
INI

Initialization file. INI files are used to initialize, or set parameters for, the operating system and
certain programs.

IP

Internet Protocol. This defines the format for all data travelling through a TCP/IP network, performs
the routing functions and provides a mechanism for processing unreliable data.


K
KPI

Key Performance Indicator. A quantifiable measurement, agreed beforehand, representing a critical success factor of an organization.

M
MAC

Message Authentication Code.

MIB

Management Information Base. A type of database used to manage the devices in a network. MIBs
are especially used with SNMP.

MSC

Mobile Switching Centre. In a cellular network, this is a switch or exchange that interworks with
location databases.

P
PDU

Protocol Data Unit. The PDU format is used to send and receive SMS messages.

R
RFC

Request For Comment.

RMON

Remote Network Monitoring.

S
SDCCH

Stand-alone Dedicated Control Channel. This is a channel used in GSM to provide a reliable
connection for signalling and SMS messages.

SMI

Structure of Management Information.

SMP

Simple Management Protocol.


SMPP

Short Message Peer-to-peer Protocol. The protocol used for exchanging SMS messages between
SMS peer entities such as SMSCs.

SMS

Short Message Service. The text messaging system, enabling messages to be sent to/from GSM
phones and to external systems (for example, email or voicemail). Messages that cannot be
delivered straight away (due to the receiver's mobile being switched off or out of range) are stored,
and delivered as soon as possible.

SMSC

Short Message Service Center. A network element in the mobile telephone network which delivers
SMS messages.

SMTP

Simple Mail Transfer Protocol. A protocol used to send and receive email messages.

SNMP

Simple Network Management Protocol. SNMP is the protocol used for network management and
the monitoring of network devices and their functions.

SQL

Structured Query Language. SQL is an ANSI and ISO standard computer language for getting
information from and updating a database.

T
TCH

Traffic channel. This is a logical channel used to transport data.

TCP

Transmission Control Protocol. The protocol used (along with the IP) to ensure reliable and in-order
delivery of data across the Internet.

Index

A

Alarm Notifier
    about • 414
    configuring • 417, 419, 421, 422
    executing • 417
    installing • 414
Alarms
    about • 409

C

Checking
    an application is running • 83, 150, 177, 188, 208, 236, 263, 275, 404
    error files • 172, 187, 206
    log files • 41, 82, 172, 187, 207, 235, 263, 275, 403
    version details • 82, 149, 177, 188, 207, 236, 263, 275
Combining Process, about • 195
Configuration (INI) file, example • 178, 190, 198, 245, 265, 277, 407, 452
Configuring
    Alarm Notifier • 417, 419, 421, 422
    Data Quality Package • 332
    Data Validation Application • 183
    Directory Maintenance Application • 269
    external programs • 32
    FTP Application • 66
    Loader file mappings • 227
    Loader table mappings • 228
    OPTIMA Parser • 167
    partition maintenance • 381
    Report Scheduler • 396
    reports • 219
    SNMP Agent • 437
    statistics gathering • 386
    tablespace maintenance • 384
Consumer groups, in OPTIMA • 51

D

Data Loading Process
    starting • 40
    stopping • 40
Data Quality Package
    configuring • 332
    configuring period processing • 348
    installing • 331
Data Validation Application
    checking for error files • 187
    checking the application is running • 188
    checking the log file • 187
    checking the version • 188
    configuration (INI) file • 190
    maintaining • 187
    starting • 185
    stopping • 188
    troubleshooting • 188
Data Validation, about • 181
Database Server
    rebooting • 48
    starting • 48
Defining
    monitoring settings • 261
    reports • 186, 197
Direct database loading, using the summary for • 329
Directory Maintenance Application
    about • 267
    checking the application is running • 275
    checking the log file • 275
    checking the version • 275
    configuration (INI) file • 277
    configuring • 269
    installing • 268
    maintaining • 274
    starting • 268
    stopping • 275
    troubleshooting • 276
Directory Maintenance Process, about • 268

E

Error Files, checking • 172, 187, 206
Error tables, loader • 222, 235
Examples
    Data Validation configuration (INI) file • 190
    Parser configuration (INI) file • 178
Executing, Alarm Notifier • 417
External Programs
    about • 41
    configuring • 32
    scheduling • 31

F

File Combiner Application
    about • 193
    checking for error files • 206
    checking the application is running • 208
    checking the log file • 207
    checking the version • 207
    configuration (INI) file • 198
    defining reports • 197
    maintaining • 206
    starting • 196
    stopping • 207
File Combiner Configuration Utility, troubleshooting • 208
File locations and naming, about • 30
FTP Application
    checking the application is running • 82
    checking the log file • 82
    checking the version • 82
    configuration (INI) file parameters • 68
    configuring • 66
    installing • 60
    prerequisites • 63
    stopping • 82

I

Installing
    Alarm Notifier • 414
    Data Quality Package • 331
    Data Validation Application • 182
    Directory Maintenance Application • 268
    File Combiner Application • 195
    FTP Application • 60
    OPTIMA • 20
    Parser • 166
    Parser Configuration Utility • 166
    Process Monitor • 259
    Report Scheduler • 395
    SNMP Agent • 436

L

Loader
    about • 213
    checking the Loader is running • 236
    checking the log file • 235
    checking the version • 236
    configuration (INI) file • 245
    Configuration Window • 218
    maintaining • 234
    selecting • 217
    starting • 216
    table settings • 225
    troubleshooting • 241
    validator options • 232
Loader error tables
    checking • 235
    configuring • 222
Loading
    direct database • 329
Log Files
    about • 33
    checking • 41, 82, 172, 187, 207, 235, 263, 275

M

Maintenance
    Data Validation Application • 187
    Directory Maintenance Application • 274
    File Combiner Application • 206
    Loader • 234
    OPTIMA • 42
    OSS Maintenance Package • 391
    Parser • 172
    Process Monitor • 262
    Report Scheduler • 403
    SNMP Agent • 445
    SNMP Poller • 149
Mappings
    loader file • 227
    loader table • 228
MIBs
    converting to CSV files • 111
    loading in the SNMP Poller GUI • 107
Monitoring Process, about • 32, 258
Monitoring Settings, defining • 261

O

OPTICRYPT, using • 38, 39
OPTIMA
    installing • 20
    maintaining • 42
    troubleshooting • 43
    upgrading • 20
Opxlog Utility
    about • 44
    command options • 45
    configuration (INI) file • 47
Oracle consumer groups, using in OPTIMA • 51
Oracle roles, in OPTIMA • 50
OSS Maintenance Package
    about • 377
    gathering statistics • 386
    maintaining • 391
    maintaining tablespaces • 383
    troubleshooting • 391

P

Parser
    checking the log file • 172
    checking the Parser is running • 177
    checking the version • 177
    configuration (INI) file • 178
    configuring • 167
    installing • 166
    maintaining • 172
    starting • 166
    stopping • 176
    troubleshooting • 177
Parsing Process, about • 166
Partition maintenance
    configuring • 381
    scheduling • 383
Passwords
    backend applications affected by security • 39
    security • 38
Period processing, configuring for Data Quality • 348
Process Monitor
    about • 257
    checking for error files • 172
    checking the application is running • 263
    checking the log file • 263
    checking the version • 263
    configuration (INI) file • 265
    configuring • 260
    installing • 259
    maintaining • 262
    starting • 259
    stopping • 263
    troubleshooting • 265
Program IDs, about • 29

R

Rebooting, database server • 48
Report Scheduler
    checking the log file • 403
    checking the Report Scheduler is running • 404
    configuration (INI) file • 407
    configuring • 396
    installing • 395
    maintaining • 403
    starting • 402
    stopping • 404
    troubleshooting • 404
Reports
    configuring • 219
    creating in the SNMP Poller GUI • 107
Roles, OPTIMA • 50

S

Scheduling
    external programs • 31
    partition maintenance • 383
    statistics gathering • 387
    tablespace maintenance • 385
SNMP Agent
    about • 433
    configuration (INI) file • 452
    configuring • 437, 444
    installing • 436
    maintaining • 445
    troubleshooting • 446
SNMP Poller
    checking the application is running • 150
    checking the version • 149
    maintaining • 149
    stopping • 149
    troubleshooting • 151
Starting
    Data Loading Process • 40
    database server • 40
    Directory Maintenance Application • 268
    Loader • 216
    Loader GUI • 216
    Parser • 166
    Process Monitor • 259
    Report Scheduler • 402
Statistics gathering
    configuring • 386
    scheduling • 387
Stopping
    Data Loading Process • 40
    Data Validation Application • 188
    Directory Maintenance Application • 275
    File Combiner Application • 207
    FTP Application • 82
    Parser • 176
    Process Monitor • 263
    Report Scheduler • 404
    SNMP Agent • 445
    SNMP Poller • 149
Summary
    configuring • 286
    connecting to the database • 285
    using for direct database loading • 329
    viewing log messages • 325
    viewing oracle jobs • 327
Summary reports
    adding • 291
    defining time zones in • 302
    deleting • 320
    editing • 320

T

Table settings, for the Loader • 225
Tablespace maintenance
    configuring • 384
    scheduling • 385
TCAs
    defining • 225, 228
Time zones
    defining in summary reports • 302, 304
    using in report schedules • 396
Troubleshooting
    Data Validation Application • 188
    Directory Maintenance Application • 276
    File Combiner Application • 208
    Loader • 241
    OPTIMA • 43
    OSS Maintenance Package • 391
    Parser • 177
    Parser Configuration Utility • 177
    Process Monitor • 265
    Report Scheduler • 404
    SNMP Agent • 446
    SNMP Poller • 151

U

Upgrading, OPTIMA • 20

V

Validation Process, about • 182
Version details, checking • 82, 177, 188, 207, 236, 263, 275
Versioning
    about • 33
    checking the version • 177
