
Netcool/Impact

User Interface Guide

IBM
Note
Before using this information and the product it supports, read the information in "Notices".

Edition notice
This edition applies to version 7.1.0.29 of IBM Tivoli Netcool®/Impact and to all subsequent releases and modifications
until otherwise indicated in new editions.
References in content to IBM products, software, programs, services or associated technologies do not imply that they
will be available in all countries in which IBM operates. Content, including any plans contained in content, may change
at any time at IBM's sole discretion, based on market opportunities or other factors, and is not intended to be a
commitment to future content, including product or feature availability, in any way. Statements regarding IBM's future
direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. Please
refer to the IBM Community terms of use for more information.
© Copyright International Business Machines Corporation 2006, 2023.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents

About this publication...........................................................................................ix


Intended audience...................................................................................................................................... ix
Publications................................................................................................................................................. ix
Netcool/Impact library.......................................................................................................................... ix
Accessing terminology online................................................................................................................ix
Accessing publications online............................................................................................................... ix
Ordering publications.............................................................................................................x
Accessibility..................................................................................................................................................x
Tivoli technical training................................................................................................................................ x
Support for problem solving.........................................................................................................................x
Obtaining fixes........................................................................................................................................ x
Receiving weekly support updates.........................................................................................................x
Contacting IBM Software Support........................................................................................ xi
Conventions used in this publication....................................................................................... xiii
Typeface conventions......................................................................................................... xiii
PDF code examples with single quotation marks............................................................................... xiii
Operating system-dependent variables and paths.............................................................................xiii

Chapter 1. Working with the User Interface............................................................1


Globalization................................................................................................................................................ 1
Using SJIS or EUC Japanese character encoding....................................................................................... 1
Navigating Netcool/Impact..........................................................................................................1
Selecting a cluster and project.................................................................................................................... 2
Personalizing the GUI preferences..............................................................................................................3

Chapter 2. Working with projects........................................................................... 5


Projects overview......................................................................................................................................... 5
Project components..................................................................................................................................... 5
Important differences between projects, and the global repository......................................................... 6
Global repository..........................................................................................................................................6
Adding and removing items in the global repository.............................................................................6
Creating and editing a project......................................................................................................................7
Deleting a project......................................................................................................................................... 7
Automated project deployment feature......................................................................................................8
Running the DeployProject policy.......................................................................................................... 8
DeployProject policy input parameters window....................................................................................8
Version control file locking...........................................................................................................................9
Unlocking all locked items........................................................................................................................... 9

Chapter 3. Managing data models........................................................................ 11


Data model components........................................................................................................................... 11
Setting up a data model.............................................................................................................................11
Accessing the data model tab................................................................................................................... 12
Data model menu controls........................................................................................................................ 12
Data sources overview...............................................................................................................................13
Data source categories.........................................................................................................................13
List of data sources.............................................................................................................................. 14
Creating data sources.......................................................................................................................... 16
Editing data sources............................................................................................................................. 17
Deleting data sources...........................................................................................................17
Testing data source connections......................................................................................................... 17
Datasourcelist file.................................................................................................................................17
Data types overview...................................................................................................................................19
Data type categories............................................................................................................................ 20
Predefined data types overview...........................................................................................................20
List of predefined data types............................................................................................................... 20
Viewing data types............................................................................................................................... 21
Editing data types.................................................................................................................................21
Deleting data types.............................................................................................................................. 21
Typelist file........................................................................................................................................... 21
Data items overview.................................................................................................................................. 23
Links overview............................................................................................................................................23

Chapter 4. Configuring data sources.....................................................................25


Data sources.............................................................................................................................................. 25
SQL database DSA failover........................................................................................................................ 25
SQL database DSA failover modes.......................................................................................................25
SNMP data sources.................................................................................................................................... 25
SQL database data sources....................................................................................................................... 26
DB2 data source configuration............................................................................................................ 26
Derby data source configuration..........................................................................................................28
Creating flat file data sources.............................................................................................................. 30
GenericSQL data sources..................................................................................................................... 31
HSQLDB data source configuration..................................................................................................... 33
Informix data source configuration..................................................................................................... 35
MS-SQL Server data source configuration...........................................................................................37
MySQL data source configuration........................................................................................................ 40
ObjectServer data source configuration.............................................................................................. 43
ODBC data source configuration..........................................................................................................45
Oracle data source configuration.........................................................................................................47
Connecting to an Oracle data source using LDAP............................................................................... 50
Connecting to an Oracle data source using a JDBC LDAP URL........................................................... 51
Connecting to Oracle RAC cluster........................................................................................................52
PostgreSQL data source configuration................................................................................................ 52
Sybase data source configuration........................................................................................................54
JDBC ResultSetType and ResultSetConcurrency configuration......................................................... 56
UI data provider data sources................................................................................................................... 57
Creating a UI data provider data source..............................................................................................57
Providing support for multi-tenancy for Tree Table and Topology widgets........................................58
RESTful DSA data source........................................................................................................................... 60
Creating a RESTful DSA data source....................................................................................................60
OAuth data source..................................................................................................................................... 61
Creating an OAuth data source............................................................................................................ 61
LDAP data sources..................................................................................................................................... 62
Creating LDAP data sources.................................................................................................................62
LDAP data source configuration window............................................................................................. 62
Mediator data sources............................................................................................................................... 63
CORBA Mediator DSA data source configuration window................................................................. 64
Direct Mediator DSA data source configuration window.....................................................................64
Creating SNMP data sources................................................................................................................64
JMS data source.........................................................................................................................................66
JMS data source configuration properties...........................................................................................66

Chapter 5. Configuring data types........................................................................ 69


Viewing data type performance statistics.................................................................................................69
Data type performance statistics.........................................................................................................69
Data type caching...................................................................................................................... 70
Data type caching types....................................................................................................................... 70
Creating internal data types...................................................................................................................... 71
Internal data type configuration window............................................................................................ 71
External data types.................................................................................................................................... 73
Deleting a field......................................................................................................................................73
List of predefined data types..................................................................................................................... 73
Predefined data types overview...........................................................................................................74
Time range groups and schedules....................................................................................................... 74
ITNM DSA data type............................................................................................................................. 78
SQL data types........................................................................................................................................... 79
Configuring SQL data types.................................................................................................................. 79
SQL data type configuration window - Table Description tab............................................................. 80
SQL data type configuration window - adding and editing fields in the table.................................... 83
SQL data type configuration window - Cache settings tab................................................................. 85
Creating flat file data types........................................................................................................................86
UI data provider data types....................................................................................................................... 86
Creating a UI data provider data type..................................................................................................86
LDAP data types.........................................................................................................................................87
Configuring LDAP data types................................................................................................................87
LDAP Info tab of the LDAP data type configuration window........................................................... 88
Mediator DSA data types........................................................................................................................... 89
Viewing Mediator DSA data types........................................................................................................89
SNMP data types........................................................................................................................................90
SNMP data types - configuration overview..........................................................................................90
Packed OID data types.........................................................................................................................90
Table data types................................................................................................................................... 92
LinkType data types................................................................................................................................... 95
Configuring LinkType data items..........................................................................................................95
Document data types.................................................................................................................................96
Adding new Doc data items................................................................................................................. 96
FailedEvent data types.............................................................................................................................. 96
Viewing FailedEvent data items........................................................................................................... 96
Hibernation data types.............................................................................................................................. 96
Working with composite data types.......................................................................................................... 97
Creating composite data types............................................................................................................ 97
Creating linked fields............................................................................................................................97
Configuring a linked field on a composite data type........................................................................... 98

Chapter 6. Working with data items..................................................................... 99


Viewing data items.....................................................................................................................................99
Adding new data items.............................................................................................................................. 99
Editing and deleting data items...............................................................................................................100
Viewing data items for a UI data provider data type..............................................................100
Using the GetByFilter function to handle large data sets................................................................. 100

Chapter 7. Working with links............................................................................ 103


Dynamic links...........................................................................................................................................103
Static links................................................................................................................................................103
Working with dynamic links.....................................................................................................................103
Creating dynamic links.......................................................................................................................104
Editing and deleting dynamic links.................................................................................................... 106
Working with static links..........................................................................................................................107
Creating static links............................................................................................................................107

Chapter 8. Working with policies........................................................................109


Policies overview..................................................................................................................................... 109
Accessing policies....................................................................................109
Policies panel controls.............................................................................................................................110
Writing policies........................................................................................................................................ 110
Policy wizards.....................................................................................................................................110
Recovering automatically saved policies................................................................................................ 113
Working with the policy editor.................................................................................................................113
Policy editor toolbar controls.............................................................................................................113
Policy syntax checking....................................................................................................................... 115
Policy syntax highlighter.................................................................................................................... 115
Optimizing policies.............................................................................................................................116
Running policies with parameters in the editor................................................................................ 116
Browsing data types...........................................................................................................................116
Configuring policy settings in the policy editor................................................................................. 117
Adding functions to policy................................................................................................................. 119
List and overview of functions........................................................................................................... 119
Changing default font used in the policy editor................................................................................ 127
Using version control interface............................................................................................................... 127
Uploading policies................................................................................................................................... 127
Working with predefined policies............................................................................................................128
Accessibility Features..............................................................................................................................130

Chapter 9. Working with services....................................................................... 131


Services overview.................................................................................................................................... 131
Creating services..................................................................................................................................... 131
Services panel controls........................................................................................................................... 132
List of services......................................................................................................................................... 133
Personalizing services............................................................................................................................. 135
Starting and stopping services................................................................................................................ 135
Viewing services logs...............................................................................................................................135
Services log viewer.............................................................................................................................135
Service log viewer results.................................................................................................................. 136
Creating new tabs.............................................................................................................................. 137
Event mapping......................................................................................................................................... 137
Creating event filters.......................................................................................................................... 137
Configuring an event filter..................................................................................................................137
Consolidating filters........................................................................................................................... 138
Event mapping table.......................................................................................................................... 140
Editing and deleting filters................................................................................................................. 140
Filter analysis..................................................................................................................................... 141
Command execution manager service....................................................................................................141
Command line manager service..............................................................................................................142
Configuring the command line manager service............................................................... 142
Database event listener service.............................................................................................................. 142
Configuring the database event listener service............................................................... 142
E-mail sender service.............................................................................................................................. 143
Configuring the Email sender service................................................................................................ 143
Event processor service.......................................................................................................................... 144
Configuring the Event processor service........................................................................................... 144
Hibernating policy activator service........................................................................................................145
Hibernating policy activator configuration........................................................................146
Configuring the hibernating policy activator service.........................................................................146
Policy logger service................................................................................................................................ 146
Policy logger configuration.................................................................................................................146
Configuring the Policy logger service.................................................................................................147
Policy log files.....................................................................................................................................149
ITNM event listener service.....................................................................................................................150
Configuring ITNM event listener service........................................................................................... 150
ITNM event listener service configuration window...........................................................150
Configuring the ImpactDatabase service................................................................................................151
Self monitoring service............................................................................................................................ 151
Configuring the self monitoring service ............................................................................................ 152
Database event reader service................................................................................................................153
Configuring the database event reader service.................................................................................153
Database event reader configuration window - general settings..................................................... 154
Database event reader configuration window - event mapping....................................................... 155
Configuring number of rows in the database event reader select query......................................... 156
Email reader service................................................................................................................................ 156
Configuring the email reader service.................................................................................................156
Event listener service.............................................................................................................................. 159
Configuring the event listener service............................................................................................... 159
JMS message listener..............................................................................................................................160
JMS message listener service configuration properties................................................................... 160
OMNIbus event listener service.............................................................................................................. 162
Setting up the OMNIbus event listener service.................................................................................162
Configuring the OMNIbus event listener service...............................................................................162
OMNIbus event reader service................................................................................................................163
Configuring the OMNIbus event reader service................................................................................ 163
Creating a new OMNIbus event reader from the command line...................................................... 164
OMNIbus event reader service General Settings tab...................................................................... 165
OMNIbus event reader service Event Mapping tab...........................................................................166
OMNIbus Event Reader event locking examples.............................................................................. 169
Forcing checkpointing after a specified number of minutes.............................................................170
Handling Serial rollover......................................................................................................................171
Policy activator service............................................................................................................................172
Policy activator configuration............................................................................................................ 172
Configuring the policy activator service............................................................................................ 172

Chapter 10. Working with operator views........................................................... 175


Viewing operator views........................................................................................................................... 175
Operator views.........................................................................................................................................175
Operator view types........................................................................................................................... 175
Operator views panel controls................................................................................................................ 176
Layout options......................................................................................................................................... 177
Action panel policies............................................................................................................................... 177
Information groups..................................................................................................................................177
Creating and viewing a basic operator view............................................................................................178

Chapter 11. Configuring Event Isolation and Correlation..................................... 181


Overview.................................................................................................................................................. 181
Event Isolation and Correlation policies........................................................................................... 182
Event Isolation and Correlation operator views................................................................................182
Configuring Event Isolation and Correlation data sources.....................................................................182
Configuring Event Isolation and Correlation data types.........................................................................183
Creating, editing, and deleting event rules............................................................................................. 184
Creating an event rule........................................................................................................................ 184
Configuring WebGUI to add a new launch point.....................................................................................185
Launching the Event Isolation and Correlation analysis page ...............................................................186
Viewing the Event Analysis......................................................................................................................186

Chapter 12. Working with reports.......................................................................187


Accessing reports.................................................................................................................................... 187
Viewing Reports....................................................................................................................................... 187
Reports toolbar........................................................................................................................................ 188
Policy Efficiency report............................................................................................................................ 189
Policy Error report....................................................................................................................................189

Operator Efficiency report....................................................................................................................... 189
Node Efficiency report............................................................................................................................. 190
Action Error report................................................................................................................................... 190
Action Efficiency report........................................................................................................................... 190
Impact ROI Efficiency report...................................................................................................................191
Impact ROI Efficiency report business processes............................................................................ 192
Creating a sample Impact ROI Efficiency report...............................................................................192
Impact Profile report............................................................................................................................... 195
Configuring Impact Profile report...................................................................................................... 195
Impact Profile Report data................................................................................................................ 196
Impact Profile Report rules editor..................................................................................................... 197

Chapter 13. Configuring Maintenance Window Management............................... 199


About MWM maintenance windows........................................................................................................199
Logging on to Maintenance Window Management................................................................................. 200
Creating a one time maintenance window..............................................................................................200
Creating a recurring maintenance window............................................................................................. 200
Viewing maintenance windows............................................................................................................... 201

Chapter 14. Working with the configuration documenter.....................................203


Viewing items in the configuration documenter..................................................................................... 203

Appendix A. Notices.......................................................................................... 205


Trademarks.............................................................................................................................................. 206

Index................................................................................................................ 209

About this publication
The Netcool/Impact User Interface Guide contains information about the user interface in Netcool/
Impact.

Intended audience
This publication is for users of the Netcool/Impact user interface.

Publications
This section lists publications in the Netcool/Impact library and related documents. The section also
describes how to access Tivoli® publications online and how to order Tivoli publications.

Netcool/Impact library
• Administration Guide
Provides information about installing, running and monitoring the product.
• Policy Reference Guide
Contains complete description and reference information for the Impact Policy Language (IPL).
• DSA Reference Guide
Provides information about data source adaptors (DSAs).
• Operator View Guide
Provides information about creating operator views.
• Solutions Guide
Provides end-to-end information about using features of Netcool/Impact.

Accessing terminology online


The IBM® Terminology Web site consolidates the terminology from IBM product libraries in one
convenient location. You can access the Terminology Web site at the following Web address:
http://www.ibm.com/software/globalization/terminology

Accessing publications online


Publications are available from the following locations:
• The Quick Start DVD contains the Quick Start Guide. Refer to the readme file on the DVD for instructions
on how to access the documentation.
• IBM Knowledge Center web site at http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/
com.ibm.netcoolimpact.doc6.1.1/welcome.html. IBM posts publications for all Tivoli products to the
Tivoli Information Center Web site as they become available and whenever they are updated.
Note: If you print PDF documents on paper other than letter-sized paper, set the option in the File →
Print window that allows Adobe Reader to print letter-sized pages on your local paper.
• Tivoli Documentation Central at http://www.ibm.com/tivoli/documentation. You can access publications
of the previous and current versions of Netcool/Impact from Tivoli Documentation Central.
• The Netcool/Impact wiki contains additional short documents and additional information
and is available at: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/
Tivoli%20Netcool%20Impact/page/Overview%20and%20Planning

© Copyright IBM Corp. 2006, 2023 ix


Ordering publications
You can order many Tivoli publications online at http://www.elink.ibmlink.ibm.com/publications/servlet/
pbi.wss.
You can also order by telephone by calling one of these numbers:
• In the United States: 800-879-2755
• In Canada: 800-426-4968
In other countries, contact your software account representative to order Tivoli publications. To locate the
telephone number of your local representative, perform the following steps:
1. Go to http://www.elink.ibmlink.ibm.com/publications/servlet/pbi.wss.
2. Select your country from the list and click Go.
3. Click About this site in the main panel to see an information page that includes the telephone number
of your local representative.

Accessibility
Accessibility features help users with a physical disability, such as restricted mobility or limited vision,
to use software products successfully. In this release, the Netcool/Impact console does not meet all the
accessibility requirements.

Tivoli technical training


For Tivoli technical training information, refer to the following IBM Tivoli Education Web site at http://
www.ibm.com/software/tivoli/education.

Support for problem solving


If you have a problem with your IBM software, you want to resolve it quickly. This section describes the
following options for obtaining support for IBM software products:
• “Obtaining fixes” on page x
• “Receiving weekly support updates” on page x
• “Contacting IBM Software Support ” on page xi

Obtaining fixes
A product fix might be available to resolve your problem. To determine which fixes are available for your
Tivoli software product, follow these steps:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/support.
2. Navigate to the Downloads page.
3. Follow the instructions to locate the fix you want to download.
4. If there is no Download heading for your product, supply a search term, error code, or APAR number in
the search field.
For more information about the types of fixes that are available, see the IBM Software Support Handbook
at http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html.

Receiving weekly support updates


To receive weekly e-mail notifications about fixes and other software support news, follow these steps:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/support.
2. Click My IBM in the toolbar. Click My technical support.



3. If you have already registered for My technical support, sign in and skip to the next step. If you have
not registered, click register now. Complete the registration form using your e-mail address as your
IBM ID and click Submit.
4. The Edit profile tab is displayed.
5. In the first list under Products, select Software. In the second list, select a product category (for
example, Systems and Asset Management). In the third list, select a product sub-category (for
example, Application Performance & Availability or Systems Performance). A list of applicable
products is displayed.
6. Select the products for which you want to receive updates.
7. Click Add products.
8. After selecting all products that are of interest to you, click Subscribe to email on the Edit profile
tab.
9. In the Documents list, select Software.
10. Select Please send these documents by weekly email.
11. Update your e-mail address as needed.
12. Select the types of documents you want to receive.
13. Click Update.
If you experience problems with the My technical support feature, you can obtain help in one of the
following ways:
Online
Send an e-mail message to [email protected], describing your problem.
By phone
Call 1-800-IBM-4You (1-800-426-4409).
World Wide Registration Help desk
For worldwide support information, check the details in the following link: https://www.ibm.com/
account/profile/us?page=reghelpdesk

Contacting IBM Software Support


Before contacting IBM Software Support, your company must have an active IBM software maintenance
contract, and you must be authorized to submit problems to IBM. The type of software maintenance
contract that you need depends on the type of product you have:
• For IBM distributed software products (including, but not limited to, Tivoli, Lotus®, and Rational®
products, and DB2® and WebSphere® products that run on Windows or UNIX operating systems), enroll
in Passport Advantage® in one of the following ways:
Online
Go to the Passport Advantage Web site at http://www-306.ibm.com/software/howtobuy/
passportadvantage/pao_customers.htm .
By phone
For the phone number to call in your country, go to the IBM Worldwide IBM Registration Helpdesk
Web site at https://www.ibm.com/account/profile/us?page=reghelpdesk.
• For customers with Subscription and Support (S & S) contracts, go to the Software Service Request Web
site at https://techsupport.services.ibm.com/ssr/login.
• For customers with IBMLink, CATIA, Linux®, OS/390®, iSeries, pSeries, zSeries, and other support
agreements, go to the IBM Support Line Web site at http://www.ibm.com/services/us/index.wss/so/its/
a1000030/dt006.
• For IBM eServer™ software products (including, but not limited to, DB2 and WebSphere products
that run in zSeries, pSeries, and iSeries environments), you can purchase a software maintenance
agreement by working directly with an IBM sales representative or an IBM Business Partner. For more



information about support for eServer software products, go to the IBM Technical Support Advantage
Web site at http://www.ibm.com/servers/eserver/techsupport.html.
If you are not sure what type of software maintenance contract you need, call 1-800-IBMSERV
(1-800-426-7378) in the United States. From other countries, go to the contacts page of the
IBM Software Support Handbook on the Web at http://www14.software.ibm.com/webapp/set2/sas/f/
handbook/home.html and click the name of your geographic region for phone numbers of people who
provide support for your location.
To contact IBM Software support, follow these steps:
1. “Determining the business impact” on page xii
2. “Describing problems and gathering information” on page xii
3. “Submitting problems” on page xii

Determining the business impact


When you report a problem to IBM, you are asked to supply a severity level. Use the following criteria to
understand and assess the business impact of the problem that you are reporting:
Severity 1
The problem has a critical business impact. You are unable to use the program, resulting in a critical
impact on operations. This condition requires an immediate solution.
Severity 2
The problem has a significant business impact. The program is usable, but it is severely limited.
Severity 3
The problem has some business impact. The program is usable, but less significant features (not
critical to operations) are unavailable.
Severity 4
The problem has minimal business impact. The problem causes little impact on operations, or a
reasonable circumvention to the problem was implemented.

Describing problems and gathering information


When describing a problem to IBM, be as specific as possible. Include all relevant background
information so that IBM Software Support specialists can help you solve the problem efficiently. To save
time, know the answers to these questions:
• Which software versions were you running when the problem occurred?
• Do you have logs, traces, and messages that are related to the problem symptoms? IBM Software
Support is likely to ask for this information.
• Can you re-create the problem? If so, what steps were performed to re-create the problem?
• Did you make any changes to the system? For example, did you make changes to the hardware,
operating system, networking software, and so on.
• Are you currently using a workaround for the problem? If so, be prepared to explain the workaround
when you report the problem.

Submitting problems
You can submit your problem to IBM Software Support in one of two ways:
Online
Click Submit and track problems on the IBM Software Support site at http://www.ibm.com/software/
support/probsub.html. Type your information into the appropriate problem submission form.
By phone
For the phone number to call in your country, go to the contacts page of the IBM Software Support
Handbook at http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html and click the
name of your geographic region.



If the problem you submit is for a software defect or for missing or inaccurate documentation, IBM
Software Support creates an Authorized Program Analysis Report (APAR). The APAR describes the
problem in detail. Whenever possible, IBM Software Support provides a workaround that you can
implement until the APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
Software Support Web site daily, so that other users who experience the same problem can benefit from
the same resolution.

Conventions used in this publication


This publication uses several conventions for special terms and actions, operating system-dependent
commands and paths, and margin graphics.

Typeface conventions
This publication uses the following typeface conventions:
Bold
• Lowercase commands and mixed case commands that are otherwise difficult to distinguish from
surrounding text
• Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons,
list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs,
property sheets), labels (such as Tip:, and Operating system considerations:)
• Keywords and parameters in text
Italic
• Citations examples: titles of publications, diskettes, and CDs
• Words defined in text (example: a nonswitched line is called a point-to-point line)
• Emphasis of words and letters (words as words example: "Use the word that to introduce a
restrictive clause."; letters as letters example: "The LUN address must start with the letter L.")
• New terms in text (except in a definition list): a view is a frame in a workspace that contains data.
• Variables and values you must provide: ... where myname represents....
Monospace
• Examples and code examples
• File names, programming keywords, and other elements that are difficult to distinguish from
surrounding text
• Message text and prompts addressed to the user
• Text that the user must type
• Values for arguments or command options

PDF code examples with single quotation marks


How to resolve issues with PDF code examples with single quotation marks.
Throughout the documentation, there are code examples that you can copy and paste into the product.
In instances where code or policy examples that contain single quotation marks are copied from the PDF
documentation the code examples do not preserve the single quotation marks. You need to correct them
manually. To avoid this issue, copy and paste the code example content from the html version of the
documentation.
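As a hypothetical illustration (the file name and policy line here are invented for this sketch, and this is not a product-supplied tool), typographic quotation marks in text pasted from a PDF can be normalized back to plain apostrophes with a small shell substitution:

```shell
# A pasted line where the PDF turned plain apostrophes into
# typographic single quotes (U+2018 and U+2019).
printf 'Log(‘copied from PDF’);\n' > /tmp/pasted_policy.ipl

# Replace left and right single quotation marks with ASCII apostrophes.
sed -i "s/’/'/g; s/‘/'/g" /tmp/pasted_policy.ipl

cat /tmp/pasted_policy.ipl   # prints: Log('copied from PDF');
```

The same substitution can be applied to any pasted code example before you use it in the product.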

Operating system-dependent variables and paths


This publication uses the UNIX convention for specifying environment variables and for directory notation.



When you use the Windows command line, replace the $variable with the %variable% for environment
variables and replace each forward slash (/) with a backslash (\) in directory paths. The names of
environment variables are not always the same in the Windows and UNIX environments. For example,
%TEMP% in Windows environments is equivalent to $TMPDIR in UNIX environments.
Note: If you are using the bash shell on a Windows system, you can use the UNIX conventions.
• On UNIX systems, the default installation directory is /opt/IBM/tivoli/impact.
• On Windows systems, the default installation directory is C:\Program Files\IBM\Tivoli\impact.
Windows information, steps, and processes are documented when they differ from UNIX systems.
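As a minimal sketch of the convention (the IMPACT_HOME variable name is only an example for this illustration, not a variable defined by the product), the same installation path can be written for both environments:

```shell
# UNIX convention: $variable syntax and forward slashes.
IMPACT_HOME=/opt/IBM/tivoli/impact
echo "$IMPACT_HOME/bin"   # prints: /opt/IBM/tivoli/impact/bin

# Windows command-line equivalent, shown as comments:
#   set IMPACT_HOME=C:\Program Files\IBM\Tivoli\impact
#   echo %IMPACT_HOME%\bin
```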



Chapter 1. Working with the User Interface
The graphical user interface (GUI) gives you immediate access to all projects, policies, reports, data types,
operator views, services, and defined clusters.
To use the Netcool/Impact UI, ensure that the browser settings allow pop-up windows for the
Netcool/Impact UI. For more information about enabling browser pop-ups, see the documentation for
your browser.

Globalization
Netcool/Impact does not support Unicode names for databases, tables, schemas, and columns in foreign
language data sources.

Using SJIS or EUC Japanese character encoding


You can input, display, and process Japanese characters in a policy by changing the encode option to
Unicode in your browser. Use the following procedure to change the encode option in your browser.

Procedure
1. Open your browser.
2. Select View > Encoding or View > Character Encoding, depending on which browser you are using.
3. Select Unicode (UTF-8).

Navigating Netcool/Impact
How to navigate to Netcool/Impact components.
When you log on, you see a number of tabs along the top of the UI. The Welcome tab provides
information to get you started with Netcool/Impact features.
All the Netcool/Impact components are found in the tabs at the top of the UI. Depending on the user
permissions that you are assigned, you have access to some or all of the following Netcool/Impact
components.
• Welcome
• Data Model
• Policies
• Services
• Operator View
• Event Isolation and Correlation
• Maintenance Window
• Reports
Tip: You can select the Global project to view all the items in the selected tab.
Click the Reports tab to locate the following reports:
• Policy Efficiency Report
• Policy Error Report
• Operator Efficiency Report
• Node Efficiency Report
• Action Error Report



• Action Efficiency Report
• Impact ROI Efficiency Report
• Impact Profile Report
Data Model
Set up a data model for your solution: data sources, data types, data items, and links.
For more information about the data model, see the online help and the Netcool/Impact DSA Reference
Guide.
Policies
Create policies to manipulate events and data from your data sources.
For more information about the policies, see the online help and the Netcool/Impact Policy Reference
Guide.
Services
Work with services: monitor event sources, send, and receive email notifications, and run policies.
For more information about the services, see the online help.
Operator Views
View events and data in real time and run policies that are based on that data.
For more information about the operator views, see the online help and the Netcool/Impact Operator
View Guide.
Reports
View information about your network and network operators, and assess the efficiency of your
configuration.
For more information about the reports, see the online help.
Event Isolation and Correlation
You can set up Event Isolation and Correlation to isolate the event that has caused the problem. You
can also view the events dependent on the isolated event. For more information about configuring
data sources and creating event rules, see the online help and the Netcool/Impact DSA Reference Guide.
For more information about setting up and configuring Event Isolation and Correlation, see the
Netcool/Impact Solutions Guide.
Maintenance Window Management
Maintenance Window Management (MWM) is an add-on for managing Netcool/OMNIbus maintenance
windows.
For more information about using Maintenance Window Management, see the online help.
For more information about setting up Maintenance Window Management, see the Netcool/Impact
Solutions Guide.

Selecting a cluster and project


Use the cluster and project menus to select the clusters and projects you want to use when you are
working in the GUI.
1. When you log in to the GUI, the name of the current Cluster and current Project is displayed in the
banner at the top of the page.
2. To change clusters, open the Cluster menu by selecting the down arrow next to the Cluster name then
select the cluster from the drop-down list. The list of available clusters refreshes automatically at
regular intervals. You can refresh the list manually by selecting the Refresh option in the Cluster menu.
3. To change project, select the down arrow next to the current Project name to open the Project Menu.
The Project Menu shows a list of available projects on the left column. Select the project from the list
to load the project. You can also choose to create, edit, and delete projects by using the icons on the
right column. The Refresh option refreshes the list of available projects.
Note: The Global project shows all the items from all projects in the current tab.



4. You can switch between clusters and projects. Save any work in progress before you switch clusters
or projects so that your changes take effect.
Start working with Netcool/Impact by creating a project for your data models, policies, and services. For
more information about working with projects, see Chapter 2, “Working with projects,” on page 5.

Personalizing the GUI preferences


You can change some of the default behavior and display options for some of the policies, and services
panels.

Procedure
1. Click Options from the main menu, then click Preferences to open the Preferences dialog box.
2. Select the options that you want to personalize. Select from the tab options.
For example, click Policies. Select the check box for the options you want to enable.
• Select Show line number to view the line numbers for the policy editor.
• Select Automatically Save Drafts (every 5 minutes) and the policy is automatically saved every 5
minutes while you are editing it.
• Select the Character limit for Syntax Highlighting. Requires a restart of the Policies page.
3. Click Save.



Chapter 2. Working with projects
You use projects to organize and manage related data sources and types, operator views, policies,
services, and wizards.
Using the GUI, you can complete the following tasks:
• Switch between clusters and projects
• Create projects
• Edit projects
• View project items
• Add and remove project members
• Delete projects
• Use the Deploy function to copy the data sources, data types, operator views, policies, and services in
a project between two running server clusters on a network.

Projects overview
A project is a view of a subset of the elements stored in the global repository.
You can use projects to manage your policies and their associated elements. They help you to remember
which data types and services relate to each policy and how the policies relate to each other. Projects
also help to determine whether a policy, or its associated data types or services, is still in use or must be
deleted from the project.
Also, you can find policies and their associated data and services easily when they are organized by
project. You can add any previously created policies, data types, operator views, and services to as many
projects as you like. You can also remove these items when they are no longer needed in any project.
If you have not yet created any projects, the Default and Global projects and the projects predefined by
Netcool/Impact are the only projects listed in the Projects menu.
The Global project lists all items in the global repository. Any item that you create, for example a data
type, is not stored in the project that is currently selected, it is automatically added to the Global project.
The Default project is an example, it works just like any project, you can add items to it edit, or delete it.
When you delete a project, the items that were assigned as project members remain in the global project
and as members of any other projects they were assigned to.
Important: You cannot edit or delete the Global project.

Project components
When you create a project, you can add any existing policies, data sources, data types, and services to it
as project members.
A project can consist of the following components:
• Policies
• Data sources that are set up for project data types
• Data types that are associated with the policies
• Operator views that are related to the policies
• Services
Important: When you are naming projects, data sources, data types, policies, and services, you
cannot use dot notation ".".

© Copyright IBM Corp. 2006, 2023 5


Important differences between projects and the global repository
Make sure that you are aware of the following differences when you edit and delete items in projects and
in the global repository.
• When you select any project, and create an item on the selected tab, the item is automatically added to
the global repository.
• If you select the Global project and create an item in the specified tab, the item is added to the global
repository only.
• Editing a policy, data source, data type, operator view, or service from the tab menu changes it in every
project it is attached to and in the global repository.
• When you delete an item (a policy, data source, data type, operator view, or service) from the global
repository, all versions of it are deleted from the server and from every project it is a member of.
• Deleting a policy, data model, service, or operator view from the tab menu deletes it everywhere: the
item is deleted from the server, from the global repository, and from every project it was assigned to.
Be careful to delete only items that you want to delete globally.
• If you select the Global project and delete an item in the selected tab, the item is removed from
the server, from the global repository, and from every project it was assigned to.
• The only safe way to delete an item from a project, without removing it permanently from the database,
is to remove it in the project editor window.

Global repository
The global repository is the storage area for all the policies, data sources, data types, operator views, and
services for the cluster that you are connected to.
When you create an item on the Data Model, Policies, Services or Operator View tabs, the items are
automatically added to the global repository.
You add new policies and their associated data and services to the global repository, just as you would
to a project, but they are independent of any projects. You can attach added items to projects as project
members at any time.
You must only edit and delete items that you want to change or delete globally. Deleting an item from the
tab menu deletes it from the global repository and every project it is attached to.
A version control interface is provided that you can use to save data as revisions in a version control
archive. You can also use the Global project to unlock all the items that you checked out.

Adding and removing items in the global repository


Use this procedure to view, add, and remove items in the global repository project.

Procedure
1. To view the items in the global repository, select a specific tab, for example the Operator View tab, and
select the Global option in the Projects menu.
You see all the operator views that are stored in the global repository.
2. Each time that you create an item, for example a data type, it is automatically added to the Global
repository project in the specific tab.
3. To remove an item from the Global repository, open the appropriate tab and select the item that you
want to delete.
4. Click the Delete icon on the tab menu bar.
5. Click OK in the confirmation box.
The item is deleted from the global repository, and all projects it was assigned to.

Creating and editing a project
Use this procedure to view, create, or edit a project, or to add or remove its members.

About this task


You can view the members of a single project by selecting the project name in the project menu for each
open tab in the work area. If you have not yet created any projects, the Default, Global and predefined
projects are the only projects listed in the Projects menu.

Procedure
1. To view, create or edit a project, select the cluster and click the down arrow next to the existing project
name to open the Projects window.
• From the Manage Projects list, click Create Project.
• From the Manage Projects list, click Edit Current Project.
Use the project editor window to configure your new project or edit an existing project.
2. In the General Settings section, a default name is automatically given to a new project. You can
give the project a unique name. However, you cannot edit a project name
after the project is saved.
Remember: To use UTF-8 characters in the project name, check that the locale on the Impact Server
where the project is saved is also set to the UTF-8 character encoding.
3. In the Member section, you can add data sources, data types, policies, operator views, and services to
your project.
a) From the List By list, select a group whose elements you want to add to your project.
When you select an item, for example, Data Sources, all the data sources that you have created,
plus the predefined data sources are listed in the Members pane. If you have not yet created any
data sources, data types, policies, or services on your server, only predefined items are listed in the
Members pane.
4. Select the members that you want to add to the project from the Available Members list and click the
right-arrow button >>. The selected items appear in the Project Members list.
Then click OK.
5. To remove selected members from the project and return them to the Available Members list, select
them in the Project Members list and click the left-arrow button <<.
Then click OK.
6. If you do not want to add any items to the project now, simply click OK without making any changes.

Deleting a project
Use this procedure to delete a project without removing the project members from other projects or from
the global repository.

Procedure
1. From the Project menu, select the project you want to delete.
2. In the Projects window, click the Delete Current Project icon.
When you delete a project it is removed from the server. However, the project members that were
assigned to it are not removed from other projects or from the global repository.
Important: You cannot edit or delete the Global project.
3. Click OK to confirm the deletion.

Automated project deployment feature
You can copy the data sources, data types, policies, and services in a project between two running server
clusters on a network using the automated project deployment feature.
You can use this feature when moving projects from test environments into real-world production
scenarios.
Important: Automated project deployment requires both server clusters to use the same Name Server.
When you copy data sources and types, policies, and services between clusters, you have the option of
specifying a version control checkpoint label for the revisions of the items that you copy to the target
server cluster. Two checkpoint labels are used for this process. The first is the label that you specify,
which is applied to the copied versions of the project components. The second is the specified label with
the string _AFTER_DEPLOYMENT appended. This label is applied to subsequent changes to the project
components made using the GUI or CLI.
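As a small illustration of the two-label scheme described above (the label value RELEASE_1 is invented for the example):

```python
# "RELEASE_1" is a hypothetical checkpoint label, standing in for whatever
# label you specify when you deploy the project.
label = "RELEASE_1"

# First label: applied to the copied versions of the project components.
deployment_label = label

# Second label: the specified label with _AFTER_DEPLOYMENT appended; it is
# applied to subsequent changes made through the GUI or CLI.
post_deployment_label = label + "_AFTER_DEPLOYMENT"

print(deployment_label)       # RELEASE_1
print(post_deployment_label)  # RELEASE_1_AFTER_DEPLOYMENT
```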
Revision checkpointing is supported only if you are using the SVN or CVS version control system for Netcool/
Impact.
To deploy a project automatically, use one of the following options:
• Run the built-in DeployProject policy using the GUI. The DeployProject policy is a built-in policy that
copies all the data sources, data types, policies, and services in a project between two running server
clusters.
• Create and run a custom deployment policy that uses the Deploy function. See the Policy Reference
Guide for more information.

Running the DeployProject policy


Use this procedure to run the DeployProject policy and copy all the data sources, data types, policies,
and services in a project between two running server clusters.

Procedure
1. From the main toolbar, select the server cluster from which you want to copy data.
2. Select DeployProject from the list of policies.
The Policy Editor opens and shows the contents of the DeployProject policy.
3. Click Configure Policy Settings to open the Policy Settings Editor window.
For reference on the configuration options, see “DeployProject policy input parameters window” on
page 8.
4. Click OK to save the configuration and close the window.
After you run the DeployProject policy, you can check the contents of the policy log for the results of
the project deployment.

DeployProject policy input parameters window


Use this information to configure the DeployProject policy parameters.

Table 1. DeployProject policy input parameters window

Window element Description

TargetCluster Enter the name of the destination server cluster.

Username Enter a valid user name.

Password Enter a valid password.

Table 1. DeployProject policy input parameters window (continued)

Window element Description

Project Enter the name of the project to copy.

Checkpoint ID If you are using Subversion as the version control system you can type
a checkpoint label. This label is applied to all project components when
checked into the version control system for the target cluster. If you
are not using Subversion or you do not want to use a checkpoint label,
accept the default value for this field, which is NULL.

Version control file locking


Netcool/Impact is installed with a version control interface that you can use to save data as revisions in a
version control archive.
When you create a policy, data source, data type, or service, a corresponding element is created in the
version control system. When you open one of these items for viewing or editing, the item is automatically
locked. When an item is locked, other users can view the item but cannot edit it until the lock is
released. When you save and close an item, the lock is automatically released and the item becomes
available for editing by other users.
If required, for example after the system goes down, you can use the Global project to unlock the locked
files. You can unlock only the items that you have checked out. If you have an item open for editing, you
cannot unlock it; save and close the item first.
Important: You cannot unlock an item if the lock belongs to another user. If you open a file locked
by another user, the file will open in read-only mode. Only the lock owner or administrators with the
impactAdminUser role can unlock the item in exceptional circumstances.
For details about unlocking your own locked files, see “Unlocking all locked items” on page 9.

Unlocking all locked items


Use this procedure to unlock all items that you have checked out.

Procedure
1. From the Projects menu, select the Global project.
2. Select the down arrow next to the project name, then click Clear all user locks to unlock all the items
that you have checked out.
You can unlock only your own items. If you want to unlock an item that is owned by another user,
contact an administrator who is assigned the impactAdminUser role.
3. A confirmation message shows when the files are unlocked.

Chapter 3. Managing data models
A data model is a model of the business data and metadata that is used in a Netcool/Impact solution.
DSA (Data Source Adapter) data models are sets of data sources, data types, and data items that
represent information that is managed by the internal data repository or an external source of data. For
each category of DSA, the data model represents different structures and units of data that are stored
or managed by the underlying source. For example, for SQL database DSAs, data sources represent
databases; data types represent database tables; and data items represent rows in a database table.
The following DSAs store some of their configuration in the $IMPACT_HOME/dsa directory: Web Services,
SNMP, ITNM (Precision), and XML. In a clustered environment, the $IMPACT_HOME/dsa directory is
replicated from the primary server to the secondary servers in a cluster during startup.
If you are changing these directories and configurations, it is best to make the changes on the primary
server while the servers are down. When the changes are complete, start the primary server followed by
the secondary servers in the cluster. Some of the changes replicate in real time, for example, if you use
the Web Services and XML wizards. There is also a directory, $IMPACT_HOME/dsa/misc, where you
can store scripts and flat files, for example; its contents are replicated across the cluster during startup
as the secondary servers retrieve the data from the primary server.

Data model components


A data model is made up of components that represent real world sources of data and the actual data
inside them.
Data sources
Data sources are elements of the data model that represent real world sources of data in your
environment.
Data types
Data types are elements of the data model that represent sets of data stored in a data source.
Data items
Data items are elements of the data model that represent actual units of data stored in a data source.
Links
Links are elements of the data model that define relationships between data types and data items.
Event sources
Event sources are special types of data sources. Each event source represents an application that
stores and manages events.

Setting up a data model


To set up a data model, you must first determine what data you need to use in your solution and where
that data is stored. Then, you create a data source for each real world source of data and create a data
type for each structural element that contains the data you need.

Procedure
1. Create data sources
Identify the data you want to use and where it is stored. Then, you create one data source for each real
world source of data. For example, if the data is stored in one MySQL database and one LDAP server,
you must create one MySQL and one LDAP data source.
2. Create data types
After you have set up the data sources, you create the required data types. You must create one data
type for each database table (or other data element, depending on the data source) that contains data

you want to use. For example, if the data is stored in two tables in an Oracle database, you must create
one data type for each table.
3. Optional: Create data items
For most data types, the best practice is to create data items using the native tools supplied by the
data source. For example, if your data source is an Oracle database, you can add any required data to
the database using the native Oracle tools. If the data source is the internal data repository, you must
create data items using the GUI.
4. Optional: Create links
After you create data types, you can define linking relationships between them using dynamic links.
You can also define linking relationships between internal data items using static links. That makes it
easier to traverse the data programmatically from within a policy. Use of links is optional.

5. Create event sources
In most cases, events are retrieved from a Netcool/OMNIbus ObjectServer. The ObjectServer is
represented in the data model as an event source.

Accessing the data model tab


Use this procedure to access the data model tab.

Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster list, select the cluster you want to use.
3. From the Project list, select the project you want to use.
The data sources that are available to the project are displayed in the Data Model tab.

Data model menu controls


This topic gives an overview of the controls that are used in the data model menu.

Table 2. Data model menu controls

Icon Description

Click this icon to create a data source. Select one of the available data source types from the
list. After you create a data source, you can right-click the data source and click New Data
Type to create an associated data type.

Select a data source and click this icon to create a data type for the selected data source.
After you create a data type, it is listed under its data source. Alternatively, you can right-click
a data source and select New Data Type to create a data type for this data source.

Select an element in the list and click this icon to edit it. Alternatively, right-click an item in the
list and select Edit in the menu.

Click to view the selected data type in the editor panel. Select the View Data Items option
to view the data items for the data type, or the View Performance Report option to review
a performance report for the data type. Alternatively, you can view the data items or the
performance report for a data type by right-clicking the data type.

Table 2. Data model menu controls (continued)

Icon Description

Click this icon to test the connection to the data source. Alternatively, right-click an item in the
list and select Test Connection in the menu.
Important: If you see an error message stating that the data source cannot establish a
connection to a database because a JDBC driver was not found, it means that a required JDBC
driver is missing in the shared library directory. To fix this, place a licensed JDBC driver in the
shared library directory and restart the server. For more information, see the "SQL database
DSAs" chapter in the Netcool/Impact DSA Reference Guide.

Click the Delete icon to delete a data source or type from the server. Alternatively, you can
right-click a data source or type and select Delete.
This action deletes an item permanently from the database. To safely remove a data type from
only one project and not from the database, use the project editor.

This icon is visible when a data source or data type item is locked, or the item is being used
by another user. Hover the mouse over the locked item to see which user is working on the
item. You can unlock your own items but not items locked by other users. If you have an
item open for editing, you cannot unlock it; save and close the item first. To unlock an item that you
have locked, right-click the item name and select Unlock. In exceptional circumstances, users who
are assigned the impactAdminUser role are the only users who can unlock items that are locked by
another user.

Data sources overview


Data sources provide an abstract layer between Netcool/Impact and real-world sources of data.
Internally, data sources provide connection and other information that Netcool/Impact uses to access the
data. When you create a data model, you must create one data source for every real world source of data
you want to access in a policy.
The internal data repository of Netcool/Impact can also be used as a data source.

Data source categories


Netcool/Impact supports four categories of data sources.
SQL database data sources
An SQL database data source represents a relational database or another source of data that can be
accessed using an SQL database DSA.
LDAP data sources
The Lightweight Directory Access Protocol (LDAP) data source represents LDAP directory servers.
Mediator data sources
Mediator data sources represent third-party applications that are integrated with Netcool/Impact
through the DSA Mediator.
JMS data sources
A Java™ Message Service (JMS) data source abstracts the information that is required to connect to a
JMS Implementation.

List of data sources
The following table lists and describes the data sources that you can create.

Table 3. User-defined data sources

Data source Type Description

CORBA Mediator Mediator The Mediator data source represents third-party
applications that are integrated with Netcool/Impact
through the DSA Mediator.

DB2 SQL database You use the DB2 DSA to access information in an IBM DB2
database.

Derby SQL database Use the Derby DSA to access information in a Derby
database. The Derby DSA is used to store the underlying
data that is used by the GUI reporting tools and
Netcool/Impact solutions such as Maintenance Window
Management.

Direct Mediator Mediator The Mediator data source represents third-party
applications that are integrated with Netcool/Impact
through the DSA Mediator.

Flat File SQL database You use the Flat File DSA to read information in a character-
delimited text file. The flat file data source can be accessed
like an SQL data source that uses standard SQL commands
in Netcool/Impact. For example, DirectSQL. The flat file DSA
is read only, which means that you cannot add new data
items in GUI. To create a flat file data source, you need a
text file that is already populated with data.

Generic SQL SQL database You use the Generic SQL DSA to access information in any
database application through a JDBC driver.

HSQLDB SQL database Use the HSQL DSA to access information in a HSQL
database.

Informix SQL database You use the Informix® DSA to access information in an IBM
Informix database.

JMS Messaging API A Java Message Service (JMS) data source abstracts
the information that is required to connect to a JMS
Implementation.

Kafka Messaging API You use the Kafka DSA to access message data from a Kafka
endpoint.

LDAP Directory Server The Lightweight Directory Access Protocol (LDAP) data
source represent LDAP directory servers. The LDAP DSA
supports only non-authenticating data sources.

MS SQL Server SQL database Use the MS-SQL Server DSA to access information in the
Microsoft SQL Server database.

Table 3. User-defined data sources (continued)

Data source Type Description

MySQL SQL database Use the MySQL DSA to access information in a MySQL
database.

ObjectServer SQL database The ObjectServer data source represents the instance of
the Netcool/OMNIbus ObjectServer that you monitor by
using the OMNIbus event listener service, or OMNIbus event
reader service.

OAuth Authentication You can use the OAuth data source to provision
access to an external OAuth authentication provider. This
enables components such as the EmailReader service to
authenticate with an OAuth provider.

ODBC SQL database Use the ODBC DSA to access information in an ODBC
database.

Oracle SQL database Use the Oracle DSA to access information in an Oracle
database.

PostgreSQL SQL database Use the PostgreSQL DSA to access information in a
PostgreSQL database.

RESTful API REST The RESTful API data source represents access to a HTTP
REST endpoint. An Impact policy can send REST requests
through the RESTful API data source.

SNMP Mediator The SNMP DSA is a data source adapter that Netcool/
Impact uses to set and retrieve management information
that is stored by SNMP agents. Netcool/Impact can use the
SNMP DSA to send SNMP traps and notifications to SNMP
managers.

Sybase SQL database Use the Sybase DSA to access information in a Sybase
database.

UI Data Provider REST The UI Data Provider represents access to a Data Provider
endpoint such as the TBSM provider or another Impact
cluster.

List of predefined data sources


Initially, the data sources listed in the global project are predefined data sources.

Table 4. predefined data sources

Data source Description

defaultobjectserver The default ObjectServer data source. The defaultobjectserver
data source is configured during the installation, when you create an
instance of the Impact Server.

Table 4. predefined data sources (continued)

Data source Description

ImpactDB ImpactDB represents the database where the reporting data is
stored.

Internal The Internal data source contains the following predefined data
types, TimeRangeGroup, LinkType, and FailedEvent.

ITNM The ITNM data source is used with ITNM and the ITNM DSA.

NOIReportDatasource Reporting database for NOI Event Analytics.

ObjectServerForNOI Historical database for NOI Event Analytics.

ObjectServerHistoryDB2ForNOI Historical database for NOI Event Analytics.

ObjectServerHistoryMSSQLForNOI Historical database for NOI Event Analytics.

ObjectServerHistoryOrclForNOI Historical database for NOI Event Analytics.

RelatedEventsDatasource Related Events database for NOI Event Analytics.

Schedule The Schedule data source contains the predefined data type
schedule. You cannot edit the schedule data source but you can add
additional data types.

seasonalReportDatasource Seasonal report database for NOI Event Analytics.

seasonalReportDataSourceDB2 Seasonal report database for NOI Event Analytics.

Slack REST endpoint used for the Slack integration.

Statistics The Statistics data source contains the hibernation data type. You
cannot edit the statistics data source or add additional data types.

URL The URL data source contains the predefined data type document.
You cannot edit the URL data source but you can add additional data
types.

XmlDsaMediatorDataSource The XmlDsaMediator data source is used with the XML DSA.

Creating data sources


Use this procedure to create a user-defined data source.

Before you begin


Before you create a data source, you must get the connection information for the underlying application.
The connection information that you need varies depending on the type of data source. For most SQL

database data sources, this information is the host name and the port where the application is running,
and a valid user name and password. For LDAP and Mediator data sources, see the DSA Reference Guide
for the connection information required.

Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select a template for the data
source that you want to create. The tab for the data source opens.
4. Complete the information, and click Save to create the data source.

Editing data sources


Use this procedure to configure an existing data source.

Procedure
1. In the Data Model tab, double-click the name of the data source that you want to edit. Alternatively,
right click the data source and click Edit.
2. Make the changes and click Save to apply them.

Deleting data sources


Before deleting a data source, you must first delete any data types listed under the data source.
If you do not delete them, you get an error message when you try to delete the data source. When you
delete a data source from within a project, it is also deleted from any other projects that use it and from
the global repository. To remove a data source from one project, use the editor window for that project.
For more information about removing data sources from a project, see “Deleting a project” on page 7.
In the Data Model tab, select the data source you want to delete, and click the delete icon on the toolbar.
Alternatively, right click the data source and select Delete.

Testing data source connections


To test a data source connection, select the data source and then click the Test Connection button.
If you have defined a backup data source, both the primary and backup data source connections are
tested.
If the test succeeds for the primary connection, you get a message that indicates that it was successful.
If the test fails for the primary source, the backup source is then tested. If the backup succeeds, you get
a message that the connection was successful. It is only when both the primary and backup tests fail that
you receive a message that the connection cannot be made.

Datasourcelist file
The datasourcelist file is a text file that lists all the Impact data sources that have been created.
It is located in the etc directory and has the following name format:
<server>_datasourcelist
Where <server> is the name of the Impact server, for example NCI_datasourcelist.
The datasourcelist file comprises the following elements:
1. A total count of all data sources:

impact.datasources.numdatasources=xxx

2. For each data source, the following entries:

impact.datasources.n.name=EIC_alertsdb
impact.datasources.n.number=8
impact.datasources.n.type=ObjectServer

Where:
n is a sequential number starting from 1.
name is the name of the data source as it appears in the data types under Data Model tab of the
Impact UI.
number is the data source number.
type is the type of the data source.

Order of entries
The sequential numbers do not have to be in order in the datasourcelist file, but it is important that there
are no gaps in the number sequence. For example, if you have three data sources, you must have the
following entries (in any order) in the datasourcelist file:

impact.datasources.numdatasources=3
impact.datasources.1.name=EIC_alertsdb
impact.datasources.1.number=1
impact.datasources.1.type=ObjectServer
impact.datasources.2.name=EventrulesDB
impact.datasources.2.number=2
impact.datasources.2.type=DB2
impact.datasources.3.name=FlatFile_DS
impact.datasources.3.number=3
impact.datasources.3.type=Flat File

Note: In this case, the following would not be valid:

impact.datasources.numdatasources=3
impact.datasources.1.name=EIC_alertsdb
impact.datasources.1.number=1
impact.datasources.1.type=ObjectServer
impact.datasources.3.name=EventrulesDB
impact.datasources.3.number=2
impact.datasources.3.type=DB2
impact.datasources.4.name=FlatFile_DS
impact.datasources.4.number=3
impact.datasources.4.type=Flat File

If the datasourcelist file gets corrupted (for example, gets wiped out or the numbers are not sequential)
you can use the rebuildDatasourceList and createDatasourceList utilities to fix it. For details see
“rebuildDatasourceList” on page 18 and “createDatasourceList” on page 19.
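The numbering rule above can also be checked mechanically. The following sketch is a hypothetical sanity check written for illustration, not a product utility (the supported fixes are rebuildDatasourceList and createDatasourceList); it verifies that the declared count matches the entries and that the index sequence 1..N has no gaps.

```python
import re

def datasourcelist_is_valid(text):
    """Return True if numdatasources matches the entry count and the
    impact.datasources.<n>.name indexes form the gap-free sequence 1..N."""
    m = re.search(r"impact\.datasources\.numdatasources=(\d+)", text)
    if m is None:
        return False
    total = int(m.group(1))
    indexes = {int(n) for n in re.findall(r"impact\.datasources\.(\d+)\.name=", text)}
    return indexes == set(range(1, total + 1))

valid = """impact.datasources.numdatasources=3
impact.datasources.1.name=EIC_alertsdb
impact.datasources.2.name=EventrulesDB
impact.datasources.3.name=FlatFile_DS
"""
# Renumbering entry 2 as 4 leaves a gap (1, 3, 4), like the invalid example above.
invalid = valid.replace("datasources.2.name", "datasources.4.name")

print(datasourcelist_is_valid(valid))    # True
print(datasourcelist_is_valid(invalid))  # False
```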

rebuildDatasourceList
The rebuildDatasourceList script removes gaps and errors from an existing Impact datasourcelist file.
The tool consists of the following files:
• rebuildDatasourceList.xml: This file contains all of the logic necessary to rebuild the
<NCI>_datasourcelist file, thereby removing any gaps in the numeric sequence.
• rebuildDatasourceList.bat: Windows bat file that calls ant and executes
rebuildDatasourceList.xml.
• rebuildDatasourceList.sh: UNIX sh file that calls ant and executes
rebuildDatasourceList.xml.
These files are installed in the following directory:

<installdir>/install/tools
To run the rebuildDatasourceList script, change to the <installdir>/install/tools directory and
run the .bat or .sh script.
No additional input requirements are needed.

createDatasourceList
The createDatasourceList script generates a new Impact datasourcelist file based on the contents of
the .ds files and the backup file DataSourceInfoBackup.
The tool consists of the following files:
• createDatasourceList.xml: This file contains all of the logic necessary to create the
<NCI>_datasourcelist file.
• createDatasourceList.bat: Windows bat file that calls ant and executes
createDatasourceList.xml.
• createDatasourceList.sh: UNIX sh file that calls ant and executes
createDatasourceList.xml.
These files are installed in the following directory:
<installdir>/install/tools
To run the createDatasourceList script, change to the <installdir>/install/tools directory and
run the .bat or .sh script.
No additional input requirements are needed.
Note: A backup of the previous <NCI>_datasourcelist file (if one exists) is saved in
the <installdir>/etc/ directory and renamed
<installdir>/etc/<NCI>_datasourcelist_pre_<datetime>. For example:
<installdir>/etc/NCI_datasourcelist_pre_20210905013045.
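The <datetime> suffix in the example above appears to be a yyyyMMddHHmmss timestamp. A sketch of the naming scheme follows; the format string is inferred from the example, not documented behavior, so treat it as an assumption:

```python
from datetime import datetime

def backup_name(server, when):
    # <server>_datasourcelist_pre_<datetime>, inferred yyyyMMddHHmmss layout,
    # e.g. NCI_datasourcelist_pre_20210905013045
    return "{}_datasourcelist_pre_{}".format(server, when.strftime("%Y%m%d%H%M%S"))
```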

Data types overview


Data types describe the content and structure of the data in the data source table and summarize this
information so that it can be accessed during the execution of a policy.
Data types provide an abstract layer between Netcool/Impact and the associated set of data in a data
source. Data types are used to locate the data that you want to use in a policy. To use a data source in
policies, you must create one data type for each table or other data structure in the data source that
contains information that you want to use in a policy.
Attention: Some system data types are not displayed in the GUI. You can manage these data types
by using the Command Line Interface (CLI).
The structure of the data that is stored in a data source depends on the category of the data source where
the data is stored. For example, if the data source is an SQL database, each data type corresponds to a
database table. If the data source is an LDAP server, each data type corresponds to a type of node in the
LDAP hierarchy.
A data type definition contains the following information:
• The name of the underlying table or other structural element in the data source
• A list of fields that represent columns in the underlying table or another structural element (for
example, a type of attribute in an LDAP node)
• Settings that define how Netcool/Impact caches data in the data type

Chapter 3. Managing data models 19


Data type categories
Netcool/Impact supports four categories of data types.
SQL database data types
SQL database data types represent data stored in a database table.
LDAP data types
LDAP data types represent data stored at a certain base context level of an LDAP hierarchy.
Mediator data types
Mediator data types represent data that is managed by third-party applications such as a network
inventory manager or a messaging service.
Internal data types
You use internal data types to model data that does not exist, or cannot be easily created, in
external databases.

Predefined data types overview


Predefined data types are special data types that are stored in the global repository.
You can edit some predefined data types by adding new fields, but you cannot edit or delete existing
fields. You can view, edit, create, and delete data items of some predefined data types by using the GUI.
You cannot delete predefined data types except for the FailedEvent predefined data type.

List of predefined data types


An overview of the predefined data types available in the global project.

Table 5. Predefined data types

Schedule (Editable)
   Schedules define a list of data items that are associated with specific
   time ranges or time range groups.

Document (Editable)
   Custom URL Document data types are derived from the predefined Doc data
   type.

FailedEvent (Editable)
   The FailedEvent data type, together with the ReprocessedFailedEvents
   policy, provides you with a way to deal with failed events that are passed
   from the ObjectServer.

ITNM (Editable)
   This data type is used with ITNM and the ITNM DSA.

TimeRangeGroup (Non-editable)
   A time range group data type consists of any number of time ranges.

LinkType (Non-editable)
   The LinkType data type provides a way of defining named and hierarchical
   dynamic links.

Hibernation (Non-editable)
   When you call the Hibernate function in a policy, the policy is stored as a
   Hibernation data item for a certain number of seconds.

Viewing data types


You view data types in the data navigator panel.
Before you have created any data types, you see only the data source type selection list. Each time you
create a data type, you must first create the data source that you want it to connect to. After you
configure a data type, it is listed in the data connections panel under the associated data source.

Editing data types


Use this procedure to edit an existing data type.

Procedure
1. Click Data Model to open the Data Model tab.
2. Expand the data source that contains the data type that you want to edit, select the data type, and
   double-click its name. Alternatively, right-click the data type and click Edit.
3. Make the required changes in the Data type tab.
4. Click Save to apply the changes.

Deleting data types


Use the following procedure to delete a data type.

Procedure
1. From the list of data sources and types, locate the data type you want to delete.
2. Select the data type, right-click and select Delete, or click the Delete icon on the toolbar.

Attention: When you delete a data type from within a project or the global repository, it is also
deleted from any other projects that use it. To remove a data type from only one project, open the
editor window for that project.

Typelist file
The typelist file is a text file that lists all the Impact data types that have been created.
It is located in the etc directory and has the following name format:
<server>_typelist
Where <server> is the name of the Impact server, for example NCI_typelist.
The typelist file comprises the following elements:
For each data type, the following entries:

impact.types.n.name=EIC_alertquery
impact.types.n.number=2
impact.types.n.class=SQL
impact.types.n.image=database.png

Where:

n is a sequential number starting from 1.
name is the name of the data type as it appears in the data type list under the Data Model tab of the Impact UI.
number is the data type number.
class is the category of the data type (for example, class SQL indicates an SQL database data type, which represents data stored in a database table).
image is the icon that is used for the data type.

Order of entries
The sequential numbers do not have to be in order in the typelist file, but it is important that there are
no gaps in the number sequence. For example, if you have three data types, you must have the following
entries (in any order) in the typelist file:

impact.types.1.name=EIC_alertquery
impact.types.1.number=1
impact.types.1.class=SQL
impact.types.1.image=database.png
impact.types.2.name=EIC_PARAMETERS
impact.types.2.number=2
impact.types.2.class=SQL
impact.types.2.image=database.png
impact.types.3.name=EIC_RuleResources
impact.types.3.number=3
impact.types.3.class=SQL
impact.types.3.image=database.png

Note: In this case, the following would not be valid:

impact.types.1.name=EIC_alertquery
impact.types.1.number=1
impact.types.1.class=SQL
impact.types.1.image=database.png
impact.types.3.name=EIC_PARAMETERS
impact.types.3.number=2
impact.types.3.class=SQL
impact.types.3.image=database.png
impact.types.4.name=EIC_RuleResources
impact.types.4.number=3
impact.types.4.class=SQL
impact.types.4.image=database.png
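The "no gaps" rule shown by these two examples is easy to check mechanically. The following sketch (illustrative only, not one of the shipped utilities) scans typelist-style text for impact.types.<n>.name indexes and verifies that they form an unbroken 1..N sequence:

```python
import re

def has_no_gaps(typelist_text):
    # Collect every n from "impact.types.<n>.name=..." lines.
    indexes = {int(m.group(1))
               for m in re.finditer(r"^impact\.types\.(\d+)\.name=", typelist_text, re.M)}
    # Valid only when the indexes are exactly 1, 2, ..., N.
    return indexes == set(range(1, len(indexes) + 1))
```

Applied to the examples above, the first (indexes 1, 2, 3) passes and the second (indexes 1, 3, 4) fails.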

If the typelist file becomes corrupted (for example, it is emptied or the numbers are not sequential), you
can use the rebuildTypeList and createTypeList utilities to fix it. For details, see “rebuildTypeList” on page 22
and “createTypeList” on page 23.

rebuildTypeList
The rebuildTypeList script removes gaps and errors from an existing Impact typelist file.
The tool consists of the following files:
• rebuildTypeList.xml: This file contains all of the logic necessary to rebuild the <NCI>_typelist
file, thereby removing any gaps in the numeric sequence.
• rebuildTypeList.bat: Windows bat file that calls ant and executes rebuildTypeList.xml.
• rebuildTypeList.sh: UNIX sh file that calls ant and executes rebuildTypeList.xml.
These files are installed in the following directory:
<installdir>/install/tools
To run the rebuildTypeList script, change to the <installdir>/install/tools directory and run
the .bat or .sh script.

No additional input requirements are needed.

createTypeList
The createTypeList script generates a new Impact typelist file based on the contents of the .type files.
The tool consists of the following files:
• createTypeList.xml: This file contains all of the logic necessary to create the <NCI>_typelist
file.
• createTypeList.bat: Windows bat file that calls ant and executes createTypeList.xml.
• createTypeList.sh: UNIX sh file that calls ant and executes createTypeList.xml.
These files are installed in the following directory:
<installdir>/install/tools
To run the createTypeList script, change to the <installdir>/install/tools directory and run
the .bat or .sh script.
No additional input requirements are needed.
Note: A backup of the previous <NCI>_typelist file (if one exists) is saved in the
<installdir>/etc/ directory and renamed
<installdir>/etc/<NCI>_typelist_pre_<datetime>. For example:
<installdir>/etc/NCI_typelist_pre_20210905013045.

Data items overview


Data items are elements of the data model that represent actual units of data stored in a data source.
You create internal data items individually in the data items viewer. External data items are created
automatically when a policy references the data type to which they belong, by a lookup in the external
database.
Attention: The LDAP data type, which uses the LDAP DSA, is a read-only data type. Therefore you
cannot edit or delete LDAP data items from within the GUI.

Links overview
Links are elements of the data model that define relationships between data items and between data
types.
They can save time during the development of policies because you can define a data relationship once
and then reuse it several times when you need to find data that is related to other data in a policy. Links
are an optional part of a data model.
Netcool/Impact provides two categories of links: static links and dynamic links.
Static links
Static links define a relationship between data items in internal data types.
Dynamic links
Dynamic links define a relationship between data types.

Chapter 3. Managing data models 23


Chapter 4. Configuring data sources
Using the GUI you can view, create, edit, and delete data sources.

Data sources
Data sources are elements of the data model that represent real world sources of data in your
environment.
These sources of data include third-party SQL databases, LDAP directory servers, or other applications
such as messaging systems and network inventory applications.
Data sources contain the information that you need to connect to the external data. You create a data
source for each physical source of data that you want to use in your Netcool/Impact solution. When you
create an SQL database, LDAP, or Mediator data type, you associate it with the data source that you
created. All associated data types are listed under the data source in the Data Sources and Types task
pane.

SQL database DSA failover


Failover is the process by which an SQL database DSA automatically connects to a secondary database
server (or other data source) when the primary server becomes unavailable.
This feature ensures that Netcool/Impact can continue operations despite problems accessing one or
the other server instance. You can configure failover separately for each data source that connects to a
database using an SQL Database DSA.

SQL database DSA failover modes


Standard failover, failback, and disabled failover are supported failover modes for SQL database DSAs.
Standard failover
Standard failover is a configuration in which an SQL database DSA switches to a secondary database
server when the primary server becomes unavailable and then continues using the secondary until
Netcool/Impact is restarted.
Failback
Failback is a configuration in which an SQL database DSA switches to a secondary database server
when the primary server becomes unavailable and then tries to reconnect to the primary at intervals
to determine whether it has returned to availability.
Disabled failover
If failover is disabled for an SQL database DSA the DSA reports an error to Netcool/Impact when the
database server is unavailable and does not attempt to connect to a secondary server.
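The three modes can be pictured as a connection-selection routine. This sketch is purely conceptual and is not the DSA implementation; primary and backup stand in for connection attempts that raise ConnectionError on failure:

```python
def get_connection(state, primary, backup, mode):
    """Pick a connection under 'standard', 'failback', or 'disabled' mode."""
    if not state.get("failed_over"):
        try:
            return primary()
        except ConnectionError:
            if mode == "disabled":
                raise  # disabled failover: report the error, never use the backup
            state["failed_over"] = True
            return backup()
    if mode == "failback":
        # Failback: on each attempt, retry the primary to see if it is back.
        try:
            conn = primary()
            state["failed_over"] = False
            return conn
        except ConnectionError:
            return backup()
    # Standard failover: stay on the backup until Netcool/Impact restarts.
    return backup()
```

The difference between the first two modes is visible only after the primary recovers: standard failover keeps using the backup, while failback switches back to the primary.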

SNMP data sources


SNMP data sources represent an agent in the environment.
The data source configuration specifies the host name and port where the agent is running, and the
version of SNMP that it supports. For SNMP v3, the configuration also optionally specifies authentication
properties.
You can either create one data source for each SNMP agent that you want to access using the DSA, or you
can create a single data source and use it to access all agents. You can create and configure data sources
using the GUI. After you create a data source, you can create one or more data types that represent the
OIDs of variables managed by the corresponding agent.

© Copyright IBM Corp. 2006, 2023 25


SQL database data sources
An SQL database data source represents a relational database or another source of data that can be
accessed using an SQL database DSA.
A wide variety of commercial relational databases are supported, such as Oracle, Sybase, and Microsoft
SQL Server. In addition, freely available databases such as MySQL and PostgreSQL are also supported.
The Netcool/OMNIbus ObjectServer is also supported as an SQL data source.
The configuration properties for the data source specify connection information for the underlying source
of data. Some examples of SQL database data sources are:
• A DB2 database
• A MySQL database
• An application that provides a generic ODBC interface
• A character-delimited text file
You create SQL database data sources using the GUI. You must create one such data source for each
database that you want to access. When you create an SQL database data source, you need to specify
such properties as the host name and port where the database server is running, and the name of the
database. For the flat file DSA and other SQL database DSAs that do not connect to a database server, you
must specify additional configuration properties.
Note that SQL database data sources are associated with databases rather than database servers. For
example, an Oracle database server can host one or a dozen individual databases. Each SQL database
data source can be associated with one and only one database.

DB2 data source configuration


Use this information to create a DB2 data source.

Table 6. General settings for DB2 data source configuration

Data Source Name
   Enter a unique name to identify the data source. You can use only letters,
   numbers, and the underscore character in the data source name. If you use
   UTF-8 characters, make sure that the locale on the Impact Server where the
   data source is saved is set to the UTF-8 character encoding.

Username
   Type a user name with which you can access the database.

Password
   Type a password that allows you access to the database. As you type, the
   characters are replaced with asterisks (*).

Maximum SQL Connection
   For maximum performance, set the size of the connection pool to be greater
   than or equal to the maximum number of threads that are running in the
   event processor.
   Important: Changing the maximum connections setting in an SQL data source
   requires a restart of the Impact Server.
   For information about viewing existing thread and connection pool
   information, see the Event Processor commands in the Command-Line tools
   section of the Netcool/Impact Administration documentation, which describe
   the Select PoolConfig from Service where Name='EventProcessor'; command.
   Important: In a clustered environment, the event processor configuration is
   not replicated between servers. You must run the Select PoolConfig from
   Service where Name='EventProcessor'; command on the primary and the
   secondary servers.
   Limiting the number of concurrent connections manages performance. Type the
   maximum number of connections that are allowed to the database at any one
   time. That number must be greater than, or equal to, the number of threads
   that are running in the Event Processor. See “Configuring the Event
   processor service” on page 144.

Database Failure Policy
   Select the failover option. Available options are Fail over, Fail back, and
   Disable Backup. For more information about failover options, see “SQL
   database DSA failover modes” on page 25.

Table 7. Primary source settings for DB2 data source configuration

Host Name
   Type the host name. The default value is localhost.

Port
   Type or select a port number. The default value is 50000.

Database
   Type the name of the database to connect to.

Test Connection
   Click to test the connection to the host to ensure that you entered the
   correct information. Success or failure is reported in a message box. If
   the host is not available at the time you create the data source, you can
   test it later. To test the connection at any time, from the data source
   list, right-click the data source and select Test Connections from the
   list of options.

Table 8. Backup source settings for DB2 data source configuration

Host Name
   Type the host name. The default value is localhost.

Port
   Type or select a port number. The default value is 50000.

Database
   Type the name of the database to connect to.

Test Connection
   Click to test the connection to the host to ensure that you entered the
   correct information. Success or failure is reported in a message box. If
   the host is not available at the time you create the data source, you can
   test it later. To test the connection at any time, from the data source
   list, right-click the data source and select Test Connections from the
   list of options.

Derby data source configuration


Use this information to create and configure a Derby data source.

Table 9. General settings for Derby data source window

Data Source Name
   Enter a unique name to identify the data source. You can use only letters,
   numbers, and the underscore character in the data source name. If you use
   UTF-8 characters, make sure that the locale on the Impact Server where the
   data source is saved is set to the UTF-8 character encoding.

Username
   Type a user name with which you can access the database.

Password
   Type a password that allows you access to the database. As you type, the
   characters are concealed by asterisks (*).

Maximum SQL Connection
   For maximum performance, set the size of the connection pool to be greater
   than or equal to the maximum number of threads that are running in the
   event processor.
   Important: Changing the maximum connections setting in an SQL data source
   requires a restart of the Impact Server.
   For information about viewing existing thread and connection pool
   information, see the Event Processor commands in the Command-Line tools
   section of the Netcool/Impact Administration documentation, which describe
   the Select PoolConfig from Service where Name='EventProcessor'; command.
   Important: In a clustered environment, the event processor configuration is
   not replicated between servers. You must run the Select PoolConfig from
   Service where Name='EventProcessor'; command on the primary and the
   secondary servers.
   Limiting the number of concurrent connections manages performance. Type the
   maximum number of connections that are allowed to the database at one time.
   That number must be greater than or equal to the number of threads that are
   running in the Event Processor. See “Configuring the Event processor
   service” on page 144.

Database Failure Policy
   Select the failover option. Available options are Fail over and Disable
   Backup. The Fail back option is not supported for Derby databases. For more
   information about failover options, see “SQL database DSA failover modes”
   on page 25.

Table 10. Primary source settings for Derby data source window

Host Name
   Type the host name. The default value is localhost.

Port
   Select a port number. The default value is 1527.

Database
   Type the name of the database to connect to. The default database is
   database.

Test Connection
   Click to test the connection to the host to ensure that you entered the
   correct information. Success or failure is reported in a message box. If
   the host is not available at the time you create the data source, you can
   test it later. To test the connection at any time, from the data source
   list, right-click the data source and select Test Connections from the
   list of options.

Table 11. Backup source settings for Derby data source window

Host Name
   Type the host name. The default value is localhost.

Port
   Select a port number. The default value is 1527.

Database
   Type the name of the database to connect to. The default database is
   database.

Test Connection
   Click to test the connection to the host to ensure that you entered the
   correct information. Success or failure is reported in a message box. If
   the host is not available at the time you create the data source, you can
   test it later. To test the connection at any time, from the data source
   list, right-click the data source and select Test Connections from the
   list of options.

Creating flat file data sources


The Flat File DSA is read-only, which means that you cannot add new data items in the GUI. The flat
file data source can be accessed like an SQL data source by using standard SQL commands in Netcool/
Impact, for example, DirectSQL.

Procedure
1. To create a flat file data source you need a text file that is already populated with data.
For example, create a /home/impact/myflatfile.txt file with the following content:

Name, Age
Ted, 11
Bob, 22

2. In the Data Model tab, click the New Data Source icon and click Flat File.
The New Flat File tab opens.
3. Enter the required information:
a) Enter a unique name for your data source, for example MyFlatFileDataSource.
b) In the Directory field, provide the path to your flat file, for example /home/impact.
c) In the Delimiters field, specify the delimiter that you used in your flat file, for example a comma (,).



The header row of the flat file supports the use of the following characters: ;:/+|,\t\n\r\f
and <space>. The remaining rows of the flat file support the use of the following characters:
;:/+-|,\t and <space>.
4. Click Save to finish creating a new flat file data source.

What to do next
Use the data source that you just created to create a flat file data type. For more information about
creating flat file data types, see “Creating flat file data types” on page 86.
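As an illustration of the DirectSQL access mentioned at the start of this section, a minimal policy sketch follows. It assumes the MyFlatFileDataSource example above; the table name myflatfile (the file's base name) is an assumption to verify against your own flat file data type configuration:

```
// Illustrative policy fragment, not from the product documentation.
// Queries the flat file data source created in the procedure above.
DirectSQL("MyFlatFileDataSource", "select Name, Age from myflatfile", false);
Log("Rows returned: " + Num);
```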

Flat file data source configuration


Use this information to create a flat file data source.

Table 12. General settings for flat file data source configuration

Data Source Name
   Enter a unique name to identify the data source. You can use only letters,
   numbers, and the underscore character in the data source name. If you use
   UTF-8 characters, make sure that the locale on the Impact Server where the
   data source is saved is set to the UTF-8 character encoding.

Table 13. Source settings for flat file data source configuration

Directory
   The path to the directory that contains the flat file.

Delimiters
   Characters that separate the information tokens in the flat file. The
   characters must be enclosed in single quotation marks, for example: ',;-+/'.
   The header row of the flat file supports the use of the following
   characters: ;:/+|,\t\n\r\f and <space>. All other rows support the use of
   the following characters: ;:/+-|,\t and <space>.

GenericSQL data sources


Before creating a GenericSQL data source, you must install the appropriate JDBC driver for your
database.
For details, see the Administration Guide.

GenericSQL data source configuration


Use this information to configure a GenericSQL data source.

Table 14. GenericSQL Data Source window: General Settings

Data Source Name
   Enter a unique name to identify the data source. You can use only letters,
   numbers, and the underscore character in the data source name. If you use
   UTF-8 characters, make sure that the locale on the Impact Server where the
   data source is saved is set to the UTF-8 character encoding.

JDBC Driver Class
   Type the name of the JDBC driver for the database.

Username
   Type a user name with which you can access the database.

Password
   Type a password that allows you access to the database. As you type, the
   characters are concealed by asterisks (*).

Maximum SQL Connection
   For maximum performance, set the size of the connection pool to be greater
   than or equal to the maximum number of threads that are running in the
   event processor.
   Important: Changing the maximum connections setting in an SQL data source
   requires a restart of the Impact Server.
   For information about viewing existing thread and connection pool
   information, see the Event Processor commands in the Command-Line tools
   section of the Netcool/Impact Administration documentation, which describe
   the Select PoolConfig from Service where Name='EventProcessor'; command.
   Important: In a clustered environment, the event processor configuration is
   not replicated between servers. You must run the Select PoolConfig from
   Service where Name='EventProcessor'; command on the primary and the
   secondary servers.
   Limiting the number of concurrent connections manages performance. Type the
   maximum number of connections that are allowed to the database at one time.
   That number must be greater than or equal to the number of threads that are
   running in the Event Processor. See “Configuring the Event processor
   service” on page 144.

Database Failure Policy
   Select the failover option. Available options are Fail over, Fail back, and
   Disable Backup. For more information about failover options, see “SQL
   database DSA failover modes” on page 25.

Table 15. GenericSQL Data Source window: Primary Source

Host Name
   Type the host name. The default value is localhost.

Port
   Select a port number. The default value is 5432.

URL
   The URL that is required to connect to the database.

Test Connection
   Click to test the connection to the host to ensure that you entered the
   correct information. Success or failure is reported in a message box. If
   the host is not available at the time you create the data source, you can
   test it later. To test the connection at any time, from the data source
   list, right-click the data source and select Test Connections from the
   list of options.
   Important: If you see an error message stating that the data source cannot
   establish a connection to a database because a JDBC driver was not found,
   it means that a required JDBC driver is missing from the shared library
   directory. To fix this, place a licensed JDBC driver in the shared library
   directory and restart the server. For more information, see the "SQL
   database DSAs" chapter in the Netcool/Impact DSA Reference Guide.

Table 16. GenericSQL Data Source window: Backup Source

Host Name
   Type the host name. The default value is localhost.

Port
   Select a port number. The default value is 5432.

URL
   The URL that is required to connect to the database.

Test Connection
   Click to test the connection to the host to ensure that you entered the
   correct information. Success or failure is reported in a message box. If
   the host is not available at the time you create the data source, you can
   test it later. To test the connection at any time, from the data source
   list, right-click the data source and select Test Connections from the
   list of options.

HSQLDB data source configuration


Use this information to create an HSQLDB data source.

Obtaining the HSQLDB JDBC driver


Before configuring an HSQLDB data source, you need to obtain the JDBC driver for HSQLDB from
http://hsqldb.org and copy it to $IMPACT_HOME/dsalib.



Configuring the HSQLDB data source
Table 17. General settings in the HSQLDB data source window

Data Source Name
   Enter a unique name to identify the data source. You can use only letters,
   numbers, and the underscore character in the data source name. If you use
   UTF-8 characters, make sure that the locale on the Impact Server where the
   data source is saved is set to the UTF-8 character encoding.

Username
   Type a user name with which you can access the database.

Password
   Type a password that allows you access to the database. As you type, the
   characters are replaced with asterisks (*).

Maximum SQL Connection
   For maximum performance, set the size of the connection pool to be greater
   than or equal to the maximum number of threads that are running in the
   event processor.
   Important: Changing the maximum connections setting in an SQL data source
   requires a restart of the Impact Server.
   For information about viewing existing thread and connection pool
   information, see the Event Processor commands in the Command-Line tools
   section of the Netcool/Impact Administration documentation, which describe
   the Select PoolConfig from Service where Name='EventProcessor'; command.
   Important: In a clustered environment, the event processor configuration is
   not replicated between servers. You must run the Select PoolConfig from
   Service where Name='EventProcessor'; command on the primary and the
   secondary servers.
   Limiting the number of concurrent connections manages performance. Type the
   maximum number of connections that are allowed to the database at one time.
   That number must be greater than or equal to the number of threads that are
   running in the Event Processor. See “Configuring the Event processor
   service” on page 144.

Database Failure Policy
   Select the failover option. Available options are Fail over, Fail back, and
   Disable Backup. For more information about failover options, see “SQL
   database DSA failover modes” on page 25.



Table 18. Primary source settings in the HSQLDB data source window

Host Name
   Type the host name. The default value is localhost.

Port
   Select a port number. The default value is 9001.

Database
   Type the name of the database to connect to. The default value is impact.

Test Connection
   Click to test the connection to the host to ensure that you entered the
   correct information. Success or failure is reported in a message box. If
   the host is not available at the time you create the data source, you can
   test it later. To test the connection at any time, from the data source
   list, right-click the data source and select Test Connections from the
   list of options.

Table 19. Backup source settings in the HSQLDB data source window
Window element Description

Host Name Type the host name. The default value is
localhost.

Port Select a port number. The default value is 9001.

Database Type the name of the database to connect to. The
default value is impact.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.
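The Test Connection button described above checks reachability from within the GUI. The same basic check can be scripted outside Netcool/Impact, for example before configuring a data source. The sketch below is a generic TCP reachability probe, not part of the product; the host and port shown are the HSQLDB defaults from the tables above.

```python
import socket

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# HSQLDB defaults from the tables above (placeholders for your environment).
print(is_reachable("localhost", 9001))
```

Note that this only confirms that the port is open; the Test Connection button in the GUI also validates the credentials and the database name.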

Informix data source configuration


Use this information to create an Informix data source.

Table 20. General settings for the Informix data source window
Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

Chapter 4. Configuring data sources 35


Table 20. General settings for the Informix data source window (continued)
Window element Description

Username Type a user name with which you can access the
database.

Password Type a password that allows you access to the
database. As you type, the characters are replaced
with asterisks (*).

Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section of the
Netcool/Impact Administration Guide; for example,
run the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on both the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. That number must
be greater than or equal to the number of threads
that are running in the Event Processor. See
“Configuring the Event processor service” on page
144.

Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.

Table 21. Primary source settings for the Informix data source window
Window element Description

Host Name Type the host name. Default value is localhost.

Port Select a port number. The default number is 1526.

Server Type the name of the server where the database is
located.

Table 21. Primary source settings for the Informix data source window (continued)
Window element Description

Database Type the name of the database to connect to.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

Table 22. Backup source settings for the Informix data source window
Window element Description

Host Name Type the host name. The default value is
localhost.

Port Select a port number. The default value is 1526.

Server Type the name of the server where the database is
located.

Database Type the name of the database to connect to.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

MS-SQL Server data source configuration


Use this information to create an MS-SQL Server data source.
Note:
Integrated security is supported by JDBC on the Windows operating system only, by adding
the integratedSecurity=true option (as shown below). If this option is used, the driver looks for the
sqljdbc_auth.dll file in the library path.
To use Windows authentication, add ;integratedSecurity=true to the database name using the
GUI.
After making this change, the relevant .ds file in the impact\etc directory will look like this:
<Datasource>.MS-SQLServer.PRIMARYDATABASE=database;integratedSecurity\=true

Table 23. General settings for MS-SQL Server data source window
Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

User name Type a user name with which you can access the
database.

Password Type a password that allows you access to the
database. As you type, the characters are replaced
with asterisks (*).

Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section of the
Netcool/Impact Administration Guide; for example,
run the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on both the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. That number must
be greater than or equal to the number of threads
that are running in the Event Processor. See
“Configuring the Event processor service” on page
144.

Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.

Table 24. Primary source settings for MS-SQL Server data source window
Window element Description

Host Name Type the host name. The default value is
localhost.

Port Select a port number. The default number is 1433.

Database Type the name of the database to connect to.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.
Important: If you see an error message stating
that the data source cannot establish a connection
to a database because a JDBC driver was not
found, it means that a required JDBC driver is
missing in the shared library directory. To fix
this, place a licensed JDBC driver in the shared
library directory and restart the server. For more
information, see the "SQL database DSAs" chapter
in the Netcool/Impact DSA Reference Guide.
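As a sketch of the fix that the note above describes, assuming that the shared library directory is $IMPACT_HOME/dsalib (as it is for the MySQL driver later in this chapter) and using an illustrative jar name (your licensed driver's file name will differ):

```
cp mssql-jdbc.jar $IMPACT_HOME/dsalib/
$IMPACT_HOME/bin/stopImpactServer.sh
$IMPACT_HOME/bin/startImpactServer.sh
```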

Table 25. Backup source settings for MS-SQL Server data source window
Window element Description

Host Name Type the host name. The default value is
localhost.

Port Select a port number. The default value is 1433.

Database Type the name of the database to connect to.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

MySQL data source configuration
Use this information to create a MySQL data source.

Table 26. General settings in the MySQL data source window


Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

JDBC Driver Class Select the MySQL JDBC driver class. Refer to your
database server documentation for the appropriate
class name.

Username Type a valid user name with which you can access
the database.

Password Type a valid password with which you can access
the database. As you type, the characters are
replaced with asterisks (*).

Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section of the
Netcool/Impact Administration Guide; for example,
run the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on both the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. For best performance,
this number must be greater than or equal to the
maximum number of event processor threads. See
“Configuring the Event processor service” on page
144.

Table 26. General settings in the MySQL data source window (continued)
Window element Description

Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.

Table 27. Primary source settings in the MySQL data source window
Window element Description

Host Name Type the host name or IP address of the system
where the data source is located. The default value
is localhost.

Port Select the port number that is used by the data
source. The default number is 3306.

Database Type the name of the database to connect to.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.
Important: If you see an error message stating
that the data source cannot establish a connection
to a database because a JDBC driver was not
found, it means that a required JDBC driver is
missing in the shared library directory. To fix
this, place a licensed JDBC driver in the shared
library directory and restart the server. For more
information, see the "SQL database DSAs" chapter
in the Netcool/Impact DSA Reference Guide.

Table 28. Backup source settings in the MySQL data source window
Window element Description

Host Name Type the host name or IP address of the system
where the backup data source is located. Optional.
The default value is localhost.

Port Select a port number that is used by the backup
data source. Optional. The default value is 3306.

Database Type the name of the database to connect to.

Table 28. Backup source settings in the MySQL data source window (continued)
Window element Description

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

Note: From Fix Pack 23 onwards, a data source of type MySQL with the MySQL 8 JDBC Driver in the
$IMPACT_HOME/dsalib directory can be created. To do this, use the following steps:
1. Create a SQL data source of type MySQL.
2. Go to $IMPACT_HOME/etc and manually edit the data source file for the MySQL data source that you
created.
Example in the file NCI_XXX.ds
Change the JDBCDRIVER property from:

XXX.MySQL.JDBCDRIVER=org.gjt.mm.mysql.Driver

to:

XXX.MySQL.JDBCDRIVER=com.mysql.jdbc.Driver

3. Restart the Impact server.

$IMPACT_HOME/bin/stopImpactServer.sh
$IMPACT_HOME/bin/startImpactServer.sh

4. Click Test Connection and confirm that a connection can be made.


Example before and after of the MySQL data source file:

#This file was written by server.


#Tue Aug 31 03:57:15 PDT 2021
mySQLDataSource.MySQL.PRIMARYPORT=3306
mySQLDataSource.MySQL.JDBCDRIVER=org.gjt.mm.mysql.Driver
mySQLDataSource.MySQL.BACKUPPORT=3306
mySQLDataSource.MySQL.PRIMARYDATABASE=test
mySQLDataSource.MySQL.FAILOVERPOLICY=FAILOVER
mySQLDataSource.MySQL.DBUSERNAME=tester
mySQLDataSource.MySQL.DISABLEFAILOVER=false
mySQLDataSource.MySQL.DBPASSWORD=XXX
mySQLDataSource.MySQL.PRIMARYHOST=XXX1.xxx.ibm.com
mySQLDataSource.MySQL.BACKUPDATABASE=test
mySQLDataSource.MySQL.MAXSQLCONNECTION=5
mySQLDataSource.MySQL.BACKUPHOST=XXX1.xxx.ibm.com

#This file was written by server.


#Tue Aug 31 03:57:15 PDT 2021
mySQLDataSource.MySQL.PRIMARYPORT=3306
mySQLDataSource.MySQL.JDBCDRIVER=com.mysql.jdbc.Driver
mySQLDataSource.MySQL.BACKUPPORT=3306
mySQLDataSource.MySQL.PRIMARYDATABASE=test
mySQLDataSource.MySQL.FAILOVERPOLICY=FAILOVER
mySQLDataSource.MySQL.DBUSERNAME=tester
mySQLDataSource.MySQL.DISABLEFAILOVER=false
mySQLDataSource.MySQL.DBPASSWORD=XXX
mySQLDataSource.MySQL.PRIMARYHOST=XXX1.xxx.ibm.com
mySQLDataSource.MySQL.BACKUPDATABASE=test
mySQLDataSource.MySQL.MAXSQLCONNECTION=5
mySQLDataSource.MySQL.BACKUPHOST=XXX1.xxx.ibm.com

Note:
The required JDBC driver that Impact uses to connect to MySQL server is known as the Connector/J.
This is the jar file that needs to be loaded into $IMPACT_HOME/dsalib.
If the MySQL server is configured to use SSL, Impact can make secure connections by setting
additional connection properties. These properties can be set in the Database field in the MySQL
Data Source Editor.
For versions 8.0.12 and earlier of Connector/J, add the properties
?allowPublicKeyRetrieval=true&requireSSL=true
For example:
Database: nameOfDatabase?allowPublicKeyRetrieval=true&requireSSL=true
For versions 8.0.13 to 8.0.18 of Connector/J, add the properties
?allowPublicKeyRetrieval=true&sslMode=REQUIRED
For example:
Database: nameOfDatabase?allowPublicKeyRetrieval=true&sslMode=REQUIRED
For later versions of Connector/J, refer to the MySQL documentation regarding the required
connection properties.
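Combined with the .ds file layout shown earlier, the connection properties end up appended to the database property. A hypothetical fragment, assuming a database named inventory and Connector/J 8.0.13 to 8.0.18; note that = characters inside the value are escaped in .ds files, as in the MS-SQL example earlier in this chapter:

```
mySQLDataSource.MySQL.PRIMARYDATABASE=inventory?allowPublicKeyRetrieval\=true&sslMode\=REQUIRED
```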

ObjectServer data source configuration


Use this information to create an ObjectServer data source.

Table 29. General settings for ObjectServer data source configuration


Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

User name Type a user name with which you can access the
database.

Password Type a password with which you can access the
database.

Table 29. General settings for ObjectServer data source configuration (continued)
Window element Description

Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section of the
Netcool/Impact Administration Guide; for example,
run the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on both the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. That number must
be greater than or equal to the number of threads
that are running in the Event Processor. See
“Configuring the Event processor service” on page
144.

Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.

Table 30. Primary source settings for ObjectServer data source configuration
Window element Description

Host Name Type the host name. The default value is
localhost.

Port Select a port number. The default number is 4100.

SSL Mode: Enable Select if this data source connects to the
ObjectServer through SSL.

Table 30. Primary source settings for ObjectServer data source configuration (continued)
Window element Description

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

Table 31. Backup source settings for ObjectServer data source configuration
Window element Description

Host Name Type the host name. The default value is
localhost.

Port Select a port number. The default number is 4100.

SSL Mode: Enable Select if this data source connects to the
ObjectServer through SSL.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

ODBC data source configuration


Use this information to create an ODBC data source.

Table 32. General settings in the ODBC data source window


Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

User name Type a user name that you use to access the
database.

Password Type a password that you use to access the
database. As you type, the characters are replaced
with asterisks (*).

Table 32. General settings in the ODBC data source window (continued)
Window element Description

Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section of the
Netcool/Impact Administration Guide; for example,
run the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on both the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. That number must
be greater than or equal to the number of threads
that are running in the Event Processor. See
“Configuring the Event processor service” on page
144.

Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.

Table 33. Primary source settings in the ODBC data source window
Window element Description

ODBC Name Type the ODBC name.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

Table 34. Backup source settings in the ODBC data source window
Window element Description

ODBC Name When you select the Database Failure Policy as
either Fail over or Fail back, you must specify a
Backup Source. If you select the Database Failure
Policy as Disable Backup, the Backup Source
field is not required.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

Oracle data source configuration


Use this information to create an Oracle data source.

Table 35. General settings for Oracle data source window


Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

User name Type a user name with which you can access
the database.

Password Type a password with which you can access
the database. As you type, the characters are
replaced with asterisks (*).

Table 35. General settings for Oracle data source window (continued)
Window element Description

Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section of the
Netcool/Impact Administration Guide; for example,
run the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on both the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database
at one time. That number must be greater than or
equal to the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.

Connection Options Type of connection to an Oracle data source. Select
one of the following options:
• General Settings - default settings.
• Customized URL - integration with an Oracle RAC
cluster is supported. For more information, see
“Connecting to Oracle RAC cluster” on page 52.
• LDAP Data Source - connect to data source
bound in a Naming Service using LDAP. For more
information, see “Connecting to an Oracle data
source using LDAP” on page 50.
• LDAP URL - use a JDBC LDAP URL to connect
to an LDAP data source. For more information,
see “Connecting to an Oracle data source using a
JDBC LDAP URL” on page 51.

Table 35. General settings for Oracle data source window (continued)
Window element Description

Connection Method Method that you want to use to connect to
the Oracle database. Select one of the following
options:
• SID - select this option if you want to connect to
the Oracle database using a service identifier.
• Service Name - select this option if you want to
connect to the Oracle database using a service
name.

Context Factory The class name to initialize the
context. It is dependent on which
Naming Service is used. For example,
com.sun.jndi.ldap.LdapCtxFactory.
This option is displayed only if you choose LDAP
Data Source in the Connection Options.

Provider URL The URL used to connect to the Naming service.
For example, ldap://localhost:389/dc=abc.
This option is displayed only if you choose LDAP
Data Source in the Connection Options.

Binding Name The name to which the Oracle Data Source object
is bound. For information about the binding name,
refer to the documentation of the Naming Service provider. For
example, cn=myDataSource.
This option is displayed only if you choose LDAP
Data Source in the Connection Options.

Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.

Table 36. Primary source settings for Oracle data source window
Window element Description

Host Name Type a primary host name. The default value is
localhost.

Port Select a primary port number. The default value is
set to a common port number: 1521.

SID / Service Name Type a primary Oracle service identifier or service
name. The default value is ORCL. For more
information, see your Oracle documentation.

Table 36. Primary source settings for Oracle data source window (continued)
Window element Description

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.
Important: If you see an error message stating
that the data source cannot establish a connection
to a database because a JDBC driver was not
found, it means that a required JDBC driver is
missing in the shared library directory. To fix
this, place a licensed JDBC driver in the shared
library directory and restart the server. For more
information, see the "SQL database DSAs" chapter
in the Netcool/Impact DSA Reference Guide.

Table 37. Backup source settings for Oracle data source window
Window element Description

Host Name Type a backup host name. The default value is
localhost. Backup host name is optional.

Port Select a secondary port number. The default value
is set to a common port number: 1521. Backup
port number is optional.

SID / Service Name Type a backup SID or service name. The default
value is ORCL. For more information, see your
Oracle documentation. Backup SID is optional.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.
This button is disabled when the backup source
information is left blank.

Connecting to an Oracle data source using LDAP


Use this information to connect to an Oracle data source bound in a Naming Service using LDAP.
Select the LDAP Data Source option in the Connection Options list. The user name and password that you
use to connect to an Oracle data source bound in a Naming Service using LDAP are the credentials that
are required to access the Naming Service (not the database login credentials).

The Oracle data source should already have the information required to access the database (such as user
name, password, SID, host, and port) configured in it. This information is used behind the scenes to
connect to the data source. If the connection to the database is successful, the Connection OK message is
displayed.
For more information about Oracle data sources, refer to the Oracle JDBC Developer's Guide and
Reference.
For example, you use OpenLDAP as the Naming Service, and an Oracle data source is already bound to a
logical name (cn=myDataSource in the Binding Name field). When you click Test Connection, the first
connection is made to the naming service using LDAP and when the connection is established, Netcool/
Impact looks for an Oracle data source for the logical name cn=myDataSource.

Connecting to an Oracle data source using a JDBC LDAP URL


Use this information to create an Oracle data source that uses a JDBC LDAP URL to connect to the
database.

Procedure
1. Open the Data Model tab and click the New Data Source icon in the toolbar and select Oracle.
2. In the Data Source Name field, enter a unique name to identify the data source.
3. In the Username field, enter a user name that you can use to access the database.
4. In the Password field, enter a password that you can use to access the database.
5. In the Maximum SQL Connections list, choose the number of connections in the connection pool. For
maximum performance, set the size of the connection pool to be greater than or equal to the maximum
number of threads that are running in the event processor.
Important: Changing the maximum connections setting in an SQL data source requires a restart of the
Impact Server.
For information about viewing existing thread and connection pool information, see the Command-Line
tools, Event Processor commands section of the Netcool/Impact Administration Guide; for example, run
the Select PoolConfig from Service where Name='EventProcessor'; command.
Important: In a clustered environment, the event processor configuration is not replicated
between servers. You must run the Select PoolConfig from Service where
Name='EventProcessor'; command on both the primary and the secondary servers.
Limiting the number of concurrent connections manages performance. Type the maximum number of
connections that are allowed to the database at one time. That number must be greater than or equal
to the number of threads that are running in the Event Processor. See “Configuring the Event processor
service” on page 144.
6. In the Connection Options list, choose LDAP URL.
7. In the Oracle LDAP URL field, enter the Oracle LDAP URL in the following format:

jdbc:oracle:thin:@ldap://<IP_address>/ADTEST,cn=OracleContext,dc=oracle,
dc=support,dc=com

8. After you enter the URL, you are prompted for the LDAP user name and password.
For example, enter the following:
• In the LDAP Username field, enter
cn=Administrator,cn=Users,dc=oracle,dc=support,dc=com.
• In the LDAP Password field, enter netcool.

Chapter 4. Configuring data sources 51


Results
After the data source has been created, the Oracle JDBC connection is made through the LDAP URL
provided in the user interface.

Connecting to an Oracle RAC cluster


Netcool/Impact supports integration with an Oracle RAC cluster.
If you choose to connect to an Oracle RAC cluster, in the URL field, enter the URL, preceded by
jdbc:oracle:thin:@. For example:

jdbc:oracle:thin:@
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = port1))
(ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = port2))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = service-name)
(FAILOVER_MODE =(TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
)
)

PostgreSQL data source configuration


Use this information to create a PostgreSQL data source.

Table 38. General settings for PostgreSQL data source window


Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

User name Type a user name that you use to access the
database.

Password Type a password that you use to access the
database. As you type, the characters are replaced
with asterisks (*).


Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Netcool/Impact
Administration Guide, in the Command-line tools
section, under Event processor commands. Use the
Select PoolConfig from Service where
Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database
at one time. That number must be greater than or
equal to the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.

Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.

Table 39. Primary source settings for PostgreSQL data source window
Window element Description

Host Name Type the host name. The default value is localhost.

Port Select a port number. The default number is 5432.

Database Type the name of the database to connect to.


Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.
Important: If you see an error message stating
that the data source cannot establish a connection
to a database because a JDBC driver was not
found, it means that a required JDBC driver is
missing in the shared library directory. To fix
this, place a licensed JDBC driver in the shared
library directory and restart the server. For more
information see, the "SQL database DSAs" chapter
in the Netcool/Impact DSA Reference Guide.

Table 40. Backup source settings for PostgreSQL data source window
Window element Description

Host Name Type the host name. The default value is localhost.

Port Select a port number. The default value is 5432.

Database Type the name of the database to connect to.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.
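For orientation, the Host Name, Port, and Database values above correspond to the parts of the standard PostgreSQL JDBC URL form. The following is a sketch of that form only; the exact URL that Impact builds internally is not shown in this guide:

```
jdbc:postgresql://<host>:<port>/<database>
```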

Sybase data source configuration


Use this information to create a Sybase data source.

Table 41. General settings in Sybase data source window


Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.


User name Type a user name with which you can access the
database.

Password Type a unique password. As you type, the
characters are replaced with asterisks (*).

Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Netcool/Impact
Administration Guide, in the Command-line tools
section, under Event processor commands. Use the
Select PoolConfig from Service where
Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database
at one time. That number must be greater than or
equal to the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.

Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.

Table 42. Primary source settings in Sybase data source window


Window element Description

Host Name Type the host name. The default value is localhost.

Port Select a port number. The default number is 5000.


Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.
Important: If you see an error message stating
that the data source cannot establish a connection
to a database because a JDBC driver was not
found, it means that a required JDBC driver is
missing in the shared library directory. To fix
this, place a licensed JDBC driver in the shared
library directory and restart the server. For more
information see, the "SQL database DSAs" chapter
in the Netcool/Impact DSA Reference Guide.

Table 43. Backup source settings in Sybase data source window


Window element Description

Host Name Type the host name. The default value is localhost.

Port Select a port number. The default value is 5000.

Test Connection Click to test the connection to the host to ensure
that you entered the correct information. Success
or failure is reported in a message box. If the host
is not available at the time you create the data
source, you can test it later. To test the connection
at any time, from the data source list, right-click
the data source and select Test Connections from
the list of options.

JDBC ResultSetType and ResultSetConcurrency configuration


Use this information to set JDBC statement options.
By default, Impact uses the JDBC statement options TYPE_SCROLL_SENSITIVE and
CONCUR_UPDATABLE for all data sources. If this fails, Impact then retries with TYPE_FORWARD_ONLY.
You can override these settings for a specific data source by using the resultsettype and
resultsetconcurrency properties in the etc/servername_datasource.props file:

impact.[dataSourceType].resultsettype=[integer]
impact.[dataSourceType].resultsetconcurrency=[integer]

Note: Both these properties take integer values which correspond to the following statement options:

TYPE_FORWARD_ONLY=1003
TYPE_SCROLL_INSENSITIVE=1004
TYPE_SCROLL_SENSITIVE=1005
CONCUR_READ_ONLY=1007
CONCUR_UPDATABLE=1008

[dataSourceType] identifies the data source for which you want to set the JDBC statement options.
For example, the following sample sets the Oracle JDBC statement options to TYPE_FORWARD_ONLY and
CONCUR_READ_ONLY:

impact.oracle.resultsettype=1003
impact.oracle.resultsetconcurrency=1007

After changing the values in the etc/servername_datasource.props file, restart the Impact server
for the changes to take effect.
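These integer values are not arbitrary: they are the constants defined on the standard java.sql.ResultSet interface. The following minimal Java sketch (an illustration only, not part of Impact) prints each constant so that you can confirm the mapping:

```java
// The integer values accepted by the resultsettype and
// resultsetconcurrency properties are the constants defined
// on the standard java.sql.ResultSet interface.
import java.sql.ResultSet;

public class ResultSetOptions {
    public static void main(String[] args) {
        System.out.println(ResultSet.TYPE_FORWARD_ONLY);       // 1003
        System.out.println(ResultSet.TYPE_SCROLL_INSENSITIVE); // 1004
        System.out.println(ResultSet.TYPE_SCROLL_SENSITIVE);   // 1005
        System.out.println(ResultSet.CONCUR_READ_ONLY);        // 1007
        System.out.println(ResultSet.CONCUR_UPDATABLE);        // 1008
    }
}
```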

UI data provider data sources


A UI data provider data source represents a relational database or another source of data that can be
accessed by using a UI data provider DSA.
You create UI data provider data sources in the GUI. You must create one such data source for each UI
data provider that you want to access.

Creating a UI data provider data source


Use this information to create a UI data provider data source.

Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select UI Data Provider. The
tab for the data source opens.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.
5. In the Host Name field, add the location where the UI data provider is deployed. The location is a
fully qualified domain name or IP address.
6. In the Port field, add the port number of the UI data provider.
7. Use SSL: To enable Netcool/Impact to connect over SSL to a data provider, you must export a
certificate from the data provider and import it into the Impact Servers and each GUI Server. If the
data provider is an IBM Dashboard Application Services Hub server, complete these steps to export
and import the certificate. For other data provider sources, after you obtain the certificate, use steps
(f and g) to import the certificate.
a) In the IBM Dashboard Application Services Hub server, go to Settings, WebSphere
Administrative Console, Launch WebSphere administrative console.
b) Within the administrative console, select Security, SSL certificate and key management, Key
stores and certificates, NodeDefaultKeyStore, Personal certificates.
c) Check the default certificate check box and click Extract.
d) Enter dash, for the certificate alias to extract.
e) For certificate file name, enter a file name on the system to which the certificate is
written, such as C:\TEMP\mycertificate.cert.
f) Copy the certificate file to the Impact Server host and import it into both the Impact Servers
and GUI Servers. For more information about the import commands, refer to the Netcool/Impact
Administration Guide, within the security chapter go to the 'Enabling SSL connections with
external servers' topic.
g) Restart the Impact Servers and each GUI Server.

For more information, see the Netcool/Impact Administration Guide under the section Secure
Communication.
If you want to connect to the local UI data provider by using the UI data provider data source with
an SSL enabled connection, the signed certificate must be exchanged between the GUI Server and
Impact Server. For more information see Configuring SSL with scripts in the Security section of the
documentation.
8. Base Url: Type the directory location of the REST application, such as /ibm/tivoli/rest.
9. User Name: Type a user name with which you can access the UI data provider.
10. Password: Type a password with which you can access the UI data provider.
11. Click Test Connection to test the connection to the UI data provider to ensure that you entered the
correct information.
Success or failure is reported in a message box. If the UI data provider is not available when you
create the data source, you can test it later.
To test the connection to the UI data provider at any time, from the data source list, right-click the
data source and select Test Connection from the list of options.
12. Click Discover Providers to populate the Select a Provider list.
13. From the Select a Provider list, select the provider that you want to return the information from.
14. From the Select a Source list, select the data content set that you want to return information from.
The Select Source list is populated with the available UI data provider data content sets on the
specified computer.
15. Click Save to create the data source.

Providing support for multi-tenancy for Tree Table and Topology widgets
Impact supports multi-tenancy. This is where two identical widgets are displayed on the same page, or
where the same page is duplicated on two tabs and widgets are executing the same policy.
Using the out of the box configuration, if two identical Tree Table or Topology widgets are placed on
the same page, or if the same page is duplicated on two tabs, the data they receive is vulnerable
to corruption. This happens if the Impact UI data provider cannot distinguish between the widgets. Because
the widgets execute the same policy, the data set that the UI data provider sends back to the widgets could
contain duplicated rows, mixed-up data, or otherwise be incorrect.
Multi-tenancy support, in such a scenario, consists of enabling widgets to receive a unique dataset, per
widget, while executing the same policy. To provide this functionality, a new input parameter must be
created on the policy used by the widget. The parameter should be called "owner" (see new property
below if this is not feasible). Declare owner as a policy input parameter and use it in the policy
according to the business logic, to generate widget-specific data sets with the same policy. The new
input parameter must be set by the widget in the Configure Optional Dataset Parameters section. For
each widget which uses the policy, the widget must set the value uniquely.
Example:
owner = 'MyWidgetOne' // for the first widget
owner = 'MyWidgetTwo' // for the second widget
Note: The value can be any string, except "default".
Depending on the scenario the new input parameter may determine different output for the policy, or
it may have no effect on the policy output. See the following policy excerpt for an example where two
different data sets based on the owner parameter value are delivered:

if (owner == "MyWidgetOne") {
var sysObj = NewObject();
sysObj.UITreeNodeType = "Tree";
sysObj.system = "Z11";
sysObj.node = "nodeOne";
sysObj.status = "Critical";
sysObj.UITreeNodeId = 0;
sysObj.UITreeNodeParent = 3;
systemTree.push(sysObj);
....
}
if (owner == "MyWidgetTwo") {
var sysObj = NewObject();
sysObj.UITreeNodeType = "Tree";
sysObj.system = "A00";
sysObj.node = "nodeTwo";
sysObj.status = "Normal";
sysObj.UITreeNodeId = 0;
sysObj.UITreeNodeParent = 3;
systemTree.push(sysObj);
....
}

Important: When enabling multi-tenancy for a policy, the output parameter from the policy cannot be a
scalar type value. Multi-tenancy is only supported for the DirectSQL / UI Data Provider Datatype, Impact
Object, Array of Impact Object and Datatype formats. If an unsupported format is selected, the policy will
return no data.

uidataprovider.multitenant.parameter.name
By default, this property is set to owner. In most cases, this value will not need to be changed. However, if
this value conflicts with another parameter (for example, the widget data source policy has another input
parameter with the same name) you must declare a new value in the server.props file. For example,
suppose you decide that the parameter that conveys the widget identity to the Impact UI data provider
should be named tenant.
Declare it in the $IMPACT_HOME/etc/server.props file with the following line:
uidataprovider.multitenant.parameter.name=tenant
Declare tenant as a policy input parameter and use it in the policy accordingly, to generate widget-
specific data sets with the same policy. See the following policy excerpt for an example where two
different data sets based on the tenant parameter value are delivered:

if (tenant == "TenantOne") {
var sysObj = NewObject();
sysObj.UITreeNodeType = "Tree";
sysObj.system = "Z11";
sysObj.node = "nodeOne";
sysObj.status = "Critical";
sysObj.UITreeNodeId = 0;
sysObj.UITreeNodeParent = 3;
systemTree.push(sysObj);
....
}
if (tenant == "TenantTwo") {
var sysObj = NewObject();
sysObj.UITreeNodeType = "Tree";
sysObj.system = "A00";
sysObj.node = "nodeTwo";
sysObj.status = "Normal";
sysObj.UITreeNodeId = 0;
sysObj.UITreeNodeParent = 3;
systemTree.push(sysObj);
....
}

For each widget in the multi-tenancy use case (duplicated widgets on the same page or on different
tabs) assign a unique value for the new input parameter in the Configure Optional Dataset Parameters
section.
For example:
tenant = 'TenantOne' // for the first widget
tenant = 'TenantTwo' // for the second widget
Important: default is a reserved word and cannot be used as a value for the parameter. For example,
tenant = 'default' is not allowed.

RESTful DSA data source
To use the REST DSA, you must create a RESTful DSA data source.

Creating a RESTful DSA data source


Use this information to create a RESTful DSA data source.

Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select RESTful API. The tab for
the data source opens.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.
5. In the Host Name field, add the hostname of the REST service that you want to connect to. The
hostname is a fully qualified domain name or IP address.
6. In the Resource Path field, add the path information to the resource if necessary.
7. In the Port field, add the port number (the default is 80).
8. Use HTTPS/SSL to enable Netcool/Impact to connect over SSL to a REST data source. You must
export a certificate from the data source and import it into the Impact Servers and each GUI Server.
a) Get the certificate through the browser.
Refer to your browser manual, searching for exporting a certificate.
b) Copy the certificate file to the Impact Server host and import it into both the Impact Servers and
GUI Servers.
For more information about import commands, see the Netcool/Impact Administration Guide
under the section Enabling SSL connections with external servers in the Security chapter.
c) Restart the Impact Servers and each GUI Server.

Alternatively, you can select the Disable SSL Verification checkbox to allow the RESTful
DSA to connect over SSL without having to import the certificate. If enabled, the DSA will no longer
attempt to verify the SSL connection.
9. Select the Reuse Connection checkbox if required.
Connection caching is done at a policy level. This means the same HTTP connection can be reused
within a policy when it is running.
10. Select the Cache Response checkbox if required.
Note: Response caching is based on entity tags. It is one of several mechanisms that the
HTTP protocol provides for cache validation, which allows a client to make conditional requests.
By default, Impact adds a Cache-Control: Max-Age=0 header to any newly created REST data
source in the HTTP header list. This header causes any caches that are used during the request
to revalidate, ensuring that the entity tag is checked. Modify this header to the Cache-Control
setting you want to use.
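As a generic illustration of entity-tag validation (standard HTTP behavior, not specific to Impact; the host and tag values are placeholders), a client that holds a cached copy revalidates it with a conditional request, and the server answers 304 Not Modified if the entity tag still matches:

```
GET /api/alerts/v1 HTTP/1.1
Host: example.com
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
ETag: "abc123"
```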
11. Authentication.
If using basic authentication, you must provide the username and password:
a) In the User Name field type a user name with which you can access the REST API.
b) In the Password field type a password with which you can access the REST API.

If using OAuth authentication, you must provide the OAUTH Data Source. (To configure an OAUTH
Data Source, see “Creating an OAuth data source” on page 61.):
a) Select the Use OAuth check-box to enable OAUTH authentication.
b) Select the OAuth data source that you want to use from the drop-down menu.
12. Specify an HTTP header if you are making requests to a data source where the same HTTP headers are
being used consistently.
For example, if a new header is added to the grid, this is the same as adding a request header. If the
grid has the following header details:

Header Name Header Value


Content-Type application/json
Max-Forwards 10

The following will be added to the request:

GET /api/alerts/v1 HTTP/1.1
Host: ibmnotifybm.mybluemix.net
Authorization: Basic ******************
Content-Type: application/json;charset=UTF-8
Max-Forwards: 10

13. Specify HTTP parameters if you are making requests to the data source where the same HTTP
parameters are being used consistently.
The REST API data source can persist these parameters and they are used on every call to the data
source unless overridden by the policy function.
For example, if a new parameter is added to the grid, this is the same as adding a query parameter to
the request. If the grid has the following parameters:

Parameter Name Parameter Value


size 100
name impact

Then ?size=100&name=impact will be added to the URL when making the request.
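The query string is built by URL-encoding each name/value pair and joining the pairs with ampersands. A minimal Java sketch of the same construction (illustration only; the QueryString helper class is hypothetical, not part of Impact, and the parameter values come from the example grid above):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryString {
    // URL-encode each name/value pair and join with '&', producing
    // a query string such as ?size=100&name=impact.
    static String build(Map<String, String> params) {
        return "?" + params.entrySet().stream()
            .map(e -> URLEncoder.encode(e.getKey(), StandardCharsets.UTF_8)
                + "=" + URLEncoder.encode(e.getValue(), StandardCharsets.UTF_8))
            .collect(Collectors.joining("&"));
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("size", "100");
        params.put("name", "impact");
        System.out.println(build(params)); // ?size=100&name=impact
    }
}
```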
14. Click Test Connection to see if it is possible to connect to the data source with the current data
source settings.
15. Click Preview Request to preview an example of the raw http request with the current data source
settings.
16. Click Save to create the data source.

OAuth data source


To use OAUTH authentication, you must create an OAuth data source.

Creating an OAuth data source


Use this information to create an OAuth data source.

Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select OAuth. The tab for the
data source opens.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.

5. In the Access Token field, add the access token for the OAuth data source.
6. In the Refresh Token field, add the refresh token for the OAuth data source.
7. In the Client ID field, add the Client ID for the OAuth service that you want to use.
8. In the Client Secret field, add the Client Secret for the OAuth data source.
9. In the Token URI field, add the Token URI for the OAuth provider's authentication server.
10. In the Auth URI field, add the Auth URI for the OAuth provider's authorization server.
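For context, the Client ID, Client Secret, Refresh Token, and Token URI fields map onto the standard OAuth 2.0 refresh-token grant (RFC 6749), which is used to obtain a new access token when the current one expires. The following is a generic sketch of that exchange; the host and values are placeholders, and whether credentials are sent in the body or in a Basic Authorization header depends on the provider:

```
POST /oauth/token HTTP/1.1
Host: auth.example.com
Content-Type: application/x-www-form-urlencoded

grant_type=refresh_token&refresh_token=<refresh_token>
&client_id=<client_id>&client_secret=<client_secret>
```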

LDAP data sources


Lightweight Directory Access Protocol (LDAP) data sources represent LDAP directory servers.
Netcool/Impact supports the OpenLDAP and Microsoft Active Directory servers.
You create LDAP data sources in the GUI Server. You must create one data source for each LDAP server
that you want to access. The configuration properties for the data source specify connection information
for the LDAP server and any required security or authentication information.

Creating LDAP data sources


By default, the LDAP DSA supports non-authenticating data sources only.
You can, however, make them authenticating by using the Netcool/Impact properties file. For information
about authenticating LDAP data sources, see the DSA Reference Guide.
Do not specify authentication parameters for the LDAP data source unless the underlying LDAP server
is configured to require them. If you specify authentication parameters and they are not required by the
LDAP server, Netcool/Impact fails to connect to the data source.

LDAP data source configuration window


Use this information to configure an LDAP data source.

Table 44. General settings for LDAP data source window


Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

Table 45. Source settings for LDAP data source window


Window element Description

LDAP Server Type the server name where the LDAP database
resides. The default is localhost.

Port Select a port number. The default value is set to a
common port number: 389.
Note: Port 636 is often used for SSL connections.


Security Protocol Optional. Type the security protocol to use when
connecting to the LDAP server. Supported security
protocols are ssl and sasl. If you do not specify a
security protocol, none is used.
Note: A value of ssl must be used for SSL
connections.

Service Provider Optional. Type the service provider to use when
connecting to the LDAP server. To use the default
Java LDAP provider, do not specify any value
for this property. If you do not want to use the
default Java LDAP provider, enter the fully qualified
package and class name of the initial context
factory class for the LDAP provider you want to use.
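For example, the default JNDI LDAP provider that ships with the JDK uses the following initial context factory class (shown for illustration; leave the field blank to use it):

```
com.sun.jndi.ldap.LdapCtxFactory
```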

Table 46. Authentication settings for LDAP data source window


Window element Description

Authentication Mechanism Optional. Type the authentication type to use
when connecting to the LDAP server. Basic
authentication types are none, anonymous,
and simple. Other types of authentication as
described in the LDAP v2 and v3 specifications
are also supported. If the LDAP server does not
have authentication enabled, do not specify a
value for this property. For more information about
authentication types, see the documentation that
is provided by the LDAP server.

User name For simple authentication, enter the fully qualified
LDAP user name. For authentication types none and
anonymous, leave this field blank.

Password For simple authentication, enter a valid LDAP
password. For authentication types none and
anonymous, leave this field blank.
Restriction: Do not specify authentication
parameters for the LDAP data source unless the
underlying LDAP server is configured to require
them. If you specify authentication parameters and
they are not required by the LDAP server, Netcool/
Impact fails to connect to the data source.

Mediator data sources


Mediator data sources represent third-party applications that are integrated with Netcool/Impact through
the DSA Mediator.
These data sources include a wide variety of network inventory, network provisioning, and messaging
system software. In addition, providers of XML and SNMP data can also be used as mediator data sources.

Typically Mediator DSA data sources and their data types are installed when you install a Mediator DSA.
The data sources are available for viewing and, if necessary, for creating or editing.
Attention: For a complete list of supported data sources, see your IBM account manager.

CORBA Mediator DSA data source configuration window


Use this information to configure a CORBA Mediator DSA data source.

Table 47. General settings in the CORBA Mediator DSA data source window
Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

Table 48. Source settings in the CORBA Mediator DSA data source window
Window element Description

Source Complete the Name Service fields and the IOR
File Location field.

Name Service Host Add the Name Service Host.

Name Service Port Add the Name Service Port.

Name Service Context Add the Name Service Context.

Name Service Object Name Add the Name Service Object Name.

IOR File Location Add the IOR File Location.

Direct Mediator DSA data source configuration window


Use this information to configure a Direct Mediator Data Source.
1. In the Data source name field, enter a unique name to identify the data source. You can only use
letters, numbers, and the underscore character in the data source name. If you use UTF-8 characters,
make sure that the locale on the Impact Server where the data source is saved is set to the UTF-8
character encoding.
2. In the Mediator Class Name field, add the Mediator Class Name.

Creating SNMP data sources


When you have an SNMP DSA installed, you need to create any required SNMP data sources.
You can either create one data source for each SNMP agent that you want to access using the DSA, or you
can create a single data source and use it to access all agents.
If you plan to use the standard data-handling functions AddDataItem and GetByFilter to access
SNMP data, you must create a separate data source for each agent.
Important: To create a data source with SNMP v3 authentication, specify the properties described in the
“SNMP data source configuration window” on page 65 and then enter the information for the agent to



authenticate the DSA as an SNMP user. The authentication parameters can be overridden by calls to the
SNMP functions in the Impact Policy Language.
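When you create one data source per agent, the standard data-handling functions can be called from a policy. The following Impact Policy Language fragment is an illustrative sketch only: the SNMPAgent data type name, its SysName field, and the filter value are hypothetical and depend on how your SNMP data type is configured.

```
// Hypothetical SNMP data type backed by a per-agent data source.
// Field names must match the fields defined on the data type.
Nodes = GetByFilter("SNMPAgent", "SysName = 'router1'", false);

if (Length(Nodes) > 0) {
    log("First match: " + Nodes[0].SysName);
}
```

The same data type can also be written to with AddDataItem, subject to the agent's write community settings.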

SNMP data source configuration window


Use this information to configure an SNMP data source.

Table 49. General settings in the SNMP data source configuration window
Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

Table 50. Data source settings in the SNMP data source configuration window
Window element Description

Mediator Class Name The following class name appears in this field:

com.micromuse.dsa.snmpdsa.SnmpMediator

Table 51. SNMP agent settings in the SNMP data source configuration window
Window element Description

Host Name If you are creating this data source for use with
the standard data-handling functions AddDataItem
and GetByFilter, enter the host name or IP address.
If you are creating this data source for use with the
new SNMP functions, accept the default value.

Read Community Type the name of the SNMP read community. The
default is public.

Write Community Type the name of the SNMP write community. The
default is public.

Timeout Type or select a timeout value in seconds. When
the DSA connects to an agent associated with
this data source, it waits for the specified timeout
period before it returns an error to Netcool/Impact.

Port If you are creating this data source for use with
the standard data-handling functions AddDataItem
and GetByFilter, select or enter the port number.
If you are creating this data source for use with the
new SNMP functions, accept the default value.

Version Select the correct version: 1, 2, or 3. If you select
SNMP version 3, the SNMP V3 section of the
window activates.



Table 52. SNMP V3 settings in the SNMP data source configuration window
Window element Description

User The name of an SNMP v3 authentication user.

Authentication Protocol Select a protocol. The default is MD5.

Authentication Password Password for the authentication user.

Privacy Protocol Select a protocol.

Privacy Password Type a privacy password.

Context ID Type a context ID.

Context Name Type a context name.

JMS data source


A Java Message Service (JMS) data source abstracts the information that is required to connect to a JMS
implementation.
This data source is used by the JMSMessageListener service and by the SendJMSMessage and
ReceiveJMSMessage functions.

JMS data source configuration properties


You can configure the properties for the Java Message Service (JMS) data source.

Table 53. General settings for the JMS data source window
Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.



Table 54. Source settings for the JMS data source window
Window element Description
JNDI Factory Initial Enter the name of the JNDI initial context factory.
The JNDI initial context factory is a Java object
that is managed by the JNDI provider in your
environment. The JNDI provider is the component
that manages the connections and destinations for
JMS.
OpenJMS, BEA WebLogic, and Sun Java
Application Server distribute a JNDI provider as
part of their JMS implementations. The required
value for this field varies by JMS implementation.
For OpenJMS, the value of the property is

org.exolab.jms.jndi.InitialContextFactory

For other JMS implementations, see the related
product documentation.

JNDI Provider URL Enter the JNDI provider URL. The JNDI provider
URL is the network location of the JNDI provider.
The required value for this field varies by JMS
implementation. For OpenJMS, the default value
of this property is tcp://hostname:3035, where
hostname is the name of the system on which
OpenJMS is running. The network protocol, TCP or
RMI, must be specified in the URL string. For other
JMS implementations, see the related product
documentation.

JNDI URL Packages Enter the Java package prefix for the JNDI context
factory class. For OpenJMS, BEA WebLogic, and
Sun Java Application Server, you are not required
to enter a value in this field.

JMS Connection Factory Name Enter the name of the JMS connection factory
object. The JMS connection factory object is
a Java object that is responsible for creating
new connections to the messaging system.
The connection factory is a managed object
that is administered by the JMS provider. For
example, if the provider is BEA WebLogic, the
connection factory object is defined, instantiated,
and controlled by that application. For the
name of the connection factory object for your
JMS implementation, see the related product
documentation.

JMS Destination Name Enter the name of a JMS topic or queue, which is
the name of the remote topic or queue where the
JMS message listener listens for new messages.



Table 54. Source settings for the JMS data source window (continued)
Window element Description
JMS Connection User Name Enter a JMS user name. If the JMS provider
requires a user name to listen to remote
destinations for messages, enter the user name in
this field. JMS user accounts are controlled by the
JMS provider.

JMS Connection Password If the JMS provider requires a password to listen
to remote destinations for messages, enter the
password in this field.

Test Connection Test the connection to the JMS Implementation.
If the test is successful, the system shows the
following message:
JMS: Connection OK



Chapter 5. Configuring data types
Data types are elements of the data model that represent sets of data stored in a data source.
The structure of data types depends on the category of data source where it is stored. For example, if the
data source is an SQL database, each data type corresponds to a database table. If the data source is an
LDAP server, each data type corresponds to a type of node in the LDAP hierarchy.

Viewing data type performance statistics


You can use the Performance Statistics report to determine whether the caching enabled for the data type
is working efficiently.

Before you begin


You must set the performance measurement settings in the data type's Cache Settings tab. See “SQL data type
configuration window - Cache settings tab” on page 85. These settings can be modified if required.

Procedure
1. In the Data Model tab, locate the data type for which you want performance statistics.
2. Right-click the data type and click View Performance Statistics.
For more information about the statistics reported in the window, see “Data type performance
statistics” on page 69.
3. Close the window.

Data type performance statistics


The definitions of data type performance statistics.

Table 55. Data type performance statistics: performance averages


Setting type Description

Number of Queries Average number of queries calculated over the
time interval (seconds).

Number of Inserts Average number of inserts calculated over the
time interval (seconds).

Number of Updates Average number of updates calculated over the
time interval (seconds).

Number of Rows Average number of rows retrieved (either from the
cache or from the database) by the number of
queries over the query interval.

Time to Execute Each Query Average time it took to run each query, calculated
over the query interval.

Time to Read Results of Each Query Average time it took to read the results of each
query over the query interval.

Averages are calculated over time interval The time interval.



Table 56. Data type performance statistics: cache status
Setting type Description

Number of Queries (% of total) Actual number of queries and the percentage of
queries retrieved from the query cache per query
interval.

Number of Data Items (% of total) Actual number of data items and the percentage of
data items loaded from the data cache per query
interval.

Number of Data Items in Use The number of data items loaded from the data
cache that are referred to by queries in the query cache.

Time Spent Clearing the Cache The time it took to clear the cache.

Percentages are calculated over query interval The query interval.

Data type caching


You can use data type caching to reduce the total number of queries that are made against a data source
for performance or other reasons.
Caching helps you to decrease the load on the external databases used by Netcool/Impact. Data caching
also increases system performance by allowing you to temporarily store data items that have been
retrieved from a data source.
Important: Caching works best for static data sources and for data sources where the data does not
change often.
Caching works when data is retrieved during the processing of a policy. When you view data items in the
GUI, cached data is retrieved rather than data directly from the data source.
You can specify caching for external data types to control the number of data items temporarily stored
while policies are processing data. Many data items in the cache use significant memory but can save
bandwidth and time if the same data is referenced frequently.
Important: Data type caching works with SQL database and LDAP data types. Internal data types do not
require data type caching.
You configure caching on a per data type basis within the GUI. If you do not specify caching for the data
type, each data item is reloaded from the external data source every time it is accessed.

Data type caching types


You can control the following aspects of data type caching.
Data caching
Use data caching to temporarily store individual data items retrieved from a data source.
When a policy uses the GetByKey function, data caching defines the number of records that can
be held in the cache. You can configure both the maximum number of data items to cache and the
expiration time for data items in the cache.
Important: In order for data caching to work, the KeyFields in the data type must be unique.
Query caching
You can use query caching to temporarily store sets of data items that are retrieved during individual
queries to a data source.
When a policy uses the GetByFilter function, query caching defines the number of completed
queries allowed in the cache (not the number of data items).



Important: Data caching must be enabled for query caching to work.
Count caching
Count caching is used to temporarily store the count values obtained in a policy. Count caching uses
the GetByFilter function with the CountOnly parameter set to True.
This type of caching is for compatibility with earlier versions only; do not use it unless it is necessary.
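The three cache types map onto the standard data-handling functions. The following Impact Policy Language sketch is illustrative only; the Node data type and its Location field are hypothetical names used for demonstration.

```
// Data caching: results of GetByKey can be served from the data cache.
// The KeyFields of the data type must be unique for this to work.
MyNode = GetByKey("Node", "router1", 1);

// Query caching: a repeated, identical filter can be served from the
// query cache. Data caching must also be enabled.
MyNodes = GetByFilter("Node", "Location = 'NYC'", false);

// Count caching: with CountOnly set to true, only the count is
// returned and cached (compatibility use only).
NumNodes = GetByFilter("Node", "Location = 'NYC'", true);
```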

Creating internal data types


Overview of the tabs in the internal data type editor.

Table 57. Internal data type editor tabs

Tab Description

Custom Fields In this tab, you can add any number of fields to form a database table.

Dynamic Links In this tab, you can create links to other data types, both external and internal, to
establish connections between information.
Links between individual data items can represent any relationship between the items
that policies need to be able to look up. For example, a node linked to an operator
allows a policy to look up the operator responsible for the node.

Internal data type configuration window


Use this information to configure an internal data type.

Table 58. General settings on the Internal Data Type Editor Custom Fields tab
Editor element Description

Data Type Name Type a unique name to identify the data type. You
can use only letters, numbers, and the underscore
character in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.
If you receive an error message when you save a
data type, check the Global tab for a complete list
of data type names for the server. If you find the
name you tried to save, you must change it.

Persistent Leave the box checked as Persistent (permanent)
to permanently store the data items that are
created for the data type. When the server is
restarted, the data is restored. If the box is cleared,
the data is held in memory, but only while the
server is running. When the server restarts, the
data is lost because it was not backed up in a file.
This feature is useful if you need data only on a
temporary basis and then want to discard it.
Persistent data types are always written to file.
Therefore, making internal data types temporary is
faster.

New Field Click to add a field to the table.



Table 58. General settings on the Internal Data Type Editor Custom Fields tab (continued)
Editor element Description

Access the data through UI data provider To ensure that the UI data provider can access
the data in the data type, select the Access the
data through UI data provider: Enabled check
box. When you enable the check box, the data
type sends data to the UI data provider. When the
data model refreshes, the data type is available
as a data provider source. The default refresh rate
is 5 minutes. For more information about UI data
providers, see the Solutions Guide.

Table 59. Additional settings on the Internal Data Type Editor Custom Fields tab
Editor element Description

ID Type a unique ID for the field.

Field Name Type the actual field name. The field name can be
the same as the ID. You can reference both the ID
field and the Field Name field in policies.
If you do not enter a Display Name, Netcool/
Impact uses the ID field name by default.

Format Select a format for the field from the Format list.

Display Name Field: You can use this field to select a field from the
menu to label data items according to the field
value. Choose a field that contains a unique value
that can be used to identify the data item for
example, ID. To view the values on the data item,
you must go to View Data Items for the data type
and select the Links icon. Click the data item to
display the details.

Description Type some text that describes the field.

Table 60. UI data provider settings on the Internal Data Type Editor Custom Fields tab
Editor element Description

Define Custom Types and Values (JavaScript) To show percentages and status in a widget, you
must create a script in JavaScript format. The
script uses the following syntax, where Type is
either Percentage or Status and VariableName
can be a variable or hardcoded value. Always cast
the variable name to String to avoid errors, even if
the value is numeric.

ImpactUICustomValues.put("<FieldName>,<Type>", <VariableName>);

Add the script to the Define Custom Types and
Values (JavaScript) area.



Table 60. UI data provider settings on the Internal Data Type Editor Custom Fields tab (continued)
Editor element Description
Check Syntax and Preview Script Sample Result Click the Preview Script Sample Result button to
preview the results and check the syntax of the
script. The preview shows a sample of 10 rows of
data in the table.
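As a sketch of the syntax above, the following self-contained JavaScript fragment can be adapted for the Define Custom Types and Values area. In Netcool/Impact, the ImpactUICustomValues object is supplied by the server; the stand-in definition and the UsedPct field name below are assumptions made only so that the example runs on its own.

```javascript
// Stand-in for the server-provided ImpactUICustomValues object,
// defined here only so that the snippet is self-contained.
var ImpactUICustomValues = {
    store: {},
    put: function (key, value) { this.store[key] = value; }
};

// Hypothetical data item with a numeric utilization field.
var DataItem = { UsedPct: 87 };

// Always cast the value to String, even when it is numeric.
ImpactUICustomValues.put("UsedPct,Percentage", String(DataItem.UsedPct));
```

In the product, only the put call is needed; the Type part of the key is either Percentage or Status.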

External data types


External data types use an external data source to access information in vendor-acquired databases, such
as SQL or LDAP databases, and DSAs.
By definition, an external data type is the lookup method used to find data from the external data source.
An external data type contains all the fields (data items) in its data source that meet the lookup criteria.
When the database is accessed, the fields from the database schema are assigned to the data type. You
can add fields to the type, for example, if a field was added to the data source after you
created the data type. You can delete fields that you do not need to have as part of your data type.
Creating data types from external data sources is similar to creating internal data types, except that the
external data type editor has a Table Description tab instead of a Custom Fields tab and an additional
data caching part to regulate the flow of data between Netcool/Impact and the external data source. The
fields in the Table Description tab are not custom fields that you create. These fields identify the required
data from the external data source.
All data types must belong to a data source. Before you create an external data type, create a data source
to associate with the data type.

Deleting a field
You can use the Delete function to limit which fields are updated, inserted, and selected from the data
source.
Remember: When you delete a field from the data type, it is not deleted from the data source.
Using a subset of the database fields can speed performance of the data type.

List of predefined data types


An overview of the predefined data types available in the global project.

Table 61. Predefined data types

Data type Type Description

Schedule Editable Schedules define a list of data items that are associated with specific time
ranges or time range groups.

Document Editable Custom URL Document data types are derived from the
predefined Doc data type.

FailedEvent Editable The FailedEvent data type, together with the
ReprocessedFailedEvents policy, provides you with a way to deal
with failed events that are passed from the ObjectServer.



Table 61. Predefined data types (continued)

Data type Type Description

ITNM Editable This data type is used with ITNM and the ITNM DSA.

TimeRangeGroup Non-editable A time range group data type consists of any number of time
ranges.

LinkType Non-editable The LinkType data type provides a way of defining named and
hierarchical dynamic links.

Hibernation Non-editable When you call the Hibernate function in a policy, the policy
is stored as a Hibernation data item for a certain number of
seconds.

Predefined data types overview


Predefined data types are special data types that are stored in the global repository.
You can edit some predefined data types by adding new fields, but you cannot edit or delete existing
fields. You can view, edit, create, and delete data items of some predefined data types by using the GUI.
You cannot delete predefined data types except for the FailedEvent predefined data type.

Time range groups and schedules


The Schedule and Time Range Group data types have special data items that are similar to internal data
items, but they are used specifically for defining scheduling information.
Policies typically use schedules and time range groups to look up the availability of another item, for
example, whether an administrator is on call at the time the policy is run.
Schedules contain time ranges associated with data items. You can group time ranges so that you can
easily reuse them so that you do not have to enter the information each time.

Time range group data types


A time range group data type consists of any number of time ranges.
There are three types of time ranges:

Table 62. Time range specifications

Time range type Description

Positive The time range is active when the current time is within the time range, unless it is
overlapped by a Negative or an Override.

Negative The time range is inactive for the specified range. This time range is useful, for
example, to exclude a lunch hour from a Positive time range.

Override The time range is always active within the range, regardless of any negative ranges.



You can specify any combination of the time ranges as described below:

Table 63. Time Range Combinations

Time range Description

Daily A time range between a starting time and an ending time for every day of the week, for
example, 9 a.m. to 5 p.m.

Weekly A range between a starting time on a specified day and ending on a specified day every
week, for example Monday 9 a.m. to Friday 5 p.m.

Absolute A range of time between two specific dates, for example, March 3, 2004 to March 4, 2004.
One way this time range is useful is for server maintenance. If a server is due to be down
for maintenance on a specific day and you do not want it to show up as an alarm, you could
define an Absolute range and use it in an Event Suppression policy.

Configuring time range groups


Use this procedure to create a new time range group.

Procedure
1. In the Data Model tab, select the Global project from the Project menu.
2. In the list of data sources and data types, click the plus sign next to the Internal data source to view
its data types.
3. Select the TimeRangeGroup data type.
4. Right-click and select View Data Items to open the TimeRangeGroup screen.
5. Click the New Data Item button.
6. In the Time Range Group Name field, type a unique name to describe the group.
7. To add a new time range, click the New Time Range icon in the table and configure the time range
accordingly.
See “Adding daily time ranges” on page 75, “Adding weekly time ranges” on page 76, and “Adding
absolute time ranges” on page 76.
After configuration is complete, click Save to save the time range.
8. To add an existing time range group:
a. Click the Add Existing Time Range Group icon in the table.
b. In the Group Editor screen, select the existing group that you want to add.
c. Click Save to add the group.

Adding daily time ranges


Use this procedure to add daily time ranges.

Procedure
In the Time Range Editor, after you select Daily from the dropdown field, complete the information about
the start and end times of the time range.
Enter the information using this table as a guide:



Table 64. Daily Time Range screen

Screen element Description

Start Time: hour/min Using the 24-hour clock, enter the start time.

End Time: hour/min Using the 24-hour clock, enter the end time.

Time Zone Select the appropriate time zone from the list.

Positive / Negative / Override See Table 62 on page 74.

Click Save to save the time range. Click the Back icon to return to the Time Range Group screen.

Adding weekly time ranges


Use this procedure to add weekly time ranges.

Procedure
In the Time Range Editor, after you select Weekly from the dropdown field, complete the information
about the start and end times of the time range.
Enter the information using this table as a guide:

Table 65. Weekly Time Range screen

Screen element Description

Start Select the day of the week to indicate the beginning day of the time range.

hour/min Type or select the time of day to start the time range.

End Select the day of the week to indicate the end of the time range.

hour/min Type or select the time of day to end the time range.

Time Zone Select the appropriate time zone from the list.

Positive / Negative / Override See Table 62 on page 74.

Click Save to save the time range. Click the Back icon to return to the Time Range Group screen.

Adding absolute time ranges


Use this procedure to add absolute time ranges.

Procedure
In the Time Range Editor, after you select Absolute from the dropdown field, complete the information
about the start and end times of the time range.
Enter the information using this table as a guide:



Table 66. Absolute Time Range screen

Screen element Description

Start Click the calendar icon to select the start date. Complete the hours, minutes, and
seconds of the start time.

End Click the calendar icon to select the end date. Complete the hours, minutes, and
seconds of the end time.

Time Zone Select the appropriate time zone from the list.

Positive / Negative / Override See Table 62 on page 74.

Click Save to save the time range. Click the Back icon to return to the Time Range Group screen.

Schedules overview
Schedules define a list of data items that are associated with specific time ranges or time range groups.
You can use links between Schedule data items and other data items to schedule any items, for example,
the hours when a departmental node is business critical or to identify who is currently on call when an
alert occurs.

Configuring schedules
Use this procedure to create a schedule.

Procedure
1. Expand the Schedule data source in the Data Model tab.
2. Select the Schedule data type. Right-click and select View Data Items.
3. Click the New Data Item button.
4. Enter the following information in the tab:
a. In the Schedule Name field, type a unique name for the schedule.
b. In the Description field, add a Description for the schedule.
5. To display schedule member data items in the schedule members dropdown:
a. Click the Configure Members button.
b. Add one or more data item members for this schedule.
Enter information in the Configure Members screen as outlined in the following table:

Table 67. Configure Members screen


Screen element Description

Data Type The type from which to select members for the
schedule.

Filter Type a filter in the field to limit the number of
displayed member candidates.



Table 67. Configure Members screen (continued)
Screen element Description

Filter button Click to apply the filter to the member
candidates.

Available Members Highlight one or more candidates from the list.

Add Click to add the candidates to the Members list.

Selected Members (and Types) Highlight one or more candidates from the list.

Remove Click to remove the candidates from the
Members list.

6. Click Save. Click the back icon to return to the Schedule configuration screen.
Now you can select the member for which to add time ranges.
7. Enter the time ranges for the candidate. See “Configuring time range groups” on page 75.
The green light next to the On Call Status for the current member indicates that the administrator
is on call. If the administrator is not on call, the traffic light is red.
8. Repeat for each schedule member selectable from the Schedule Member drop-down list.
9. Click the back icon on the Schedule Editor to display the new schedule data item as a new row in the
table.
For information about editing and deleting data items, see Chapter 6, “Working with data items,” on
page 99.

ITNM DSA data type


The ITNM data type is the only one that works with the ITNM DSA.
You cannot rename an ITNM data type.
When the DSA queries the ITNM database, the records are returned as data items of the ITNM data type.
Each field in the records is turned into an attribute of the corresponding data item.
For example, a record can contain fields such as:
• ObjectId
• EntityName
• Address
• Description
• ExtraInfo
To access the values, you can directly access the attributes, just as with any other data item, by using the
following command:

log("Description is " + DataItem.Description);

This command prints out the Description field string that was on the ITNM record returned by the
query.



SQL data types
SQL data types define real-time dynamic access to data in tables in a specified SQL database.
When the database is accessed, the fields from the database schema are assigned to the data type. Some
of the SQL data sources automatically discover the fields in the table. Others do not support automatic
table discovery; for these data sources, you must enter the table name to see the names of the fields.
The editor contains three tabs.

Table 68. External data type editor tabs

Tab Description

Table Description Name the data type, change the data source, if necessary, and add any number of
fields from the data source to form a database table.

Dynamic Links In this tab, you can create links to other data types, both external and internal, to
establish connections between information.
Links between individual data items can represent any relationship between the
items that policies must be able to look up. For example, a node linked to an operator
allows a policy to look up the operator responsible for the node.
For more information about dynamic links tab, see Chapter 7, “Working with links,”
on page 103.

Cache Settings In this tab, you can set up caching parameters to regulate the flow of data between
Netcool/Impact and the external data source.
Use the guidelines in “SQL data type configuration window - Cache settings tab”
on page 85, plus the parameters for the performance report for the data type to
configure data and query caching.

Important: SQL data types in Netcool/Impact require all columns in a database table to have the Select
permission enabled to allow discovery and to enable the save option when creating data types.

Configuring SQL data types


Use this procedure to configure an SQL data type.

Procedure
• Provide a unique name for the data type.
• Specify the name of the underlying data source for the data type.
• Specify the name of the database and the table where the underlying data is stored.
• Auto-populate the fields in the data type.
• Select a display name for the data type.
• Specify key fields for the data type.
• Specify a data item filter.
• Specify which field in the data type to use to order data items.
• Specify the direction to use when ordering data items.
• Enable the data type for access to a UI data provider.



What to do next
After you have saved the data type, you can close the Data Type Editor or you can configure caching and
dynamic links for the data type.

SQL data type configuration window - Table Description tab


Use this information to configure the SQL data type.

Table 69. General settings for the Table Descriptions tab of the SQL data type configuration window
Editor element Description

Data Type Name Type a unique name to identify the data type. You
can use only letters, numbers, and the underscore
character in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.
Data type names must be unique globally, not just
within a project. If you receive an error message
when you save a data type, check the Global
project tab for a complete list of data type names
for the server. If you find the name you tried to
save, you must change it.

Data Source Name This field is automatically populated, based on
the data source you selected in the data sources
tab. If you have other SQL data sources that are
configured for use with Netcool/Impact, you can
change the name to any of the SQL data sources in
the list, if necessary.
If you enter a new name, a message window
prompts you to confirm your change.
Click OK to confirm the change. If you change your
mind about selecting a different data source, click
Cancel.

Enabled Leave the State check box checked to activate the
data type so that it is available for use in policies.

Access the data through UI data provider To ensure that the UI data provider can access
the data in the data type, select the Access the
data through UI data provider: Enabled check
box. When you enable the check box, the data type
sends data to the UI data provider. When the data
model refreshes, the data type is available as a
data provider source. The default refresh rate is
5 minutes. For more information about UI data
providers, see the Solutions Guide.

80 Netcool/Impact: User Interface Guide


Table 70. Table description settings for the Table Descriptions tab of the SQL data type configuration
window
Window element Description

Base Table Specify the underlying database and table where
the data in the data type is stored.
The names of all the databases and tables are
automatically retrieved from the data source so
that you can choose them from a list.
Select the database and the table in the Base
Table lists. The first list contains the databases in
the data source. The second list contains the
tables in the selected database, for example, the
status table in the alerts database.

Refresh Click Refresh to populate the table.
The table columns are displayed as fields in a
table. To make database access as efficient as
possible, delete any fields that are not used in
policies.

Show New / Deleted Fields If you have deleted fields from the data type that
still exist in the SQL database, these fields do not
show in the user interface. To restore the fields
to the data type, mark the Show New / Deleted
Fields check box and click Refresh.

New Field Use this option if you need to add a field to the
table from the data source database. For example,
in the case where the field was added to the
database after you created the data type.
Make sure that the field name you add has the
same name as the field name in the data source.
Important: Any new fields added to this table are
not automatically added to the data source table.
You cannot add fields to the database table in this
way.
For more information, see “SQL data type
configuration window - adding and editing fields in
the table” on page 83.

Key field Key fields are used when you retrieve data from
the data type in a policy that uses the GetByKey
function. They are also used when you define a
GetByKey dynamic link.
Important: You must define at least one key field
for the data type, even if you do not plan to use
the GetByKey function in your policy. If you do not,
Netcool/Impact does not function properly.
Generally, the key fields you define correspond to
key fields in the underlying database table.
To specify a key field, double-click on the key
field column and then click the check box in the
appropriate row in the Key Field column. You can
add multiple key fields.

Display Name Field You can use this field to select a field from the
menu to label data items according to the field
value. Choose a field that contains a unique value
that can be used to identify the data item, for
example, ID. To view the values on the data item,
you must go to View Data Items for the data type
and select the Links icon. Click the data item to
display the details.

Automatically Remove Deleted Fields Mark the Automatically Remove Deleted Fields
check box to remove any fields from the data
type that have already been removed from the
SQL database. The deleted fields are removed
automatically when a policy that uses this data
type is run.
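As a minimal sketch of the lookup that key fields enable, the hypothetical policy fragment below (shown in JavaScript syntax) retrieves a row from an assumed data type named Customer by its key value. In a real Impact policy the GetByKey function is provided by the server; the stub here only makes the fragment self-contained, and the data type name and rows are invented for illustration.

```javascript
// Stand-in for the server-provided GetByKey function, so this sketch
// runs outside Netcool/Impact. The "Customer" data type and its rows
// are hypothetical.
function GetByKey(dataType, key, maxNum) {
  var tables = {
    Customer: { 1001: { Name: "Acme", City: "New York" } }
  };
  var row = (tables[dataType] || {})[key];
  return row ? [row] : [];
}

// Look up at most one data item whose key field equals 1001.
var customers = GetByKey("Customer", 1001, 1);
var name = customers.length > 0 ? customers[0].Name : null;
```

The returned array holds one data item per matching row, so checking the length before dereferencing the first element avoids errors when no row matches the key.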

Table 71. Data filtering and ordering settings for the Table Descriptions tab of the SQL data type
configuration window
Window element Descriptions

Filter Type a restriction clause to limit the data
items that are seen for the data type. For example,
to limit the rows in a field that is called City to
New York, you would enter:

City = "New York"

To limit the rows to New York or
Athens, you would enter:

City = "New York" OR City = "Athens"

You can use any SQL WHERE clause syntax.

Order By Enter the names of one or more fields to use to
sort the data items retrieved from the data source.
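To sketch how such a filter is consumed from a policy, the fragment below queries a hypothetical data type with the City filter shown above. GetByFilter is provided by the Impact server in real policies; the stub here only stands in for it so the fragment runs on its own, and it parses a single City equality, which is enough for this illustration.

```javascript
// Stand-in for the server-provided GetByFilter function. Only a
// City = "..." equality filter is parsed; OR clauses are not handled
// by this simplified stub.
function GetByFilter(dataType, filter, countOnly) {
  var rows = [
    { City: "New York", Node: "nyc01" },
    { City: "Athens", Node: "ath01" }
  ];
  var match = /City = "([^"]+)"/.exec(filter);
  var found = rows.filter(function (r) { return match && r.City === match[1]; });
  return countOnly ? found.length : found;
}

// Hypothetical data type name; the filter string uses the syntax above.
var items = GetByFilter("CityTable", 'City = "New York"', false);
```

Passing true as the last argument returns only the count of matching data items instead of the items themselves.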

Table 72. UI data provider settings on the Table Descriptions tab of the SQL data type configuration
window.
Editor element Description

Define Custom Types and Values (JavaScript) To show percentages and status in a widget, you
must create a script in JavaScript format. The
script uses the following syntax.

ImpactUICustomValues.put("<FieldName>,
<Type>",<VariableName>);

Add the script to the Define Custom Types and
Values (JavaScript) area.

Check Syntax and Preview Script Sample Result Click the Check Syntax and Preview Script
Sample Result button to preview the results and
check the syntax of the script. The preview shows a
sample of 10 rows of data in the table.
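As a sketch of such a script (the field names, the percentage type label, and the computed value are assumptions, not taken from the product samples), the snippet below flags a utilization field so that a gauge widget could render it. Inside Netcool/Impact the ImpactUICustomValues object is provided by the server; the stand-in at the top only lets the snippet run on its own.

```javascript
// Stand-in for the server-provided ImpactUICustomValues object, so
// this snippet can run outside Netcool/Impact.
var ImpactUICustomValues = {
  entries: {},
  put: function (key, value) { this.entries[key] = value; }
};

// Hypothetical fields: derive a 0-100 utilization value and register
// it under the "<FieldName>,<Type>" key syntax shown above.
var usedMB = 750;
var totalMB = 1000;
var utilization = (usedMB / totalMB) * 100;
ImpactUICustomValues.put("Utilization,percentage", utilization);
```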

SQL data type configuration window - adding and editing fields in the table
Use this information to add or edit a field to the table for a SQL data type.
In the Table tab, in the New Field area, click New to add a field to the data type, or select the edit icon
next to an existing field that you want to edit.

Table 73. External data type Editor - New field window

Window element Description

ID By default, the ID is the same as the column name in the database. You can
change it to any other unique name. For example, if the underlying column
names in the data source are difficult to use, you can use the ID field to
provide an easier alias for the field.

Field Name Type a name that can be used in policies. It represents the name in the SQL
column. Type the name so that it is identical to how it is displayed in the data
source. Otherwise, Netcool/Impact reports an error when it tries to access the
data type.

Format For SQL database data types, Netcool/Impact automatically discovers the
columns in the underlying table and automatically detects the data format
for each field when you set up the data type. For other data types, you
must manually specify the format for each field that you create. For more
information about formats, see the Working with data models chapter in the
Solutions Guide in the section Working with data types, Data type fields.
Restriction: The Microsoft SQL server table treats the TIMESTAMP field as a
non-date time field. The JDBC driver returns the TIMESTAMP field as a row
version binary data type, which is discovered as STRING in the Microsoft SQL
server data type. To resolve this issue, in the Microsoft SQL server table, use
DATETIME to display the proper time format instead of TIMESTAMP.
Select a format from the following list:
• STRING
• LONG_STRING
• INTEGER
• PASSWORD_STRING
• LONG
• FLOAT
• DOUBLE
• DATE
• TIMESTAMP
• BOOLEAN
• CLOB

Display Name You can use this field to select a field from the menu to label data items
according to the field value. Choose a field that contains a unique value that
can be used to identify the data item, for example, ID. To view the values on
the data item, you must go to View Data Items for the data type and select
the Links icon. Click the data item to display the details.
If you do not enter a display name, Netcool/Impact uses the ID field name by
default.

Description Type some text that describes the field. This description is only visible when
you edit the data type in the GUI.

Default Value Type a default expression for the field. It can be any value of the specified
format (see the Format row), or it can be a database-specific identifier such as
an Oracle sequence reference, for example sequence.NEXTVAL.

Insert Statements: When you select the Exclude this Field check box, Netcool/Impact does not
Exclude this field set the value for the field when inserting and updating a new data item into
the database. This field is used for insert and update statements only, not for
select statements.
Sybase data types:
You must select this option when you map a field to an Identity field or a
field with a default value in a Sybase database. Otherwise, Netcool/Impact
overwrites the field on insert with the specified value or with a space character
if no value is specified.
ObjectServer data types:
The Tally field automatically selects the Exclude this Field check box to
be excluded from inserts and updates for the ObjectServer data type because
this field is automatically set by Netcool/OMNIbus to control deduplication of
events.
The Serial field automatically selects the Exclude this Field check box to be
excluded from inserts and updates when an ObjectServer data type points to
alerts.status.

Type Checking: Strict Click to enable strict type checking on the field. Netcool/Impact checks the
format of the value of the field on insertion or update to ensure that it is of
the same format as the corresponding field in the data source. If it is not the
same, Netcool/Impact does not insert or update the value, and a
message to that effect is displayed in the server log. If you do not enable strict
type checking, all type checking and format conversions are done at the data
source level.

SQL data type configuration window - Cache settings tab


Use this information to configure caching for a SQL data type.
Select the hours, minutes, and seconds for the options that you select.
• Enable Data Caching: This check box toggles data caching on and off.
– Maximum number of data items: Set the total number of data items to be stored in the cache during
the execution of the policy.
– Invalidate Cached Data Items After: Set to invalidate the cached items after the time periods
selected.
• Enable Query Caching: This check box toggles query caching on and off.
– Maximum number of queries: Set the maximum number of database queries to be stored in the
cache.
– Invalidate Cached Queries After: Set to invalidate the cached items after the time periods selected.
• Enable Count Caching: Do not set. Available for compatibility with earlier versions only.
• Performance Measurement Intervals: Use this option to set the reporting parameters for measuring how
fast queries against a data type are executed.
– Polling Interval: Select a polling interval for measuring performance statistics for the data type.
– Query Interval: Select the query interval for the performance check.

Creating flat file data types
Use this procedure to create a flat file data type.

Procedure
1. Before you can create a flat file data type, you must create a flat file data source.
For more information about creating flat file data sources, see “Creating flat file data sources” on page
30.
2. Click Create a new data type next to the flat file data source that you created earlier, for example
MyFlatFileDataSource.
3. In the new data type window, provide the required information.
a) In the Data Type Name: field, type a unique name for your data type. For example,
MyFlatFileDataType.
Your data source, MyFlatFileDataSource, should already have been preselected in the Data
Source Name: list. If not, select it from the list.
b) In the Base Table: field, enter the name of your flat file that you created for your flat file data
source, for example myflatfile.txt.
c) Click Refresh to load field names from your text file.
d) Select the check boxes in the Key Field column.
e) Save your flat file data type.

Results
If you open the data items viewer, you can see the entries from your flat file.

UI data provider data types


A UI data provider data type represents a structure similar to a table that contains sets of data in a
relational database. Each UI data provider database data type contains a set of fields that correspond to
data sources in the UI data provider. You create UI data provider data types in the GUI. You must create
one such data type for each data set that you want to access.
The configuration properties for the data type specify which subset of data is retrieved from the UI data
provider data source.

Creating a UI data provider data type


Use this information to create a UI data provider data type.

Procedure
1. Right-click the UI data provider data source that you created, and select New Data Type.
2. In the Data Type Name field, type the name of the data type.
3. The Enabled check box is selected to activate the data type so that it is available for use in policies.
4. The Data Source Name field is prepopulated with the data source.
5. From the Select a Dataset list, select the data set you want to return the information from.
The data sets are based on the provider and the data sets that you selected when you created the data
source. If this list is empty, then check the data source configuration.
6. Click Save. The data type shows in the list menu.

LDAP data types
An LDAP data type represents a set of entities in an LDAP directory tree.
The LDAP DSA determines which entities are part of this set in real time by dynamically searching the
LDAP tree for those that match a specified LDAP filter within a certain scope. The DSA performs this
search in relation to a location in the tree known as the base context.
The LDAP Data Type editor contains three tabs.

Table 74. LDAP Data Type editor tabs

Tab Description

LDAP Info In this tab, you configure the attributes of the data type. For more information about
these attributes, see “LDAP Info tab of the LDAP data type configuration window” on
page 88.

Dynamic Links In this tab you can create links to other data types, both external and internal, to
establish connections between information. Links between individual data items can
represent any relationship between the items that policies need to be able to look
up. For example, a node linked to an operator allows a policy to look up the operator
responsible for the node.
For more information about creating links to other data types, see Chapter 7, “Working
with links,” on page 103.

Cache In this tab, you can set up caching parameters to regulate the flow of data between
Settings Netcool/Impact and the external data source.
For more information about cache settings, see “SQL data type configuration window -
Cache settings tab” on page 85.

Important: You must create one LDAP data type for each set of entities that you want to access. The
LDAP data type is a read-only data type, which means that you cannot edit or delete LDAP data items from
within the GUI.

Configuring LDAP data types


Use this procedure to configure an LDAP data type.

Procedure
• Provide a unique name for the data type.
• Specify the name of the underlying data source for the data type.
• Specify the base context level in the LDAP hierarchy where the elements you want to access are
located.
• Specify a display name field.
• Specify a restriction filter.

LDAP Info tab of the LDAP data type configuration window
Use this information to configure LDAP information for a LDAP data type.

Table 75. General settings in the LDAP Info Tab on the LDAP Data Type editor
Editor element Description

Data Type Name Type a unique name to identify the data type. Use
only letters, numbers, and the underscore character
in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.

Enabled Leave checked to enable the data type so that it
can be used in policies.

Table 76. LDAP settings in the LDAP Info Tab on the LDAP Data Type editor
Editor element Description

Data Source Name Type the name of the underlying data source.
This field is automatically populated, based on
your data source selection in the Data Types task
pane of the Navigation panel. However, if you have
more than one LDAP data source configured for use
with Netcool/Impact, you can select any LDAP data
source in the list, if necessary.
If you enter a new name, the system displays a
message window that asks you to confirm your
change.

Search scope Select the search scope:
• OBJECT_SCOPE
• ONELEVEL_SCOPE
• SUBTREE_SCOPE

Base Context Type the base context that you want to
use when you search for LDAP entities. For
example: ou=people, o=companyname.com.

Key Search Field Type the name of a key field, for example, dn.

Display Name Field You can use this field to select a field from the
menu to label data items according to the field
value. Choose a field that contains a unique value
that can be used to identify the data item, for
example, ID. To view the values on the data item,
you must go to View Data Items for the data type
and select the Links icon. Click the data item to
display the details.

Restriction Filter: Optionally, type a restriction filter. The restriction
filter is an LDAP search filter as defined
in Internet RFC 2254. This filter consists of
one or more Boolean expressions, with logical
operators prefixed to the expression list. For more
information, see the LDAP Filter information in the
Policy Reference Guide.
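For illustration, an RFC 2254 restriction filter that limits the data type to person entries in a hypothetical engineering organizational unit (both attribute values here are assumptions) could look like this:

```
(&(objectClass=person)(ou=engineering))
```

The leading & applies a logical AND to the two parenthesized expressions that follow it, which is the prefix notation that RFC 2254 defines.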

Table 77. Attribute configuration in the LDAP Info Tab on the LDAP Data Type editor
Editor element Description

New Field For each field that you want to add to the data
type, click New.

Mediator DSA data types


Mediator DSA data types are typically created using scripts or other tools provided by the corresponding
DSA.
Usually, the data types and their associated data sources are installed when you install the Mediator DSA
(CORBA or Direct), so you do not have to create them. The installed data types are available for viewing
and, if necessary, for editing.
For more information about the Mediator data types used with a particular DSA, see the DSA
documentation.

Viewing Mediator DSA data types


Use this information to configure the Mediator DSA Data Type.
The DSA Data Type editor contains three tabs, as described in the following table:

Table 78. DSA Data Type editor tabs

Tab Description

DSA Mediator This tab contains the attributes of the data type. See your DSA documentation for more
information.

Dynamic Links In this tab you can create links to other data types, both external and internal, to
establish connections between information.
Links between individual data items can represent any relationship between the items
that policies need to be able to look up. For example, a node linked to an operator
allows a policy to look up the operator responsible for the node.
For more information about the Dynamic Links tab, see Chapter 7, “Working with links,” on
page 103.

Cache In this tab, you can set up caching parameters to regulate the flow of data between
Settings Netcool/Impact and the external data source.

SNMP data types
If you are using an SNMP DSA, after you create an SNMP data source, you can use the GUI to create
SNMP data types.
See “Creating SNMP data sources” on page 64.
If you plan to use the standard data-handling functions AddDataItem and GetByFilter to access
SNMP data, create a separate data type for each set of variables (packed OID data types) or each set of
tables (table data types) that you want to access. If you plan to use the SNMP functions provided with
the DSA, you can create a single data type for each data source and use it to access all the variables
and tables associated with the agent. For more detailed information about SNMP data types, see the DSA
Reference Guide.

SNMP data types - configuration overview


An overview of the SNMP data type configuration window.
The SNMP Data Type editor contains three tabs.

Table 79. DSA Data Type editor tabs

Tab Description

DSA Mediator This tab contains the attributes of the data type. See your DSA documentation for more
information.

Dynamic Links In this tab you can create links to other data types, both external and internal, to
establish connections between information.
Links between individual data items can represent any relationship between the items
that policies need to be able to look up. For example, a node linked to an operator
allows a policy to look up the operator responsible for the node.
For more information about the Dynamic Links tab, see Chapter 7, “Working with links,” on
page 103.

Cache In this tab, you can set up caching parameters to regulate the flow of data between
Settings Netcool/Impact and the external data source.

To ensure that the UI data provider can access the data in this data type, select the Access the data
through UI data provider: Enabled check box on the DSA Mediator tab. When you enable the check
box, the data type sends data to the UI data provider. When the data model refreshes, the data type is
available as a data provider source. The default refresh rate is 5 minutes. For more information about UI
data providers, see the Solutions Guide.

Packed OID data types


Packed OID data types reference the OIDs of one or more variables managed by a single agent.
You use this category of data type when you want to access single variables or sets of related variables.
When you create a packed OID data type, you specify the name of the associated data source, the OID for
each variable and options that determine the behavior of the DSA when connecting to the agent.

Packed OID SNMP data type - configuration window
Use this information to configure the Packed OID SNMP data type.

Table 80. General settings for the DSA Mediator tab of the SNMP data type editor
Editor element Description

Data Type Name Type a unique name to identify the data type. Use
only letters, numbers, and the underscore character
in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.

Data Source Name This field is automatically populated, based on
your data source selection in the Data Types task
pane of the Navigation panel. However, if you have
other SNMP data sources that are configured for
use with Netcool/Impact, you can change it to any
of the SNMP data sources in the list, if necessary.
If you enter a new name, a message window
prompts you to confirm your change.
Click OK to confirm the change. If you change your
mind about selecting a different data source, click
Cancel.

Access the data through UI data provider: To ensure that the UI data provider can access
Enabled the data in the data type, select the Access the
data through UI data provider: Enabled check
box. When you enable the check box, the data type
sends data to the UI data provider. When the data
model refreshes, the data type is available as a
data provider source. The default refresh rate is
5 minutes. For more information about UI data
providers, see the Solutions Guide.

Table 81. SNMP settings for the DSA Mediator tab of the SNMP data type editor
Editor element Description

OID Configuration Select Packed OID data types from the OID
Configuration list.

New Attribute If you are creating the data type for use with the
standard data-handling functions AddDataItem
and GetByFilter, create a new attribute on the
data type for each variable you want to access.
To create an attribute, click New Attribute and
specify an attribute name and the OID for the
variable.
If you are creating this data source for use with the
new SNMP functions, you do not need to explicitly
create attributes for each variable. In this scenario,
you pass the variable OIDs when you make each
function call in the Netcool/Impact policy.

Get Bulk: Enabled If you want the DSA to retrieve table data from
the agent by using the SNMP GETBULK command
instead of an SNMP GET command, select Get
Bulk. The GETBULK command retrieves table data
by using a continuous GETNEXT command. This
option is suitable for retrieving data from large
tables.
When you select Get Bulk, you can control
the number of variables in the table for which
the GETNEXT operation is completed using the
specified Non-Repeaters and Max Repetitions
values.

Max Repetitions Max Repetitions specifies the number of
repetitions for each of the remaining variables in
the operation.

Nonrepeaters The Nonrepeaters value specifies the first number
of non-repeating variables.

Define Custom Types and Values (JavaScript) To show percentages and status in a widget, you
must create a script in JavaScript format. The
script uses the following syntax.

ImpactUICustomValues.put
("<FieldName>,<Type>",<VariableName>);

Add the script to the Define Custom Types and
Values (JavaScript) area.

Preview Script Sample Result Click the Preview Script Sample Result button to
preview the results and check the syntax of the
script. The preview shows a sample of 10 rows of
data in the table.

Table data types


Table data types reference the OIDs of one or more tables managed by a single agent.
You use this category of data type when you want to access SNMP tables. When you create a table
data type, you specify the name of the associated data source, the OID for each table and options that
determine the behavior of the DSA when connecting to the agent.

Creating table data types


Use this procedure to create a table data type.

Procedure
1. In the data types tab, select an SNMP data source from the list.
2. Click the New Data Type button to open the New Data Type editor.
3. Type a name for the data type in the Data Type Name field.
Important:

The data type name must match the table name that will be queried, for example, ifTable or
ipRouteTable.
4. Select an SNMP data source from the Data Source Name field. By default, the data source that you
chose in step 1 is selected.
5. Select Table from the OID Configuration list.
6. If you are creating this data type for use with the standard data-handling functions AddDataItem and
GetByFilter, you must create a new attribute on the data type for each table you want to access. To
create an attribute, click the New Attribute button and specify an attribute name and the OID for the
table.
Important:
The attributes are the column names in each table. For example, in the following ifTable, the attributes
will be ifIndex, ifDescr, and the other column names:

Column Names OID
ifIndex .1.3.6.1.2.1.2.2.1.1
ifDescr .1.3.6.1.2.1.2.2.1.2
... ...

If you are creating this data source for use with the new SNMP functions, you do not need to explicitly
create attributes for each table. In this scenario, you pass the table OIDs when you make each function
call in the Netcool/Impact policy.
7. If you want the DSA to retrieve table data from the agent using the SNMP GETBULK command instead
of an SNMP GET, select Get Bulk.
The GETBULK command retrieves table data using a continuous GETNEXT command. This option is
suitable for retrieving data from very large tables.
8. If you have selected Get Bulk, you can control the number of variables in the table for which the
GETNEXT operation is performed using the specified Non-Repeaters and Max Repetitions values.
The Non-Repeaters value specifies the first number of non-repeating variables and Max Repetitions
specifies the number of repetitions for each of the remaining variables in the operation.
9. Click Save.
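The interaction of Non-Repeaters and Max Repetitions in steps 7 and 8 can be sketched numerically: per RFC 3416, a single GETBULK response carries at most N + (R * M) variable bindings, where N is the number of non-repeating variables, R is the number of remaining (repeating) variables, and M is Max Repetitions. The helper below is only an illustration of that arithmetic, not part of the DSA.

```javascript
// Upper bound on variable bindings returned by one GETBULK request
// (RFC 3416 semantics): the first non-repeating variables are fetched
// once each; every remaining variable is fetched maxRepetitions times.
function getBulkBindingCount(numVariables, nonRepeaters, maxRepetitions) {
  var nonRep = Math.min(nonRepeaters, numVariables);
  var repeating = numVariables - nonRep;
  return nonRep + repeating * maxRepetitions;
}

// For example, querying 3 OIDs with 1 non-repeater and 10 repetitions
// can return up to 1 + (2 * 10) = 21 bindings.
var maxBindings = getBulkBindingCount(3, 1, 10);
```

Raising Max Repetitions fetches more table rows per round trip, which is why the GETBULK option suits large tables.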

Table data types configuration window


Use this information to configure a table data type.

Table 82. General settings for the DSA Mediator tab of the SNMP data type editor
Editor element Description

Data Type Name Type a unique name to identify the data type. Use
only letters, numbers, and the underscore character
in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.

Data Source Name This field is automatically populated, based on
your data source selection in the Data Types task
pane of the Navigation panel. However, if you have
other SNMP data sources that are configured for
use with Netcool/Impact, you can change it to any
of the SNMP data sources in the list, if necessary.
If you enter a new name, a message window asks
you to confirm your change.
Click OK to confirm the change. If you change your
mind about selecting a different data source, click
Cancel.

Access the data through UI data provider: To ensure that the UI data provider can access
Enabled the data in the data type, select the Access the
data through UI data provider: Enabled check
box. When you enable the check box, the data type
sends data to the UI data provider. When the data
model refreshes, the data type is available as a
data provider source. The default refresh rate is
5 minutes. For more information about UI data
providers, see the Solutions Guide.

Table 83. SNMP settings for the DSA Mediator tab of the SNMP data type editor
Editor element Description

OID Configuration Select Table from the list.

New Attribute If you are creating this data type for use with
the standard data-handling functions AddDataItem
and GetByFilter, you must create a new attribute
on the data type for each variable you want to
access. To create an attribute, click New Attribute
and specify an attribute name and the OID for the
variable.
If you are creating this data source to use with the
new SNMP functions, you do not need to explicitly
create attributes for each table. In this scenario,
you pass the variable OIDs when you make each
function call in the Impact policy.

Get Bulk: Enabled If you want the DSA to retrieve table data from the
agent using the SNMP GETBULK command instead
of an SNMP GET, select Get Bulk. The GETBULK
command retrieves table data using a continuous
GETNEXT command. This option is suitable for
retrieving data from very large tables.
When you select Get Bulk, you can control
the number of variables in the table for which
the GETNEXT operation is performed using the
specified Non-Repeaters and Max Repetitions
values.

Max Repetitions Max Repetitions specifies the number of
repetitions for each of the remaining variables in
the operation.

Nonrepeaters The Non-Repeaters value specifies the first
number of non-repeating variables.

LinkType data types


The LinkType data type provides a way of defining named and hierarchical dynamic links.
To reference links directly from a policy, you can specify the link type directly instead of the target data
type name.
You can create hierarchies between data types, using the source as a parent of multiple
target children (for example, one customer to multiple servers).
LinkType data items are useful when you want to create several dynamic links between the same target
and source data type for use in several policies. For example, in one policy you might want to filter the
severity level for events for the target data type. In another policy, you might want to filter the server
names for the target data type. You would create a LinkType data item for each scenario and select the
appropriate one when creating the link.
For more information, see “Dynamic links” on page 103.

Configuring LinkType data items


Use the following procedure to create a new LinkType data item:

Procedure
1. Select the LinkType data type.
2. Right-click and select View Data Items then click New to create a new LinkType data item.
3. Select the name, source, and target data types for the new link type.
The new data item appears in the Available LinkType Data Items table.
When you create dynamic links, the LinkType data type is available for selection. See Chapter 7,
“Working with links,” on page 103 for more information.

Chapter 5. Configuring data types 95


Document data types
Custom URL Document data types are derived from the predefined Doc data type.
You can add additional fields to the predefined Doc type and you can add data items. You cannot modify
or delete the built-in fields in a custom URL Doc data type.

Adding new Doc data items


Use the following procedure to add a new Doc data item:

Procedure
1. Select the Doc data type, then right-click and select View Data Items. Click New to create a new Doc
data item.
The Create Doc Data Item window opens.
2. Type a Document name.
3. Type a description for the document.
4. Type the IP address of the document.
5. Click OK.
The new Doc data item is displayed in the table.

FailedEvent data types


The FailedEvent data type, together with the ReprocessedFailedEvents policy, provides you with a way to
deal with failed events that are passed from the ObjectServer.
Both the FailedEvent data type and the ReprocessedFailedEvents policy are predefined and are stored in
the global repository.
Note: The best practice for handling failed events is to run the PolicyActivatorService at regular intervals.

Viewing FailedEvent data items


Each FailedEvent data item row includes four fields.
• Key
• EventContainerString
• Policy Name
• EventReader name
You can use this information to re-create the EventContainer and send it back to the original policy
that caused the error. See Chapter 6, “Working with data items,” on page 99 for more information.

Hibernation data types


When you call the Hibernate function in a policy, the policy is stored as a Hibernation data item for a
certain number of seconds.
You typically do not need to create or modify Hibernation data items using the GUI. However, you
can delete stored hibernations if an error condition occurs and the hibernations are not woken up by the
policy activator or another policy. See the Policy Reference Guide for more information about handling
hibernations.



Working with composite data types
Composite data types are data types that have one or more fields dynamically linked to fields in another
data type.
Composite data types are useful for creating a single data type that references information in more than
one data source or that references more than one table or other structure in a single data source. You
can use composite data types to retrieve and update data in data sources. You cannot use composite data
types to insert new data or delete data.
Complete the following steps to create a composite data type:
• Create a composite internal or external data type.
• Edit the data type and create a static or dynamic link from the base data type to the target data type.
• Create a linked field for the base data type.

Creating composite data types


To create a composite data type, you create a base data type. The base data type can be an internal or
external data type.

Before you begin


See the following links about creating data types to determine the type of composite data type you want
to create:
• For information about creating internal data types, see “Creating internal data types” on page 71.
• For information about creating external data types, see “External data types” on page 73.

Procedure
1. Click Data Model to open the Data Model tab.
2. Select the data source from the data sources list.
3. Click the New Data Type icon. A new Data Type Editor tab opens.
4. Create your chosen data type.

Creating linked fields


To create a linked field in a composite data type, you create a link from the base data type to the target
data type. Then, add a field to the data type by using a linking expression as the name of the field. The
linking expression specifies which field in the target data type you want the linked field to reference.
The linking expression syntax is as follows:

links.type.item.field

• type is the name of the target data type
• item identifies the OrgNode in the array returned by the linking expression as first, last, or array[n]
• field is the name of the field you want to reference
An array of OrgNodes is a zero-based array, where array[0] is the first item. For example, the following
linking expression references the value of the Name field in the first Customer OrgNode returned when a
link is evaluated:

links.Customer.first.Name

The following linking expression references the value of the Location field in the second Node OrgNode
returned when a link is evaluated:



links.Node.array[1].Location

Configuring a linked field on a composite data type


Complete the following steps to create a dynamic or static link and a linked field from the base data type
to the target data type.

Before you begin


See the following sections about creating links to determine which type of link you want to create for your
composite data type.
• For information about creating dynamic links, see “Creating dynamic links” on page 104.
• For information about creating static links, see “Creating static links” on page 107.

Procedure
1. Click Data Model to open the Data Model tab.
2. Expand the data source that contains the data type you want to edit, then double-click the name of
the data type. Alternatively, right-click the data type and click Edit.
3. Create a dynamic or static link, from the base data type to the target data type.
4. In the New Field area of the Table description tab, click New to open the Field properties window to
create a field for the base data type:
Complete the following steps to create the linked field:
a) In the ID field, give the field a unique name.
b) In the Field Name field, add a linking expression as the field name.
c) From the Format list, select the type of data to be held in this field.
d) In the Display name field, add the display name.
e) In the Description field, add the description.
Note: If using a link by key and the data type is internal, the field referenced as the key must match
the key field in a row in the target data type. Otherwise, NULL is returned.
f) Click OK.
The field you created shows in the list of fields in the Table Description tab.
5. Click Save to add the changes to the data type.



Chapter 6. Working with data items
Data items are elements of the data model that represent actual units of data stored in a data source.
The structure of this unit of data depends on the category of the associated data source. For example, if
the data source is an SQL database data type, each data item corresponds to a row in a database table. If
the data source is an LDAP server, each data item corresponds to a node in the LDAP hierarchy.

Viewing data items


Use this procedure to view the data items for a data type.

Procedure
1. Locate the data type in the data connections list.
2. Select a data type and click the View Data Items icon next to the data type.
If you have multiple data items open and you select View Data Items on a data type you opened
already, the tab switches to the existing open data item tab.
When viewing data items, Netcool/Impact has a built-in threshold mechanism to control how much
data is loaded. The default threshold limit is 10000. If the underlying table to which the data type
points has more than 10000 rows that match the data type filter, Netcool/Impact shows a warning
message indicating that the number of rows for the data type exceeds the threshold limit.
Note: The threshold limit is set in $IMPACT_HOME/etc/server.props using the
property, impact.dataitems.threshold. To view data exceeding the threshold limit, the
impact.dataitems.threshold property would need to be modified and the server restarted.
The higher the value is set, the more memory is consumed. The heap settings for both the Impact
Server and the GUI Server would have to be increased from the default values. For more information
about setting the minimum and maximum heap size limit, see the chapter on Self Monitoring in the
Netcool/Impact Administration Guide.
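For example, a raised threshold in $IMPACT_HOME/etc/server.props might look like the following
sketch (the value 20000 is illustrative, not a recommended setting):

```
# Maximum number of rows loaded when viewing data items for a data type
impact.dataitems.threshold=20000
```

Restart the Impact Server after you edit the file so that the new limit takes effect.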
You can limit the number of data items shown by entering a search string in the Filter field.
Filter Retrieved Data Items: The filter searches all the fields in the current set of paged results
containing the search text. If the number of results requires the results to be paged, the filter only
filters the results on the current page. The filter is cleared when you navigate between pages.
For information about entering filter syntaxes, see the Working with filters section of the Policy
Reference Guide.

Adding new data items


Use this procedure to add a new data item.

Procedure
1. In the Data Model tab, select the appropriate data type, right-click and select View Data Items.
2. To add a new data item to the table, click the New Data Item icon in the toolbar.
The screen that you next see depends on the data type configuration.
3. Enter the information in the screen.
4. Click Save and then the Back icon to return to the data item list.
The new data item is listed in the table.

© Copyright IBM Corp. 2006, 2023 99


Editing and deleting data items
Use this procedure to edit a data item.

Procedure
1. To edit a data item, select the data item and click Edit.
The edit screen that you see depends on the data type configuration.
a) Change the information as necessary.
b) Click Save to save the changes, then click the Back icon to return to the data item list.
Note: When editing an SQL data item, the save attempt will include all fields in the data item unless
the field is marked for exclusion. To exclude a field, configure the Insert Statements: Exclude
this field property in the data type. See SQL data type configuration window - adding and editing
fields in the table for more information.
2. To delete an item, select the data items that you want to delete.
Check marks are placed in the check boxes next to the selected data items and the data items are
highlighted.
a) If you want to delete all the data items in the table, click the all link. Check marks are placed in
every check box in the Select column and the data items are highlighted.
b) Click the Delete icon to delete the selected data items.

Viewing data items for a UI data provider data type


You can view and filter data items that are part of a UI provider data type.

Procedure
1. In the Data Model tab, right click the data type and select View Data Items. If items are available for
the data type, they show on the right side in tabular format.
2. If the list of returned items is longer than the UI window, the list is split over several pages. To go from
page to page, click the page number at the bottom.
3. To view the latest available items for the data type, click the Refresh icon on the data type.
4. You can limit the number of data items that display by entering a search string in the Filter field. For
example, add the following syntax to the Filter field: totalMemory=256. Click Refresh on the data
items menu to show the filtered results.
Filter Retrieved Data Items: The filter searches all the fields in the current set of paged results
containing the search text. If the number of results requires the results to be paged, the filter only
filters the results on the current page. The filter is cleared when you navigate between pages.
Tip: If your UI Data Provider data type is based on a Netcool/Impact policy, you can add
&executePolicy=true to the Filter field to run the policy and return the most up to date filtered
results for the data set.
For more information about using the Filter field and GetByFilter function runtime parameters to limit
the number of data items that are returned, see “Using the GetByFilter function to handle large data
sets” on page 100.

Using the GetByFilter function to handle large data sets


You can extend the GetByFilter function to support large data sets. To fetch items from a UI data provider
with the GetByFilter, additional input parameters can be added to the filter value of the GetByFilter
function. Additional filter parameters allow you to refine the result set returned to the policy.
The UI data provider REST API supports the following runtime parameters:



• count: limits the size of the returned data items.
• start: specifies the pointer to begin retrieving data items.
• param_*: sends custom parameters to data sets that the UI data provider uses during construction and
data presentation. The UI Data Provider server recognizes any additional parameters and handles the
request if the parameter has the prefix param_. These values are also used to uniquely identify a data
set instance in the REST service cache.
• id: If used, it fetches a single item. The id parameter specifies the id of the item you want to retrieve. For
example, &id=1. If the id parameter is used, all other filtering parameters are ignored.
Tip: If your UI Data Provider data type is based on a policy, then you can add executePolicy=true
to the Filter parameter in GetByFilter(DataType, Filter, CountOnly) to run the policy and
ensure that the latest data set results are returned by the provider.

This policy example uses the Filter runtime parameters in a GetByFilter(DataType, Filter,
CountOnly) implementation in a UI data provider.

DataType = "123UIdataprovider";
CountOnly = false;

// Example filter expressions. Each assignment replaces the previous one,
// so only the final Filter value is used in the GetByFilter call below.
Filter = "t_DisplayName = 'Windows Services'";
Filter = "t_DisplayName starts 'Wind'";
Filter = "t_DisplayName ends 'ces'";
Filter = "t_DisplayName contains 'W'&count=6&param_One=paramOne";
Filter = "t_DisplayName contains 'W'&count=3&start=2";
Filter = "((t_DisplayName contains 'Wi') or (t_InstanceName !isnull))";
Filter = "((t_DisplayName contains 'Wi') or (t_InstanceName='NewService'))&count=3";
Filter = "((t_DisplayName contains 'Wi') or (t_InstanceName='NewService'))&count=5&start=1";

MyFilteredItems = GetByFilter(DataType, Filter, CountOnly);

Log("RESULTS: GetByFilter(DataType=" + DataType + ", Filter=" + Filter +
    ", CountOnly=" + CountOnly + ")");

// Num is set by GetByFilter to the number of matched data items.
Log("MATCHED item(s): " + Num);

index = 0;
if (Num > 0) {
    while (index < Num) {
        Log("Node[" + index + "] id = " + MyFilteredItems[index].id +
            "---Node[" + index + "] DisplayName= " +
            MyFilteredItems[index].t_DisplayName);
        index = index + 1;
    }
}
Log("========= END =========");

Here are some more syntax examples of the Filter runtime parameters that you can use in a
GetByFilter(DataType, Filter, CountOnly) implementation in a UI data provider.
Example 1:

Filter = "&count=6";

No condition is specified. All items are fetched by the server, but only the first 6 are returned.
Example 2:

Filter = "&count=3&start=2";

No condition specified. All items are fetched by the server, but only the first 3 are returned, starting at
item #2
Example 3:

Filter = "t_DisplayName ends 'ces'";



Only items that match the condition t_DisplayName ends 'ces' are fetched.
Example 4:

Filter = "t_DisplayName contains 'W'&count=6&param_One=paramOne";

Only items that match the condition t_DisplayName contains 'W' are fetched. Of those, only the
first six are returned, and the custom parameter paramOne is available for use by the provider when it
returns the data set.
Example 5:

Filter = "&param_One=paramOne";

All items are fetched by the server, and paramOne is available for use by the provider when it returns the
data set.

Adding Delimiters
The default delimiter is the ampersand (&) character. You can configure a different delimiter by editing
the impact.uidataprovider.query.delimiter property in the NCI_server.props file, where
NCI is the name of your Impact Server. Whenever you change the delimiter, you must restart the Impact
Server to implement the change.
The delimiter can be any suitable character or regular expression that is not part of the data set name or
any of the characters used in the filter value.
The following characters must be escaped with a double backslash (\\) when used as a delimiter:

* ^ $ . |

Examples:
An example using an Asterisk (*) as a delimiter:
• Property Syntax: impact.uidataprovider.query.delimiter=\\*
• Filter query: t_DisplayName contains 'Imp'*count=5
An example with a combination of characters:
• Property Syntax:impact.uidataprovider.query.delimiter=ABCD
• Filter query: t_DisplayName contains 'Imp'ABCDcount=5
An example of a regular expression, subject to Java regular expression rules:
• Property Syntax: impact.uidataprovider.query.delimiter=Z|Y
• Filter query: t_DisplayName contains 'S'Zcount=9Zstart=7YexecutePolicy=true
An example of a combination of special characters: * . $ ^ |
• Property Syntax: impact.uidataprovider.query.delimiter=\\*|\\.|\\$|\\^|\\|
• Filter query: t_DisplayName contains 'S'.count=9|start=7$executePolicy=true



Chapter 7. Working with links
Links are elements of the data model that define relationships between data types and data items.
You set up links after you create the data types that are required by your solution. Static links define
relationships between data items, and dynamic links define relationships between data types. Links are
an optional component of the Netcool/Impact data model. When you write policies, you can use the
GetByLinks function to traverse the links and retrieve data items that are linked to other data items.

Dynamic links
Dynamic links define a relationship between data types.
This relationship is specified when you create the link and is evaluated in real time when a call to the
GetByLinks function is encountered in a policy. Dynamic links are supported for internal, SQL database
and LDAP data types.
The relationships between data types are resolved dynamically at run time when you traverse the link in a
policy or when you browse links between data items. They are dynamically created and maintained from
the data in the database.
The links concept is similar to a JOIN operation in an SQL database. For example, there might be a 'Table
1' containing customer information (name, phone number, address, and so on) with a unique Customer
ID key. There may also be a 'Table 2' containing a list of servers. In this table, the Customer ID of
the customer that owns the server is included. When these data items are kept in different databases,
Netcool/Impact enables the creation of a link between Table 1 and Table 2 through the Customer ID field,
so that you can see all the servers owned by a particular customer.
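As a sketch of that example, a link by filter from the customer data type (source) to the server data type
(target) could use the link filter syntax described in "Adding new links by filter", assuming both types
expose a numeric CustomerID field (the field name is illustrative):

```
CustomerID = %CustomerID%
```

When the link is traversed, Netcool/Impact returns the server data items whose CustomerID value
equals the CustomerID of the source customer data item.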
You can use dynamic links only at the database level. (When relationships do not exist at the database
level, you need to create static links.) You can create dynamic links for all types of data types (internal,
external, and predefined). See Chapter 5, “Configuring data types,” on page 69 for information about the
kinds of data type.
Dynamic links are unidirectional links, configured from the source to the target data type.

Static links
Static links define a relationship between data items in internal data types.
Static links are supported for internal data types only. Static links are not supported for other categories
of data types, such as SQL database and LDAP types, because the persistence of data items that are
stored externally cannot be ensured.
A static link is manually created between two data items when relationships do not exist at the database
level.
With static links, the relationship between data items is static and never changes after they have been
created. You can traverse static links in a policy or in the user interface when you browse the linked data
items. Static links are bi-directional.

Working with dynamic links


Dynamic links use a specified method to link data items of the source data type to the data items of a
target data type.
The linking methods are described:



Table 84. Linking Methods

Link Description
By:

Key This method evaluates an expression from one data type and matches this to the key field of
the target data type.

Filter This method uses a filter expression to describe the link between any fields in the source type
to any fields of the target data type.

Policy This method runs a specified policy to look up data items in the target and link all the retrieved
data items to data items of the source type.

Creating dynamic links


Use this procedure to create a dynamic link.

Procedure
1. To open the Data Type editor, click a data type name.
2. In the Data Type editor, select the Dynamic Links tab.
3. You can create the following types of dynamic links:
• Link By Filter. For more information about creating links by filter, see “Adding new links by filter” on
page 104.
• Link By Key. For more information about creating links by key, see “Adding new links by key” on page
105.
• Link By Policy. For more information about creating links by policy, see “Adding new links by policy” on
page 106.
Tip: To create a new link by policy, you may need to scroll down so that the Link By Policy area is
visible.
4. Select the target data type from the Target Data Types list.
5. Select the exposed link type from the Exposed Link Type list.
6. Depending on the type of link you are creating, type in the filter, key expression, or select a policy.
• For a link by filter, type the filter syntax for the link in the Filter into Target Data Type field. For
example: Location = '%Facility%'.
• For a link by key, type the key expression in the Foreign Key Expression field. For example:
FirstName + ' ' + LastName.
• For a link by policy, select the linking policy from the Policy To Execute to Find Links list.
7. Click OK and click Save on the main tab to implement the changes.

Adding new links by filter


A link by filter is a type of dynamic link where the relationship between two data types is specified by
using the link filter syntax.

About this task


The link filter syntax is as follows:

target_field = %source_field% [AND (target_field = %source_field%) ...]



Where target_field is the name of a field in the target data type and source_field is the name of
the field in the source data type. When you call the GetByLinks function in a policy, Netcool/Impact
evaluates the data items in the target data type and returns those items whose target_field value is
equal to the specified source_field.
If the value of source_field is a string, you must enclose it in single quotation marks.
The following examples show valid link filters:

Location = '%Name%'
(NodeID = %ID%) AND (Location = '%Name%')

Use the following steps to add a new link by filter.

Procedure
1. Click New Link by Filter.
2. Enter the information in the New Link By Filter window
a) Select the Target data type from the list.
b) In the Exposed Link Type menu, select a link to follow from the list. The list contains the target
data type name (in other words, the exposed link) and the LinkType data items that match this
source and target. See “LinkType data types” on page 95.
c) In the Filter into target Data Type field, type the filter. A filter is an expression that specifies which
fields in the source and target types must match in order for a link to exist. It can be either a simple
expression (source name = target name) or a complex expression that uses a Boolean operator to
indicate the order of evaluation, for example:

(Custname = '%customer%') AND (device_num = %DeviceNumber%)

The link shows in the New Link By Filter table in the Dynamic Links tab.
3. Click OK and click Save on the main tab to implement the changes.

Adding new links by key


A link by key is a type of dynamic link where the relationship between two data types is specified by a
foreign key expression. When you define a Link by Key dynamic link, you specify a field in the source and
target data types that contains a matching value.

Procedure
1. Click New Link by Key.
2. Enter following information in the window.
a) Select the Target Data Type from the list.
For example, User.
b) In the Exposed Link Name field, select a link to follow from the list.
For example, User. The list contains the target data type name (in other words, the exposed link)
and the LinkType data items that match this source and target.
c) Type the Foreign Key Expression.
For example: LastName + ", " + FirstName. For more information about foreign key
expressions, see “Foreign key expressions” on page 106.

The new link shows as a row in the New Link By Key table in the Dynamic Links tab.
3. Click OK and click Save on the main tab to implement the changes.



Foreign key expressions
You can build the expression from one or more fields.
When you call the GetByLinks function in a policy, Netcool/Impact evaluates the data items in the target
data type and returns those data items whose key field values match the specified key expression.
Type a field name or combination of field names in the source type that match the Key field in the target
type. For example, if you want the key into the source type to be a field called 'NodeName', you enter
NodeName. You can enter more than one field by using the '+' character to join them.
For example, if the source type has a FirstName field and a LastName field and the target Key field is
Name, you can create the link by entering the following expression:

FirstName + ' ' + LastName

The expression is applied to the following field value pairs, for example, if in the source the fields are:

FirstName = 'John'
LastName = 'Doe'

The resulting value for the target Key field (Name in this case) is:

Name = 'John Doe'

This matches to:

'John' + ' ' + 'Doe' = 'John Doe'

Adding new links by policy


A link by policy is a type of dynamic link where the relationship between two data types is specified by
a policy. The policy contains the logic that is used to retrieve data items from the target data type. The
linking policy specifies which data items to return by setting the value of the DataItems variable.
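As a sketch, a minimal linking policy might look like the following IPL fragment, assuming a target
data type named Server with a CustomerID field (both names are illustrative, and the source data
item's fields are available to the policy at run time):

```
// Hypothetical linking policy: look up the target data items to link.
Servers = GetByFilter("Server", "CustomerID = " + CustomerID, false);

// Assigning the result to DataItems tells Netcool/Impact which
// data items the link returns.
DataItems = Servers;
```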

Procedure
1. Click New Link by Policy.
The New Link By Policy window opens.
2. Enter the following information in the window
a) Select the Target Data Type from the list.
For example, LinkPolicy.
b) Select a link from the Exposed Link Type list.
For example, LinkPolicy. The list contains the target data type name (in other words, the exposed
link) and the LinkType data items that match this source and target.
c) Select a policy from the list of available policies.
For example, GetPolicy.
The new link appears as a row in the table in the Dynamic Links tab.
3. Click OK and click Save on the main tab to implement the changes.

Editing and deleting dynamic links


Use this procedure to edit a dynamic link.

Procedure
1. To edit a link, click the Edit icon in the row of the link you want to edit.
2. Make any necessary changes. Click OK and click Save on the main tab to implement the changes.



See “Working with dynamic links” on page 103 for more details.
3. To delete a link, in the Select: column, select the links that you want to delete.
• If you want to delete all the links in the table, click the All link. Check marks are placed in every
check box in the Select: column and the data links are highlighted.
4. Click the Delete link to delete the selected links.

Working with static links


You can view static links and create a static link between the data items of internal data types.

Creating static links


You can view static links and create a static link between the data items of internal data types. Use this
procedure to create a static link for an internal data type.

Procedure
1. Click Data Model to open the Data Model tab.
2. Expand the Data Source that contains the internal data type you want to link, right-click and select
View Data Items.
The Data Item editor opens in the main panel.
3. Click the Edit Links icon in the Edit Links column next to one of the data item rows.
The Link Editor tab opens.
4. Select Target Type of Linked Items from the selection list.
Only Internal and Predefined data types show in the list.
5. To add a link, highlight the data items that you want that are listed in the Unlinked Data Items list and
click Add.
The items move to the Linked Data Items and LinkTypes list.
6. To remove a link, highlight the data items that you want to remove from the Linked Data Items list and
click Remove.
The data items are returned to the Unlinked Data Items list.
7. Click Save and then the Back icon to return to the data item list.



Chapter 8. Working with policies
You use the policy editor to create, manipulate, save, delete and edit policies.
You can create new policies from scratch, or use a policy wizard. Policy wizards present a series of
windows that help you through the policy creation process.

Policies overview
Policies consist of a series of function calls that manipulate events and data from your supported data
sources.
A policy, for example, can contain a set of instructions to automate alert management tasks, defining the
conditions for sending an e-mail to an administrator, or sending instructions to the ObjectServer to clear
an event.
You use the policy editor to create, manipulate, save, delete and edit policies. You can create new policies
from scratch, or use a policy wizard. Policy wizards present a series of windows that help you through the
policy creation process.

Accessing policies
Use this procedure to view, edit and delete policies.

About this task


Before you create any policies, the Policies tab is empty. To view a list of policies for a project, select a
project from which you want to view the policies. If you want to view a list of all your policies, not just the
policies associated with a particular project, you can access the entire list in the global repository. You can
also create a new policy in the global repository if you do not want to add it to a project at the current
time. It can be added to a project later. For more information about the global repository, see “Global
repository” on page 6.

Procedure
1. Click Policies to open the Policies tab.
a) From the Cluster and Project lists, select the cluster and project you want to use.
The list of policies is displayed.
2. To edit a policy, in the Policies tab, select a policy name in the list.
a) Right-click the policy and select Edit or click the Edit icon in the toolbar.
3. To delete a policy, select the policy in the policies pane and click the Delete Policy icon in the toolbar.
a) You can also delete a policy by right-clicking its name in the policies pane and selecting Delete in
the menu.



Policies panel controls
An overview of the policies task pane icons and indicators.

Table 85. Policy task pane controls

icon Description

Click the New Policy icon to create an IPL policy. To create a policy using JavaScript, select
the JavaScript Policy option. To create a policy using one of the policy wizards, select Use
Wizard.
Remember: If you use UTF-8 characters in the policy name, make sure that the locale on the
Impact Server where the policy is saved is set to the UTF-8 character encoding.

Select a policy and use this icon to edit it. Alternatively, you can edit a policy by right clicking its
name and selecting Edit in the menu.

Select a policy and use this icon to delete it from the database. Alternatively, you can delete a
policy by right clicking its name and selecting Delete in the menu.

Click the icon to open a window where you can recover an auto-saved policy.
When the Enable Autosave option is selected, a temporary copy of the policy that you are
working on is saved periodically. This feature preserves your work in the event of a session timeout,
browser crash, or other accident. Automatically saved policies are not shown in the policies
navigation panel and are not replicated among clusters or included in imports. You must first recover and save
the drafted policy before you run it. For more information about recovering auto-saved policies,
see “Recovering automatically saved policies” on page 113.

Upload a Policy File. Click the icon to open the Upload a Policy window. You can upload
policy and policy parameters files that you wrote in an external editor or files that you created
previously.

This icon is visible when a policy is locked, or the item is being used by another user. Hover
the mouse over the locked item to see which user is working on the item. You can unlock
your own items but not items locked by other users. If you have an item open for editing, you
cannot unlock it; save and close the item first. To unlock an item that you have locked, right-click
the item name and select Unlock. In exceptional circumstances, only users who are assigned
the impactAdminUser role can unlock items that are locked by another user.

Writing policies
You write policies in the policy editor by using one of the following methods.
• You can write them from scratch with IPL or JavaScript. In the Policies tab, select New Policy > IPL
Policy or New Policy > JavaScript Policy.
• You can use a policy wizard. For more information, see “Policy wizards” on page 110.

Policy wizards
You use policy wizards to create simple policies without having to manually create data types and add
functions.
The wizards consist of a series of windows that guide you through the policy creation process. At the end
of the process, you can run the policy immediately without any further modification. However, if you want
to modify the policy at any time, you can do so using the Policy editor.

110 Netcool/Impact: User Interface Guide


Note: The OMNIbus event reader service must be running before you can use any of the wizards, except for
the Web Services and XML DSA wizards.
You can use the following policy wizards:
Event Enrichment
Event enrichment is the process by which Netcool/Impact monitors an event source for new events,
looks up information related to them in an external data source and then adds the information to
them.
Event Notification
Event notification is the process by which Netcool/Impact monitors an event source for new events
and then notifies an administrator or users when a certain event or combination of events occurs.
Event Notification policies notify you that an event has occurred. Before you can use the Event
Notification policy wizard, configure the e-mail sender service.
Event Relocation
You can use Event Relocation policies to send an event from one central ObjectServer to another
ObjectServer.
Event Suppression
Event Suppression policies set a flag in an event in response to a database query. This flag can then be
used in a filter to prevent the event from appearing in the Event List.
XinY
X events in Y time is the process in which Netcool/Impact monitors an event source for groups of
events that occur together and takes the appropriate action based on the event information. X Events
in Y policies suppress events until a certain number of identifiable events occur within a specified time
period.
You can configure two main parameters in the wizard.
• The number of incidents (N) in an event that will cause a violation.
• The length of the rolling time window in which these (N) incidents must occur to run the violation.
The XinY policy wizard tracks how many times a single event with a single identifier is inserted
or updated during the time window. If this number reaches (N) then it sends an event to Netcool/
OMNIbus indicating that the threshold for incidents has been exceeded for the particular event. The
XinY policy wizard tracks incidents separately for each of the events that match the filter that triggers
the policy.

XML
XML policies are used to read and to extract data from any well-formed XML document.
Web Services
Web Services DSA policies are used to exchange data with external systems, devices, and applications
using Web Services interfaces.
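The X-events-in-Y-time behavior described above can be sketched outside the product. The following plain JavaScript is an illustrative model only; the function name, event identifiers, and timestamps are invented for the example and are not part of Netcool/Impact:

```javascript
// Illustrative sketch of X-events-in-Y-time detection (not product code).
// Tracks timestamps per event identifier and reports a violation when
// n or more occurrences fall inside a rolling window of windowSeconds.
function makeXinYTracker(n, windowSeconds) {
  const seen = new Map(); // identifier -> array of timestamps (seconds)

  return function record(identifier, timeSeconds) {
    const times = seen.get(identifier) || [];
    times.push(timeSeconds);
    // Keep only the timestamps that are still inside the rolling window.
    const cutoff = timeSeconds - windowSeconds;
    const recent = times.filter(function (t) { return t > cutoff; });
    seen.set(identifier, recent);
    // A violation is where the wizard-generated policy would send the
    // threshold-exceeded event to Netcool/OMNIbus.
    return recent.length >= n;
  };
}
```

With n = 3 and a 60-second window, the third occurrence of the same identifier inside the window reports a violation, while occurrences with other identifiers are counted separately, mirroring the wizard's per-event tracking.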

Writing policies using wizards


Use this procedure to develop a policy using a wizard.

Procedure
1. In the Policies tab, select the arrow next to the New Policy icon. To run the Web services wizard,
select Use Wizard > Web Services.
2. In the Web Services Invocation-Introduction window, type your policy name in the Policy Name
field. Click Next to continue.
3. In the Web Services Invocation-WSDL file and Jar File window, in the URL or Path to WSDL field,
enter the URL or a path for the target WSDL file.
For example, http://www.webservicex.net/stockquote.asmx?wsdl.

Chapter 8. Working with policies 111


In instances where the GUI server is installed separately from the back-end server, the file path for the
WSDL file refers to the back-end server file system, not the GUI server file system. If you enter a URL
for the WSDL file, that URL must be accessible to the back-end Impact Server host and the GUI server
host.
Note: If the WSDL file contains XSD imports, these files are provided separately. The WSDL files and
related XSD files must be placed in a directory with no spaces.
4. In the Jar file area, select one of the following available options:
• Select a previously generated jar file for the WSDL file:
Applies if you generated a jar file from a WSDL file previously. Select one of the existing jar files
from the list menu.
– Currency.jar
– Stock.jar
– length.jar
The Package Name field is automatically completed. Select the Edit check box to modify the
package name.
• Provide a package name for the new jar file :
Select this option to create a jar file. Complete the Package Name field for the new jar file. The
package name cannot have a period ".".
Click Next.
5. In the Web Service Invocation-Web Service Name, Port and Method window, select the general web
service information for the following items: Web Services, Web Service Port Type, and Web Service
Method. Click Next.
6. In the Web Services Invocation - Web Service Method parameters window, enter the parameters
that are required by the target web service method. Click Next.
A Complex Type is a composite of another type. Expand the parameter name to view what information
is required.
For a Collection Type, multiple values are required. You must first enter a size for the collection
when prompted. When you click OK, parameter entry fields are generated for each item in the
collection.
7. Optional: In the Web Service Invocation-Web Service EndPoint window, you can edit the URL or
Path to WSDL by selecting the edit check box. To enable web service security, select the Enable web
service security service check box.
Select one of the following authentication types:
• HTTP user name authentication
• SOAP message user name authentication
Add the User name and Password. Click Next.
8. The Web Service Invocation-Summary and Finish window is displayed. It shows the name of the
policy. Click Finish to create the policy.

XML policies
XML policies are used to read and to extract data from any well-formed XML document.
The XML DSA can read XML data from files, from strings, and from HTTP servers via the network (XML
over HTTP). The HTTP methods are GET and POST. GET is selected by default. In the XML wizard you
can specify the target XML source and the schema file, to create the corresponding data source and data
types for users. The wizard also updates the necessary property files and creates a sample policy to help
you start working with XML DSA. When choosing the XML String option in the XML DSA wizard, ensure that
the XML string you copy and paste does not contain references to stylesheet-related tags.
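The caution above about stylesheet references can be handled before pasting. This plain JavaScript sketch (an illustration, not part of the product) removes xml-stylesheet processing instructions from an XML string:

```javascript
// Illustrative helper (not product code): strip xml-stylesheet
// processing instructions such as <?xml-stylesheet ... ?> from an
// XML string before pasting it into the XML DSA wizard.
function stripStylesheetPIs(xml) {
  // Match the whole processing instruction plus any trailing whitespace.
  return xml.replace(/<\?xml-stylesheet[\s\S]*?\?>\s*/g, "");
}
```

The XML declaration (`<?xml version="1.0"?>`) is left untouched because the pattern requires the literal `xml-stylesheet` target.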

Recovering automatically saved policies
When the autosave option is selected you can recover and save an automatically saved policy.

Procedure
1. In the Policies tab, click the Auto-Save version icon in the toolbar.
2. Choose one auto-saved policy from the Drafted Policy list.
3. Click Open to view the drafted policy in the editor.
4. Click Save to save the drafted policy.

Working with the policy editor


The GUI provides a policy editor that you can use to create and edit policies.
The policy editor offers a text editor with syntax highlighting, a function browser, a syntax checker, and
other utilities to make it easy to manage policies. You can also write policies in an editor of your choice
and then upload them into Netcool/Impact. After they are uploaded, you can edit them and check the
syntax by using the policy editor.
Tip: Throughout the documentation, there are code examples that you can copy and paste into the
product. In instances where code or policy examples that contain single quotation marks are copied
from the PDF documentation the code examples do not preserve the single quotation marks. You need
to correct them manually. To avoid this issue, copy and paste the code example content from the html
version of the documentation.
Netcool/Impact 7.1.0.6 ships with a new Policy Editor. By default, Netcool/Impact displays the
existing Policy Editor. You can access the new policy editor by enabling the Use new policy editor (beta)
checkbox in the user preferences. The new policy editor offers several enhancements to the existing
editor including bracket matching, code folding, tab key support, status reporting, improved find/replace
and an auto-complete tool.
Note: If you create and edit a policy by using an external editor of your choice, you must check its syntax
with the nci_policy script before you run it. For more information about the nci_policy script, see
the Administration Guide.

Policy editor toolbar controls


An overview of the policy editor toolbar controls.

Table 86. Policy Editor toolbar options

Icon Description

The Save icon saves the current policy.


Use the Save with comments option to save your policy with comments.
To save a policy with a different file name click Save as....
Remember: If you use UTF-8 characters in the policy name, check that the
locale on the Impact Server where the policy is saved is set to the UTF-8
character encoding.

Restore your work to its state before your last action, for example, adding,
moving, or deleting text. Undo works for one level only.

Restore your work to its state before you selected the Undo action. Redo
works for one level only.


Cut highlighted text. In some instances, due to browser limitations, the
Cut icon cannot be activated. Use the keyboard shortcut Ctrl+X instead.

Copy highlighted text. In some instances, due to browser limitations, the
Copy icon cannot be activated. Use the keyboard shortcut Ctrl+C instead.

Use this icon to paste cut or copied text to a new location. In some
instances, due to browser limitations, the Paste icon cannot be activated.
Use the keyboard shortcut Ctrl+V instead.
To copy and paste rich text formatted content, for example from a web
page or document file:
1. Paste the content into a plain text editor first to remove the rich text
formatting.
2. Copy the content from the plain text editor into the policy editor.

Use this icon to find and replace text in a policy. To search for a text string,
type the text that you want to find, choose whether to run a case-sensitive
search, and choose the direction of the search. To search and replace, also
type the replacement text, then choose the case-sensitivity and search-direction
options.

Click the Go To icon to show a Go To Line field in the policy editor. Type the
number of the line you want the cursor to go to. Click Go.

Insert a selected function, an action function, or a parser function, into your
policy. Add additional parameters for the function if required.
The toolbar selection lists provide you with a set of functions to use in your
policy.

Access a list of data types. The Data Type Browser icon simplifies policy
development by showing available data types and details including field
name and type information. You do not have to open the data type viewer
to get the data type information.

The Check Syntax icon checks the policy for syntax errors. If there are
errors, the error message locates the error by the line number. If there are
no errors, a message to that effect is shown.

Optimize the policy. For more information, see “Optimizing policies” on
page 116.

Click the Run Policy icon to start the policy. After removing all syntax
errors, you can run the policy to ensure that it produces the result you
wanted. To run your policy with additional parameters, use the Run with
Parameters option. You can use this option after you configure policy
settings for your policy.


Use this icon to configure settings for the policy. For more information, see
“Configuring policy settings in the policy editor” on page 117.

Click the View Version History icon to view the history of changes made to
policies, and compare different versions of policies. For more information
about version history interface, see “Using version control interface” on
page 127.
Important:
The View Version History icon is disabled for new and drafted policies and
it becomes active after the policy is committed to server.
This option is supported only with the embedded SVN version control
system.

Click this icon to view the policy logs in the log viewer. For more
information about the policy log viewer, see “Services log viewer” on page
135.

Click this icon to manually enable or disable the syntax highlighter. For
information about automatically configuring the syntax highlighter, see
“Policy syntax highlighter” on page 115.

Policy syntax checking


While you are creating your policy, you can check to ensure that the syntax is correct.
When you select the Check Syntax icon, a list of errors are shown in a dialog box. If there are no errors in
the policy, the following message is displayed:

Syntax check successful. No error found.

If the checker finds errors, you will see a table listing all the errors that were found.
The Type column of the table contains an error indicator, either Warning or Error.
The Line column of the table contains the line number where the error occurred. To find the error, click
the line number. The editor scrolls to that line in the script.
The Message column of the table outlines the error.

Policy syntax highlighter


When you create a policy, the syntax highlighter can be configured to automatically toggle itself off at
startup if the policy exceeds a predefined character limit. When working with large policies in the policy
editor, disabling the syntax highlighter can alleviate possible performance slowdowns.
Note: For the beta policy editor, the syntax highlighter is always enabled. The toggle icon and character
limit have no effect for the beta editor.

Procedure
1. Open a policy. In the policy editor toolbar, click the toggle icon to manually enable or disable the syntax
highlighter.
2. The syntax highlighter can be configured to automatically toggle itself off at startup when the policy
exceeds a specified character limit.

a) On the menu bar click Options > Preferences tab.
b) In the Character limit for syntax highlighting field, type the character limit for the policies.
When a policy reaches this character limit, the syntax highlighter is automatically turned off.
c) Click Save.
d) Reopen the Policies tab to implement the changes.

Optimizing policies
After you create your policy, you can check to see whether there is a way to improve it.

Procedure
1. Click the Optimize icon.
The Optimization handles three functions:
• Hibernate
• GetByKey
• GetByFilter
For the Hibernate function, the optimization checks that the policy contains a
RemoveHibernation function with the same hibernation key and notifies you if it does not. For the
GetByKey and GetByFilter functions, the optimization checks the data type to see which fields it
returns, and then checks whether the policy uses all of those fields. If some of the fields from the
data type are not being used, you receive a message showing which fields are unused. You can change
the data type fields if required.
2. Click Save to implement any changes.
When you change a policy and you want to click Optimize again you must save the policy first. The
optimize feature works from the saved version and not the modified version.
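The Hibernate check described above can be approximated with a simple text scan. The sketch below is a plain JavaScript illustration of the idea only, not the product's optimizer, and it assumes the hibernation key appears as the first, double-quoted argument:

```javascript
// Illustrative sketch (not the product optimizer): find hibernation keys
// that are passed to Hibernate(...) but never to RemoveHibernation(...).
// Assumption: keys appear as double-quoted literals in the policy text.
function findUnremovedHibernationKeys(policyText) {
  const collect = function (fnName) {
    const re = new RegExp(fnName + '\\(\\s*"([^"]*)"', "g");
    const keys = [];
    let m;
    while ((m = re.exec(policyText)) !== null) { keys.push(m[1]); }
    return keys;
  };
  const hibernated = collect("Hibernate");
  const removed = collect("RemoveHibernation");
  // Keys that hibernate but are never removed would trigger the
  // optimizer's notification in the real product.
  return hibernated.filter(function (k) { return removed.indexOf(k) === -1; });
}
```

A real policy passes more arguments to these functions; the regex here only illustrates the pairing check, not the actual function signatures.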

Running policies with parameters in the editor


If you specified any parameters for the policy, you can run the policy with these parameters in the policy
editor.

Procedure
1. Click the Run with Parameters icon to open the Policy Trigger window.
Note: The fields you see depend on the policy parameters and values you specified for the policy. If
you have not set a default value for a parameter you must provide it now, otherwise a NULL value will
be passed.
Output parameters are required if you want to show policy output through a UI data provider. For more
information about setting parameters, see “Configuring policy settings in the policy editor” on page
117.
2. Click Execute to run the policy with parameters.

Browsing data types


Use this procedure to view available data types and their details directly from the policy editor.

Procedure
1. Click the Types Browser icon.
2. Click a data type to see the details. The Data Type Detail window opens and shows the details.

Configuring policy settings in the policy editor
Use this procedure to configure the policy settings for your policy in the policy editor.

Procedure
1. In the policy editor toolbar, click the Configure Policy Settings icon to open the policy settings editor.
You can create policy input and output parameters and also configure actions on the policy that relates
to UI Data Provider and Event Isolation and Correlation options.
2. Click New to open the Create a New Policy Input Parameter window or the Create a New Policy
Output Parameter window or the Create New policy action window as required.
For more information, see “Configuring policy parameters and enabling actions” on page 117.
Enter the information in the configuration window. Required fields are marked with an asterisk (*). If
you select DirectSQL as the format, see “Creating custom schema values for output parameters” on
page 118.
3. To edit an existing input or output parameter, select the check box next to the parameter and select
edit in the corresponding cell of the Edit column.
4. To enable a policy to run with a UI data provider, select the Enable policy for UI Data Provider
Actions check box.
5. To enable a policy to run with the Event Isolation and Correlation capabilities, select the Enable
Policy for Event Isolation and Correlation Actions check box.
6. Click OK to save the changes to the parameters and close the window.

Configuring policy parameters and enabling actions


You can use the Policy settings editor to configure input and output parameters on a policy. You can also
enable options for use with UI data providers and Event Isolation and Correlation features.

Procedure
1. In the Policy Input Parameters section, click New to create a policy input parameter.
a) In the Name field, type a name to describe the parameter.
b) In the Label field, add a label. The label is displayed in the Policy Trigger window.
c) From the Format menu, select the format of the parameter.
d) In the Default Value field, add a default value. This value is displayed in the Policy Trigger window.
e) In the Description field, add a description for the parameter.
2. In the Policy Output Parameters section, click New to create a policy output parameter.
Tip: When you create multiple output parameters, remember each policy output parameter that you
create generates its own data set. When you assign a data set to a widget, only those tasks that are
associated with the specific output parameter are run.
a) In the Name field, type a name to describe the parameter.
b) In the Policy Variable Name field, add the variable name. The variable name is displayed in the
Policy Trigger window.
c) From the Format menu, select the format of the parameter.

d) Click the Schema Definition Editor icon.
If you define output parameters that use the DirectSQL/UI Provider Datatype, Impact Object, or
Array of Impact Object formats, click this icon to create the custom schema definition values. For
more information, see “Creating custom schema values for output parameters” on page 118.
e) In the Default Value field, add a default value. This value is displayed in the Policy Trigger window.

f) In the Data Source Name field, type the name of the data source that is associated with the output
parameter.
g) In the Data Type Name field, type the name of the data type associated with the output parameter.
3. In the UI Data Provider Policy Related Actions section, click New to create a UI Data Provider policy
related action.
You can use this option to enable a policy action on a widget in the console (the Dashboard
Application Services Hub in Jazz for Service Management).
a) In the Name field, add a name for the action.
The name that you add displays in the widget in the console when you right-click an item in the
specified widget.
b) In the Policy Name menu, select the policy that you want the action to relate to.
c) In the Output Parameter menu, select the output parameter that is associated with this action. If
you select the All Output Parameters option, the action will be available for all output parameters
for the current policy.
4. To enable a policy to run with a UI data provider, select the Enable policy for UI Data Provider
Actions check box.
5. To enable a policy to run with the Event Isolation and Correlation capabilities, select the Enable
Policy for Event Isolation and Correlation Actions check box.
6. Click OK to save the changes to the parameters and close the window.

Creating custom schema values for output parameters


When you define output parameters that use the DirectSQL, Array of Impact Object, or Impact Object
format in the user output parameters editor, you also must specify a name and a format for each field that
is contained in the DirectSQL, Array of Impact Object, or Impact Object objects.

About this task


Custom schema definitions are used by Netcool/Impact to visualize data in the console and to pass values
to the UI data provider and OSLC. You create the custom schemas and select the format that is based on
the values for each field that is contained in the object. For example, you create a policy that contains two
fields in an object:

O1.city="NY"
O1.ZIP=07002

You define the following custom schemas values for this policy:

Table 87. Custom schema values for City


Field Entry
Name City
Format String

Table 88. Custom schema values for ZIP


Field Entry
Name ZIP
Format Integer

If you use the DirectSQL policy function with the UI data provider or OSLC, you must define a custom
schema value for each DirectSQL value that you use.
If you want to use the chart widget to visualize data from an Impact object or an array of Impact objects
with the UI data provider and the console, you define custom schema values for the fields that are

contained in the objects. The custom schemas help to create descriptors for columns in the chart during
initialization. However, the custom schemas are not technically required. If you do not define values for
either of these formats, the system later rediscovers each Impact object when it creates additional fields
such as the key field, UIObjectId, or the field for the tree widget, UITreeNodeId. You do not need to
define these values for OSLC.
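As an illustration of how such a schema pairs field names with formats, the sketch below represents the two tables above as data and checks an object like O1 against them. This is plain JavaScript with invented names; the product stores schema definitions through the GUI, not as code. The lower-case `city` follows the policy snippet above:

```javascript
// Illustrative sketch (not product code): the custom schema from the
// tables above, expressed as a list of { name, format } descriptors.
const schema = [
  { name: "city", format: "String" },
  { name: "ZIP", format: "Integer" },
];

// Check that each schema field exists on the object with the right format.
function matchesSchema(obj, schemaDef) {
  return schemaDef.every(function (field) {
    const value = obj[field.name];
    if (field.format === "String") { return typeof value === "string"; }
    if (field.format === "Integer") { return Number.isInteger(value); }
    return false; // unknown format in this sketch
  });
}
```

For the example object above (city "NY", ZIP 7002), the check passes; a ZIP stored as a string would fail the Integer format.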

Procedure
1. In the Policy Settings Editor, select DirectSQL, Impact Object, or Array of Impact Object in the
Format field.

2. The system shows the Schema Definition Editor icon beside the Schema Definition
field. To open the editor, click the icon.
3. You can edit an existing entry or you can create a new one. To define a new entry, click New. Enter a
name and select an appropriate format.
To edit an existing entry, click the Edit icon beside the entry that you want to edit.
4. To mark an entry as a key field, select the check box in the Key Field column. You do not have to define
the key field for Impact objects or an array of Impact objects. The system uses the UIObjectId as the
key field instead.
5. To delete an entry, select the entry and click Delete.

Adding functions to policy


Use this procedure to add a function to a policy.

Procedure
1. Click the Insert function icon and select one of the functions.
2. Enter the required parameters in the new function configuration window.
Note: When entering a string, check that all string literals are enclosed in quotation marks ("string"), to
distinguish them from variable names, which do not take quotation marks.
For the beta policy editor, you can also access the auto-complete tool which provides suggestions
based on the current context. When working inside the policy document, press Control+Space to
access the tool. The auto-complete Control+Space shortcut key may conflict with other operating
system shortcuts. If you have such a conflict, consider changing the keyboard shortcut for the
command.
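The note about quotation marks can be illustrated outside the product. In the sketch below, Log is an invented stand-in for a logging function, not the product's Log; it shows why "Node" in quotation marks yields the literal text while Node without them yields the variable's value:

```javascript
// Illustrative stub (not the product Log function): shows the difference
// between a quoted string literal and an unquoted variable name.
function Log(message) {
  return "Policy log: " + message;
}

const Node = "router42";         // a variable holding a value
const literal = Log("Node");     // quoted: logs the literal text "Node"
const fromVariable = Log(Node);  // unquoted: logs the variable's value
```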

List and overview of functions


A list of all functions with a short overview.

Table 89. List of functions

Name Type Description

Activate Policy The Activate function runs another policy.

ActivateHibernation Policy The ActivateHibernation function continues running a policy that was
previously put to sleep by using the Hibernate function. You must also run the RemoveHibernation
function to remove the policy from the hibernation queue and to free up memory resources.


AddDataItem Database, Internal The AddDataItem function adds a data item to a data type.

BatchDelete Database The BatchDelete function deletes a set of data items from a data type.

BatchUpdate Database The BatchUpdate function updates field values in a set of data items in a data
type.

BeginTransaction Database The BeginTransaction function is a local transactions function that is used
in SQL operations.

CallDBFunction Database The CallDBFunction function calls a database function.

CallStoredProcedure Database The CallStoredProcedure function calls a database stored procedure.

ClassOf Context The ClassOf function returns the data type of a variable.

CommandResponse Systems Use the CommandResponse function to run interactive and non-interactive
programs on both local and remote systems.

CommitChanges Database Used only with GetByFilter and GetByKey functions to force updates in a
database.

CommitTransaction Database The CommitTransaction function is a local transactions function that is
used in SQL operations.

ConvertObjectsToJSON String This function converts Netcool/Impact Objects to a one-level JSON
string.

ConvertXMLToImpactObjects String The ConvertXMLToImpactObjects function converts any XML
string to a nested structure of Impact objects.

Decrypt String The Decrypt function decrypts a string that has been previously encrypted by using
Encrypt or the nci_crypt tool.

DeleteDataItem Database, Internal The DeleteDataItem function deletes a single data item from a data
type.

Deploy Miscellaneous The Deploy function copies data sources, data types, policies, and services
between server clusters.


DirectSQL Database The DirectSQL function runs an SQL operation against the specified database and
returns any resulting rows to the policy as data items.

Distinct Array The Distinct function returns an array of distinct elements from another array.

Encrypt String The Encrypt function encrypts a string.

Escape String This function escapes special characters in an input string in a policy.

Eval Context The Eval function evaluates an expression by using the specified context.

EvalArray Array, Context The EvalArray function evaluates an expression by using the specified array.

Exit Policy You use the Exit function to stop a function anywhere in a policy or to exit a policy.

Extract String The Extract function extracts a word from a string.

Float Numeric The Float function converts an integer, string, or Boolean expression to a floating point
number.

FormatDuration Time The FormatDuration function converts a duration in seconds into a formatted
date/time string.

GetByFilter Database, Internal, ITNM, LDAP, XML The GetByFilter function retrieves data items from a
data type by using a filter as the query condition.

GetByKey Database, Internal, LDAP The GetByKey function retrieves data items from a data type by
using a key expression as the query condition.

GetByLinks Database, Internal, XML The GetByLinks function retrieves data items in target data types
that are linked to one or more source data items.

GetByXPath XML The GetByXPath function provides a way to parse an XML string or get an XML string
through a URL specified as a parameter.

GetClusterName Variables, Impact You use the GetClusterName function inside a policy to identify
which cluster is running the policy.


GetDate Time The GetDate function returns the date/time as the number of seconds elapsed since the
start of the UNIX epoch.

GetFieldValue Java Use this function to get the value of static or non-static public fields in a Java
class. For non-static fields, use the variable FieldName for a Java class or TargetObject for a Java
object. For a static Java class field, use the variable ClassName.

GetGlobalVar Variables This function retrieves the global value that is saved by previous SetGlobalVar
calls.

GetHTTP REST You can use the GetHTTP function to retrieve any HTTP URL or to post content to a web
page.

GetHibernatingPolicies Policy The GetHibernatingPolicies function retrieves data items from the
Hibernation data type by performing a search of action key values.

GetHostAddress Variables, Impact You use the GetHostAddress function inside a policy to get the IP
address of the system where the Netcool/Impact server is running.

GetScheduleMember Time The GetScheduleMember function retrieves schedule members that are
associated with a particular time range group and time.

GetServerName Variables You use the GetServerName function inside a policy to identify which server
is running the policy.

GetServerVar Variables You use this function to retrieve the global value that is saved by a previous
SetServerVar call.

Hibernate Policy The Hibernate function causes a policy to hibernate.

Illegal String If the input in the policy has malicious content, the Illegal function throws an exception in
a policy.

Int Numeric The Int function converts a float, string, or Boolean expression to an integer.

JRExecAction Systems The JRExecAction function runs an external command by using the JRExec
server.


JavaCall Java You use this function to call the method MethodName in the Java object TargetObject
with parameters, or to call the static method MethodName in the Java class ClassName with
parameters.

Keys Context The Keys function returns an array of strings that contain the field names of the specified
data item.

Length Array, String The Length function returns the number of elements or fields in an array or the
number of characters in a string.

Load JavaScript You use this function to load a JavaScript library into your JavaScript policy.

LocalTime Time The LocalTime function returns the number of seconds since the beginning of the UNIX
epoch as a formatted date/time string.

Log Policy The Log function prints a message to the policy log.

Merge Context The Merge function merges two contexts or event containers by adding the member
variables of the source context or event container to those of the target.

NewEvent Context The NewEvent function creates a new event container.

NewJavaObject Java The NewJavaObject function is used to call the constructor for a Java class.

NewObject Context The NewObject function creates a new context.

ParseDate Time The ParseDate function converts a formatted date/time string to the time in seconds
since the beginning of the UNIX epoch, 1 January 1970 00:00:00 (UTC).

ParseJSON String This function converts a JSON string into a Netcool/Impact Object.

Random Numeric The Random function returns a random integer between zero and the given upper
bound.

RDFModel RDF You can use the RDFModel function to create an RDF model without any input
parameters.

Chapter 8. Working with policies 123



RDFModelToString RDF You can use the RDFModelToString function to export an RDF model to a string in a particular language.
RDFModelUpdateNS RDF You can use the RDFModelUpdateNS function to
insert, update, or remove a namespace from an
RDF model.
RDFNodeIsResource RDF You can use the RDFNodeIsResource function
to help other functions read and parse objects
that are also an RDF resource.
RDFNodeIsAnon RDF You can use the RDFNodeIsAnon function to
assist in reading and parsing an RDF.
RDFParse RDF You can use the RDFParse function to help other functions read and parse an RDF object.
RDFRegister RDF You can use the RDFRegister function to help
you to register service providers or OSLC
resources with the registry server.
RDFUnRegister RDF To remove the registration record of a service
provider or resource from the registry server,
use the RDFUnRegister function to supply the
location of the registration record, the Registry
Services server username and password, and
the registration record that you want to remove.
RDFSelect RDF You can use the RDFSelect function to assist
in reading and parsing an RDF. To retrieve
statements based on an RDF model, you call
the RDFSelect function and pass the RDF model
that is created by the RDFParse function. You
can filter based on subject, predicate, and
object.
RDFStatement RDF You can use the RDFStatement function to
create and add statements to an RDF model.

RExtract String The RExtract function uses regular expressions to extract a substring from a string.

RExtractAll String The RExtractAll function uses regular expression matching to extract multiple substrings from a string.

ReceiveJMSMessage JMS The ReceiveJMSMessage function retrieves a message from the specified Java Message Service (JMS) destination.

RemoveHibernation Policy The RemoveHibernation function deletes a data item from the Hibernation data type and removes it from the hibernation queue.




Replace String The Replace function uses regular expressions to replace a substring of a specified string.

ReturnEvent Policy The ReturnEvent function inserts, updates, or deletes an event from an event source.

RollbackTransaction Database The RollbackTransaction function rolls back any changes that are done by an SQL operation.

SendEmail Notifications The SendEmail function sends an email by using the email sender service.

SendJMSMessage JMS The SendJMSMessage function sends a message to the specified destination by using the Java Message Service (JMS) DSA.

SetFieldValue Java Use the SetFieldValue function to set a public field variable in a Java class to some value.

SetGlobalVar Variables The SetGlobalVar function creates a global variable in a policy, which can be accessed from any local functions, library functions, and exception handlers in the policy.

SetServerVar Variables The SetServerVar function creates a server-wide global variable in a policy.

SNMPGetNextAction SNMP, Systems The SnmpGetNextAction function retrieves the next SNMP variables in the variable tree from the specified agent.

SNMPGetAction SNMP, Systems The SnmpGetAction function retrieves a set of SNMP variables from the specified agent.

SNMPSetAction SNMP, Systems The SnmpSetAction function sets variable values on the specified SNMP agent.

SNMPTrapAction SNMP The SnmpTrapAction function sends a trap (for SNMP v1) or a notification (for SNMP v2) to an SNMP manager.

Split String The Split function returns an array of substrings from a string by using the specified delimiters.

String String The String function converts an integer, float, or Boolean expression to a string.

Strip String The Strip function strips all instances of the specified substring from a string.

Substring String The Substring function returns a substring from a specified string by using index positions.




Synchronized Policy Use the Synchronized function to write thread-safe policies for use with a multi-threaded event processor by using IPL or JavaScript.

ToLower String The ToLower function converts a string to lowercase characters.

ToUpper String The ToUpper function converts a string to uppercase characters.

Trim String The Trim function trims leading and trailing white space from a string.

URLDecode String, REST The URLDecode function returns a URL-encoded string to its original representation.

URLEncode String, REST The URLEncode function converts a string to a URL-encoded format.

UpdateDB Database The UpdateDB function updates or deletes rows in an external database table.

UpdateEventQueue Database The UpdateEventQueue function updates or deletes events in the event reader event queue.

WSInvoke Web Services Provided for backward compatibility only. Important: This feature is deprecated.

WSInvokeDL Web Services The WSInvokeDL function makes web services calls when a Web Services Description Language (WSDL) file is compiled with nci_compilewsdl, or when a policy is configured by using the Web Services wizard.

WSNewArray Web Services The WSNewArray function creates an array of complex data type objects or primitive values, as defined in the WSDL file for the web service.

WSNewEnum Web Services The WSNewEnum function returns an enumeration value to a target web service.

WSNewObject Web Services The WSNewObject function creates an object of a complex data type as defined in the WSDL file for the web service.

WSNewSubObject Web Services The WSNewSubObject function creates a child object that is part of its parent object and has a field or attribute name of ChildName.

WSSetDefaultPKGName Web Services The WSSetDefaultPKGName function sets the default package name that is used for web services calls.



For more details about each of these functions, see the Policy Reference Guide.
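As a quick illustration of how several of the string functions in the table combine in a policy, the following IPL fragment is a minimal sketch; the variable names and sample values are invented for illustration:

// Minimal sketch combining Trim, Split, ToUpper, Length, and Log
s = "  node01,node02  ";
trimmed = Trim(s);           // Removes leading and trailing white space
parts = Split(trimmed, ","); // Returns an array of substrings
Log(ToUpper(parts[0]));      // Logs the first element in uppercase
Log(Length(parts));          // Logs the number of elements in the array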

Changing default font used in the policy editor


Use this procedure to change the default font in the policy editor.

Procedure
1. Open
the $IMPACT_HOME/wlp/usr/servers/ImpactUI/apps/ImpactUI.ear/impactAdmin.war/
scripts/impactdojo/ibm/tivoli/impact/editor/themes/PolicyEditor.css file in a text
editor.
2. Update the values of the following entries with your own values:
• font-family
• font-size
• line-height
3. Refresh the browser to apply the changes. It is also recommended that you clear the browser cache.
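For example, the updated entries in PolicyEditor.css might look like the following; the selector name and values here are illustrative only, so edit the corresponding rule that already exists in the file rather than adding a new one:

/* Illustrative values; the selector in PolicyEditor.css may differ */
.policyEditorText {
  font-family: "DejaVu Sans Mono", monospace;
  font-size: 14px;
  line-height: 1.5;
}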

Using version control interface


Use this procedure to view the version history of your policies.

Procedure
1. Open a policy in the policy editor.
2. Click the View Version History icon in the policy editor toolbar to open the version control interface.
You see the following columns:

Table 90. Version control interface columns

Column Description

Version Version number of the policy.

Author The user ID of the user who is logged in to the Impact Server.

Date Date the change was committed.

Comments Shows any comments and the user ID of the user who submitted them.

3. Click a version of the policy to view its contents.


• To view the differences between versions of the policy, click View Differences.
• To revert to an older version of the policy, select the version that you want to revert to and click
Revert.

Uploading policies
You can upload policies and policy parameters files that you wrote previously to the Impact Server.

Procedure
1. In the Policies tab, from the policy menu, click the Upload a Policy File icon.
The Upload a Policy File window opens.

Chapter 8. Working with policies 127


2. Select the check box for each type of file that you want to upload: a policy file or a parameters file. You can upload both file types at the same time.
The file extension must end with .ipl for an IPL policy or .js for a JavaScript policy. Policy parameter
file extensions must end with .params.
3. Type the path and file name, or click Browse to locate and select the policy or parameter file.
4. From the Encoding list menu, click the arrow to select the original encoding of the file you are
uploading. The default option is Unicode UTF-8.
5. Click Upload.
Note: Internet Explorer: Depending on your browser security settings, Internet Explorer might
replace your full path name with C:\fakepath\. The policy still uploads to the correct location.
Complete the following steps if you want to change your Internet Explorer security settings to show
the correct path name:
a. In Internet Explorer, go to Tools > Internet Options > Security.
b. Select the appropriate zone for your GUI Server server.
c. Select Custom Level.
d. Scroll to the Include local directory path when uploading files to a server option.
e. Select Enable. Click OK.
6. The policy is added to the selected project in the Policies tab. The policies list refreshes automatically
and shows the added policy in the policy list.
The uploaded policy parameters file is stored in the Impact Server in $IMPACT_HOME/policy.

Working with predefined policies


You can find predefined policies in the global repository by selecting Global in the project selection list. No configuration is required for predefined policies.

Table 91. Predefined policies

Policy Description

AddPolicyProcessMapping This policy is used in reports. You do not need to change this
policy.

DefaultExceptionHandler This policy is used to handle failed events if the policy failure
is not handled locally using the Exception Handler. You can
write your own policy if you need to. If you do not write your
own, the provided policy is used by default.
The DefaultExceptionHandler policy prints a log of the
Events that failed to execute. To configure a customized error
handling policy, see “Configuring the Policy logger service”
on page 147.

DefaultPolicyActivatorExample A sample policy that the DefaultPolicyActivator uses out of the box. It prints a simple log message.

DeployProject You can use this policy to copy the data sources, data types,
policies, and services in a project between two running
server clusters on a network. You can use this feature when
moving projects from test environments into real-world
production scenarios. For more information about automated
project deployment, see “Automated project deployment
feature” on page 8.




EventCorrelationUsingXinYExample This is an example policy that performs the X in Y Correlation. This specific scenario focuses on an IBM Tivoli Monitoring Tivoli Enterprise Monitoring Server that sends a flood of events that are tagged as MS_Offline. MS_Offline events are sent when the Tivoli Enterprise Monitoring Server agents detect that servers are down or restarted. It can be updated to match any other events by changing the CorrelationFilter.

Export This policy is used by the nci_export script during the export of the Netcool/Impact configuration to another server. It is recommended that you do not change this policy.

FailedEventExceptionHandler When errors occur during the execution of a policy, the Policy
Logger service executes the appropriate error handling
policy, and temporarily stores the events as data items in
a predefined data type called FailedEvent.
FailedEvent is an internal data type and all data that
is stored internally consumes memory. When you have
resolved the reasons for the event failures, you can reduce
the amount of memory that is consumed by using one of the
following options:
• Reprocess the failed events by using the ReprocessFailedEvent policy.
• Delete the events from the FailedEvent data type.
See “FailedEvent data types” on page 96 for more
information.

Import This policy is used during the import of the Netcool/Impact configuration from another server. It is recommended that you do not change this policy.

ReprocessFailedEvent This policy is used to reprocess failed events. For more information about failed events, see "FailedEvent data types" on page 96.

REPORT_PurgeData This policy is used to purge data that is generated by running reports. You can configure data that is older than a certain number of days and the maximum number of rows to be deleted. The default is 2 days.

XINY_DataType_PurgeData This policy is used to purge data items from data types that
are created by the XinY policy wizard. You can configure the
data that is older than a certain number of days. The default
is 4 days.



Accessibility Features
The Policy Editor has several features to improve its accessibility. To get help on these features at any
time, press Control+Shift+0 when the editor is active.

Accessing the current line number


To get the current line number, press Control+Shift+1.
Note: This works for Mozilla Firefox only.

Accessing the syntax highlighter


To get the syntax highlighter information for the current cursor position, press Control+Shift+2. The
cursor should be positioned inside a text block to ensure the information is accurately reported. For
example, if the cursor is positioned next to the opening or closing quote marks of a string, the highlighter
may interpret the quote marks as regular text. Move the cursor into the string text for a more accurate
report.
Note: The syntax highlighter feature must be enabled for this to work. This works for Mozilla Firefox only.

Reading code in the Policy Editor


Punctuation
To improve the readability of code for screen readers, you can configure the screen reader to read out
additional punctuation.
JAWS (Windows)
1. Go to Utilities > Settings Center > Punctuation > Customize Punctuation.
2. Select comma (,) from the list and change the Level When Spoken to Most.
3. Select exclaim (!) from the list and change the Level When Spoken to Most.
4. Select the ampersand (&) from the list, change the Description to ampersand, and change the Level When Spoken to Most.
Dictionary
You can also improve the pronunciation of code for screen readers by adding custom entries to the screen
reader dictionary.
JAWS (Windows):
1. Go to Utilities > Dictionary Manager.
2. In the Dictionary Entries list, select your language and click the Add button.
3. Enter the exact spelling of the word in the Actual Word field. In the Replacement Word field, enter a
pronunciation of the word that more accurately reflects the word.
4. Click OK to save the dictionary entry.
For a list of recommended dictionary entries, see the following page:
https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/
Tivoli%20Netcool%20Impact/page/Accessibility



Chapter 9. Working with services
Services are runnable components of the Impact Server that you start and stop using both the GUI and
the CLI.

Services overview
Services perform much of the functionality associated with the Impact Server, including monitoring event
sources, sending and receiving e-mail, and triggering policies.
The most important service is the OMNIbus event reader, which you can use to monitor an ObjectServer for new, updated, or deleted events. The event processor, which processes the events that are retrieved from the readers and listeners, is also important to the functioning of Netcool/Impact.
Internal services control the application's standard processes, and coordinate the performed tasks, for
example:
• Receiving events from the ObjectServer and other external databases
• Executing policies
• Responding to and prioritizing alerts
• Sending and receiving e-mail and instant messages
• Handling errors
Some internal services have defaults that you can enable instead of, or in addition to, creating your own services. For some of the basic internal services, it is only necessary to specify whether to write the service log to a file. For other services, you need to add information such as the port, host, and startup data.
User defined services are services that you can create for use with a specific policy.
Generally, you set up services once, when you first design your solution. After that, you do not need to
actively manage the services unless you change the solution design.
To set up services, you must first determine what service functionality you need to use in your solution.
Then, you create and configure the required services using the GUI. After you have set up the services,
you can start and stop them, and manage the service logs.

Creating services
How to create a user-defined service.

Procedure
1. Click Services to open the Services tab.
2. From the Cluster and Projects lists, select the cluster and project you want to use.
A list of services that are related to the selected project is displayed.
3. In the Services tab, click the Create New Service icon.
4. From the menu, select a template for the service that you want to create.
5. In the service configuration tab, provide the necessary information to create the service.
6. Click the Save Service icon.
• To edit a service, you can double-click the service, or right-click on the service and select Edit.
Make the necessary changes to the service. Click Save to implement the changes.
Important: You can create a user-defined service by using the defaults that are stored in the
Global project.

© Copyright IBM Corp. 2006, 2023 131


• To delete a service, select the service and click Delete Service. You can also delete a service by
right clicking its name in the services pane and selecting Delete.
Important: Do not delete the default services. If you delete one, you cannot create new services of the type that you deleted. Deleting a user-defined service from the services panel deletes it permanently from the server. If you want to remove it from a project, but retain it in the database, use the project editor.

Services panel controls


An overview of the services panel icons and indicators.

Table 92. Service Status panel icons and indicators

Element Description

Click the Create New Service icon to create a user-defined service using one of the available
service templates.

Click the Edit Service icon to edit an existing service using one of the available service
templates. You can also double click on the service to open the service for editing.

Click the View Service Log icon to access the log for the selected service. You can also view
the log for a selected service by right clicking its name and selecting View Log.

Select a stopped service and click the Start service icon to start it. Alternatively, you can start
a service by right clicking its name and selecting Start.

Select a running service and click the Stop Service icon to stop it. Alternatively, you can stop
a service by right clicking its name and selecting Stop.

Click the Delete Service icon to delete a user-defined service. Alternatively, you can delete a
user-defined service by right clicking its name and selecting Delete.
Important: You cannot delete a running service, you must stop it first.

This indicator next to a service name indicates that the service is running.

This indicator next to a service name indicates that the service is stopped.

Source control locking for the service. This icon is visible when the service is locked or the
item is being used by another user. Hover the mouse over the locked item to see which user
is working on the item. You can unlock your own items but not items that are locked by other users. If you have an item open for editing, you cannot unlock it; save and close the item first.
To unlock an item you have locked, click the unlock service icon. You can also unlock the
service by right clicking on the item name and selecting Unlock.
Users who are assigned the impactAdminUser role are the only users who can unlock items
that are locked by another user in exceptional circumstances.

The following table describes the service icons used in Netcool/Impact.



Table 93. Service icons
Icon Description
Indicates a generic service icon used to indicate the following services:
• CommandExecutionManager
• EventProcessor
• PolicyLogger
• SelfMonitoring

Indicates the CommandLineManager service.

Indicates the database-related services, DatabaseEventListener and ImpactDataBase.

Indicates a mail and message-related service including the following services:


• EmailReader
• EmailSender
• JMSListener

Indicates policy activator services, including PolicyActivator, and MWMActivator.

Indicates a hibernation service, HibernatingPolicyActivator.

Indicates OMNIbus-related services, OMNIbusEventReader and OMNIbusEventListener.
Indicates an EventListener and DatabaseEventReader service.

Indicates an ITNMEventListener service.

List of services
A list of internal and user-defined Netcool/Impact services.

Table 94. Impact Services


Service Type Description
CommandExecutionManager internal The command execution manager is the service
responsible for operating the command and
response feature.
CommandLineManager internal Use the command-line manager service to access
the Impact Server from the command line to
configure services parameters and start and stop
services.
DatabaseEventListener internal The database event listener service monitors an
Oracle event source for new, updated, and deleted
events.



DefaultEmailReader internal The email reader service reads incoming email,
and runs policies that are based on the contents
of the email.
DefaultPolicyActivator internal The policy activator service activates policies at
startup or at the intervals you specify for each
selected policy.
DatabaseEventReader user defined The database event reader is a service that polls supported, external SQL data sources at regular intervals to get business events in real time.

EmailReader user defined The email reader service reads incoming email, and runs policies that are based on the contents of the email.

EventListener user defined Event listeners monitor non-ObjectServer event source events.

EmailSender internal The email sender is a service that sends email through an external SMTP service.

EventProcessor internal The event processor manages the incoming event queue and is responsible for sending queued events to the policy engine for processing.

HibernatingPolicyActivator internal The hibernating policy activator service monitors hibernating policies and awakens them at specified intervals.

ITNMEventListener internal The ITNM event listener service listens for events sent from ITNM.

ImpactDatabase internal The Netcool Database Server runs as a service in the Impact Server.

JMSMessageListener internal, user defined The Java Message Service (JMS) message listener service runs a policy in response to incoming messages that are sent by external JMS message providers.

MWMActivator internal Maintenance Window Management service. An add-on for managing Netcool/OMNIbus maintenance windows.

OMNIbusEventListener user defined The OMNIbus event listener service is used to integrate with Netcool/OMNIbus and receive immediate notifications of fast track events.

OMNIbusEventReader internal, user defined OMNIbus event readers are services that monitor a Netcool/OMNIbus ObjectServer event source for new, updated, and deleted alerts and then run policies when the alert information matches filter conditions that you define.

PolicyActivator user defined The policy activator service activates policies at startup or at the intervals that you specify for each selected policy.



PolicyLogger internal The policy logger service is responsible for
managing the policy log.
SelfMonitoring internal The self monitoring service is used to send
messages about the internal state of Impact Server
to an ObjectServer.

Personalizing services
You can change the refresh period for the services tab.

Procedure
1. Click Options from the main menu, then click Preferences to open the Preferences dialog box.
2. Select the options that you want to personalize.
• Select the Enable auto refresh check box to automatically refresh the services.
• Select the Refresh interval period. The services are automatically refreshed at the time interval that you select.
3. Click Save.

Starting and stopping services


How to start and stop a service.

Procedure
• To start a service, select the service in the services pane and click Start. You can also start a service by
right-clicking its name in the services pane and selecting Start in the menu.
• To stop a service, select the service in the services pane and click Stop. You can also stop a service by
right-clicking its name in the services pane and selecting Stop in the menu.
Note: Service status is not replicated between cluster members. If you start or stop a service on the
primary cluster member, it will not start or stop the same service on a secondary cluster member.

Viewing services logs


Use this procedure to display the service log for a service.

Procedure
• Select a service in the services tab and click View Service Log.
• You can also view a service log by right clicking the service name in the services pane and selecting
View Log in the menu.

Services log viewer


You can use the Services log viewer to view the results of your chosen service logs.
You can select the services from the drop-down menu. The window has a split screen so that you can view
two logs for two different services simultaneously. You can also create additional tabs from where you can
run additional service logs and you can move between tabs. There is also an advanced filter option which
you can use to filter the results of a log.
The log view has the following options:



Window element Description

New tab Click this option to create new tabs to view additional service logs. For more information, see "Creating new tabs" on page 137.

Default tab This tab displays automatically when you access the service log viewer.

Service Use this option to select the service you want to run a log for.

Click to stop the log.

Click to start the log again.

Click to clear the log.

Filter Type in a filter string to filter the results. For more information about log viewer results, see "Service log viewer results" on page 136. The Apply Filter check box must be selected for the filter to take effect.

Apply Filter Select this check box to specify that only entries that match the filter are included.

Exclude Filter Select this check box to exclude incoming log entries based on the current filter.

Service log viewer results


The log viewer displays information relating to the following features: date, time, policy name, and pool
thread name.
The log viewer shows only the latest logging information. If there is an error in the service log, the error message is displayed in red. You can click the icon next to the error message to get more information about the error.
To refine the log results that you want to view, use the Filter option. To use the filter, type in a string or use a Java regular expression.
Important: The filter expression assumes the default settings of the java.util.regex Pattern class. For example, matching is always case-sensitive.
Example of a regular expression:
\bt[a-z]+\b
This expression matches any word starting with letter t and followed by one or more letters from (a to z).
To apply a filter, select the Apply filter check box. The new log message that matches the filter expression
is displayed.
You may also choose to exclude entries that match your filter expression by selecting the Exclude filter
check box.



To filter the log results with multiple terms, you can use the bracket notation in conjunction with the OR
operator to define multiple terms. For example:
[ OMNIbus | ITNM ]
This expression matches either OMNIbus or ITNM.
You can view results of multiple services, the window has a split screen to view two service log results on
the same tab.
You can also create more tabs to view additional service log results using the New Tab option. For more
information about creating new tabs, see “Creating new tabs” on page 137.

Creating new tabs


You can create multiple tabs to view additional service logs.

Procedure
1. If you want to view more service logs, click the New Tab option to display the Log name dialog.
2. Type in the name of the new tab, click OK to create the new tab in the Log viewer window.
3. Populate the fields in the tab to run the service log, for more information see “Services log viewer” on
page 135.
4. As you create more tabs, you can move from one tab to another by clicking the tab heading at the top of the window. For more information, see "Service log viewer results" on page 136.

Event mapping
Event mapping allows you to map incoming events to one or more specific policies.
You can configure a reader service to test incoming events against one or more event filters. If a match is
found, the reader will execute the associated policy.

Creating event filters


Use this procedure to create an event filter.

Procedure
1. Edit the service, click the Event Mapping tab then click New Mapping to open the Create a New
Event Filter window.
2. Provide the required information to create the filter.
This filter specifies the type of event that maps to the policy. For information about the filter
configuration options, see “Configuring an event filter” on page 137.
3. From the Policy to Run list, select the policy that you want to run for the event type.
4. Click Active.
5. Click OK. The service configuration window is refreshed and the new filter is shown in the table.

Configuring an event filter


Use this information to add filter expressions and configure the event filter.
1. Type a filter expression. This filter specifies the type of event that maps to the policy.



For example, you create a policy to run when Netcool/Impact receives an event from an Oracle
database with a Department table. You want the policy to run when the entry in the department
location field (DepLoc) is London. You type:

DepLoc = "London"

An empty filter expression is not allowed. If you want an event to always trigger the policy, use a filter expression that will always evaluate to true, for example: 1=1.
2. From the Policy to Run list, select the policy to assign to the filter and run for the event type.
3. Select Active to activate the filter or clear to deactivate the filter.
4. When chaining policies, select the Chain option for each event mapping that associates a restriction
filter with a policy name. For more information, see the Policy Reference Guide.
5. Click Analyze Filter to discover any conflicts with filter mappings that are set for a service.
Note: You can test event filters with a policy.

Developing filter expressions


You can develop and test filter expressions using the Eval policy function.
Example 1
The following example demonstrates how to test a filter expression against a local Impact object.

// Create an artificial event container


ec = NewObject();
ec.Class = 87131;
ec.AutomationStatus = 2;
ec.AutomationSet = 'pullPMs';

// Use Eval to test the event object for a match


Log(Eval("AutomationStatus=2 AND (Class=87131 OR Class=87132) AND
AutomationSet='pullPMs'",ec)); // true
Log(Eval("AutomationStatus=2",ec)); // true
Log(Eval("Class=87131 OR Class=87132",ec)); // true
Log(Eval("AutomationSet='Test'",ec)); // false

Example 2
The following example demonstrates how to test a filter expression against events retrieved with
GetByFilter.

// Use GetByFilter and test the expression remotely and then


// test the results using Eval
x = GetByFilter("alerts", "Class=87131 OR Class=87136", false);
if (x[0] != null) {
Log(Eval("Class=87131 OR Class=87136",x[0])); // true
Log(Eval("AutomationSet='Test'",x[0])); // false
}

Consolidating filters
When impact.analyzer.consolidatefilters=true is set, Impact will attempt to consolidate the
filters for all Event Reader services.
When consolidating filters, Impact produces an expression that corresponds to all currently configured
active event filters. In other words, Impact creates a single filter incorporating all active filters. Duplicate
filter expressions are merged, and redundant or invalid expressions are removed. An example of an
invalid expression is 1=2: it matches nothing, so it is removed from the expression used to select events.
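As an illustrative sketch (not actual output from Impact), suppose the active filters across the readers are:

```
Severity = 5
Severity = 5
1 = 2
```

The duplicate Severity = 5 filters are merged and the invalid 1 = 2 filter is removed, so the consolidated expression used to select events is simply:

```
Severity = 5
```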

138 Netcool/Impact: User Interface Guide


Master property
Setting the impact.analyzer.consolidatefilters property in the impact/etc/<SERVER
NAME>_server.props file sets the default for the consolidatefilters property for all readers.
It is a Boolean property and the default value is true.
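For example, to disable consolidation by default for every reader, the server properties file (here for an illustrative server named NCI) would contain:

```
# In impact/etc/NCI_server.props
impact.analyzer.consolidatefilters=false
```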

Reader-specific property
To override the setting for specific Event Readers, set the following property in the Event Reader's
property file:
impact.<event reader>.consolidatefilters=false
For example, for TBSMOMNIbusEventReader, the .props file should include the following property:
impact.tbsmomnibuseventreader.consolidatefilters=false

Notes and restrictions


Only active filters are consolidated.
Expressions with LIKE operators are removed from the consolidated filter.

The following restriction clauses cannot be used in a consolidated filter:


1. Minus sign, for example:
FirstOccurrence > getdate - 60
2. Expressions that compare two fields, for example:
Severity != Type
FirstOccurrence < LastOccurrence

Verifying the consolidation operation


After changing or adding any filters for a reader, verify that the reader logs are error free and that the
reader is using the expected filter clause.
Check that the reader log file does not contain the following entry:

Error Initializing the Service. Reason: __LOGVIEWER__STACKTRACE_


java.util.concurrent.ExecutionException:
com.ibm.autonomic.policy.analysis.ratification.RatificationWarningException:
CIQAN1005W
The hyperspace factory does not support the specified expression : BC=false

Check that the impactserver.log file does not contain the following entry:

WARN [EventFilter] Could not consolidate filters.


Please check if the filter expressions are valid.
java.util.concurrent.ExecutionException:
com.ibm.autonomic.policy.analysis.ratification.RatificationWarningException:
CIQAN1005W
The hyperspace factory does not support the specified expression : BC=false
at java.util.concurrent.FutureTask.report(FutureTask.java:134)
at java.util.concurrent.FutureTask.get(FutureTask.java:214)
at com.micromuse.response.event.EventFilter.consolidateFilters
(EventFilter.java:497)

If the above error messages are shown, either adjust the filter expression (if possible) or set the
reader's consolidatefilters property to false in the .props file. This change requires a server
restart.

Event mapping table
Overview of the event mapping table fields.

Table 95. Event mapping table

Window element Description

Select: When you place your mouse over the word all, the word becomes
underlined as a link.
• Click all to select all the rows of filters. You can then click Delete at
the bottom of the list to delete all the previously defined filters.
• Click all again to clear all the rows of filters.

Restriction Filter Contains the filter.

Policy Name Contains the name of the policy that triggers when the event matches
the restriction filter.

Active Select Active to activate the filter or clear to deactivate the filter.

Chain When chaining policies, select the Chain option for each event mapping
that associates a restriction filter with a policy name. For more
information, see the Policy Reference Guide.

Move Use the arrows to change the position of the filters in the table. The
order of the filters is only important when you select to stop testing
after the first match.

Edit To edit a filter, click the Edit button next to it.

Editing and deleting filters


Use this information to edit, reorder, and delete filters.

Procedure
1. Locate the filter in the table and click Edit to open the Edit Event Filter window.
2. Edit the filter text and select a policy to run, as necessary.
3. Click OK to save the information and close the window.
The filter in the table in the Event Mapping tab shows your edits. Restart the service to implement the
changes.
4. You can adjust the order of the filters. The order of the filters depends on which Event Matching option
you select.
• When you select the Stop testing after first match option, Netcool/Impact checks an incoming
event against the filters in the order they are listed in the table until it gets a single match. It does
not continue checking after it finds the first match.
• When you select Test event with all filters, the order is not important.
5. To delete a filter, in the Select: column, select the filters that you want to delete. (Click the All link to
select all the filters in the table.) Click the Delete link.

Filter analysis
By analyzing the event mapping table you can check the syntax and scope of all your event filters.
To find any conflicts with filter mappings that have been set for a service, in the Event Mapping tab, click
Analyze Event Mapping Table. Choose which filters you want to analyze by selecting either Active filters
or All filters in the Filters to analyze menu.
The Filter Syntax Analysis Result section displays a list of all syntax errors that were found in the filters,
the position where these syntax errors occur, and a brief description of each error.
Filter expressions are used by the reader to generate an initial SQL query to retrieve events from the
selected database. When the reader performs event matching, the filters are evaluated by the Policy
engine. When writing a filter expression, the syntax must be compatible with the Policy Engine query
syntax. SQL stored procedures and other vendor specific SQL functions are not supported by the Policy
engine.
Filter range overlap analysis in the Filter Range Overlap Analysis Result section shows which of your
filters overlap and the scope of their overlap. You can analyze your filters against the active filters
only, or against all your defined filters by selecting one of the options in the Analyze the filter against
menu.
The Consolidated filter expression section displays an expression that corresponds to all your currently
configured event filters. In other words, it is an expression that you would use in the Filter Expression
field of the new event filter configuration window to create one filter incorporating all your filters.
Tip:
The Filter Analyzer may report the following error when comparing two ObjectServer fields against each
other:
Rule includes an unsupported expression
You can use the impact.analyzer.osfields property to bypass the syntax error in this case. To do so,
add this property to the IMPACT_HOME/etc/server.props file using the following line:
impact.analyzer.osfields=true
Then restart the Impact and GUI servers.

Disabling event matching


You can force an event reader to skip the event matching and execute a policy against all events retrieved
by the initial SQL query. This can suit situations with a single event filter where you need to use an
SQL procedure or other vendor-specific SQL, such as the ObjectServer function getdate.
To enable this, set the following property in the event reader properties file in [IMPACT_HOME]/etc/
<Server>_<EventReaderName>.props:
impact.[EventReaderName].parserestrictionfilter=false
However, this property causes only the policy from the first event filter in the event reader to be
triggered. Policies defined in other filters are ignored.
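For example, for an event reader named OMNIbusEventReader on a server named NCI (both names illustrative), the property would go in [IMPACT_HOME]/etc/NCI_OMNIbusEventReader.props, assuming the property key uses the reader name as in the documented pattern:

```
impact.OMNIbusEventReader.parserestrictionfilter=false
```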

Command execution manager service


The command execution manager is the service responsible for operating the command and response
feature.
The service queues JRExecAction function calls to run external commands. The command execution
manager only allows you to specify whether to write the service log to a file. There are no other
configuration properties.

Command line manager service
Use the command-line manager service to access the Impact Server from the command line to configure
services parameters and start and stop services.
When you configure this service, you specify the port to which you connect when you use the command
line. You can also specify whether you want the service to start automatically when the Impact Server
starts. The command-line manager is the service that manages the CLI. You can configure the port where
the command-line service runs, and the startup and logging options for the service.
The command line manager service is an independent, non-replicable service. In a cluster, stopping
or starting the service through the GUI does not stop or start the service on the secondary cluster member.

Configuring the command line manager service


Use this information to configure the command line manager service.
1. Start the command-line interface using the following command:

ssh impactadmin@<hostname> -p 2000

Note: The default port is 2000.


2. Select Starts automatically when server starts to automatically start the service when the server
starts. You can also start and stop the service from the GUI.
3. Select Service log (Write to file) to write log information to a file.

Database event listener service


The database event listener service monitors an Oracle event source for new, updated, and deleted
events.
This service works only with Oracle databases. When the service receives the data, it evaluates the event
against filters and policies that are specified for the service and sends the event to the matching policies.
The service listens asynchronously for events that are generated by an Oracle database server and then
runs one or more policies in response.
You configure the service by using the GUI. Use the configuration properties to specify one or more
policies that are to be run when the listener receives incoming events from the database server.
The database event listener agent cannot communicate with the Name Server through SSL.

Configuring the database event listener service


You configure the database event listener service by setting events to trigger policies when they match a
filter.

Table 96. Event mapping settings for database event listener service configuration window
Window element Description

Test events with all filters Click this button if, when an event matches more
than one filter, you want to trigger all policies that
match the filtering criteria.

Stop testing after first match Click this button if you want to trigger only the first
matching policy.
You can choose to test events with all filters and
run any matching policies or to stop testing after
the first matching policy.


New Mapping: New Click the New button to create an event filter.

Analyze Event Mapping Table Click this icon to view any conflicts with filter
mappings that you set for this service.

Starts automatically when server starts Select to automatically start the service when the
server starts. You can also start and stop the
service from the GUI.

Service log (Write to file) Select to write log information to a file.

E-mail sender service


The e-mail sender is a service that sends e-mail through an external SMTP service.
You can configure the local e-mail address information so that you can send e-mail notifications to users
and to other installations of Netcool/Impact. To configure the service, you provide the address for the
local host and the originating e-mail address.

Configuring the Email sender service


Use this information to configure the email sender service.
1. In the SMTP Host field, type the host name. The default value is localhost.
2. In the SMTP Port field, type the port number. The default value is 25.
3. In the From Address field, type the From address. The default value is Impact. An example of a valid
address: [email protected].
4. Select Service log (Write to file) to write log information to a file.
5. Select the SSL check box for an SSL connection to the mail server. Next, refer to the Security >
Enabling SSL connections with external servers section of the documentation to complete the SSL
certificate import.
Note: By default, SSL connections from Netcool/Impact to mail servers use the most secure protocol
supported by the mail server. However, you can use any version of the TLS protocol. This applies to
both email reader and email sender services in Netcool/Impact. If you want to restrict which protocols
are enabled by Impact for SSL connections to a mail server, you can add a property to the service
properties file called impact.<service_name>.secureprotocols. The value of this property can
be a comma-separated list of allowed protocols, for example TLSv1.1,TLSv1.2 or just TLSv1.2.
Note: STARTTLS is an extension to plain text communication protocols, which offers a way to upgrade
a plain text connection to an encrypted (TLS or SSL) connection instead of using a separate port
for encrypted communication. A plain text connection between the Email Sender service and the
emailserver can be encrypted. To enable STARTTLS, a property can be added to the service properties
file called impact.<service_name>.starttls=true. The secure connection is enabled when
both Impact and the emailserver have SSL enabled. Refer to the Security > Enabling SSL connections
with external servers section of the Impact Administration documentation to complete the SSL
certificate import.
Note: The addition of the impact.emailsender.logmessageid=true property to the Email
Sender service properties file ensures that the Message-ID of the sent email is logged in the
impactserver.log.
Note: If there is a requirement to send emails with only text/plain elements, add the following
property to the NCI_emailsender.props properties file of the email sender service:

impact.emailsender.contenttypeplaintext=true
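Taken together, the optional properties described in the notes above might appear in the email sender service properties file as follows (the NCI_emailsender.props file name and the property values are illustrative):

```
# In $IMPACT_HOME/etc/NCI_emailsender.props
impact.emailsender.secureprotocols=TLSv1.2
impact.emailsender.starttls=true
impact.emailsender.logmessageid=true
impact.emailsender.contenttypeplaintext=true
```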

Event processor service


The event processor manages the incoming event queue and is responsible for sending queued events to
the policy engine for processing.
The event processor service sends events fetched from readers and listeners to the policies. The service is
responsible for managing events coming from the following event sources:
• OMNIbus event reader
• OMNIbus event listener
• Database event reader
• Database event listener
• JMS message listener
• WSNNotification listener
The event processor is typically configured to start automatically when the Impact Server starts. On
startup, it runs with the minimum number of threads. It measures performance, increases the thread
count, and compares the performance of the new thread configuration with that of the minimum-thread
configuration it started with. If there is an improvement in throughput, it keeps the new configuration
and measures performance again, until one of two events occurs:
• It reaches the limit set by the maximum number of threads
• It reaches a saturation point where increasing the number of threads further does not improve
performance
Important: In a clustered environment, changes made to the event processor service using the GUI do
not automatically propagate out from the primary Impact Server to the other servers in the cluster. Also
configuration for the EventProcessor service does not replicate to a secondary server during start up. To
change configuration for all Impact Servers, log on to each server individually.

Configuring the Event processor service


Use this information to configure the event processor service.
For maximum performance set the size of the connection pool as greater than or equal to the maximum
number of threads that are running in the event processor.
Important: Changing the maximum connections setting in an SQL data source requires a restart of the
Impact Server.
For information about viewing existing thread and connection pool information, see the Command-Line
tools, Event Processor commands section in the Netcool/Impact Administration documentation. For example:

Select PoolConfig from Service where Name='EventProcessor';
Important: In a clustered environment, the event processor configuration is not replicated between
servers. You must run the Select PoolConfig from Service where Name='EventProcessor';
command on the primary and the secondary servers.
Use the same considerations when you configure the maximum threads on a secondary server. The
secondary server uses its own connection pool, which is independent of the size of the connection pool
in the primary server. For example, a DB2 data source has a connection pool size of 30. The DB2 data
source is replicated between the primary and secondary servers, so there could potentially be
30 + 30 = 60 connections made by the Impact primary and secondary servers to the DB2 database. For
optimal performance with this connection pool setup, the maximum number of threads should be at
least 30 in each server of the cluster. The event processor configuration is not replicated between
servers, so it must be set up manually in the secondary server by using the CLI.

Note: Be mindful of the maximum number of SQL connections that can be made when setting the
thread minimum and maximum values. For example, if the maximum number of SQL connections set
for the data sources is only 5 and the maximum number of processor threads is 200, then on receiving
more than 5 events at a time, the processor threads have to wait. This can cause a bottleneck in event
processing. Ideally, the maximum number of SQL connections for the DSAs used by event processing
should be set to at least the maximum number of processor threads plus the number of event readers
using the data source, to avoid queuing policies.
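The sizing rule above can be sketched as a simple calculation (a hedged illustration; the function name and the numbers are ours, not from the product):

```python
def min_sql_connections(max_processor_threads: int, num_event_readers: int) -> int:
    """Minimum data source connection pool size suggested by the guidance above:
    one connection per processor thread, plus one per event reader that uses
    the data source, so that policies do not queue waiting for connections."""
    return max_processor_threads + num_event_readers

# Example: 200 processor threads and 2 event readers sharing the data source
print(min_sql_connections(200, 2))  # 202
```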

Table 97. Event processor window

Window element Description

Minimum Number of Threads Set the minimum number of processing threads that
can run policies at one time.

Maximum Number of Threads Set the maximum number of threads that can run
policies at one time.

Processing Throughput: Maximize If you set this property, the event processor tries to
get the maximum performance out of the threads. This
can result in high CPU usage. When you leave this field
cleared, it runs conservatively at around 80% of peak
performance.

Tuning configuration: Maintain on Restart If you set this option, each time the event processor
is started, it uses the same number of threads
it had adjusted to in the earlier run. This feature
is useful in cases where the environment where
Netcool/Impact runs has not changed much from
the previous run. The event processor can start with
the maximum throughput immediately, rather than
engaging in repeated tuning to reach the maximum.

Clear Queue Click this icon to enable the event processor to delete
unprocessed events that it has fetched from one or
more event sources.

Service log (Write to file) Select to write log information to a file.

Hibernating policy activator service


The hibernating policy activator service monitors hibernating policies and awakens them at specified
intervals.
You use the hibernating policy activator with X events in Y time solutions and similar solutions that
require the use of hibernating policies. When you configure this service, you specify how often the service
reactivates hibernating policies that are waiting to be activated. The interval can be a specific period or
an absolute time that you define.

Hibernating policy activator configuration
In the hibernating policy activator you can configure the wakeup interval, and the startup and logging
options.

Configuring the hibernating policy activator service


Use this information to configure the hibernating policy activator.

Table 98. Hibernating Policy Activator service configuration window

Window element Description

Polling Interval Select a polling time interval (in seconds) to establish how often
you want the service to check hibernating policies to see whether
they are due to be woken up. The default value is 30 seconds.

Process wakes up immediately Select to run the policy immediately after wake-up. The wakeup
interval is the interval in seconds at which the hibernating
policy activator checks hibernating policies in the internal data
repository to see if they are ready to be woken.

Starts automatically when server Select to automatically start the service when the server starts.
starts You can also start and stop the service from the GUI.

Service log (Write to file) Select to write log information to a file.

Clear All Hibernations: Clear Should it become necessary, click to clear all hibernating policies
from the Impact Server.

Policy logger service


The policy logger service is responsible for managing the policy log.
The log is a text stream used to record messages generated during the runtime of a policy. The log
contains both Netcool/Impact system messages and messages that you create when you write a policy.
The policy logger service specifies an error-handling policy to activate when an error occurs during the
execution of a policy. It also specifies the logging levels for debugging policies and which items must be
logged. When you configure this service, you select a policy to handle the errors as they occur.

Policy logger configuration


You can configure the following properties of the policy logger.
• Error handling policy
• Highest log level
• Logging of SQL statements
• Logging of pre-execution function parameters
• Logging of post-execution function parameters
• Policy profiling
• Logging and reporting options

Configuring the Policy logger service
Use this information to configure the policy logger service.

Table 99. Policy Logger Service configuration window

Window element Description

Error-handling Policy The error handling policy is the policy that is run by default when
an error is not handled by an error handler within the policy
where the error occurred.
Note: If you have a Policy Activator service and
you want it to utilize a default exception handler
policy, you must specify the following property
in the <servername>_<activatorservicename>.props
file: impact.<activatorservicename>.
errorhandlername=<policy name to run>

Highest Log Level You can specify a log level for messages that you print to the
policy log from within a policy by using the Log function.
When a log() statement in a policy is processed, the specified
log level is evaluated against the number that you select for this
field. If the level specified in this field is greater than or equal to
the level specified in the policy log() statement, the message
is recorded in the policy log.
Warning: Setting Highest Log Level to 3 has the
potential to cause a major load on the system, especially
if you have the NOI Extensions installed. This can
include 100% CPU usage. Log level should only be
increased on a temporary basis and should be reverted
to 0 when debug is complete.

Log what Select what you want to appear in the log:


All SQL Statements / Policy Query Diagnostics checkbox
Logs all SQL statements and the number of data items
that are returned by these policy queries: GetByFilter,
GetByKey, DirectSQL. It also logs the increase of memory
usage after the query was run. The memory change after the
query does not necessarily provide an accurate picture of
the memory that is used by the rows that are returned. Many
other factors in the Netcool/Impact JVM can contribute to
increase and decrease of free memory. However, looking at
several measurements over time for a query can give some
insight into the memory usage of the data that is returned in
the query.
Parameters for Built-in Functions checkbox
Logs the values of the parameters that are passed into
each built-in Netcool/Impact function before and after
the function execution. It also logs the DataItem and
DataItems in the policy context before and after the
function call. This feature also logs the number of rows and
memory usage of the queries.
Local Variables on Exit of Custom Functions checkbox
Logs the values of local variables upon exit of a custom (user
defined) function. For custom functions, it logs any variable
that is defined locally in the function.
Full Current Context on Entry and Exit of Custom Functions
checkbox
Logs the full context on entry and exit of all custom (user
defined) functions. This approach is equivalent to calling
Log(currentcontext()) just after entry and just before
exit of the function, which can result in verbose logs.
Logging of DataItems or all variables in currentcontext(),
displays the variable type after its value.
Warning: Setting anything in the Log what section has
the potential to cause a major load on the system,
especially if you have the NOI Extensions installed.
This can include 100% CPU usage. Extra logging should
only be enabled on a temporary basis and should be
unselected when debug is complete.

Policy Profiling: Enable Select to enable policy profiling. Policy profiling calculates the
total time that it takes to run a policy and prints this time to the
policy log.
You can use this feature to see how long it takes to process
variable assignments and functions. You can also see how long it
takes to process an entire function and the entire policy.

Service log (Write to file) Select to write log information to a file.


Append Thread Name to Log File Name Select this option to name the log file by appending the name of
the thread to the default log file name.

Append Policy Name to Log File Name Select this option to name the log file by appending the name of
the policy to the default log file name.

Collect Reports Select to enable data collection for the Policy Reports.
If you choose to enable the Collect Reports option, reporting
related logs are written to the policy logger file only when the log
level is set to 3.
To see reporting related logs for a less detailed logging
level for example, log level 1, the $IMPACT_HOME/etc/
<servername>_policylogger.props file can be customized
by completing the following steps:
1. Add impact.policylogger.reportloglevel=1
to the $IMPACT_HOME/etc/
<servername>_policylogger.props property.
2. Restart the Impact Server to implement the change.

Policy log files


You can use policy log files to provide a record of actions performed during the execution of a policy.
Multiple log files can be created as follows:
• 1 log file for each policy
• 1 log file for each thread in the event processor
• 1 log file for each policy for each thread
By default, a single policy log file is created.
Each log file is named by appending the name of the policy or the name of the thread to the default log file
name. For example:
• If you were to run a policy named POLICY_01 and you selected to create log files on a per policy basis,
the resulting log file would be named:

servername_Policy_01_policylogger.log

• If you selected to create log files on a per-thread basis, a possible log file name might be:

servername_Policy_02HttpProcessor [5104] [2]_policylogger.log

Where
HttpProcessor[5104] [2] is the name of the event processor thread where the policy is running on
a Red Hat Linux system.
• If you selected to create log files on a per policy per thread basis, the log file name might be:

servername_Policy_02HttpProcessor [5104] [2]_policylogger.log

Enabling multiple policy log files
Use this procedure to enable multiple policy log files.

Procedure
1. In the PolicyLogger Service Configuration window, click the Service Log: Write to File option.
2. Select either the Append Thread Name to Log File Name or the Append Policy Name to Log file
option, or both.

ITNM event listener service


The ITNM event listener service listens for events sent from ITNM.
After you install the ITNM DSA, you can optionally set up a ITNM event listener service. You only need
to set up the listener service if you want to listen for events asynchronously from ITNM. For more
information about ITNM TN or ITNM IP, see the guides for those products.

Configuring ITNM event listener service


Use this procedure to configure the ITNM listener service.

Procedure
1. Enter the required information in the service configuration window and save the configuration.
For information about the configuration options, see “ITNM event listener service configuration
window” on page 150.
2. Before you start the event listener service, first stop all ITNM and rvd processes and enter the
command:

$ITNM_HOME/bin/rvd -flavor
116

3. Restart ITNM.
4. Make sure that the ITNM event listener service is started so that you can receive events from ITNM.
(You have the option to have it start automatically when Netcool/Impact starts.)

ITNM event listener service configuration window


Use this information to configure the ITNM event listener service.

Table 100. ITNM Event Listener service configuration

Table element Description

Listener Filter Leave this field blank.

Policy to Execute Select the policy to run when an event is received from the ITNM
application. You can use the ITNMSampleListenerPolicy that was
installed when you installed Netcool/Impact to help you understand the
event listener functionality.

Name Service Host localhost

Name Service Port 4500

Name Service Context


Name Service Object Name

Direct Mode Class Name Set this to:

com.micromuse.dsa.precisiondsa.PrecisionEventFeedSource

Note: Copy this class name exactly as it is written here, with no extra
spaces.

Direct Mode Source Name Type a unique name that identifies the data source, for example,
ITNMServer.

Starts automatically when server starts Select to automatically start the service when the server starts. You can also start and stop the service from the GUI.

Configuring the ImpactDatabase service


The Netcool database server runs as a service in the Impact Server.

About this task


You can also edit the time to wait and host option properties for the embedded Derby Network Server.
For more information, see the topic about configuring the embedded Derby Network Server in the section
about managing the database server in the Administration Guide.

Procedure
1. Click Services to open the Services tab. Select the ImpactDatabase.
2. The ImpactDatabase service uses ImpactDB data source configuration settings, by default.
To change the port or any other configuration settings, you must stop the service and then edit the
ImpactDB data source.
3. Enter the replication port for the Derby backup host in the Replication Port field.
4. Select Starts automatically when server starts to automatically start the service when the server starts. You can also start and stop the service from the GUI.
5. Select Service log (Write to file) to write log information to a file.
6. Click Save to implement the changes.

Self monitoring service


The self monitoring service is used to send messages about the internal state of Impact Server to an
ObjectServer.
If the event readers and listeners (OMNIbusEventReader, OMNIbusEventListener, DatabaseEventReader, DatabaseEventListener, JMSMessageListener) are running, the self monitoring service sends status events about their event queues.
The self monitoring service provides the following types of monitoring:
Cluster monitoring
When Impact is running as a cluster, it provides information as to which server is the current primary
and the current secondary. Impact also sends updates when the primary has gone down and one of
the secondary servers assumes the primary role.

Chapter 9. Working with services 151


Data source monitoring
Provides information about the active data sources used by Netcool/Impact. It also gives information
when the connection to the primary or backup host of the data source fails.
Memory and queue monitoring
Checks the heap utilization of the virtual machine used by Netcool/Impact and also the available
system memory of the system where Impact is running at selected intervals and sends that
information to ObjectServer as an event. Events warn users by severity level of conditions such as
maximum heap utilization or insufficient system memory.
At intervals, Netcool/Impact checks to see whether it is approaching the maximum amount of
available memory or whether the queue size is growing at a rate that exceeds a certain number. If so,
the severity of the condition is determined and a corresponding event is sent to the ObjectServer. You
can configure self monitoring to deduplicate the events, or send a new event to the ObjectServer every
time a low memory or growing queue size condition occurs.
Service monitoring
When enabled, the service sends events to the ObjectServer when a service has started or stopped.
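The memory and queue monitoring described above can be sketched roughly as follows. This is an illustration only, not Impact's actual implementation; the threshold ratios are invented for the example, and the severity values follow the ObjectServer convention where 5 is critical.

```python
# Illustrative sketch only: map heap utilization to an OMNIbus-style
# severity, the way a periodic self-monitoring check might. The 80% and
# 95% thresholds are invented for this example.
def memory_severity(used_bytes, max_bytes,
                    warn_ratio=0.80, critical_ratio=0.95):
    """Return 0 (clear), 4 (major), or 5 (critical) for heap utilization."""
    ratio = used_bytes / max_bytes
    if ratio >= critical_ratio:
        return 5
    if ratio >= warn_ratio:
        return 4
    return 0

print(memory_severity(700, 1000))  # 0: plenty of headroom
print(memory_severity(850, 1000))  # 4: approaching the maximum
print(memory_severity(990, 1000))  # 5: nearly out of memory
```

An event with the resulting severity would then be sent to the ObjectServer at each interval, or deduplicated, depending on the service configuration.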

Configuring the self monitoring service


Use this information to configure the self monitoring service.

Table 101. Self Monitoring Service window

Window element Description

ObjectServer Data Source Select the ObjectServer that you want to use to send events.

Memory Status: Enable Select to send status events regarding memory usage of the Impact
Server.

Memory Interval Select or type (in seconds) how often the service must send memory
status events to the ObjectServer.

Deduplication Deduplication is enabled by default. See the Netcool/OMNIbus library for information about deduplication of events.

Queue Status: Enable Select to enable the service to send events about the status of the
event readers, listeners and EventProcessor.

Queue Interval Select or type (in seconds) how often the service must send queue
status events.

Deduplication Deduplication is enabled by default. See the Netcool/OMNIbus library for information about deduplication of events.

Cluster Status: Enable Select to enable the service to send events about the status of the
cluster to which it belongs. It sends events when:
• An Impact Server is started and joins the cluster
• A server is stopped and removed from the cluster
• A primary server is down and a secondary server becomes the new
primary



Data Source Status: Enable Select to enable the service to send the status when certain
conditions occur with a data source.
For example, the service sends a status message when a user tests
a connection to a data source or when a connection cannot be
established.

Service Status: Enable To enable service monitoring, select this check box and start the self-
monitoring service. The self-monitoring service sends service status
events to the ObjectServer.

Starts automatically when server starts Select to automatically start the service when the server starts. You can also start and stop the service from the GUI.

Service log (Write to file) Select to write log information to a file.

Note: Restart the service to apply the above changes.

Database event reader service


The database event reader is a service that polls supported, external SQL data sources at regular intervals
to get business events in real time.
It retrieves rows from a table, then converts the rows to event format, and passes them to Netcool/
Impact for processing. The data source can be any of the supported SQL data sources. Conceptually, it is
similar to the OMNIbus Event Reader, which polls the ObjectServer to get network fault events.

Configuring the database event reader service


Use this procedure to configure the database event reader service.

Procedure
1. Select the project for which you want to create the service.
2. From the Service Type list, select DatabaseEvent Reader to open the service configuration window.
The DatabaseEventReader Configuration window has two tabs, General Settings and Event Mapping.
3. Enter the required information in the General settings tab of the configuration window.
For information about general settings options, see “Database event reader configuration window -
general settings” on page 154.
4. Enter the required information in the Event Mapping tab of the configuration window.
For information about event mapping options, see “Event mapping” on page 137.
Note: If a service uses a Data Source for which the IP address or hostname has changed, you need to
restart the service.



Database event reader configuration window - general settings
Use this information to configure the general settings of the database event reader.

Table 102. Database event reader configuration window - General Settings tab

Window element Description

Service name Enter a unique name to identify the service.

Data Source Select an external data source from the list.


The data source must have a field that is guaranteed to be incremented
every time a new record is added to avoid rereading the entire table every
time the data source is accessed. If you want to use the GetUpdates
function in a policy for this data source, the table also must have a time
stamp field that is automatically populated when an insert or update
occurs.

Data Type After you select a data source, the system populates the data type field
with a list of data types created in Netcool/Impact corresponding to that
particular data source. Select a data type from the list.

Polling Interval Select or enter a polling time interval to establish how often you want the
service to poll the events in the event source. The polling time selections
are in milliseconds and the default value is 3000 milliseconds.

Restrict fields Click Fields to access a selection list with all the fields that are available
from the selected data source.
You can reduce the size of the query by selecting only the fields that you
need to access in your policy.

Starts automatically when server starts Select to automatically start the service when the server starts. You can also start and stop the service from the GUI.

Service log (Write to file) Select to write log information to a file.

Clear State When you click Clear, the internally stored values for the Key field and Timestamp field are reset to 0. This causes the event reader to retrieve
all events in the data source at startup and place them in the event queue
for processing.
You can only use Clear State to clear the event reader state when the
service is stopped. Clicking Clear while the service is running does not
change the state of the event reader.

Clear Queue Click Clear to enable the database event reader to delete unprocessed
events that it has fetched from an SQL data source.
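The Data Source requirements above (an always-incrementing key, plus an automatically maintained timestamp if you use GetUpdates) can be met with a table such as the following sketch. This is MySQL syntax, and the table and column names are invented for illustration:

```sql
-- Illustrative MySQL sketch: 'id' always increments on insert, and
-- 'updated_at' is maintained automatically on insert and update.
CREATE TABLE app_events (
    id         BIGINT AUTO_INCREMENT PRIMARY KEY,
    message    VARCHAR(255),
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
                         ON UPDATE CURRENT_TIMESTAMP
);
```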



Database event reader configuration window - event mapping
Use this information to configure event mapping for the database event reader.

Table 103. DatabaseEvent Reader window - Event Mapping tab


Window element Description

Test events with all filters If an event matches more than one filter, trigger all policies that match
the filtering criteria.

Stop testing after first match Select to trigger only the first matching policy.

Actions: Get updated events Select to receive events that have been updated (all new events are
automatically sent).

Time Stamp Field If the database event reader is configured to get updated events, both
the TimeStamp field and the Key field must be configured correctly.
• The TimeStamp field must point to a column in the external database
table that is automatically populated with a timestamp when an insert
or update occurs.
• The Key field must point to a column which uniquely identifies a row
(it does not have to be an automatically incremented field).
If the date/time format of the timestamp field in the external database
is different from the default pattern of dd-MMM-yy hh.mm.ss.SSS, a
property named formatpattern must be added to the database event
reader properties file to match the date/time format.
Example:
impact.[DatabaseEventReaderName].formatpattern=dd-MMM-yy hh.mm.ss.SSS aaa
When the Get updated events checkbox is not selected, the
TimeStamp field does not have to be configured, but the Key field must
in this case be an automatically incremented numeric field.
Note: The Database Event Reader supports a TimeStamp database
field in UNIX Epoch format. The following property must be added to
the database event reader properties file:

impact.[DatabaseEventReaderName].formatpattern=epoch

// formatpattern-> default value:dd-MMM-yy hh.mm.ss.SSS


// valid values:dd-MMM-yy hh.mm.ss.SSS/epoch

Stop the database event reader service from the GUI and click the
Clear State button.
Add the property and restart Impact for the new property to take effect.

Key Field See Time Stamp Field.

New Mapping: New Click to add a new filter. For information about entering filter syntaxes, see the Working with filters section of the Policy Reference Guide.

Analyze Event Mapping Table Click this icon to display any conflicts with filter mappings that you have
set for this service.



Configuring number of rows in the database event reader select query
Within the select query for the database event reader, by default there is a value of 1000 specified for the
number of rows to fetch. You can configure this value.

Procedure
1. Edit the database event reader properties file $IMPACT_HOME/etc/<server name>_<database
reader name>.props.
2. In the event reader properties file, add or update the property impact.<database readername lower case>.maxtoreadperquery=<number of rows to return>, where <number of rows to return> is the number of rows the select query fetches at one time.
If the value is set to 0, the select query returns all the rows at one time, impact.<database
readername lower case>.maxtoreadperquery=0
If the value is set to 1000, the select query returns 1000 rows at one time, impact.<database
readername lower case>.maxtoreadperquery=1000
Example MySQL

maxtoreadperquery > 0 : SELECT * FROM CUSTOMERS WHERE NAME LIKE 'IBM' LIMIT 1000;
maxtoreadperquery = 0: SELECT * FROM CUSTOMERS WHERE NAME LIKE 'IBM';

3. Restart the Impact server. For more information about restarting the Impact server, see Stopping and
starting the Impact Server in the Administration Guide.
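The effect of maxtoreadperquery can be sketched with a small stand-in in Python using an in-memory SQLite table. The table and column names are invented; the real reader issues LIMIT queries like the MySQL example above:

```python
import sqlite3

# Build a small in-memory table standing in for the external SQL data source.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE CUSTOMERS (ID INTEGER PRIMARY KEY, NAME TEXT)")
conn.executemany("INSERT INTO CUSTOMERS (NAME) VALUES (?)", [("IBM",)] * 7)

def read_batches(conn, max_to_read_per_query):
    """Yield rows in batches, resuming from the last key seen,
    the way a reader limited by maxtoreadperquery would."""
    last_key = 0
    while True:
        if max_to_read_per_query == 0:
            # A value of 0 means fetch everything in one query.
            rows = conn.execute(
                "SELECT ID, NAME FROM CUSTOMERS WHERE ID > ? ORDER BY ID",
                (last_key,)).fetchall()
        else:
            rows = conn.execute(
                "SELECT ID, NAME FROM CUSTOMERS WHERE ID > ? ORDER BY ID LIMIT ?",
                (last_key, max_to_read_per_query)).fetchall()
        if not rows:
            break
        yield rows
        if max_to_read_per_query == 0:
            break
        last_key = rows[-1][0]

batches = list(read_batches(conn, 3))
print([len(b) for b in batches])  # prints: [3, 3, 1]
```

With 7 rows and a limit of 3, the reader needs three queries; with the limit set to 0, everything is fetched in a single query.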

Email reader service


The email reader service reads incoming email, and runs policies that are based on the contents of the
email.
The email reader service polls a specified POP or IMAP host for email messages. The service reads email
from a mailbox at intervals that you define when the service is created. The service is commonly used
in escalation and notification policies to look for responses to email notifications that are sent out by
Netcool/Impact.
If the number of emails that are waiting to be read from the email reader service is more than 25, the timeout value increases automatically. When the number of emails waiting to be read returns to less than 25, the timeout value is reset to the default value or the value that is specified in the service property file.
You can use this default service instead of creating your own, or in addition to creating your own.
To stop the email reader from deleting emails after you read them, add the following property to the
<EmailReaderName>.props file.

<emailreadername>.deleteonread=false

Where <emailreadername> is the email reader service name. Restart the service. This only works for
IMAP email servers.

Configuring the email reader service


Use this information to configure the email reader service.

Table 104. Create New email Reader Service Configuration window

Window element Description

Service name Enter a unique name to identify the service.



Host: Type the mail host name.

Protocol: Select one of the following options from the drop-down menu: POP3 or
IMAP.

Port: Select the port to connect to the mail server. The default POP3 port is
110. The default IMAP port is 143.

Log in As: Type a login name. The default value is the value that you use to log on
to Netcool/Impact.

Password: Type your password. The letters that you type are replaced with
asterisks.

Polling Interval: Select how often (in seconds) the service polls the POP or IMAP host for
new email messages.

Policy Name: Select a policy to run for this event.

Email Body (ignore) The email reader processes the body of the email as if it were a policy.
If the body of the email is in IPL syntax, then when the email is
received, the contents of the body is run as a policy. The policy that
is associated with the Email Reader service is run separately. Select this
check box if you do not want to run the contents of the email as a policy.
Restart the service to implement the changes.

Starts automatically when server starts Select to automatically start the service when the server starts. You can also start and stop the service from the GUI.

Service log (Write to file) Select to write log information to a file.

SSL Select the SSL check box for an SSL connection to the mail server. Next,
refer to the Security>Enabling SSL connections with external servers
section of the documentation to complete the SSL certificate import.
Note: By default, SSL connections from Netcool/Impact to mail servers
use the most secure protocol supported by the mail server. However,
you can use any version of the TLS protocol. This applies to both email
reader and email sender services in Netcool/Impact. If you want to
restrict which protocols are enabled by Impact for SSL connections
to a mail server, you can add a property to the service properties file
called impact.<service_name>.secureprotocols. The value of
this property can be a comma-separated list of allowed protocols, for
example TLSv1.1,TLSv1.2 or just TLSv1.2.

Use OAuth The email reader service supports OAuth authentication. Select the Use OAuth check box to enable OAuth authentication.



OAUTH DataSource Name Select the name of the OAuth data source from the OAUTH DataSource
Name drop-down menu.
Note: You have to create the data source before you can select it here.
See “Creating an OAuth data source” on page 61.

Parsing emails with the email reader service


This example shows how to use the email reader service to receive an email. The email reader service
runs a policy, which parses the email content and queries the ObjectServer for all critical events and
sends the results in an email to the relevant user.

Before you begin


• Create a table with the following schema in any database that you have access to through the Netcool/
Impact data sources.

Table 105. Database schema for parsing email


Table Name Fields
email_auth name, email

When you create a table, add a data type that points to the table and call it email_auth. Check the name
field as the key for the data type. Insert some sample data into the table for testing purposes. This table
must contain records only for authorized email addresses.
• A POP3 or IMAP email account must exist.
• You must know the POP3 or IMAP server, user name, and password for this account.
• A data type must exist for the alerts.status table of the ObjectServer that Netcool/Impact is reading
from. In this example, the data type is called OS_NCOMS.
• The email reader service must be configured, see the “Configuring the email reader service” on page
156.
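The email_auth table and a sample row described above might be created like this. This is a generic SQL sketch; the column sizes and the sample values are invented:

```sql
-- Sketch of the email_auth schema described above; sizes are assumptions.
CREATE TABLE email_auth (
    name  VARCHAR(64)  NOT NULL PRIMARY KEY,
    email VARCHAR(128) NOT NULL
);

-- Sample data for testing: one authorized sender.
INSERT INTO email_auth (name, email) VALUES ('Jane Doe', 'jane.doe@example.com');
```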

About this task


After you start the EmailReader service, it runs this policy example, which parses the @From, @Subject,
and @Body of each incoming email. The policy checks that the sender of the email has the authority
to send emails to Netcool/Impact by querying the email_auth table. If so, the email reader parses the
subject to determine what the sender wants to do.
In this case, the only command is to query the ObjectServer. The sender types cmd:query in the subject
of the email, and types query:critical in the body. The command query:critical must be the
only content in the body. The command makes Netcool/Impact query the ObjectServer for all critical
events and send an email to the sender with the results.
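The subject and body parsing relies on regular-expression extraction (Rextract). As a quick illustration of what those two regexes capture, here is the same extraction done with Python's re module on invented sample strings:

```python
import re

# The same two patterns the example policy passes to Rextract,
# applied to invented sample subject and body strings.
subject = "Status cmd:query"
body = "query:critical"

command = re.match(r".*cmd:(.*)", subject).group(1)
query = re.match(r".*query:(.*)", body).group(1)
print(command)  # prints: query
print(query)    # prints: critical
```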

Example
This policy example uses IPL.

// Extract the command from the Subject. Commands should be preceded by cmd: to be
// treated as a command.

Command = Rextract(@Subject, ".*cmd:(.*)");

// Extract the name of the sender. This is used to determine if the user has the
// authority to query the Object Server. Build the filter for the lookup into the
// EmailAuth datatype. Then perform the lookup.

Sender = Rextract(@From, "([A-Za-z]+ [A-Za-z]+) .*");

filter = "name='" + Sender + "'";
Authorized = GetByFilter("email_auth", filter, false);
numAuth = Length(Authorized);

// If the sender has authorization, evaluate the Subject of the email to see if
// it contains a valid command.

If (numAuth == 1) {

  // If the word query is parsed out of the Subject at the beginning of the policy,
  // check the body for a valid query.

  If (Command like "query.*") {

    // Strip out new line statements to get the body in one long string, then extract
    // the query from the Body. Queries should be preceded by query: to be treated
    // as a query. If the body contains critical, then query the Object Server
    // for all events where Severity = 5.

    Body = Strip(@Body, "\n");
    Body = Rextract(Body, ".*query:(.*)");

    If (Body like ".*critical.*") {
      crits = GetByFilter("OS_NCOMS", "Severity = 5", false);
      numCrit = Length(crits);

      // Format the current time, then build a message to send to the sender.

      querytime = LocalTime(GetDate(), "MM/dd/yyyy KK:mm:ss a");

      message = "There are " + numCrit + " critical events in the Object";
      message = message + "Server as of " + querytime + ".";
      SendEmail(null, Authorized[0].email, "Query Results", message, "[email protected]", false);
    } Else {

      // If the query isn't handled, send a message informing sender
      // that the query is invalid.

      message = "Invalid query sent to Impact server";
      SendEmail(null, Authorized[0].email, "Query Results", message, "[email protected]", false);
    }
  } Else {

    // If the command in the Subject is invalid, send an email notifying the sender.

    message = "Invalid Command sent to Impact server";
    SendEmail(null, Authorized[0].email, "Query Results", message, "[email protected]", false);
  }
}

Event listener service


Event listeners monitor non-ObjectServer event source events.
Event listener services typically work with DSAs that allow bi-directional communication with a data source. If you need to configure the event listener service to work with your DSA, refer to the documentation for that DSA for the configuration procedure.

Configuring the event listener service


Use this information to configure the event listener service.

Table 106. EventListener service configuration window

Table element Description

Service name Enter a unique name to identify the service.

Listener Filter Leave this field blank.



Policy to Execute Select the policy to run when an event is received from the database
server.

Name Service Host Type in the name of the service host.

Name Service Port Provide the port over which the name service host is accessed.

Name Service Context Type in the name service context.

Name Service Object Name Type in the name of the service object.

Direct Mode Class Name Type in the direct mode class name.

Direct Mode Source Name Provide a unique name that identifies the data source.

Starts automatically when server starts Select to automatically start the service when the server starts. You can also start and stop the service from the GUI.

JMS message listener


The Java Message Service (JMS) message listener service runs a policy in response to incoming messages
that are sent by external JMS message providers.
The message provider can be any other system or application that can send JMS messages. Each JMS
message listener listens to a single JMS topic or queue. There is one default JMS message listener named
JMSMessageListener. You can create as many listener services as you need, each of which listens to a
different topic or queue.
A JMS message listener is only required when you want Netcool/Impact to listen passively for incoming
messages that originate with JMS message producers in your environment. You can actively send and
retrieve messages from within a policy without using a JMS message listener.

JMS message listener service configuration properties


You can configure the properties for the Java Message Service (JMS) listener service.

Table 107. JMSMessageListener Service configuration window


Window element Description

Service name Enter a unique name to identify the service.

Policy To Execute Select the policy that you created to run in response to incoming
messages from the JMS service.

JMS Data Source JMS data source to use with the service.
You need an existing and valid JMS data source for the
JMS Message Listener service to establish a connection with
the JMS implementation and to receive messages. For more
information about creating JMS data sources, see “JMS data
source configuration properties” on page 66.




Message Selector The message selector is a filter string that defines which
messages cause Netcool/Impact to run the policy specified in the
service configuration. You must use the JMS message selector
syntax to specify this string. Message selector strings are similar in
syntax to the contents of an SQL WHERE clause, where message
properties replace the field names that you might use in an SQL
statement.
The content of the message selector depends on the types and
content of messages that you anticipate receiving with the JMS
message listener. For more information about message selectors,
see the JMS specification or the documentation distributed with
your JMS implementation. The message selector is an optional
property.

Durable Subscription: Enable You can configure the JMS message listener service to use
durable subscriptions for topics that allow the service to receive
messages when it does not have an active connection to the
JMS implementation. A durable subscription can have only one
active subscriber at a time. Only a JMS topic can have durable
subscriptions.
Note: Since a durable connection can have only one active
subscriber at a time, in a cluster configuration during failover and
failback, a delay/pause can be configured. The delay/pause allows
the service to shut down on the other cluster members during
failover/failback.
The delay/pause is configured in the jmslistener properties file using the durablejmspause property, for example: impact.<jmslistenerservicename>.durablejmspause=30000. The durablejmspause property defines the time in milliseconds, so impact.<jmslistenerservicename>.durablejmspause=30000 defines a pause of 30 seconds.

Client ID Client ID for the durable subscription. It defines the client identifier value for the connection. It must be unique in the JMS implementation.

Subscription Name Subscription name for the durable subscription. Uniquely identifies the subscription made from the JMS message listener to the JMS implementation. If this property is not set, the name of the JMS DSA listener service itself is used as its durable subscription name, which is JMSMessageListener by default.

Clear Queue Clears messages waiting in the JMSMessageListener queue that have not yet been picked up by the EventProcessor service. Do not clear the queue while the service is running.

Starts automatically when server starts Select to automatically start the service when the server starts. You can also start and stop the service from the GUI.

Service log (Write to file) Select to write log information to a file.
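The Message Selector field described above uses the SQL-WHERE-like JMS selector syntax. For example, a selector that runs the policy only for messages carrying certain properties might look like the following, where Severity and Source are hypothetical message properties chosen only to illustrate the syntax:

```
Severity > 3 AND Source = 'NETWORK'
```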



OMNIbus event listener service
The OMNIbus event listener service is used to integrate with Netcool/OMNIbus and receive immediate
notifications of fast track events.
The OMNIbus event listener is used to get fast track notifications from Netcool/OMNIbus through the
Accelerated Event Notification feature of Netcool/OMNIbus. It receives notifications through the Insert,
Delete, Update, or Control (IDUC) channel. To set up the OMNIbus event listener, you must set its
configuration properties through the GUI. You can use the configuration properties to specify one or more
channels for which events get processed and also one or more policies that are to be run in response to
events received from Netcool/OMNIbus.
Important:
• The OMNIbus event listener service works with Netcool/OMNIbus 7.3 and later to monitor ObjectServer
events.
• If the Impact Server and OMNIbus server are in different network domains, for the OMNIbus event
listener service to work correctly, you must set the Iduc.ListeningHostname property in the
OMNIbus server. This property must contain the IP address or fully qualified host name of the OMNIbus
server.
For more information about Netcool/OMNIbus triggers and accelerated event notification, and the
Iduc.ListeningHostname property in the OMNIbus server, see the Netcool/OMNIbus Administration
Guide available from this website:
Tivoli Documentation Central
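For illustration, the Iduc.ListeningHostname property is set in the ObjectServer properties file. The host name below is invented, and you should confirm the exact value format against the Netcool/OMNIbus Administration Guide:

```
# Invented example: the OMNIbus server's fully qualified host name or IP address.
Iduc.ListeningHostname: "omnihost.example.com"
```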

Setting up the OMNIbus event listener service


Use this procedure to create the OMNIbus event listener service.

Procedure
1. Click Services to open the Services tab.
2. If required, select a cluster from the Cluster list.
3. Click the Create New Service icon in the toolbar and select OMNIbusEventListener to open the
configuration window.
4. Enter the required information in the configuration window.
5. Click the Save icon in the toolbar to create the service.
6. Start the service to establish a connection to the ObjectServer and subscribe to one or more IDUC
channels to get notifications for inserts, updates, and deletes.

Configuring the OMNIbus event listener service


Use this information to configure the OMNIbus event listener service.
1. Select an OMNIbus ObjectServer data source. Make sure that your data source has a configured, and
valid connection to an ObjectServer. You can use the default ObjectServer data source that is created
during the installation, defaultobjectserver.
2. Add one or more channels from which Netcool/Impact processes events.
• If you want to subscribe to more than one channel, add a comma between each channel name. If
you edit the channels for example change the channel name or add or remove one or more entries,
you must restart the OMNIbusEventListener service to implement the changes.
• You can create more than one instance of the OMNIbusEventListener service. For example, you
can create one OMNIbusEventListener service that subscribes to multiple channels and another
OMNIbusEventListener service that subscribes to one channel.
3. Select Test events with all filters to test events with all filters and run any matching policies. When an event matches more than one filter, all policies that match the filtering criteria are triggered.



4. Select Stop testing after first match to trigger only the first policy whose filter matches the event.
5. Click New Mapping: New to create an event filter. This filter specifies the type of event that maps to
the policy. For more information, see “Creating event filters” on page 137.
6. Click Analyze Event Mapping Table to check the validity of event filters. For more information, see
“Filter analysis” on page 141.
7. Select the Starts automatically when server starts check box to automatically start the service when the server starts. You can also start and stop the service from the GUI.
8. Select Service log (Write to file) to write log information to a file.
Related concepts
“Filter analysis” on page 141
By analyzing the event mapping table you can check the syntax and scope of all your event filters.
Related reference
“Configuring an event filter” on page 137
Use this information to add filter expressions and configure the event filter.

OMNIbus event reader service


OMNIbus event readers are services that monitor a Netcool/OMNIbus ObjectServer event source for new,
updated, and deleted alerts and then run policies when the alert information matches filter conditions
that you define.
The event reader service uses the host and port information of a specified ObjectServer data source so
that it can connect to an ObjectServer to poll for new and updated events and store them in a queue.
The event processor service requests events from the event reader. When an event reader discovers new,
updated, or deleted alerts in the ObjectServer, it retrieves the alert and sends it to an event queue. Here,
the event waits to be handled by the event processor.
You configure this service by defining a number of restriction filters that match the incoming events,
and passing the matching events to the appropriate policies. The service can contain multiple restriction
filters, each one triggering a different policy from the same event stream, or it can trigger a single policy.
You can configure an event reader service to chain multiple policies together to be run sequentially when
triggered by an event from the event reader.
Important: Before you create an OMNIbus event reader service, you must have a valid ObjectServer data
source to which the event reader will connect to poll for new and updated events.

Configuring the OMNIbus event reader service


You can configure the following properties of an OMNIbus event reader.
• Event reader name
• ObjectServer event source you want the event reader to monitor
• Interval at which you want the event reader to poll the ObjectServer
• Event fields you want to retrieve from the ObjectServer
• Event mapping
• Event locking
• Order in which the event reader retrieves events from the ObjectServer
• Start up, service log, and reporting options
Note: If a service uses a Data Source for which the IP address or hostname has changed, you need to
restart the service.

Chapter 9. Working with services 163


Creating a new OMNIbus event reader from the command line
You can create a new OMNIbus event reader from the command line.
To create an OMNIbus event reader from the command line, use the following steps:
1. Create an XML file called create_service.xml using the following lines:

<?xml version="1.0" encoding="utf-8"?>

<project name="create_service" default="createService" basedir="." xmlns:if="ant:if"
         xmlns:unless="ant:unless">

  <taskdef name="impactHttp"
           classname="com.ibm.tivoli.impact.install.taskdef.ImpactHttpUtils"
           classpath="${impact.home}/install/configuration/cfg_scripts/taskdefs/install-taskdefs.jar"
           onerror="report"/>

  <target name="createService">

    <!-- To add the service without filters, add the following property:
         "addWithoutFilters": "true" -->
    <!-- For filters: modify the "items" section in "EVENTMAPPINGS" with your policies
         and filters -->
    <property name="newService" value='
      {"isNew": "true",
      "GETUPDATEDEVENTSACTION": false,
      "EVENTLOCKINGENABLED": false,
      "GETDELETEDEVENTSACTION": false,
      "RUNPOLICYONDELETESACTION": "AddPolicyProcessMapping",
      "EVENTLOCKINGEXPRESSION": "",
      "STARTUPENABLED": false,
      "SERVICECLASS": "OMNIbusEventReader",
      "EVENTMAPPINGS": {
        "layout": [{
          "encode": true,
          "field": "RESTRICTIONFILTER",
          "name": "Restriction Filter"
        }, {
          "field": "POLICYNAME",
          "name": "Policy Name"
        }, {
          "field": "ACTIVE",
          "name": "Active"
        }, {
          "field": "CHAIN",
          "name": "Chain"
        }],
        "identifier": "id",
        "label": "id",
        "items": [
          {
            "CHAIN": false,
            "ACTIVE": true,
            "id": 1,
            "POLICYNAME": "AddPolicyProcessMapping",
            "RESTRICTIONFILTER": "1=1"
          },
          {
            "CHAIN": false,
            "ACTIVE": true,
            "id": 2,
            "POLICYNAME": "DefaultExceptionHandler",
            "RESTRICTIONFILTER": "1=1"
          }
        ]
      },
      "SELECTEDFIELDS": {
        "identifier": "name",
        "label": "name",
        "items": [{
          "name": "*"
        }]
      },
      "SERVICENAME": "${service.name}",
      "DATASOURCENAME": "defaultobjectserver",
      "AVAILABLEFIELDS": {
        "identifier": "name",
        "label": "name",
        "items": []
      },
      "POLLINGINTERVAL": 3000,
      "SERVICELOGENABLED": true,
      "COLLECTREPORTSENABLED": false,
      "EVENTMATCHING": 0,
      "ORDERBY": "",
      "FAILEDDISCOVERY": true,
      "GETSTATUSEVENTSACTION": false
      }'/>

    <echo message="Creating the Event Reader Service: "/>

    <impactHttp url="http://${impact.hostname}:${impact.port}/restui/serviceui/service/omnibuseventreader/${service.name}"
                method="POST" body="${newService}"
                username="${impact.user}" password="${impact.pass}">
      <header name="Content-Type" value="application/json"/>
    </impactHttp>
  </target>

</project>

2. Place the file in your home directory on the Impact server system.
3. Execute the following command from the $IMPACT_HOME/bin directory:

./nc_ant -buildfile /home/tivoli/create_service.xml -Dimpact.home="/opt/IBM/tivoli/impact_server"
-Dimpact.hostname="vacuum1.fyre.ibm.com" -Dimpact.port="9090"
-Dimpact.user="impactadmin" -Dimpact.pass="impactpass" -Dservice.name="OMNIbusEventReader2"

Change the following parameters for your environment: impact.home, impact.hostname, impact.port,
impact.user, impact.pass, and service.name.
You can also change the contents of the XML file to modify other parameters as required.
Note: You will need to modify EVENTMAPPINGS for your environment.
4. Execute the following command from the $IMPACT_HOME/bin directory:

./nci_version_control <ImpactServerName> add $IMPACT_HOME/etc/
<ImpactServerName>_<yourservicename_in_lowercase>.props impactadmin

Where impactadmin is a valid Impact administrative user.
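The Ant build file above ultimately issues one authenticated HTTP POST to the Impact REST UI. As a rough cross-check, the following Python sketch builds the same request. The hostname, port, and credentials are placeholders, the payload is abbreviated to a few of the keys shown in create_service.xml, and the use of HTTP basic authentication is an assumption based on the username and password attributes of the impactHttp task.

```python
import base64
import json
from urllib.request import Request

def build_create_service_request(host, port, user, password, service_name):
    """Build the POST request that create_service.xml sends via impactHttp.

    The service name is part of the URL path; the body is the JSON service
    definition. Only a few of the payload keys from the Ant file are shown.
    """
    url = (f"http://{host}:{port}/restui/serviceui/service/"
           f"omnibuseventreader/{service_name}")
    payload = {
        "isNew": "true",
        "SERVICECLASS": "OMNIbusEventReader",
        "SERVICENAME": service_name,
        "DATASOURCENAME": "defaultobjectserver",
        "POLLINGINTERVAL": 3000,
        # ...remaining keys exactly as in create_service.xml...
    }
    # Assumption: impactHttp's username/password map to HTTP basic auth.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return Request(url,
                   data=json.dumps(payload).encode("utf-8"),
                   headers={"Content-Type": "application/json",
                            "Authorization": f"Basic {token}"},
                   method="POST")

req = build_create_service_request("impact.example.com", 9090,
                                   "impactadmin", "impactpass",
                                   "OMNIbusEventReader2")
print(req.get_method(), req.full_url)
```

The request is only constructed here, not sent; pass it to urllib.request.urlopen (or adapt it for another HTTP client) to actually create the service.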

OMNIbus event reader service General Settings tab


Use this information to configure the general settings of the OMNIbus event reader service.

Table 108. EventReader service - general settings tab


Table Element Description

Service name Enter a unique name to identify the service.

Data Source Select an OMNIbusObjectServer data source. The ObjectServer data source
represents the instance of the Netcool/OMNIbus ObjectServer that you
want to monitor using this service. You can use the default ObjectServer
data source that is created during the installation, defaultobjectserver.

Polling Interval The polling interval is the interval in milliseconds at which the event reader
polls the ObjectServer for new or updated events.
Select or type how often you want the service to poll the events in the
event source. If you leave this field empty, the event reader polls the
ObjectServer every 3 seconds (3000 milliseconds).




Restrict Fields You can complete this step when you have saved the
OMNIbusEventReader service. You can specify which event fields you
want to retrieve from the ObjectServer. By default, all fields are retrieved
in the alerts. To improve OMNIbus event reader performance and reduce
the performance impact on the ObjectServer, configure the event reader to
retrieve only those fields that are used in the corresponding policies.
Click the Fields button to access a list of all the fields available from the
selected ObjectServer data source.
You can reduce the size of the query by selecting only the fields that you
need to access in your policy. Click the Optimize List button to implement
the changes. The Optimize List button becomes enabled only when the
OMNIbusEventReader service has been saved.

Starts automatically when Select to automatically start the service when the server starts. You can
server starts also start and stop the service from the GUI.

Service log (Write to file) Select to write log information to a file.

Collect Reports Select to enable data collection for the Policy Reports.

Clear State When you click the Clear State button, the Serial and StateChange
information stored for the event reader is reset to 0. The event reader
retrieves all events in the ObjectServer at startup and places them in the
event queue for processing. If the event reader is configured to get updated
events, it queries the ObjectServer for all events where StateChange >=
0. Otherwise, it queries the ObjectServer for events where Serial > 0.
You can use the Clear State button only to clear the event reader state
when the service is stopped. Clicking the button while the service is
running does not change the state of the event reader.

Clear Queue Click to clear unprocessed events.

OMNIbus event reader service Event Mapping tab


In the Event Mapping tab, you set events to trigger policies when they match a filter.

Table 109. Event Mapping tab


Window element Description

Test events with all filters Select this option to test events with all filters and
run any matching policies.
If an event matches more than one filter, all
policies that match the filtering criteria are
triggered.

Stop testing after first match Select this option to stop testing after the first
matching policy, and trigger only the first matching
policy.



Table 110. Actions on the Event Mapping tab
Window element Description

Get updated events Select to receive updated events and new
events from the ObjectServer. All new events are
automatically sent. For more information, see the
description of the Order By field.
If you do not select Get Updated Events, Netcool/
Impact uses Serial instead. You can configure the
OMNIbusEventReader service to fetch only new
events and to work with an ObjectServer failover/
failback pair in the eventreader.props file.
Important: Adding properties to the
eventreader.props file overrides selecting or
clearing the Get Updated Events check box in the
UI.
• If you plan to use this approach in an
ObjectServer failover scenario, see Managing the
OMNIbusEventReader with an ObjectServer pair
for New Events or Inserts
• If you do not select Get Updated Events,
Netcool/Impact uses the Serial field to query
Netcool/OMNIbus. Serial is an auto increment
field in Netcool/OMNIbus and has a maximum
limit before it rolls over and resets. For
information about how to set up Netcool/Impact to
handle Serial rollovers, see “Handling Serial
rollover” on page 171.

Get status events Select to receive the status events that the Self
Monitoring service inserts into the ObjectServer.

Run policy on deletes Select if you want the event reader to receive
notification when alerts are deleted from the
ObjectServer. Then, from the Policy list, select the
policy that you want to run when notification occurs.

Policy Enabled when you select the Run policy on
deletes option. From the Policy list, select the
policy that you want to run when notification occurs.




Event Locking: Enable Select if you want to use event order locking and
type the locking expression in the Expression field.
Event locking allows a multi-threaded event
processor to categorize incoming alerts based
on the values of specified alert fields and process
them one at a time.
With event locking enabled, if more than one
event exists with a certain lock value, then these
events are not processed at the same time. These
events are processed in a specific order in the
queue.
You use event locking in situations where you
want to prevent a multi-threaded event processor
from attempting to access a single resource from
more than one instance of a policy running
simultaneously.

Expression The locking expression consists of one or more
alert field names.
To lock on a single field, specify the field name, for
example:

Node

To lock on more than one field, concatenate the
field names with the + sign, for example:

Node+Severity

If the value of that field is the same in both events,
then one event is locked and the second thread
must wait until the first one is finished.

New Mapping Click to add a filter. See the "Configuring an event
filter" topic in the User Interface Guide.




Order by If you want to order incoming events that are
retrieved from the ObjectServer, type the name
of an alert field or a comma-separated list of
fields. The event reader sorts incoming events in
ascending order by the contents of this field.
This field or list of fields is identical to the contents
of an ORDER BY clause in an SQL statement. If
you specify a single field, the event reader sorts
incoming events by the specified field value. If you
specify multiple fields, the events are grouped by
the contents of the first field and then sorted within
each group by the contents of the second field, and
so on.
For example, to sort incoming events by the
contents of the Node field, type Node.
To sort events first by the contents of the Node
field and then by the contents of the Summary
field, type Node, Summary.
You can also specify that the sort order is
ascending or descending by using the ASC or DESC
key words.
For example, to sort incoming events by the
contents of the Node field in ascending order, type
the following: Node ASC.
All events retrieved from the ObjectServer are
initially sorted by either the Serial or StateChange
field before any additional sorting operations are
performed. If you select the Get Updated Events
option (see the Actions check boxes in the Event
Mapping section of the window), the events are
sorted by the StateChange field. If this option is
not specified, incoming events are sorted by the
Serial field.

Analyze Event Mapping Table Click to analyze the filters in the Event Mapping
table.

OMNIbus Event Reader event locking examples


The following examples explain how Event Locking works.
The Event Processing service receives events in blocks from the Event Reader service and places them
in a queue. These events are picked up as threads sequentially and sent to the respective policies for
processing.
Note: Event locking on a certain field or fields affects all event readers that lock on those fields.

Example of event locking on single field



In this example, event locking is enabled with the event locking expression set to Severity and then
configured with four threads. With event locking set on Severity, only one event with the same value of
Severity can be processed at any instant.
The Event Processor receives from the Event Reader events with the following severities:

3 4 3 5 4 4 2 3 5
F L
F: First Element in the Queue
L: Last Element in the Queue

Since the Event Processor has four threads configured, the first thread receives the first event with
Severity=3 from the queue and sends it to a policy for processing. The second thread receives
the event with Severity=4 and sends it to a policy for processing. Although two remaining threads
are available for processing, the next event Severity=3 cannot be processed because an event with
Severity=3 is already being processed (the first event in the queue). Until the processing of the first
event is complete, the other threads cannot begin, since they would violate the locking criteria.
If the thread that picked the second event in the queue (with Severity=4) finishes processing before the
first event, it waits along with the other two threads until the first event has finished processing. When the
thread that picked up the first event in the queue is finished, three threads pick up the third, fourth, and
fifth events from the queue, since they have different Severity values (3, 5, 4).
At this point, the remaining thread cannot pick up the next event (sixth in the queue) from the queue
because an event with the same Severity level (4) is already processing (fifth in the queue).
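The dispatch behavior in this example can be sketched in a few lines. This is an illustrative model only, not Impact source code: it walks the queue from the front and stops at the first event whose lock value (here, Severity) is already in use, which reproduces the head-of-line blocking described above.

```python
def dispatchable(queue, in_flight, free_threads):
    """Return the lock values of events that may start processing now.

    queue: lock values (Severity in this example) in FIFO order.
    in_flight: lock values of events currently being processed.
    Events start strictly in queue order; the first event whose lock value
    is already in use blocks every event behind it.
    """
    started, busy = [], set(in_flight)
    for value in queue:
        if len(started) == free_threads or value in busy:
            break  # either no free thread, or head-of-line blocking
        started.append(value)
        busy.add(value)
    return started

queue = [3, 4, 3, 5, 4, 4, 2, 3, 5]
# Initially all four threads are free: the first 3 and 4 start, but the
# second event with Severity=3 blocks the rest of the queue.
print(dispatchable(queue, in_flight=set(), free_threads=4))  # [3, 4]
# After the first two events finish, three free threads pick up 3, 5, 4.
print(dispatchable(queue[2:], in_flight=set(), free_threads=3))  # [3, 5, 4]
```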

Example of event locking on multiple fields

In the previous example, locking is on a single field, Severity. You can also lock on more than one field
by concatenating them with the plus (+) operator. If you lock, for example, on the Node and Severity
fields, you can use one of the following event locking expressions:

Node+Severity

or:

Severity+Node

Event locking on multiple fields works in the same way as locking on a single field, except that in
this instance, two events with the same combination of field values cannot be processed at the same instant.
For example, if two events have Node values of abc and xyz and both have a Severity of 5, they
can be processed simultaneously. The two events cannot be processed together only when the
combination of Node and Severity is the same for both, that is, when both events have Node set
to abc and Severity set to 5.
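In effect, a multi-field locking expression builds a composite lock key from the named fields. The following sketch (again an illustrative model, not Impact code) shows that idea with the Node and Severity fields used above:

```python
def lock_key(event, expression):
    """Return the lock key for an event given a locking expression.

    The expression is one or more field names joined by '+', for example
    'Node+Severity'. Two events may run concurrently unless their keys match.
    """
    return tuple(event[field] for field in expression.split("+"))

e1 = {"Node": "abc", "Severity": 5}
e2 = {"Node": "xyz", "Severity": 5}
e3 = {"Node": "abc", "Severity": 5}

# Different Node, same Severity: keys differ, so e1 and e2 may run together.
print(lock_key(e1, "Node+Severity") != lock_key(e2, "Node+Severity"))  # True
# Same Node and Severity: keys match, so e1 and e3 must be serialized.
print(lock_key(e1, "Node+Severity") == lock_key(e3, "Node+Severity"))  # True
```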

Forcing checkpointing after a specified number of minutes


There are cases when a network is down or communication between servers is unreliable. In such
circumstances, Impact may lose a block of events that were sent to the secondary server for processing.
If this happens, Impact holds the checkpoint of events, and the events themselves, in memory,
which may cause an OutOfMemory error.
Checkpointing means persisting the Serial or StateChange field of events to the etc/
eventreader.state file, so that an event reader knows whether it has handled a block of events.
For example, in the case where processing for a block is slow, you may see the following messages in the
logs:
INFO [EventBroker] AbstractEventReader: checkPoint: The Block ID = 248262 is
not the one I was expecting: 248260



INFO [EventBroker] Hold the events with identifier :248262 until earlier block
of events are processed
In this case event block 248260 has not reported back to the primary cluster member that its processing
is complete. It may still be being processed, or the confirmation may have been lost, possibly due to
network issues. The primary cluster member holds all events after this event block in memory, which may
cause an OutOfMemory error.
To avoid this problem, you can set the maxminutestoforcecheckpoint
property in the OMNIbus event reader properties file: $IMPACT_HOME/etc/
<servername>_<omnibuseventreadername>.props.
For example, add the following property:
impact.<omnibuseventreadername>.maxminutestoforcecheckpoint=5
This forces checkpointing to occur after the specified number of minutes. Impact server can then
continue processing and checkpointing events.
There are two possible reasons for missing checkpoints:
1. Missing checkpoint when events are processed successfully: An exception such as
NullPointerException may be thrown when the event reader checkpoints the block; the block is
not missing from processing, only its checkpoint is missing.
2. Missing checkpoint when event processing fails or times out: You can find the exception for these
events in the event processor logs and in impactserver.log.

Handling Serial rollover


How to set up Netcool/Impact to handle Serial rollover.

Symptoms
If you are not using the Get Updated Events option in the OMNIbus event reader service, Netcool/Impact
uses the Serial field to query Netcool/OMNIbus. Serial is an auto increment field in Netcool/OMNIbus and has a
maximum limit before it rolls over and resets.

Resolution
Complete the following steps to set up Netcool/Impact to handle Serial rollover:
1. Identify the OMNIbusEventReader that queries the Netcool/OMNIbus failover/failback pair. A Netcool/
Impact installation provides a reader called OMNIbusEventReader but you can create more instances
in the Services GUI.
2. Stop the Impact Server. In a Netcool/Impact clustered environment, stop all the servers.
3. Copy the sql file serialrotation.sql in the $IMPACT_HOME/install/dbcore/OMNIbus folder
to the machines where the primary and secondary instances of the ObjectServer are running. This
script creates a table called serialtrack in the alerts database and also adds a trigger called
newSerial to the default_triggers group.
4. Run this script against both the primary and secondary ObjectServer pairs.
• For UNIX based operating systems:

cat <path_to_serialrotation.sql> | ./nco_sql -server '<servername>' -user '<username>' -password '<password>'

For example, if serialrotation.sql is placed in the /opt/scripts folder and you want to run
this script against the ObjectServer instance NCOMS, connecting as the root user with no password,
the script can be run as:

cat /opt/scripts/serialrotation.sql | ./nco_sql -server 'NCOMS' -user 'root' -password ''



• For Windows operating systems:

type <path_to_serialrotation.sql> | isql.bat -S <servername> -U <username> -P <password>

For example, place the serialrotation.sql file in the OMNIHOME/bin folder and run this script
against the ObjectServer instance NCOMS, connecting as a root user with no password:

type serialrotation.sql | isql.bat -S NCOMS -U root -P

Make sure that -P is the last option. You can omit the password and enter it
when prompted instead. For information about Netcool/OMNIbus, see the IBM Tivoli Netcool/
OMNIbus Administration Guide available from the following website: https://www.ibm.com/support/
knowledgecenter/SSSHTQ/landingpage/NetcoolOMNIbus.html.

Further steps
When the script completes, make sure that you enable the newSerial trigger.
1. Start your Netcool/Impact server and the OMNIbusEventReader. In a clustered setup, start the primary
server first followed by all the secondary servers.
2. Log in to the Netcool/Impact GUI and create an instance of the DefaultPolicyActivator service. In the
Configuration, select the policy to trigger as SerialRollover and provide an interval at which that policy
gets triggered.
3. The SerialRollover policy assumes that the data source used to access Netcool/OMNIbus
is the defaultobjectserver and the event reader that accesses Netcool/OMNIbus is the
OMNIbusEventReader. If you are using a different data source or event reader, you must update the
DataSource_Name and Reader_Name variables in the policy accordingly.
4. Start the instance of the DefaultPolicyActivator service that you created.

Policy activator service


The policy activator service activates policies at startup or at the intervals you specify for each selected
policy.
This is a default service that you can use instead of creating your own, or in addition to creating your own.

Policy activator configuration


In a policy activator you can configure the policy activator name, the activation interval, the policy you
want to run at intervals, and the start up and logging options.
Note: If a service uses a Data Source for which the IP address or hostname has changed, you need to
restart the service.

Configuring the policy activator service


Use this information to configure the policy activator service.

Table 111. Create New Policy Activator Service configuration window

Window element Description

Service name Enter a unique name to identify the service.

Activation Interval Select how often (in seconds) the service must activate the policy.
The default value is 30 for the policy activator service that comes
with Netcool/Impact. When you create your own policy activator
service, the default value is 0.




Policy Select the policy you want the policy activator to run.

Starts automatically when server Select to automatically start the service when the server starts. You
starts can also start and stop the service from the GUI.

Service log (Write to file) Select to write log information to a file.



Chapter 10. Working with operator views
An operator view is a custom web-based tool that you use to view events and data in real time and to run
policies that are based on that data.

Viewing operator views


To view the basic and advanced operator views that are currently defined in IBM Tivoli Netcool/Impact, log
on to the GUI.

Procedure
1. Log on to the GUI.
2. Click the Operator Views tab.
3. Double-click the operator view to see the details, or right-click the operator view and click Edit.

Operator views
An operator view is a custom web-based tool that you use to view events and data in real time and to run
policies that are based on that data.
The simplest operator views present a basic display of event and business data. More complex operator
views can function as individual GUIs that you use to view and interact with event and business data
in a wide variety of ways. You can use this kind of GUI to extensively customize an implementation of
Netcool/Impact products and other Tivoli Monitoring applications.
Management and updating of operator view components is done in the GUI Server. Where the
documentation refers to $IMPACT_HOME/opview/displays, this means the GUI Server
installation in a split installation environment.
Typically, you create operator views to:
• Accept incoming event data from Netcool/OMNIbus or another application.
• Run a policy that correlates the event data with business data that is stored in your environment.
• Display the correlated business data to a user.
• Run one or more policies that are based on the event or business data.
• Start another operator view that is based on the event or business data.
One common way to use an operator view is to configure it to be started from within the Netcool/
OMNIbus event list. Netcool/Impact operators can view related business data for an event by right-
clicking the event in the event list and viewing the data as displayed in the view. The business data
might include service, system, or device information that is related to the event, or contact information for
administrators and customers that are affected by it.
Operator views are not limited to use as Netcool/OMNIbus tools. You can use the operator view feature to
create a wide variety of tools that display event and business data to users.

Operator view types


Basic and advanced operator views are supported.
• Basic operator views that you use to display data in a preformatted web page. For more information
about basic operator views, see “Basic operator views” on page 176.
• Advanced operator views that you use to display data using any HTML formatting that you choose. For
more information about advanced operator views, see “Advanced operator views” on page 176.

© Copyright IBM Corp. 2006, 2023 175


Basic operator views
You use basic operator views to view real-time data in a preformatted web page and to run policies based
on that data.
A basic operator view has the following display elements:
Name and Layout
Displays incoming event information from Netcool/OMNIbus or information from another application
that can be expressed in name/value pairs.
Actions panel
You use it to run one or more policies from within the operator view.
Information groups panel
Displays sets of data that you define when you create the view, or when you manually edit the
operator view policy.
You create basic operator views using the GUI. The GUI automatically creates the corresponding display
page and operator view policy.
If you need to customize the appearance of the view or the type of information displayed in the
information group panel, you can manually edit the display page using a text editor. You can edit the
operator view policy using the GUI.

Advanced operator views


You use advanced operator views to view real-time data in an HTML-formatted web page and to run
policies based on that data.
Unlike basic operator views, which must use the provided preformatted page design, advanced operator
views have no restrictions on the appearance of the resulting web page.
You can use any type of HTML formatting to specify how an advanced operator view is displayed and you
can display data in an advanced view in any format that is viewable using a web browser. You can also
further customize advanced operator views using cascading styles sheets (CSS) and browser scripting
languages.
For detailed information about how to create and view advanced operator views, see the Operator View
Guide.

Operator views panel controls


An overview of the icons and controls used in the operator view panel.

Table 112. Operator views panel controls

Control Description

Click this icon to create a basic operator view.

Select an operator view and use this icon to edit it. Alternatively, you can edit an operator
view by right-clicking its name and selecting Edit in the menu.

Click this icon to view the operator view display for the selected operator view. Alternatively,
right-click an operator view and select View.

Select an operator view from the list and click this icon to delete it. Alternatively, right-click
an operator view and select Delete.



Layout options
When you create a basic operator view using the GUI, you can use the layout options and the associated
preview feature to specify how different parts of the tool are arranged on the resulting web page.
The following table shows the display panels in a basic operator view:

Table 113. Operator view display panels

Display Panel Description

Event panel Displays information, if any, passed from Netcool/OMNIbus or another
application to the operator view. This information can be fields in a Netcool/
OMNIbus event, or any other information that can be expressed as a set of
name/value pairs.
You can configure the layout so that the event panel is displayed on the top or
the bottom of the operator view, or not at all.

Action panel Contains a list of policies associated with this view. You can configure the layout
so that the action panel is displayed on the top, the bottom, the left or the right
of the display, or not at all.

Information group Displays sets of information retrieved from data types. This data is often
panel business data that is related to event information passed to the view from
Netcool/OMNIbus or another application.

Action panel policies


You can use the action panel editor in the GUI to specify one or more policies that are displayed in the
action panel of a basic operator view.
The action panel presents a list of policies that the user can start from within the view. This is an optional
part of the operator view function. You use the action panel to start policies only, you cannot use it to
display data that is returned by a policy. An advanced operator view, however, does provide the capability
to display this data.

Information groups
An information group is a set of dynamic data that is displayed when you open the view.
This is often business data that is related to event information that is passed to the view from Netcool/
OMNIbus or another application. The data that is displayed in an information group is obtained by a query
to a data source either by filter or by key.
When you create a basic operator view using the GUI, you can specify one or more information groups
that are to be displayed by the view.
The following table shows the properties that you specify when you create an information group:

Table 114. Information group configuration properties

Property Description

Group name Unique name that identifies the information group.




Type Type of query to a data source. Available options are:

• By Key: Key expression that specifies which data to retrieve from the data type.
• By Filter: SQL filter string that specifies which data to retrieve from the data type.
The filter syntax is similar to the contents of the WHERE clause in an SQL SELECT
statement.

Data type Data type that contains the data that you want to display.

Value The filter or key expression that selects the data to retrieve.

Style Layout style for data items in the resulting information group. Options are Tabbed
and Table.

You can customize the information that is displayed in the information groups by editing the operator view
policy.
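For example, assuming a data type backed by OMNIbus-style alert fields, a By Filter information group might use a value such as the following (the field names and values here are illustrative only):

```
Node = 'router1' AND Severity >= 4
```

As with a WHERE clause, the expression selects only the rows of the data type that match it.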

Creating and viewing a basic operator view


Complete the following steps to create, edit, view, and delete basic operator views.

About this task

Procedure
1. Log on to the GUI.
2. Click the Operator Views tab.
3. Click the New Operator View icon to open the New Operator View.
4. In the Operator View Name field, enter a unique name for the operator view. You cannot edit the
name once the operator view is saved.
5. In the Layout Options area, specify the position of the event panel and action panel in the operator
view. You can preview the appearance of the operator view by using the images available in the
Preview area.
6. Click the Action Panel link, select one or more action policies that the user can open from within the
operator view.
7. Click the Information Groups link. Use the following steps to create one or more information groups:
a) Click the New Information Group icon to insert a new row into the information groups table.
b) In the Group Name field, type a unique name for the group.
c) From the Type list, select By Filter or By Key to specify whether the information group retrieves
data from a data type by filter or by key.
d) From the Data Type list, select the data type that contains the information you want to view.
e) In the Value field, enter a filter or key expression. If the Type is By Filter, the value is optional.
If the Type is By Key, the value is mandatory.
f) In the Style list, select Tabbed or Table to specify how the operator view shows the resulting data.
g) Press Enter on your keyboard to confirm the value that you are adding to the information group (or
press Escape on your keyboard to cancel the edit).
h) Repeat these steps to create multiple information groups for any operator view.
i) To edit an information group, click the item that you want to edit and change the value.



j) To delete one or more information groups, select the rows by using the Ctrl and Shift keys on the
keyboard, then click Delete.
• To reorder rows, select a row or multiple rows to activate the Move Up and Move Down
arrows on the toolbar. Click the required icon to move the rows up or down by one row.
8. Click the Save icon on the main editor toolbar to implement the changes.
• To edit an operator view, double-click the operator view, or click the Edit Operator View icon.
Modify the operator view configuration properties. You cannot modify the Operator View Name.
• To view an operator view page, select the operator view, then right-click and select View to open
the operator view in a new window. In the new window, copy the URL from the browser URL field
and paste it into another browser if required.
For more information about alternative methods of viewing operator views, see the section
Managing Operator Views on IBM Documentation.
• To delete an operator view, select the operator view and click the Delete icon on the toolbar, or
right-click the operator view and click Delete.



Chapter 11. Configuring Event Isolation and
Correlation
Event Isolation and Correlation is provided as an additional component of the Netcool/Impact product.
Event Isolation and Correlation is developed using the operator view technology in Netcool/Impact. You
can set up Event Isolation and Correlation to isolate an event that has caused a problem. You can also
view the events dependent on the isolated event.

Overview
Netcool/Impact has a predefined project, EventIsolationAndCorrelation that contains predefined data
sources, data types, policies, and operator views. When all the required databases and schemas are
installed and configured, you must set up the data sources. Then, you can create the event rules by using
the ObjectServer SQL in the Event Isolation and Correlation configuration view in the UI. You can view
the event analysis in the operator view, EIC_Analyze. You can also view the output in the topology widget
dashboard in the Dashboard Applications Services Hub.
Complete the following steps to set up and run the Event Isolation and Correlation feature.
1. Install Netcool/Impact.
2. Install DB2 or use an existing DB2 installation.
3. Configure the DB2 database with the DB2 schema.
4. Install the Discovery Library Toolkit with the setup-dltoolkit-<platform>_64.bin installation
image that is available in the directory IMPACT_INSTALL_IMAGE/<platform>.
If you already have a Tivoli® Application Dependency Discovery Manager (TADDM) installation,
configure the Discovery Library Toolkit to consume the relationship data from TADDM. You
can also consume the data through the loading of Identity Markup Language (IdML) books.
For more information about the discovery library toolkit, see the Tivoli Business Service
Manager Administrator's Guide and the Tivoli Business Service Manager Customization Guide.
The guides are available in the Tivoli Business Service Manager 6.1.1 documentation, available
from the following URL, https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/
wiki/Tivoli%20Documentation%20Central.
You can load a customized namespace or your own model into the Services Component Registry (SCR).
This model can be used for application topology-based event correlation. For more information, see the
Tivoli Business Service Manager Customization Guide, Customizing the import process of the Service
Component Repository, Service Component Repository API overview.
5. In the GUI, configure the data sources and data types in the EventIsolationAndCorrelation project to
use with the Impact Server.
6. Create the event rules in the UI to connect to the Impact Server.
7. Configure WebGUI to add a new launchpoint or configure a topology widget to visualize the results.
Tip: When you use Event Isolation and Correlation, the Event Isolation and Correlation events must
have a BSM identity value in field BSM_Identity. If the field does not have a value, you must enter
it manually or create it using the event enrichment feature by using the EIC_EventEnrichment policy
and EIC_EventEnrichment service in the EventIsolationAndCorrelation project. You might also want to
update the event reader Filter Expression in the Event Mapping tab according to your requirements.
General information about navigating Event Isolation and Correlation is in the online help. Additional
detailed information about setting up and configuring Event Isolation and Correlation is in the Netcool/
Impact Solutions Guide.

© Copyright IBM Corp. 2006, 2023 181


Event Isolation and Correlation policies
The EventIsolationAndCorrelation project has a list of predefined policies that are specific to Event
Isolation and Correlation.
The following policies in the EventIsolationAndCorrelation project support the Event Isolation and
Correlation feature and must not be modified:
• EIC_ActionExecutionExamplePolicy
• EIC_ActionExecutionExamplePolicyJS
• EIC_EventEnrichment
• EIC_IsolateAndCorrelate
• EIC_PrimaryEvents
• EIC_ResourcesTopology
• EIC_TopologyVisualization
• EIC_UtilsJS
• EIC_eventrule_config
• EIC_utils
• Opview_EIC_Analyze
• Opview_EIC_confSubmit
• Opview_EIC_configure
• Opview_EIC_requestHandler

Event Isolation and Correlation operator views


The EventIsolationAndCorrelation project has a list of predefined operator views that are specific to
Event Isolation and Correlation.
• EIC_Analyze shows the analysis of an event query.
• EIC_confSubmit supports the configuration of Event Isolation and Correlation.
• EIC_configure configures the event rules for Event Isolation and Correlation.
• EIC_requestHandler supports the configuration of Event Isolation and Correlation.

Configuring Event Isolation and Correlation data sources


All the Event Isolation and Correlation-related features are associated with the project,
EventIsolationAndCorrelation. Configure the necessary data sources, data types, and data items for
the event isolation and correlation.

Procedure
1. In the GUI, click Data Model.
2. From the project list, select the project EventIsolationAndCorrelation.
A list of data sources specific to the EventIsolationAndCorrelation feature is displayed.
• EIC_alertsdb
• SCR_DB
• EventRulesDB
3. For each data source, update the connection information, user ID, and password and save it.
4. Configure EIC_alertsdb to the object server where the events are to be correlated and isolated.
5. Configure SCR_DB to the Services Component Registry database. When you create the SCR schema,
the following tables are created: EIC_ACTIONS and EIC_RULERESOURCE.



Note: When you configure the Services Component Registry (SCR) data sources, you must point the
data sources to what is commonly called the SCR. The SCR is a schema within the TBSM database
that is created when you run the DB2 schema configuration step. The schema is called TBSMSCR. The
database has a default name of TBSM.
6. You must manually add the tables EIC_ACTIONS and EIC_RULERESOURCE to the Services
Component Registry.
a) Use the following SQL commands to create the tables in your DB2 Services Component Registry
database.
--EIC_ACTIONS
CREATE TABLE EVENTRULES.EIC_ACTIONS (
    RULENAME    VARCHAR(64),
    ACTIONNAME  VARCHAR(100),
    POLICYNAME  VARCHAR(64),
    AUTOEXECUTE CHAR(5) NOT NULL,
    CONSTRAINT auto_true_false CHECK (AUTOEXECUTE IN ('true','false')));

--EIC_RULERESOURCE
CREATE TABLE EVENTRULES.EIC_RULERESOURCE (
    RULENAME  VARCHAR(65) NOT NULL,
    SERIAL    INTEGER,
    Resources CLOB);

7. Configure the EventRulesDB data source to connect to the Services Component Registry database.

Configuring Event Isolation and Correlation data types


The EventIsolationAndCorrelation project has a list of predefined data types that are specific to
Event Isolation and Correlation. Except for the data type EIC_alertquery, which you must configure,
the remaining data types are preconfigured and operate correctly once the parent data sources are
configured.

About this task


The following list shows the Event Isolation and Correlation data sources and their data types:
• EIC_alertsdb
– EIC_alertquery
– EIC_TopologyVisualization
• SCR_DB
The following data types are used to retrieve relationship information from the Services Component
Registry.
– bsmidenties
– getDependents
– getRscInfo
• EventRulesDB
The following data types contain the user configuration for Event Isolation and Correlation:
– EIC_RulesAction
– EIC_RuleResources
– EVENTRULES
– EIC_PARAMETERS

Procedure
1. To configure the EIC_alertquery data type, right-click on the data type and select Edit.
2. The Data Type Name and Data Source Name are prepopulated.



3. The State check box is automatically selected as Enabled to activate the data type so that it is
available for use in policies.
4. Base Table:
Specifies the underlying database and table where the data in the data type is stored.
5. Click Refresh to populate the table.
The table columns are displayed as fields in a table. To make database access as efficient as possible,
delete any fields that are not used in policies. For information about adding and removing fields from
the data type, see the SQL data type configuration window - Table Description tab in the Online help.
6. Click Save to implement the changes.

Creating, editing, and deleting event rules


How to create, edit, and delete an event rule for Event Isolation and Correlation.

Procedure
1. Select Event Isolation and Correlation to open the Event Isolation and Correlation tab.
2. Click the Create New Rule icon to create an event rule. When you create this item, the configuration
page has empty values for the various properties.
3. Click the Edit the Selected Rule icon to edit the existing event rules.
4. Click the Delete the Selected Rule icon to delete an event rule from the system and the list.

Creating an event rule


Complete the following fields to create an event rule.

Procedure
1. Event Rule Name: Specify the event rule name. The event rule name must be unique across this
system.
When you select Edit or New, if you specify an existing event rule name, the existing event rule is
updated. When you edit an event rule and change the event rule name, a new event rule is created
with the new name.
2. Primary Event: Enter the SQL to be run against the ObjectServer that is configured in the
EIC_alertsdb data source.
The primary event is the event that is selected for analysis.
The primary event filter is used to identify if the event that was selected for analysis has a rule
associated with it. The primary event filter is also used to identify the object in the Services
Component Registry database that has the event that is associated with it.
The object may or may not have dependent entities. During analysis, the event isolation and
correlation feature finds all the dependent entities and their associated events.
For example, suppose the primary event has three dependent or child entities, and each of these
entities has three events that are associated with it. In total, there are nine dependent events. Any of these secondary
events could be the cause of the primary event. This list of events is what is termed the list of
secondary events. The secondary event filter is used to isolate one or more of these events to be the
root cause of the issue.
3. Test SQL: Click Test SQL to test the SQL syntax that is specified in the primary event.
Modify the query so that only one row is returned. If there are multiple rows, you can still configure
the rule. However, during analysis only the first row from the query is used to do the analysis.
4. Secondary Events: The text area is for the SQL to identify the dependent events. When you specify
the dependent events, you can specify variables or parameters that can be substituted from the
primary event information. The variables are specified with the @ sign.



For example, if the variable name is dbname, it must be specified as @dbname@. An example is
Identifier = 'BusSys Level 1.2.4.4' and Serial = @ser@. The variables are replaced
during the analysis step. The information is retrieved from the primary event that is based on the
configuration in the parameters table and shows in the Variables Assignment section of the page.
5. Extract parameters: Click Extract Parameters to extract the variable names between the @ signs and
populate the parameter table.
When the variable information is extracted into the table, you can edit each column.
a) Select the field against which the regular expression is to be run to extract a substitution
value.
b) Enter the regular expression in the regular expression column. The regular expression follows the
IPL Syntax and is run by using the RExtract function.
c) When the regular expression is specified, click Refresh to validate the regular expression and
check that the correct value is extracted.
The table contains the parameters.
6. Click the Create a new Action icon to add an event-related policy to the event.
A list of policies that are associated with the event that are enabled for Event Isolation and
Correlation are displayed.
7. Select the Auto Execute Action check box to run the policy during the analysis.
When the analysis is complete, you can also run the action by selecting it.
8. Limit Analysis results to related configuration items in the Service Component Registry: Select
this check box if the analysis is to be limited to related configuration items only.
If the check box is not selected, the dependent query is returned.
9. Primary Event is a root cause event: Select this check box to indicate that the primary event is the
root cause event and that the rest of the events are symptom-only events.
10. Event Field: Identifies the field in the event that contains the resource identifier in the Services
Component Registry. Select the field from the drop-down menu that holds the resource identifier in
the event.
11. Time window in seconds to correlate events: Add the time period the event is to analyze. The
default value is 600 seconds. The events that occurred 600 seconds before the primary event are
analyzed.
12. Click Save Configuration to add the configuration to the backend database.
13. The event rules are now configured and the event is ready to be analyzed. You can view the event
analysis in the EIC_Analyze page or in the topology widget in the Dashboard Applications Services
Hub.
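The parameter extraction and substitution in steps 4 and 5 can be sketched in Python. This is an illustrative model only, not the product's implementation: Netcool/Impact performs the extraction internally and applies regular expressions with the IPL RExtract function, while this sketch simply replaces each @name@ variable with the corresponding primary-event field value:

```python
import re

def extract_parameters(secondary_sql):
    """Find the @name@ variables in a secondary-events SQL template."""
    return re.findall(r"@(\w+)@", secondary_sql)

def substitute_parameters(secondary_sql, primary_event):
    """Replace each @name@ variable with the value from the primary event."""
    def lookup(match):
        return str(primary_event[match.group(1)])
    return re.sub(r"@(\w+)@", lookup, secondary_sql)

# The template from the example in step 4
template = "Identifier = 'BusSys Level 1.2.4.4' and Serial = @ser@"

print(extract_parameters(template))                  # ['ser']
print(substitute_parameters(template, {"ser": 101}))
```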

Configuring WebGUI to add a new launch point


Configure the WebGUI with a launch out context to launch the analysis page.

About this task


WebGUI can be configured to launch the analysis page. Refer to the procedure for launch out
integration described in the following URL, http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/
com.ibm.netcool_OMNIbus.doc_7.4.0/webtop/wip/task/web_con_integrating.html.
The URL you need for Event Isolation and Correlation is <IMPACTHOSTNAME>:<IMPACTPORT>/opview/
displays/NCICLUSTER-EIC_Analyze.html. Pass the serial number of the selected row for the event.
Note: NCICLUSTER is the name of the cluster configured during the installation of Netcool/Impact. You
must use the name of your cluster whatever it is, in the URL. For example, in Tivoli Business Service
Manager the default cluster name is TBSMCLUSTER. To launch from Tivoli Business Service Manager, you
would need to use the following html file, TBSMCLUSTER-EIC_Analyze.html.



Launching the Event Isolation and Correlation analysis page
How to launch the Event Isolation and Correlation analysis page.

About this task


You can launch the Event Isolation and Correlation analysis page in the following ways:
• Manually by using the webpage and Event Serial number.
• Using the launch out functionality on Active Event List (AEL) or Lightweight Event List (LEL) from
WebGUI.
• Using a topology widget.

Procedure
Open a browser on Netcool/Impact. Use one of the following options:
• Point to <Impact_Home>:<Impact_Port>/opview/displays/NCICLUSTER-
EIC_Analyze.html?serialNum=<EventSerialNumber>. Where <Impact_Home> and
<Impact_Port> are the Netcool/Impact GUI Server and port and EventSerialNumber is the serial
number of the event you want to analyze. To launch the analysis page outside of the AEL (Active Event
List), you can add serialNum=<Serial Number> as the parameter.
• The Event Isolation and Correlation analysis page can be configured to launch from the Active Event
List (AEL) or LEL (Lightweight Event List) within WebGUI. For more information see, “Configuring
WebGUI to add a new launch point” on page 185. When you create the tool you have to specify only
<Impact_Home>:port/opview/displays/NCICLUSTER-EIC_Analyze.html. You do not have to
specify SerialNum as the parameter; the parameter is added by the AEL tool.

Viewing the Event Analysis


View the analysis of an Event query in the EIC_Analyze page. You can also view the results in a topology
widget.

About this task


The input for the EIC_IsolateAndCorrelate policy is the serial number of the event through the serialNum
variable. The policy looks up the primary event to retrieve the resource identifier. The policy then looks
up the dependent events based on the configuration. The dependent events are further filtered using the
related resources, if the user has chosen to limit the analysis to the related resources. Once the serial
number has been passed as the parameter in WebGUI, you can view the event from the AEL or LEL and
launch the Analyze page.

Procedure
Select the event from the AEL or LEL and launch the Analyze page. The EIC_Analyze page contains three
sections:
• Primary Event Information: shows the information on the selected event. This is the event on which
the event isolation and correlation analysis takes place.
• Correlated Events: shows information about the dependent events identified by the tool. Dependent
events are identified as the events that are associated with the dependent child resources of the
device or object that is associated with the primary event. These events are displayed in the context of
dependent resources that were identified from the Services Component Registry.
• Event Rule Processed: shows the rule which was identified and processed when this primary event was
analyzed.



Chapter 12. Working with reports
The reports provide information about your network and network operators and help you to assess the
efficiency of your configuration.

Accessing reports
Use this procedure to access the reports.

About this task


For reporting data to be available, you need to enable the following options on the Policy logger service:
Policy Profiling and Collect Reports. See “Configuring the Policy logger service” on page 147.

Procedure
1. Click Reports to open the Reports tab.
2. Select the report you want to run, the tab for the specified report opens.
The following reports are available:
• Policy Efficiency Report
• Policy Error Report
• Operator Efficiency Report
• Node Efficiency Report
• Action Error Report
• Action Efficiency Report
• Impact ROI Efficiency Report
• Impact Profile Report
3. In the tab menu, select the date and time ranges. Select the view option you want, either Chart View
or Tabular View, then run the report. The time range displays in local time. For more information, see
“Viewing Reports” on page 187 and “Reports toolbar” on page 188.

Viewing Reports
The reports present their data in graphical and tabular format. Use the chart view tab and the tabular view
tab to switch between these two formats.

Chart view
The chart view presents the report data in graphical format. The legend shows the color code for each
action. The descending order in the legend reflects the order from left to right in the chart.

Tabular view
The tabular view presents the report data in a table. To get more detail for a particular row of the
table, select the row, then click the DrillDown icon on the toolbar above the table. The table refreshes
automatically and loads the information for the row. To return to the main report view click the Drillup
arrow icon on the toolbar.
If you are viewing a multi-page report, use the Page and Row controls at the bottom of the table. In the
Page field, click the arrows to get to the page you want to view. In the Row field, use the arrows to adjust
the number of rows that display per page. The minimum number of rows is three and the maximum is 50
per page. The total number of rows that display on a page is shown on the lower right corner of the table.



Multi-page reports have Previous and Next links so that you can move from page to page. You can also
click the individual page numbers to move to specific pages.
Note: When viewing the contents of reports, Netcool/Impact loads the data from the Apache
Derby database as long as the number of items is within the threshold limit. The default
threshold limit is 10000. The threshold limit is set in $IMPACT_HOME/etc/server.props using
the property, impact.dataitems.threshold. To view data exceeding the threshold limit, the
impact.dataitems.threshold property would need to be modified and the server restarted. Note
that the higher the value is set, the more memory is consumed.
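For example, to raise the threshold to 20,000 items, the property in $IMPACT_HOME/etc/server.props would be set as follows (the value shown is illustrative; restart the server after changing it):

```
impact.dataitems.threshold=20000
```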

Reports toolbar
You use the report toolbar to perform a basic report configuration.
Some toolbar controls, for example, the report time selection fields or the refresh report
icon, can be found in all reports. Other controls can be found only in specific reports.

Selecting the time range


Use the Start and End fields in the report toolbar to configure the date and time range for a report. Click a
field to activate the date and time menus. The time range displays in local time.
The default value of the "Start" control is one month prior to the current date and time. The time can be
set at 15-minute intervals.

Report icons
This table explains the function of the icons that you can find in the reports.

Table 115. Report icons

Icon Description

Click to refresh the report data after changing the report parameters.

Only in the Impact Profile report and Impact ROI Efficiency report. Open a
window that you can use to change the report parameters. In the Impact ROI
Efficiency report, when you click the icon you have two options, configure policy
and report mapping and configure business process.

Clear all Impact Profile Report data. You can find this icon only in the Impact
Profile report.

Click this arrow to generate a report.

Stop collecting data for this report. This icon can be found only in the Impact
Profile report.

In the report tabular view, you can drill down to view more detailed information
about a row, by selecting a row, and then clicking this icon. This icon is only
enabled after you select a table row.

Click this icon to return to the main table view of a report, after you drill down for
more detail.



Policy Efficiency report
The Impact Policy Efficiency Report records historical information about the performance of all your
policies.
Each time a policy runs, the time taken to run it is recorded. When reporting is switched on, you can see a
table of all policies and the average execution time and count for each one.
The chart view shows the average time in seconds each policy took to run.
The Tabular View tab shows the policy name, the average time it took in seconds to run it, and how many
times it ran in the specified time range.
The detail view shows the following information:
• Policy
• Execution time
To use this report, enable reporting in the Policy Logger service configuration window. See “Configuring
the Policy logger service” on page 147

Policy Error report


The Policy Error Report gives you a list of the policies that generated errors along with how many times
each policy was run.
The chart view shows the error count for each policy.
The tabular view shows the failure count for each policy within the specified date range.
This detail view shows the following detail:
• The policy name
• The times the policy executed
• The error message generated.
To use this report, enable reporting in the Policy Logger service configuration window. See “Configuring
the Policy logger service” on page 147

Operator Efficiency report


The Operator Efficiency reporting tool records how quickly operators respond to events.
For each operator, the report records the following information:
• Operator name
• The average time between when the event first occurs and the operator acknowledgment of the event
The chart view displays the average acknowledgment time for each operator.
The tabular view shows the following information:
• Operator name
• Average event Acknowledgment time in seconds
• Acknowledgment count
The detail view shows the following information for each operator:
• The operator name
• Each unique event
• The entry in the event list Summary field
• Acknowledgment count
• The severity level assigned to the event



Node Efficiency report
The Node Efficiency Report records the number of alerts generated by a node.
The chart view shows the unique event count for each node.
The tabular view shows the node name and the unique event count.
The detail view shows the following information for each node:
• Node name
• Severity level
• Information recorded in the Objectserver Summary field
• Location of the node
• Whether the event has been acknowledged
• Unique Event Name
To use this report, enable reporting in the Event Reader Service Configuration window. See “OMNIbus
event reader service General Settings tab” on page 165

Action Error report


The Action Error report shows the actions that generated errors each time a policy executed.
The report shows you how many action errors occurred in Netcool/Impact over a time period that you
selected.
The Chart View reports how many times each action failed.
The Tabular View tab contains a table that shows the number of errors for each action.
You can drill down to see the policies where the errors occurred, the time the errors occurred, and the
error messages that resulted.
The detailed view shows the following details:
• Type of action
• Policy it belongs to
• Time the policy executed
• Error message it generated.
To use this report, enable reporting in the Policy Logger service configuration window. See “Configuring
the Policy logger service” on page 147.

Action Efficiency report


The Action Efficiency report shows the total number of actions that were processed over a selected time
range.
Using this report you can learn how many actions the Impact Server performed and which actions you are
using the most.
The chart view shows how many times each action has been performed for the Impact Server.
The tabular view contains a table that shows how many times an action was run and the average time it
took to process the action.
When you click on a row in the table, the detail view shows the action name, the name of the policy
executed, the time it took to process the action in seconds, and the time it was processed.
To use this report, enable reporting in the Policy Logger service configuration window. See “Configuring
the Policy logger service” on page 147.



Impact ROI Efficiency report
The Impact Return on Investment (ROI) Report shows operator time saved as a result of a Netcool/
Impact deployment compared to the time it would take an operator to solve the identical problem
manually.
The default manual times, provided with the Netcool/Impact installation, are calculated from industry
statistics for common tasks. You associate the relevant policies with these calculations before you turn
on report data collection in the Policy Logger service. In order for each calculation to work, you must
associate at least one policy with it. The saved time is based on how many times the corresponding
policies are executed against the manual process time of the ROI business process during a specified
period of time. After you associate relevant policies with the calculations, you turn on Impact ROI
Efficiency Reporting.
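As a rough illustration of the saved-time calculation, assume it is simply the execution count multiplied by the difference between the manual process time and the automated time. This is an assumed simplification; the exact formula that the report uses is internal to Netcool/Impact:

```python
def roi_time_saved(execution_count, manual_time_s, automated_time_s=0.0):
    """Rough model: time saved = executions x (manual time - automated time).

    An assumed simplification for illustration, not the report's exact formula.
    """
    return execution_count * (manual_time_s - automated_time_s)

# A policy mapped to the "Open ticket" business process: assume a manual
# time of 300 seconds and 40 executions in the reporting period.
print(roi_time_saved(40, 300.0, 2.5))  # 11900.0 seconds saved
```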

Figure 1. Impact ROI Efficiency report

Important: Before you configure this report, enable report data collection in the Policy Logger service. For
more information, see “Configuring the Policy logger service” on page 147.

Report views
The chart view presents the report data in graphical format. The legend shows the color code for each
process. You can hover the mouse cursor over a process in the chart view to highlight it, and see the total
time saved in seconds after automating the process.
The tabular view shows the following details:
• The process time



• The time it would take an operator to perform the task manually
• The time saved in seconds by automating the process

Impact ROI Efficiency report business processes


A business process is an action that is typically performed manually by an operator.
The Impact ROI Efficiency report is installed with eight default business processes:
• Suppress devices in maintenance
• Suppress devices not provisioned
• Perform preliminary ping test
• Lookup affected circuits/services
• Lookup affected customers
• Open ticket
• Escalate
• Resolve
These business processes are provided as examples only. To use one of them, you need to associate it
with a relevant policy. You can also add your own business processes, as necessary.

Creating a sample Impact ROI Efficiency report


To configure your Impact ROI Efficiency report, you need to associate the relevant policies and business
processes.

Procedure
1. Select Reports to open the Reports tab.
2. Select the Impact ROI Efficiency Report.
3. Click the Configuration icon and select the Configure Business Process option, to add a business
process.



Figure 2. Configure Business Process window
4. In the editor toolbar, click the New Data Item icon.
a) Type or select an ID for the data item.
b) Type a name for the business process.
c) Type the manual time for this process.
5. Click OK and then repeat for each new process you want to add.
6. Click Configure and select Configure Policy and Report Mapping to associate the processes with a
policy.
a) Select the policy that you want to associate with a process.
b) Select the processes you want to map to from the Available Processes list.
c) Click Add to move them to the Assigned Processes list.
d) Optional: If you decide you do not want to associate a process to this policy, select it and click
Remove to move it back to the Available Processes list.
e) Click Apply and close the window.
7. Using the Time Range controls select the time period for which you want to run the report.
8. Click Run Report.
The configured report displays in the editor.



Figure 3. Impact ROI Efficiency Report: chart view

The legend on the left shows the color code for each process. The descending order of the legend
reflects the order from left to right in the chart.
9. Click the Tabular View tab.



Figure 4. Impact ROI Efficiency Report: tabular view
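As a rough illustration of what the report measures, the sketch below multiplies policy executions by the manual time configured for each associated business process. The data layout and field names (manual_time_minutes, executions) are hypothetical assumptions for clarity, not the product's actual schema or formula.

```python
# Hypothetical sketch: estimate manual time saved by automated business
# processes. Field names and structure are illustrative assumptions only.

def estimate_savings(processes, executions):
    """Sum the manual minutes saved: each policy run is assumed to save
    the manual time of every business process mapped to that policy."""
    total = 0
    for policy, runs in executions.items():
        for proc in processes.get(policy, []):
            total += runs * proc["manual_time_minutes"]
    return total

# Two example processes mapped to a hypothetical policy name.
processes = {
    "EscalatePolicy": [
        {"name": "Open ticket", "manual_time_minutes": 10},
        {"name": "Escalate", "manual_time_minutes": 5},
    ],
}
executions = {"EscalatePolicy": 12}  # the policy ran 12 times in the time range

print(estimate_savings(processes, executions))  # 12 * (10 + 5) = 180 minutes
```

A chart like Figure 3 would then plot one such total per process over the selected time range.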

Impact Profile report


The Impact Profile report provides information about the efficiency of the Impact Server.
The detailed view shows the following details about Netcool/Impact configuration:
• SQL query
• Policy that issued this query
• Type of action
• Data source queried
• Metric

Configuring Impact Profile report


Use this procedure to configure the Impact Profile report.

Procedure
1. Click Reports and select Impact Profile Report.
2. From the Impact Profile Report toolbar, click Open Configuration to open the Impact Profile Rules
Editor window.
Use this window to set the parameters for the report. For more information about the available
parameters, see “Impact Profile Report rules editor” on page 197.
3. Enable and start profiling by clicking the Start Profile Report icon.
When you enable and start the Impact Profiling Report, Netcool/Impact inserts profile data into the
Apache Derby database corresponding to operations that match the configured rules.

Attention: As data is inserted into the Derby database, the Impact Profile memory usage
increases accordingly. Depending on your maximum heap settings, this growth can cause the
server to run out of memory. The default maximum heap setting is 1200 MB. To prevent the
server from running out of memory, monitor the memory usage and adjust the maximum heap
limit accordingly. Also, consider periodically clearing the disk space in the Apache Derby
database. For more information, see the Troubleshooting section How to clear disk space when
reporting is enabled, which explains how to clean up disk space by using the
REPORT_PurgeData policy.
4. To stop collecting profiling data, click the Stop Profile Report icon. Netcool/Impact continues to
gather profiling data, even after a server restart, until you click this icon. Clicking the Stop Profile
Report icon is the only way to stop and disable the gathering of Impact Profile Report data.

Impact Profile Report data


The Impact Profile Report Rules editor lists all the queries you can use to generate an Impact Profile
Report.

Table 116. Impact Profile Report Parameters

Queries sent to same data source by same policy more than n times in n seconds
    Description: The number of "hotspot" queries sent to the same data source by the same policy more
    than a specified number of times in a specified number of seconds.
    Rule: SQL Query XinY Rules

Queries done more than n times in n seconds that are taking more than n milliseconds
    Description: Measures the number of queries made in a specified number of seconds that take more
    than a specified number of milliseconds.
    Rule: SQL Hotspot Rules

Queries made more than n times in n seconds that return more than n rows
    Description: Counts the number of queries made in a specified number of seconds that return more
    than a specified number of rows.
    Rule: SQL Hotspot Rules

Inserts into any types more than n times in n seconds that are taking more than n milliseconds
    Description: Measures the number of SQL inserts into any type of data type in a specified time window
    that take more than a specified number of milliseconds.
    Rule: SQL Hotspot Rules

Internal types written more than n times in n seconds
    Description: Measures the number of internal data types that are accessed more than a specified
    number of times in a specified number of seconds.
    Rule: Internal Type Rules

Same identifier updated by ReturnEvent more than n times in n seconds
    Description: Measures the number of return events that update events by using the same identifier as
    the source event.
    Rule: Return Event Rules

Same identifier inserted into the same ObjectServer that events are read from
    Description: Measures the number of new events that were sent to the ObjectServer that use the same
    identifier as the event that they were read from.
    Rule: Add Data Item Rules

JRExec calls done more than n times in n seconds that are taking more than n seconds
    Description: Measures the number of "troublesome" JRExec calls made more than a specified number
    of times in a specified time period.
    Rule: JRExecAction Rules

Hibernations built up in memory more than n (true/false)
    Description: Measures whether the number of hibernations that have built up in memory is more than
    a specified number over the lifetime of the server.
    Rule: Hibernation Rules
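Most of the rules above share one mechanic: count occurrences of an operation inside a sliding time window and fire when the count crosses a threshold. The sketch below illustrates that general technique only; it is an assumption made for clarity, not Netcool/Impact's internal implementation.

```python
from collections import deque

class XinYRule:
    """Illustrative 'X in Y' rule: fire when more than `count_threshold`
    occurrences happen within a sliding window of `window_seconds`."""

    def __init__(self, count_threshold, window_seconds):
        self.count_threshold = count_threshold
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def record(self, now):
        """Record one occurrence at time `now`; return True if the rule
        fires (count exceeds the threshold within the window)."""
        self.timestamps.append(now)
        # Evict occurrences that fell out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.count_threshold

rule = XinYRule(count_threshold=3, window_seconds=10)
fired = [rule.record(t) for t in [0, 2, 4, 6, 20]]
print(fired)  # [False, False, False, True, False]
```

The fourth occurrence (at t=6) is the first to exceed the threshold of 3 within 10 seconds; by t=20 the earlier occurrences have aged out of the window.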

Impact Profile Report rules editor


Use the following rules and settings to edit the Impact Profile queries.

Procedure
1. Select the rule in the form that is associated with the query you want to edit.
2. SQL Query XinY Rules
Use this option to change the settings for the following query:
Queries sent to same data source by same policy more than n times in n seconds
• Select the Count Threshold to set the number of SQL queries to be run.
• Select the Count Time Window to set the time window the measurement is to be based on.
3. SQL Hotspot Rules
Use this option to change the settings for the following queries:
Queries done more than n times in n seconds that are taking more than n milliseconds
Queries made more than n times in n seconds that return more than n rows
Inserts into any types more than n times in n seconds that are taking more than n milliseconds
• Select the Insert Execution Time Threshold to set the time threshold for the SQL inserts.
• Select the Query Execution Time Threshold to set the time threshold for query execution.
• Select the Query Return Row Threshold to set the threshold for the number of rows to be
returned.
• Select the Count Threshold to set the threshold for the number of SQL statements to be run.
• Select the Count Time Window to set the time window the measurement is to be based on.
4. JRExecAction Rules
Use this option to change the setting for the following query:
JRExec calls done more than n times in n seconds that are taking more than n seconds
• Select the Count Threshold to set the threshold for the number of JRExecActions to be run.
• Select the Execution Time Threshold to set the threshold for how long the JRExecActions take.
• Select the Time Window to set the time window the measurement is to be based on.
5. Internal Type Rules
Use this option to change the settings for the following query:
Internal types written more than n times in n seconds.
• Select the Count Threshold to set the number of times internal data types are written to.
• Select the Time Window to set the length of time the profile is based on.
6. ReturnEvent Rules
Use this option to change the settings for the following queries:



Same identifier updated by ReturnEvent more than n times in n seconds
Same identifier inserted into the same ObjectServer that Netcool/Impact reads events from
• Select the Count Threshold to set the count threshold for the number of returned events.
• Select the Time Window to set the length of time the profile is based on.
7. Hibernation Rules
Use this option to change the settings for the following query:
Hibernations built up in memory more than n (true/false)
• Select the Hibernation in Memory Threshold to set the number of hibernations to be held in
memory.
8. Click OK to accept the parameter changes. Click Refresh Report to update the parameters in the
Impact Profile Rules Editor.



Chapter 13. Configuring Maintenance Window
Management
Maintenance Window Management (MWM) is an add-on for managing Netcool/OMNIbus maintenance
windows.
MWM can be used with Netcool/OMNIbus versions 7.x and later. A maintenance time window is a
prescheduled period of downtime for a particular asset. Faults and alarms, also known as events, are
often generated by assets undergoing maintenance, but these events can be ignored by operations.
MWM creates maintenance time windows and ties them to Netcool/OMNIbus events that are based
on OMNIbus field values such as Node or Location. Netcool/Impact watches the Netcool/OMNIbus
event stream and puts these events into maintenance according to the maintenance time windows. The
Netcool/Impact MWMActivator service in the Services tab under the MWM project must be running
to use this feature. For more information about maintenance windows, see “About MWM maintenance
windows” on page 199.

About MWM maintenance windows


Use the Maintenance Window Management (MWM) web interface to create maintenance time windows
and associate them with Netcool/OMNIbus events.
Netcool/OMNIbus events are based on OMNIbus field values such as Node or Location. The Netcool/
OMNIbus events are then put into maintenance according to these maintenance time windows. If events
occur during a maintenance window, MWM flags them as being in maintenance by changing the value of
the OMNIbus integer field SuppressEscl to 6 in the alerts.status table.
A maintenance time window is prescheduled downtime for a particular asset. Faults and alarms (events)
are often generated by assets that are undergoing maintenance, but these events can be ignored by
operations. MWM tags OMNIbus events in maintenance so that operations know not to focus on them.
You can use MWM to enter one time and recurring maintenance time windows.
• One time windows are maintenance time windows that run once and do not recur. One Time Windows
can be used for emergency maintenance situations that fall outside regularly scheduled maintenance
periods. You can use them all the time if you do not have a regular maintenance schedule.
• Recurring time windows are maintenance time windows that occur at regular intervals. MWM supports
three types of recurring time windows:
– Recurring Day of Week
– Recurring Date of Month
– Every nth Weekday
• Maintenance time windows must be linked to OMNIbus events in order for MWM to mark events as
being in maintenance. When you configure a time window, you also define which events are to be
associated with the time window. The MWM supports the use of all Netcool/OMNIbus fields for linking
events to time windows.
Note: The Time Zones that can be selected in the Time Zone drop-down list are a mixture of daylight
saving time zones and non-daylight saving time zones.

• Each maintenance window has a free-format text field that you can use to manually add any
additional comments or descriptive notes.
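The suppression behavior described above can be pictured with a short sketch. Only the SuppressEscl = 6 convention comes from the text; the window and event structures, and the equality-based matching, are simplified assumptions.

```python
# Illustrative sketch of MWM-style maintenance flagging. The window and
# event dictionaries are hypothetical; only "SuppressEscl = 6 marks an
# event as in maintenance" is taken from the documentation.

def flag_in_maintenance(events, windows, now):
    """Set SuppressEscl to 6 on each event whose fields match the filter
    of a maintenance window that is active at time `now`."""
    for event in events:
        for w in windows:
            active = w["start"] <= now <= w["end"]
            matches = all(event.get(f) == v for f, v in w["filter"].items())
            if active and matches:
                event["SuppressEscl"] = 6
    return events

windows = [{"start": 100, "end": 200, "filter": {"Node": "router1"}}]
events = [
    {"Node": "router1", "SuppressEscl": 0},
    {"Node": "router2", "SuppressEscl": 0},
]
flag_in_maintenance(events, windows, now=150)
print([e["SuppressEscl"] for e in events])  # [6, 0]
```

Events that arrive outside the window, or that do not match the filter, are left untouched for operations to handle normally.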

© Copyright IBM Corp. 2006, 2023 199


Logging on to Maintenance Window Management
How to access Maintenance Window Management (MWM).

Procedure
1. Click the Maintenance Window tab.
This page lists the instances of the different types of maintenance window.
2. Click the New Maintenance Window button to create a new window.

Creating a one time maintenance window


Create a one time maintenance time window for a particular asset.

Procedure
1. Click the New Maintenance Window button to create a new window.
2. For Type of Maintenance Window, select One Time.
3. Check that the Time Zone you want to use is selected.
4. Add the fields that you want to use in the filter to match events. For each field that you add, select the
operator from the list provided and assign a value to the field to be used in the filter.
Tip: For a like operator, there is no requirement for regular expressions. You can specify a substring
and select the like operator from MWM.
Tip: For the in operator, provide a space-separated list of strings that the field can match (for example,
server1.ibm.com server2.ibm.com server3.ibm.com). A maximum of 50 strings is allowed.
Note: Any field for which no value is provided is not included in the filter.
5. Click the calendar icons to select the Start Date and End Date for the maintenance time window.
6. Click the Save button to create the window.
7. Click the Back button to view the newly created window in the list of one time windows.
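The filter operators in step 4 behave roughly as the tips describe; the helper below is an illustrative sketch of that matching, not MWM's actual code.

```python
def field_matches(value, operator, expression):
    """Evaluate one filter field the way the tips above describe:
    'like' is a plain substring test (no regular expression needed), and
    'in' takes a space-separated list of up to 50 allowed strings.
    The '=' case and error handling are assumptions for completeness."""
    if operator == "=":
        return value == expression
    if operator == "like":
        return expression in value
    if operator == "in":
        allowed = expression.split()
        if len(allowed) > 50:
            raise ValueError("'in' accepts at most 50 strings")
        return value in allowed
    raise ValueError(f"unsupported operator: {operator}")

print(field_matches("server1.ibm.com", "like", "ibm"))                   # True
print(field_matches("server2.ibm.com", "in",
                    "server1.ibm.com server2.ibm.com server3.ibm.com"))  # True
```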

Creating a recurring maintenance window


Create a recurring maintenance time window for a particular asset.

Procedure
1. Click the New Maintenance Window button to create a new window.
2. For Type of Maintenance Window, select the type of recurring window that you want to configure: Day
of Week, Day of Month, or Nth Day of Week in Month.
3. Check that the Time Zone you want to use is selected.
4. Add the fields that you want to use in the filter to match events. For each field that you add, select the
operator from the list provided and assign a value to the field to be used in the filter.
Tip: For a like operator, there is no requirement for regular expressions. You can specify a substring
and select the like operator from MWM.
Tip: For the in operator, provide a space-separated list of strings that the field can match (for example,
server1.ibm.com server2.ibm.com server3.ibm.com). A maximum of 50 strings is allowed.
Note: Any field for which no value is provided is not included in the filter.
5. Provide the Start Time and End Time (hour, minute, second) for the maintenance window.
6. Provide the details specific to the chosen recurring type of window:



• Recurring Day of Week These windows occur every week on the same day and at the same time of
day. For example, you can set the window to every Saturday from 5 p.m. to 12 a.m. Or you can set
the window for multiple days such as Saturday, Sunday, and Monday from 5 p.m. to 12 a.m.
• Recurring Day of Month These windows occur every month on the same date at the same time of
day. For example, you can set the window to every month on the 15th from 7 a.m. to 8 a.m. Or you
can set the window for multiple months.
• Every nth Weekday These windows occur every month on the same day of the week at the same
time. For example, you can set the window to the first and third Saturday of the month from 5 p.m. to
12 a.m.
7. Click the Save button to create the window.
8. Click the Back button to view the newly created window in the list of windows.
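The recurring types above reduce to small date computations. For example, an Every nth Weekday window such as "the first and third Saturday" can be checked as follows; this is an illustrative sketch, not the product's implementation.

```python
from datetime import date

def nth_weekday_occurrence(d):
    """Return which occurrence of its weekday a date is within its month
    (1 = first, 2 = second, and so on)."""
    return (d.day - 1) // 7 + 1

def in_nth_weekday_window(d, weekday, occurrences):
    """True if date `d` falls on the given weekday (0=Monday .. 5=Saturday,
    6=Sunday) and on one of the listed occurrences, e.g. the 1st and 3rd."""
    return d.weekday() == weekday and nth_weekday_occurrence(d) in occurrences

# "First and third Saturday of the month", as in the example above:
print(in_nth_weekday_window(date(2023, 7, 1), 5, {1, 3}))   # True  (1st Saturday)
print(in_nth_weekday_window(date(2023, 7, 8), 5, {1, 3}))   # False (2nd Saturday)
print(in_nth_weekday_window(date(2023, 7, 15), 5, {1, 3}))  # True  (3rd Saturday)
```

A full window check would combine this date test with the start and end times and the configured time zone.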

Viewing maintenance windows


The main Maintenance Window page displays the full list of maintenance windows, grouped by window
type. You can use the Filter control to view the windows of a single window type (All Windows, One Time,
Day of Week, Day of Month, Nth Day of Week in Month, or Active).
In the One Time Window list, the color of the status icon indicates whether the window is active (green),
expired (purple), or future, that is, not yet started (blue).
In the other windows the color of the status icon indicates whether the window is active (green) or
inactive (orange).
You can sort the maintenance windows by column by clicking on a column header of the table. Clicking
again reverses the sort.
To edit an existing maintenance window, click the Edit icon for the window.
You can delete maintenance windows by selecting the check boxes next to the windows that you want to
delete and clicking Delete for that window type.

Chapter 14. Working with the configuration
documenter
You can use the Configuration Documenter to view the detailed information about the system
components in a Netcool/Impact installation.
• Status: Shows the name and the host where the primary and secondary servers are running, and
which server is the primary. You can also see information about the running services and the memory
status for the server.
• Data sources: each defined data source.
• Data types: All data types, including predefined data types such as Doc, Schedule, and LinkType;
user-defined internal and external data types; and report data types.
• Policies and policy content.
• Services details and their associated polices.

Viewing items in the configuration documenter


How to access the Configuration Documenter for a selected cluster.

Procedure
1. In the GUI, click the Help menu and select Web Documenter.
The configuration documenter opens in a new browser window. Use the links at the top of the page to
view information about cluster status, server status, data sources, data types, policies, and services.
2. Click the Status link.
Depending on the status of the current server in the cluster, you can view the following information.
• The current server is the primary server
– The name and host where the primary server is running.
– The name and host of each secondary server.
• The current server is a secondary server
– The name and host where the primary server is running.
– Startup replication status, whether it was successful, and how long it took.
Important: Click the link in the secondary server name to open the documenter page for this server.
The Status link also shows the following information on servers.
Memory status
Shows the maximum heap size and the current heap size in MB that the Java virtual machine,
where Netcool/Impact is running, can use.
Event status
Shows the number of events available in the event queues for the various event-related services
like readers, listeners, and EventProcessor. It does not provide information about all the services
that are currently running, only the status for event-related services. For each of these services,
you can see from where the service is reading events. For example, for OMNIbusEventReader that
would include the name of the data source, whether events are being read from the primary, or
backup source of that data source, and additional connection-related information like the host,
port, and the user name that is used to connect to the data source.



Remember: In the case of the primary server, you can view the queue status for readers or
listeners and EventProcessor. For a secondary server, you can view only the queue status for
EventProcessor because the readers or listeners run only on the primary server.
3. Click Data Sources to view a list of defined data sources that show the data source names and data
source types.
a) Click the data source that you want to view.
The data source details list displays showing host, port, and database information.
4. Click Data Types.
a) Choose a data type from the data type list to view the following details about a data type.
• Name
• Display Name
• Data source name (for external data types). Click the data source name to display the connection
information.
• Configuration information for each of the fields in the data type:
– Field Name
– Display Name
– Key field
– Alias
– Default Expression
– Choices
– Access via UI data provider
• Dynamic links that are associated with the data type
b) To see the connection information for an external data type, click the data source name.
5. Click Policies, choose a policy from the Policy list to view the policy.
6. Click Services, choose a service from the Services list.
The service information that displays depends on the service. The following list shows standard
information about a service:
• Name
• Class Name
• Service Current Status (running or not running)
• Service In Auto Start Up Mode
• Log To File
• Policy To Execute
a) Select the associated policy link to see it displayed in the documenter.



Appendix A. Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the
products, services, or features discussed in this document in other countries. Consult your local IBM
representative for information on the products and services currently available in your area. Any reference
to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not give you any license to these patents. You can
send license inquiries, in writing, to:

IBM Director of Licensing


IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property
Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing


Legal and Intellectual Property Law
IBM Japan Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.
Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore,
this statement might not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.



Such information may be available, subject to appropriate terms and conditions, including in some cases
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided by
IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any
equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the
results obtained in other operating environments may vary significantly. Some measurements may have
been made on development-level systems and there is no guarantee that these measurements will be
the same on generally available systems. Furthermore, some measurement may have been estimated
through extrapolation. Actual results may vary. Users of this document should verify the applicable data
for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal without
notice, and represent goals and objectives only.
All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without
notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to change before the
products described become available.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform
for which the sample programs are written. These examples have not been thoroughly tested under
all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
Each copy or any portion of these sample programs or any derivative work, must include a copyright
notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. ©
Copyright IBM Corp. _enter the year or years_. All rights reserved.
If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
“Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States, other countries, or both.



Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and service names might be trademarks of IBM or other companies.

Index

A D
absolute time ranges daily time ranges
adding 76 adding 75
accessibility x data caching 85
accessing reports 187 data items
Action Efficiency report 190 adding 99
Action Error report 190 deleting 100
action functions 119 editing 100
action panel overview 23
policies 177 viewing 99
add-ons data model
Maintenance Window Management 199, 201 components 11
auto-saved policy 113 task pane icons 12
automated project deployment 8 data models
setting up 11
data source 57, 60, 61
B data sources
basic operator view categories 13
action panel policies 177 CORBA Mediator DSA 64
creating 178 creating 16
deleting 178 DB2 26
editing 178 deleting 17
information groups 177 Direct Mediator DSA 64, 65
layout options 177 editing 17
books Flat File 30, 31
see publications ix GenericSQL 31
HSQLDB 33
Informix 35
C jdbc statement configuration 56
JMS 66
Cache Settings tab
LDAP 62
External Data Types editor 85
Mediator DSA 63–65
changing default font 127
MS_SQL 37
character encoding 1
MYSQL 40
clear version control file locking 9
ObjectServer 43
Composite data types 97
ODBC 45
configuration documenter
Oracle 47, 50, 52
opening 203
overview 13, 25
Configuring a linked field on a composite data type 98
PostgreSQL 52
configuring data sources 182
predefined 15
configuring data types 183
SNMP
conventions
v1 and v2 65
typeface xiii
SQL data source
CORBA Mediator DSA data sources 64
Informix 35
creating 107
SQL database 26
Creating an event rule 184
Sybase 54
Creating composite data types 97
testing connections to 17
Creating editing and deleting an event rule 184
user defined 14
creating linked fields 97
data type
creating RESTful DSA data sources 60, 61
LDAP 87
creating UI data provider data sources 57
Packed OID 90
creating UI data provider data types 86
performance statistics 69
Custom Fields tab
SNMP 90
internal data types editor 71
table 92
customer support xi
data type caching 85
data types

Index 209
data types (continued) Event Isolation and Correlation operator views 182
caching 70 Event Isolation and Correlation polices 182
caching types 70 event listener
categories 20 adding filters 137
configuring LDAP 87 service 137
configuring Packed OID SNMP 91 event mapping 137
configuring SQL 79 event mapping table 140
configuring SQL data types event readers
Table Description tab 80 configuration 163, 164
configuring table data types for SNMP r 93 external data type
deleting 21 editor 80
Doc 96 external data types
editing 21 configuring 73
external configuring SQL 79
configuring 73 editor 73, 83
deleting a table row 73 LDAP 87
Flat File 86 Mediator DSA 89
internal 19, 71 Pack OID SNMP 91
internal data types editor 71 table DirectMediator 93
Mediator DSA 89
overview 19
predefined
F
configuring time range groups 75 FailedEvent
time range groups and schedules overview 74 overview of data types 96
time range groups specifications and combinations viewing data items 96
74 failover
SNMP 90 configurations 25
SQL 79 filter
viewing 21 for event listener services 137
viewing performance statistics 69 filters
Datasourcelist analysis 141
createDatasourceList script 19 deleting 140
rebuildDatasourceList script 18 editing 140
DB2 data sources reordering 140
creating 26 fixes
DeployProject obtaining x
parameters 8 Flat File
DeployProject policy 8 creating data type 86
Derby data sources Flat File data sources
creating 28 creating data sources 30, 31
directory names functions
notation xiii action 119
Doc data types
adding a field 96
adding data items 96 G
dynamic links
GenericSQL data sources
creating 104
creating 31
deleting 106
GetByFilter output parameters 100
editing 106
Global project
link by key 105
editing and deleting items 6
link by policy 106
global repository
linking methods 103
adding and removing an item from 6
links by filter 104
clearing version control locking 9
overview 6
E viewing data 6
globalization 1
education x Graphical User Interface
email reader service 158 overview 1
environment variables GUI 1
notation xiii
event filter
configuration options 137 H
consolidating 138
hibernating policy activator
Event Isolation and Correlation 181–184

210 Netcool/Impact: User Interface Guide


hibernating policy activator (continued)
    configuration 146
Hibernation data types
    overview 96
HSQLDB data sources
    creating 33

I

Impact Profile report 195, 196
Impact ROI Efficiency report
    business processes 192
    scenario 192
ImpactDatabase Service 151
Informix data sources
    creating 35
input parameters 117
internal data types
    editor
        Custom Fields tab 71
IPL functions 119
ITNM DSA data type 78

J

JMS
    data source 66

K

key expressions 106

L

Launching the Event Isolation and Correlation analysis page 186
LDAP 50
LDAP data sources
    creating 62
LDAP External Data Type editor
    LDAP Info tab 88
LDAP external data types 87
Link by Key 105
Link Editor 107
links
    dynamic 103, 104
    overview 23
    static 103, 107
LinkType data items
    configuring 95
LinkType Data Type
    overview 95

M

main tabs 1
maintenance schedules 74
manuals
    see publications ix
Mediator DSA
    CORBA data sources 64
    data sources 63–65
    data types 89
    Direct Mediator data sources 64, 65
    SNMP data sources 64, 65
    viewing data types 89
MS-SQL Server data sources
    creating 37
multi-tenancy 58
multiple policy logs 150
MWM 199
MySQL data sources
    creating 40

N

navigation 1
navigation panel
    selecting clusters 2
nci_policy script 113
negative time range groups 74
Netcool/Impact components 1
Node Efficiency report 190
notation
    environment variables xiii
    path names xiii
    typeface xiii

O

OAuth 61
OAuth data source 61
ObjectServer data sources
    creating 43
ODBC data sources
    creating 45
online publications
    accessing ix
Operator Efficiency report 189
operator view
    advanced 176
    basic 176
    controls 176
    types 175
    viewing 175
operator view EIC_Analyze 186
operator views
    overview 175
Oracle data source
    connecting over LDAP 50
    creating 47, 50, 52
    integration with RAC cluster 52
ordering publications x
output parameters 117
override time range group 74
Overview 97, 181

P

Packed OID SNMP data types
    configuring 91
path names
    notation xiii
Performance Statistics report

Performance Statistics report (continued)
    for data types 69
personalizing 135
policies
    accessing 109
    deleting 109
    editing 109
    working with 109
Policies 112
policies overview 109
policy
    accessibility features 130
    auto saved 113
    DeployProject 8
    developing custom policy 110
    log files 149, 150
    optimizing policy 116
    predefined 128
    recovering 113
    syntax checking 115
    task pane icons 110
    uploading 127
    version control interface 127
    wizard 111
    wizards 110
    writing 110
policy activators
    configuration 172
policy editor
    personalizing 3
Policy Editor
    browsing data types 116
    changing default font 127
    optimizing policy 116
    run policy option 116
    run policy parameters 116
    setting input parameters 117
    setting output parameters 117
    toolbar controls 113
Policy Efficiency report 189
Policy Error report 189
policy input parameter
    attributes 117
policy logger
    configuration 146
policy syntax highlighter 115
positive time range groups 74
PostgreSQL data sources
    creating 52
predefined data items
    adding absolute time range groups 76
    adding daily time range groups 75
    adding weekly time range groups 76
predefined data types
    configuring time range groups 75
    Doc 96
    FailedEvent overview 96
    Hibernation 96
    Linktype 95
    LinkType data items
        configuring 95
    overview 20, 74
    schedules
        configuring schedules 77
    time range groups
        specifications and combinations 74
    time range groups and schedules overview 74
    viewing FailedEvent data items 96
predefined policy 128
problem determination and resolution xii
projects
    automated project deployment 8
    cluster 2
    components 5
    creating 7
    deleting 7
    DeployProject policy 8
    editing 7
    editing and removing 6
    overview 5
    working with 5
publications
    accessing online ix
    ordering x

Q

query caching 85

R

RAC Cluster Support 52
recovering
    auto-saved policy 113
report
    Impact Profile 195
reports
    Action efficiency 190
    Action Error 190
    Impact Profile 195, 196
    Impact ROI Efficiency 191
    navigating 187
    Node Efficiency 190
    Operator Efficiency 189
    Policy Efficiency 189
    Policy Error 189
    toolbar 188
    viewing 187
RESTful DSA 60
RESTful DSA data source 60, 61
run policy option 116

S

schedules
    configuring 77
    overview 77
selecting projects
    overview 2
Serial rollover 171
service
    command execution manager 141
    command line manager 142
    database event listener 142
    database event reader 153–155
    e-mail sender 143



service (continued)
    event listener 159
    event processor 144
    hibernating policy activator 145, 146
    ITNM event listener 150
    JMS message listener 160
    OMNIbus event listener 162
    OMNIbus event reader 163, 165, 166, 169, 170
    policy activator 172
    policy logger 146, 147
    self monitoring 151, 152
service log 136
service log viewer
    creating new tabs 137
    results 136
Service Status panel
    service icons 132
    status icons 132
services
    displaying log files 135
    e-mail reader 156
    list 133
    overview 131
    starting 135
    stopping 135
    working with 131
setting policy parameters 117
SNMP
    data sources 65
    v1 and v2 65
SNMP data sources 64, 65
SNMP data types
    configuring 90
    packed OID 91
    table 93
SNMP DSA
    data sources 25
SNMP v3 data sources 65
Software Support
    contacting xi
    overview x
    receiving weekly updates x
SQL data sources
    DB2 26
    Derby 28
    flat file 30, 31
    GenericSQL 31
    HSQLDB 33
    MS-SQL Server 37
    MySQL 40
    ObjectServer 43
    ODBC 45
    Oracle 47, 50, 52
    PostgreSQL 52
    Sybase 54
SQL data types
    adding a field to the table 83
    configuring 79, 80
    deleting a table row 73
    flat file 86
SQL database DSAs
    failover 25
    failover configurations 25
static links
    creating an Internal data type 107
    Link editor 107
Sybase data sources
    creating 54
Sybase data types
    Setting the Exclude this field option 83

T

Table Description tab
    SQL External Data Types editor 80
table OID SNMP data types
    configuring 93
time range groups
    absolute 76
    configuring 75
    daily 75
    specifications and combinations 74
    weekly 76
Tivoli Information Center ix
Tivoli technical training x
training
    Tivoli technical x
typeface conventions xiii
Typelist
    createTypeList script 23
    rebuildTypeList script 22
types browser
    browsing data types 116
    Policy Editor 116

U

UI data provider
    multi-tenancy 58
UI data provider data source 57
UI data provider data type 86, 100
UI data provider data types 86
UI data provider DSA 57, 86
user-defined services
    creating 131

V

variables
    notation for xiii
version control
    file locking 9
version control interface 127
Viewing Event Isolation and Correlation results 185, 186

W

WebGUI 185
weekly time ranges
    adding 76

IBM®
