User Interface Guide
IBM
Note
Before using this information and the product it supports, read the information in "Notices".
Edition notice
This edition applies to version 7.1.0.29 of IBM Tivoli Netcool®/Impact and to all subsequent releases and modifications
until otherwise indicated in new editions.
References in content to IBM products, software, programs, services or associated technologies do not imply that they
will be available in all countries in which IBM operates. Content, including any plans contained in content, may change
at any time at IBM's sole discretion, based on market opportunities or other factors, and is not intended to be a
commitment to future content, including product or feature availability, in any way. Statements regarding IBM's future
direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. Please
refer to the IBM Community terms of use for more information.
© Copyright International Business Machines Corporation 2006, 2023.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents
Testing data source connections......................................................................................................... 17
Datasourcelist file.................................................................................................................................17
Data types overview...................................................................................................................................19
Data type categories............................................................................................................................ 20
Predefined data types overview...........................................................................................................20
List of predefined data types............................................................................................................... 20
Viewing data types............................................................................................................................... 21
Editing data types.................................................................................................................................21
Deleting data types.............................................................................................................................. 21
Typelist file........................................................................................................................................... 21
Data items overview.................................................................................................................................. 23
Links overview............................................................................................................................................23
Data type caching types....................................................................................................................... 70
Creating internal data types...................................................................................................................... 71
Internal data type configuration window............................................................................................ 71
External data types.................................................................................................................................... 73
Deleting a field......................................................................................................................................73
List of predefined data types..................................................................................................................... 73
Predefined data types overview...........................................................................................................74
Time range groups and schedules....................................................................................................... 74
ITNM DSA data type............................................................................................................................. 78
SQL data types........................................................................................................................................... 79
Configuring SQL data types.................................................................................................................. 79
SQL data type configuration window - Table Description tab............................................................. 80
SQL data type configuration window - adding and editing fields in the table.................................... 83
SQL data type configuration window - Cache settings tab................................................................. 85
Creating flat file data types........................................................................................................................86
UI data provider data types....................................................................................................................... 86
Creating a UI data provider data type..................................................................................................86
LDAP data types.........................................................................................................................................87
Configuring LDAP data types................................................................................................................87
LDAP Info tab of the LDAP data type configuration window........................................................... 88
Mediator DSA data types........................................................................................................................... 89
Viewing Mediator DSA data types........................................................................................................89
SNMP data types........................................................................................................................................90
SNMP data types - configuration overview..........................................................................................90
Packed OID data types.........................................................................................................................90
Table data types................................................................................................................................... 92
LinkType data types................................................................................................................................... 95
Configuring LinkType data items..........................................................................................................95
Document data types.................................................................................................................................96
Adding new Doc data items................................................................................................................. 96
FailedEvent data types.............................................................................................................................. 96
Viewing FailedEvent data items........................................................................................................... 96
Hibernation data types.............................................................................................................................. 96
Working with composite data types.......................................................................................................... 97
Creating composite data types............................................................................................................ 97
Creating linked fields............................................................................................................................97
Configuring a linked field on a composite data type........................................................................... 98
Policies panel controls.............................................................................................................................110
Writing policies........................................................................................................................................ 110
Policy wizards.....................................................................................................................................110
Recovering automatically saved policies................................................................................................ 113
Working with the policy editor.................................................................................................................113
Policy editor toolbar controls.............................................................................................................113
Policy syntax checking....................................................................................................................... 115
Policy syntax highlighter.................................................................................................................... 115
Optimizing policies.............................................................................................................................116
Running policies with parameters in the editor................................................................................ 116
Browsing data types...........................................................................................................................116
Configuring policy settings in the policy editor................................................................................. 117
Adding functions to policy................................................................................................................. 119
List and overview of functions........................................................................................................... 119
Changing default font used in the policy editor................................................................................ 127
Using version control interface............................................................................................................... 127
Uploading policies................................................................................................................................... 127
Working with predefined policies............................................................................................................128
Accessibility Features..............................................................................................................................130
Configuring the ImpactDatabase service................................................................................................151
Self monitoring service............................................................................................................................ 151
Configuring the self monitoring service ............................................................................................ 152
Database event reader service................................................................................................................153
Configuring the database event reader service.................................................................................153
Database event reader configuration window - general settings..................................................... 154
Database event reader configuration window - event mapping....................................................... 155
Configuring number of rows in the database event reader select query......................................... 156
Email reader service................................................................................................................................ 156
Configuring the email reader service.................................................................................................156
Event listener service.............................................................................................................................. 159
Configuring the event listener service............................................................................................... 159
JMS message listener..............................................................................................................................160
JMS message listener service configuration properties................................................................... 160
OMNIbus event listener service.............................................................................................................. 162
Setting up the OMNIbus event listener service.................................................................................162
Configuring the OMNIbus event listener service...............................................................................162
OMNIbus event reader service................................................................................................................163
Configuring the OMNIbus event reader service................................................................................ 163
Creating a new OMNIbus event reader from the command line...................................................... 164
OMNIbus event reader service General Settings tab...................................................................... 165
OMNIbus event reader service Event Mapping tab...........................................................................166
OMNIbus Event Reader event locking examples.............................................................................. 169
Forcing checkpointing after a specified number of minutes.............................................................170
Handling Serial rollover......................................................................................................................171
Policy activator service............................................................................................................................172
Policy activator configuration............................................................................................................ 172
Configuring the policy activator service............................................................................................ 172
Operator Efficiency report....................................................................................................................... 189
Node Efficiency report............................................................................................................................. 190
Action Error report................................................................................................................................... 190
Action Efficiency report........................................................................................................................... 190
Impact ROI Efficiency report...................................................................................................................191
Impact ROI Efficiency report business processes............................................................................ 192
Creating a sample Impact ROI Efficiency report...............................................................................192
Impact Profile report............................................................................................................................... 195
Configuring Impact Profile report...................................................................................................... 195
Impact Profile Report data................................................................................................................ 196
Impact Profile Report rules editor..................................................................................................... 197
Index................................................................................................................ 209
About this publication
The Netcool/Impact User Interface Guide contains information about the user interface in Netcool/
Impact.
Intended audience
This publication is for users who use the Netcool/Impact user interface.
Publications
This section lists publications in the Netcool/Impact library and related documents. The section also
describes how to access Tivoli® publications online and how to order Tivoli publications.
Netcool/Impact library
• Administration Guide
Provides information about installing, running and monitoring the product.
• Policy Reference Guide
Contains complete description and reference information for the Impact Policy Language (IPL).
• DSA Reference Guide
Provides information about data source adaptors (DSAs).
• Operator View Guide
Provides information about creating operator views.
• Solutions Guide
Provides end-to-end information about using features of Netcool/Impact.
Accessibility
Accessibility features help users with a physical disability, such as restricted mobility or limited vision,
to use software products successfully. In this release, the Netcool/Impact console does not meet all the
accessibility requirements.
Obtaining fixes
A product fix might be available to resolve your problem. To determine which fixes are available for your
Tivoli software product, follow these steps:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/support.
2. Navigate to the Downloads page.
3. Follow the instructions to locate the fix you want to download.
4. If there is no Download heading for your product, supply a search term, error code, or APAR number in
the search field.
For more information about the types of fixes that are available, see the IBM Software Support Handbook
at http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html.
Submitting problems
You can submit your problem to IBM Software Support in one of two ways:
Online
Click Submit and track problems on the IBM Software Support site at http://www.ibm.com/software/
support/probsub.html. Type your information into the appropriate problem submission form.
By phone
For the phone number to call in your country, go to the contacts page of the IBM Software Support
Handbook at http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html and click the
name of your geographic region.
Typeface conventions
This publication uses the following typeface conventions:
Bold
• Lowercase commands and mixed case commands that are otherwise difficult to distinguish from
surrounding text
• Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons,
list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs,
property sheets), labels (such as Tip:, and Operating system considerations:)
• Keywords and parameters in text
Italic
• Citations (examples: titles of publications, diskettes, and CDs)
• Words defined in text (example: a nonswitched line is called a point-to-point line)
• Emphasis of words and letters (words as words example: "Use the word that to introduce a
restrictive clause."; letters as letters example: "The LUN address must start with the letter L.")
• New terms in text (except in a definition list): a view is a frame in a workspace that contains data.
• Variables and values you must provide: ... where myname represents....
Monospace
• Examples and code examples
• File names, programming keywords, and other elements that are difficult to distinguish from
surrounding text
• Message text and prompts addressed to the user
• Text that the user must type
• Values for arguments or command options
Globalization
Netcool/Impact does not support Unicode names for databases, tables, schemas, and columns in foreign
language data sources.
Procedure
1. Open your browser.
2. Select View > Encoding or View > Character Encoding, depending on which browser you are using.
3. Select Unicode (UTF-8).
Navigating Netcool/Impact
How to navigate to Netcool/Impact components.
When you log on, you see a number of tabs along the top of the UI. The Welcome tab provides
information to get you started with Netcool/Impact features.
All the Netcool/Impact components are found in the tabs at the top of the UI. Depending on the user
permissions that you are assigned, you have access to some or all of the following Netcool/Impact
components.
• Welcome
• Data Model
• Policies
• Services
• Operator View
• Event Isolation and Consolidation
• Maintenance Window
• Reports
Tip: You can select the Global project to view all the items in the selected tab.
Click the Reports tab to locate the following reports:
• Policy Efficiency Report
• Policy Error Report
• Operator Efficiency Report
• Node Efficiency Report
• Action Error Report
Procedure
1. Click Options from the main menu, then click Preferences to open the Preferences dialog box.
2. Select the options that you want to personalize from the tab options.
For example, click Policies and select the check box for each option that you want to enable.
• Select Show line number to view line numbers in the policy editor.
• Select Automatically Save Drafts (every 5 minutes) to save the policy automatically every 5
minutes while you are editing it.
• Set the Character limit for Syntax Highlighting. Changing this value requires a restart of the Policies page.
3. Click Save.
Projects overview
A project is a view of a subset of the elements stored in the global repository.
You can use projects to manage your policies and their associated elements. They help you to remember
which data types and services relate to each policy and how the policies relate to each other. Projects
also help to determine whether a policy, or its associated data types or services, is still in use or must be
deleted from the project.
Also, you can find policies and their associated data and services easily when they are organized by
project. You can add any previously created policies, data types, operator views, and services to as many
projects as you like. You can also remove these items when they are no longer needed in any project.
If you have not yet created any projects, the Default and Global projects, and any projects predefined by
Netcool/Impact, are the only projects listed in the Projects menu.
The Global project lists all items in the global repository. Any item that you create, for example a data
type, is automatically added to the Global project, even if it is not stored in the currently selected project.
The Default project is an example project. It works like any other project: you can add items to it, edit it, or delete it.
When you delete a project, the items that were assigned as project members remain in the global project
and as members of any other projects they were assigned to.
Important: You cannot edit or delete the Global project.
Project components
When you create a project, you can add any existing policies, data sources, data types, and services to it
as project members.
A project can consist of the following components:
• Policies
• Data sources that are set up for project data types
• Data types that are associated with the policies
• Operator views that are related to the policies
• Services
Important: When you are naming projects, data sources, data types, policies, and services, you
cannot use dot notation ".".
Global repository
The global repository is the storage area for all the policies, data sources, data types, operator views, and
services for the cluster that you are connected to.
When you create an item on the Data Model, Policies, Services or Operator View tabs, the items are
automatically added to the global repository.
You add new policies and their associated data and services to the global repository, just as you would
to a project, but they are independent of any projects. You can attach added items to projects as project
members at any time.
Edit or delete an item only if you want to change or delete it globally. Deleting an item from the
tab menu deletes it from the global repository and from every project it is attached to.
A version control interface is provided so that you can use it to save data as revisions in a version control
archive. You can also use the Global project to unlock all the items that you checked out.
Procedure
1. To view the items in the global repository, select a tab, for example the Operator View tab, and
select the Global option in the Projects menu.
You see all the operator views that are stored in the global repository.
2. Each time that you create an item, for example a data type, it is automatically added to the Global
project on the specific tab.
3. To remove an item from the global repository, open the appropriate tab and select the item that you
want to delete.
4. Click the Delete icon on the tab menu bar.
5. Click OK in the confirmation box.
The item is deleted from the global repository, and all projects it was assigned to.
Procedure
1. To view, create or edit a project, select the cluster and click the down arrow next to the existing project
name to open the Projects window.
• From the Manage Projects list, click Create Project.
• From the Manage Projects list, click Edit Current Project.
Use the project editor window to configure your new project or edit an existing project.
2. Click Edit Current Project. In the General Settings section, a default name is automatically given to
the project. You can create a unique name for your project. However, you cannot edit a project name
after the project is saved.
Remember: To use UTF-8 characters in the project name, check that the locale on the Impact Server
where the project is saved is also set to the UTF-8 character encoding.
3. In the Member section, you can add data sources, data types, policies, operator views, and services to
your project.
a) From the List By list, select a group whose elements you want to add to your project.
When you select an item, for example, Data Sources, all the data sources that you have created,
plus the predefined data sources are listed in the Members pane. If you have not yet created any
data sources, data types, policies, or services on your server, only predefined items are listed in the
Members pane.
4. From the Available Members list, select the members that you want to add to the project and click the
right-arrow button >>. The selected items appear in the Project Members list.
Then click OK.
5. To remove selected members from the project and return them to the Available Members list, select
them in the Project Members list and click the left-arrow button <<.
Then click OK.
6. If you do not want to add any items to the project now, click OK without making any changes.
Deleting a project
Use this procedure to delete a project without removing the project members from other projects or from
the global repository.
Procedure
1. From the Project menu, select the project you want to delete.
2. In the Projects window, click the Delete Current Project icon.
When you delete a project it is removed from the server. However, the project members that were
assigned to it are not removed from other projects or from the global repository.
Important: You cannot edit or delete the Global project.
3. Click OK to confirm the deletion.
Procedure
1. From the main toolbar, select the server cluster from which you want to copy data.
2. Select DeployProject from the list of policies.
The Policy Editor opens and shows the contents of the DeployProject policy.
3. Click Configure Policy Settings to open the Policy Settings Editor window.
For reference on the configuration options, see “DeployProject policy input parameters window” on
page 8.
4. Click OK to save the configuration and close the window.
After you run the DeployProject policy, you can check the contents of the policy log for the results of
the project deployment.
Checkpoint ID: If you are using Subversion as the version control system, you can type a checkpoint
label. This label is applied to all project components when they are checked into the version control
system for the target cluster. If you are not using Subversion, or you do not want to use a checkpoint
label, accept the default value for this field, which is NULL.
Procedure
1. From the Projects menu, select the Global project.
2. Select the down arrow next to the project name, then click Clear all user locks to unlock all the items
that you have checked out.
You can unlock only your own items. If you want to unlock an item that is owned by another user,
contact an administrator who is assigned the impactAdminUser role.
3. A confirmation message shows when the files are unlocked.
Procedure
1. Create data sources.
Identify the data that you want to use and where it is stored. Then, create one data source for each
real-world source of data. For example, if the data is stored in one MySQL database and one LDAP
server, you must create one MySQL data source and one LDAP data source.
2. Create data types.
After you set up the data sources, create the required data types. You must create one data
type for each database table (or other data element, depending on the data source) that contains data
that you want to use.
Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster list, select the cluster you want to use.
3. From the Project list, select the project you want to use.
The data sources that are available to the project are displayed in the Data Model tab.
Icon Description
Click this icon to create a data source. Select one of the available data source types from the
list. After you create a data source, you can right-click the data source and click New Data
Type to create an associated data type.
Select a data source and click this icon to create a data type for the selected data source.
After you create a data type, it is listed under its data source. Alternatively, you can right-click
a data source and select New Data Type to create a data type for this data source.
Select an element in the list and click this icon to edit it. Alternatively, right-click an item in the
list and select Edit in the menu.
Click to view the selected data type in the editor panel. Select the View Data Items option
to view the data items for the data type, or the View Performance Report option to review
a performance report for the data type. Alternatively, you can view the data items or the
performance report for a data type by right-clicking the data type.
Click this icon to test the connection to the data source. Alternatively, right-click an item in the
list and select Test Connection in the menu.
Important: If you see an error message stating that the data source cannot establish a
connection to a database because a JDBC driver was not found, it means that a required JDBC
driver is missing in the shared library directory. To fix this, place a licensed JDBC driver in the
shared library directory and restart the server. For more information, see the "SQL database
DSAs" chapter in the Netcool/Impact DSA Reference Guide.
Click the Delete icon to delete a data source or type from the server. Alternatively, you can
right-click a data source or type and select Delete.
This action deletes an item permanently from the database. To safely remove a data type from
only one project and not from the database, use the project editor.
This icon is visible when a data source or data type item is locked, or the item is being used
by another user. Hover the mouse over the locked item to see which user is working on the
item. You can unlock your own items, but not items that are locked by other users. You cannot
unlock an item that you have open for editing; save and close the item first. To unlock an item
that you have locked, right-click the item name and select Unlock. Only users who are assigned
the impactAdminUser role can unlock items that are locked by another user, and only in
exceptional circumstances.
DB2 SQL database You use the DB2 DSA to access information in an IBM DB2
database.
Derby SQL database Use the Derby DSA to access information in a Derby
database. The Derby DSA is used to store the underlying
data that is used by the GUI reporting tools and
Netcool/Impact solutions such as Maintenance Window
Management.
Flat File SQL database You use the Flat File DSA to read information in a character-
delimited text file. The flat file data source can be accessed
like an SQL data source by using standard SQL commands
in Netcool/Impact, for example, DirectSQL. The flat file DSA
is read only, which means that you cannot add new data
items in the GUI. To create a flat file data source, you need a
text file that is already populated with data.
Generic SQL SQL database You use the Generic SQL DSA to access information in any
database application through a JDBC driver.
HSQLDB SQL database Use the HSQL DSA to access information in an HSQL
database.
Informix SQL database You use the Informix® DSA to access information in an IBM
Informix database.
JMS Messaging API A Java Message Service (JMS) data source abstracts
the information that is required to connect to a JMS
implementation.
Kafka Messaging API You use the Kafka DSA to access message data from a Kafka
endpoint.
LDAP Directory Server The Lightweight Directory Access Protocol (LDAP) data
source represents an LDAP directory server. The LDAP DSA
supports only non-authenticating data sources.
MS SQL Server SQL database Use the MS-SQL Server DSA to access information in the
Microsoft SQL Server database.
ObjectServer SQL database The ObjectServer data source represents the instance of
the Netcool/OMNIbus ObjectServer that you monitor by
using the OMNIbus event listener service, or OMNIbus event
reader service.
OAuth Authentication You can use the OAuth data source to provision
access to an external OAuth authentication provider. This
enables components such as the EmailReader service to
authenticate with an OAuth provider.
ODBC SQL database Use the ODBC DSA to access information in an ODBC
database.
Oracle SQL database Use the Oracle DSA to access information in an Oracle
database.
RESTful API REST The RESTful API data source represents access to an HTTP
REST endpoint. An Impact policy can send REST requests
through the RESTful API data source.
SNMP Mediator The SNMP DSA is a data source adapter that Netcool/
Impact uses to set and retrieve management information
that is stored by SNMP agents. Netcool/Impact can use the
SNMP DSA to send SNMP traps and notifications to SNMP
managers.
Sybase SQL database Use the Sybase DSA to access information in a Sybase
database.
UI Data Provider REST The UI Data Provider represents access to a Data Provider
endpoint such as the TBSM provider or another Impact
cluster.
Internal The Internal data source contains the following predefined data
types: TimeRangeGroup, LinkType, and FailedEvent.
ITNM The ITNM data source is used with ITNM and the ITNM DSA.
Schedule The Schedule data source contains the predefined data type
schedule. You cannot edit the schedule data source but you can add
additional data types.
Statistics The Statistics data source contains the hibernation data type. You
cannot edit the statistics data source or add additional data types.
URL The URL data source contains the predefined data type document.
You cannot edit the URL data source but you can add additional data
types.
XmlDsaMediatorDataSource The XmlDsaMediator data source is used with the XML DSA.
Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select a template for the data
source that you want to create. The tab for the data source opens.
4. Complete the information, and click Save to create the data source.
Procedure
1. In the Data Model tab, double-click the name of the data source that you want to edit. Alternatively,
right-click the data source and click Edit.
2. Make the changes and click Save to apply them.
Datasourcelist file
The datasourcelist file is a text file that lists all the Impact data sources that have been created.
It is located in the etc directory and has the following name format:
<server>_datasourcelist
Where <server> is the name of the Impact server, for example NCI_datasourcelist.
The datasourcelist file comprises the following elements:
1. A total count of all data sources:
impact.datasources.numdatasources=xxx
2. For each data source, the following entries:
impact.datasources.n.name=EIC_alertsdb
impact.datasources.n.number=8
impact.datasources.n.type=ObjectServer
Where:
n is a sequential number starting from 1.
name is the name of the data source as it appears under the Data Model tab of the Impact UI.
number is the data source number.
type is the type of the data source.
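For illustration, the n-indexed key scheme above can be parsed into per-data-source records. This Python sketch and its embedded sample content are hypothetical and are not part of the product:

```python
# Sketch: parse datasourcelist-style properties lines into records.
# The sample content mirrors the format described above.

def parse_datasourcelist(text):
    """Return (count, {index: {field: value}}) from datasourcelist lines."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue
        key, value = line.split("=", 1)
        props[key] = value

    count = int(props.get("impact.datasources.numdatasources", "0"))
    records = {}
    for key, value in props.items():
        parts = key.split(".")
        # Per-data-source keys look like impact.datasources.<n>.<field>
        if len(parts) == 4 and parts[2].isdigit():
            records.setdefault(int(parts[2]), {})[parts[3]] = value
    return count, records

sample = """\
impact.datasources.numdatasources=3
impact.datasources.1.name=EIC_alertsdb
impact.datasources.1.number=1
impact.datasources.1.type=ObjectServer
impact.datasources.2.name=EventrulesDB
impact.datasources.2.number=2
impact.datasources.2.type=DB2
impact.datasources.3.name=FlatFile_DS
impact.datasources.3.number=3
impact.datasources.3.type=Flat File
"""
count, records = parse_datasourcelist(sample)
```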
Order of entries
The entries do not have to appear in numeric order in the datasourcelist file, but it is important that
there are no gaps in the number sequence. For example, if you have three data sources, you must have
the following entries (in any order) in the datasourcelist file:
impact.datasources.numdatasources=3
impact.datasources.1.name=EIC_alertsdb
impact.datasources.1.number=1
impact.datasources.1.type=ObjectServer
impact.datasources.2.name=EventrulesDB
impact.datasources.2.number=2
impact.datasources.2.type=DB2
impact.datasources.3.name=FlatFile_DS
impact.datasources.3.number=3
impact.datasources.3.type=Flat File
By contrast, the following file contains a gap in the index sequence (index 2 is missing) and must be
repaired:
impact.datasources.numdatasources=3
impact.datasources.1.name=EIC_alertsdb
impact.datasources.1.number=1
impact.datasources.1.type=ObjectServer
impact.datasources.3.name=EventrulesDB
impact.datasources.3.number=2
impact.datasources.3.type=DB2
impact.datasources.4.name=FlatFile_DS
impact.datasources.4.number=3
impact.datasources.4.type=Flat File
If the datasourcelist file is corrupted (for example, it is wiped out or the numbers are not sequential),
you can use the rebuildDatasourceList and createDatasourceList utilities to fix it. For details, see
“rebuildDatasourceList” on page 18 and “createDatasourceList” on page 19.
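Conceptually, repairing a gapped file amounts to detecting the missing indices and renumbering the remaining entries. The sketch below only illustrates that idea and is not the implementation of the utilities:

```python
# Sketch of what a gap repair does conceptually; the real fix is performed
# by the rebuildDatasourceList utility, not by these illustrative helpers.

def find_gaps(records):
    """Return the missing indices in the 1..max(index) sequence."""
    present = set(records)
    return sorted(set(range(1, max(present) + 1)) - present)

def renumber(records):
    """Renumber {index: record} so indices run 1..len with no gaps."""
    fixed = {}
    for new_n, old_n in enumerate(sorted(records), start=1):
        fixed[new_n] = records[old_n]
    return fixed

# Indices 1, 3, 4 as in the gapped example above: index 2 is missing.
corrupted = {1: {"name": "EIC_alertsdb"},
             3: {"name": "EventrulesDB"},
             4: {"name": "FlatFile_DS"}}
```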
rebuildDatasourceList
The rebuildDatasourceList script removes gaps and errors from an existing Impact datasourcelist file.
The tool consists of the following files:
• rebuildDatasourceList.xml: This file contains all of the logic necessary to rebuild the
<NCI>_datasourcelist file, thereby removing any gaps in the numeric sequence.
• rebuildDatasourceList.bat: Windows bat file that calls ant and executes
rebuildDatasourceList.xml.
• rebuildDatasourceList.sh: UNIX sh file that calls ant and executes
rebuildDatasourceList.xml.
These files are installed in the following directory:
<installdir>/install/tools
To run the rebuildDatasourceList script, change to the <installdir>/install/tools directory and
run the .bat or .sh script.
createDatasourceList
The createDatasourceList script generates a new Impact datasourcelist file based on the contents of
the .ds files and the backup file DataSourceInfoBackup.
The tool consists of the following files:
• createDatasourceList.xml: This file contains all of the logic necessary to create the
<NCI>_datasourcelist file.
• createDatasourceList.bat: Windows bat file that calls ant and executes
createDatasourceList.xml.
• createDatasourceList.sh: UNIX sh file that calls ant and executes
createDatasourceList.xml.
These files are installed in the following directory:
<installdir>/install/tools
To run the createDatasourceList script, change to the <installdir>/install/tools directory and
run the .bat or .sh script.
No additional input is required.
Note: A backup of the previous <NCI>_datasourcelist file (if one
exists) is saved in the <installdir>/etc/ directory and is
renamed <installdir>/etc/<NCI>_datasourcelist_pre_<datetime>. For example:
<installdir>/etc/NCI_datasourcelist_pre_20210905013045.
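The backup naming scheme in the note can be sketched as follows; the server name and timestamp are taken from the example above, and the helper itself is illustrative:

```python
# Sketch: reproduce the <server>_datasourcelist_pre_<datetime> backup name.
from datetime import datetime

def backup_name(server, when):
    """Return <server>_datasourcelist_pre_<datetime> (yyyymmddHHMMSS)."""
    return "{0}_datasourcelist_pre_{1}".format(
        server, when.strftime("%Y%m%d%H%M%S"))

name = backup_name("NCI", datetime(2021, 9, 5, 1, 30, 45))
```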
Schedule Editable Schedules define a list of data items that are associated with
specific time ranges or with time range groups.
Document Editable Custom URL Document data types are derived from the
predefined Doc data type.
ITNM Editable This data type is used with ITNM and the ITNM DSA.
TimeRangeGroup Non-editable A time range group data type consists of any number of time
ranges.
LinkType Non-editable The LinkType data type provides a way of defining named and
hierarchical dynamic links.
Hibernation Non-editable When you call the Hibernate function in a policy, the policy
is stored as a Hibernation data item for a certain number of
seconds.
Procedure
1. Click Data Model to open the Data Model tab.
2. Expand the data source that contains the data type that you want to edit, and double-click the name
of the data type. Alternatively, right-click the data type and click Edit.
3. Make the required changes in the Data type tab.
4. Click Save to apply the changes.
Procedure
1. From the list of data sources and types, locate the data type you want to delete.
2. Select the data type, right-click and select Delete, or click the Delete icon on the toolbar.
Attention: When you delete a data type from within a project or the global repository, it is also
deleted from any other projects that use it. To remove a data type from one project, open the
editor window for that project.
Typelist file
The typelist file is a text file that lists all the Impact data types that have been created.
It is located in the etc directory and has the following name format:
<server>_typelist
Where <server> is the name of the Impact server, for example NCI_typelist.
The typelist file comprises the following elements:
For each data type, the following entries:
impact.types.n.name=EIC_alertquery
impact.types.n.number=2
impact.types.n.class=SQL
impact.types.n.image=database.png
Where:
n is a sequential number starting from 1.
name is the name of the data type as it appears under the Data Model tab of the Impact UI.
number is the data type number.
class is the class of the data type.
image is the icon image file for the data type.
Order of entries
The entries do not have to appear in numeric order in the typelist file, but it is important that there are
no gaps in the number sequence. For example, if you have three data types, you must have the following
entries (in any order) in the typelist file:
impact.types.1.name=EIC_alertquery
impact.types.1.number=1
impact.types.1.class=SQL
impact.types.1.image=database.png
impact.types.2.name=EIC_PARAMETERS
impact.types.2.number=2
impact.types.2.class=SQL
impact.types.2.image=database.png
impact.types.3.name=EIC_RuleResources
impact.types.3.number=3
impact.types.3.class=SQL
impact.types.3.image=database.png
By contrast, the following file contains a gap in the index sequence (index 2 is missing) and must be
repaired:
impact.types.1.name=EIC_alertquery
impact.types.1.number=1
impact.types.1.class=SQL
impact.types.1.image=database.png
impact.types.3.name=EIC_PARAMETERS
impact.types.3.number=2
impact.types.3.class=SQL
impact.types.3.image=database.png
impact.types.4.name=EIC_RuleResources
impact.types.4.number=3
impact.types.4.class=SQL
impact.types.4.image=database.png
If the typelist file is corrupted (for example, it is wiped out or the numbers are not sequential), you can
use the rebuildTypeList and createTypeList utilities to fix it. For details, see “rebuildTypeList” on page 22
and “createTypeList” on page 23.
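As an illustration, a typelist record can be checked for the four fields shown in the entries above. The helper and sample records are hypothetical, not part of the utilities:

```python
# Sketch: verify that every typelist entry carries the four expected fields.
# Field names follow the impact.types.<n>.<field> entries shown above.

REQUIRED_FIELDS = {"name", "number", "class", "image"}

def incomplete_entries(records):
    """Return indices whose entries are missing any required field."""
    return sorted(n for n, fields in records.items()
                  if not REQUIRED_FIELDS <= set(fields))

records = {
    1: {"name": "EIC_alertquery", "number": "1",
        "class": "SQL", "image": "database.png"},
    2: {"name": "EIC_PARAMETERS", "number": "2",
        "class": "SQL"},  # image field missing
}
bad = incomplete_entries(records)
```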
rebuildTypeList
The rebuildTypeList script removes gaps and errors from an existing Impact typelist file.
The tool consists of the following files:
• rebuildTypeList.xml: This file contains all of the logic necessary to rebuild the <NCI>_typelist
file, thereby removing any gaps in the numeric sequence.
• rebuildTypeList.bat: Windows bat file that calls ant and executes rebuildTypeList.xml.
• rebuildTypeList.sh: UNIX sh file that calls ant and executes rebuildTypeList.xml.
These files are installed in the following directory:
<installdir>/install/tools
To run the rebuildTypeList script, change to the <installdir>/install/tools directory and run
the .bat or .sh script.
createTypeList
The createTypeList script generates a new Impact typelist file based on the contents of the .type files.
The tool consists of the following files:
• createTypeList.xml: This file contains all of the logic necessary to create the <NCI>_typelist
file.
• createTypeList.bat: Windows bat file that calls ant and executes createTypeList.xml.
• createTypeList.sh: UNIX sh file that calls ant and executes createTypeList.xml.
These files are installed in the following directory:
<installdir>/install/tools
To run the createTypeList script, change to the <installdir>/install/tools directory and run
the .bat or .sh script.
No additional input is required.
Note: A backup of the previous <NCI>_typelist file (if one exists)
is saved in the <installdir>/etc/ directory and is renamed
<installdir>/etc/<NCI>_typelist_pre_<datetime>. For example: <installdir>/etc/
NCI_typelist_pre_20210905013045.
Links overview
Links are an element of the data model that defines relationships between data items and between data
types.
They can save time during policy development because you define a data relationship once and then
reuse it whenever a policy needs to find data that is related to other data. Links are an optional part of a
data model. Dynamic links and static links are supported.
Netcool/Impact provides two categories of links.
Static links
Static links define a relationship between data items in internal data types.
Dynamic links
Dynamic links define a relationship between data types.
Data sources
Data sources are elements of the data model that represent real world sources of data in your
environment.
These sources of data include third-party SQL databases, LDAP directory servers, or other applications
such as messaging systems and network inventory applications.
Data sources contain the information that you need to connect to the external data. You create a data
source for each physical source of data that you want to use in your Netcool/Impact solution. When you
create an SQL database, LDAP, or Mediator data type, you associate it with the data source that you
created. All associated data types are listed under the data source in the Data Sources and Types task
pane.
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
Username Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section in the
Netcool/Impact Administration Guide. That section
describes the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database at
any one time. That number must be greater than,
or equal to, the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
Username Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section in the
Netcool/Impact Administration Guide. That section
describes the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database
at one time. That number must be greater than or
equal to the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.
Database Failure Policy Select the failover option. Available options are Fail
over and Disable Backup. The Fail back option is
not supported for Derby databases.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 10. Primary source settings for Derby data source window
Window element Description
Table 11. Backup source settings for Derby data source window
Window element Description
Procedure
1. To create a flat file data source, you need a text file that is already populated with data.
For example, create a /home/impact/myflatfile.txt file with the following content:
Name, Age
Ted, 11
Bob, 22
2. In the Data Model tab, click the New Data Source icon and click Flat File.
The New Flat File tab opens.
3. Enter the required information:
a) Enter a unique name for your data source name, for example MyFlatFileDataSource.
b) In the Directory field, provide the path to your flat file, for example /home/impact.
c) In the Delimiters field, specify the delimiters that you used in your flat file, for example, a comma (,).
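The example file above can be read as comma-delimited rows. Netcool/Impact accesses flat files through SQL commands, so this Python reader is only an illustration of the file layout, not of the DSA itself:

```python
# Sketch: read a comma-delimited flat file like the myflatfile.txt example.
import csv
import io

flat_file = """\
Name, Age
Ted, 11
Bob, 22
"""

# skipinitialspace drops the space that follows each comma in the example.
reader = csv.reader(io.StringIO(flat_file), skipinitialspace=True)
header = next(reader)
rows = [dict(zip(header, row)) for row in reader]
```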
What to do next
Use the data source that you just created to create a flat file data type. For more information about
creating flat file data types, see “Creating flat file data types” on page 86.
Table 12. General settings for flat file data source configuration
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
Table 13. Source settings for flat file data source configuration
Window element Description
Directory The path to the directory that contains the flat file.
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
JDBC Driver Class Type the name of the JDBC driver for the database.
Username Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section in the
Netcool/Impact Administration Guide. That section
describes the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections allowed to the database at one
time. That number has to be greater than or equal
to the number of threads running in the Event
Processor. See “Configuring the Event processor
service” on page 144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
Username Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section in the
Netcool/Impact Administration Guide. That section
describes the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database at
one time. That number has to be greater than or
equal to the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 19. Backup source settings in the HSQLDB data source window
Window element Description
Table 20. General settings for the Informix data source window
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
Username Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section in the
Netcool/Impact Administration Guide. That section
describes the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. That number must
be greater than or equal to the number of threads
that are running in the Event Processor. See
“Configuring the Event processor service” on page
144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 21. Primary source settings for the Informix data source window
Window element Description
Table 22. Backup source settings for the Informix data source window
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
User name Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section in the
Netcool/Impact Administration Guide. That section
describes the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. That number must
be greater than or equal to the number of threads
that are running in the Event Processor. See
“Configuring the Event processor service” on page
144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 25. Backup source settings for MS-SQL Server data source window
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
JDBC Driver Class Select the MySQL JDBC driver class. Refer to your
database server documentation for the appropriate
class name.
Username Type a valid user name with which you can access
the database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section in the
Netcool/Impact Administration Guide. That section
describes the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. For best performance,
this number must be greater than or equal to the
maximum number of event processor threads. See
“Configuring the Event processor service” on page
144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 27. Primary source settings in the MySQL data source window
Window element Description
Table 28. Backup source settings in the MySQL data source window
Window element Description
Note: From Fix Pack 23 onwards, you can create a data source of type MySQL with the MySQL 8 JDBC
driver in the $IMPACT_HOME/dsalib directory. To do this, complete the following steps:
1. Create a SQL datasource of type MySQL.
2. Go to $IMPACT_HOME/etc and manually edit the datasource file for the MySQL datasource that you
created.
For example, in the file NCI_XXX.ds, change the JDBCDRIVER property from:
XXX.MySQL.JDBCDRIVER=org.gjt.mm.mysql.Driver
to:
XXX.MySQL.JDBCDRIVER=com.mysql.jdbc.Driver
3. Restart the Impact Server:
$IMPACT_HOME/bin/stopImpactServer.sh
$IMPACT_HOME/bin/startImpactServer.sh
Note:
The required JDBC driver that Impact uses to connect to a MySQL server is known as Connector/J.
This is the jar file that must be loaded into $IMPACT_HOME/dsalib.
If the MySQL server is configured to use SSL, Impact can make secure connections by setting
additional connection properties. These properties are set in the Database field in the MySQL Data
Source Editor.
For versions 8.0.12 and earlier of Connector/J, add the properties
?allowPublicKeyRetrieval=true&requireSSL=true
For example:
Database: nameOfDatabase?allowPublicKeyRetrieval=true&requireSSL=true
For versions 8.0.13 to 8.0.18 of Connector/J, add the properties
?allowPublicKeyRetrieval=true&sslMode=REQUIRED
For example:
Database: nameOfDatabase?allowPublicKeyRetrieval=true&sslMode=REQUIRED
For later versions of Connector/J, refer to the MySQL documentation regarding the required
connection properties.
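As the examples above show, the Database field value is the database name followed by a ?-prefixed, &-separated list of connection properties. A small hypothetical helper illustrates the assembly:

```python
# Sketch: build a Database field value with JDBC connection properties.
# The database name and helper are illustrative, not a product API.

def database_field(db_name, props):
    """Append ?key=value&key=value connection properties to a database name."""
    query = "&".join("{0}={1}".format(k, v) for k, v in props)
    return "{0}?{1}".format(db_name, query) if query else db_name

# Connector/J 8.0.13 to 8.0.18 style properties, as in the example above:
value = database_field("nameOfDatabase",
                       [("allowPublicKeyRetrieval", "true"),
                        ("sslMode", "REQUIRED")])
```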
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
User name Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Command-Line
tools, Event Processor commands section in the
Netcool/Impact Administration Guide. That section
describes the Select PoolConfig from Service
where Name='EventProcessor'; command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. That number must
be greater than or equal to the number of threads
that are running in the Event Processor. See
“Configuring the Event processor service” on page
144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 30. Primary source settings for ObjectServer data source configuration
Window element Description
Table 31. Backup source settings for ObjectServer data source configuration
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
User name Type a user name that you use to access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Event
Processor commands section under Command-Line
tools in the Netcool/Impact Administration Guide,
which describes the Select PoolConfig from
Service where Name='EventProcessor';
command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type or select the
maximum number of connections that are allowed
to the database at one time. That number must
be greater than or equal to the number of threads
that are running in the Event Processor. See
“Configuring the Event processor service” on page
144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 33. Primary source settings in the ODBC data source window
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
User name Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Event
Processor commands section under Command-Line
tools in the Netcool/Impact Administration Guide,
which describes the Select PoolConfig from
Service where Name='EventProcessor';
command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database
at one time. That number must be greater than or
equal to the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.
Binding Name The name to which the Oracle Data Source object
is bound. For information about the binding name,
refer to the documentation of your Naming Service
provider. For example, cn=myDataSource.
This option is displayed only if you choose LDAP
Data Source in the Connection Options.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 36. Primary source settings for Oracle data source window
Window element Description
Table 37. Backup source settings for Oracle data source window
Window element Description
SID / Service Name Type a backup SID or service name. The default
value is ORCL. For more information, see your
Oracle documentation. Backup SID is optional.
Procedure
1. Open the Data Model tab, click the New Data Source icon in the toolbar, and select Oracle.
2. In the Data Source Name field, enter a unique name to identify the data source.
3. In the Username field, enter a user name that you can use to access the database.
4. In the Password field, enter a password that you can use to access the database.
5. In the Maximum SQL Connections list, choose the number of connections in the connection pool. For
maximum performance, set the size of the connection pool to be greater than or equal to the maximum
number of threads that are running in the event processor.
Important: Changing the maximum connections setting in an SQL data source requires a restart of the
Impact Server.
For information about viewing existing thread and connection pool information, see the Event
Processor commands section under Command-Line tools in the Netcool/Impact Administration Guide,
which describes the Select PoolConfig from Service where Name='EventProcessor'; command.
Important: In a clustered environment, the event processor configuration is not replicated
between servers. You must run the Select PoolConfig from Service where
Name='EventProcessor'; command on the primary and the secondary servers.
Limiting the number of concurrent connections manages performance. Type the maximum number of
connections that are allowed to the database at one time. That number must be greater than or equal
to the number of threads that are running in the Event Processor. See “Configuring the Event processor
service” on page 144.
6. In the Connection Options list, choose LDAP URL.
7. In the Oracle LDAP URL field, enter the Oracle LDAP URL in the following format:
jdbc:oracle:thin:@ldap:<IP_address>/ADTEST,cn=OracleContext,DC=oracle,
dc=support,dc=com
8. After you enter the URL, you are prompted for the LDAP user name and password.
For example, enter the following:
• In the LDAP Username field, enter
cn=Administrator,cn=Users,dc=oracle,dc=support,dc=com.
• In the LDAP Password field, enter netcool.
jdbc:oracle:thin:@
(DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = host1)(PORT = port1))
(ADDRESS = (PROTOCOL = TCP)(HOST = host2)(PORT = port2))
(LOAD_BALANCE = yes)
(CONNECT_DATA =
(SERVER = DEDICATED)
(SERVICE_NAME = service-name)
(FAILOVER_MODE =(TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))
)
)
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
User name Type a user name that you use to access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Event
Processor commands section under Command-Line
tools in the Netcool/Impact Administration Guide,
which describes the Select PoolConfig from
Service where Name='EventProcessor';
command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database
at one time. That number must be greater than or
equal to the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
Table 39. Primary source settings for PostgreSQL data source window
Window element Description
Table 40. Backup source settings for PostgreSQL data source window
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
User name Type a user name with which you can access the
database.
Maximum SQL Connection For maximum performance set the size of the
connection pool as greater than or equal to the
maximum number of threads that are running in
the event processor.
Important: Changing the maximum connections
setting in an SQL data source requires a restart of
the Impact Server.
For information about viewing existing thread and
connection pool information, see the Event
Processor commands section under Command-Line
tools in the Netcool/Impact Administration Guide,
which describes the Select PoolConfig from
Service where Name='EventProcessor';
command.
Important: In a clustered environment,
the event processor configuration is not
replicated between servers. You must run the
Select PoolConfig from Service where
Name='EventProcessor'; command on the
primary and the secondary servers.
Limiting the number of concurrent connections
manages performance. Type the maximum number
of connections that are allowed to the database
at one time. That number must be greater than or
equal to the number of threads that are running
in the Event Processor. See “Configuring the Event
processor service” on page 144.
Database Failure Policy Select the failover option. Available options are Fail
over, Fail back, and Disable Backup.
For more information about failover options, see
“SQL database DSA failover modes” on page 25.
impact.[dataSourceType].resultsettype=[integer]
impact.[dataSourceType].resultsetconcurrency=[integer]
Note: Both these properties take integer values which correspond to the following statement options:
TYPE_FORWARD_ONLY=1003
TYPE_SCROLL_INSENSITIVE=1004
TYPE_SCROLL_SENSITIVE=1005
CONCUR_READ_ONLY=1007
CONCUR_UPDATABLE=1008
impact.oracle.resultsettype=1003
impact.oracle.resultsetconcurrency=1007
After changing the values in the etc/servername_datasource.props file, restart the Impact server
for the changes to take effect.
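The integer values map to constants defined on the standard java.sql.ResultSet interface. As a quick reference, the mapping can be written out as a lookup table (a sketch for reference only; confirm the values against your JDBC driver documentation):

```javascript
// java.sql.ResultSet constant values used by the resultsettype and
// resultsetconcurrency data source properties
var resultSetOptions = {
    TYPE_FORWARD_ONLY: 1003,       // cursor moves forward only
    TYPE_SCROLL_INSENSITIVE: 1004, // scrollable, insensitive to others' changes
    TYPE_SCROLL_SENSITIVE: 1005,   // scrollable, sensitive to others' changes
    CONCUR_READ_ONLY: 1007,        // result set cannot be updated
    CONCUR_UPDATABLE: 1008         // result set can be updated
};

// The Oracle example above selects a forward-only, read-only result set
var oracleType = resultSetOptions.TYPE_FORWARD_ONLY;        // 1003
var oracleConcurrency = resultSetOptions.CONCUR_READ_ONLY;  // 1007
```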
Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select UI Data Provider. The
tab for the data source opens.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.
5. In the Host Name field, add the location where the UI data provider is deployed. The location is a
fully qualified domain name or IP address.
6. In the Port field, add the port number of the UI data provider.
7. Use SSL: To enable Netcool/Impact to connect over SSL to a data provider, you must export a
certificate from the data provider and import it into the Impact Servers and each GUI Server. If the
data provider is an IBM Dashboard Application Services Hub server, complete these steps to export
and import the certificate. For other data provider sources, after you obtain the certificate, use steps
(f and g) to import the certificate.
a) In the IBM Dashboard Application Services Hub server, go to Settings, WebSphere
Administrative Console, Launch WebSphere administrative console.
b) Within the administrative console, select Security, SSL certificate and key management, Key
stores and certificates, NodeDefaultKeyStore, Personal certificates.
c) Check the default certificate check box and click Extract.
d) Enter dash for the certificate alias to extract.
e) For certificate file name, enter a file name on the system to which the certificate is
written, such as C:\TEMP\mycertificate.cert.
f) Copy the certificate file to the Impact Server host and import it into both the Impact Servers
and GUI Servers. For more information about the import commands, refer to the Netcool/Impact
Administration Guide, within the security chapter go to the 'Enabling SSL connections with
external servers' topic.
g) Restart the Impact Servers and each GUI Server.
Providing support for multi-tenancy for Tree Table and Topology widgets
Impact supports multi-tenancy: the case where two identical widgets are displayed on the same page, or
where the same page is duplicated on two tabs, and the widgets execute the same policy.
With the out-of-the-box configuration, if two identical Tree Table or Topology widgets are placed on
the same page, or if the same page is duplicated on two tabs, the data they receive is vulnerable
to corruption. This happens when the Impact UI data provider cannot distinguish between the widgets.
Because the widgets execute the same policy, the dataset that the UI data provider sends back to the
widgets can contain duplicated rows or mixed-up data, or otherwise be incorrect.
Multi-tenancy support, in such a scenario, consists of enabling each widget to receive a unique dataset
while executing the same policy. To provide this functionality, create a new input parameter on the
policy that the widget uses. The parameter should be called "owner" (see the property described later
if this name is not feasible). Declare owner as a policy input parameter and use it in the policy,
according to the business logic, to generate widget-specific data sets with the same policy. The new
input parameter must be set by the widget in the Configure Optional Dataset Parameters section, and
each widget that uses the policy must set the value uniquely.
Example:
owner = 'MyWidgetOne' // for the first widget
owner = 'MyWidgetTwo' // for the second widget
Note: The value can be any string, except "default".
Depending on the scenario the new input parameter may determine different output for the policy, or
it may have no effect on the policy output. See the following policy excerpt for an example where two
different data sets based on the owner parameter value are delivered:
if (owner == "MyWidgetOne") {
var sysObj = NewObject();
sysObj.UITreeNodeType = "Tree";
sysObj.system = "Z11";
sysObj.node = "nodeOne";
sysObj.status = "Critical";
sysObj.UITreeNodeId = 0;
sysObj.UITreeNodeParent = 3;
systemTree.push(sysObj);
....
}
Important: When enabling multi-tenancy for a policy, the output parameter from the policy cannot be a
scalar type value. Multi-tenancy is only supported for the DirectSQL / UI Data Provider Datatype, Impact
Object, Array of Impact Object and Datatype formats. If an unsupported format is selected, the policy will
return no data.
uidataprovider.multitenant.parameter.name
By default, this property is set to owner. In most cases, this value will not need to be changed. However, if
this value conflicts with another parameter (for example, the widget data source policy has another input
parameter with the same name), you must declare a new value in the server.props file. For example,
suppose you decide that the parameter that conveys the widget identity to the Impact UI data provider
should be named tenant.
Declare it in the $IMPACT_HOME/etc/server.props file with the following line:
uidataprovider.multitenant.parameter.name=tenant
Declare tenant as a policy input parameter and use it in the policy accordingly, to generate widget-
specific data sets with the same policy. See the following policy excerpt for an example where two
different data sets based on the tenant parameter value are delivered:
if (tenant == "TenantOne") {
var sysObj = NewObject();
sysObj.UITreeNodeType = "Tree";
sysObj.system = "Z11";
sysObj.node = "nodeOne";
sysObj.status = "Critical";
sysObj.UITreeNodeId = 0;
sysObj.UITreeNodeParent = 3;
systemTree.push(sysObj);
....
}
if (tenant == "TenantTwo") {
var sysObj = NewObject();
sysObj.UITreeNodeType = "Tree";
sysObj.system = "A00";
sysObj.node = "nodeTwo";
sysObj.status = "Normal";
sysObj.UITreeNodeId = 0;
sysObj.UITreeNodeParent = 3;
systemTree.push(sysObj);
....
}
For each widget in the multi-tenancy use case (duplicated widgets on the same page or on different
tabs) assign a unique value for the new input parameter in the Configure Optional Dataset Parameters
section.
For example:
tenant = 'TenantOne' // for the first widget
tenant = 'TenantTwo' // for the second widget
Important: default is a reserved word and cannot be used as a value for the parameter. For example,
tenant = 'default' is not allowed.
Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select RESTful API. The tab for
the data source opens.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.
5. In the Host Name field, add the hostname of the REST service that you want to connect to. The
hostname is a fully qualified domain name or IP address.
6. In the Resource Path field, add the path information to the resource if necessary.
7. In the Port field, add the port number. The default port is 80.
8. Use HTTPS/SSL to enable Netcool/Impact to connect over SSL to a REST data source. You must
export a certificate from the data source and import it into the Impact Servers and each GUI Server.
a) Get the certificate through the browser.
Refer to your browser manual, searching for exporting a certificate.
b) Copy the certificate file to the Impact Server host and import it into both the Impact Servers and
GUI Servers.
For more information about import commands, see the Netcool/Impact Administration Guide
under the section Enabling SSL connections with external servers in the Security chapter.
c) Restart the Impact Servers and each GUI Server.
Alternatively, you can select the Disable SSL Verification checkbox to allow the RESTful
DSA to connect over SSL without having to import the certificate. If enabled, the DSA will no longer
attempt to verify the SSL connection.
9. Select the Reuse Connection checkbox if required.
Connection caching is done at a policy level. This means the same HTTP connection can be reused
within a policy when it is running.
10. Select the Cache Response checkbox if required.
Note: Response caching is based on entity tags, one of several mechanisms that the HTTP
protocol provides for cache validation, which allows a client to make conditional requests.
By default, Impact adds a Cache-Control: max-age=0 header to any newly created REST data
source in the HTTP header list. This header causes any caches used during the request to
revalidate, ensuring that the entity tag is checked. Modify this header to the Cache-Control
setting you want to use.
11. Authentication.
If using basic authentication, you must provide the username and password:
a) In the User Name field type a user name with which you can access the REST API.
b) In the Password field type a password with which you can access the REST API.
The following will be added to the URL when making the request:
13. Specify HTTP parameters if you are making requests to the data source where the same HTTP
parameters are used consistently.
The REST API data source can persist these parameters, and they are used on every call to the
data source unless overridden by the policy function.
For example, if a new parameter is added to the grid, this is the same as adding a query parameter to
the request. If the grid has the following parameters:
Then ?size=100&name=impact will be added to the URL when making the request.
14. Click Test Connection to see if it is possible to connect to the data source with the current data
source settings.
15. Click Preview Request to preview an example of the raw HTTP request with the current data
source settings.
16. Click Save to create the data source.
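The HTTP parameter behavior described in step 13 amounts to query-string assembly, which can be sketched as follows (an illustration of the documented behavior, not the RESTful DSA's actual implementation):

```javascript
// Build the query string that persisted data source parameters add to a request
function buildQueryString(params) {
    var parts = Object.keys(params).map(function (key) {
        return encodeURIComponent(key) + "=" + encodeURIComponent(params[key]);
    });
    return parts.length ? "?" + parts.join("&") : "";
}

// The size and name parameters from the example in step 13
var query = buildQueryString({ size: "100", name: "impact" });
// query is "?size=100&name=impact"
```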
Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select OAuth. The tab for the
data source opens.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
LDAP Server Type the server name where the LDAP database
resides. The default is localhost.
Table 47. General settings in the CORBA Mediator DSA data source window
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
Table 48. Source settings in the CORBA Mediator DSA data source window
Window element Description
Name Service Object Name Add the Name Service Object Name.
Table 49. General settings in the SNMP data source configuration window
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
Table 50. Data source settings in the SNMP data source configuration window
Window element Description
Mediator Class Name The following class name appears in this field:
com.micromuse.dsa.snmpdsa.SnmpMediator
Table 51. SNMP agent settings in the SNMP data source configuration window
Window element Description
Host Name If you are creating this data source for use with
the standard data-handling functions AddDataItem
and GetByFilter, enter the host name or IP address.
If you are creating this data source for use with the
new SNMP functions, accept the default value.
Read Community Type the name of the SNMP read community. The
default is public.
Write Community Type the name of the SNMP write community. The
default is public.
Port If you are creating this data source for use with
the standard data-handling functions AddDataItem
and GetByFilter, select or enter the port number.
If you are creating this data source for use with the
new SNMP functions, accept the default value.
Table 53. General settings for the JMS data source window
Window element Description
Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.
org.exolab.jms.jndi.InitialContextFactory
JNDI Provider URL Enter the JNDI provider URL. The JNDI provider
URL is the network location of the JNDI provider.
The required value for this field varies by JMS
implementation. For OpenJMS, the default value
of this property is tcp://hostname:3035, where
hostname is the name of the system on which
OpenJMS is running. The network protocol, TCP or
RMI, must be specified in the URL string. For other
JMS implementations, see the related product
documentation.
JNDI URL Packages Enter the Java package prefix for the JNDI context
factory class. For OpenJMS, BEA WebLogic, and
Sun Java Application Server, you are not required
to enter a value in this field.
JMS Connection Factory Name Enter the name of the JMS connection factory
object. The JMS connection factory object is
a Java object that is responsible for creating
new connections to the messaging system.
The connection factory is a managed object
that is administered by the JMS provider. For
example, if the provider is BEA WebLogic, the
connection factory object is defined, instantiated,
and controlled by that application. For the
name of the connection factory object for your
JMS implementation, see the related product
documentation.
JMS Destination Name Enter the name of a JMS topic or queue, which is
the name of the remote topic or queue where the
JMS message listener listens for new messages.
Procedure
1. In the Data Model tab, locate the data type for which you want performance statistics.
2. Right-click the data type and click View Performance Statistics.
For more information about the statistics reported in the window, see “Data type performance
statistics” on page 69.
3. Close the window.
Time to Execute Each Query Average time it took to run each query, calculated
over the query interval.
Time to Read Results of Each Query Average time it took to read the results of each
query over the query interval.
Number of Data Items (% of total) Actual number of data items and the percentage of
data items loaded from the data cache per query
interval.
Number of Data Items in Use The number of data items loaded from the data
cache that are referred to by queries in the query cache.
Time Spent Clearing the Cache The time it took to clear the cache.
Tab Description
Custom Fields In this tab, you can add any number of fields to form a database table.
Dynamic Links In this tab you can create links to other data types, both external and internal, to
establish connections between information.
Links between individual data items can represent any relationship between the items
that policies need to be able to look up. For example, a node linked to an operator
allows a policy to look up the operator responsible for the node.
Table 58. General settings on the Internal Data Type Editor Custom Fields tab
Editor element Description
Data Type Name Type a unique name to identify the data type. You
can use only letters, numbers, and the underscore
character in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.
If you receive an error message when you save a
data type, check the Global tab for a complete list
of data type names for the server. If you find the
name you tried to save, you must change it.
Access the data through UI data provider To ensure that the UI data provider can access
the data in the data type, select the Access the
data through UI data provider: Enabled check
box. When you enable the check box, the data
type sends data to the UI data provider. When the
data model refreshes, the data type is available
as a data provider source. The default refresh rate
is 5 minutes. For more information about UI data
providers, see the Solutions Guide.
Table 59. Additional settings on the Internal Data Type Editor Custom Fields tab
Editor element Description
Field Name Type the actual field name. The field name can be
the same as the ID. You can reference both the ID
field and the Field Name field in policies.
If you do not enter a Display Name, Netcool/
Impact uses the ID field name by default.
Format Select a format for the field from the Format list:
Display Name Field: You can use this field to select a field from the
menu to label data items according to the field
value. Choose a field that contains a unique value
that can be used to identify the data item, for
example, ID. To view the values on the data item,
you must go to View Data Items for the data type
and select the Links icon. Click the data item to
display the details.
Table 60. UI data provider settings on the Internal Data Type Editor Custom Fields tab
Editor element Description
Define Custom Types and Values (JavaScript) To show percentages and status in a widget, you
must create a script in JavaScript format. The
script uses the following syntax, where Type is
either Percentage or Status and VariableName
can be a variable or a hardcoded value. Always
cast the variable to String to avoid errors, even
if the value is numeric.
ImpactUICustomValues.put("<FieldName>,<Type>", <VariableName>);
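A minimal example of that syntax is shown below. The ImpactUICustomValues map is supplied by Impact at run time; the stub at the top exists only so the snippet runs standalone, and the field names and values are hypothetical:

```javascript
// Stand-in for the ImpactUICustomValues map that Impact supplies at run time
var ImpactUICustomValues = {
    store: {},
    put: function (key, value) { this.store[key] = value; }
};

var cpuUsage = 85; // hypothetical numeric field value computed earlier in the policy

// Always cast the value to String, even when it is numeric
ImpactUICustomValues.put("CPUUsage,Percentage", String(cpuUsage));
ImpactUICustomValues.put("NodeStatus,Status", "Critical");
```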
Deleting a field
You can use the Delete function to limit which fields are updated, inserted, and selected from the data
source.
Remember: When you delete a field from the data type, it is not deleted from the data source.
Using a subset of the database fields can speed performance of the data type.
Schedule Editable Schedules define a list of data items that are associated with specific
time ranges or time range groups.
Document Editable Custom URL Document data types are derived from the
predefined Doc data type.
ITNM Editable This data type is used with ITNM and the ITNM DSA.
TimeRangeGroup Non-editable A time range group data type consists of any number of time
ranges.
LinkType Non-editable The LinkType data type provides a way of defining named and
hierarchical dynamic links.
Hibernation Non-editable When you call the Hibernate function in a policy, the policy
is stored as a Hibernation data item for a certain number of
seconds.
Positive The time range is active when the current time is within the time range, unless it is
overlapped by a Negative or an Override.
Negative The time range is inactive for the specified range. This time range is useful, for
example, to exclude a lunch hour from a Positive time range.
Override The time range is always active within the range, regardless of any negative ranges.
Time Description
range
Daily A time range between a starting time and an ending time for every day of the week, for
example, 9 a.m. to 5 p.m.
Weekly A range between a starting time on a specified day and ending on a specified day every
week, for example Monday 9 a.m. to Friday 5 p.m.
Absolute A range of time between two specific dates, for example, March 3, 2004 to March 4, 2004.
One way this time range is useful is for server maintenance. If a server is due to be down
for maintenance on a specific day and you do not want it to show up as an alarm, you could
define an Absolute range and use it in an Event Suppression policy.
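The precedence of the three range types (Override always wins, then Negative, then Positive) can be modeled with a small sketch. Representing a range as a start/end pair of hours is an assumption for illustration only:

```javascript
// Simplified model of time-range precedence: Override > Negative > Positive
function inRange(time, range) {
    return time >= range.start && time < range.end;
}

function isActive(time, positives, negatives, overrides) {
    var within = function (range) { return inRange(time, range); };
    if (overrides.some(within)) return true;   // Override: always active in range
    if (negatives.some(within)) return false;  // Negative: inactive for the range
    return positives.some(within);             // Positive: active unless overlapped
}

// Example: 9:00-17:00 Positive with a 12:00-13:00 Negative (lunch hour excluded)
var atLunch = isActive(12, [{ start: 9, end: 17 }], [{ start: 12, end: 13 }], []);
// atLunch is false: the Negative range suppresses the Positive one
```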
Procedure
1. In the Data Model tab, select the Global project from the Project menu.
2. In the list of data sources and data types, click the plus sign next to the Internal data source to view
its data types.
3. Select the TimeRangeGroup data type.
4. Right-click and select View Data Items to open the TimeRangeGroup screen.
5. Click the New Data Item button.
6. In the Time Range Group Name field, type a unique name to describe the group.
7. To add a new time range, click the New Time Range icon in the table and configure the time range
accordingly.
See “Adding daily time ranges” on page 75, “Adding weekly time ranges” on page 76, and “Adding
absolute time ranges” on page 76.
After configuration is complete, click Save to save the time range.
8. To add an existing time range group:
a. Click the Add Existing Time Range Group icon in the table.
b. In the Group Editor screen, select the existing group that you want to add.
c. Click Save to add the group.
Procedure
In the Time Range Editor, after you select Daily from the dropdown field, complete the information about
the start and end times of the time range.
Enter the information using this table as a guide:
Start Time: hour/min Using the 24-hour clock, enter the start time.
EndTime: hour/min Using the 24-hour clock, enter the end time.
Time Zone Select the appropriate time zone from the list.
Click Save to save the time range. Click the Back icon to return to the Time Range Group screen.
Procedure
In the Time Range Editor, after you select Weekly from the dropdown field, complete the information
about the start and end times of the time range.
Enter the information using this table as a guide:
Start Select the day of the week to indicate the beginning day of the time range.
hour/min Type or select the time of day to start the time range.
End Select the day of the week to indicate the end of the time range.
hour/min Type or select the time of day to end the time range.
Time Zone Select the appropriate time zone from the list.
Click Save to save the time range. Click the Back icon to return to the Time Range Group screen.
Procedure
In the Time Range Editor, after you select Absolute from the dropdown field, complete the information
about the start and end times of the time range.
Enter the information using this table as a guide:
Start Click the calendar icon to select the start date. Complete the hours, minutes, and
seconds of the start time.
End Click the calendar icon to select the end date. Complete the hours, minutes, and
seconds of the end time.
Time Zone Select the appropriate time zone from the list.
Click Save to save the time range. Click the Back icon to return to the Time Range Group screen.
Schedules overview
Schedules define a list of data items that are associated with specific time ranges or time range groups.
You can use links between Schedule data items and other data items to schedule any items, for example,
the hours when a departmental node is business critical or to identify who is currently on call when an
alert occurs.
Configuring schedules
Use this procedure to create a schedule.
Procedure
1. Expand the Schedule data source in the Data Model tab.
2. Select the Schedule data type. Right-click and select View Data Items.
3. Click the New Data Item button.
4. Enter the following information in the tab:
a. In the Schedule Name field, type a unique name for the schedule.
b. In the Description field, add a Description for the schedule.
5. To display schedule member data items in the schedule members dropdown:
a. Click the Configure Members button.
b. Add one or more data item members for this schedule.
Enter information in the Configure Members screen as outlined in the following table:
Data Type The type from which to select members for the
schedule.
Selected Members (and Types) Highlight one or more candidates from the list.
6. Click Save. Click the back icon to return to the Schedule configuration screen.
Now you can select the member for which to add time ranges.
7. Enter the time ranges for the candidate. See “Configuring time range groups” on page 75.
The green light next to the On Call Status for the current member indicates that the administrator
is on call. If the administrator is not on call, the traffic light is red.
8. Repeat for each schedule member selectable from the Schedule Member drop-down list.
9. Click the back icon on the Schedule Editor to display the new schedule data item as a new row in the
table.
For information about editing and deleting data items, see Chapter 6, “Working with data items,” on
page 99.
Tab Description
Table Description Name the data type, change the data source, if necessary, and add any number of
fields from the data source to form a database table.
Dynamic Links In this tab you can create links to other data types, both external and internal, to
establish connections between information.
Links between individual data items can represent any relationship between the
items that policies must be able to look up. For example, a node linked to an operator
allows a policy to look up the operator responsible for the node.
For more information about the Dynamic Links tab, see Chapter 7, “Working with links,”
on page 103.
Cache Settings In this tab, you can set up caching parameters to regulate the flow of data between
Netcool/Impact and the external data source.
Use the guidelines in “SQL data type configuration window - Cache settings tab”
on page 85, plus the parameters for the performance report for the data type to
configure data and query caching.
Important: SQL data types in Netcool/Impact require all columns in a database table to have the Select
permission enabled to allow discovery and to enable the save option when creating data types.
Procedure
• Provide a unique name for the data type.
• Specify the name of the underlying data source for the data type.
• Specify the name of the database and the table where the underlying data is stored.
• Auto-populate the fields in the data type.
• Select a display name for the data type.
• Specify key fields for the data type.
• Specify a data item filter.
• Specify which field in the data type to use to order data items.
• Specify the direction to use when ordering data items.
• Enable the data type for access to a UI Data Provider
Table 69. General settings for the Table Descriptions tab of the SQL data type configuration window
Editor element Description
Data Type Name Type a unique name to identify the data type. Only
letters, numbers, and the underscore character
must be used in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.
Data type names must be unique globally, not just
within a project. If you receive an error message
when you save a data type, check the Global
project tab for a complete list of data type names
for the server. If you find the name you tried to
save, you must change it.
Access the data through UI data provider To ensure that the UI data provider can access
the data in the data type, select the Access the
data through UI data provider: Enabled check
box. When you enable the check box the data type
sends data to the UI data provider. When the data
model refreshes, the data type is available as a
data provider source. The default refresh rate is
5 minutes. For more information about UI data
providers, see the Solutions Guide.
Show New / Deleted Fields If you have deleted fields from the data type that
still exist in the SQL database, these fields do not
show in the user interface. To restore the fields
to the data type, mark the Show New / Deleted
Fields check box and click Refresh.
New Field Use this option if you need to add a field to the
table from the data source database. For example,
in the case where the field was added to the
database after you created the data type.
Make sure that the field name you add has the
same name as the field name in the data source.
Important: Any new fields added to this table are
not automatically added to the data source table.
You cannot add fields to the database table in this
way.
For more information, see “SQL data type
configuration window - adding and editing fields in
the table” on page 83.
Key field Key fields are used when you retrieve data from
the data type in a policy that uses the GetByKey
function. They are also used when you define a
GetByKey dynamic link.
Important: You must define at least one key field
for the data type, even if you do not plan to use
the GetByKey function in your policy. If you do not,
Netcool/Impact does not function properly.
Generally, the key fields you define correspond to
key fields in the underlying database table.
To specify a key field, double-click on the key
field column and then click the check box in the
appropriate row in the Key Field column. You can
add multiple key fields.
Display Name Field You can use this field to select a field from the
menu to label data items according to the field
value. Choose a field that contains a unique value
that can be used to identify the data item, for
example, ID. To view the values on the data item,
you must go to View Data Items for the data type
and select the Links icon. Click the data item to
display the details.
Automatically Remove Deleted Fields Mark the Automatically Remove Deleted Fields
check box to remove any fields from the data
type that have already been removed from the
SQL database. The deleted fields are removed
automatically when a policy that uses this data
type is run.
Table 71. Data filtering and ordering settings for the Table Descriptions tab of the SQL data type
configuration window
Window element Descriptions
Define Custom Types and Values (JavaScript) To show percentages and status in a widget, you
must create a script in JavaScript format. The
script uses the following syntax.
ImpactUICustomValues.put("<FieldName>,
<Type>",<VariableName>);
Check Syntax and Preview Script Sample Result Click the Check Syntax and Preview Script
Sample Result button to preview the results and
check the syntax of the script. The preview shows a
sample of 10 rows of data in the table.
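The Define Custom Types and Values script uses the ImpactUICustomValues object that Netcool/Impact supplies at run time. The sketch below stubs that object with a plain map so the put call can be tried outside the product; the field name, type, and variable are hypothetical:

```javascript
// Hypothetical stand-in for the ImpactUICustomValues object that
// Netcool/Impact supplies to the script at run time.
const store = new Map();
const ImpactUICustomValues = {
  put: (key, value) => store.set(key, value),
  get: (key) => store.get(key),
};

// The field name "MemoryUsed", the type "percentage", and the value
// are illustrative only.
const memoryUsedPercent = 72;
ImpactUICustomValues.put("MemoryUsed,percentage", memoryUsedPercent);

console.log(ImpactUICustomValues.get("MemoryUsed,percentage")); // 72
```

Inside the product, the "FieldName,Type" key tells the widget which field to render and how, for example as a percentage.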
SQL data type configuration window - adding and editing fields in the table
Use this information to add a field to the table for a SQL data type or to edit an existing field.
In the Table tab, in the New Field area, click New to add a field to the data type, or select the edit icon
next to an existing field that you want to edit.
ID By default, the ID is the same as the column name in the database. You can
change it to any other unique name. For example, if the underlying column
names in the data source are difficult to use, you can use the ID field to
provide an easier alias for the field.
Field Name Type a name that can be used in policies. It represents the name in the SQL
column. Type the name so that it is identical to how it is displayed in the data
source. Otherwise, Netcool/Impact reports an error when it tries to access the
data type.
Format For SQL database data types, Netcool/Impact automatically discovers the
columns in the underlying table and automatically detects the data format
for each field when you set up the data type. For other data types, you
must manually specify the format for each field that you create. For more
information about formats, see the Working with data models chapter in the
Solutions Guide in the section Working with data types, Data type fields.
Restriction: The Microsoft SQL server table treats the TIMESTAMP field as a
non-date time field. The JDBC driver returns the TIMESTAMP field as a row
version binary data type, which is discovered as STRING in the Microsoft SQL
server data type. To resolve this issue, in the Microsoft SQL server table, use
DATEITEM to display the proper time format instead of TIMESTAMP.
Select a format from the following list:
• STRING
• LONG_STRING
• INTEGER
• PASSWORD_STRING
• LONG
• FLOAT
• DOUBLE
• DATE
• TIMESTAMP
• BOOLEAN
• CLOB
Display Name You can use this field to select a field from the menu to label data items
according to the field value. Choose a field that contains a unique value that
can be used to identify the data item, for example, ID. To view the values on
the data item, you must go to View Data Items for the data type and select
the Links icon. Click the data item to display the details.
If you do not enter a display name, Netcool/Impact uses the ID field name by
default.
Description Type some text that describes the field. This description is only visible when
you edit the data type in the GUI.
Default Value Type a default expression for the field. It can be any value of the specified
format (see the Format row), or it can be a database-specific identifier such as
an Oracle pseudocolumn, for example, sequence.NEXTVAL.
Insert Statements: Exclude this field When you select the Exclude this Field check box,
Netcool/Impact does not set the value for the field when it inserts or updates
a data item in the database. This field is used for insert and update
statements only, not for select statements.
Sybase data types:
You must select this option when you map a field to an Identity field or a
field with a default value in a Sybase database. Otherwise, Netcool/Impact
overwrites the field on insert with the specified value or with a space character
if no value is specified.
ObjectServer data types:
The Tally field automatically selects the Exclude this Field check box to
be excluded from inserts and updates for the object server data type since
this field is automatically set by Netcool/OMNIbus to control deduplication of
events.
The Serial field automatically selects the Exclude this Field check box to be
excluded from inserts and updates when an ObjectServer data type points to
alerts.status.
Type Checking: Strict Click to enable strict type checking on the field. Netcool/Impact checks the
format of the value of the field on insertion or update to ensure that it is of
the same format as the corresponding field in the data source. If it is not the
same, Netcool/Impact does not set the value on insertion or update, and a
message to that effect is displayed in the server log. If you do not enable strict
type checking, all type checking and format conversions are done at the data
source level.
Procedure
1. Before you can create a flat file data type, you must create a flat file data source.
For more information about creating flat file data sources, see “Creating flat file data sources” on page
30.
2. Click Create a new data type next to the flat file data source that you created earlier, for example
MyFlatFileDataSource.
3. In the new data type window, provide the required information.
a) In the Data Type Name: field, type a unique name for your data type, for example,
MyFlatFileDataType.
Your data source, MyFlatFileDataSource, should already have been preselected in the Data
Source Name: list. If not, select it from the list.
b) In the Base Table: field, enter the name of your flat file that you created for your flat file data
source, for example myflatfile.txt.
c) Click Refresh to load field names from your text file.
d) Select the check boxes in the Key Field column.
e) Save your flat file data type.
Results
If you open the data items viewer, you can see the entries from your flat file.
Procedure
1. Right-click the UI data provider data source that you created and select New Data Type.
2. In the Data Type Name field, type the name of the data type.
3. The Enabled check box is selected to activate the data type so that it is available for use in policies.
4. The Data Source Name field is prepopulated with the data source.
5. From the Select a Dataset list, select the data set you want to return the information from.
The data sets are based on the provider and the data sets that you selected when you created the data
source. If this list is empty, then check the data source configuration.
6. Click Save. The data type shows in the list menu.
Tab Description
LDAP Info In this tab, you configure the attributes of the data type. For more information about
these attributes, see “LDAP Info tab of the LDAP data type configuration window” on
page 88.
Dynamic Links In this tab you can create links to other data types, both external and internal, to
establish connections between information. Links between individual data items can
represent any relationship between the items that policies need to be able to look
up. For example, a node linked to an operator allows a policy to look up the operator
responsible for the node.
For more information about creating links to other data types, see Chapter 7, “Working
with links,” on page 103.
Cache Settings In this tab, you can set up caching parameters to regulate the flow of data between
Netcool/Impact and the external data source.
For more information about cache settings, see “SQL data type configuration window -
Cache settings tab” on page 85.
Important: You must create one LDAP data type for each set of entities that you want to access. The
LDAP data type is a read-only data type which means that you cannot edit or delete LDAP data items from
within the GUI.
Procedure
• Provide a unique name for the data type.
• Specify the name of the underlying data source for the data type.
• Specify the base context level in the LDAP hierarchy where the elements you want to access are
located.
• Specify a display name field.
• Specify a restriction filter.
Table 75. General settings in the LDAP Info Tab on the LDAP Data Type editor
Editor element Description
Data Type Name Type a unique name to identify the data type. Only
letters, numbers, and the underscore character
must be used in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.
Table 76. LDAP settings in the LDAP Info Tab on the LDAP Data Type editor
Editor element Description
Data Source Name Type the name of the underlying data source.
This field is automatically populated, based on
your data source selection in the Data Types task
pane of the Navigation panel. However, if you have
more than one LDAP data source configured for use
with Netcool/Impact, you can select any LDAP data
source in the list, if necessary.
If you enter a new name, the system displays a
message window that asks you to confirm your
change.
Key Search Field Type the name of a key field, for example, dn.
Display Name Field You can use this field to select a field from the
menu to label data items according to the field
value. Choose a field that contains a unique value
that can be used to identify the data item, for
example, ID. To view the values on the data item,
you must go to View Data Items for the data type
and select the Links icon. Click the data item to
display the details.
Table 77. Attribute configuration in the LDAP Info Tab on the LDAP Data Type editor
Editor element Description
New Field For each field that you want to add to the data
type, click New.
Tab Description
DSA Mediator This tab contains the attributes of the data type. See your DSA documentation for more
information.
Dynamic Links In this tab you can create links to other data types, both external and internal, to
establish connections between information.
Links between individual data items can represent any relationship between the items
that policies need to be able to look up. For example, a node linked to an operator
allows a policy to look up the operator responsible for the node.
For more information about the Dynamic Links tab, see Chapter 7, “Working with links,” on
page 103.
Cache Settings In this tab, you can set up caching parameters to regulate the flow of data between
Netcool/Impact and the external data source.
Tab Description
DSA Mediator This tab contains the attributes of the data type. See your DSA documentation for more
information.
Dynamic Links In this tab you can create links to other data types, both external and internal, to
establish connections between information.
Links between individual data items can represent any relationship between the items
that policies need to be able to look up. For example, a node linked to an operator
allows a policy to look up the operator responsible for the node.
For more information about the Dynamic Links tab, see Chapter 7, “Working with links,” on
page 103.
Cache Settings In this tab, you can set up caching parameters to regulate the flow of data between
Netcool/Impact and the external data source.
To ensure that the UI data provider can access the data in this data type, select the Access the data
through UI data provider: Enabled check box on the DSA Mediator tab. When you enable the check
box, the data type sends data to the UI data provider. When the data model refreshes, the data type is
available as a data provider source. The default refresh rate is 5 minutes. For more information about UI
data providers, see the Solutions Guide.
Table 80. General settings for the DSA Mediator tab of the SNMP data type editor
Editor element Description
Data Type Name Type a unique name to identify the data type. Only
letters, numbers, and the underscore character
must be used in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.
Access the data through UI data provider: Enabled To ensure that the UI data provider can access
the data in the data type, select the Access the
data through UI data provider: Enabled check
box. When you enable the check box the data type
sends data to the UI data provider. When the data
model refreshes, the data type is available as a
data provider source. The default refresh rate is
5 minutes. For more information about UI data
providers, see the Solutions Guide.
Table 81. SNMP settings for the DSA Mediator tab of the SNMP data type editor
Editor element Description
OID Configuration Select Packed OID data types from the OID
Configuration list.
New Attribute If you are creating the data type for use with the
standard data-handling functions AddDataItem
and GetByFilter, create a new attribute on the
data type for each variable you want to access.
To create an attribute, click New Attribute and
specify an attribute name and the OID for the
variable.
If you are creating this data source for use with the
new SNMP functions, you do not need to explicitly
create attributes for each variable. In this scenario,
you pass the variable OIDs when you make each
function call in the Netcool/Impact policy.
Get Bulk: Enabled If you want the DSA to retrieve table data from
the agent that uses the SNMP GETBULK command
instead of an SNMP GET command, select Get
Bulk. The GETBULK command retrieves table data
by using a continuous GETNEXT command. This
option is suitable for retrieving data from large
tables.
When you select Get Bulk, you can control
the number of variables in the table for which
the GETNEXT operation is completed using the
specified Non-Repeaters and Max Repetitions
values.
Define Custom Types and Values (JavaScript) To show percentages and status in a widget, you
must create a script in JavaScript format. The
script uses the following syntax.
ImpactUICustomValues.put
("<FieldName>,<Type>",<VariableName>);
Preview Script Sample Result Click the Preview Script Sample Result button to
preview the results and check the syntax of the
script. The preview shows a sample of 10 rows of
data in the table.
Procedure
1. In the data types tab, select an SNMP data source from the list.
2. Click the New Data Type button to open the New Data Type editor.
3. Type a name for the data type in the Data Type Name field.
Important:
If you are creating this data source for use with the new SNMP functions, you do not need to explicitly
create attributes for each table. In this scenario, you pass the table OIDs when you make each function
call in the Netcool/Impact policy.
7. If you want the DSA to retrieve table data from the agent using the SNMP GETBULK command instead
of an SNMP GET, select Get Bulk.
The GETBULK command retrieves table data using a continuous GETNEXT command. This option is
suitable for retrieving data from very large tables.
8. If you have selected Get Bulk, you can control the number of variables in the table for which the
GETNEXT operation is performed using the specified Non-Repeaters and Max Repetitions values.
The Non-Repeaters value specifies the number of leading variables that are not repeated, and Max
Repetitions specifies the number of repetitions for each of the remaining variables in the operation.
9. Click Save.
Table 82. General settings for the DSA Mediator tab of the SNMP data type editor
Editor element Description
Data Type Name Type a unique name to identify the data type. Only
letters, numbers, and the underscore character
must be used in the data type name. If you use
UTF-8 characters, make sure that the locale on the
Impact Server where the data type is saved is set
to the UTF-8 character encoding.
Access the data through UI data provider: Enabled To ensure that the UI data provider can access
the data in the data type, select the Access the
data through UI data provider: Enabled check
box. When you enable the check box the data type
sends data to the UI data provider. When the data
model refreshes, the data type is available as a
data provider source. The default refresh rate is
5 minutes. For more information about UI data
providers, see the Solutions Guide.
Table 83. SNMP settings for the DSA Mediator tab of the SNMP data type editor
Editor element Description
New Attribute If you are creating this data type for use with
the standard data-handling functions AddDataItem
and GetByFilter, you must create a new attribute
on the data type for each variable you want to
access. To create an attribute, click New Attribute
and specify an attribute name and the OID for the
variable.
If you are creating this data source to use with the
new SNMP functions, you do not need to explicitly
create attributes for each table. In this scenario,
you pass the variable OIDs when you make each
function call in the Impact policy.
Get Bulk: Enabled If you want the DSA to retrieve table data from the
agent using the SNMP GETBULK command instead
of an SNMP GET, select Get Bulk. The GETBULK
command retrieves table data using a continuous
GETNEXT command. This option is suitable for
retrieving data from very large tables.
When you select Get Bulk, you can control
the number of variables in the table for which
the GETNEXT operation is performed using the
specified Non-Repeaters and Max Repetitions
values.
Procedure
1. Select the LinkType data type.
2. Right-click and select View Data Items then click New to create a new LinkType data item.
3. Select the name, source, and target data types for the new link type.
The new data item appears in the Available LinkType Data Items table.
When you create dynamic links, the LinkType data type is available for selection. See Chapter 7,
“Working with links,” on page 103 for more information.
Procedure
1. Select the Doc data type, right-click, and select View Data Items. Click New to create a new Doc data
item.
The Create Doc Data Item window opens.
2. Type a Document name.
3. Type a description for the document.
4. Type the IP address of the document.
5. Click OK.
The new Doc data item is displayed in the table.
Procedure
1. Click Data Model to open the Data Model tab.
2. Select the data source from the data sources list.
3. Click the New Data Type icon. A new Data Type Editor tab opens.
4. Create your chosen data type.
links.type.item.field
links.Customer.first.Name
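A linking expression of this form splits mechanically at the dots into its four parts. A minimal JavaScript sketch of that split (the parser is illustrative only, not part of the Netcool/Impact API):

```javascript
// Illustrative parser for the links.<type>.<item>.<field> pattern
// shown above; not part of the Netcool/Impact API.
function parseLinkExpression(expr) {
  const [prefix, type, item, field] = expr.split(".");
  if (prefix !== "links") {
    throw new Error("not a linking expression: " + expr);
  }
  return { type, item, field };
}

console.log(parseLinkExpression("links.Customer.first.Name"));
// { type: 'Customer', item: 'first', field: 'Name' }
```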
The following linking expression references the value of the Location field in the second OrgNode node
returned when a link is evaluated:
links.OrgNode.second.Location
Procedure
1. Click Data Model to open the Data Model tab.
2. Expand the data source that contains the data type that you want to edit and double-click the name
of the data type. Alternatively, right-click the data type and click Edit.
3. Create a dynamic or static link, from the base data type to the target data type.
4. In the New Field area of the Table description tab, click New to open the Field properties window to
create a field for the base data type:
Complete the following steps to create the linked field:
a) In the ID field, give the field a unique name.
b) In the Field Name field, add a linking expression as the field name.
c) From the Format list, select the type of data to be held in this field.
d) In the Display name field, add the display name.
e) In the Description field, add the description.
Note: If using a link by key and the data type is internal, the field referenced as the key must match
the key field in a row in the target data type. Otherwise, NULL is returned.
f) Click OK.
The field you created shows in the list of fields in the Table Description tab.
5. Click Save to add the changes to the data type.
Procedure
1. Locate the data type in the data connections list.
2. Select a data type and click the View Data Items icon next to the data type.
If you have multiple data items open and you select View Data Items on a data type you opened
already, the tab switches to the existing open data item tab.
When viewing data items, Netcool/Impact has a built-in threshold mechanism to control how much
data gets loaded. The default threshold limit is 10000. If the underlying table to which the data type
points has more than 10000 rows that match the data type filter, Netcool/Impact shows a warning
message indicating that the number of rows for the data type exceeds the threshold limit.
Note: The threshold limit is set in $IMPACT_HOME/etc/server.props using the
property, impact.dataitems.threshold. To view data exceeding the threshold limit, the
impact.dataitems.threshold property would need to be modified and the server restarted.
The higher the value is set, the more memory is consumed. The heap settings for both the Impact
Server and the GUI Server would have to be increased from the default values. For more information
about setting the minimum and maximum heap size limit, see the chapter on Self Monitoring in the
Netcool/Impact Administration Guide.
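For example, to raise the limit you might set the property in $IMPACT_HOME/etc/server.props as follows (the value 20000 is illustrative only):

```properties
# Maximum number of data items loaded in the data items viewer
impact.dataitems.threshold=20000
```

Restart the Impact Server after changing this property for the new limit to take effect.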
You can limit the number of data items shown by entering a search string in the Filter field.
Filter Retrieved Data Items: The filter searches all the fields in the current set of paged results
containing the search text. If the number of results requires the results to be paged, the filter only
filters the results on the current page. The filter is cleared when you navigate between pages.
For information about entering filter syntaxes, see the Working with filters section of the Policy
Reference Guide.
Procedure
1. In the Data Model tab, select the appropriate data type, right-click and select View Data Items.
2. To add a new data item to the table, click the New Data Item in the toolbar.
The screen that you next see depends on the data type configuration.
3. Enter the information in the screen.
4. Click Save and then the Back icon to return to the data item list.
The new data item is listed in the table.
Procedure
1. To edit a data item, select the data item and click Edit.
The edit screen that you see depends on the data type configuration.
a) Change the information as necessary.
b) Click Save to save the changes, then click the Back icon to return to the data item list.
Note: When editing an SQL data item, the save attempt will include all fields in the data item unless
the field is marked for exclusion. To exclude a field, configure the Insert Statements: Exclude
this field property in the data type. See SQL data type configuration window - adding and editing
fields in the table for more information.
2. To delete an item, select the data items that you want to delete.
Check marks are placed in the check boxes next to the selected data items and the data items are
highlighted.
a) If you want to delete all the data items in the table, click the all link. Check marks are placed in
every check box in the Select column and the data items are highlighted.
b) Click the Delete icon to delete the selected data items.
Procedure
1. In the Data Model tab, right click the data type and select View Data Items. If items are available for
the data type, they show on the right side in tabular format.
2. If the list of returned items is longer than the UI window, the list is split over several pages. To go from
page to page, click the page number at the bottom.
3. To view the latest available items for the data type, click the Refresh icon on the data type.
4. You can limit the number of data items that are displayed by entering a search string in the Filter field. For
example, enter totalMemory=256 in the Filter field. Click Refresh on the data
items menu to show the filtered results.
Filter Retrieved Data Items: The filter searches all the fields in the current set of paged results
for the search text. If the results span more than one page, the filter applies only to
the results on the current page. The filter is cleared when you navigate between pages.
Tip: If your UI Data Provider data type is based on a Netcool/Impact policy, you can add
&executePolicy=true to the Filter field to run the policy and return the most up to date filtered
results for the data set.
For more information about using the Filter field and GetByFilter function runtime parameters to limit
the number of data items that are returned, see “Using the GetByFilter function to handle large data
sets” on page 100.
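The page-scoped filtering behavior described above can be modeled as a simple client-side search. This is an illustrative sketch only, not Netcool/Impact code; the item fields and the helper name are invented for the example:

```javascript
// Model of the Filter field behavior: the search text is matched against
// every field of each item, but only items on the current page are checked.
function filterCurrentPage(pageItems, searchText) {
  const needle = String(searchText).toLowerCase();
  return pageItems.filter((item) =>
    Object.values(item).some((value) =>
      String(value).toLowerCase().includes(needle)
    )
  );
}

// Example: filtering one page of data items on the text "256".
const page = [
  { id: 1, t_DisplayName: "node-a", totalMemory: 256 },
  { id: 2, t_DisplayName: "node-b", totalMemory: 512 },
];
const matched = filterCurrentPage(page, "256");
```

Navigating to another page would discard this result, because the filter is cleared between pages.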
This policy example uses the FILTER runtime parameters in a GetByFilter (Filter, DataType,
CountOnly) implementation in a UI data provider.
DataType = "123UIdataprovider";
CountOnly = false;
// The filter value here is illustrative; see the syntax examples that follow.
Filter = "&count=10";
// IPL signature: GetByFilter(DataType, Filter, CountOnly).
MyFilteredItems = GetByFilter(DataType, Filter, CountOnly);
Num = Length(MyFilteredItems);
index = 0;
if (Num > 0) {
    while (index < Num) {
        Log("Node[" + index + "] id = " + MyFilteredItems[index].id +
            "---Node[" + index + "] DisplayName= " +
            MyFilteredItems[index].t_DisplayName);
        index = index + 1;
    }
}
Log("========= END =========");
Here are some more syntax examples of the FILTER runtime parameters that you can use in a
GetByFilter (Filter, DataType, CountOnly) implementation in a UI data provider.
Example 1:
Filter = "&count=6";
No condition is specified. All items are fetched by the server, but only the first 6 are returned.
Example 2:
Filter = "&count=3&start=2";
No condition is specified. All items are fetched by the server, but only 3 are returned, starting at
item #2.
Example 3:
Filter = "&param_One=paramOne";
All items are fetched by the server, and paramOne is available for use by the provider when it returns the
data set.
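The effect of the count and start parameters in the examples above can be sketched as follows. This is an illustrative model only; the parameter parsing and the 1-based interpretation of start are assumptions, not provider internals:

```javascript
// Sketch of count/start paging: the server fetches all items,
// then returns only the requested window.
function applyPaging(allItems, filter) {
  const params = {};
  for (const part of filter.split("&")) {
    if (!part) continue;
    const [key, value] = part.split("=");
    params[key] = value;
  }
  // "start" is treated as a 1-based item position here (see Example 2).
  const start = params.start ? Number(params.start) - 1 : 0;
  const count = params.count ? Number(params.count) : allItems.length;
  return allItems.slice(start, start + count);
}

const items = ["item1", "item2", "item3", "item4", "item5", "item6", "item7"];
const firstSix = applyPaging(items, "&count=6");
const threeFromTwo = applyPaging(items, "&count=3&start=2");
```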
Adding Delimiters
The default delimiter is the ampersand (&) character. You can configure a different delimiter by editing
the property impact.uidataprovider.query.delimiter in the NCI_server.props file, where
NCI is the name of your Impact Server. Whenever you change the delimiter, you must restart the Impact Server
to implement the change.
The delimiter can be any suitable character or regular expression that is not part of the data set name or
any of the characters used in the filter value.
The following characters must be escaped with a double backslash (\\) when used as a delimiter:
* ^ $ . |
Examples:
An example using an asterisk (*) as a delimiter:
• Property Syntax: impact.uidataprovider.query.delimiter=\\*
• Filter query: t_DisplayName contains 'Imp'*count=5
An example with a combination of characters:
• Property Syntax: impact.uidataprovider.query.delimiter=ABCD
• Filter query: t_DisplayName contains 'Imp'ABCDcount=5
An example of a regular expression, subject to Java regular expression rules:
• Property Syntax: impact.uidataprovider.query.delimiter=Z|Y
• Filter query: t_DisplayName contains 'S'Zcount=9Zstart=7YexecutePolicy=true
An example of a combination of special characters: * . $ ^ |
• Property Syntax: impact.uidataprovider.query.delimiter=\\*|\\.|\\$|\\^|\\|
• Filter query: t_DisplayName contains 'S'.count=9|start=7$executePolicy=true
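To illustrate how a delimiter separates the condition from the runtime parameters, here is a sketch in JavaScript. The server side uses Java regular expression rules; this client-side model is an assumption for illustration only:

```javascript
// Split a filter query on a configurable delimiter. With the Z|Y
// regular expression delimiter from the example above, the first part
// is the condition and the remaining parts are runtime parameters.
function splitFilterQuery(query, delimiterPattern) {
  return query.split(delimiterPattern);
}

const parts = splitFilterQuery(
  "t_DisplayName contains 'S'Zcount=9Zstart=7YexecutePolicy=true",
  /Z|Y/
);
// parts[0] is the condition; parts[1..] are the runtime parameters.
```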
Dynamic links
Dynamic links define a relationship between data types.
This relationship is specified when you create the link and is evaluated in real time when a call to the
GetByLinks function is encountered in a policy. Dynamic links are supported for internal, SQL database
and LDAP data types.
The relationships between data types are resolved dynamically at run time when you traverse the link in a
policy or when you browse links between data items. They are dynamically created and maintained from
the data in the database.
The links concept is similar to the JOIN function in an SQL database. For example, there might be a 'Table
1' containing customer information (name, phone number, address, and so on) with a unique Customer
ID key. There may also be a 'Table 2' containing a list of servers. In this table, the Customer ID of
the customer that owns the server is included. When these data items are kept in different databases,
Netcool/Impact enables the creation of a link between Table 1 and Table 2 through the Customer ID field,
so that you can see all the servers owned by a particular customer.
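The Customer ID relationship described above can be modeled like this (a minimal sketch in JavaScript with invented table data; Netcool/Impact resolves such links dynamically at run time):

```javascript
// Table 1: customer information, keyed by CustomerID.
const customers = [
  { CustomerID: 100, Name: "Acme Corp", Phone: "555-0100" },
  { CustomerID: 200, Name: "Globex", Phone: "555-0200" },
];

// Table 2: servers, each carrying the owning customer's CustomerID.
const servers = [
  { ServerName: "srv-a", CustomerID: 100 },
  { ServerName: "srv-b", CustomerID: 100 },
  { ServerName: "srv-c", CustomerID: 200 },
];

// Resolve the link: all servers owned by a particular customer,
// analogous to an SQL JOIN on the CustomerID field.
function serversForCustomer(customerId) {
  return servers.filter((s) => s.CustomerID === customerId);
}

const acmeServers = serversForCustomer(100);
```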
You can use dynamic links only at the database level. (When relationships do not exist at the database
level, you need to create static links.) You can create dynamic links for all types of data types (internal,
external, and predefined). See Chapter 5, “Configuring data types,” on page 69 for information about the
kinds of data type.
Dynamic links are unidirectional links, configured from the source to the target data type.
Static links
Static links define a relationship between data items in internal data types.
Static links are supported for internal data types only. Static links are not supported for other categories
of data types, such as SQL database and LDAP types, because the persistence of data items that are
stored externally cannot be ensured.
A static link is manually created between two data items when relationships do not exist at the database
level.
With static links, the relationship between data items is static and never changes after they have been
created. You can traverse static links in a policy or in the user interface when you browse the linked data
items. Static links are bi-directional.
Link By: Key. This method evaluates an expression from one data type and matches it to the key field of
the target data type.
Link By: Filter. This method uses a filter expression to describe the link between any fields in the source
type and any fields of the target data type.
Link By: Policy. This method runs a specified policy to look up data items in the target and links all the
retrieved data items to data items of the source type.
Procedure
1. To open the Data Type editor, click a data type name.
2. In the Data Type editor, select the Dynamic Links tab.
3. You can create the following types of dynamic links:
• Link By Filter. For more information about creating links by filter, see “Adding new links by filter” on
page 104.
• Link By Key. For more information about creating links by key, see “Adding new links by key” on page
105.
• Link By Policy. For more information about creating links by policy, see “Adding new links by policy” on
page 106.
Tip: To create a new link by policy, you may need to scroll down so that the Link By Policy area is
visible.
4. Select the target data type from the Target Data Types list.
5. Select the exposed link type from the Exposed Link Type list.
6. Depending on the type of link that you are creating, type the filter or key expression, or select a policy.
• For a link by filter, type the filter syntax for the link in the Filter into Target Data Type field. For
example: Location = '%Facility%'.
• For a link by key, type the key expression in the Foreign Key Expression field. For example:
FirstName + ' ' + LastName.
• For a link by policy, select the linking policy from the Policy To Execute to Find Links list.
7. Click OK and click Save on the main tab to implement the changes.
Example filter expressions:
Location = '%Name%'
(NodeID = %ID%) AND (Location = '%Name%')
Procedure
1. Click New Link by Filter.
2. Enter the information in the New Link By Filter window.
a) Select the Target data type from the list.
b) In the Exposed Link Type menu, select a link to follow from the list. The list shows the target data
type name (in other words, the exposed link) and the link type data items that match this source and target.
See “LinkType data types” on page 95.
c) In the Filter into Target Data Type field, type a filter expression. A filter specifies which fields in the
source and target types must match in order for a link to exist. It can be either a simple expression
(source name = target name) or a complex expression that is defined by a Boolean operator that
indicates the order of the operation.
The link shows in the New Link By Filter table in the Dynamic Links tab.
3. Click OK and click Save on the main tab to implement the changes.
Procedure
1. Click New Link by Key.
2. Enter the following information in the window.
a) Select the Target Data Type from the list.
For example, User.
b) In the Exposed Link Name field, select a link to follow from the list.
For example, User. The list shows the target data type name (in other words, the exposed link) and the
link type data items that match this source and target.
c) Type the foreign key expression in the Foreign Key Expression field.
For example: LastName + ", " + FirstName. For more information about foreign key
expression, see “Foreign key expressions” on page 106.
The new link shows as a row in the New Link By Key table in the Dynamic Links tab.
3. Click OK and click Save on the main tab to implement the changes.
The expression is applied to the field value pairs in the source data item. For example, if the source fields
are:
FirstName = 'John'
LastName = 'Doe'
the resulting value for the target Key field (Name in this case) is:
Doe, John
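The foreign key expression is plain string concatenation; here it is evaluated in JavaScript for illustration (the Impact Server evaluates the IPL expression itself):

```javascript
// Apply the expression LastName + ", " + FirstName to the example
// source fields. The result is matched against the target Key field.
const source = { FirstName: "John", LastName: "Doe" };
const foreignKey = source.LastName + ", " + source.FirstName;
```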
Procedure
1. Click New Link by Policy.
The New Link By Policy window opens.
2. Enter the following information in the window
a) Select the Target Data Type from the list.
For example, LinkPolicy.
b) Select a link from the Exposed Link Type list.
For example, LinkPolicy. The list shows the target data type name (in other words, the exposed link) and
the link type data items that match this source and target.
c) Select a policy from the list of available policies.
For example, GetPolicy.
The new link appears as a row in the table in the Dynamic Links tab.
3. Click OK and click Save on the main tab to implement the changes.
Procedure
1. To edit a link, click the Edit icon in the row of the link that you want to edit.
2. Make any necessary changes. Click OK and click Save on the main tab to implement the changes.
Procedure
1. Click Data Model to open the Data Model tab.
2. Expand the Data Source that contains the internal data type you want to link, right-click and select
View Data Items.
The Data Item editor opens in the main panel.
3. Click the Edit Links icon in the Edit Links column next to one of the data item rows.
The Link Editor tab opens.
4. Select Target Type of Linked Items from the selection list.
Only Internal and Predefined data types show in the list.
5. To add a link, highlight the data items that you want to link in the Unlinked Data Items list and
click Add.
The items move to the Linked Data Items and LinkTypes list.
6. To remove a link, highlight the data items that you want to remove from the Linked Data Items list and
click Remove.
The data items are returned to the Unlinked Data Items list.
7. Click Save and then the Back icon to return to the data item list.
Policies overview
Policies consist of a series of function calls that manipulate events and data from your supported data
sources.
A policy, for example, can contain a set of instructions to automate alert management tasks, defining the
conditions for sending an e-mail to an administrator, or sending instructions to the ObjectServer to clear
an event.
You use the policy editor to create, manipulate, save, delete and edit policies. You can create new policies
from scratch, or use a policy wizard. Policy wizards present a series of windows that help you through the
policy creation process.
Accessing policies
Use this procedure to view, edit and delete policies.
Procedure
1. Click Policies to open the Policies tab.
a) From the Cluster and Project lists, select the cluster and project you want to use.
The list of policies is displayed.
2. To edit a policy, in the Policies tab, select a policy name in the list.
a) Right-click the policy and select Edit or click the Edit icon in the toolbar.
3. To delete a policy, select the policy in the policies pane and click the Delete Policy icon in the toolbar.
a) You can also delete a policy by right-clicking its name in the policies pane and selecting Delete in
the menu.
Icon Description
Click the New Policy icon to create an IPL policy. To create a policy using JavaScript, select
the JavaScript Policy option. To create a policy using one of the policy wizards, select Use
Wizard.
Remember: If you use UTF-8 characters in the policy name, make sure that the locale on the
Impact Server where the policy is saved is set to the UTF-8 character encoding.
Select a policy and use this icon to edit it. Alternatively, you can edit a policy by right-clicking its
name and selecting Edit in the menu.
Select a policy and use this icon to delete it from the database. Alternatively, you can delete a
policy by right-clicking its name and selecting Delete in the menu.
Click the icon to open a window where you can recover an auto-saved policy.
When the Enable Autosave option is selected, a temporary copy of the policy that you are
working on is saved periodically. This feature preserves your work in the event of a session timeout,
browser crash, or other accident. Automatically saved policies are not shown in the policies
navigation panel and are not replicated among clusters or during import. You must first recover and save
the drafted policy before you run it. For more information about recovering auto-saved policies,
see “Recovering automatically saved policies” on page 113.
Upload a Policy File. Click the icon to open the Upload a Policy window. You can upload
policy and policy parameters files that you wrote in an external editor or files that you created
previously.
This icon is visible when a policy is locked, or the item is being used by another user. Hover
the mouse over the locked item to see which user is working on the item. You can unlock
your own items but not items locked by other users. If you have an item open for editing you
cannot unlock it. Save and close the item. To unlock an item that you have locked, right-click the
item name and select Unlock. Users who are assigned the impactAdminUser role are the only
users who can unlock items that are locked by another user in exceptional circumstances.
Writing policies
You write policies in the policy editor by using one of the following methods.
• You can write them from scratch with IPL or JavaScript. In the Policies tab, select New Policy > IPL
Policy or New Policy > JavaScript Policy.
• You can use a policy wizard. For more information, see “Policy wizards” on page 110.
Policy wizards
You use policy wizards to create simple policies without having to manually create data types and add
functions.
The wizards consist of a series of windows that guide you through the policy creation process. At the end
of the process, you can run the policy immediately without any further modification. However, if you want
to modify the policy at any time, you can do so using the Policy editor.
XML
XML policies are used to read and to extract data from any well-formed XML document.
Web Services
Web Services DSA policies are used to exchange data with external systems, devices, and applications
using Web Services interfaces.
Procedure
1. In the Policies tab, select the arrow next to the New Policy icon. To run the Web services wizard,
select Use Wizard > Web Services.
2. In the Web Services Invocation-Introduction window, type your policy name in the Policy Name
field. Click Next to continue.
3. In the Web Services Invocation-WSDL File and Jar File window, in the URL or Path to WSDL field,
enter the URL or a path for the target WSDL file.
Example: http://www.webservicex.net/stockquote.asmx?wsdl
XML policies
XML policies are used to read and to extract data from any well-formed XML document.
The XML DSA can read XML data from files, from strings, and from HTTP servers via the network (XML
over HTTP). The HTTP methods are GET and POST. GET is selected by default. In the XML wizard you
can specify the target XML source and the schema file, to create the corresponding data source and data
types for users. The wizard also updates the necessary property files and creates a sample policy to help
you start working with XML DSA. When you choose the XML String option in the XML DSA wizard, ensure that
the XML string that you copy and paste does not contain references to stylesheet-related tags.
Procedure
1. In the Policies tab, click the Auto-Save version icon in the toolbar.
2. Choose one auto-saved policy from the Drafted Policy list.
3. Click Open to view the drafted policy in the editor.
4. Click Save to save the drafted policy.
Icon Description
Restore your work to its state before your last action, for example, adding
text, moving, or deleting. Undo works for one level only.
Restore your work to its state before you selected the Undo action. Redo
works for one level only.
Use this icon to paste cut or copied text to a new location. In some
instances, due to browser limitations, the Paste icon cannot be activated.
Use the keyboard shortcut Ctrl+V instead.
To copy and paste rich text formatted content, for example from a web
page or document file:
1. Paste the content into a plain text editor first to remove the rich text
formatting.
2. Copy the content from the plain text editor into the policy editor.
Use this icon to find and replace text in a policy. Search for a text string.
Type the text that you want to find, choose if you want to run a case-
sensitive search, and choose the direction of the search.
Search for text and replace it with a text you specify. Type the text that you
want to search for. Type the replacement text. Choose if you want to run a
case-sensitive search, and choose the direction of the search.
Click the Go To icon to show a Go To Line field in the policy editor. Type the
number of the line you want the cursor to go to. Click Go.
Access a list of data types. The Data Type Browser icon simplifies policy
development by showing available data types and details including field
name and type information. You do not have to open the data type viewer
to get the data type information.
The Check Syntax icon checks the policy for syntax errors. If there are
errors, the error message locates the error by the line number. If there are
no errors, a message to that effect is shown.
Click the Run Policy icon to start the policy. After removing all syntax
errors, you can run the policy to ensure that it produces the result you
wanted. To run your policy with additional parameters, use the Run with
Parameters option. You can use this option after you configure policy
settings for your policy.
Use this icon to configure settings for the policy. For more information, see
“Configuring policy settings in the policy editor” on page 117.
Click the View Version History icon to view the history of changes made to
policies, and compare different versions of policies. For more information
about version history interface, see “Using version control interface” on
page 127.
Important:
The View Version History icon is disabled for new and drafted policies and
becomes active after the policy is committed to the server.
This option is supported only with the embedded SVN version control
system.
Click this icon to view the policy logs in the log viewer. For more
information about the policy log viewer, see “Services log viewer” on page
135.
Click this icon to manually enable or disable the syntax highlighter. For
information about automatically configuring the syntax highlighter, see
“Policy syntax highlighter” on page 115.
If the checker finds errors, you will see a table listing all the errors that were found.
The Type column of the table contains an error indicator, either Warning or Error.
The Line column of the table contains the line number where the error occurred. To find the error, click
the line number. The editor scrolls to that line in the script.
The Message column of the table outlines the error.
Procedure
1. Open a policy, in the policy editor toolbar, click the toggle icon to manually enable or disable the syntax
highlighter.
2. The syntax highlighter can be configured to automatically toggle itself off at startup when the policy
exceeds a specified character limit.
Optimizing policies
After you create your policy, you can check to see whether there is a way to improve it.
Procedure
1. Click the Optimize icon.
The Optimization handles three functions:
• Hibernate
• GetByKey
• GetByFilter
For the Hibernate function, the optimization checks that you have a
RemoveHibernation function with the same hibernation key and notifies you if you do not. For the
GetByKey and GetByFilter functions, the optimization checks which fields are
returned from the data type. It then checks the policy to see whether all of those fields are used. If some
of the fields from the data type are not used, you receive a message listing the fields that are not
used. You can change the data type fields if required.
2. Click Save to implement any changes.
When you change a policy and you want to click Optimize again you must save the policy first. The
optimize feature works from the saved version and not the modified version.
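The unused-field check can be approximated like this (a deliberately simplified model with invented names; the real optimizer analyzes the saved policy, as noted above):

```javascript
// Report data type fields that a policy never references, mirroring the
// GetByKey/GetByFilter optimization check. A plain substring match is a
// simplification of what the optimizer actually does.
function findUnusedFields(dataTypeFields, policyText) {
  return dataTypeFields.filter((field) => !policyText.includes(field));
}

const fields = ["id", "t_DisplayName", "totalMemory", "Location"];
const policyText =
  'Log(MyFilteredItems[0].id + " " + MyFilteredItems[0].t_DisplayName);';
const unused = findUnusedFields(fields, policyText);
```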
Procedure
1. Click the Run with Parameters icon to open the Policy Trigger window.
Note: The fields that you see depend on the policy parameters and values that you specified for the policy. If
you have not set a default value for a parameter, you must provide it now; otherwise, a NULL value is
passed.
Output parameters are required if you want to show policy output through a UI data provider. For more
information about setting parameters, see “Configuring policy settings in the policy editor” on page
117.
2. Click Execute to run the policy with parameters.
Procedure
1. Click the Types Browser icon.
2. Click a data type to see the details. The Data Type Detail window opens and shows the details.
Procedure
1. In the policy editor toolbar, click the Configure Policy Settings icon to open the policy settings editor.
You can create policy input and output parameters and also configure actions on the policy that relate
to UI Data Provider and Event Isolation and Correlation options.
2. Click New to open the Create a New Policy Input Parameter window or the Create a New Policy
Output Parameter window or the Create New policy action window as required.
For more information, see “Configuring policy parameters and enabling actions” on page 117.
Enter the information in the configuration window. Required fields are marked with an asterisk (*). If
you select DirectSQL as the format, see “Creating custom schema values for output parameters” on
page 118.
3. To edit an existing input or output parameter, select the check box next to the parameter and click
Edit in the corresponding cell of the Edit column.
4. To enable a policy to run with a UI data provider, select the Enable policy for UI Data Provider
Actions check box.
5. To enable a policy to run with the Event Isolation and Correlation capabilities, select the Enable
Policy for Event Isolation and Correlation Actions check box.
6. Click OK to save the changes to the parameters and close the window.
Procedure
1. In the Policy Input Parameters section, click New to create a policy input parameter.
a) In the Name field, type a name to describe the parameter.
b) In the Label field, add a label. The label is displayed in the Policy Trigger window.
c) From the Format menu, select the format of the parameter.
d) In the Default Value field, add a default value. This value is displayed in the Policy Trigger window.
e) In the Description field, add a description for the parameter.
2. In the Policy Output Parameters section, click New to create a policy output parameter.
Tip: When you create multiple output parameters, remember that each policy output parameter that you
create generates its own data set. When you assign a data set to a widget, only those tasks that are
associated with the specific output parameter are run.
a) In the Name field, type a name to describe the parameter.
b) In the Policy Variable Name field, add the variable name. The variable name is displayed in the
Policy Trigger window.
c) From the Format menu, select the format of the parameter.
If you use the DirectSQL policy function with the UI data provider or OSLC, you must define a custom
schema value for each DirectSQL value that you use.
If you want to use the chart widget to visualize data from an Impact object or an array of Impact objects
with the UI data provider and the console, you define custom schema values for the fields that are
used by the chart. For example, a policy might assign the following values:
O1.city="NY"
O1.ZIP=07002
You define the following custom schema values for this policy:
Procedure
1. In the Policy Settings Editor, select DirectSQL, Impact Object, or Array of Impact Object in the
Format field.
2. The system shows the Open the Schema Definition Editor icon beside the Schema Definition
field. To open the editor, click the icon.
3. You can edit an existing entry or you can create a new one. To define a new entry, click New. Enter a
name and select an appropriate format.
To edit an existing entry, click the Edit icon beside the entry that you want to edit.
4. To mark an entry as a key field, select the check box in the Key Field column. You do not have to define
the key field for Impact objects or an array of Impact objects. The system uses the UIObjectId as the
key field instead.
5. To delete an entry, select the entry and click Delete.
Procedure
1. Click the Insert function icon and select one of the functions.
2. Enter the required parameters in the new function configuration window.
Note: When you enter a string, check that all string literals are enclosed in quotation marks ("string"), to
distinguish them from variable names, which do not take quotation marks.
For the beta policy editor, you can also access the auto-complete tool which provides suggestions
based on the current context. When working inside the policy document, press Control+Space to
access the tool. The auto-complete Control+Space shortcut key may conflict with other operating
system shortcuts. If you have such a conflict, consider changing the keyboard shortcut for the
command.
GetServerVar (Variables): Use this function to retrieve the global value
that was saved by a previous call to SetServerVar.
Procedure
1. Open the $IMPACT_HOME/wlp/usr/servers/ImpactUI/apps/ImpactUI.ear/impactAdmin.war/scripts/impactdojo/ibm/tivoli/impact/editor/themes/PolicyEditor.css file in a text
editor.
2. Update the values of the following entries with your own values:
• font-family
• font-size
• line-height
3. Refresh the browser to apply the changes.
It is also recommended to clear the browser cache.
Procedure
1. Open a policy in the policy editor.
2. Click the View Version History icon in the policy editor toolbar to open the version control interface.
You see the following columns:
Column Description
Author The user ID of the user who is logged in to the Impact Server.
Comments Shows any comments and the user ID of the user who submitted them.
Uploading policies
You can upload policies and policy parameters files that you wrote previously to the Impact Server.
Procedure
1. In the Policies tab, from the policy menu, click the Upload a Policy File icon.
The Upload a Policy File window opens.
Policy Description
AddPolicyProcessMapping This policy is used in reports. You do not need to change this
policy.
DefaultExceptionHandler This policy is used to handle failed events if the policy failure
is not handled locally using the Exception Handler. You can
write your own policy if you need to. If you do not write your
own, the provided policy is used by default.
The DefaultExceptionHandler policy prints a log of the
events that failed to execute. To configure a customized error
handling policy, see “Configuring the Policy logger service”
on page 147.
DeployProject You can use this policy to copy the data sources, data types,
policies, and services in a project between two running
server clusters on a network. You can use this feature when
moving projects from test environments into real-world
production scenarios. For more information about automated
project deployment, see “Automated project deployment
feature” on page 8.
FailedEventExceptionHandler When errors occur during the execution of a policy, the Policy
Logger service executes the appropriate error handling
policy, and temporarily stores the events as data items in
a predefined data type called FailedEvent.
FailedEvent is an internal data type and all data that
is stored internally consumes memory. When you have
resolved the reasons for the event failures, you can reduce
the amount of memory that is consumed by using one of the
following options:
• Reprocess the failed events by using the ReprocessFailedEvent policy.
• Delete the events from the FailedEvent data type.
See “FailedEvent data types” on page 96 for more
information.
XINY_DataType_PurgeData This policy is used to purge data items from data types that
are created by the XinY policy wizard. You can configure it to purge
data that is older than a certain number of days. The default
is 4 days.
Services overview
Services perform much of the functionality associated with the Impact Server, including monitoring event
sources, sending and receiving e-mail, and triggering policies.
The most important service is the OMNIbus event reader, which you can use to monitor an ObjectServer
for new, updated, or deleted events. The event processor, which processes the events retrieved from the
readers and listeners, is also important to the function of Netcool/Impact.
Internal services control the application's standard processes, and coordinate the performed tasks, for
example:
• Receiving events from the ObjectServer and other external databases
• Executing policies
• Responding to and prioritizing alerts
• Sending and receiving e-mail and instant messages
• Handling errors
Some internal services have defaults that you can enable rather than, or in addition to, creating your
own services. For some of the basic internal services, it is only necessary to specify
whether to write the service log to a file. For other services, you need to add information such as the port,
host, and startup data.
User defined services are services that you can create for use with a specific policy.
Generally, you set up services once, when you first design your solution. After that, you do not need to
actively manage the services unless you change the solution design.
To set up services, you must first determine what service functionality you need to use in your solution.
Then, you create and configure the required services using the GUI. After you have set up the services,
you can start and stop them, and manage the service logs.
Creating services
How to create a user-defined service.
Procedure
1. Click Services to open the Services tab.
2. From the Cluster and Projects lists, select the cluster and project you want to use.
A list of services that are related to the selected project is displayed.
3. In the Services tab, click the Create New Service icon.
4. From the menu, select a template for the service that you want to create.
5. In the service configuration tab, provide the necessary information to create the service.
6. Click the Save Service icon.
• To edit a service, you can double-click the service, or right-click on the service and select Edit.
Make the necessary changes to the service. Click Save to implement the changes.
Important: You can create a user-defined service by using the defaults that are stored in the
Global project.
Element Description
Click the Create New Service icon to create a user-defined service using one of the available
service templates.
Click the Edit Service icon to edit an existing service using one of the available service
templates. You can also double-click the service to open it for editing.
Click the View Service Log icon to access the log for the selected service. You can also view
the log for a selected service by right-clicking its name and selecting View Log.
Select a stopped service and click the Start Service icon to start it. Alternatively, you can start
a service by right-clicking its name and selecting Start.
Select a running service and click the Stop Service icon to stop it. Alternatively, you can stop
a service by right-clicking its name and selecting Stop.
Click the Delete Service icon to delete a user-defined service. Alternatively, you can delete a
user-defined service by right-clicking its name and selecting Delete.
Important: You cannot delete a running service, you must stop it first.
This indicator next to a service name indicates that the service is running.
This indicator next to a service name indicates that the service is stopped.
Source control locking for the service. This icon is visible when the service is locked or the
item is being used by another user. Hover the mouse over the locked item to see which user
is working on it. You can unlock your own items, but not items locked by other users. If
you have an item open for editing, you cannot unlock it; save and close the item first.
To unlock an item that you have locked, click the Unlock Service icon. You can also unlock the
service by right-clicking the item name and selecting Unlock.
Users who are assigned the impactAdminUser role are the only users who can unlock items
that are locked by another user in exceptional circumstances.
List of services
A list of internal and user-defined Netcool/Impact services.
Personalizing services
You can change the refresh period for the services tab.
Procedure
1. Click Options from the main menu, then click Preferences to open the Preferences dialog box.
2. Select the options that you want to personalize.
• Select the Enable auto refresh check box to automatically refresh the services.
• Select the Refresh interval period. The services are automatically refreshed at the time interval
that you select.
3. Click Save.
Procedure
• To start a service, select the service in the services pane and click Start. You can also start a service by
right-clicking its name in the services pane and selecting Start in the menu.
• To stop a service, select the service in the services pane and click Stop. You can also stop a service by
right-clicking its name in the services pane and selecting Stop in the menu.
Note: Service status is not replicated between cluster members. If you start or stop a service on the
primary cluster member, it will not start or stop the same service on a secondary cluster member.
Procedure
• Select a service in the services tab and click View Service Log.
• You can also view a service log by right clicking the service name in the services pane and selecting
View Log in the menu.
Procedure
1. If you want to view more service logs, click the New Tab option to display the Log name dialog.
2. Type the name of the new tab, then click OK to create the new tab in the Log viewer window.
3. Populate the fields in the tab to run the service log. For more information, see “Services log viewer” on
page 135.
4. As you create more tabs and view results, you can move from one tab to another by clicking the
tab headings at the top of the window. For more information, see “Service log viewer results” on page
136.
Event mapping
Event mapping allows you to map incoming events to one or more specific policies.
You can configure a reader service to test incoming events against one or more event filters. If a match is
found, the reader will execute the associated policy.
Procedure
1. Edit the service, click the Event Mapping tab then click New Mapping to open the Create a New
Event Filter window.
2. Provide the required information to create the filter.
This filter specifies the type of event that maps to the policy. For information about the filter
configuration options, see “Configuring an event filter” on page 137.
3. From the Policy to Run list, select the policy that you want to run for the event type.
4. Click Active.
5. Click OK. The service configuration window is refreshed, and the new filter is shown in the table.
DepLoc = "London"
Example 2
The following example demonstrates how to test a filter expression against events retrieved with
GetByFilter.
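A minimal sketch of such a test in IPL (this is an illustrative assumption, not the shipped example; the OS_NCOMS data type name is assumed, and DepLoc is the field used in the filter above):

```
// Illustrative sketch: test the filter expression by retrieving matching
// events from an ObjectServer data type with GetByFilter.
// GetByFilter(DataType, Filter, CountOnly)
MyEvents = GetByFilter("OS_NCOMS", "DepLoc = 'London'", false);
Log("Number of events matching the filter: " + Length(MyEvents));
```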
Consolidating filters
When impact.analyzer.consolidatefilters=true is set, Impact will attempt to consolidate the
filters for all Event Reader services.
When consolidating filters, Impact produces a single expression that corresponds to all currently
configured active event filters. In other words, Impact creates one filter that incorporates all active
filters. Duplicate filter expressions are merged, and redundant or invalid expressions are removed. An
example of an invalid expression is 1=2, which matches nothing; it is removed from the expression
used to select events.
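For illustration, consider the following set of active filters (the exact form of the consolidated expression is an assumption for this sketch, not taken from the product documentation):

```
-- Active filters configured across Event Reader services:
--   Severity = 5
--   Severity = 5          (duplicate, merged)
--   Node = 'router1'
--   1=2                   (invalid, removed)
-- A consolidated selection filter would then be equivalent to:
(Severity = 5) OR (Node = 'router1')
```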
Reader-specific property
To override the setting for specific Event Readers, set the following property in the Event Reader's
property file:
impact.<event reader>.consolidatefilters=false
For example, for TBSMOMNIbusEventReader, the .props file should include the following property:
impact.tbsmomnibuseventreader.consolidatefilters=false
Check that the impactserver.log file does not contain the following entry:
If these error messages are shown, either adjust the filter expression (if possible) or set the
reader's consolidatefilters property to false by using the .props file. This change requires
a server restart.
Select: When you hover the mouse over the word all, the word becomes
underlined as a link.
• Click all to select all the rows of filters. You can then click Delete at
the bottom of the list to delete all the previously defined filters.
• Click all again to clear all the rows of filters.
Policy Name Contains the name of the policy that triggers when the event matches
the restriction filter.
Active Select Active to activate the filter or clear to deactivate the filter.
Chain When chaining policies, select the Chain option for each event mapping
that associates a restriction filter with a policy name. For more
information, see the Policy Reference Guide.
Move Use the arrows to change the position of the filters in the table. The
order of the filters is only important when you select to stop testing
after the first match.
Procedure
1. Locate the filter in the table and click Edit to open the Edit Event Filter window.
2. Edit the filter text and select a policy to run, as necessary.
3. Click OK to save the information and close the window.
The filter in the table in the Event Mapping tab shows your edits. Restart the service to implement the
changes.
4. You can adjust the order of the filters. The order of the filters depends on which Event Matching option
you select.
• When you select the Stop testing after first match option, Netcool/Impact checks an incoming
event against the filters in the order they are listed in the table until it gets a single match. It does
not continue checking after it finds the first match.
• When you select Test event with all filters, the order is not important.
5. To delete a filter, in the Select: column, select the filters that you want to delete. (Click the All link to
select all the filters in the table.) Click the Delete link.
Table 96. Event mapping settings for database event listener service configuration window
Window element Description
Test events with all filters Click this button if, when an event matches more
than one filter, you want to trigger all policies that
match the filtering criteria.
Stop testing after first match Click this button if you want to trigger only the first
matching policy.
You can choose to test events with all filters and
run any matching policies or to stop testing after
the first matching policy.
New Mapping: New Click the New button to create an event filter.
Analyze Event Mapping Table Click this icon to view any conflicts with filter
mappings that you set for this service.
Starts automatically when server starts Select to automatically start the service when the
server starts. You can also start and stop the
service from the GUI.
Minimum Number of Threads Set the minimum number of processing threads that
can run policies at one time.
Maximum Number of Threads Set the maximum number of threads that can run
policies at one time.
Processing Throughput: Maximize If you set this property, the event processor tries to
get the maximum performance out of the threads. This
can result in high CPU usage. When you leave this field
cleared, it runs conservatively at around 80% of peak
performance.
Tuning configuration: Maintain on Restart If you set this option, each time the event processor
is started, it uses the same number of threads
it had adjusted to in the earlier run. This feature
is useful in cases where the environment where
Netcool/Impact runs has not changed much from
the previous run. The event processor can start with
the maximum throughput immediately, rather than
engaging in repeated tuning to reach the maximum.
Clear Queue Click this icon to enable the event processor to delete
unprocessed events that it has fetched from one or
more event sources.
Polling Interval Select a polling time interval (in seconds) to establish how often
you want the service to check hibernating policies to see whether
they are due to be woken up. The default value is 30 seconds.
Process wakes up immediately Select to run the policy immediately after wake-up. The wakeup
interval is the interval in seconds at which the hibernating
policy activator checks hibernating policies in the internal data
repository to see if they are ready to be woken.
Starts automatically when server starts Select to automatically start the service when the server starts.
You can also start and stop the service from the GUI.
Clear All Hibernations: Clear Should it become necessary, click to clear all hibernating policies
from the Impact Server.
Error-handling Policy The error handling policy is the policy that is run by default when
an error is not handled by an error handler within the policy
where the error occurred.
Note: If you have a Policy Activator service and
you want it to utilize a default exception handler
policy, you must specify the following property
in the <servername>_<activatorservicename>.props file:
impact.<activatorservicename>.errorhandlername=<policy name to run>
Highest Log Level You can specify a log level for messages that you print to the
policy log from within a policy by using the Log function.
When a log() statement in a policy is processed, the specified
log level is evaluated against the number that you select for this
field. If the level specified in this field is greater than or equal to
the level specified in the policy log() statement, the message
is recorded in the policy log.
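As a sketch of how this comparison works in IPL (illustrative only; it assumes that Highest Log Level is set to 2 and that the optional numeric level argument to Log is used):

```
// With Highest Log Level set to 2 in the service configuration:
Log(1, "level 1 message");  // 1 <= 2, written to the policy log
Log(2, "level 2 message");  // 2 <= 2, written to the policy log
Log(3, "level 3 message");  // 3 > 2, not recorded
```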
Warning: Setting Highest Log Level to 3 has the
potential to cause a major load on the system, especially
if you have the NOI Extensions installed. This can
include 100% CPU usage. Log level should only be
increased on a temporary basis and should be reverted
to 0 when debug is complete.
Policy Profiling: Enable Select to enable policy profiling. Policy profiling calculates the
total time that it takes to run a policy and prints this time to the
policy log.
You can use this feature to see how long it takes to process
variable assignments and functions. You can also see how long it
takes to process an entire function and the entire policy.
Append Thread Name to Log File Name Select this option to name the log file by appending the name of
the thread to the default log file name.
Append Policy Name to Log File Name Select this option to name the log file by appending the name of
the policy to the default log file name.
Collect Reports Select to enable data collection for the Policy Reports.
If you choose to enable the Collect Reports option, reporting
related logs are written to the policy logger file only when the log
level is set to 3.
To see reporting related logs for a less detailed logging
level, for example log level 1, the
$IMPACT_HOME/etc/<servername>_policylogger.props file can be customized
by completing the following steps:
1. Add impact.policylogger.reportloglevel=1 to the
$IMPACT_HOME/etc/<servername>_policylogger.props file.
2. Restart the Impact Server to implement the change.
• If you selected to create log files on a per-thread basis, the name of the thread is appended to
the log file name, where HttpProcessor[5104] [2] is an example of the name of the event processor
thread where the policy is running on a Red Hat Linux system.
• If you selected to create log files on a per policy per thread basis, the log file name might be:
servername_Policy_01_policylogger.log
Procedure
1. In the PolicyLogger Service Configuration window, click the Service Log: Write to File option.
2. Select either the Append Thread Name to Log File Name option or the Append Policy Name to Log
File Name option, or both.
Procedure
1. Enter the required information in the service configuration window and save the configuration.
For information about the configuration options, see “ITNM event listener service configuration
window” on page 150.
2. Before you start the event listener service, first stop all ITNM and rvd processes and enter the
command:
$ITNM_HOME/bin/rvd -flavor
3. Restart ITNM.
4. Make sure that the ITNM event listener service is started so that you can receive events from ITNM.
(You have the option to have it start automatically when Netcool/Impact starts.)
Policy to Execute Select the policy to run when an event is received from the ITNM
application. You can use the ITNMSampleListenerPolicy that was
installed when you installed Netcool/Impact to help you understand the
event listener functionality.
com.micromuse.dsa.precisiondsa.PrecisionEventFeedSource
Note: Copy this class name exactly as it is written here, with no extra
spaces.
Direct Mode Source Name Type a unique name that identifies the data source, for example,
ITNMServer.
Starts automatically when server starts Select to automatically start the service when the server starts.
You can also start and stop the service from the GUI.
Procedure
1. Click Services to open the Services tab. Select the ImpactDatabase service.
2. The ImpactDatabase service uses ImpactDB data source configuration settings, by default.
To change the port or any other configuration settings, you must stop the service and then edit the
ImpactDB data source.
3. Enter the replication port for the Derby backup host in the Replication Port field.
4. Starts automatically when server starts: select to automatically start the service when the server
starts. You can also start and stop the service from the GUI.
5. Service log (Write to file): Select to write log information to a file.
6. Click Save to implement the changes.
ObjectServer Data Source Select the ObjectServer that you want to use to send events.
Memory Status: Enable Select to send status events regarding memory usage of the Impact
Server.
Memory Interval Select or type (in seconds) how often the service must send memory
status events to the ObjectServer.
Queue Status: Enable Select to enable the service to send events about the status of the
event readers, listeners and EventProcessor.
Queue Interval Select or type (in seconds) how often the service must send queue
status events.
Cluster Status: Enable Select to enable the service to send events about the status of the
cluster to which it belongs. It sends events when:
• An Impact Server is started and joins the cluster
• A server is stopped and removed from the cluster
• A primary server is down and a secondary server becomes the new
primary
Data Source Status: Enable Select to enable the service to send the status when certain
conditions occur with a data source.
For example, the service sends a status message when a user tests
a connection to a data source or when a connection cannot be
established.
Service Status: Enable To enable service monitoring, select this check box and start the self-
monitoring service. The self-monitoring service sends service status
events to the ObjectServer.
Starts automatically when server starts Select to automatically start the service when the server starts.
You can also start and stop the service from the GUI.
Procedure
1. Select the project for which you want to create the service.
2. From the Service Type list, select DatabaseEventReader to open the service configuration window.
The DatabaseEventReader Configuration window has two tabs, General Settings and Event Mapping.
3. Enter the required information in the General settings tab of the configuration window.
For information about general settings options, see “Database event reader configuration window -
general settings” on page 154.
4. Enter the required information in the Event Mapping tab of the configuration window.
For information about event mapping options, see “Event mapping” on page 137.
Note: If a service uses a Data Source for which the IP address or hostname has changed, you need to
restart the service.
Table 102. Database event reader configuration window - General Settings tab
Data Type After you select a data source, the system populates the data type field
with a list of data types created in Netcool/Impact corresponding to that
particular data source. Select a data type from the list.
Polling Interval Select or enter a polling time interval to establish how often you want the
service to poll the events in the event source. The polling time selections
are in milliseconds, and the default value is 3000 milliseconds.
Restrict fields Click Fields to access a selection list with all the fields that are available
from the selected data source.
You can reduce the size of the query by selecting only the fields that you
need to access in your policy.
Starts automatically when server starts Select to automatically start the service when the server starts.
You can also start and stop the service from the GUI.
Clear State When you click Clear, the internally stored values for the Key field and
the Timestamp field are reset to 0. This causes the event reader to retrieve
all events in the data source at startup and place them in the event queue
for processing.
You can only use Clear State to clear the event reader state when the
service is stopped. Clicking Clear while the service is running does not
change the state of the event reader.
Clear Queue Click Clear to enable the database event reader to delete unprocessed
events that it has fetched from an SQL data source.
Test events with all filters If an event matches more than one filter, trigger all policies that match
the filtering criteria.
Stop testing after first match Select this option to trigger only the first matching policy.
Actions: Get updated events Select to receive events that have been updated (all new events are
automatically sent).
Time Stamp Field If the database event reader is configured to get updated events, both
the TimeStamp field and the Key field must be configured correctly.
• The TimeStamp field must point to a column in the external database
table that is automatically populated with a timestamp when an insert
or update occurs.
• The Key field must point to a column which uniquely identifies a row
(it does not have to be an automatically incremented field).
If the date/time format of the timestamp field in the external database
is different from the default pattern of dd-MMM-yy hh.mm.ss.SSS, a
property named formatpattern must be added to the database event
reader properties file to match the date/time format.
Example:
impact.[DatabaseEventReaderName].formatpattern=dd-MMM-yy hh.mm.ss.SSS aaa
When the Get updated events checkbox is not selected, the
TimeStamp field does not have to be configured, but the Key field must
in this case be an automatically incremented numeric field.
Note: The Database Event Reader supports a TimeStamp database
field in UNIX Epoch format. The following property must be added to
the database event reader properties file:
impact.[DatabaseEventReaderName].formatpattern=epoch
Stop the database event reader service from the GUI and click the
Clear State button.
Add the property and restart Impact for the new property to take effect.
Analyze Event Mapping Table Click this icon to display any conflicts with filter mappings that you have
set for this service.
Procedure
1. Edit the database event reader properties file $IMPACT_HOME/etc/<server name>_<database
reader name>.props.
2. In the event reader properties file, add or update the following property, entering a value for
<number of rows to return>:
impact.<database readername lower case>.maxtoreadperquery=<number of rows to return>
If the value is set to 0, the select query returns all the rows at one time:
impact.<database readername lower case>.maxtoreadperquery=0
If the value is set to 1000, the select query returns 1000 rows at one time:
impact.<database readername lower case>.maxtoreadperquery=1000
Example MySQL
maxtoreadperquery > 0 : SELECT * FROM CUSTOMERS WHERE NAME LIKE 'IBM' LIMIT 1000;
maxtoreadperquery = 0: SELECT * FROM CUSTOMERS WHERE NAME LIKE 'IBM';
3. Restart the Impact server. For more information about restarting the Impact server, see Stopping and
starting the Impact Server in the Administration Guide.
<emailreadername>.deleteonread=false
Where <emailreadername> is the email reader service name. Restart the service. This only works for
IMAP email servers.
Protocol: Select one of the following options from the drop-down menu: POP3 or
IMAP.
Port: Select the port to connect to the mail server. The default POP3 port is
110. The default IMAP port is 143.
Log in As: Type a login name. The default value is the value that you use to log on
to Netcool/Impact.
Password: Type your password. The letters that you type are replaced with
asterisks.
Polling Interval: Select how often (in seconds) the service polls the POP or IMAP host for
new email messages.
Email Body (ignore) The email reader processes the body of the email as if it were a policy.
If the body of the email is in IPL syntax, then when the email is
received, the contents of the body are run as a policy. The policy that
is associated with the Email Reader service runs separately. Select this
check box if you do not want to run the contents of the email as a policy.
Restart the service to implement the changes.
Starts automatically when server starts Select to automatically start the service when the server starts.
You can also start and stop the service from the GUI.
SSL Select the SSL check box for an SSL connection to the mail server. Next,
refer to the Security>Enabling SSL connections with external servers
section of the documentation to complete the SSL certificate import.
Note: By default, SSL connections from Netcool/Impact to mail servers
use the most secure protocol supported by the mail server. However,
you can use any version of the TLS protocol. This applies to both email
reader and email sender services in Netcool/Impact. If you want to
restrict which protocols are enabled by Impact for SSL connections
to a mail server, you can add a property to the service properties file
called impact.<service_name>.secureprotocols. The value of
this property can be a comma-separated list of allowed protocols, for
example TLSv1.1,TLSv1.2 or just TLSv1.2.
OAUTH DataSource Name Select the name of the OAuth data source from the OAUTH DataSource
Name drop-down menu.
Note: You have to create the data source before you can select it here.
See “Creating an OAuth data source” on page 61.
When you create a table, add a data type that points to the table and call it email_auth. Check the name
field as the key for the data type. Insert some sample data into the table for testing purposes. This table
must contain records only for authorized email addresses.
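A minimal sketch of such a table in generic SQL (an illustrative assumption, not taken from the product documentation; the column name matches the name key field of the email_auth data type, and the sample address is hypothetical):

```sql
-- Hypothetical table backing the email_auth data type.
CREATE TABLE email_auth (
    name VARCHAR(128) NOT NULL PRIMARY KEY  -- authorized sender address
);

-- Sample data for testing: only authorized senders get a record.
INSERT INTO email_auth (name) VALUES ('operator@example.com');
```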
• A POP3 or IMAP email account must exist.
• You must know the POP3 or IMAP server, user name, and password for this account.
• A data type must exist for the alerts.status table of the ObjectServer that Netcool/Impact is reading
from. In this example, the data type is called OS_NCOMS.
• The email reader service must be configured; see “Configuring the email reader service” on page
156.
Example
This policy example uses IPL.
// Extract the command from the Subject. Commands should be preceded by cmd: to be
// treated as a command.
// Extract the name of the sender. This is used to determine if the user has the
// authority to query the Object Server. Build the filter for the lookup into the
// If the sender has authorization, evaluate the Subject of the email to see if
// it contains a valid command.
// If the sender is authorized (numAuth == 1), continue.
// If the word query is parsed out of the Subject at the beginning of the policy,
// check the body for a valid query.
// Strip out new line statements to get the body in one long string, then extract
// the query from the Body. Queries should be preceded by query: to be treated
// as a query. If the body contains critical, then query the Object Server
// for all events where Severity = 5.
// Format the current time, then build a message to send to the sender.
// If the command in the Subject is invalid, send an email notifying the sender.
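The commented outline above can be sketched in IPL as follows. This is an illustrative reconstruction, not the shipped sample policy: the email field names (Subject, Body, FromAddress), the extraction patterns, the sender address, and the exact SendEmail parameters are assumptions and should be verified against your release.

```
// Illustrative reconstruction; names and extraction details are assumptions.

// Extract the command from the Subject (commands are preceded by "cmd:").
Command = RExtract(EventContainer.Subject, "cmd:(.*)");

// Build the filter for the lookup into the authorization data type.
Sender = EventContainer.FromAddress;
Authorized = GetByFilter("email_auth", "name = '" + Sender + "'", false);
numAuth = Length(Authorized);

If (numAuth == 1) {
    // The sender is authorized; check the Body for a valid query.
    If (Command == "query") {
        // Extract the query from the Body (queries are preceded by "query:").
        Query = RExtract(EventContainer.Body, "query:(.*)");
        If (Query == "critical") {
            // Query the ObjectServer for all events where Severity = 5.
            Events = GetByFilter("OS_NCOMS", "Severity = 5", false);
            Message = "Number of critical events: " + Length(Events);
            // SendEmail parameters vary by version; verify against your release.
            SendEmail(Sender, "Query result", Message, "impact@example.com", false);
        }
    } Else {
        // The command in the Subject is invalid; notify the sender.
        SendEmail(Sender, "Invalid command", "Unknown command: " + Command,
                  "impact@example.com", false);
    }
}
```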
Policy to Execute Select the policy to run when an event is received from the database
server.
Name Service Port Provide the port over which the name service host is accessed.
Name Service Object Name Type in the name of the service object.
Direct Mode Class Name Type in the direct mode class name.
Direct Mode Source Name Provide a unique name that identifies the data source.
Starts automatically when server starts Select to automatically start the service when the server starts.
You can also start and stop the service from the GUI.
Policy To Execute Select the policy that you created to run in response to incoming
messages from the JMS service.
JMS Data Source JMS data source to use with the service.
You need an existing and valid JMS data source for the
JMS Message Listener service to establish a connection with
the JMS implementation and to receive messages. For more
information about creating JMS data sources, see “JMS data
source configuration properties” on page 66.
Message Selector The message selector is a filter string that defines which
messages cause Netcool/Impact to run the policy specified in the
service configuration. You must use the JMS message selector
syntax to specify this string. Message selector strings are similar in
syntax to the contents of an SQL WHERE clause, where message
properties replace the field names that you might use in an SQL
statement.
The content of the message selector depends on the types and
content of messages that you anticipate receiving with the JMS
message listener. For more information about message selectors,
see the JMS specification or the documentation distributed with
your JMS implementation. The message selector is an optional
property.
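For illustration, a selector that accepts only high-priority network alarm messages might look like the following. JMSPriority is a standard JMS header; the AlarmType and Severity properties are assumptions about the sending application:

```
JMSPriority > 6 AND AlarmType = 'NETWORK' AND Severity >= 4
```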
Durable Subscription: Enable You can configure the JMS message listener service to use
durable subscriptions for topics that allow the service to receive
messages when it does not have an active connection to the
JMS implementation. A durable subscription can have only one
active subscriber at a time. Only a JMS topic can have durable
subscriptions.
Note: Since a durable connection can have only one active
subscriber at a time, in a cluster configuration during failover and
failback, a delay/pause can be configured. The delay/pause allows
the service to shut down on the other cluster members during
failover/failback.
The delay/pause is configured in the jmslistener properties
file by using the durablejmspause property, for example:
impact.<jmslistenerservicename>.durablejmspause=30000
The durablejmspause property defines the time in
milliseconds, so the example value of 30000 defines a pause of
30 seconds.
Clear Queue Clear the messages waiting in the JMSMessageListener queue that
have not yet been picked up by the EventProcessor service. It is
recommended that you do not do this while the service is running.
Starts automatically when server Select to automatically start the service when the server starts.
starts You can also start and stop the service from the GUI.
Procedure
1. Click Services to open the Services tab.
2. If required, select a cluster from the Cluster list.
3. Click the Create New Service icon in the toolbar and select OMNIbusEventListener to open the
configuration window.
4. Enter the required information in the configuration window.
5. Click the Save icon in the toolbar to create the service.
6. Start the service to establish a connection to the ObjectServer and subscribe to one or more IDUC
channels to get notifications for inserts, updates, and deletes.
<taskdef name="impactHttp"
classname="com.ibm.tivoli.impact.install.taskdef.ImpactHttpUtils"
classpath="${impact.home}/install/configuration/cfg_scripts/taskdefs/install-taskdefs.jar"
onerror="report"/>
<target name="createService">
<!-- if you want to add without filters, add the following property:
"addWithoutFilters": "true" -->
<!-- For Filters: Modify the "items:" section in "EVENTMAPPINGS", with your policies and
filters -->
<property name="newService" value='
{"isNew": "true",
"GETUPDATEDEVENTSACTION": false,
"EVENTLOCKINGENABLED": false,
"GETDELETEDEVENTSACTION": false,
"RUNPOLICYONDELETESACTION": "AddPolicyProcessMapping",
"EVENTLOCKINGEXPRESSION": "",
"STARTUPENABLED": false,
"SERVICECLASS": "OMNIbusEventReader",
"EVENTMAPPINGS": {
"layout": [{
"encode": true,
"field": "RESTRICTIONFILTER",
"name": "Restriction Filter"
}, {
"field": "POLICYNAME",
"name": "Policy Name"
}, {
"field": "ACTIVE",
"name": "Active"
}, {
"field": "CHAIN",
"name": "Chain"
}],
"identifier": "id",
"label": "id",
"items": [
{
"CHAIN": false,
"ACTIVE": true,
"id": 1,
"POLICYNAME": "AddPolicyProcessMapping",
"RESTRICTIONFILTER": "1=1"
},
{
"CHAIN": false,
"ACTIVE": true,
"id": 2,
"POLICYNAME": "DefaultExceptionHandler",
"RESTRICTIONFILTER": "1=1"
}
]
},
"SELECTEDFIELDS": {
"identifier": "name",
"label": "name",
"items": [{
"name": "*"
}]
},
"SERVICENAME": "${service.name}",
"DATASOURCENAME": "defaultobjectserver",
"AVAILABLEFIELDS": {
"identifier": "name",
"label": "name",
</project>
2. Place the file in your home directory on the Impact server system.
3. Execute the following command from the $IMPACT_HOME/bin directory:
Data Source Select an OMNIbusObjectServer data source. The ObjectServer data source
represents the instance of the Netcool/OMNIbus ObjectServer that you
want to monitor using this service. You can use the default ObjectServer
data source that is created during the installation, defaultobjectserver.
Polling Interval The polling interval is the interval in milliseconds at which the event reader
polls the ObjectServer for new or updated events.
Select or type how often you want the service to poll the events in the
event source. If you leave this field empty, the event reader polls the
ObjectServer every 3 seconds (3000 milliseconds).
Restrict Fields You can complete this step after you have saved the
OMNIbusEventReader service. You can specify which event fields you
want to retrieve from the ObjectServer. By default, all fields in the alerts
are retrieved. To improve OMNIbus event reader performance and reduce
the performance impact on the ObjectServer, configure the event reader to
retrieve only those fields that are used in the corresponding policies.
Click the Fields button to access a list of all the fields available from the
selected ObjectServer data source.
You can reduce the size of the query by selecting only the fields that you
need to access in your policy. Click the Optimize List button to implement
the changes. The Optimize List button becomes enabled only when the
OMNIbusEventReader service has been saved.
Starts automatically when server starts Select to automatically start the service when the server starts.
You can also start and stop the service from the GUI.
Collect Reports Select to enable data collection for the Policy Reports.
Clear State: When you click the Clear State button, the Serial and StateChange information stored for the event reader is reset to 0. The event reader retrieves all events in the ObjectServer at startup and places them in the event queue for processing. If the event reader is configured to get updated events, it queries the ObjectServer for all events where StateChange >= 0. Otherwise, it queries the ObjectServer for events where Serial > 0. You can use the Clear State button only to clear the event reader state when the service is stopped. Clicking the button while the service is running does not change the state of the event reader.
Test events with all filters: Select this option to test events with all filters and run any matching policies. If an event matches more than one filter, all policies that match the filtering criteria are triggered.
Stop testing after first match: Select this option to stop testing after the first matching policy, and trigger only the first matching policy.
Get status events: Select to receive the status events that the Self Monitoring service inserts into the ObjectServer.
Run policy on deletes: Select if you want the event reader to receive notification when alerts are deleted from the ObjectServer. Then, from the Policy list, select the policy that you want to run when notification occurs.
Event Locking (Enable): Select if you want to use event order locking and type the locking expression in the Expression field. Event locking allows a multi-threaded event processor to categorize incoming alerts based on the values of specified alert fields and process them one at a time. With event locking enabled, if more than one event exists with a certain lock value, these events are not processed at the same time. They are processed in the order in which they appear in the queue. You use event locking in situations where you want to prevent a multi-threaded event processor from accessing a single resource from more than one instance of a policy running simultaneously.
Example locking expressions: Node, Node+Severity.
Analyze Event Mapping Table: Click to analyze the filters in the Event Mapping table.
Consider a queue of events with the following Severity values:
3 4 3 5 4 4 2 3 5
(F = first element in the queue, at the left; L = last element in the queue, at the right)
Since the Event Processor has four threads configured, the first thread receives the first event with
Severity=3 from the queue and sends it to a policy for processing. The second thread receives
the event with Severity=4 and sends it to a policy for processing. Although two remaining threads
are available for processing, the next event Severity=3 cannot be processed because an event with
Severity=3 is already being processed (the first event in the queue). Until the processing of the first
event is complete, the other threads cannot begin, since they would violate the locking criteria.
If the thread that picked up the second event in the queue (with Severity=4) finishes processing before the first event, it waits along with the other two threads until the first event has finished processing. When the thread that picked up the first event in the queue is finished, three threads pick up the third, fourth, and fifth events from the queue, since they have different Severity values (3, 5, 4).
At this point, the remaining thread cannot pick up the next event (sixth in the queue) from the queue
because an event with the same Severity level (4) is already processing (fifth in the queue).
In the previous example, locking is on a single field, Severity. You can also lock on more than one field
by concatenating them with the plus (+) operator. If you lock, for example, on the Node and Severity
fields, you can use one of the following event locking expressions:
Node+Severity
or:
Severity+Node
Event locking on multiple fields works in the same way as locking on a single field, except that in this instance, two events with the same combination of field values cannot be processed at the same instant.
In other words, if two events have the values for Node as abc and xyz and both have the value for
Severity as 5, then they can be processed simultaneously. The only case when the two events cannot be
processed together is when the combination of Node and Severity is the same for the events. In other
words, if there are two events with the Node as abc and Severity as 5, then they cannot be processed
together.
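A minimal Python sketch can model this serialization behavior. This is an illustrative model only, not Netcool/Impact code: it assumes each event is handed to its own thread and derives the lock value by concatenating the fields named in the locking expression.

```python
import threading
from collections import defaultdict

def lock_value(event, expression):
    # "Node+Severity" -> concatenate the named fields of the event
    return "+".join(str(event[f]) for f in expression.split("+"))

class LockingProcessor:
    """Toy model: events that share a lock value are serialized;
    events with different lock values may run concurrently."""
    def __init__(self, expression):
        self.expression = expression
        self.locks = defaultdict(threading.Lock)  # one lock per lock value
        self.processed = []
        self.record_lock = threading.Lock()

    def process(self, event):
        # Events with the same lock value contend for the same lock,
        # so only one of them is inside the critical section at a time.
        with self.locks[lock_value(event, self.expression)]:
            with self.record_lock:
                self.processed.append(event["Serial"])

# Two events on node "abc" with Severity 5 share a lock value and are
# serialized; the "xyz" event can run concurrently with either of them.
events = [{"Serial": i, "Node": n, "Severity": s}
          for i, (n, s) in enumerate([("abc", 5), ("xyz", 5), ("abc", 5)])]
proc = LockingProcessor("Node+Severity")
threads = [threading.Thread(target=proc.process, args=(e,)) for e in events]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(proc.processed))  # [0, 1, 2] -> all three events were processed
```

Note that, as in the guide, `Node+Severity` and `Severity+Node` group the same pairs of events; only the textual lock value differs.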
Symptoms
If you are not using the Get Updates option in the OMNIbus reader service, Netcool/Impact uses the Serial field to query Netcool/OMNIbus. Serial is an auto-increment field in Netcool/OMNIbus and has a maximum limit before it rolls over and resets.
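A small sketch shows why the rollover is a problem for a reader that polls with a "Serial greater than the last value seen" filter. This is an illustrative Python model, not product code; the maximum value shown is a hypothetical 32-bit limit, not the actual ObjectServer field width.

```python
MAX_SERIAL = 2**31 - 1  # assumed maximum before the counter resets

def next_batch(all_events, last_serial):
    """Naive reader: fetch events with Serial greater than the last one seen."""
    return [e for e in all_events if e["Serial"] > last_serial]

# Before rollover the query works: new events have higher Serial values.
events = [{"Serial": s} for s in (MAX_SERIAL - 1, MAX_SERIAL)]
assert len(next_batch(events, MAX_SERIAL - 2)) == 2

# After rollover, new events restart from small Serial values, so the
# "Serial > last_serial" filter silently misses them until the reader
# state is reset.
post_rollover = [{"Serial": 1}, {"Serial": 2}]
print(len(next_batch(post_rollover, MAX_SERIAL)))  # 0 -> no events picked up
```

This is the gap that the serialrotation.sql script and the SerialRollover policy described below are designed to close.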
Resolution
Complete the following steps to set up Netcool/Impact to handle Serial rollover:
1. Identify the OMNIbusEventReader that queries the Netcool/OMNIbus failover/failback pair. A Netcool/
Impact installation provides a reader called OMNIbusEventReader but you can create more instances
in the Services GUI.
2. Stop the Impact Server. In a Netcool/Impact clustered environment, stop all the servers.
3. Copy the SQL file serialrotation.sql in the $IMPACT_HOME/install/dbcore/OMNIbus folder to the machines where the primary and secondary instances of the ObjectServer are running. This script creates a table called serialtrack in the alerts database and also adds a trigger called newSerial to the default_triggers group.
4. Run this script against both the primary and secondary ObjectServer pairs.
• For UNIX based operating systems: for example, if serialrotation.sql is placed in the /opt/scripts folder and you want to run the script against the ObjectServer instance NCOMS, connecting as the root user with no password, run it from that folder.
For example, place the serialrotation.sql file in the $OMNIHOME/bin folder and run the script against the ObjectServer instance NCOMS, connecting as the root user with no password.
Make sure that -P is the last option. You can omit the password and enter it when prompted instead. For information about Netcool/OMNIbus, see the IBM Tivoli Netcool/
OMNIbus Administration Guide available from the following website: https://www.ibm.com/support/
knowledgecenter/SSSHTQ/landingpage/NetcoolOMNIbus.html.
Further steps
When the script completes, make sure that you enable the newSerial trigger.
1. Start your Netcool/Impact server and the OMNIbusEventReader. In a clustered setup, start the primary
server first followed by all the secondary servers.
2. Log in to the Netcool/Impact GUI and create an instance of the DefaultPolicyActivator service. In the
Configuration, select the policy to trigger as SerialRollover and provide an interval at which that policy
gets triggered.
3. The SerialRollover policy assumes that the data source used to access Netcool/OMNIbus
is the defaultobjectserver and the event reader that accesses Netcool/OMNIbus is the
OMNIbusEventReader. If you are using a different data source or event reader, you must update the
DataSource_Name and Reader_Name variables in the policy accordingly.
4. Start the instance of the DefaultPolicyActivator service that you created.
Activation Interval: Select how often (in seconds) the service must activate the policy. The default value is 30 on the policy activator service that comes with Netcool/Impact. When you create your own policy activator service, the default value is 0.
Policy: Select the policy that you want the policy activator to run.
Starts automatically when server starts: Select to automatically start the service when the server starts. You can also start and stop the service from the GUI.
Procedure
1. Log on to the GUI.
2. Click the Operator Views tab.
3. Double-click the operator view to see the details, or right-click the operator view and click Edit.
Operator views
An operator view is a custom web-based tool that you use to view events and data in real time and to run
policies that are based on that data.
The simplest operator views present a basic display of event and business data. More complex operator
views can function as individual GUIs that you use to view and interact with event and business data
in a wide variety of ways. You can use this kind of GUI to extensively customize an implementation of
Netcool/Impact products and other Tivoli Monitoring applications.
Management and updating of operator view components is done in the GUI Server. References in the documentation to $IMPACT_HOME/opview/displays refer to the GUI Server installation in a split installation environment.
Typically, you create operator views to:
• Accept incoming event data from Netcool/OMNIbus or another application.
• Run a policy that correlates the event data with business data that is stored in your environment.
• Display the correlated business data to a user.
• Run one or more policies that are based on the event or business data.
• Start another operator view that is based on the event or business data.
One common way to use an operator view is to configure it to be started from within the Netcool/
OMNIbus event list. Netcool/Impact operators can view related business data for an event by right-
clicking the event in the event list and viewing the data as displayed in the view. The business data
might include service, system, or device information that is related to the event, or contact information for
administrators and customers that are affected by it.
Operator views are not limited to use as Netcool/OMNIbus tools. You can use the operator view feature to
create a wide variety of tools that display event and business data to users.
Control Description
Select an operator view and use this icon to edit it. Alternatively, you can edit an operator view by right-clicking its name and selecting Edit in the menu.
Click this icon to view the operator view display for the selected operator view. Alternatively, right-click an operator view and select View.
Select an operator view from the list and click this icon to delete it. Alternatively, right-click an operator view and select Delete.
Action panel: Contains a list of policies associated with this view. You can configure the layout so that the action panel is displayed on the top, the bottom, the left, or the right of the display, or not at all.
Information group panel: Displays sets of information retrieved from data types. This data is often business data that is related to event information passed to the view from Netcool/OMNIbus or another application.
Information groups
An information group is a set of dynamic data that is displayed when you open the view.
This is often business data that is related to event information that is passed to the view from Netcool/
OMNIbus or another application. The data that is displayed in an information group is obtained by a query
to a data source either by filter or by key.
When you create a basic operator view using the GUI, you can specify one or more information groups
that are to be displayed by the view.
The following table shows the properties that you specify when you create an information group:
Property Description
Data type Data type that contains the data that you want to display.
Style Layout style for data items in the resulting information group. Options are Tabbed
and Table.
You can customize the information that is displayed in the information groups by editing the operator view
policy.
Procedure
1. Log on to the GUI.
2. Click the Operator Views tab.
3. Click the New Operator View icon to open the New Operator View.
4. In the Operator View Name field, enter a unique name for the operator view. You cannot edit the
name once the operator view is saved.
5. In the Layout Options area, specify the position of the event panel and action panel in the operator
view. You can preview the appearance of the operator view by using the images available in the
Preview area.
6. Click the Action Panel link, select one or more action policies that the user can open from within the
operator view.
7. Click the Information Groups link. Use the following steps to create one or more information groups:
a) Click the New Information Group icon to insert a new row into the information groups table.
b) In the Group Name field, type a unique name for the group.
c) From the Type list, select By Filter or By Key to specify whether the information group retrieves
data from a data type by filter or by key.
d) From the Data Type list, select the data type that contains the information you want to view.
e) In the Value field, enter a filter or key expression. If the Type is By Filter, the value is optional. If the Type is By Key, the value is mandatory and corresponds to the key field of the data type.
f) In the Style list, select Tabbed or Table to specify how the operator view shows the resulting data.
g) Press Enter on your keyboard to confirm the value that you are adding to the information group (or
press Escape on your keyboard to cancel the edit).
h) Repeat these steps to create multiple information groups for any operator view.
i) To edit an information group, click the item that you want to edit and change the value.
Overview
Netcool/Impact has a predefined project, EventIsolationAndCorrelation, that contains predefined data sources, data types, policies, and operator views. When all the required databases and schemas are installed and configured, you must set up the data sources. Then, you can create the event rules by using ObjectServer SQL in the Event Isolation and Correlation configuration view in the UI. You can view the event analysis in the operator view, EIC_Analyze. You can also view the output in the topology widget dashboard in the Dashboard Applications Services Hub.
Complete the following steps to set up and run the Event Isolation and Correlation feature.
1. Install Netcool/Impact.
2. Install DB2 or use an existing DB2 installation.
3. Configure the DB2 database with the DB2 schema.
4. Install the Discovery Library Toolkit with the setup-dltoolkit-<platform>_64.bin installation
image that is available in the directory IMPACT_INSTALL_IMAGE/<platform>.
If you already have a Tivoli® Application Dependency Discovery Manager (TADDM) installation,
configure the Discovery Library Toolkit to consume the relationship data from TADDM. You
can also consume the data through the loading of Identity Markup Language (IdML) books.
For more information about the discovery library toolkit, see the Tivoli Business Service
Manager Administrator's Guide and the Tivoli Business Service Manager Customization Guide.
The guides are available in the Tivoli Business Service Manager 6.1.1 documentation, available
from the following URL, https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/
wiki/Tivoli%20Documentation%20Central.
You can load customized name space or your own model into SCR. This model can be used
for application topology-based event correlation. For more information see Tivoli Business Service
Manager Customization Guide, Customizing the import process of the Service Component Repository,
Service Component Repository API overview.
5. In the GUI, configure the data sources and data types in the EventIsolationAndCorrelation project to
use with the Impact Server.
6. Create the event rules in the UI to connect to the Impact Server.
7. Configure WebGUI to add a new launchpoint or configure a topology widget to visualize the results.
Tip: When you use Event Isolation and Correlation, the Event Isolation and Correlation events must have a BSM identity value in the BSM_Identity field. If the field does not have a value, you must enter it manually or create it with the event enrichment feature, by using the EIC_EventEnrichment policy and EIC_EventEnrichment service in the EventIsolationAndCorrelation project. You might also want to update the event reader Filter Expression in the Event Mapping tab according to your requirements.
General information about navigating Event Isolation and Correlation is in the online help. Additional detailed information about setting up and configuring Event Isolation and Correlation is in the Netcool/Impact Solutions Guide.
Procedure
1. In the GUI, click Data Model.
2. From the project list, select the project EventIsolationAndCorrelation.
A list of data sources specific to the EventIsolationAndCorrelation feature is displayed.
• EIC_alertsdb
• SCR_DB
• EventrulesDB
3. For each data source, update the connection information, user ID, and password and save it.
4. Configure EIC_alertsdb to the object server where the events are to be correlated and isolated.
5. Configure SCR_DB to the Services Component Registry database. When you create the SCR schema, the following tables are created: EIC_ACTIONS and EIC_RULERESOURCE.
7. Configure the EventRulesDB data source to connect to the Services Component Registry database.
Procedure
1. To configure the EIC_alertquery data type, right-click on the data type and select Edit.
2. The Data Type Name and Data Source Name are prepopulated.
Procedure
1. Select Event Isolation and Correlation to open the Event Isolation and Correlation tab.
2. Click the Create New Rule icon to create an event rule. When you create a rule, the configuration page initially has empty values for its properties.
3. Click the Edit the Selected Rule icon to edit the existing event rules.
4. Click the Delete the Selected Rule icon to delete an event rule from the system and the list.
Procedure
1. Event Rule Name: Specify the event rule name. The event rule name must be unique across this
system.
Whether you select Edit or New, if you specify an existing event rule name, the existing event rule is updated. When you edit an event rule and change the event rule name, a new event rule is created with the new name.
2. Primary Event: Enter the SQL to be run against the ObjectServer that is configured in the EIC_alertsdb data source.
The primary event is the event that is selected for analysis.
The primary event filter is used to identify if the event that was selected for analysis has a rule
associated with it. The primary event filter is also used to identify the object in the Services
Component Registry database that has the event that is associated with it.
The object may or may not have dependent entities. During analysis, the event isolation and
correlation feature finds all the dependent entities and their associated events.
For example, suppose the primary event has three dependent or child entities and each of these entities has three events associated with it. In total, there are nine dependent events. Any of these secondary
events could be the cause of the primary event. This list of events is what is termed the list of
secondary events. The secondary event filter is used to isolate one or more of these events to be the
root cause of the issue.
3. Test SQL: Click Test SQL to test the SQL syntax that is specified in the primary event.
Modify the query so that only one row is returned. If multiple rows are returned, you can still configure the rule; however, during analysis only the first row from the query is used.
4. Secondary Events: The text area is for the SQL to identify the dependent events. When you specify
the dependent events, you can specify variables or parameters that can be substituted from the
primary event information. The variables are specified with the @ sign.
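The @ substitution can be sketched as follows. This is not the product's implementation; it only illustrates replacing @Field placeholders in the secondary-event SQL with values taken from the primary event row (the field names and quoting rule are examples).

```python
import re

def substitute(sql_template, primary_event):
    """Replace @Field placeholders with values from the primary event row."""
    def repl(match):
        field = match.group(1)
        value = primary_event[field]
        # Quote string values for SQL; leave numbers as-is.
        return "'%s'" % value if isinstance(value, str) else str(value)
    return re.sub(r"@(\w+)", repl, sql_template)

primary = {"Node": "router1", "Severity": 5}
template = "select * from alerts.status where Node = @Node and Severity >= @Severity"
print(substitute(template, primary))
# select * from alerts.status where Node = 'router1' and Severity >= 5
```

In the product, the substituted values come from the single row returned by the primary event query.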
Procedure
Open a browser on Netcool/Impact. Use one of the following options:
• Point to <Impact_Home>:<Impact_Port>/opview/displays/NCICLUSTER-EIC_Analyze.html?serialNum=<EventSerialNumber>, where <Impact_Home> and <Impact_Port> are the Netcool/Impact GUI Server host and port, and <EventSerialNumber> is the serial number of the event that you want to analyze. To launch the analysis page outside of the Active Event List (AEL), you can add serialNum=<Serial Number> as the parameter.
• The Event Isolation and Correlation analysis page can be configured to launch from the Active Event
List (AEL) or LEL (Lightweight Event List) within WebGUI. For more information see, “Configuring
WebGUI to add a new launch point” on page 185. When you create the tool you have to specify only
<Impact_Home>:port/opview/displays/NCICLUSTER-EIC_Analyze.html. You do not have to
specify SerialNum as the parameter, the parameter is added by the AEL tool.
Procedure
Select the event from the AEL or LEL and launch the Analyze page. The EIC_Analyze page contains three
sections:
• Primary Event Information: shows the information on the selected event. This is the event on which
the event isolation and correlation analysis takes place.
• Correlated Events: shows information about the dependent events identified by the tool. Dependent events are identified as the events that are associated with the dependent child resources of the device or object that is associated with the primary event. These events are displayed in the context of
dependent resources that were identified from the Services Component Registry.
• Event Rule Processed: shows the rule which was identified and processed when this primary event was
analyzed.
Accessing reports
Use this procedure to access the reports.
Procedure
1. Click Reports to open the Reports tab.
2. Select the report that you want to run; the tab for the specified report opens.
The following reports are available:
• Policy Efficiency Report
• Policy Error Report
• Operator Efficiency Report
• Node Efficiency Report
• Action Error Report
• Action Efficiency Report
• Impact ROI Efficiency Report
• Impact Profile Report
3. In the tab menu, select the date and time ranges. Select the view option that you want, either Chart View or Tabular View, then run the report. The time range displays in local time. For more information, see "Viewing Reports" on page 187 and "Reports toolbar" on page 188.
Viewing Reports
The reports present their data in graphical and tabular format. Use the chart view tab and the tabular view
tab to switch between these two formats.
Chart view
The chart view presents the report data in graphical format. The legend shows the color code for each
action. The descending order in the legend reflects the order from left to right in the chart.
Tabular view
The tabular view presents the report data in a table. To get more detail for a particular row of the
table, select the row, then click the DrillDown icon on the toolbar above the table. The table refreshes
automatically and loads the information for the row. To return to the main report view click the Drillup
arrow icon on the toolbar.
If you are viewing a multi-page report, use the Page and Row controls at the bottom of the table. In the
Page field, click the arrows to get to the page you want to view. In the Row field, use the arrows to adjust
the number of rows that display per page. The minimum number of rows is three and the maximum is 50
per page. The total number of rows that display on a page is shown on the lower right corner of the table.
Reports toolbar
You use the report toolbar to perform a basic report configuration.
Some toolbar controls, for example the report time selection fields or the refresh report icon, can be found in all reports. Other controls can be found only in specific reports.
Report icons
This table explains the function of the icons that you can find in the reports.
Icon Description
Click to refresh the report data after changing the report parameters.
Only in the Impact Profile report and Impact ROI Efficiency report. Opens a window that you can use to change the report parameters. In the Impact ROI Efficiency report, when you click the icon you have two options: configure policy and report mapping, and configure business process.
Clear all Impact Profile Report data. You can find this icon only in the Impact
Profile report.
Stop collecting data for this report. This icon can be found only in the Impact
Profile report.
In the report tabular view, you can drill down to view more detailed information
about a row, by selecting a row, and then clicking this icon. This icon is only
enabled after you select a table row.
Click this icon to return to the main table view of a report, after you drill down for
more detail.
Important: Before you configure this report, enable report data collection in the Policy Logger service. For
more information, see “Configuring the Policy logger service” on page 147.
Report views
The chart view presents the report data in graphical format. The legend shows the color code for each
process. You can hover the mouse cursor over a process in the chart view to highlight it, and see the total
time saved in seconds after automating the process.
The tabular view shows the following details:
• The process time
Procedure
1. Select Reports to open the Reports tab.
2. Select the Impact ROI Efficiency Report.
3. Click the Configuration icon and select the Configure Business Process option, to add a business
process.
The legend on the left shows the color code for each process. The descending order of the legend
reflects the order from left to right in the chart.
9. Click the Tabular View tab.
Procedure
1. Click Reports select Impact Profile Report.
2. From Impact Profile Report toolbar, click Open Configuration to open the Impact Profile Rules
Editor window.
Use this window to set the parameters for the report. For more information about the available
parameters, see “Impact Profile Report rules editor” on page 197.
3. Enable and start profiling by clicking the Start Profile Report icon.
When you enable and start the Impact Profiling Report, Netcool/Impact inserts profile data into the
Apache Derby database corresponding to operations that match the configured rules.
Attention: As data is inserted into the Derby database, the Impact Profile memory usage increases accordingly. Memory usage increases can cause the server to run out of memory, depending on the size of your maximum heap settings. The default maximum heap setting is 1200 MB. To prevent the server from running out of memory, monitor the memory usage and adjust the maximum heap limit accordingly. Also, consider periodically clearing the disk space in the Apache Derby database. For information, see the Troubleshooting section How to clear
The following rules are available:
• Queries sent to same data source by same policy more than n times in n seconds (SQL Query XinY Rules): Counts the "hotspot" queries sent to the same data source by the same policy more than a specified number of times in a specified number of seconds.
• Queries done more than n times in n seconds that are taking more than n milliseconds (SQL Hotspot Rules): Measures the number of queries made in a specified number of seconds that take more than a specified number of milliseconds.
• Queries made more than n times in n seconds that return more than n rows (SQL Hotspot Rules): Counts the number of queries made in a specified number of seconds that return more than a specified number of rows.
• Inserts into any types more than n times in n seconds that are taking more than n milliseconds (SQL Hotspot Rules): Measures the number of SQL inserts into any type of data type in a specified time window that take more than a specified number of milliseconds.
• Internal types written more than n times in n seconds (Internal Type Rules): Measures the number of internal data types that are accessed more than a specified number of times in a specified number of seconds.
• Same identifier updated by ReturnEvent more than n times in n seconds (Return Event Rules): Measures the number of return events that update events using the same identifier as the source event.
• Same identifier inserted into the same ObjectServer that events are read from (Add Data Item Rules): Measures the number of new events that were sent to the ObjectServer that use the same identifier as the event they were read from.
• JRExec calls done more than n times in n seconds that are taking more than n seconds (JRExecAction Rules): Measures the number of "troublesome" JRExec calls made more than a specified number of times in a specified time period.
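All of the "more than n times in n seconds" rules above share one pattern: count matching operations inside a sliding time window and flag the rule when the count exceeds a threshold. The following sketch is an illustrative model of that pattern, not Netcool/Impact code; the class and parameter names are invented for the example.

```python
from collections import deque

class XinYRule:
    """Fires when more than `count_threshold` matching operations occur
    within `window_seconds` of each other."""
    def __init__(self, count_threshold, window_seconds):
        self.count_threshold = count_threshold
        self.window_seconds = window_seconds
        self.timestamps = deque()

    def record(self, timestamp):
        # Add the new operation, then drop operations that have aged
        # out of the sliding window.
        self.timestamps.append(timestamp)
        while self.timestamps[0] < timestamp - self.window_seconds:
            self.timestamps.popleft()
        return len(self.timestamps) > self.count_threshold

# More than 3 queries in 10 seconds triggers the rule; the lone
# operation at t=30 falls outside the window and does not.
rule = XinYRule(count_threshold=3, window_seconds=10)
results = [rule.record(t) for t in (0, 2, 4, 6, 30)]
print(results)  # [False, False, False, True, False]
```

The Count Threshold and Count Time Window settings described in the next procedure correspond to the two constructor parameters of this model.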
Procedure
1. Select the rule in the form that is associated with the query you want to edit.
2. SQL Query XinY Rules
Use this option to change the settings for the following query:
Queries sent to same data source by same policy more than n times in n seconds
• Select the Count Threshold to set the number of SQL queries to be run.
• Select the Count Time Window to set the time window the measurement is to be based on.
3. SQL Hotspot Rules
Use this option to change the settings for the following queries:
Queries done >n times in n seconds that are taking more than n milliseconds
Queries made >n times in n seconds that return >n rows
Inserts into any types > n times in n seconds that are taking >n milliseconds
• Select the Insert Execution Time Threshold to set the time threshold for the SQL inserts.
• Select the Query Execution Time Threshold to set the time threshold for query execution.
• Select the Query Return Row Threshold to set the threshold for the number of queries to be
retrieved.
• Select the Count Threshold to set the threshold for the number of SQL statements to be run.
• Select the Count Time Window to set the time window the measurement is to be based on.
4. JRExecAction Rules
Use this option to change the setting for the following query:
JRExec calls done more than n times in n seconds that are taking > n seconds
• Select the Count Threshold to set the threshold for the number of JREXecActions to be run.
• Select the Execution Time Threshold to set the threshold for how long the JREexActions must take.
• Select the Time Window to set the time window the measurement is to be based on.
5. Internal Type Rules
Use this option to change the settings for the following query:
Internal types written more than n times in n seconds.
• Select the Count Threshold to set the number of times internal data types are written to.
• Select the Time Window to set the length of time the profile is based on.
6. ReturnEvent Rules
Use this option to change the settings for the following queries:
• Each maintenance window has a free-format text field that you can use to manually add any additional comments or descriptive notes.
Procedure
1. Click the Maintenance Window tab.
This page lists the instances of the different types of maintenance window.
2. Click the New Maintenance Window button to create a new window.
Procedure
1. Click the New Maintenance Window button to create a new window.
2. For Type of Maintenance Window, select One Time.
3. Check that the Time Zone you want to use is selected.
4. Add the fields that you want to use in the filter to match events. For each field you add, select the operator from the list provided and assign a value to the field to be used for the filter.
Tip: For a like operator, there is no requirement for regular expressions. You can specify a substring
and select the like operator from MWM.
Tip: For the in operator, provide a space-separated list of strings that the field can match (for example, server1.ibm.com server2.ibm.com server3.ibm.com). A maximum of 50 strings is allowed.
Note: Any field where a value is not provided will not be included in the filter.
5. Click the calendar icons to select the Start Date and End Date for the maintenance time window.
6. Click the Save button to create the window.
7. Click the Back button to view the newly created window in the list of one time windows.
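The like and in operator semantics described in the tips above can be modeled as follows. This is an illustrative sketch, not MWM code; the matching rules follow the descriptions in this section (like performs a plain substring match with no regular expression, in matches any item in a space-separated list of up to 50 strings).

```python
def field_matches(operator, event_value, filter_value):
    """Evaluate one maintenance-window filter field against an event value."""
    if operator == "=":
        return event_value == filter_value
    if operator == "like":
        # No regular expression needed: a plain substring match suffices.
        return filter_value in event_value
    if operator == "in":
        values = filter_value.split()  # space-separated list of strings
        if len(values) > 50:
            raise ValueError("a maximum of 50 strings is allowed")
        return event_value in values
    raise ValueError("unsupported operator: %s" % operator)

print(field_matches("like", "server1.ibm.com", "ibm"))  # True
print(field_matches("in", "server2.ibm.com",
                    "server1.ibm.com server2.ibm.com server3.ibm.com"))  # True
```

Fields for which no value is provided are simply omitted from the filter, as the Note above states.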
Procedure
1. Click the New Maintenance Window button to create a new window.
2. For Type of Maintenance Window, select the type of recurring window you wish to configure. This can
be either Day of Week, Day of Month, or Nth Day of Week in Month.
3. Check that the Time Zone you want to use is selected.
4. Add the fields that you want to use in the filter to match events. For each field you add, select the operator from the list provided and assign a value to the field to be used for the filter.
Tip: For a like operator, there is no requirement for regular expressions. You can specify a substring
and select the like operator from MWM.
Tip: For the in operator, provide a space-separated list of strings that the field can match (for example, server1.ibm.com server2.ibm.com server3.ibm.com). A maximum of 50 strings is allowed.
Note: Any field for which a value is not provided is not included in the filter.
5. Provide the Start Time and End Time (hour, minute, second) for the maintenance window.
6. Provide the details specific to the chosen recurring type of window:
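As a rough model of how the recurring window types could match a timestamp, consider the sketch below. This is illustrative only, not product code; the function names, the weekday numbering (Python's Monday = 0 convention), and the inclusive time bounds are assumptions.

```python
from datetime import datetime, time

def in_day_of_week_window(ts, weekday, start, end):
    """True if ts falls inside a recurring Day of Week window
    (weekday: 0 = Monday ... 6 = Sunday, Python's convention)."""
    return ts.weekday() == weekday and start <= ts.time() <= end

def in_nth_weekday_window(ts, n, weekday, start, end):
    """True if ts falls inside an Nth Day of Week in Month window,
    for example n=2, weekday=5 for the second Saturday of each month."""
    nth = (ts.day - 1) // 7 + 1   # which occurrence of this weekday in the month
    return nth == n and in_day_of_week_window(ts, weekday, start, end)

# A window on the second Saturday of each month, 02:00:00 to 04:00:00:
ts = datetime(2023, 7, 8, 3, 15)   # the second Saturday of July 2023
print(in_nth_weekday_window(ts, 2, 5, time(2, 0), time(4, 0)))  # True
```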
Procedure
1. In the GUI, click the Help menu and select Web Documenter.
The configuration documenter opens in a new browser window. Use the links at the top of the page to
view information about cluster status, server status, data sources, data types, policies, and services.
2. Click the Status link.
Depending on the status of the current server in the cluster, you can view the following information.
• If the current server is the primary server:
– The name and host where the primary server is running.
– The name and host of each secondary server.
• If the current server is a secondary server:
– The name and host where the primary server is running.
– Startup replication status, whether it was successful, and how long it took.
Important: Click the secondary server name link to open the documenter page for that server.
The Status link also shows the following information on servers.
Memory status
Shows the maximum heap size and the current heap size, in MB, of the Java virtual machine in
which Netcool/Impact is running.
Event status
Shows the number of events available in the event queues for the various event-related services,
such as readers, listeners, and the EventProcessor. It does not provide information about all the
services that are currently running, only the status of event-related services. For each of these
services, you can see where the service is reading events from. For example, for the
OMNIbusEventReader this includes the name of the data source, whether events are being read
from the primary or backup source of that data source, and additional connection-related
information such as the host, port, and the user name that is used to connect to the data source.
This information was developed for products and services offered in the U.S.A. IBM may not offer the
products, services, or features discussed in this document in other countries. Consult your local IBM
representative for information on the products and services currently available in your area. Any reference
to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not give you any license to these patents. You can
send license inquiries, in writing, to:
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.
Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore,
this statement might not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:
IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
“Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States, other countries, or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and service names might be trademarks of IBM or other companies.