
Netcool/Impact

DSA Reference Guide

IBM
Note
Before using this information and the product it supports, read the information in "Notices".

Edition notice
This edition applies to version 7.1.0.33 of IBM Tivoli Netcool®/Impact and to all subsequent releases and modifications
until otherwise indicated in new editions.
References in content to IBM products, software, programs, services or associated technologies do not imply that they
will be available in all countries in which IBM operates. Content, including any plans contained in content, may change
at any time at IBM's sole discretion, based on market opportunities or other factors, and is not intended to be a
commitment to future content, including product or feature availability, in any way. Statements regarding IBM's future
direction or intent are subject to change or withdrawal without notice and represent goals and objectives only. Please
refer to the IBM Community terms of use for more information.
© Copyright International Business Machines Corporation 2006, 2023.
US Government Users Restricted Rights – Use, duplication or disclosure restricted by GSA ADP Schedule Contract with
IBM Corp.
Contents

About this publication...........................................................................................ix


Intended audience...................................................................................................................................... ix
Publications................................................................................................................................................. ix
Netcool/Impact library.......................................................................................................................... ix
Accessing terminology online................................................................................................................ix
Accessing publications online............................................................................................................... ix
Ordering publications .............................................................................................................................x
Accessibility..................................................................................................................................................x
Tivoli technical training................................................................................................................................ x
Support for problem solving.........................................................................................................................x
Obtaining fixes........................................................................................................................................ x
Receiving weekly support updates........................................................................................................xi
Contacting IBM Software Support ........................................................................................................ xi
Conventions used in this publication ....................................................................................................... xiii
Typeface conventions ......................................................................................................................... xiii
PDF code examples with single quotation marks............................................................................... xiii
Operating system-dependent variables and paths.............................................................................xiv

Chapter 1. Managing DSAs..................................................................................... 1

Chapter 2. Data source adapters (DSA)...................................................................3


Categories of DSAs.......................................................................................................................................3
Mediator DSAs........................................................................................................................................ 3
Managing data models.................................................................................................................................3
Event readers............................................................................................................................................... 4
Event listeners..............................................................................................................................................4
Policies......................................................................................................................................................... 4
Working with SQL database DSAs............................................................................................................... 4
List of provided SQL database DSAs...................................................................................................... 5
Returning an array class object in a policy............................................................................................ 6
Adding JDBC drivers and third-party JAR files to the shared library....................................................7
Changing the character set encoding for the database connection..................................................... 8
Setting the useJDBC4ColumnNameAndLabelSemantics property for DB2 JDBC 4............................ 9
Enabling Kerberos authentication for Oracle.........................................................................................9
SQL database data model...................................................................................................................... 9
SQL database policies.......................................................................................................................... 11
SQL database DSA failover...................................................................................................................14
Configuring the JDBC driver connection properties................................................................................. 18
Configuring the JDBC connection properties for a JDBC driver..........................................................18
Configuring the JDBC connection properties for a data source..........................................................18
Constructing a simplified pool key for pooled JDBC connections...................................................... 19

Chapter 3. Working with the UI data provider DSA................................................21


UI data provider data model......................................................................................................................21
UI data provider data sources..............................................................................................................21
UI data provider data types................................................................................................................. 22
Viewing data items for a UI data provider data type .......................................................................... 23
Using the GetByFilter function to handle large data sets................................................................... 23
Retrieving data from a UI provider data source........................................................................................ 25
Creating custom schema values for output parameters..................................................................... 27

Controlling how frequently Impact considers the UI provider data to be stale.......................................29
Clearing the UI Data Provider server cache with the UI data provider DSA............................................ 29
UI data provider operators........................................................................................................................ 30
An example using the UI data provider to integrate with IBM Tivoli Monitoring..................................... 30
Configuring Netcool/Impact to send messages to Tivoli Monitoring Universal Message Console....30

Chapter 4. Working with the RESTful API DSA...................................................... 33


RESTful DSA data model............................................................................................................................33
RESTful DSA data source..................................................................................................................... 33
Making requests to the RESTful data source............................................................................................ 36

Chapter 5. Working with the LDAP DSA................................................................ 37


LDAP DSA overview....................................................................................................................................37
Supported LDAP servers............................................................................................................................37
LDAP data model....................................................................................................................................... 37
LDAP data sources............................................................................................................................... 37
LDAP data types................................................................................................................................... 38
LDAP data items................................................................................................................................... 38
LDAP policies............................................................................................................................................. 39
Retrieving data from an LDAP data source................................................................................................39
Controlling the number of records returned from an LDAP server...........................................................40
Changing how Impact handles referrals for LDAP DSA connections....................................................... 40
International character support................................................................................................................ 41

Chapter 6. Working with the web services DSA.....................................................43


Web services DSA overview.......................................................................................................................43
Compiling WSDL files................................................................................................................................. 43
Obtaining WSDL files............................................................................................................................ 44
Running the WSDL compiler script...................................................................................................... 44
Recompiling new and changed WSDL files..........................................................................................45
Enabling and disabling proxy settings using WSInvokeDL..................................................................45
Compiling WSDL files in an Impact split installation...........................................................................45
Web services DSA functions...................................................................................................................... 46
WSSetDefaultPKGName.......................................................................................................................46
WSNewObject.......................................................................................................................................47
WSNewSubObject................................................................................................................................ 48
WSNewArray.........................................................................................................................................48
WSInvokeDL......................................................................................................................................... 50
WSNewEnum........................................................................................................................................ 55
Writing Web services DSA policies............................................................................................................ 55
Sending messages................................................................................................................................55
Examples using web services DSA functions...................................................................................... 56
Web services listener.................................................................................................................................58
Web services listener process............................................................................................................. 58
SOAP endpoint..................................................................................................................................... 62
Authentication for the web services listener.......................................................................................63
WSDL file...............................................................................................................................................63
Creating policies by using the web services wizard..................................................................................67
Creating policies by using policy editor.....................................................................................................69
Integrating with third-party web services.................................................................................................69

Chapter 7. Web services security......................................................................... 71


Enabling web services security................................................................................................................. 71
Enable HTTPS for the web service connection....................................................................................72
User name token authentication............................................................................................................... 72
User name token authentication with a plain text password................................................................... 73
Message integrity and non-repudiation with signature............................................................................ 74

Encryption.................................................................................................................................................. 75
Sign and encrypt messages....................................................................................................................... 77
Configure security with a WS-Policy file....................................................................................................78
Worked example...................................................................................................................................80

Chapter 8. Working with the JMS DSA.................................................................. 85


Supported JMS providers.......................................................................................................................... 85
Configuring JMS DSAs to send and receive JMS messages..................................................................... 85
Setting up OpenJMS as the JMS provider................................................................................................. 86
JMS data source.........................................................................................................................................86
JMS data source configuration properties...........................................................................................86
Specifying more JNDI properties for the JMS data source.................................................................88
JMS message listener................................................................................................................................ 89
JMS message listener service configuration properties...........................................................................89
Writing JMS DSA policies...........................................................................................................................90
Sending messages to a JMS topic or queue........................................................................................ 91
Retrieving JMS messages from a topic or queue................................................................................ 94
Connecting to WebSphere MQ and JMS DSA............................................................................................97
Configuration option 1..........................................................................................................................97
Configuration option 2..........................................................................................................................98
Connecting Netcool/Impact to WebSphere Business Events.................................................................. 99
Configure Netcool/Impact for WebSphere Business Events integration............................................99
Using the WebSphere Business Events integration............................................................................ 99
Integrating JMS/TIBCO over SSL............................................................................................................ 100

Chapter 9. Working with the Apache Kafka DSA................................................. 103


Kafka data source.................................................................................................................................... 103
Kafka data source configuration settings.......................................................................................... 103
Kafka configuration properties file.................................................................................................... 104
Setting up SASL.................................................................................................................................. 107
Setting up a Kafka data source with SSL........................................................................................... 107
Kafka message listener........................................................................................................................... 107
Sample policy for a Kafka message listener........................................................................................... 108
Writing Kafka DSA policies to send messages to a Kafka topic..............................................................109

Chapter 10. Working with the XML DSA.............................................................. 111


XML DSA overview................................................................................................................................... 111
XML documents....................................................................................................................................... 111
XML DTD and XSD files............................................................................................................................ 111
XML data types........................................................................................................................................ 111
Super data types................................................................................................................................ 111
Element data types............................................................................................................................ 112
XML configuration files............................................................................................................................ 112
XML document and data type mapping.................................................................................................. 112
Creating XML data types..........................................................................................................................113
Create data types scripts.........................................................................................................................113
Data type mappings.................................................................................................................................114
Setting up mappings for XML files and strings.................................................................................. 115
Setting up mappings for XML over HTTP........................................................................................... 116
Reading XML documents ........................................................................................................................ 117
Retrieving the document data item................................................................................................... 117
Retrieving the root level element data item...................................................................................... 118
Retrieving child element data items..................................................................................................118
Accessing attribute values................................................................................................................. 119
Sample policies........................................................................................................................................119
XmlStringTestPolicy........................................................................................................................... 120
XmlFileTestPolicy............................................................................................................................... 120

XmlHttpTestPolicy.............................................................................................................................. 121
XmlXsdFileTestPolicy......................................................................................................................... 121

Chapter 11. Working with the SNMP DSA............................................................123


SNMP DSA overview................................................................................................................................ 123
SNMP data model.................................................................................................................................... 123
SNMP data sources............................................................................................................................ 123
SNMP data types................................................................................................................................ 124
SNMP DSA process.................................................................................................................................. 124
Sending data to agents...................................................................................................................... 124
Retrieving data from agents...............................................................................................................125
Sending traps and notifications to managers....................................................................................125
Handling error conditions.................................................................................................................. 125
Handling timeouts..............................................................................................................................125
Installing MIB files...................................................................................................................................125
Working with SNMP data sources........................................................................................................... 125
Creating SNMP data sources............................................................................................................. 126
Editing SNMP data sources................................................................................................................ 127
Deleting an SNMP data source.......................................................................................................... 127
Working with SNMP data types............................................................................................................... 128
Creating SNMP data types................................................................................................................. 128
Editing SNMP data types....................................................................................................................129
Deleting SNMP data types................................................................................................................. 130
SNMP policies.......................................................................................................................................... 130
Setting packed OID data with standard data-handling functions.................................................... 130
Setting packed OID data with SNMP functions................................................................................. 133
Retrieving packed OID data from SNMP agents................................................................................133
Retrieving table data from SNMP agents...........................................................................................135
Sending SNMP traps and notifications.............................................................................................. 136
SNMP functions....................................................................................................................................... 137
SNMPGetAction..................................................................................................................................138
SNMPGetNextAction.......................................................................................................................... 142
SNMPSetAction.................................................................................................................................. 146
SnmpTrapAction................................................................................................................................. 150

Chapter 12. Working with the ITNM DSA............................................................ 153


ITNM DSA overview................................................................................................................................. 153
Setting up the DSA...................................................................................................................................153
Editing the DSA properties file...........................................................................................................153
Running the ITNM event listener service for the DSA.......................................................................154
ITNM DSA data type................................................................................................................................ 155
ExtraInfo field.....................................................................................................................................155
Writing policies using the ITNM DSA.......................................................................................................156
GetByFilter......................................................................................................................................... 156
Writing policies to receive events from ITNM................................................................................... 157
Sample policies........................................................................................................................................158
ITNMSampleListenerPolicy............................................................................................................... 158
ITNMSamplePolicy.............................................................................................................................158

Chapter 13. Working with the socket DSA...........................................................159


Socket DSA overview............................................................................................................................... 159
Socket server........................................................................................................................................... 159
Data model...............................................................................................................................................159
Process.....................................................................................................................................................159
Setting up the socket DSA....................................................................................................................... 159
Writing socket DSA policies.....................................................................................................................159
Using the sample socket server.............................................................................................................. 159

Implementing a custom socket server................................................................................................... 160
Socket DSA data model........................................................................................................................... 160
Socket DSA data source.....................................................................................................................160
Socket DSA data types....................................................................................................................... 160
Configuring the socket DSA..................................................................................................................... 160
Writing socket DSA policies.....................................................................................................................161
Retrieving data by filter...................................................................................................................... 161
Retrieving data by key........................................................................................................................ 162
Retrieving data by links...................................................................................................................... 163
Sending data.......................................................................................................................................164
Working with the sample socket server.................................................................................................. 164
Setting up the sample socket server................................................................................................. 164
Sample socket server components................................................................................................... 164
Running the sample socket server.................................................................................................... 166
Testing the socket server................................................................................................................... 167
Implementing a custom socket server................................................................................................... 167
Creating a socket................................................................................................................................167
Waiting for DSA connections..............................................................................................................168
Performing handshaking with the DSA.............................................................................................. 168
Listening for operation requests from the socket DSA..................................................................... 168
Requesting operation parameters from the socket DSA.................................................................. 168
Performing operations requested by the DSA...................................................................................170
Returning operation results to the DSA.............................................................................................170
Socket DSA and socket server connection state.................................................................................... 170
Socket server threading...........................................................................................................................170

Appendix A. Notices.......................................................................................... 173


Trademarks.............................................................................................................................................. 174

Index................................................................................................................ 177

About this publication
The Netcool/Impact DSA Reference Guide contains information about Impact data source adapters
(DSAs).

Intended audience
This publication is for users who are responsible for creating Netcool/Impact data models and writing
Netcool/Impact policies.

Publications
This section lists publications in the Netcool/Impact library and related documents. The section also
describes how to access Tivoli® publications online and how to order Tivoli publications.

Netcool/Impact library
• Administration Guide
Provides information about installing, running and monitoring the product.
• Policy Reference Guide
Contains complete description and reference information for the Impact Policy Language (IPL).
• DSA Reference Guide
Provides information about data source adapters (DSAs).
• Operator View Guide
Provides information about creating operator views.
• Solutions Guide
Provides end-to-end information about using features of Netcool/Impact.

Accessing terminology online


The IBM® Terminology Web site consolidates the terminology from IBM product libraries in one
convenient location. You can access the Terminology Web site at the following Web address:
http://www.ibm.com/software/globalization/terminology

Accessing publications online


Publications are available from the following locations:
• The Quick Start DVD contains the Quick Start Guide. Refer to the readme file on the DVD for instructions
on how to access the documentation.
• IBM Knowledge Center web site at http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/topic/
com.ibm.netcoolimpact.doc6.1.1/welcome.html. IBM posts publications for all Tivoli products to the Tivoli Information Center Web site as they become available and whenever they are updated.
Note: If you print PDF documents on paper other than letter-sized paper, set the option in the File →
Print window that allows Adobe Reader to print letter-sized pages on your local paper.
• Tivoli Documentation Central at http://www.ibm.com/tivoli/documentation. You can access publications
of the previous and current versions of Netcool/Impact from Tivoli Documentation Central.

• The Netcool/Impact wiki contains additional short documents and additional information
and is available at: https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/
Tivoli%20Netcool%20Impact/page/Overview%20and%20Planning

Ordering publications
You can order many Tivoli publications online at http://www.elink.ibmlink.ibm.com/publications/servlet/
pbi.wss.
You can also order by telephone by calling one of these numbers:
• In the United States: 800-879-2755
• In Canada: 800-426-4968
In other countries, contact your software account representative to order Tivoli publications. To locate the
telephone number of your local representative, perform the following steps:
1. Go to http://www.elink.ibmlink.ibm.com/publications/servlet/pbi.wss.
2. Select your country from the list and click Go.
3. Click About this site in the main panel to see an information page that includes the telephone number
of your local representative.

Accessibility
Accessibility features help users with a physical disability, such as restricted mobility or limited vision,
to use software products successfully. In this release, the Netcool/Impact console does not meet all the
accessibility requirements.

Tivoli technical training


For Tivoli technical training information, refer to the IBM Tivoli Education Web site at http://www.ibm.com/software/tivoli/education.

Support for problem solving


If you have a problem with your IBM software, you want to resolve it quickly. This section describes the
following options for obtaining support for IBM software products:
• “Obtaining fixes” on page x
• “Receiving weekly support updates” on page xi
• “Contacting IBM Software Support ” on page xi

Obtaining fixes
A product fix might be available to resolve your problem. To determine which fixes are available for your
Tivoli software product, follow these steps:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/support.
2. Navigate to the Downloads page.
3. Follow the instructions to locate the fix you want to download.
4. If there is no Download heading for your product, supply a search term, error code, or APAR number in
the search field.
For more information about the types of fixes that are available, see the IBM Software Support Handbook
at http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html.

Receiving weekly support updates
To receive weekly e-mail notifications about fixes and other software support news, follow these steps:
1. Go to the IBM Software Support Web site at http://www.ibm.com/software/support.
2. Click My IBM in the toolbar. Click My technical support.
3. If you have already registered for My technical support, sign in and skip to the next step. If you have
not registered, click register now. Complete the registration form using your e-mail address as your
IBM ID and click Submit.
4. The Edit profile tab is displayed.
5. In the first list under Products, select Software. In the second list, select a product category (for
example, Systems and Asset Management). In the third list, select a product sub-category (for
example, Application Performance & Availability or Systems Performance). A list of applicable
products is displayed.
6. Select the products for which you want to receive updates.
7. Click Add products.
8. After selecting all products that are of interest to you, click Subscribe to email on the Edit profile
tab.
9. In the Documents list, select Software.
10. Select Please send these documents by weekly email.
11. Update your e-mail address as needed.
12. Select the types of documents you want to receive.
13. Click Update.
If you experience problems with the My technical support feature, you can obtain help in one of the
following ways:
Online
Send an e-mail message to [email protected], describing your problem.
By phone
Call 1-800-IBM-4You (1-800-426-4409).
World Wide Registration Help desk
For worldwide support information, check the details at the following link: https://www.ibm.com/
account/profile/us?page=reghelpdesk

Contacting IBM Software Support


Before contacting IBM Software Support, your company must have an active IBM software maintenance
contract, and you must be authorized to submit problems to IBM. The type of software maintenance
contract that you need depends on the type of product you have:
• For IBM distributed software products (including, but not limited to, Tivoli, Lotus®, and Rational®
products, and DB2® and WebSphere® products that run on Windows or UNIX operating systems), enroll
in Passport Advantage® in one of the following ways:
Online
Go to the Passport Advantage Web site at http://www-306.ibm.com/software/howtobuy/
passportadvantage/pao_customers.htm .
By phone
For the phone number to call in your country, go to the IBM Worldwide IBM Registration Helpdesk
Web site at https://www.ibm.com/account/profile/us?page=reghelpdesk.
• For customers with Subscription and Support (S & S) contracts, go to the Software Service Request Web
site at https://techsupport.services.ibm.com/ssr/login.

• For customers with IBMLink, CATIA, Linux®, OS/390®, iSeries, pSeries, zSeries, and other support
agreements, go to the IBM Support Line Web site at http://www.ibm.com/services/us/index.wss/so/its/
a1000030/dt006.
• For IBM eServer™ software products (including, but not limited to, DB2 and WebSphere products
that run in zSeries, pSeries, and iSeries environments), you can purchase a software maintenance
agreement by working directly with an IBM sales representative or an IBM Business Partner. For more
information about support for eServer software products, go to the IBM Technical Support Advantage
Web site at http://www.ibm.com/servers/eserver/techsupport.html.
If you are not sure what type of software maintenance contract you need, call 1-800-IBMSERV
(1-800-426-7378) in the United States. From other countries, go to the contacts page of the
IBM Software Support Handbook on the Web at http://www14.software.ibm.com/webapp/set2/sas/f/
handbook/home.html and click the name of your geographic region for phone numbers of people who
provide support for your location.
To contact IBM Software support, follow these steps:
1. “Determining the business impact” on page xii
2. “Describing problems and gathering information” on page xii
3. “Submitting problems” on page xii

Determining the business impact


When you report a problem to IBM, you are asked to supply a severity level. Use the following criteria to
understand and assess the business impact of the problem that you are reporting:
Severity 1
The problem has a critical business impact. You are unable to use the program, resulting in a critical
impact on operations. This condition requires an immediate solution.
Severity 2
The problem has a significant business impact. The program is usable, but it is severely limited.
Severity 3
The problem has some business impact. The program is usable, but less significant features (not
critical to operations) are unavailable.
Severity 4
The problem has minimal business impact. The problem causes little impact on operations, or a
reasonable circumvention to the problem was implemented.

Describing problems and gathering information


When describing a problem to IBM, be as specific as possible. Include all relevant background
information so that IBM Software Support specialists can help you solve the problem efficiently. To save
time, know the answers to these questions:
• Which software versions were you running when the problem occurred?
• Do you have logs, traces, and messages that are related to the problem symptoms? IBM Software
Support is likely to ask for this information.
• Can you re-create the problem? If so, what steps were performed to re-create the problem?
• Did you make any changes to the system? For example, did you make changes to the hardware,
operating system, networking software, and so on?
• Are you currently using a workaround for the problem? If so, be prepared to explain the workaround
when you report the problem.

Submitting problems
You can submit your problem to IBM Software Support in one of two ways:

Online
Click Submit and track problems on the IBM Software Support site at http://www.ibm.com/software/
support/probsub.html. Type your information into the appropriate problem submission form.
By phone
For the phone number to call in your country, go to the contacts page of the IBM Software Support
Handbook at http://www14.software.ibm.com/webapp/set2/sas/f/handbook/home.html and click the
name of your geographic region.
If the problem you submit is for a software defect or for missing or inaccurate documentation, IBM
Software Support creates an Authorized Program Analysis Report (APAR). The APAR describes the
problem in detail. Whenever possible, IBM Software Support provides a workaround that you can
implement until the APAR is resolved and a fix is delivered. IBM publishes resolved APARs on the
Software Support Web site daily, so that other users who experience the same problem can benefit from
the same resolution.

Conventions used in this publication


This publication uses several conventions for special terms and actions, operating system-dependent
commands and paths, and margin graphics.

Typeface conventions
This publication uses the following typeface conventions:
Bold
• Lowercase commands and mixed case commands that are otherwise difficult to distinguish from
surrounding text
• Interface controls (check boxes, push buttons, radio buttons, spin buttons, fields, folders, icons,
list boxes, items inside list boxes, multicolumn lists, containers, menu choices, menu names, tabs,
property sheets), labels (such as Tip:, and Operating system considerations:)
• Keywords and parameters in text
Italic
• Citations (examples: titles of publications, diskettes, and CDs)
• Words defined in text (example: a nonswitched line is called a point-to-point line)
• Emphasis of words and letters (words as words example: "Use the word that to introduce a
restrictive clause."; letters as letters example: "The LUN address must start with the letter L.")
• New terms in text (except in a definition list): a view is a frame in a workspace that contains data.
• Variables and values you must provide: ... where myname represents....
Monospace
• Examples and code examples
• File names, programming keywords, and other elements that are difficult to distinguish from
surrounding text
• Message text and prompts addressed to the user
• Text that the user must type
• Values for arguments or command options

PDF code examples with single quotation marks


How to resolve issues with PDF code examples with single quotation marks.
Throughout the documentation, there are code examples that you can copy and paste into the product.
When code or policy examples that contain single quotation marks are copied from the PDF
documentation, the single quotation marks are not preserved. You need to correct them
manually. To avoid this issue, copy and paste the code example content from the HTML version of the documentation.
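For example, a policy statement like the following illustration relies on straight single quotation marks that can be lost when the line is copied from the PDF and must be retyped. The variable and message are illustrative only and are not taken from the product documentation.

// Illustrative policy lines; not taken from the product documentation.
Location = 'New York';
Log('Node location is ' + Location);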

Operating system-dependent variables and paths


This publication uses the UNIX convention for specifying environment variables and for directory notation.
When you use the Windows command line, replace the $variable with the %variable% for environment
variables and replace each forward slash (/) with a backslash (\) in directory paths. The names of
environment variables are not always the same in the Windows and UNIX environments. For example,
%TEMP% in Windows environments is equivalent to $TMPDIR in UNIX environments.
Note: If you are using the bash shell on a Windows system, you can use the UNIX conventions.
• On UNIX systems, the default installation directory is /opt/IBM/tivoli/impact.
• On Windows systems, the default installation directory is C:\Program Files\IBM\Tivoli\impact.
Windows information, steps, and processes are documented when they differ from UNIX systems.
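For example, applying these conventions, the $IMPACT_HOME/dsa directory that is referenced later in this guide is written as follows on each platform:

UNIX: $IMPACT_HOME/dsa
Windows: %IMPACT_HOME%\dsa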

Chapter 1. Managing DSAs
DSAs are software components that you use to communicate with external data sources. DSAs broker
information to and from SQL databases, LDAP servers, JMS topics and queues, and software systems that
allow communication through web services APIs. You also use DSAs to parse XML strings and documents,
communicate with web servers through HTTP, and communicate with custom applications through Java
APIs.

Chapter 2. Data source adapters (DSA)

Data source adapters (DSA) are software components that are used to communicate with external data
sources.

Categories of DSAs
DSAs fall into the following categories:
SQL database DSAs
SQL database DSAs are used to access information stored in SQL database data sources. For more
information about SQL database DSAs, see “Working with SQL database DSAs” on page 4.
LDAP DSA
The LDAP DSA is used to access information stored in an LDAP server. For more information about
the LDAP DSA, see Chapter 5, “Working with the LDAP DSA,” on page 37.
Mediator DSAs
Mediator DSAs are used to communicate with various third-party applications or generic data
interfaces such as a Web services API, SNMP, or custom interfaces. For more information about
Mediator DSAs, see “Mediator DSAs” on page 3.

Mediator DSAs
Mediator DSAs are used to communicate with various third-party applications or generic data interfaces
such as a Web services API or custom interfaces.
Some Mediator DSAs are built-in DSAs and do not require any additional installation or configuration.
Other Mediator DSAs require you to manually install and configure them.
Table 1 on page 3 lists the provided built-in Mediator DSAs:

Table 1. Mediator DSAs

Mediator DSA For more information, see

JMS DSA Chapter 8, “Working with the JMS DSA,” on page 85

XML DSA Chapter 10, “Working with the XML DSA,” on page 111

SNMP DSA Chapter 11, “Working with the SNMP DSA,” on page 123

ITNM DSA Chapter 12, “Working with the ITNM DSA,” on page 153

The following Mediator DSAs are provided but you must install and configure them independently of the
application:
• Alcatel 5620 DSA
• GE Smallworld DSA

Managing data models


A data model is a model of the business data and metadata that is used in an Netcool/Impact solution.
DSA (Data Source Adapter) data models are sets of data sources, data types, and data items that
represent information that is managed by the internal data repository or an external source of data. For
each category of DSA, the data model represents different structures and units of data that are stored

or managed by the underlying source. For example, for SQL database DSAs, data sources represent
databases; data types represent database tables; and data items represent rows in a database table.
The following DSAs store some of their configuration in the $IMPACT_HOME/dsa directory: Web Services, SNMP, ITNM (Precision), and XML. In a clustered environment, the $IMPACT_HOME/dsa directory is replicated from the primary server to the secondary servers in the cluster during startup.
If you are changing these directories and configurations, it is best to make the changes on the primary server while the servers are down. When the changes are complete, start the primary server followed by the secondary servers in the cluster. Some of the changes replicate in real time, for example if you use the Web Services and XML wizards. There is also a directory, $IMPACT_HOME/dsa/misc, where you can store scripts and flat files, for example; its contents are replicated across the cluster during startup of the secondary servers, which retrieve this data from the primary server.

Event readers
Event readers are services that query a data source at intervals for events and then run a policy that is
based on the incoming event data.
Two types of event readers are provided: standard event readers and database event readers. Standard
event readers query a Netcool/OMNIbus ObjectServer database by using the ObjectServer DSA. Database
event readers query other relational databases by using other types of SQL database DSAs.
The default event reader configuration is sufficient when you process an event flow of around 500 events
per second. To enrich the event flow at any time, adjust the following parameters in the event processor
properties file.

impact.perftesteventreader.objectserver.maxtoreadperquery=2000
impact.perftesteventreader.objectserver.polltime=1500 (polling interval 1500 ms)
impact.perftesteventreader.maxqueuesize=4000

Important: Increasing the read rate will also increase the memory and performance requirements for the
event reader. If the query size is set too large, the event reader may suffer an out-of-memory exception in the case of an event flood.
For more information about the event processor, see the Event processor commands in the Administration
Guide and Configuring the event processor service in the online help.

Event listeners
Event listeners are services that listen for incoming communication from an external data source through
a DSA.
Event listeners are implemented by certain DSAs that provide the means for asynchronous exchange of
data with the underlying sources of data. These DSAs include the database listener service for some SQL
database DSAs (such as the Oracle DSA), and the OMNIbusEventListener for OMNIbus version 7.2 and later. They
also include listeners for Web services, JMS, and ITNM.

Policies
DSA policies are policies that contain instructions for interacting with a data source using a DSA. These
policies contain calls to data-handling functions (such as GetByFilter) or DSA-specific functions that
are instructions to send or retrieve information to and from the external data sources.

Working with SQL database DSAs


SQL database DSAs (data source adapters) are used to retrieve information from relational databases.
SQL database DSAs are also used to retrieve information from other types of data sources (such as Netcool/
OMNIbus ObjectServers and character-delimited files), and from data sources that provide a public interface
through JDBC (Java Database Connectivity). They are also used to add, modify, and delete information
stored in these data sources.
The SQL database DSAs are direct-mode DSAs that run in-process with the Impact Server. SQL database
DSAs are built-in DSAs and do not require installation or configuration, but they require a JDBC driver to
access data in the database. Only these SQL database DSAs have JDBC drivers provided automatically
with Netcool/Impact:
• DB2
• Derby
• Informix
• ObjectServer
• Oracle
• PostgreSQL
Before you can use any other SQL database DSA, you must add its JDBC drivers to the class path. For a
detailed procedure, see “Adding JDBC drivers and third-party JAR files to the shared library” on page 7.
You use SQL database DSAs by creating a data model, and writing policies. For more information, see
“SQL database data model” on page 9, and “SQL database policies” on page 11.

List of provided SQL database DSAs


This topic provides a list and a brief overview of the SQL database DSAs.
For information about how to add or update JDBC drivers see “Adding JDBC drivers and third-party JAR
files to the shared library” on page 7.
DB2 DSA
This DSA is used to retrieve, add, modify and delete information stored in DB2. It is also used to run
DB2 database stored procedures.
Derby DSA
The Apache Derby database is used to store the underlying data that is used by the GUI reporting
tools and Netcool/Impact solutions such as Maintenance Window Management.
The Apache Derby database is used to store other information that is used by Netcool/Impact. For
more information about Apache Derby, see http://db.apache.org/derby/.
Flat File DSA
You use the Flat File DSA to read information in a character-delimited text file.
You cannot use the Flat File DSA to write information to a text file. The Flat File DSA supports only the
"AND" operator in flat file data type queries. You cannot use the "OR" operator to work with flat file
data types. The flat file data source can be accessed like an SQL data source by using standard SQL
commands in Netcool/Impact, for example, DirectSQL.
Use an SQL database to run more complex queries. If you have to use the Flat File DSA, run multiple
queries that do not require the use of the "OR" operator.
Restriction: The Flat File DSA is intended for use in demonstrating and testing Netcool/Impact and
for infrequently accessing small amounts of data that is stored in a text file. Use of text files and the
Flat File DSA is not an effective substitute for the use of a conventional relational database and an SQL
database DSA. The Flat File DSA offers slower performance when compared to other DSAs.
Generic SQL DSA
This DSA is used to retrieve, add, modify and delete information stored in the database. To use the
Generic SQL DSA, you must specify its JDBC driver in the Generic SQL data source configuration
window.
HSQLDB DSA
You use the HSQLDB DSA to retrieve, add, modify, and delete information stored in an HSQL database.

Informix® DSA
This DSA is used to retrieve, add, modify and delete information stored in an Informix database.
MySQL DSA
This DSA is used to retrieve, add, modify, and delete information stored in a MySQL database.
MS-SQL Server DSA
This DSA is used to retrieve, add, modify, and delete information stored in an MS-SQL Server database.
It is also used to run MS-SQL Server stored procedures.
ObjectServer DSA
You use the ObjectServer DSA to access information in the Netcool/OMNIbus ObjectServer.
ODBC DSA
Use the ODBC DSA to access information in an ODBC database.
Oracle DSA
The Oracle DSA is used to retrieve, add, modify, and delete information that is stored in an Oracle
database. It is also used to run Oracle database stored procedures.
PostgreSQL DSA
Netcool/Impact uses this DSA to retrieve, add, modify, and delete information stored in a PostgreSQL
database.
Sybase DSA
This DSA is used to retrieve, add, modify, and delete information stored in a Sybase database. It is
also used to run Sybase stored procedures.

Returning an array class object in a policy


This topic describes how to return an array class object in a policy rather than a String.
By default, array columns returned by DirectSQL are converted to String format when returned in a policy.
This default behavior can be overridden to return an array class object. The exact class depends
on the JDBC driver in use for the DirectSQL data source. For example, the PostgreSQL JDBC driver
supports the array keyword in SQL select statements, such as select array[column1,column2]
as myarray from schema.table, which returns the class org.postgresql.jdbc4.Jdbc4Array.
To enable this behavior on a system wide level for all policies, set the following property in the etc/
<SERVER>_server.props file:

impact.directsql.preservearrays=true

This behavior can also be set on a policy level by setting the variable DIRECTSQL_PRESERVE_ARRAYS in
the policy.
For example, by setting:
DIRECTSQL_PRESERVE_ARRAYS="true";
or
DIRECTSQL_PRESERVE_ARRAYS="false";
Where DIRECTSQL_PRESERVE_ARRAYS is set at a policy level, it takes precedence over the global
setting for that policy.
For example, if the global setting is impact.directsql.preservearrays=true but the policy has
DIRECTSQL_PRESERVE_ARRAYS="false"; then any array column is returned as a String for the
policy.
Note: The setting of DIRECTSQL_PRESERVE_ARRAYS must come before the DirectSQL call in the policy
for it to be effective.
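As a minimal IPL sketch (the PG01 data source name and the query are assumptions for illustration), the variable is set before the DirectSQL call:

// Preserve array objects for this policy only; must precede the DirectSQL call
DIRECTSQL_PRESERVE_ARRAYS = "true";

// PG01 is a hypothetical PostgreSQL data source
Rows = DirectSQL("PG01", "select array[column1,column2] as myarray from schema.table", false);

// myarray now holds the driver's array class (for example, org.postgresql.jdbc4.Jdbc4Array)
// rather than a String
Log(Rows[0].myarray);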

Adding JDBC drivers and third-party JAR files to the shared library
Use this procedure to add a JDBC driver or third-party Java archive (JAR) files to the Netcool/Impact
shared library.

About this task


• Netcool/Impact comes pre-installed with JDBC drivers for several DSAs. Refer to the Installed JDBC
drivers table for the complete list. These drivers can be found under $IMPACT_HOME/lib3p.
• If the driver is not pre-installed, refer to the procedure on how to add the driver to your Netcool/Impact
installation. You must copy the required JDBC drivers to the $IMPACT_HOME/dsalib directory.
• If you have an existing licensed version of a JDBC driver you can add it to the $IMPACT_HOME/dsalib
directory and restart the Impact Server. Netcool/Impact uses that driver to establish connection to the
target database.
If you need any additional third-party .jar files, for example, some JDBC drivers, you must download
them from your vendor. You can also copy any third-party JAR files that you require to the same
directory. For example, if you have specific Java classes that you want to use with Java policy functions in
Netcool/Impact, add the JAR files to this directory.

Table 2. Installed JDBC drivers

DB2 (preinstalled: Yes)
The DB2 DSA supports versions 9.7, 9.8, 10.1, 10.5, 11.1, and 11.5 of the DB2 database.
Derby (preinstalled: Yes)
The Derby DSA uses the Apache Derby JDBC driver version 10.8.3.3. For more information about
Apache Derby, refer to http://db.apache.org/derby/.
Generic SQL (preinstalled: No)
To use the Generic SQL DSA, you must specify its JDBC driver in the Generic SQL data source
configuration window.
HSQLDB (preinstalled: No)
The DSA supports version 2.0 of the HSQL database server.
Note: You must download the HSQLDB JDBC driver from the following URL: http://hsqldb.org
Informix (preinstalled: Yes)
The Informix DSA supports versions 11.x and 12.x.
MySQL (preinstalled: No)
The DSA supports versions 5.x and 8 of MySQL.
Note: JDBC drivers for MySQL can be obtained from https://www.mysql.com.
MS-SQL Server (preinstalled: No)
The DSA supports MS-SQL Server 2008, 2012, 2014, 2017, and 2019.
ObjectServer (preinstalled: Yes)
The DSA supports versions 7.3, 7.4, and 8.1 of the Netcool/OMNIbus ObjectServer.
Oracle (preinstalled: Yes)
The Oracle DSA supports versions 11g, 12c, 18c, and 19c of the Oracle database server.
PostgreSQL (preinstalled: Yes)
The PostgreSQL DSA supports versions 8.x, 9.x, 10.x, and 11.5 of the PostgreSQL database.
Sybase (preinstalled: Yes)
The Sybase DSA supports versions 12.x, 15.x, and 16.x of the SAP ASE/Sybase Database Server.

Procedure
1. Obtain the appropriate JDBC driver according to the DSA specification, or the third-party JAR files that you require.
2. Copy the JDBC driver or third-party JAR files to the $IMPACT_HOME/dsalib directory.
This directory is created during the installation and might already contain files.
3. Restart the Impact Server.

What to do next
In a clustered configuration, you must repeat this procedure for each server in the cluster because files in
the $IMPACT_HOME/dsalib directory are not replicated between cluster members. Stop all the servers
in the cluster while you perform this procedure.

Changing the character set encoding for the database connection


Use this procedure to change the default character set encoding (UTF-8) that is used in establishing a
connection to the SQL database.

Procedure
1. In the $IMPACT_HOME/etc directory, create a properties file for the DSA for which you want to change
the default character set encoding.
The properties filename must have the following format:

servername_drivermainclass.props

where servername is the name of your Impact Server, and drivermainclass is the class name of the
JDBC driver to connect to the SQL database.
For example, you will create the NCI_org.gjt.mm.mysql.Driver.props file, if the name of your
Impact Server is NCI, and if it is connecting to the MySQL database.
Remember: You can get the drivermainclass values for other SQL databases, from their JDBC
documentation.
2. Add a CHARSET=encoding property to the properties file.
For example, CHARSET=EUC_JP.
3. Restart the Impact Server.
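For example, if the Impact Server is named NCI and it connects to MySQL through the org.gjt.mm.mysql.Driver class, as described above, the resulting $IMPACT_HOME/etc/NCI_org.gjt.mm.mysql.Driver.props file would contain, at minimum, the line:

CHARSET=EUC_JP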

Setting the useJDBC4ColumnNameAndLabelSemantics property for DB2
JDBC 4
Use this procedure to change the default value set for the useJDBC4ColumnNameAndLabelSemantics
property that is used in establishing a connection to the DB2 database.

About this task


This property specifies how the IBM Data Server Driver for JDBC and SQLJ handles column
labels in ResultSetMetaData.getColumnName, ResultSetMetaData.getColumnLabel, and
ResultSet.findColumn method calls.

Procedure
1. In the $IMPACT_HOME/etc directory, create or modify a properties file for the DB2 DSA for which
you want to change the useJDBC4ColumnNameAndLabelSemantics property.
The properties filename should have the following format:
impact_server_com.ibm.db2.jcc.DB2Driver.props, where impact_server is the name of
your Impact Server, for example, NCI_com.ibm.db2.jcc.DB2Driver.props.
2. Set the useJDBC4ColumnNameAndLabelSemantics property in the properties
file to a value appropriate for your system. The default value is
DB2BaseDataSource.NO (2). For details about valid values for the
useJDBC4ColumnNameAndLabelSemantics property, see https://www.ibm.com/support/
knowledgecenter/en/SSEPGG_11.5.0/com.ibm.db2.luw.apdv.java.doc/src/tpc/imjcc_r0052607.html.
3. Restart the Impact Server.
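As a minimal sketch, assuming an Impact Server named NCI and that you want JDBC 4 column name and label semantics (DB2BaseDataSource.YES, which the IBM documentation lists as the value 1), the $IMPACT_HOME/etc/NCI_com.ibm.db2.jcc.DB2Driver.props file would contain:

useJDBC4ColumnNameAndLabelSemantics=1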

Enabling Kerberos authentication for Oracle


Use this procedure to enable Kerberos authentication for Oracle.

Procedure
1. Set up your Oracle database to support Kerberos. See documentation from Oracle for details.
2. Add the connection properties to your Netcool/Impact Oracle data source.
a. Edit your data source file located in the following directory: $IMPACT_HOME/etc
b. Add the following properties:

<YourOracleDataSourceName>.Oracle.NUMDSPROPERTIES=3
<YourOracleDataSourceName>.Oracle.DSPROPERTY.1.NAME=CONNECTION_PROPERTY_THIN_NET_AUTHENTICATION_SERVICES
<YourOracleDataSourceName>.Oracle.DSPROPERTY.1.VALUE=KERBEROS
<YourOracleDataSourceName>.Oracle.DSPROPERTY.2.NAME=CONNECTION_PROPERTY_THIN_NET_AUTHENTICATION_KRB5_MUTUAL
<YourOracleDataSourceName>.Oracle.DSPROPERTY.2.VALUE=true
<YourOracleDataSourceName>.Oracle.DSPROPERTY.3.NAME=CONNECTION_PROPERTY_THIN_NET_AUTHENTICATION_KRB5_CC_NAME
<YourOracleDataSourceName>.Oracle.DSPROPERTY.3.VALUE=/tmp/krb5cc_5088

3. Restart the Impact Server for the changes to take effect.

SQL database data model


An SQL database data model is an abstract representation of data stored in an underlying relational
database or other data source that can be accessed through JDBC.
SQL database data models consist of SQL database data sources, SQL database data types, and SQL
database data items.

SQL database data sources
An SQL database data source represents a relational database or another source of data that can be
accessed using an SQL database DSA.
A wide variety of commercial relational databases are supported, such as Oracle, Sybase, and Microsoft
SQL Server. In addition, freely available databases like MySQL and PostgreSQL are also supported. The
Netcool/OMNIbus ObjectServer is also supported as an SQL data source.
The configuration properties for the data source specify connection information for the underlying source
of data. Some examples of SQL database data sources are:
• A DB2 database
• A MySQL database
• An application that provides a generic ODBC interface
• A character-delimited text file
You create SQL database data sources using the GUI. You must create one such data source for each
database that you want to access. When you create an SQL database data source, you need to specify
such properties as the host name and port where the database server is running, and the name of the
database. For the flat file DSA and other SQL database DSAs that do not connect to a database server, you
must specify additional configuration properties.
Note that SQL database data sources are associated with databases rather than database servers. For
example, an Oracle database server can host one or a dozen individual databases. Each SQL database
data source can be associated with one and only one database.

SQL database data types


An SQL database data type represents a table in a relational database or a similar structure that contains
sets of data (like an Oracle view or a list of rows in a comma-delimited text file).
The configuration properties for the data type specify the structure and contents of data stored in the
table. Some examples of SQL database data types are:
• A DB2 database table
• A MySQL database table
• The contents of a character-delimited text file
Each SQL database data type contains a set of fields that correspond to columns in the database table
(or structured categories of data in other types of data sources). The data type can contain fields that
represent all of the columns or a subset of the columns in the table.
You create SQL database data types using the GUI. You must create one such data type for each database
table that you want to access.
When you create an SQL database data type, you need to specify such properties as the table name and
the names of the table columns that you want to include in the data type. For the flat file DSA, you must
specify additional configuration properties.

SQL database data items


An SQL database data item represents a table row in a relational database or another set of data (like a
row in a comma-delimited text file).
You use the GUI to view, add, modify, and delete SQL database data items. Typically, however, you use the
tools that are provided by the relational database server (or other third-party tools) to manage the data in
an underlying data source.

SQL database policies
SQL database DSA policies work with data stored in underlying relational databases or other data sources
that can be accessed using an SQL database DSA.
You can perform the following tasks by using a SQL database policy:
• Retrieve data from an SQL database data source
• Add data to an SQL database data source
• Modify data stored in an SQL database data source
• Delete data stored in an SQL database data source
• Call database functions
• Call database stored procedures

Retrieving data from an SQL database data source


The Impact Policy Language (IPL) provides a set of functions that retrieve data from an SQL database data
source based on different criteria.
These functions allow you to retrieve data by key, by filter, and by link, and by directly running SQL
SELECT queries against the underlying database or other source of data. The following table shows the
IPL functions that retrieve SQL database data.

Table 3. IPL Functions that Retrieve SQL Database Data

Function Description

GetByKey Retrieves data items (rows in a table or other data element) whose key fields
match the specified key expression.

GetByFilter Retrieves data items whose field values match the specified SQL filter string.

GetByLinks Retrieves data items that are dynamically or statically linked to another data item
using the GUI.

DirectSQL Retrieves data items by directly running an SQL SELECT query against the
underlying database or other source of data.

For detailed syntax descriptions of these functions, see the Policy Reference Guide.
The following example shows how to use GetByKey to retrieve data items (rows in a table or other data
element) whose key field matches the specified key expression. In this example, the SQL database data
type associated with the table is Customer and the key expression is 12345.

DataType = "Customer";
Key = 12345;
MaxNum = 1;

MyCustomer = GetByKey(DataType, Key, MaxNum);

The following example shows how to use GetByFilter to retrieve data items whose field values match
the specified SQL filter string. In this example, the SQL database data type is Node and the filter string is
Location = 'New York City' AND Facility = 'Manhattan'.

DataType = "Node";
Filter = "Location = 'New York City' AND Facility = 'Manhattan'";
CountOnly = False;

MyNodes = GetByFilter(DataType, Filter, CountOnly);

The following example shows how to use GetByLinks to retrieve data items that have been statically
or dynamically linked to another data item using the Netcool/Impact GUI. In this example, you use
GetByLinks to retrieve data items that are linked to the items of type Node returned in the previous
example.

DataType = {"Customer"};
Filter = "";
MaxNum = 1000;
DataItems = MyNodes;

MyCustomers = GetByLinks(DataType, Filter, MaxNum, DataItems);

Adding data to an SQL database data source


You can use the AddDataItem function to add data to an SQL database data source.
The following example shows how to use AddDataItem to add a row to an SQL database table that
is represented by the User data type. In this example, Name, Location, Facility, and Email are
columns in the database table.

DataType = "User";

MyUser = NewObject();

MyUser.Name = "John Smith";


MyUser.Location = "New York City";
MyUser.Facility = "Manhattan";
MyUser.Email = "[email protected]";

AddDataItem(DataType, MyUser);

For a detailed syntax description of this function, see the Policy Reference Guide.

Modifying data stored in an SQL database data source


You can use the BatchUpdate function to modify the data that is stored in an SQL database. You can
also modify data stored in an SQL database by assigning values to member variables of data items that
were previously retrieved by using the GetByKey, GetByFilter, or GetByLinks functions.
The following example shows how to modify a row in an SQL database table by assigning values to
member variables of a data item that was previously retrieved by using the GetByFilter function. In
this example, the Customer data type represents a table in the underlying database and the Name,
Location, and Facility fields represent columns in the table.

DataType = "Customer";
Filter = "Name = 'John Smith'";
CountOnly = "False";

MyCustomer = GetByFilter(DataType, Filter, CountOnly);

MyCustomer[0].Location = "Raleigh";
MyCustomer[0].Facility = "FAC_01";

The following example shows how to modify multiple rows in an SQL database table by using the
BatchUpdate function. In this example, you update the Location and Facility columns in the table for
each row where the value of Location is New York City.

DataType = "Customer";
Filter = "Location = 'New York City'";
UpdateExpression = "Location = 'Raleigh' AND Facility = 'FAC_01'";

BatchUpdate(DataType, Filter, UpdateExpression);

For more information about using these methods to modify SQL database data, see the Policy Reference
Guide.

Deleting data stored in an SQL database data source
You can use policies to delete data that is stored in an SQL database data source by using the
DeleteDataItem, or BatchDelete functions.
With these functions, you can delete either a single row or data element, or multiple rows. The following
table shows the IPL functions that delete SQL database data.

Table 4. IPL Functions that Delete SQL Database Data

Function Description

DeleteDataItem Deletes a single data item which is a row in a table or other data element.

BatchDelete Deletes one or more data items whose field values match the specified SQL
filter string.

The following example shows how to delete a row in a database table by using the DeleteDataItem
function. In this example, you first retrieve the data item that represents the row by using the GetByKey
function and then call DeleteDataItem.

DataType = "Node";
Key = "DB2_01";
MaxNum = 1;

MyNode = GetByKey(DataType, Key, MaxNum);

DeleteDataItem(MyNode[0]);

The following example shows how to delete multiple rows from a database table by using the BatchDelete
function. In this example, you delete all rows from the table that is represented by the User data type,
where the value of the Location column is New York City.

DataType = "User";
Filter = "Location = 'New York City'";

BatchDelete(DataType, Filter, NULL);

For more information about using these functions to delete SQL database data, see the Policy Reference
Guide.

Calling database functions


You can use the CallDBFunction to call any SQL function that is defined by the database server.
SQL functions vary per database. For a list of functions that are supported by a specific database server,
see the documentation provided by the software vendor.
The following example shows how to call a database function named NOW() and return the results of the
function for use in a policy.

// Call CallDBFunction and pass the name of a data type, a filter
// string and the function expression

DataType = "Server";
Filter = "0 = 0";
Metric = "NOW()";

DBTime = CallDBFunction(DataType, Filter, Metric);

For a detailed syntax description of the CallDBFunction function, see the Policy Reference Guide.

Calling database stored procedures
You can use the CallStoredProcedure function to call Oracle, Sybase, DB2, and SQL Server database
stored procedures.
The following example shows how to call a Sybase stored procedure named GetCustomerByLocation.
In this example, the Sybase database is represented by the data source SYB_03.

Sp_Parameter = NewObject();
Sp_Parameter.CustType = "Platinum";
Sp_Parameter.Location = "Mumbai";

DataSource = "SYB_03";
ProcName = "GetCustomerByLocation";

MyResults = CallStoredProcedure(DataSource, ProcName, Sp_Parameter);

For a detailed syntax description of the CallStoredProcedure function, see the Policy Reference Guide.

SQL database DSA failover


Failover is the process by which an SQL database DSA automatically connects to a secondary database
server (or other data source) when the primary server becomes unavailable.
This feature ensures that Netcool/Impact can continue operations despite problems accessing one or
the other server instance. You can configure failover separately for each data source that connects to a
database using an SQL Database DSA.

SQL database DSA failover modes


Standard failover, failback, and disabled failover are supported failover modes for SQL database DSAs.
Standard failover
Standard failover is a configuration in which an SQL database DSA switches to a secondary database
server when the primary server becomes unavailable and then continues using the secondary until
Netcool/Impact is restarted.
Failback
Failback is a configuration in which an SQL database DSA switches to a secondary database server
when the primary server becomes unavailable and then tries to reconnect to the primary at intervals
to determine whether it has returned to availability.
Disabled failover
If failover is disabled for an SQL database DSA, the DSA reports an error to Netcool/Impact when the
database server is unavailable and does not attempt to connect to a secondary server.

Standard failover
Standard failover is a configuration in which an SQL database DSA switches to a secondary database
server when the primary server becomes unavailable and then continues using the secondary until
Netcool/Impact is restarted.
If the secondary server becomes unavailable, the SQL database DSA will attempt to resume connections
to the original primary server.

Failback
Failback is a configuration in which an SQL database DSA switches to a secondary database server
when the primary server becomes unavailable and then tries to reconnect to the primary at intervals to
determine whether it has returned to availability.
If the primary server has become available, the DSA will resume connections using that server. If the
primary has not become available, the DSA will continue to use the secondary server. In a failback
configuration, the SQL database DSA will always attempt to reconnect to the primary server before
making a connection to the secondary.

Setting up DSA failover
You set up failover when you create and configure an SQL database data source in the GUI.

Procedure
You use the data source editor to select a failover configuration for the data source and to specify
connection information for the primary and secondary database servers.
For more information about creating and configuring SQL database data sources, see the online help.

DSA failover defaults


An SQL database DSA determines that a database server is unavailable when it cannot connect to the
database server, or when the database server returns an error message that is not related to SQL or
stored procedure syntax.
Netcool/Impact provides a built-in list of error messages that indicate that a database server has
received an incorrectly formed SQL or stored procedure query. SQL database DSAs exclude these errors
when determining whether a database server is unavailable. This means that, by default, a DSA does not
fail over or fail back when a syntax error occurs at the database level.
The following table shows the built-in list of errors that Netcool/Impact excludes.

Table 5. SQL Database Error Messages for Failover

Database Error Codes

DB2 No default error codes

Derby No default error codes

GenericSQL No default error codes

HSQLDB No default error codes

Informix Error codes from -899 to -200 inclusive

MySQL Error codes 1047, 1048, 1051, 1052, 1054 to 1064 inclusive, 1071,
1106 to 1111 inclusive, 1122, 1138, 1146, 1217, 1222

ObjectServer Error codes 667, 5555, 20000, 20001, 20002

ODBC No default error codes

Oracle Error codes 100, 900 to 999 inclusive, 17006

PostgreSQL SQL states 03000, 42000, 42601, 42602, 42622, 42701, 42702,
42703, 42704, 42803, 42804, 42809, 42883, 42939, 42P01,
42P02, 42P10, 42P18

SQL Server Error codes 105, 207, 208, 213, 229, 230, 260

Sybase Error codes 100 to 300 inclusive, 403, 404, 407, 413

For instructions on providing an alternative customized list, see “Customizing DSA failover” on page 16.

Customizing DSA failover
You can provide an alternate list of error codes that the SQL database DSAs exclude when determining
whether a database server is unavailable.
You store this list in a file named $IMPACT_HOME/etc/NCI_non_failover_errors.props, where
NCI is the name of the Impact Server instance. This file is not created automatically, so you must
create and edit it manually using a text editor.
Properties in this file have the following format:

impact.database=error_codes

where database is the name of the database and error_codes is a comma-separated list of error
identification numbers. To specify a range of codes, place a less-than character between the lower limit
and upper limit numbers as follows: 200<300. The error code range is inclusive of the numbers specified.
The following table shows the internal database names that you must use in the properties file.

Table 6. Database Internal Names

Database Internal Name

DB2 db2

Derby derby

GenericSQL genericsql

HSQL hsqldb

Informix informix

MS SQL Server mssql

MySQL mysql

Netcool/OMNIbus ObjectServer objectserver

ODBC odbc

Oracle oracle

PostgreSQL postgresql

Sybase sybase

Error codes are defined at the database level. For a list of possible error codes, see the documentation
provided with the database application.
The following example shows a properties file that lists the default built-in error codes excluded by
Netcool/Impact when determining if a database server is unavailable.

impact.db2=
impact.informix=-899<-200
impact.mssql=105,207,208,213,229,230,260
impact.mysql=1047,1048,1051,1052,1054<1064,1071,1106<1111,1122,1138,1146,
1217,1222
impact.objectserver=667,5555,20000,20001,20002
impact.odbc=
impact.oracle=100,900<999,17006
impact.postgresql=03000,42000,42601,42602,42622,42701,42702,42703,42704,
42803,42804,42809,42883,42939,42P01,42P02,42P10,42P18
impact.sybase=100<300,403,404,407,413

Customizing ObjectServer DSA failback


When you create a new data source during the ObjectServer data source configuration, the failback mode
is selected by default. Extra optional customization is available for the ObjectServer DSA to enhance its
failback mechanism and to support failback for JDBC connections to the ObjectServer.

Before you begin


The following information applies when you want to enable customized failback by setting the property to
be true.
• Netcool/OMNIbus ObjectServer v7.3 or later must be installed. The ObjectServer
must support the new gateway mechanism to handle failback. Refer to
the ObjectServer documentation: http://publib.boulder.ibm.com/infocenter/tivihelp/v8r1/index.jsp?
topic=%2Fcom.ibm.tivoli.namomnibus.doc%2Fwelcome_ob.htm
• The Netcool/OMNIbus ObjectServer v7.3 and later use the value in the backup ObjectServer to
determine the primary ObjectServer and to update the catalog.properties table with a property
of PropName='ActingPrimary'.

Procedure
• To enable customized failback, in the $IMPACT_HOME/etc/<ServerName>_server.props file,
change the value for impact.objectserver.failback.enabled from false to true.

impact.objectserver.failback.enabled=true

Important: If you are running a cluster setup, repeat this step for each server in the cluster. You
must also restart the server to implement these changes. By default, this property is disabled and the
failback setup uses a ping mechanism like other SQL DSAs.
• If the connection to the primary server and port hangs, you can change the value for the timeout,
which by default is 20 seconds.
– In the $IMPACT_HOME/etc/<ServerName>_server.props file, add the following property
impact.datasource.failback.tester.timeout=<number of milliseconds>. For example,
if you want to change the timeout to 1 minute, set the value of the property to 60000:
impact.datasource.failback.tester.timeout=60000
• Change the polling time, which checks whether the primary server is up. By default, the polling time is
1 minute.
– In the $IMPACT_HOME/etc/<ServerName>_server.props file, add the following property
impact.objectserver.failback.pollinterval=<number of milliseconds>. For
example, if you want to change the polling time to 10 seconds, set the value of the property to
10000: impact.objectserver.failback.pollinterval=10000
• Determine which server is acting as the primary.
– Go to the backup ObjectServer server.
– Run the following query: SELECT Value from catalog.properties WHERE PropName =
'ActingPrimary';
– If the value for PropName='ActingPrimary' is FALSE, then the primary server is active.
– If the value for PropName='ActingPrimary' is TRUE, then the backup is acting as the primary
server. This situation can occur when the ObjectServer gateway is doing a resync with the primary
server and the primary server is not available to accept any connections.

Configuring the JDBC driver connection properties
These topics describe how to set connection properties for all datasources that utilize a given JDBC driver
or for a specific datasource.
When connecting to an SQL database with a JDBC driver, you may need to configure the connection with
specific JDBC connection properties. These properties are vendor specific and you should consult the
JDBC driver documentation on their usage. You can apply these properties to the JDBC connection at a
DSA level (all datasources that connect to that DSA) or against a specific datasource. If the same property
is declared in both the JDBC driver and datasource property files, then the value from the datasource
properties file takes precedence.

Configuring the JDBC connection properties for a JDBC driver


Use this procedure to set connection properties for a JDBC driver. These connection properties will be
applied to all data sources that use this driver.

Procedure
1. In the $IMPACT_HOME/etc directory, create a properties file for the DSA for which you want to set the
JDBC connection properties.
The properties filename must have the following format:

servername_drivermainclass.props

where servername is the name of your Impact Server, and drivermainclass is the class name of the
JDBC driver to connect to the SQL database.
For example, you will create the NCI_oracle.jdbc.driver.OracleDriver.props file, if the
name of your Impact Server is NCI and it is connecting to an Oracle database.
Remember: You can get the drivermainclass values for other SQL databases from their JDBC
documentation.
2. Add the connection property propertyname=propertyvalue to the properties file as per the
documentation for the JDBC driver, one property per line.
For example, if the Oracle server requires clients to enable specific encryption and integrity settings,
you would add the following properties.

oracle.net.encryption_client=REQUESTED
oracle.net.encryption_types_client=AES256
oracle.net.crypto_checksum_client=REQUESTED
oracle.net.crypto_checksum_types_client=SHA1

3. Restart the Impact Server.

Configuring the JDBC connection properties for a data source


Use this procedure to set connection properties for a specific data source.

Procedure
1. In the $IMPACT_HOME/etc directory, create a properties file for the DSA for which you want to set the
JDBC connection properties.
The properties filename must have the following format:

servername_drivermainclass_datasourcename.props

where servername is the name of your Impact Server, drivermainclass is the class name of the JDBC
driver, and datasourcename is the name of the data source in Impact.

For example, you will create the NCI_com.ibm.db2.jcc.DB2Driver_SysEvents.props file, if the
name of your Impact Server is NCI, it is connecting to a DB2 database, and the name of the data
source is SysEvents.
Remember: You can get the drivermainclass values for other SQL databases from their JDBC
documentation.
2. Add the connection property propertyname=propertyvalue to the properties file as per the
documentation for the JDBC driver, one property per line.
For example, if the DB2 server requires clients to enable specific encryption and integrity settings, you
would add the following properties.

encryptionAlgorithm=2
securityMechanism=9

3. Restart the Impact Server.

Constructing a simplified pool key for pooled JDBC connections


The impact.jdbc.pool.simplifiedkey property allows you to specify that a simplified pool key is
constructed for pooled JDBC connections to a given server.
By default, when JDBC connections are pooled, the pool key is constructed by concatenating the
following items:
• URLs for the primary and backup datasources, which includes the host and port
• Username and password
• All the properties set for the datasource
• The datasource name
This means that if two Impact datasources exist for the same RDBMS, separate connections will be used
for each. In cases where datasource properties are set, this is required.
However, in most cases, where no specific datasource properties exist, the same pooled connections
could be used across all the Impact datasources for the same RDBMS.
When the impact.jdbc.pool.simplifiedkey property is set to true in the
<server>_server.props files, the pool key is constructed by concatenating only the following items:
• URLs for the primary and backup datasources, which includes the host and port
• Username and password
In this case, pooled connections for the same database type, host, port, username and password will be
shared, regardless of the Impact datasource name and properties.
Warning: impact.jdbc.pool.simplifiedkey should never be set to true if datasource-level
properties are set.

Example usage
The following example shows a specific situation where the impact.jdbc.pool.simplifiedkey
property is required:
• Multiple Impact datasources exist for the same RDBMS
• No custom properties are set at datasource level
• For each Impact datasource, a MAXSQLCONNECTION setting of 100 is required to avoid processing
thread contention.
• But the server side maximum allowed number of connections for the RDBMS is also 100.
In this case, a maximum of 100 connections can be pooled in Impact datasources for the RDBMS. If
you did not set the impact.jdbc.pool.simplifiedkey property, Impact could attempt to make 100
connections for each datasource, which would fail as soon as the 101st connection is attempted.

Note: To avoid confusion, MAXSQLCONNECTION should be set to the same value for all the equivalent
data sources, because the pool size is the MAXSQLCONNECTION value of the first data source that
creates the connection.
So if NCOMS1 has MAXSQLCONNECTION=5 and NCOMS2 has MAXSQLCONNECTION=40, and a DirectSQL
request comes in for NCOMS1 first, then the pool size is 5; if a DirectSQL request comes in for NCOMS2
first, then it is 40.
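For example, to share pooled connections across equivalent data sources, set the following line in the $IMPACT_HOME/etc/<ServerName>_server.props file for each server in the cluster, leaving the property at its default of false if any datasource-level connection properties are set:

impact.jdbc.pool.simplifiedkey=true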

Chapter 3. Working with the UI data provider DSA
The UI data provider DSA is used to return results from any UI data provider.
To set up a UI data provider DSA complete the following steps:
• Create a UI data provider data source
• Create a UI data provider data type
• Create a policy that uses the GetByFilter function
• Run the policy to return the results from the selected UI data provider

UI data provider data model


A UI data provider data model is an abstract representation of data stored in an underlying relational
database or other data source that can be accessed through a UI data provider.
The UI data provider data model has the following elements:
• UI data provider data sources
• UI data provider data types

UI data provider data sources


A UI data provider data source represents a relational database or another source of data that can be
accessed by using a UI data provider DSA.
You create UI data provider data sources in the GUI. You must create one such data source for each UI
data provider that you want to access.

Creating a UI data provider data source


Use this information to create a UI data provider data source.

Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select UI Data Provider. The
tab for the data source opens.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.
5. In the Host Name field, add the location where the UI data provider is deployed. The location is a
fully qualified domain name or IP address.
6. In the Port field, add the port number of the UI data provider.
7. Use SSL: To enable Netcool/Impact to connect over SSL to a data provider, you must export a
certificate from the data provider and import it into the Impact Servers and each GUI Server. If the
data provider is an IBM Dashboard Application Services Hub server, complete these steps to export
and import the certificate. For other data provider sources, after you obtain the certificate, use steps
(f and g) to import the certificate.
a) In the IBM Dashboard Application Services Hub server, go to Settings, WebSphere
Administrative Console, Launch WebSphere administrative console.

b) Within the administrative console, select Security, SSL certificate and key management, Key
stores and certificates, NodeDefaultKeyStore, Personal certificates.
c) Check the default certificate check box and click Extract.
d) Enter dash for the certificate alias to extract.
e) For certificate file name, enter a file name on the system to which the certificate is written,
such as C:\TEMP\mycertificate.cert.
f) Copy the certificate file to the Impact Server host and import it into both the Impact Servers
and GUI Servers. For more information about the import commands, refer to the Netcool/Impact
Administration Guide, within the security chapter go to the 'Enabling SSL connections with
external servers' topic.
g) Restart the Impact Servers and each GUI Server.
For more information, see the Netcool/Impact Administration Guide under the section Secure
Communication.
If you want to connect to the local UI data provider by using the UI data provider data source with
an SSL enabled connection, the signed certificate must be exchanged between the GUI Server and
Impact Server. For more information see Configuring SSL with scripts in the Security section of the
documentation.
8. Base Url: Type the directory location of the REST application, such as /ibm/tivoli/rest.
9. User Name: Type a user name with which you can access the UI data provider.
10. Password: Type a password with which you can access the UI data provider.
11. Click Test Connection to test the connection to the UI data provider to ensure that you entered the
correct information.
Success or failure is reported in a message box. If the UI data provider is not available when you
create the data source, you can test it later.
To test the connection to the UI data provider at any time, from the data source list, right-click the
data source and select Test Connection from the list of options.
12. Click Discover Providers to populate the Select a Provider list.
13. From the Select a Provider list, select the provider that you want to return the information from.
14. From the Select a Source list, select the data content set that you want to return information from.
The Select Source list is populated with the available UI data provider data content sets on the
specified computer.
15. Click Save to create the data source.

UI data provider data types


A UI data provider data type represents a structure similar to a table that contains sets of data in a
relational database. Each UI data provider database data type contains a set of fields that correspond to
data sources in the UI data provider. You create UI data provider data types in the GUI. You must create
one such data type for each data set that you want to access.
The configuration properties for the data type specify which subset of data is retrieved from the UI data
provider data source.

Creating a UI data provider data type


Use this information to create a UI data provider data type.

Procedure
1. Right click the UI data provider data source you created, and select New Data Type.
2. In the Data Type Name field, type the name of the data type.
3. The Enabled check box is selected to activate the data type so that it is available for use in policies.

4. The Data Source Name field is prepopulated with the data source.
5. From the Select a Dataset list, select the data set you want to return the information from.
The data sets are based on the provider and the data sets that you selected when you created the data
source. If this list is empty, then check the data source configuration.
6. Click Save. The data type shows in the list menu.

Viewing data items for a UI data provider data type


You can view and filter data items that are part of a UI provider data type.

Procedure
1. In the Data Model tab, right click the data type and select View Data Items. If items are available for
the data type, they show on the right side in tabular format.
2. If the list of returned items is longer than the UI window, the list is split over several pages. To go from
page to page, click the page number at the bottom.
3. To view the latest available items for the data type, click the Refresh icon on the data type.
4. You can limit the number of data items that display by entering a search string in the Filter field. For
example, add the following syntax to the Filter field, totalMemory=256. Click Refresh on the data
items menu to show the filtered results.
Filter Retrieved Data Items: The filter searches all the fields in the current set of paged results
containing the search text. If the number of results requires the results to be paged, the filter only
filters the results on the current page. The filter is cleared when you navigate between pages.
Tip: If your UI Data Provider data type is based on a Netcool/Impact policy, you can add
&executePolicy=true to the Filter field to run the policy and return the most up to date filtered
results for the data set.
For more information about using the Filter field and GetByFilter function runtime parameters to limit
the number of data items that are returned, see “Using the GetByFilter function to handle large data
sets” on page 23.

Using the GetByFilter function to handle large data sets


You can extend the GetByFilter function to support large data sets. To fetch items from a UI data provider
with the GetByFilter, additional input parameters can be added to the filter value of the GetByFilter
function. Additional filter parameters allow you to refine the result set returned to the policy.
The UI data provider REST API supports the following runtime parameters:
• count: limits the size of the returned data items.
• start: specifies the pointer to begin retrieving data items.
• param_*: sends custom parameters to data sets that the UI data provider uses during construction and
data presentation. The UI Data Provider server recognizes any additional parameters and handles the
request if the parameter has the prefix param_. These values are also used to uniquely identify a data
set instance in the REST service cache.
• id: If used, it fetches a single item. The id parameter specifies the id of the item that you want to retrieve. For
example, &id=1. If the id parameter is used, all other filtering parameters are ignored.
Tip: If your UI Data Provider data type is based on a policy, then you can add executePolicy=true
to the Filter parameter in GetByFilter(DataType, Filter, CountOnly) to run the policy and
ensure that the latest data set results are returned by the provider.

This policy example uses the FILTER runtime parameters in a GetByFilter(DataType, Filter,
CountOnly) implementation in a UI data provider.

DataType="123UIdataprovider";
CountOnly = false;

Filter = "t_DisplayName ='Windows Services'";
Filter = "t_DisplayName starts 'Wind'";
Filter = "t_DisplayName ends 'ces'";
Filter = "t_DisplayName contains ’W’&count=6&param_One=paramOne";
Filter = "t_DisplayName contains 'W'&count=3&start=2";
Filter = "((t_DisplayName contains 'Wi')
or (t_InstanceName !isnull))";
Filter = "((t_DisplayName contains 'Wi')
or (t_InstanceName='NewService'))&count=3";
Filter = "((t_DisplayName contains 'Wi')
or (t_InstanceName='NewService'))&count=5&start=1";

MyFilteredItems = GetByFilter( DataType, Filter, CountOnly );

Log( "RESULTS: GetByFilter(DataType="+DataType+", Filter="+Filter+",


CountOnly="+CountOnly+")" );

Log( "MATCHED item(s): " + Num );

index = 0;
if(Num > 0){
while(index <Num){
Log("Node["+index+"] id = " + MyFilteredItems[index].id +
"---Node["+index+"] DisplayName= " +
MyFilteredItems[index].t_DisplayName);
index = index + 1;
}
}
Log("========= END =========");

Here are some more syntax examples of the FILTER runtime parameters that you can use in a
GetByFilter(DataType, Filter, CountOnly) implementation in a UI data provider.
Example 1:

Filter = "&count=6";

No condition is specified. All items are fetched by the server, but only the first 6 are returned.
Example 2:

Filter = "&count=3&start=2";

No condition is specified. All items are fetched by the server, but only the first 3 are returned, starting at
item #2.
Example 3:

Filter = "t_DisplayName ends 'ces'

Only items that match the condition t_DisplayName ends 'ces' are fetched.
Example 4:

Filter = "t_DisplayName contains 'W'&count=6&param_One=paramOne";

Only items that match the condition "t_DisplayName contains


'W'&count=6&param_One=paramOne"; are fetched. Only the first six items that contain 'W' and
paramOne are returned and paramOne is available for use by the provider when it returns the data
set.
Example 5:

Filter = "&param_One=paramOne";

All items are fetched by the server, and paramOne is available for use by the provider when it returns the
data set.

Adding Delimiters
The default delimiter is the ampersand (&) character. You can configure a different delimiter by editing
the property impact.uidataprovider.query.delimiter in the NCI_server.props file, where
NCI is the name of your Impact Server. Any time you change the delimiter, you must restart the Impact
Server to implement the change.
The delimiter can be any suitable character or regular expression, that is not part of the data set name or
any of the characters used in the filter value.
The following characters must use double escape characters \\ when used as a delimiter:

* ^ $ . |

Examples:
An example using an Asterisk (*) as a delimiter:
• Property Syntax: impact.uidataprovider.query.delimiter=\\*
• Filter query: t_DisplayName contains 'Imp'*count=5
An example with a combination of characters:
• Property Syntax:impact.uidataprovider.query.delimiter=ABCD
• Filter query: t_DisplayName contains 'Imp'ABCDcount=5
An example of a regular expression, subject to Java language regular expression rules:
• Property Syntax: impact.uidataprovider.query.delimiter=Z|Y
• Filter query: t_DisplayName contains 'S'Zcount=9Zstart=7YexecutePolicy=true
An example of a combination of special characters: * . $ ^ |
• Property Syntax: impact.uidataprovider.query.delimiter=\\*|\\.|\\$|\\^|\\|
• Filter query: t_DisplayName contains 'S'.count=9|start=7$executePolicy=true

Retrieving data from a UI provider data source


Create a policy that includes the GetByFilter function to retrieve data by filter from a UI data provider
data source.
To retrieve data from a UI data provider data source, you must create a Netcool/Impact policy that
uses the GetByFilter function to return the UI data provider data items. The GetByFilter function
is modified for use with UI data provider data sources. This function retrieves data items whose
properties match the specified UI data provider filter string. The UI data provider filter string is made up
of three parts: the property id, the operator, and the value.
You can use the operator AND and the operator OR to repeat the conditions. If you use these operators
together, then the full expression must be in parentheses. For example:

((NAME contains 'abcd') or (TYPE isnull) or (DESCRIPTION starts 'abcd'))
and (SIZE >= 100) and (LAST_UPDATE > 1)

UI data provider data items contain many properties. Each of these properties has two attributes that are
relevant for filtering UI data provider data items, a display value attribute and the actual value attribute.
Operators are evaluated against the display value by default. If you want to filter for the actual values
instead, you must add an asterisk (*) before the property. For example:

(*TYPE = 'SERVER')

For a full list of the available operators, see “UI data provider operators” on page 30

You can use the Keys function to return an array of strings that contain the field names for a specific
UI data provider data item. For more information about the Keys function, see the Netcool/Impact Policy
Reference Guide.
After you create the policy, you must create a user output parameter and associated custom schema
values for the GetByFilter function to ensure that Netcool/Impact can process the values that the
function returns from the external UI data provider:
1. In the policy editor, click the Configure User Parameters icon.
2. Click the New Policy Output Parameter: New button
3. Select DirectSQL / UI Provider Datatype in the Format field.
4. Enter a name for the parameter in the Name field.
5. Enter the same name as defined in the policy in the Policy Variable Name field.

6. To create the custom schema values, click the Open Schema Definition Editor icon. You must
create custom schema values for each schema that is defined in the database and included in the
returned results. To view the schema values that are required for your policy, right click the associated
data type and click View Data Items. You must create a custom schema value for each column that
you want to view in the widget in the console.
For more information about how to create user output parameters and custom schema values, see
“Creating custom schema values for output parameters” on page 27.

Example
In the following policy example, the UI data provider data type called uidataprovider-ImpactROI is
sourcing the data from the REPORT_ImpactROI data type that uses the GetByFilter function and the
IPL policy language. The REPORT_ImpactROI data type is a standard data type delivered with Netcool/
Impact.

DataType="uidataprovider-ImpactROI";
Filter = "PROCESS_NAME=’Escalate’";
CountOnly = false;

The GetByFilter function returns an OrgNodes object that represents an array of values:

OrgNodes = GetByFilter( DataType, Filter, CountOnly );

The filter matches only one item in the data, and the GetByFilter function returns one item as a result:

Log("Number of org nodes returned:" + Num); // will be = 1


Log("Key = " + OrgNodes[0].Key); // will be = Escalate

In the following policy example, the data type is myuidataproviderDataType

DataType="myuidataproviderDataType";
Filter = "SAVED_TIME > 1000";
CountOnly = false;

This example returns the following OrgNodes object:

OrgNodes = GetByFilter( DataType, Filter, CountOnly );

If the filter matches two items, the GetByFilter function returns these two items as follows:

Log("Number of org nodes returned:" + Num);


// will be = 2
Log("Key = " + OrgNodes[0].Key);
// will be = Escalate
Log("Key = " + OrgNodes[1].Key);
// will be = Resolve

The following example demonstrates how to create a user output parameter and custom values to
represent the output of the GetByFilter function. The following policy uses the GetByFilter function
to retrieve data from an external UI data provider. The values that are returned are contained in the
DemoUISchema parameter.

Filter="&count=200";
DemoUISchema=GetByFilter('UITestCuriMySQL',Filter,CountOnly);
Log(DemoUISchema);

You create the following output parameter to represent the DemoUISchema parameter. You do not
have to enter a data source or data type name.

Table 7. Output parameter for the DemoUISchema parameter


Field User entry
Name DemoUISchema
Policy variable name DemoUISchema
Format DirectSQL / UI Provider Datatype

After you create the output parameter, you must create custom schema values for id, firstName, and
lastName. To view the schema values that are required for your policy, right click the associated data
type and click View Data Items.

Table 8. Custom schema value for id


Field Entry
Name id
Format Integer

Table 9. Custom schema value for firstName


Field Entry
Name firstName
Format String

Table 10. Custom schema value for lastName


Field Entry
Name lastName
Format String

Creating custom schema values for output parameters


When you define output parameters that use the DirectSQL, Array of Impact Object, or Impact Object
format in the user output parameters editor, you also must specify a name and a format for each field that
is contained in the DirectSQL, Array of Impact Object, or Impact Object objects.

About this task


Custom schema definitions are used by Netcool/Impact to visualize data in the console and to pass values
to the UI data provider and OSLC. You create the custom schemas and select the format that is based on
the values for each field that is contained in the object. For example, you create a policy that contains two
fields in an object:

O1.city="NY";
O1.ZIP=07002;

You define the following custom schemas values for this policy:

Table 11. Custom schema values for City


Field Entry
Name City
Format String

Table 12. Custom schema values for ZIP


Field Entry
Name ZIP
Format Integer

If you use the DirectSQL policy function with the UI data provider or OSLC, you must define a custom
schema value for each DirectSQL value that you use.
If you want to use the chart widget to visualize data from an Impact object or an array of Impact objects
with the UI data provider and the console, you define custom schema values for the fields that are
contained in the objects. The custom schemas help to create descriptors for columns in the chart during
initialization. However, the custom schemas are not technically required. If you do not define values for
either of these formats, the system later rediscovers each Impact object when it creates additional fields
such as the key field, UIObjectId, or the field for the tree widget, UITreeNodeId. You do not need to
define these values for OSLC.

Procedure
1. In the Policy Settings Editor, select DirectSQL, Impact Object, or Array of Impact Object in the
Format field.

2. The system shows the Open the Schema Definition Editor icon beside the Schema Definition
field. To open the editor, click the icon.
3. You can edit an existing entry or you can create a new one. To define a new entry, click New. Enter a
name and select an appropriate format.
To edit an existing entry, click the Edit icon beside the entry that you want to edit.
4. To mark an entry as a key field, select the check box in the Key Field column. You do not have to define
the key field for Impact objects or an array of Impact objects. The system uses the UIObjectId as the
key field instead.
5. To delete an entry, select the entry and click Delete.



Controlling how frequently Impact considers the UI provider data
to be stale
You can control how frequently Impact considers the UI Provider Data to be stale by setting
the impact.uidataprovider.lifetimemilliseconds property in the $IMPACT_HOME/etc/
server.props file.
The default value for this property is 5000 milliseconds (five seconds). This sets the lifetime of the data
returned to DASH through the UI Data Provider. Once this lifetime has expired, the next call for data from
DASH to the UI Data Provider will result in fresh data being retrieved.
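For example, to have returned UI provider data remain valid for 30 seconds (an illustrative value) before the
next request from DASH triggers a refresh, set:

impact.uidataprovider.lifetimemilliseconds=30000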
Note: After you have changed the setting for the impact.uidataprovider.lifetimemilliseconds
property, you must stop and restart Impact.

Clearing the UI Data Provider server cache with the UI data provider DSA
The DSA and the policy function can be used to send a DELETE HTTP request to the UI Data Provider
server. Sending the DELETE method issues a request to the server to release obsolete cached data sets
from memory. Both GetByFilter() and GetHTTP() functions can be used to send the DELETE HTTP request.

Procedure
1. Use the GetHTTP function to send DELETE requests to the UI Data Provider server.
The GetHTTP function with the 'DELETE' method can be run before or after retrieving a data set. Here
is an example of IPL policy with GetHTTP.

host = <host_name>;           // e.g. "uidp.host.ibm.com"
port = <port_num>;            // e.g. 15210
props = NewObject();
props.UserId = <userid>;      // e.g. "sysadmin"
props.Password = <password>;  // e.g. "password"
path = "/ibm/tivoli/rest/providers/<provider_name>/datasources/
<datasource_name>/datasets/<dataset_name>?<_paramXXXX>";

// Execute the GetHTTP DELETE method to clear the cache of the data set identified with <_paramXXXX>:
GetHTTP(host, port, "http", path, "key", "DELETE", "BASIC", null, null, null, props);

// Execute the GetHTTP GET method to get the data set identified with <_paramXXXX>:
GetHTTP(host, port, "http", path, "key", "GET", "BASIC", null, null, null, props);

// Log context and display return codes of the HTTP requests
Log(CurrentContext());

The only difference between the two GetHTTP calls is the use of the 'GET' HTTP method instead of the
'DELETE' HTTP method.

2. Use the GetByFilter function to send a DELETE request to the UI Data Provider server.
GetByFilter() can be used to run GET and DELETE HTTP methods. Here is an example:

DataType = "UIDPdatatype_name";
Filter = "param_SourceToken='sysitmsles:LZ' count=4 delete=true";
CountOnly = false;
OrgNodes = GetByFilter( DataType, Filter, CountOnly );

// 4 or fewer org nodes are returned
Log("Number of org nodes returned:" + Num);

// Log context and display return codes of the HTTP requests


Log(CurrentContext());

The important parameter in the Filter is 'delete=true'. When the GetByFilter function runs with
'delete=true' in the filter value, the data set is retrieved and then the data set is removed from UI
Data Provider server cache. If GetByFilter runs with the Filter value 'delete=false' or the delete
parameter is omitted, then only the GET request is submitted to the UI Data Provider server and cache
is not cleared.



UI data provider operators
You use these operators to create a filter string for UI data provider data sources.

Table 13. Operators for creating filter strings

String: contains, !contains, starts, !starts, ends, !ends, isnull, !isnull, =, !=
Numeric and date: =, !=, <, <=, >, >=
Boolean and enumerated: =, !=
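For example, a filter that uses one of the string operators might look like the following policy fragment. The
data type and the HOSTNAME field are illustrative, and the exact quoting of the value can vary by provider:

DataType = "myuidataproviderDataType";
Filter = "HOSTNAME contains 'web'";
CountOnly = false;
OrgNodes = GetByFilter( DataType, Filter, CountOnly );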

An example using the UI data provider to integrate with IBM Tivoli Monitoring
You use the integration with IBM Tivoli Monitoring to send messages from Netcool/Impact into the Tivoli
Monitoring Universal Message Console.
Messages are sent from Tivoli Netcool/Impact into the Tivoli Monitoring Universal Message Console
through the DSA.

Configuring Netcool/Impact to send messages to Tivoli Monitoring Universal Message Console
Use this procedure to configure Netcool/Impact to send messages to the Tivoli Monitoring 6.1 and higher
Universal Message Console.

Procedure
1. In the Netcool/Impact UI, find the ITM project and its IPL and JavaScript
policies: ITMLibraryFunctionsIPL, ITMLibraryFunctionsJS, ITMFunctionsCallerIPL,
and ITMLibraryFunctionsCallerJS.
2. Update the ITMFunctionsCallerJS or ITMFunctionsCallerIPL policy that you want to use with
Tivoli Monitoring specific information such as host name, port, user name, password, attribute, and
object that is based on the function to be called.
Remember: The example function is using default values.
3. Run the policy. The functions return an XML formatted string.



Obtaining data from Tivoli Monitoring 6.3 using the UI data provider
You can obtain data from Tivoli Monitoring 6.3 with the UI data provider DSA.

Procedure
• To obtain data from Tivoli Monitoring 6.3, use the UI data provider DSA to access the Tivoli Monitoring
data provider. See Chapter 3, “Working with the UI data provider DSA,” on page 21.
• For more information, see the following link and information center reference.
– Netcool/Impact devWorks wiki, see examples associated to dashboarding in the Scenarios and
Examples page at the following URL: https://www.ibm.com/developerworks/mydeveloperworks/
wikis/home?lang=en#/wiki/Tivoli%20Netcool%20Impact/page/Scenarios%20and%20examples
– Netcool/Impact Solutions Guide, Visualizing data from the UI data provider in the console, Visualizing
a data mashup from two IBM Tivoli Monitoring sources. This example shows how to visualize the
data from Tivoli Monitoring sources in a dashboard. You can use the same method to retrieve data for
event management purposes.

Chapter 4. Working with the RESTful API DSA
The RESTful API DSA is a data source adaptor to interact with RESTful web services.
Whenever your web browser fetches a file (page, picture, and so forth) from a web server, it does so using
HTTP. HTTP is a request/response protocol, which means your computer sends a request for some file
or resource, and the web server sends back a response. Representational State Transfer (REST) is an
architectural style where resources are accessed through links and acted upon by using a set of simple
operations. By accessing and acting on these links, a user can get resources, add new resources, update
existing ones, and delete resources. The purpose of the RESTful DSA is to allow Impact policies to make
these requests.
To use the REST DSA, you must first do the following:
• Create a RESTful API data source. For more information see “Creating a RESTful DSA data source” on
page 33.
• Write policies to execute actions against the RESTful APIs. For more information see “Making requests
to the RESTful data source” on page 36.

RESTful DSA data model


A RESTful DSA data model is a store for the request properties used when connecting to a RESTful API.
The RESTful DSA data model has a single element: the datasource type.

RESTful DSA data source


To use the REST DSA, you must create a RESTful DSA data source.

Creating a RESTful DSA data source


Use this information to create a RESTful DSA data source.

Procedure
1. Select the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. To create a new RESTful data source, click New Data Source > RESTful API.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.
5. Set the Host Name. The host name can be a fully qualified domain name or IP address. Do not
include the protocol or port number. For example: ibm.com, api.ibm.com, 192.168.1.255.
6. Set the Resource Path. The resource path will be appended to all REST requests for this data source.
For example: /v1/api, /api/stats/, /services/data.
Tip: You can extend the resource path using the Path parameter in the RESTful policy function
instead. The RESTful policy functions combine the Resource Path from the data source and the Path
parameter to create the complete path for its REST request.
7. Set the Port number. For the SSL protocol, the default port is 443.
8. To enable an SSL connection between the DSA and the endpoint, select the Use HTTPS check box.
• For SSL connections, you must add the endpoint SSL certificate(s) to Impact's trust store. See
Enabling SSL connections with external servers for more information on how to add SSL certificates
to the trust store.



• You can skip the trust store requirement by selecting the Disable SSL Verification
check box instead.
9. Select the Reuse Connection check box if required.
Connection caching is done at a policy level. This means the same HTTP connection can be reused
within a policy when it is running.
10. Select the Cache Response check box if required.
Note: Response caching is based on entity tags. It is one of several mechanisms that the
HTTP protocol provides for cache validation, which allows a client to make conditional requests.
By default, Impact adds a Cache Control : Max-Age=0 header to the HTTP header list of any newly
created REST data source. This header causes any caches that are used during the request to revalidate,
ensuring that the entity tag is checked. Modify this header to the Cache Control setting that you want
to use.
11. Authentication.
If using basic authentication, you must provide the username and password:
a) In the User Name field type a user name with which you can access the REST API.
b) In the Password field type a password with which you can access the REST API.
If using OAuth authentication, you must provide the OAUTH Data Source. (To configure an OAUTH
Data Source, see “Creating an OAuth data source” on page 35.):
a) Select the Use OAuth check-box to enable OAUTH authentication.
b) Select the OAuth data source that you want to use from the drop-down menu.
12. Specify an HTTP header if you are making requests to a data source where the same HTTP headers
are used consistently.
For example, if a new header is added to the grid, this is the same as adding a request header. If the
grid has the following header details:

Header Name Header Value


Content-Type application/json
Max-Forwards 10

The following will be added to the URL when making the request:

GET /api/alerts/v1 HTTP/1.1


Host: ibmnotifybm.mybluemix.net
Authorization: Basic ******************
Content-Type: application/json;charset=UTF-8
Max-Forwards: 10

13. Set the Protected Request Headers. If the value of a request header contains sensitive
information such as an API key or authorization token, you can use Protected Request Headers to
hide the value from the user interface and policy logger.
Note: If the same request header is also declared in a policy or as a normal Request Header, then the
DSA uses the following order of precedence when selecting a header value: Policy > Normal Request
Headers > Protected Request Headers.
14. Specify HTTP parameters if you are making requests to the datasource where the same HTTP
parameters are being used consistently.
The REST API datasource can persist these and they will be used on every call to the data source
unless overridden by the policy function.
For example, if a new parameter is added to the grid, this is the same as adding a query parameter to
the request. If the grid has the following parameters:

Parameter Name Parameter Value


size 100
name impact



Then ?size=100&name=impact will be added to the URL when making the request.
15. Click Test Connection to see if it is possible to connect to the data source with the current data
source settings.
Note: The connection test will send an HTTP GET request to the REST service. You can click Preview
Request to review the connection test request.
16. Click Save to create the data source.

Specifying proxy server connection details for a RESTful DSA data source
Use this information to specify proxy server connection details for a RESTful DSA data source.

About this task


Impact can connect to RESTful services through a proxy server. The details for this connection are
entered in the Proxy Settings tab for the REST API Data Source Editor. When using the functions
RESTfulAPIGET, RESTfulAPIPOST, RESTfulAPIPUT, RESTfulAPIDELETE the request will be sent
through the proxy server. To specify proxy server settings, use the following steps:

Procedure
1. Check the Use Proxy Server box if you want to connect to a web server using a proxy server. If the
checkbox is checked, you must specify values for the Proxy Hostname and Proxy Port properties.
2. Specify the name of the proxy host in the Proxy Hostname field.
3. Specify the port on the proxy host through which to make the connection in the Proxy Port field.
4. Select Use HTTPS to connect to the proxy server.
Select this check box if you want to use SSL; if you do, you must import a certificate.
5. If you want to perform no authentication with the proxy server, select No Authentication.
6. If you want to perform basic authentication with the proxy server, select BASIC.
If so, use the User Name and Password properties specified in the Proxy Setting tab to authenticate
with the proxy server.
7. Specify the realm if the proxy server requires one in the Proxy Realm field.
8. Specify the user name to log on to the proxy server in the User Name field.
9. Specify the password to log on to the proxy server in the Password field.

Creating an OAuth data source


Use this information to create an OAuth data source.

Procedure
1. Click Data Model to open the Data Model tab.
2. From the Cluster and Project lists, select the cluster and project you want to use.
3. In the Data Model tab, click the New Data Source icon in the toolbar. Select OAuth. The tab for the
data source opens.
4. In the Data Source Name field:
Enter a unique name to identify the data source. You can use only letters, numbers, and the
underscore character in the data source name. If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source is saved is set to the UTF-8 character encoding.
5. In the Access Token field, add the access token for the OAuth data source.
6. In the Refresh Token field, add the refresh token for the OAuth data source.



Note: Impact will know when the OAuth refresh token is due to expire and will refresh it
automatically without user intervention. The expiration of the refresh token is controlled by the external
application.
7. In the Client ID field, add the Client ID for the OAuth service that you want to use.
8. In the Client Secret field, add the Client Secret for the OAuth data source.
9. In the Token URI field, add the Token URI for the OAuth provider's authentication server.
10. In the Auth URI field field, add the Auth URI for the OAuth provider's authorization server.

Making requests to the RESTful data source


You can use Impact policy functions to communicate with the data source.
The RESTful DSA supports the following functions.
• RESTfulAPIGET
The RESTfulAPIGET function retrieves resources from a RESTful API.
• RESTfulAPIPOST
The RESTfulAPIPOST function sends resources to a RESTful API.
• RESTfulAPIPUT
The RESTfulAPIPUT function sends requests to update or create resources to a RESTful API.
• RESTfulAPIDELETE
The RESTfulAPIDELETE function deletes resources from a RESTful API.
• RESTfulAPIPATCH
The RESTfulAPIPATCH function updates resources from a RESTful API.



Chapter 5. Working with the LDAP DSA
The LDAP DSA is used to access information that is stored in an LDAP server.
This type of DSA is read-only. You cannot use Netcool/Impact to insert new LDAP data into the server data
store. The LDAP DSA is a built-in DSA and does not require any additional installation or configuration.

LDAP DSA overview


Netcool/Impact uses the Lightweight Directory Access Protocol (LDAP) data source adaptor to retrieve
data managed by an LDAP server.

The LDAP DSA is a direct-mode data source adaptor that runs in-process with Netcool/Impact. This
DSA is automatically loaded during application run time. You do not have to start or stop this DSA
independently of the application. Netcool/Impact is not able to use this DSA to add, modify, or delete
information managed by the LDAP server.
To use the LDAP DSA, complete the following tasks:
• Create an LDAP DSA data model that provides an abstract representation of the data managed by the
LDAP server.
• Write one or more LDAP DSA policies that retrieve data from the underlying LDAP server.
For more information about the LDAP data model, see “LDAP data model” on page 37.
For more information about LDAP policies, see “LDAP policies” on page 39.

Supported LDAP servers


Netcool/Impact supports directory servers that fully implement the LDAP v2 and v3 specifications,
including Netscape, iPlanet, OpenLDAP, and Microsoft Active Directory servers.

LDAP data model


A Lightweight Directory Access Protocol (LDAP) data model is an abstract representation of data that is
managed by an LDAP directory server.
LDAP data models have the following elements:
• LDAP data sources
• LDAP data types
• LDAP data items

LDAP data sources


The Lightweight Directory Access Protocol (LDAP) data source represent LDAP directory servers.
Netcool/Impact supports the OpenLDAP and Microsoft Active Directory servers.
You create LDAP data sources in the GUI Server. You must create one data source for each LDAP server
that you want to access. The configuration properties for the data source specify connection information
for the LDAP server and any required security or authentication information.



LDAP data types
A Lightweight Directory Access Protocol (LDAP) data type represents a set of entities in an LDAP directory
tree.
The LDAP DSA determines which entities are part of this set in real time by dynamically searching the
LDAP tree for entities that match a specified LDAP filter within a certain scope. The DSA searches in
relation to a location in the tree that is known as the base context.
Use the GUI to create LDAP data types. You must create one LDAP data type for each set of entities that you
want to access. You must create a new field in the data type for each field of data you want to obtain from
the LDAP entities.
The following table shows the configuration properties for an LDAP data type.

Table 14. LDAP Data Type Configuration Properties

Configuration Description
Property

Data type name Name of the new LDAP data type.

Search scope Keyword that indicates the scope for the LDAP search. Possible values are:
OBJECT_SCOPE, ONELEVEL_SCOPE, and SUBTREE_SCOPE.
OBJECT_SCOPE causes the LDAP DSA to search only the specified base context
for matches.
ONELEVEL_SCOPE causes the DSA to search only the child entities of the base
context for matches.
SUBTREE_SCOPE causes the DSA to search all descendants of the base
context.

Base context Location in the LDAP tree relative to which the LDAP DSA searches for matching
entities. An example is ou=people, o=IBM.com.

Key search field Attribute in the LDAP entity that uniquely identifies it as a key. Used when you
retrieve data items from an LDAP data type with the GetByKey function in a
policy.

Display name field Attribute in the LDAP entity that is logged when you log the entity in a policy.

Restriction filter LDAP search filter as described in Internet RFC 2254: String Representation of
LDAP Search Filters.

LDAP data items


A Lightweight Directory Access Protocol (LDAP) data item represents an entity in the LDAP directory tree.
Each field in an LDAP data item corresponds to an attribute in the LDAP entity.
You use the GUI to view LDAP data items. You cannot use the GUI to add, modify, or delete LDAP data
items.



LDAP policies
Information from LDAP data sources is retrieved by the LDAP policies, which are Netcool/Impact policies.
You cannot add, modify, or delete LDAP data from within a policy.

Retrieving data from an LDAP data source


You can retrieve data, by key, filter, or link, from an LDAP data source by using the GetByKey, GetByFilter,
and GetByLinks functions when you write a policy.
The following table describes the functions that retrieve LDAP data.

Table 15. Functions that Retrieve LDAP Database Data

Function Description

GetByKey Retrieves data items, or entities in the LDAP directory tree, whose key fields
match the specified key expression. The key field is configured in the Key search
field entry in the data type.

GetByFilter Retrieves data items whose field values match the specified LDAP filter string.

GetByLinks Retrieves data items that are dynamically or statically linked to another data item
using the Netcool/Impact GUI.

Example
The following example shows how to use GetByKey to retrieve data items, or entities in the LDAP
directory tree, whose key field matches the specified key expression. In this example, the LDAP data type
associated with a search scope in the tree is Customer and the key expression is 12345.

DataType = "Customer";
Key = 12345;
MaxNum = 1;

MyCustomer = GetByKey(DataType, Key, MaxNum);

The following example shows how to use GetByFilter to retrieve data items whose field values match
the specified LDAP filter string. The LDAP filter is part of the specification that is described in Internet
RFC 2254. In this example, the LDAP data type is Facility and the filter string is (|(facility=Wall
St.)(facility=Midtown)(facility=Jersey City)).

DataType = "Facility";
Filter = "(|(facility=Wall St.)(facility=Midtown)(facility=Jersey City))";
CountOnly = False;

MyFacilities = GetByFilter(DataType, Filter, CountOnly);

If the filter does not return any LDAP entities when you think it should, sometimes this can be fixed
by adding a * to the end of the search text. For example, instead of Filter = "(cn=myuser)", try
Filter= "(cn=myuser*)";
The following example shows how to use GetByLinks to retrieve data items that are statically or
dynamically linked to another data item by using the Netcool/Impact GUI. In this example, you use
GetByLinks to retrieve data items of type Customer that are linked to data items in the MyFacilities
array that is returned in the previous example.

DataType = {"Customer"};
Filter = "";
MaxNum = 1000;
DataItems = MyFacilities;
MyCustomers = GetByLinks(DataType, Filter, MaxNum, DataItems);

Note: A policy processing exception is generated if a null value is returned by either of the GetByFilter
or GetByKey functions when querying an LDAP data source.
To stop this exception from being generated, add the following property to $IMPACT_HOME/etc/
<ImpactServerName>_server.props:
impact.jndi.suppressnull=true
Restart the Impact server for change to take effect.
For detailed syntax descriptions of these functions, see the Policy Reference Guide.

Controlling the number of records returned from an LDAP server


Some LDAP servers are set up for paged data. By default, Impact only requests one page of data, so for
these servers, the number of rows returned, will be restricted to the paging size on the LDAP server. The
following parameters can be added to the LDAP .type file to control the number of records returned from
the LDAP server: COUNTLIMIT and PAGESIZE.
PAGESIZE
This parameter specifies the number of records Impact requests per page. The default is 0 (no
paging). By default, Impact will only ask for one page from the LDAP server. If the LDAP server has a
page size set to 1000, then only 1000 records will be returned. When PAGESIZE is set, Impact will
ask the LDAP server for PAGESIZE number of records, in a loop, and will continue to ask the LDAP
server for records until all records are returned. Setting PAGESIZE means the Impact server is no
longer limited by the paging restrictions in the LDAP server.
COUNTLIMIT
This parameter specifies the number of records returned in a single search. When paging is enabled,
this has no effect, except that the value must be greater than or equal to the page size. For example,
if you set PAGESIZE to 100 and COUNTLIMIT to 90, the server returns "LDAP error code 4 -
sizeLimit Exceeded" because the server tried to read a page of 100 records and the count limit
is only 90. If COUNTLIMIT is specified and PAGESIZE is not, then the number of rows is limited to
COUNTLIMIT. The default is 0 (no limit is set, except the limit imposed by the LDAP server).
To set these parameters, edit the .type file for the associated LDAP data type and add the following
lines:

<LDAP TYPE>.LDAP.COUNTLIMIT=x
<LDAP TYPE>.LDAP.PAGESIZE=y

Where x is the limit to be used in a non paging setup and y is the page size in a paging setup.
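For example, for a hypothetical LDAP data type named Customer, the following illustrative settings enable
paging with 500 records per page and no additional count limit:

Customer.LDAP.COUNTLIMIT=0
Customer.LDAP.PAGESIZE=500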
You must restart the Impact server for the changes to take effect.
Note: The code can only take effect if there is a restriction filter set on the data type.

Changing how Impact handles referrals for LDAP DSA connections


Impact allows you to change the referral setting for LDAP searches by setting the
impact.ldap.referral property.
To specify how Impact processes referrals that it encounters, add the impact.ldap.referral property
in the <servername>_datasource.props file in IMPACT_HOME/etc. The property takes one of the
following strings:
follow: Impact follows referrals automatically.
ignore: Impact ignores referrals.
throw: Impact throws a ReferralException when a referral is encountered.
For example:

impact.ldap.referral=follow
This instructs Impact to follow referrals automatically.
If the impact.ldap.referral property is not specified, Impact uses the default value throw.
You must restart the Impact server for the changes to take effect.

International character support


The Lightweight Directory Access Protocol (LDAP) Data Source Adaptor (DSA) follows the LDAP v3
standard for international character support.
This standard specifies that non-ASCII characters must be stored in UTF-8 format in the LDAP server to
be handled correctly by client applications. If you use the LDAP DSA to access non-ASCII character data,
make sure that the data is encoded using the UTF-8 standard.

Chapter 6. Working with the web services DSA
The web services data source adaptor (DSA) is a direct-mode adaptor that Netcool/Impact automatically
loads during application run time.
You do not have to start or stop this DSA independently of the application. The web services DSA is
installed with Netcool/Impact so you do not have to complete any additional installation or configuration
steps.
To enable SSL connections between Netcool/Impact servers and external servers, refer to the Netcool/
Impact Administration Guide, within the security chapter go to the 'Enabling SSL connections with
external servers' topic.
The web services DSA provides support for WSDL version 1.1 and 2.0, and SOAP version 1.1.

Web services DSA overview


The web services data source adapter (DSA) is used to exchange data with external systems, devices, and
applications through web services interfaces.
The web services DSA uses blocking messages to communicate with web services. The use of blocking
messages forces Netcool/Impact to wait for a reply from the web service before it can continue
processing a policy. If Netcool/Impact does not receive a reply in the specified time frame, the DSA
times out and returns an error message to Netcool/Impact.
During policy run time, simple object access protocol (SOAP) messages are sent through the DSA to the
specified web service. The message structure is defined by a web services definition language (WSDL) file.
The message content is defined in the policy.
After the DSA sends a message, it waits for a reply from the web service. When the DSA receives the reply,
the returned data is converted into data items and returned to the Impact Server for further processing in
the policy.
Complete the following tasks when working with the web services DSA:
• Compile WSDL files that are associated with the interfaces provided by a web service.
• Create and configure a web services listener that listens on an HTTP port for SOAP/XML messages from
external applications.
• Write policies that send messages to a web service interface and handle the message replies.
• Write policies that handle SOAP/XML messages that are received by the web services listener.

Compiling WSDL files


Before you can use the web services DSA, you must compile a Web Services Description Language (WSDL)
file.
When you compile a WSDL file, you create a set of Java class files that contain a programmatic
representation of the WSDL data. This representation is then used by the web services DSA when it
sends messages to the web service and handles message replies.
WSDL files are XML documents that describe the public interface that is provided by a web service.
To compile the WSDL, you complete the following tasks:
1. Obtain the WSDL file for the web service.
2. Run the WSDL compiler script.
3. The JAR files are created in the $IMPACT_HOME/wslib directory on the primary server.
4. You can now use the web services wizard to create a policy with the newly compiled JAR file.



Note: If the WSDL file contains XSD imports, the imported files are provided separately. The WSDL files and
related XSD files must be placed in a directory whose path contains no spaces.
For more information about WSDL files, see the Web Services Description Working Group home page on
the W3C website at http://www.w3c.org/2002/ws/desc.

Obtaining WSDL files


Every web service must provide one or more Web Services Description Language (WSDL) files that define
its public interfaces. WSDLs are available from known URLs.

Procedure
Use a version of the WSDL file that defines the SOAP interface for the web service with the web services
DSA. WSDL files are most often made available by a web service at a known URL. For example, the
web service WSDL for a real-time stock quote service is available at http://www.webservicex.net/
stockquote.asmx?wsdl. You can compile a WSDL by using its URL or by using a copy of the file that is
stored locally in your file system.

Running the WSDL compiler script


The WSDL compiler script, nci_compilewsdl, creates a JAR file that contains a programmatic
representation of the WSDL data.

Procedure
1. Navigate to the $IMPACT_HOME/bin directory.
2. In the command prompt, run the compiler script with the following options:

nci_compilewsdl package_name wsdl_file destination

Where:
package_name
The name of the JAR file (without the .jar suffix) to be created by the script.
wsdl_file
The path or URL of the WSDL file to compile.
destination
The directory to copy the generated JAR file to. The default directory is $IMPACT_HOME/wslib.
You must enter the entire command in one line, without any line breaks. For example, on UNIX:

./nci_compilewsdl amazon US.wsdl $IMPACT_HOME/wslib

The example command compiles a WSDL file, US.wsdl that is in the current working directory, and
creates the amazon.jar file, in the $IMPACT_HOME/wslib directory.
Another example shows how to compile a WSDL file using a URL:

./nci_compilewsdl weather
http://www.webservicex.net/WeatherForecast.asmx?WSDL ../wslib

The weather.jar file is created under the $IMPACT_HOME/wslib directory.


3. Optional: If the destination directory for the script is different from the default one, you must copy the
generated JAR file into the $IMPACT_HOME/wslib directory so that Netcool/Impact policies can use
it.



Recompiling new and changed WSDL files
If you change an existing WSDL file or add a new file that uses classes from an existing WSDL file, you
must compile the WSDL file again.

About this task


Netcool/Impact uses the Java archive files that the Java virtual machine stores in the $IMPACT_HOME/
wslib directory. You must remove the existing JAR file that is related to the WSDL file. Then, recompile
the WSDL file.

Procedure
1. Change an existing WSDL file or create a new file that references an existing file.
2. Move all the Java archive files from the $IMPACT_HOME/wslib directory to a temporary location.
3. Restart Netcool/Impact.
4. Compile the WSDL file.
5. If the compiled WSDL file is not saved to the $IMPACT_HOME/wslib directory, move the new JAR file
to the $IMPACT_HOME/wslib directory.
6. Move all the Java archive files, except for the files that you either changed or referenced in your new
WSDL file, from the temporary directory to the $IMPACT_HOME/wslib directory.
If you copy back the original versions of the files that you changed, your changes are overwritten.

Enabling and disabling proxy settings using WSInvokeDL


Use the following settings in a policy if you are using the Web Service server behind a proxy server.

Example

CallProps=NewObject();
CallProps.ProxyEnabled=true;
CallProps.ProxyHost=HostName;
CallProps.ProxyPort=PortNumber;

//The following are optional if required by the Proxy server:

CallProps.ProxyDomain=DomainName;
CallProps.ProxyUsername=Username;
CallProps.ProxyPassword=Password;

//Password can be plain text or encrypted value using nci_crypt script.


//If it's an encrypted password, the following property must be used:

CallProps.DecryptPassword=true;

You pass CallProps as an additional argument to the WSInvokeDL command.


For example:

WSInvokeDLResult = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams, CallProps);

Compiling WSDL files in an Impact split installation


If you want to compile WSDL files in an Impact split installation, you need to compile the WSDL files from
the command line on the backend Impact server before you run the wizard.
About this task
In an Impact split installation, compile the WSDL files from the command line on the backend Impact
server before you run the wizard, as described in the following procedure.
Procedure

1. Compile the WSDL files on the backend Impact server to create the .jar file by using bin/
nci_compilewsdl.
2. Run the Web Services wizard, specify the URL or path to the WSDL file, select the option Select a
previously generated jar file, and enter the same WSDL file and package name that you used in step 1.
3. Complete the Web Services wizard.

Web services DSA functions


The web services DSA provides a set of special functions that you use to send messages from Netcool/
Impact to a web service.
• WSSetDefaultPKGName
• WSNewObject
• WSNewSubObject
• WSNewArray
• WSNewEnum
• WSInvokeDL

WSSetDefaultPKGName
The WSSetDefaultPKGName function sets the default package that is used by WSNewObject and
WSNewArray.
The package name is the name that you supplied to the nci_compilewsdl script when you compiled
the WSDL file for the web service. It is also the name of the JAR file that is created by this script, without
the .jar suffix.

Syntax
This function has the following syntax:

WSSetDefaultPKGName(PackageName)

Parameters
The WSSetDefaultPKGName function has the following parameter.

Table 16. WSSetDefaultPKGName function parameter

Parameter Format Description

PackageName String Name of the default WSDL package used by WSNewObject and
WSNewArray.

Example
The following example sets the default package that is used by subsequent calls to WSNewObject and
WSNewArray to google.

WSSetDefaultPKGName("google");



WSNewObject
The WSNewObject function creates an object of a complex data type as defined in the WSDL file for the
web service.
You use this function when you are required to pass data of a complex type to a web service as a message
parameter.

Syntax
This function has the following syntax:

Object = WSNewObject(ElementType)

Parameters
This WSNewObject function has the following parameter.

Table 17. WSNewObject function parameter

Parameter Format Description

ElementType String Name of the complex data type that is defined in the WSDL
file. The name format is [Package.]TypeName, where
Package is the name of the package you created when you
compiled the WSDL file, without the .jar suffix.

Return Value
A new web services object.

Examples
The following example shows how to use WSNewObject to create a web services object, where you
previously called WSSetDefaultPKGName in the policy. This example creates an object of the data type
ForwardeeInfo as defined in the mompkg.jar file compiled from the corresponding WSDL.

// Call WSSetDefaultPKGName
WSSetDefaultPKGName("mompkg");

// Call WSNewObject

MyObject = WSNewObject("ForwardeeInfo");

The following example shows how to use WSNewObject to create a web services object, where you did
not previously call WSSetDefaultPKGName in the policy.

// Call WSNewObject

MyObject = WSNewObject("mompkg.ForwardeeInfo");



WSNewSubObject
The WSNewSubObject function creates a child object that is part of its parent object and has a field or
attribute name of ChildName.

Syntax
This function has the following syntax:

Object = WSNewSubObject(ParentObject, ChildName)

Parameters
This WSNewSubObject function has the following parameters.

Table 18. WSNewSubObject function parameters

Parameter Format Description

ParentObject String Name of the parent object

ChildName String Name of the new child object

Return Value
A new web services child object.

Examples
The following example shows how to use WSNewSubObject to create a web services child object:

// Call WSNewSubObject

ticketId=WSNewSubObject(incident, "TICKETID");

WSNewArray
The WSNewArray function creates an array of complex data type objects or primitive values, as defined in
the WSDL file for the web service.
You use this function when you are required to pass an array of complex objects or primitives to a web
service as message parameters.

Syntax
This function has the following syntax:

Array = WSNewArray(ElementType, ArrayLength)

Parameters
The WSNewArray function has the following parameters:



Table 19. WSNewArray function parameters

Parameter Format Description

ElementType String Name of the complex object or primitive data type that is defined
in the WSDL file. The name format is [Package.]TypeName,
where Package is the name of the package you created when
you compiled the WSDL file, without the .jar suffix. The
package name is required only if you did not previously call the
WSSetDefaultPKGName function in the policy.

ArrayLength Integer Number of elements in the new array.

Return Value
The WSNewArray returns the new array that is created by the function.

Examples
The following example shows how to use WSNewArray to create a web services array, where you
previously called WSSetDefaultPKGName in the policy. This example creates an array of the data type
String as defined in the mompkg.jar file that is compiled from a WSDL file.

// Call WSSetDefaultPKGName

WSSetDefaultPKGName("mompkg");

// Call WSNewArray

MyArray = WSNewArray("String", 4);

The following example shows how to use WSNewArray to create a web services array, where you did not
previously call WSSetDefaultPKGName in the policy.

// Call WSNewArray

MyArray = WSNewArray("mompkg.String", 4);

The following example invokes a web service method called runPolicy and passes in a string array as
a parameter to the method. The policy creates a WSNewArray object and populates the object with 3
elements. Note that the WSNewArray object is used only for arrays of primitives. For arrays of complex types,
you need to create a WSNewSubObject for each array element. Also, in general it is easier to use the web
services wizard to generate the web services policy from a WSDL, rather than manually creating the web
services policy.

RunPolicyDocument=WSNewObject("com.myexample.RunPolicyDocument");
_RunPolicy=WSNewSubObject(RunPolicyDocument,"RunPolicy");

_Arg0=WSNewArray("java.lang.String",3);
_Arg0[0] = 'aaa';
_Arg0[1] = 'bbb';
_Arg0[2] = 'ccc';
_RunPolicy.Arg0Array=_Arg0;

WSParams = {RunPolicyDocument};

WSService = 'MyWebService';
WSEndPoint = 'http://localhost:8888/MyWebService';
WSMethod = 'runPolicy';

WSInvokeDLResult = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams);



WSInvokeDL
The WSInvokeDL function makes web services calls when a Web Services Description Language (WSDL)
file is compiled with nci_compilewsdl, or when a policy is configured using the Web Services wizard.

Syntax
This function has the following syntax:

[Return] = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams, [callProps])

This function returns the value of your target web services call.

Parameters
The WSInvokeDL function has the following parameters:

Table 20. WSInvokeDL function parameters

Parameter Format Description

WSService String This web service name is defined in the /definitions/service
element of the WSDL file.

WSEndPoint String The web service endpoint URL of the target web service.

WSMethod String The web service method defines which method you would like to call in
WSInvokeDL().

WSParams Array The web services operation parameters are defined by /definitions/
message/part elements in the WSDL file. It comprises an array that
contains all of the parameters that are required by the specified web
service operation.

callProps String, The optional container in which you can set any of the properties, which
Boolean, are listed in the callProps properties section.
integer

callProps properties
Remember: Any options that are set in callProps must precede the actual call to WSInvokeDL.
CacheStub
Caches generated stubs. This value must be set to true if either or both of the following properties
are enabled, ReuseHttpClient, MaintainSession.
Examples of usage:

callProps.CacheStub=true;

callProps.ReuseHttpClient = true;

CharSet
Sets the encoding other than UTF-8.
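Example usage (the charset name shown is illustrative):

callProps.CharSet="ISO-8859-1";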
Chunked
Divides the packets into small chunks if the server supports the feature. The default property is true.



Tip: If you receive the following error message when running the WSInvokeDL function then set this
property to false.
Transport error: 11 Error: Length Required in policy
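Example usage (disabling chunked transfer encoding):

callProps.Chunked=false;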

ConnectionManagerTimeout

Sets how long (in milliseconds) a client should wait for a free connection before timing
out. The default is 30000 milliseconds.
Example usage:

callProps.ConnectionManagerTimeout=30000;

Starting with 7.1.0.30, the ConnectionManagerTimeout property has been deprecated and you
should use WSTimeout instead.
CustomHeaders
Adds custom header values other than the headers that are already supported in the documentation.
DecryptPassword
Enables the decryption of an encrypted password in a policy.
EnableWSS
Enables Web Service Security. If you specify EnableWSS, you must also specify the following
properties:
• WSSRepository, which specifies the path location of WSS Repository.
• WSSConfigFile, which specifies configuration file for EnableWSS.

• WSSPolicyFile, which specifies an optional WS-Policy file.


Example:

callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/Sample03_wss.xml";
callProps.WSSPolicyFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/policy03.xml";

WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams, callProps);

GetHeaders
Retrieves the Web Service headers and returns them in the ResponseHeaders variable.
To retrieve the Web Service headers, use the following:

callProps.GetHeaders = true;

You can access individual headers with dot or bracket notation. For example, to access the Content-
Language header:

Log(ResponseHeaders);
ContentLanguage = ResponseHeaders["Content-Language"];
Log(ContentLanguage);

Sample output:

Parser log: "Context"=(Content-Language=en-GB, Date=Fri, 24 Feb 2017 16:41:12 GMT,
Content-Length=259, Content-Type=text/xml; charset=UTF-8, X-Powered-By=Servlet/3.0)
Parser log: en-GB

HandleFault
Is used to manage faults. Fault messages are returned from the web services server to indicate that
there is a problem.



HTTP
The default HTTP version is 1.1. You can use this property to set the protocol version to 1.0.
KeepAlive
Can be used to enable the KeepAlive header. If set to true, connections are kept alive for reuse
by multiple sequential requests and responses. If set to false, connections are closed after the
response is sent. It requires HTTP 1.1.
Example usage:

callProps.KeepAlive = true;

LogSoapMessages
You can enable logging of outgoing and incoming soap messages, by setting the LogSoapMessages
property to true. This property should only be used for debugging purposes and should not be
enabled permanently in a production environment.

callProps = NewObject();
callProps.LogSoapMessages = "true";
WSInvokeDLResult = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams,
callProps);

The messages will be logged to $IMPACT_HOME/wlp/usr/servers/<server>/logs/messages.log.
MaintainSession
Sets the session management to enabled status. When session management is enabled, the system
maintains the session-related objects across the different requests. The parameter must be set to
true or false.
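Example usage:

callProps.MaintainSession=true;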
MaxConnectionsPerHost

Sets the maximum number of parallel connections to a host. This value can be increased
if the web service client is experiencing timeouts while waiting for a free connection. This number
cannot exceed the value of MaxTotalConnections.
Example usage:

callProps.MaxConnectionsPerHost=10;

MaxTotalConnections

The total number of connections for the web service client across all hosts.
Example usage:

callProps.MaxTotalConnections=20;

Password
Specifies the password for basic authentication.
PreemptiveAuth
Enables preemptive authentication.
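Example usage (assuming a Boolean value, consistent with the other enable-style properties):

callProps.PreemptiveAuth=true;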
ReuseHttpClient
Enables the underlying infrastructure to reuse the HTTP client if one is available. The
ReuseHttpClient is useful if the client is using HTTPS to communicate with the server. The
parameter must be set to true or false.



SocketTimeout

Specifies how long the client should wait between data packets. If no data is received
before the SocketTimeout is reached, the connection will be considered inactive. The default is 60000
milliseconds.
Example usage:

callProps.SocketTimeout=9000;

Timeout
This property is used in a blocking scenario. The client system times out after the specified amount of
time.
You can optionally set a global web Service DSA call timeout property called
impact.server.dsainvoke.timeout. The property must be added to the Netcool/Impact server
property file, <servername>_server.props. It is best to use the timeout property on a per policy
basis as specified in callProps.Timeout.
The value is set in milliseconds, for example, impact.server.dsainvoke.timeout=30000 (30
seconds).
When you set the properties in any of the .props files, restart theNetcool/Impact server to
implement the changes.
If the impact.server.dsainvoke.timeout property is set, all WSInvokeDL calls use the same
timeout setting.
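Example usage (the value is in milliseconds, consistent with the related timeout properties):

callProps.Timeout=30000;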
TrustCertificate

Allows the WSInvokeDL function to connect to an SSL endpoint without having to import
the chain into the trust store. The endpoint's SSL certificate must still be valid and not expired:

callProps.TrustCertificate=true;

Username
Specifies the user name for basic authentication.
WSTimeout

Specifies how long the client should wait when establishing a connection with the remote
host. The default is 60000 milliseconds.
Example usage:

callProps.WSTimeout=60000;

XMLFactory

When set to true, the function will use an internal XML library to generate the SOAP XML
payload. This may be useful in resolving compatibility issues with the default XML library.
Example usage:

callProps.XMLFactory=true;

Examples
Remember: Any options that are set in callProps must precede the actual call to WSInvokeDL.



Apart from its primary usage, the callProps container can be used to enable security. For example, if
the basic authentication is enabled through the wizard, the sample policy contains the following lines:

callProps.Username="username";
callProps.Password="password";

The following example shows how to use the WSInvokeDL function to send a message to the target web
service.
Example using IPL:

ServiceName = "StockQuote";
EndPointURL = "http://www.webservicex.net/stockquote.asmx";
MethodName = "GetQuote";
ParameterArray = { "IBM" };
callProps = NewObject();

Results = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams, callProps);

Example using JavaScript:

ServiceName = "StockQuote";
EndPointURL = "http://www.webservicex.net/stockquote.asmx";
MethodName = "GetQuote";
ParameterArray = [ "IBM" ];

Results = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams, [callProps])

Use the DecryptPassword policy parameter to enable the decryption of an encrypted password in a
policy that is used with the callProps function:

callProps=NewObject();
callProps.Password="<Web Service password encrypted using nci_crypt>";
callProps.DecryptPassword=true;
//default is false and must be plain text password.

The password is decrypted at policy runtime and is used in plain text internally to Netcool/Impact.
You can also use the CustomHeaders parameter to add custom HTTP header values other than the
headers that are already supported in the documentation.

Headers = NewObject();
Headers.HeaderName1='HeaderValue1';
Headers.HeaderName2='HeaderValue2';
callProps.CustomHeaders=Headers;

Use the HandleFault parameter to handle fault messages.

callProps=NewObject();
callProps.HandleFault=true;

When the value is false (the default), the policy throws an exception. When the value is true, the policy returns
only the fault string message. No fault code is returned.
If the value is true, the return is similar to the following example.

<?xml version=\"1.0\" encoding=\"UTF-8\"?>


<Fault>
<faultstring> some message here
</faultstring>
</Fault>

To turn off the formatting and return the message in plain text add the following parameter anywhere in
<serverName>_server.props:

impact.wsinvoke.formatfaultmessage=false

The changes are implemented dynamically without restarting the server.



WSNewEnum
The WSNewEnum function returns an enumeration value to a target web service.

Syntax
This function has the following syntax:

[Return] = WSNewEnum(EnumType, EnumValue);

Parameters
The WSNewEnum function has the following parameters.

Table 21. WSNewEnum function parameters

Parameter Format Description

EnumType String The enumeration class name that exists in the package that is
created by nci_compilewsdl

EnumValue String The enumeration value to return

Return Value
A new enumeration type and value.

Example
The following example shows how to use the WSNewEnum function to send a message to the target web
service.

euro = WSNewEnum("net.webservicex.www.Currency", "EUR");


usd = WSNewEnum("net.webservicex.www.Currency", "USD");

Writing Web services DSA policies


You can complete the following tasks with the web services DSA in a Netcool/Impact policy:
• Send messages to a web service
• Handle data that is returned from a web service as a message reply

Sending messages
You can use the web services DSA to send messages.

Procedure
1. Call WSSetDefaultPKGName.
2. Add message parameters with any required data.
3. Call WSInvoke or WSInvokeDL.
Important: [The WSInvoke feature is deprecated.]
When a WSDL file is compiled with nci_compilewsdl or by the web services DSA wizard, you must
use the WSInvokeDL() function to make web services calls.



Calling WSSetDefaultPKGName
The default package used for communication with the web service is set by the WSSetDefaultPKGName
function.
The package name can be the name you supplied to the nci_compilewsdl script when you compiled
the WSDL file for the web service. This name is also the name of the JAR file created by this script, without
the .jar suffix. The package name can also be any other Java package that resides in the CLASSPATH
and contains the class definition of an object you want to use with the WSNewObject or WSNewArray
functions (for example, java.util).
To set the default package, you call WSSetDefaultPKGName and pass the name of the package, without
the .jar suffix.

Example
The following example shows how to set the default package:

WSSetDefaultPKGName("google");

In this example, google.jar is the package you created when you compiled the WSDL file for the web
service.
Note: If you do not call this function before you call WSNewArray or WSNewObject, you must explicitly
specify the package name in those function calls.

Examples using web services DSA functions


The following examples illustrate how the web services DSA functions work and demonstrate their use.

Example using web services DSA functions to create a real-time stock quote service
You can add a combination of web services DSA functions to create a policy. The following IPL policy
example uses a stock quote service.

WSSetDefaultPKGName("impactstockquote");
endpoint ="http://www.webservicex.net/stockquote.asmx";

quoteDoc=WSNewObject("net.webservicex.www.GetQuoteDocument");

quote = WSNewSubObject(quoteDoc, "GetQuote");


quote.Symbol="IBM";

params = { quoteDoc };
return = WSInvokeDL("StockQuote", endpoint, "GetQuote", params);
result = return.GetQuoteResponse.GetQuoteResult;
log("result = " + result);

The following example is the same but uses JavaScript, where the params = [quoteDoc]; value is
enclosed in square brackets ([]).

WSSetDefaultPKGName("impactstockquote");
endpoint ="http://www.webservicex.net/stockquote.asmx";

quoteDoc=WSNewObject("net.webservicex.www.GetQuoteDocument");

quote = WSNewSubObject(quoteDoc, "GetQuote");


quote.setSymbol("IBM");

params = [ quoteDoc ];
return = WSInvokeDL("StockQuote", endpoint, "GetQuote", params);
result = return.GetQuoteResponse.GetQuoteResult;
Log("result = " + result);



Example that uses web services DSA functions to create a Global Weather service
The policy in IPL includes the following web services DSA functions: WSSetDefaultPKGName, WSNewObject, WSNewSubObject, and WSInvokeDL.

WSSetDefaultPKGName("impactglbweather");
endpoint ="http://www.webservicex.net/globalweather.asmx";
weatherdoc=WSNewObject("net.webservicex.www.GetWeatherDocument");

weather = WSNewSubObject(weatherdoc, "GetWeather");


weather.CityName = "New York";
weather.CountryName = "United States";
params = { weatherdoc };
return = WSInvokeDL("GlobalWeather", endpoint, "GetWeather", params);
result = return.GetWeatherResponse.GetWeatherResult;
log("result = " + result);

The following example is the same but uses JavaScript, where the params value is enclosed in square brackets ([]).

WSSetDefaultPKGName("impactglbweather");
endpoint ="http://www.webservicex.net/globalweather.asmx";
weatherdoc=WSNewObject("net.webservicex.www.GetWeatherDocument");

weather = WSNewSubObject(weatherdoc, "GetWeather");


weather.setCityName("New York");
weather.setCountryName("United States");
params = [ weatherdoc ];
return = WSInvokeDL("GlobalWeather", endpoint, "GetWeather", params);
result = return.GetWeatherResponse.GetWeatherResult;
Log("result = " + result);

Example that uses web services DSA functions to create a currency converter service
The policy in IPL includes the following web services DSA functions: WSSetDefaultPKGName, WSNewObject, WSNewSubObject, WSInvokeDL, and WSNewEnum.

WSSetDefaultPKGName("impactcurrencyconverter");
endpoint ="http://www.webservicex.net/CurrencyConvertor.asmx";
convDoc=WSNewObject("net.webservicex.www.ConversionRateDocument");

rate = WSNewSubObject(convDoc, "ConversionRate");

fromCur = WSNewEnum("net.webservicex.www.Currency", "EUR");


rate.FromCurrency = fromCur;
toCur = WSNewEnum("net.webservicex.www.Currency", "USD");
rate.ToCurrency = toCur;

params = { convDoc };
return = WSInvokeDL("CurrencyConvertor", endpoint, "ConversionRate", params);
result = return.ConversionRateResponse.ConversionRateResult;
log("result = " + result);
log("--------------------------------");

The following example is the same but uses JavaScript, where the params value is enclosed in square brackets ([]).

WSSetDefaultPKGName("impactcurrencyconverter");
endpoint ="http://www.webservicex.net/CurrencyConvertor.asmx";
convDoc=WSNewObject("net.webservicex.www.ConversionRateDocument");

rate = WSNewSubObject(convDoc, "ConversionRate");

fromCur = WSNewEnum("net.webservicex.www.Currency", "EUR");


rate.setFromCurrency(fromCur);
toCur = WSNewEnum("net.webservicex.www.Currency", "USD");
rate.setToCurrency(toCur);

params = [ convDoc ];
return = WSInvokeDL("CurrencyConvertor", endpoint, "ConversionRate", params);
result = return.ConversionRateResponse.ConversionRateResult;
Log("result = " + result);
Log("--------------------------------");



Web services listener
The web services listener is a service that provides a Netcool/Impact web services interface to other
applications to run Netcool/Impact policies.
Before you can use the web services listener, you must assign the impactAdminUser or
impactWebServiceUser role to the user who uses the web services.
See the section Working with the command line, Netcool/Impact Roles in the main documentation for
more information.

Web services listener process


Policy requests from external applications are managed by the web services listener.
The web services listener listens at an HTTP port for SOAP/XML messages from external applications.
These messages request that Netcool/Impact run a policy. When the listener receives a request, it sends the request and any runtime parameters to the Netcool/Impact policy engine and returns the policy results to the calling application through the HTTP port.
The requests can also be made over the HTTPS protocol.
Netcool/Impact has a web services listener implementation that is not compatible with versions of Netcool/Impact earlier than 7.1.
If you had a previous implementation that includes any of the following items, you must take the following steps:
• A Java client that connects to a web services listener: If the Java client is using the Jar file from a
previous version of Netcool/Impact, the Java client must be modified to handle the new methods and
new parameter names or a new Java client class must be used. You can use the WSTestDL.java
file as an example to model the new Java client. The WSTestDL.java file is in the $IMPACT_HOME/
integrations/web-service-listener/ directory. The WSTestDL.java file differs from any
previous version.
• An XML SOAP envelope client such as SoapUI: The XML must be rewritten, or you must create a new project for Netcool/Impact by using the WSDL file in the $IMPACT_HOME/integrations/web-service-listener directory.
• A Netcool/Impact policy that uses the web services DSA to call the external Impact Web Services Listener with the WSInvokeDL function and classes that were compiled from a previous version of the WSDL file: You must compile a new package with the updated WSDL file and update the policy. See the examples in “Compiling the WSDL file for the web services listener” on page 59.

Setting up the web services listener


The web services listener is automatically installed when you install Netcool/Impact. You do not need to
complete any additional configuration steps.
You can obtain the web services client information, including the WSDL file and a set of utilities that
help you work with web services, at the $IMPACT_HOME/integrations/web-service-listener
directory.
The following files are provided with the web services listener.
ImpactWebServiceListenerDLService.wsdl
The Web Service Listener WSDL file.
WSListenerTestPolicy.ipl
A sample policy.
WSTestDL.java
A sample client file.



README
A readme file.
bin/test_wslistener.bat
The script that runs the sample client.
/lib
This directory contains JAR files for sample application.
$IMPACT_HOME/bin/nci_findendpoint
The script that you can use to find the SOAP endpoint for an Impact Server.

Compiling the WSDL file for the web services listener


Use the web services listener WSDL file that is provided in the integrations/web-service-
listener directory.

About this task


This WSDL file, ImpactWebServiceListenerDLService.wsdl, provides a set of classes that are compatible with the Apache Axis2 compiler. The package name is com.micromuse.response.common.types.

Procedure
Compile the web services listener WSDL file in the usual way. See “Running the WSDL compiler script” on
page 44.
When the web services listener WSDL file has been compiled, you can run a policy as shown in the following example.

Example
To create a Java-based client to run a policy, use the WSTestDL.java file as an example to model the code. If you use an XML SOAP envelope client, the XML is similar to the following examples.
Run a policy without input parameters; no results are returned.

<soapenv:Envelope
xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:typ="http://response.micromuse.com/types">
<soapenv:Header/>
<soapenv:Body>
<typ:runPolicy>
<arg0>PolicyNameTest</arg0>
<arg2>false</arg2>
</typ:runPolicy>
</soapenv:Body>
</soapenv:Envelope>

Run a policy with one input parameter, returns a result.

<soapenv:Envelope
xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:typ="http://response.micromuse.com/types">
<soapenv:Header/>
<soapenv:Body>
<typ:runPolicy>
<arg0>PolicyNameTest</arg0>
<!--Zero or more repetitions:-->
<arg1>
<desc>ProductName</desc>
<format>String</format>
<label>ProductName</label>
<name>ProductName</name>



<value>Impact V7.1</value>
</arg1>
<arg2>false</arg2>
</typ:runPolicy>
</soapenv:Body>
</soapenv:Envelope>

Run a policy with multiple input parameters; a result is returned.

<soapenv:Envelope
xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:typ="http://response.micromuse.com/types">
<soapenv:Header/>
<soapenv:Body>
<typ:runPolicy>
<arg0>PolicyNameTest</arg0>
<!--Zero or more repetitions:-->
<arg1>
<desc>ProductName</desc>
<format>String</format>
<label>ProductName</label>
<name>ProductName</name>
<value>Impact</value>
</arg1>
<arg1>
<desc>Company</desc>
<format>String</format>
<label>Company</label>
<name>Company</name>
<value>IBM</value>
</arg1>
<arg1>
<desc>Revision</desc>
<format>Integer</format>
<label>Revision</label>
<name>Revision</name>
<value>7</value>
</arg1>
<arg2>true</arg2>
</typ:runPolicy>
</soapenv:Body>
</soapenv:Envelope>

Writing web services listener policies


Web service listener policies are run in response to web messages that are sent to Tivoli Netcool/Impact
from other applications.
The web messages that are sent to Tivoli Netcool/Impact specify the name of the policy to be run and
a set of runtime parameters. External applications use runtime parameters to pass data to the policy.
The web services listener does not pass an event container to the policy engine. Web services listener
policies return data to calling applications in the form of a data item that is called WSListenerResult.
The policies return one data item at a time.

Runtime parameters
Runtime parameters in web services listener policies are handled in the same way as any other policy.
You can use the variable name to reference the parameters in the policy. No initialization of the variables
is required.
For example, if an incoming web services message contains runtime parameters named Param1,
Param2, and Param3, when it runs the policy the web services listener creates new variables in the
policy context with those parameter names. The following code shows how to reference those variables in
a policy:

// Log incoming runtime parameters

Log("Value of Param1: " + Param1);


Log("Value of Param2: " + Param2);
Log("Value of Param3: " + Param3);



Note that all runtime parameters in a web services listener policy are strings. No other type of value can
be passed to such a policy from calling applications.
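Because every runtime parameter arrives as a string, convert a value explicitly before you use it in arithmetic. The following lines are a minimal sketch that assumes Param3 carries a numeric string such as "42"; Int is the IPL conversion function used here.

// Convert the string parameter to an integer before calculating with it
Count = Int(Param3);
Total = Count + 10;
Log("Value of Total: " + Total);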

WSListenerResult
WSListenerResult is a special data item that contains the result of a web services policy.
You can use the NewObject function to create the WSListenerResult data item and populate its member
variables with values. When the policy terminates, this data item is passed to the web services listener to
be returned to the calling application.
The following example shows how to create the WSListenerResult data item and populate its member
values.

WSListenerResult = NewObject();
WSListenerResult.Node = "192.168.1.1";
WSListenerResult.Location = "New York";
WSListenerResult.Summary = "Node not responding to ping.";

WSListenerResult can contain other data types. The caller parses the returned object to extract the required data from the result. The name element contains the field name, through which the caller can identify the type of data that is used.
For example, the "SERVICEREQUESTIDENTIFIER" column from the database, is an Integer.
The assignment WSListenerResult.SERVICEREQUESTIDENTIFIER=_result[0]
.SERVICEREQUESTIDENTIFIER assigns the Integer value to the result. The result is the return value
from the GetByFilter function. If the value of the service request is "1", then:
• The getValue method from the policyExecutionResult returns 1.
• The getName method from the policyExecutionResult returns SERVICEREQUESTIDENTIFIER.
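
The following IPL fragment is a minimal sketch of this pattern; the data type name ServiceRequest and the filter expression are assumptions that you replace with values from your own data model.

// Retrieve a data item and return one of its fields to the calling application
_result = GetByFilter("ServiceRequest", "Node = 'router1'", false);
WSListenerResult = NewObject();
WSListenerResult.SERVICEREQUESTIDENTIFIER = _result[0].SERVICEREQUESTIDENTIFIER;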

Results from policy execution into WSListenerResult variable


WSListenerResult supports returning arrays of Impact objects. You can return one or more arrays of Impact objects. The client must set the return variable to true: in XML, <arg2>true</arg2>; in a Java class, runPolicy.setArg2(true);.
To return a WSListenerResult as one Impact object:

WSListenerResult = NewObject();
WSListenerResult.FirstName="MyName";
WSListenerResult.LastName="MyLastName";

To return one or more Array Of Impact Objects:

WSListenerResult=NewObject();
index = 0;
objects=[];
objects1=[];
while (index < 5) {
Obj = NewObject();
Obj.FirstName="FistNameJS"+index;
Obj.LastName="LastNameJS"+index;
Obj.Location="LocationJS"+index;

Obj1=NewObject();
Obj1.DOB=LocalTime(GetDate());
Obj1.PlaceOfBirth="City"+index;

objects[index]=Obj;
objects1[index]=Obj1;
index = index +1;

}
WSListenerResult.FirstObject=objects;
WSListenerResult.SecondObject=objects1;

The name of the array can be anything; it is not used as an element. In this example, the names are FirstObject and SecondObject.



To return combination of Array of Impact Objects and any other variable:

WSListenerResult=NewObject();
Log( "class of WSListenerResult: " + ClassOf(WSListenerResult));
index = 0;
objects={};
while (index < 5) {
Obj = NewObject();
Obj.FirstName="FistName"+index;
Obj.LastName="LastName"+index;
Obj.Location="Location"+index;
index = index +1;
objects=objects+{Obj};
}
WSListenerResult.ImpactObject=objects;
WSListenerResult.Product="Impact";

The return document: The XML that is returned from calling the runPolicy method has the parent element name "return", as in the following example:

<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
<soap:Body>
<a:runPolicyResponse xmlns:a="http://types.common.response.micromuse.com/">
<return>
<name>Product</name>
<value>Impact</value>
</return>
<return>
<name>Location</name>
<value>Location0</value>
</return>
<return>
<name>FirstName</name>
<value>FistName0</value>
</return>
<return>
<name>LastName</name>
<value>LastName0</value>
</return>
<return>
<name>Location</name>
<value>Location1</value>
</return>
<return>
<name>FirstName</name>
<value>FistName1</value>
</return>
<return>
<name>LastName</name>
<value>LastName1</value>
</return>
</a:runPolicyResponse>
</soap:Body>
</soap:Envelope>

SOAP endpoint
The Simple Object Access Protocol (SOAP) endpoint is a URL. It identifies the location on the built-in
HTTP service where the web services listener listens for incoming requests. Calling applications must
specify this endpoint when they send web services messages to Netcool/Impact.
The endpoint URL varies depending on the configuration of Netcool/Impact. The following URL uses the
default configuration.

http://<hostname>:<port>/jaxws/impact/ImpactWebServiceListenerDLIfc

Where <hostname> is the name of the system where Netcool/Impact is installed, <port> is the port
number that is used by the built-in HTTP service. The default port number is 9080.
The following example shows the endpoint URL for a web services listener that is running on a system
named impact_01 and uses the default port.

http://impact_01:9080/jaxws/impact/ImpactWebServiceListenerDLIfc



You can also determine the SOAP endpoint by using the nci_findendpoint script in the
$IMPACT_HOME/bin directory. When you run this script, it connects to the Name Server, looks up the
SOAP endpoint, and prints the URL to the standard output. The syntax of nci_findendpoint is as
follows:

nci_findendpoint server_name

Where server_name is the name of the Impact Server instance for example, NCI.
The web services call must provide the name of a user who has the ImpactAdminUser role in the Impact Server, and the password for this user. By default, the impactadmin user has the ImpactAdminUser role.

Authentication for the web services listener


Before you can use the web services listener, you must assign the impactAdminUser or
impactWebServiceUser role to the user who uses the web services.
If you use an XML envelope client such as SoapUI, you must configure the user name and password in the properties of the header. If you use a Netcool/Impact policy, you must configure the user name and password in the Web Services Security section.
For more information about how to assign these roles, see Working with command-line tools > Using WebServices through the command line > Mapping groups and users to roles in the Netcool/Impact information center or in the Administration Guide PDF file.

WSDL file
The Web Services Description Language (WSDL) file is an XML document that describes the web services
interface.
The WSDL file specifies three messages that define the terms of communication between Tivoli Netcool/
Impact and calling applications. Calling applications use these messages to log in to Tivoli Netcool/
Impact and to request the execution of a policy. You can also use the messages to respond to login
requests and return policy results. The WSDL file also specifies types of data that can be passed in the
body of the messages.
The WSDL specifies the following messages:
• runPolicy
• runPolicyResponse
• WSListenerException

runPolicy
The runPolicy message requests that Tivoli Netcool/Impact run the specified policy.
The following table shows the parameters in runPolicy.

Table 22. runPolicy

Parameter Description

arg0 Name of the policy to be run.

arg1 The runtime parameters to pass to the policy.

arg2 Specifies whether to return the results of the policy to the calling application.

A calling application sends this message. The web services listener responds by returning a message of
type runPolicyResponse.



An example using runPolicy. The target namespace is http://types.common.response.micromuse.com. The variable names are arg0, arg1, and arg2.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
xmlns:typ="http://types.common.response.micromuse.com/">
<soapenv:Header/>
<soapenv:Body>
<typ:runPolicy>
<arg0>PolicyName</arg0>
<arg1>
<desc>input_parameter</desc>
<format>String</format>
<label>input_parameter</label>
<name>input_parameter</name>
<value>value</value>
</arg1>
<arg2>true</arg2>
</typ:runPolicy>
</soapenv:Body>
</soapenv:Envelope>

Policy parameters
A policy can be configured to accept runtime parameters. For more information about how to define parameters, see Working with policy parameters.
To populate the parameters in your run request, declare each parameter by using an arg1 element.
The following table shows the parameters in policyUserParams.

Table 23. policyUserParams

Parameter Description

desc The description of the policy parameter. Can be left blank.

format Specify the parameter format type. Supported formats include String,
Long_String, Integer, Long,
Float, Double, Date, Timestamp, Boolean.

label The label of the policy parameter. Can be left blank.

name The name of the policy parameter. Must match the name of the parameter as
defined by the policy.

value The value of the policy parameter.

An example with multiple parameters.

<?xml version="1.0" encoding="UTF-8"?>


<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:typ="http://response.micromuse.com/types">
<soapenv:Header/>
<soapenv:Body>
<typ:runPolicy>
<arg0>PolicyName</arg0>
<!--Zero or more repetitions:-->
<arg1>
<desc>RTTS Trouble Ticket ID</desc>
<format>String</format>
<label>?</label>
<name>TICKET_ID</name>
<value>T0121</value>
</arg1>
<arg1>
<desc>Assigned Group</desc>
<format>String</format>
<label>?</label>



<name>ASSIGNED_GROUP</name>
<value>GROUP1</value>
</arg1>
<arg1>
<desc>Ticket Status</desc>
<format>String</format>
<label>?</label>
<name>TT_STATUS</name>
<value>ACK</value>
</arg1>
<arg2>true</arg2>
</typ:runPolicy>
</soapenv:Body>
</soapenv:Envelope>

Escaping Parameter Strings


If the parameter values are enclosed in quotation marks, the policy automatically strips off the enclosing quotation marks. For example, the parameter value <value>"Test"</value> appears in the policy without the surrounding quotation marks:

Log(input_parameter); // Test

The backslash character is treated as an escape character by the policy engine. If a parameter value contains a backslash character, you must double-escape the string; otherwise, the run attempt fails with a com.micromuse.common.parser.internal.core.ParseException error.
Example:

<arg1>
<desc>input_parameter</desc>
<format>String</format>
<label>input_parameter</label>
<name>input_parameter</name>
<value>File Server G:\\</value>
</arg1>
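
With the double-escaped value above, the parameter arrives in the policy with a single backslash. The following line is a minimal sketch that reuses the Log call from the earlier example.

Log(input_parameter); // File Server G:\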

You can use the EscapeString format in the format field to preserve backslashes without the need to double-escape them.
For example, the following parameter definition uses EscapeString to preserve the backslash in the parameter value.

<arg1>
<desc>input_parameter</desc>
<format>EscapeString</format>
<label>input_parameter</label>
<name>input_parameter</name>
<value>File Server G:\</value>
</arg1>

runPolicyResponse
The runPolicyResponse message is sent by the web services listener in response to a request from a
calling application to run a policy.
The runPolicyResponse contains a single parameter, result. This parameter contains an array of
name-value pairs that correspond to the member variables in the WSListenerResult data item that is
returned by the policy.
The web services listener sends this message to a calling application if the wantResult parameter was
specified as true in the originating runPolicy message.

WSListenerException
The WSListenerException message is sent by the web services listener in response to invalid
messages from a calling application.
The WSListenerException contains a single parameter named WSListenerException that provides
detail about the error.



Sample policy and sample client
A sample policy and a sample client, which you can use to learn about web services listener, are provided.
• The sample policy is WSListenerTestPolicy.ipl.
• The sample client is WSTestDL.java.
They are in the $IMPACT_HOME/integrations/web-service-listener directory. You can run the
sample client by using the test_wslistener script that is in the $IMPACT_HOME/integrations/
web-service-listener/bin directory.
1. To run the provided sample client, import WSListenerTestPolicy.ipl into Netcool/Impact.
2. Run the $IMPACT_HOME/bin/nci_findendpoint script to determine the endpoint of the Doc/
Literal listener.
3. Invoke the test_wslistener.bat script, passing the endpoint as an argument, along with an impactadmin user name and password, and optionally a policy name. For example:

test_wslistener.bat
http://ImpactHostName.ibm.com:9080/jaxws/impact/ImpactWebServiceListenerDLIfc
impactadmin netcool123

1. To create a Java Client to connect to the web services listener, use the WSTestDL.java file that is in
the $IMPACT_HOME/integrations/web-service-listener directory.
2. To start the policy by using an XML SOAP envelope client such as SoapUI, use examples like the following.
• Run a policy with one input parameter that does not return a result.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<typ:runPolicy xmlns:typ="http://response.micromuse.com/types">
<arg0>WSSSample03</arg0>
<arg1>
<desc>Product</desc>
<format>String</format>
<label>Product</label>
<name>Product</name>
<value>Impact</value>
</arg1>
<arg2>false</arg2>
</typ:runPolicy>
</soapenv:Body>
</soapenv:Envelope>

• Run a policy with multiple input parameters that returns a result.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
<soapenv:Body>
<typ:runPolicy xmlns:typ="http://response.micromuse.com/types">
<arg0>WSSSample03</arg0>
<arg1>
<desc>Product</desc>
<format>String</format>
<label>Product</label>
<name>Product</name>
<value>Impact</value>
</arg1>
<arg1>
<desc>Company</desc>
<format>String</format>
<label>Company</label>
<name>Company</name>
<value>IBM</value>
</arg1>
<arg2>true</arg2>
</typ:runPolicy>
</soapenv:Body>
</soapenv:Envelope>



Configuring a secure web services listener connection
To start the web services listener over SSL, you must generate a certificate in the Impact Server keystore
and import it into the truststore of the client.

About this task


The Liberty keystore that is used by Netcool/Impact contains a default certificate for the host where Netcool/Impact is installed. The certificate matches the fully qualified domain name (FQDN) or the short host name, depending on your environment.

Procedure
1. Use the following command to export the self-signed certificate to a file called mycertificate.

$IMPACT_HOME/sdk/bin/keytool -export -alias default -file mycertificate


-keystore $IMPACT_HOME/wlp/usr/servers/<server>/resources/security/key.jks

2. When prompted, enter the keystore password, which is the impactadmin user password that was configured at installation time.
3. Copy the mycertificate file to the location where you want to start the WebService call and import
the certificate into the truststore that is used by the Java process. By default the truststore that is
used by the Java process is in the $JAVA_HOME/jre/lib/security/cacerts file and its default
password is changeit.
This password is not managed or updated by the Netcool/Impact installer. If you change the password, you can, for your convenience, make it the same as the Liberty or Impact keystore password.

$JAVA_HOME/bin/keytool -import -alias default -file mycertificate


-keystore $JAVA_HOME/jre/lib/security/cacerts

If you are running the test_wslistener script from the Netcool/Impact installation, the
$JAVA_HOME that is used is $IMPACT_HOME/sdk.
4. When prompted enter the Java keystore password.
5. Use the following command to run the test_wslistener script:

test_wslistener.bat https://<hostname>:<https port>


/jaxws/impact/ImpactWebServiceListenerDLIfc
impactadmin <password> <optional policy>

Where <hostname> is the host name you configured in the CN field of the dname argument of the
certificate.
6. Run the following command to execute the sample client on a remote host or the Impact Server.

$JAVA_HOME\bin\java -Djavax.net.ssl.keyStore="$JAVA_HOME\<jre>\lib\security\cacerts"
-cp ".;<$IMPACT_HOME>\integrations\web-service-listener\lib\*"
WSTestDL https://<impacthost>:9081/jaxws/impact/ImpactWebServiceListenerDLIfc
impactadmin <password> <policy>

Where <password> is the impactadmin user password and <policy> is the name of the policy to
execute.

Creating policies by using the web services wizard


You can use the web services wizard to develop policies. To do so, you connect to the GUI and follow the on-screen prompts.

Procedure
1. In the Policies tab, select the arrow next to the New Policy icon. To run the Web services wizard,
select Use Wizard > Web Services.



2. In the Web Services Invocation-Introduction window, type your policy name in the Policy Name field. Click Next to continue.
3. In the Web Services Invocation-WSDL file and Jar File window, in the URL or Path to WSDL field, enter the URL or a path for the target WSDL file, for example http://www.webservicex.net/stockquote.asmx?wsdl.
In instances where the GUI server is installed separately from the back-end server, the file path for the WSDL file refers to the back-end server file system, not the GUI server file system. If you enter a URL for the WSDL file, that URL must be accessible to the back-end Impact Server host and the GUI server host.
Note: If the WSDL file contains XSD imports, these files are provided separately. The WSDL files and
related XSD files must be placed in a directory with no spaces.
4. In the Jar file area, select one of the following available options:
• Select a previously generated jar file for the WSDL file:
Applies if you generated a jar file from a WSDL file previously. Select one of the existing jar files
from the list menu.
– Currency.jar
– Stock.jar
– length.jar
The Package Name field is automatically completed. Select the Edit check box to modify the
package name.
• Provide a package name for the new jar file :
Select this option to create a jar file. Complete the Package Name field for the new jar file. The
package name cannot have a period ".".
Click Next.
5. In the Web Service Invocation-Web Service Name, Port and Method window, select the general web
service information for the following items: Web Services, Web Service Port Type, and Web Service
Method. Click Next.
6. In the Web Services Invocation - Web Service Method parameters window, enter the parameters
that are required by the target web service method. Click Next.
A Complex Type is a composite of another type expand the parameter name to view what information
is required.
For a Collection Type, multiple values are required, the user must first enter a size for the collection
when asked to enter a for the collection type. When OK is pressed, parameter entry fields are
generated for each item in the collection.
7. Optional: In the Web Service Invocation-Web Service EndPoint window, you can edit the URL or
Path to WSDL by selecting the edit check box. To enable web service security, select the Enable web
service security service check box.
Select one of the following authentication types:
• HTTP user name authentication
• SOAP message user name authentication
Add the User name and Password. Click Next.
8. The Web Service Invocation-Summary and Finish window is displayed. It shows the name of the
policy. Click Finish to create the policy.



Creating policies by using policy editor
You can use the policy editor to develop policies.

Procedure
1. Get the latest WSDL file, which must match your target web service.
2. Determine the endpoint of your target running web service.
3. Run the $IMPACT_HOME/bin/nci_compilewsdl script to compile the target WSDL file. Always place the output JAR file in the $IMPACT_HOME/wslib directory. Otherwise, Netcool/Impact is not able to find the JAR file at run time.
4. Use the policy editor to write your policy to make web services calls (see the sketch after this list).
5. Run the policy that you created.
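
The following IPL fragment is a minimal sketch of step 4; the package, class, service, operation, endpoint, and field names are placeholders for the values that come from your own WSDL file and the JAR file that you compiled in step 3.

// Placeholder names only; replace them with the names from your compiled WSDL
WSSetDefaultPKGName("mypackage");
endpoint = "http://myhost:8080/services/MyService";

requestDoc = WSNewObject("com.example.MyRequestDocument");
request = WSNewSubObject(requestDoc, "MyRequest");
request.SomeField = "some value";

params = { requestDoc };
response = WSInvokeDL("MyService", endpoint, "MyOperation", params);
Log("result = " + response);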

Integrating with third-party web services


Sometimes in the development phase you must change your WSDL file and reuse the Netcool/Impact web services wizard for testing purposes. Because the JVM caches the loaded classes, the wizard cannot recognize the latest changes. Use this procedure to clear the cache.

Procedure
1. Stop the Impact Server.
2. Remove the old JAR file from the $IMPACT_HOME/wslib directory.
3. Start the Impact Server.
4. Run the nci_compilewsdl script from IMPACT_HOME/bin to generate the new JAR file in the wslib
directory or alternatively run the web services wizard to generate a new JAR file and a new policy.



Chapter 7. Web services security

The web services DSA has limited support for the Web Services Security standard that is defined by OASIS. The following security features are supported:
• User name token authentication
• User name token authentication with a plain text password
• Message integrity and non-repudiation with signature
• Encryption
• Sign and encrypt messages

Enabling web services security


The web services client supports message-level security that is configurable by using an Axis2 descriptor file or a Rampart policy file. You can also enable HTTP basic authentication (transport-level security) by adding optional properties to the Impact policy.

Procedure
1. Stop all the Impact Servers in the cluster.
2. Complete the following steps for the primary Impact Server. These changes are replicated to the
secondary servers in the cluster.
a. Update the $IMPACT_HOME/dsa/wsdsa/wss/conf/wss.xml file in your Tivoli Netcool/Impact
installation directory to set up security features that are required by your web service calls.
For most cases, you must update two related XML elements, which are OutflowSecurity and
possibly InflowSecurity in your wss.xml file. See Chapter 7, “Web services security,” on page
71 for examples on how to configure the OutflowSecurity and InflowSecurity parameters.
b. Update the $IMPACT_HOME/dsa/wsdsa/wss/conf/wscb.properties file to set up user ID
and password that is required by particular security features. For example, UsernameToken or
Signature. This file has the following format:

num=2
uid.1=client
pwd.1=apache
uid.2=service
pwd.2=apache

where num property defines how many user names and password pairs are included in the file.
uid.1 defines the user name for the first pair, while pwd.1 is the password for the first pair. The
same scheme applies to the consecutive pairs.
3. Complete the following step for each Impact Server in the cluster:
a. If you require web service security features such as signature or encryption, you must upload a
keystore with your private key to $IMPACT_HOME/dsa/wsdsa/wss/conf/. Create a properties
file under $IMPACT_HOME/dsa/wsdsa/wss/conf/ and add the following properties:

org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.ws.security.crypto.merlin.keystore.type=jks
org.apache.ws.security.crypto.merlin.keystore.password=apache
org.apache.ws.security.crypto.merlin.file=client.jks

Where file is the name of the keystore and password is the password used to access the keystore.
See “Sign and encrypt messages” on page 77 for more information.
4. Start the primary Impact Server. Ensure that it starts and initializes successfully.



5. To enable the web services security feature in web services DSA, add the following properties to the
policy.

callProps = NewObject();
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/wss.xml";
callProps.WSSPolicyFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/policy.xml";

Note: The use of WSSConfig has been deprecated. The web service security should be
configured using WSSPolicyFile instead.

6. Optional. You can also configure HTTP basic authentication by adding the following optional properties
into the policy.

callProps.Username="myName";
callProps.Password="myPassword";

7. To use the security features, start the WSInvokeDL() function with an additional callProps object:

result = WSInvokeDL("Sample07", endpoint, "echo", params, callProps)

8. Start the remaining, secondary Impact Servers.

Enable HTTPS for the web service connection


By default, the Axis2 descriptor uses HTTP for the web service connection. To enable HTTPS, update the Axis2 descriptor file.

Procedure
1. Open the WSSConfigFile as defined in the web service policy:

callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/wss.xml";

2. Locate the transportSender and change the name from http to https:

<transportSender class="org.apache.axis2.transport.http.CommonsHTTPTransportSender"
name="https">
<parameter locked="false" name="PROTOCOL">HTTP/1.1</parameter>
<parameter locked="false" name="Transfer-Encoding">chunked</parameter>
</transportSender>

3. Import the certificate chain for the remote endpoint. Alternatively, configure the WSInvokeDL function to automatically trust the endpoint certificates.

User name token authentication


User name tokens are used to validate user names and passwords and determine whether a client is valid
in a particular context.

About this task


Use the following Rampart configuration (wss.xml) to add user name token authentication.

Procedure
1. Open the $IMPACT_HOME/dsa/wsdsa/wss/conf/wss.xml parameters file and configure the
OutflowSecurity parameter.

<parameter name="OutflowSecurity"
<action>
<items>UsernameToken Timestamp</items>

72 Netcool/Impact: DSA Reference Guide


<user>bob</user>
<passwordCallbackClass>com.micromuse.common.util.WSPWCBHandler</
passwordCallbackClass>
</action>
<parameter>

Configure the message properties:


Set <items> to UsernameToken Timestamp.
Set <user> to the username you want to send.
Set <passwordCallbackClass> to com.micromuse.common.util.WSPWCBHandler. Passwords
cannot be declared inside the rampart file but must be loaded with the use of a callback class that
loads the password from wscb.properties.
2. Configure the password callback file (wscb.properties). Create $IMPACT_HOME/dsa/
wsdsa/wss/conf/wscb.properties and add the username and password pairs to the file.

num=1
uid.1=bob
pwd.1=bobPassword

The wscb.properties defines username/password pairs. In this example, uid refers to the
username and pwd is the user's password.
3. If no InflowSecurity is required, then remove the existing InflowSecurity parameter from
$IMPACT_HOME/dsa/wsdsa/wss/conf/wss.xml
4. Restart the Impact server to load the file changes.
5. Open the Web Service policy and set the following properties for the WSInvokeDL function:

callProps = NewObject();
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/wss.xml";
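
To apply these settings, pass the callProps object as the final argument of the WSInvokeDL function, as in the following sketch; the service name, operation, endpoint, and parameters are placeholders from your own policy.

result = WSInvokeDL("MyService", endpoint, "myOperation", params, callProps);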

Results
Outbound messages will be sent with an authentication token and timestamp.

User name token authentication with a plain text password


User name tokens are used to validate user names and passwords and determine whether a client is valid
in a particular context.

About this task


Use the following Rampart configuration (wss.xml) to add user name token authentication as plain text
to messages.

Procedure
1. Open the $IMPACT_HOME/dsa/wsdsa/wss/conf/wss.xml parameters file and configure the
OutflowSecurity parameter.

<parameter name="OutflowSecurity"
<action>
<items>UsernameToken</items>
<user>bob</user>
<passwordCallbackClass>com.micromuse.common.util.WSPWCBHandler</
passwordCallbackClass>
<passwordType>PasswordText</passwordType>
</action>
<parameter>

Configure the message properties:



Set <items> to UsernameToken.
Set <user> to the username you want to send.
Set <passwordCallbackClass> to com.micromuse.common.util.WSPWCBHandler. Passwords
cannot be declared inside the rampart file but must be loaded with the use of a callback class that
loads the password from wscb.properties.
Set <passwordType> to PasswordText to enable plain text passwords. Otherwise, the passwords
are encoded as a digest value.
2. Configure the password callback file (wscb.properties). Create $IMPACT_HOME/dsa/
wsdsa/wss/conf/wscb.properties and add the username and password pairs to the file.

num=1
uid.1=bob
pwd.1=bobPassword

Where num represents the total number of user/password pairs. Each username and password are
represented as numeric pairs with the key uid and pwd.
3. If no InflowSecurity is required, then remove any existing InflowSecurity parameter from
$IMPACT_HOME/dsa/wsdsa/wss/conf/wss.xml
4. Restart the Impact server to load the file changes.
5. Open the Web Service policy and set the following properties for the WSInvokeDL function:

callProps = NewObject();
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/wss.xml";

Results
Outbound messages will be sent with a plain text authentication token.

Message integrity and non-repudiation with signature


Use the following Rampart configuration (wss.xml) to timestamp and sign messages with a private key.

Procedure
1. Upload or create a keystore with the keys to be used for signing to $IMPACT_HOME/dsa/wsdsa/wss/
conf/.
For this example, messages will be signed with the private key under the client alias.
2. Open the $IMPACT_HOME/dsa/wsdsa/wss/conf/wss.xml parameters file and configure the
OutflowSecurity parameter.

<parameter name="OutflowSecurity">
<action>
<items>Timestamp Signature</items>
<user>client</user>
<signaturePropFile>client.properties</signaturePropFile>
<passwordCallbackClass>com.micromuse.common.util.WSPWCBHandler</passwordCallbackClass>
<signatureKeyIdentifier>DirectReference</signatureKeyIdentifier>
</action>
</parameter>

Configure the message properties:


Set <items> to Timestamp Signature.
Set <user> to the alias of the private key used to sign the message.
Set <signaturePropFile> to a properties file which contains the location and credentials of the
keystore.



Set <passwordCallbackClass> to com.micromuse.common.util.WSPWCBHandler. Passwords
cannot be declared inside the rampart file but must be loaded with the use of a callback class.
Set <signatureKeyIdentifier> to DirectReference if the public key and X.509 certificate are
sent together in the request. Refer to [WSS4J] for other possible values.
3. Configure the InflowSecurity parameter.

<parameter name="InflowSecurity">
<action>
<items>Timestamp Signature</items>
<signaturePropFile>client.properties</signaturePropFile>
</action>
</parameter>

Configure the message properties:


Set <items> to Timestamp Signature.
Set <signaturePropFile> to a properties file which contains the location and credentials of the
keystore.
4. Configure the keystore properties file. Create a file called client.properties under
$IMPACT_HOME/dsa/wsdsa/wss/conf/ with the following values:

org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.ws.security.crypto.merlin.keystore.type=jks
org.apache.ws.security.crypto.merlin.keystore.password=apache
org.apache.ws.security.crypto.merlin.file=client.jks

Where file is the location of the keystore from Step 1. The password should be the password used to
access the keystore.
5. Configure the password callback file (wscb.properties). Create $IMPACT_HOME/dsa/
wsdsa/wss/conf/wscb.properties and add the username and password pairs to the file.

num=1
uid.1=client
pwd.1=apache

The wscb.properties defines username/password pairs. In this example, uid refers to the aliases
found in the keystore and pwd is the password used to access the key entry.
6. Restart the Impact server to load the file changes.
7. Open the Web Service policy and set the following properties for the WSInvokeDL function:

callProps = NewObject();
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/wss.xml";

Results
Outbound service messages will be signed using the private key from the keystore.

Encryption
Use the following Rampart configuration (wss.xml) to enable message level encryption for the web
service.

Procedure
1. Upload or create a keystore with the keys to be used for encryption to $IMPACT_HOME/dsa/
wsdsa/wss/conf/.
For this example, messages will be encrypted with the public key under the service alias.



2. Open the $IMPACT_HOME/dsa/wsdsa/wss/conf/wss.xml parameters file and configure the
OutflowSecurity parameter.

<parameter name="OutflowSecurity">
<action>
<items>Encrypt</items>
<encryptionUser>service</encryptionUser>
<encryptionPropFile>client.properties</encryptionPropFile>
</action>
</parameter>

Configure the message properties:


Set <items> to Encrypt to encrypt outbound web service messages.
Set <encryptionUser> to the alias of the public key used to encrypt the message.
Set <encryptionPropFile> to a properties file which contains the location and credentials of the
keystore.
3. Configure the InflowSecurity parameter.

<parameter name="InflowSecurity">
<action>
<items>Encrypt</items>
<passwordCallbackClass>com.micromuse.common.util.WSPWCBHandler</passwordCallbackClass>
<decryptionPropFile>client.properties</decryptionPropFile>
</action>
</parameter>

Configure the message properties:


Set <items> to Encrypt.
Set <passwordCallbackClass> to com.micromuse.common.util.WSPWCBHandler. Passwords
cannot be declared inside the rampart file but must be loaded with the use of a callback class.
Set <decryptionPropFile> to a properties file which contains the location and credentials of the
keystore.
4. Configure the keystore properties file. Create a file called client.properties under
$IMPACT_HOME/dsa/wsdsa/wss/conf/ with the following values:

org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.ws.security.crypto.merlin.keystore.type=jks
org.apache.ws.security.crypto.merlin.keystore.password=apache
org.apache.ws.security.crypto.merlin.file=client.jks

Where file is the location of the keystore from Step 1. The password should be the password used to
access the keystore.
5. Configure the password callback file (wscb.properties). Create $IMPACT_HOME/dsa/
wsdsa/wss/conf/wscb.properties and add the username and password pairs to the file.

num=1
uid.1=service
pwd.1=apache

Where uid.1 is the keystore alias of the private key and pwd.1 is the password used to access the
keystore.
6. Restart the Impact server to load the file changes.
7. Open the Web Service policy and set the following properties for the WSInvokeDL function:

callProps = NewObject();
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/wss.xml";



Results
All outbound web service messages are encrypted by using the public key found in the keystore. The keystore file, as described in the client.properties file, should contain the public key entry for the alias service. For example, you can get the public key of service as an X.509 certificate and import the certificate into your own keystore.

Sign and encrypt messages


Use the following Rampart configuration (wss.xml) to encrypt and sign outbound messages with a
timestamp. Inbound messages will also be timestamped, encrypted and signed.

Procedure
1. Upload or create a keystore with the keys to be used for signing/encryption/decryption to
$IMPACT_HOME/dsa/wsdsa/wss/conf/.
For this example, messages will be encrypted with the public key under the service alias and signed
with the private key under the client alias.
2. Open the $IMPACT_HOME/dsa/wsdsa/wss/conf/wss.xml parameters file and configure the
OutflowSecurity parameter.

<parameter name="OutflowSecurity">
<action>
<items>Timestamp Signature Encrypt</items>
<user>client</user>
<passwordCallbackClass>com.micromuse.common.util.WSPWCBHandler</passwordCallbackClass>
<signaturePropFile>client.properties</signaturePropFile>
<signatureKeyIdentifier>DirectReference</signatureKeyIdentifier>
<encryptionKeyIdentifier>SKIKeyIdentifier</encryptionKeyIdentifier>
<encryptionUser>service</encryptionUser>
</action>
</parameter>

Configure the message properties:


Set <items> to Timestamp Signature Encrypt.
Set <user> to the alias of the private key used to sign the message.
Set <passwordCallbackClass> to com.micromuse.common.util.WSPWCBHandler. Passwords
cannot be declared inside the rampart file but must be loaded with the use of a callback class.
Set <signaturePropFile> to a properties file which contains the location and credentials of the
keystore.
Set <signatureKeyIdentifier> to DirectReference if the public key and X.509 certificate are
sent together in the request. Refer to [WSS4J] for other possible values.
Set <encryptionKeyIdentifier> to SKIKeyIdentifier
Set <encryptionUser> to the alias of the public key used to encrypt the message.
3. Configure the InflowSecurity parameter.

<parameter name="InflowSecurity">
<action>
<items>Timestamp Signature Encrypt</items>
<passwordCallbackClass>com.micromuse.common.util.WSPWCBHandler</passwordCallbackClass>
<signaturePropFile>client.properties</signaturePropFile>
</action>
</parameter>

Configure the message properties:


Set <items> to Timestamp Signature Encrypt.
Set <passwordCallbackClass> to com.micromuse.common.util.WSPWCBHandler.



Set <signaturePropFile> to a properties file which contains the location and credentials of the
keystore.
4. Configure the keystore properties file. Create a file called client.properties under
$IMPACT_HOME/dsa/wsdsa/wss/conf/ with the following values:

org.apache.ws.security.crypto.provider=org.apache.ws.security.components.crypto.Merlin
org.apache.ws.security.crypto.merlin.keystore.type=jks
org.apache.ws.security.crypto.merlin.keystore.password=apache
org.apache.ws.security.crypto.merlin.file=client.jks

Where file is the location of the keystore from Step 1. The password should be the password used to
access the keystore.
5. Configure the password callback file (wscb.properties). Create $IMPACT_HOME/dsa/
wsdsa/wss/conf/wscb.properties and add the username and password pairs to the file.

num=2
uid.1=service
pwd.1=apache
uid.2=client
pwd.2=apache

The wscb.properties defines username/password pairs. In this example, uid refers to the aliases
found in the keystore and pwd is the password used to access the key entry.
6. Restart the Impact server to load the file changes.
7. Open the Web Service policy and set the following properties for the WSInvokeDL function:

callProps = NewObject();
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/wss.xml";

Results
All outbound web service messages service calls will be encrypted using the public key from the keystore
alias service. Outbound messages will also be timestamped and signed using the key from the keystore
alias client. Inbound traffic will be decrypted and its timestamp and signature verified using the same
keystore.

Configure security with a WS-Policy file


Configure web services security using a Rampart policy file instead of an Axis2 descriptor file.

About this task


This example will sign and encrypt messages using a Rampart policy file. The name of the policy is
Sample03.

Procedure
1. Configure Web Service Security as per the “Sign and encrypt messages” on page 77 topic.
The keystore and wscb.properties will be used in a similar manner but using a Rampart policy file.
2. Remove the OutflowSecurity and InflowSecurity parameters from the Axis2 descriptor file
(WSSConfigFile), $IMPACT_HOME/dsa/wsdsa/wss/conf/Sample03_wss.xml.
3. Upload the WS-Policy language file to $IMPACT_HOME/dsa/wsdsa/wss/conf/.
4. Open the WS-Policy file and add a rampart configuration within the body of the WS-Policy.
The following WS-Policy example, policy.xml demonstrates the rampart configuration for signed
and encrypted messages using AsymmetricBinding.



<wsp:Policy wsu:Id="SampleSignatureEncryptionAsymmetricBindingPolicy"
xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd"
xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy">
<wsp:ExactlyOne>
<wsp:All>
<sp:AsymmetricBinding xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
<wsp:Policy>
<sp:InitiatorToken>
<wsp:Policy>
<sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/AlwaysToRecipient">
<wsp:Policy>
<sp:RequireThumbprintReference/>
<sp:WssX509V3Token10/>
</wsp:Policy>
</sp:X509Token>
</wsp:Policy>
</sp:InitiatorToken>
<sp:RecipientToken>
<wsp:Policy>
<sp:X509Token sp:IncludeToken="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy/IncludeToken/Never">
<wsp:Policy>
<sp:RequireThumbprintReference/>
<sp:WssX509V3Token10/>
</wsp:Policy>
</sp:X509Token>
</wsp:Policy>
</sp:RecipientToken>
<sp:AlgorithmSuite>
<wsp:Policy>
<sp:TripleDesRsa15/>
</wsp:Policy>
</sp:AlgorithmSuite>
<sp:Layout>
<wsp:Policy>
<sp:Strict/>
</wsp:Policy>
</sp:Layout>
<sp:IncludeTimestamp/>
<sp:OnlySignEntireHeadersAndBody/>
</wsp:Policy>
</sp:AsymmetricBinding>
<sp:Wss10 xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
<wsp:Policy>
<sp:MustSupportRefKeyIdentifier/>
<sp:MustSupportRefIssuerSerial/>
</wsp:Policy>
</sp:Wss10>
<sp:SignedParts xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
<sp:Body/>
</sp:SignedParts>
<sp:EncryptedParts xmlns:sp="http://schemas.xmlsoap.org/ws/2005/07/securitypolicy">
<sp:Body/>
</sp:EncryptedParts>

<!-- Configure rampart -->


<ramp:RampartConfig xmlns:ramp="http://ws.apache.org/rampart/policy">
<!-- The alias of the key used for signing -->
<ramp:user>client</ramp:user>
<!-- The alias of the key used for encryption -->
<ramp:encryptionUser>service</ramp:encryptionUser>
<!-- Add passwords to wscb.properties -->
<ramp:passwordCallbackClass>com.micromuse.common.util.WSPWCBHandler</ramp:passwordCallbackClass>
<!-- The keystore used for signing -->
<ramp:signatureCrypto>
<ramp:crypto provider="org.apache.ws.security.components.crypto.Merlin">
<ramp:property name="org.apache.ws.security.crypto.merlin.keystore.type">JKS</ramp:property>
<ramp:property name="org.apache.ws.security.crypto.merlin.file">client.jks</ramp:property>
<ramp:property name="org.apache.ws.security.crypto.merlin.keystore.password">apache</ramp:property>
</ramp:crypto>
</ramp:signatureCrypto>
<!-- The keystore used for encryption -->
<ramp:encryptionCypto>
<ramp:crypto provider="org.apache.ws.security.components.crypto.Merlin">
<ramp:property name="org.apache.ws.security.crypto.merlin.keystore.type">JKS</ramp:property>
<ramp:property name="org.apache.ws.security.crypto.merlin.file">client.jks</ramp:property>
<ramp:property name="org.apache.ws.security.crypto.merlin.keystore.password">apache</ramp:property>
</ramp:crypto>
</ramp:encryptionCypto>
</ramp:RampartConfig>
<!-- end rampart configuration -->

</wsp:All>
</wsp:ExactlyOne>
</wsp:Policy>

The user element is the alias of the private signing key.


The encryptionUser element is the alias of the public encryption key.
The passwordCallbackClass element is com.micromuse.common.util.WSPWCBHandler. The
passwords for the keystore entries are configured in the corresponding wscb.properties file.
The signatureCrypto element defines the keystore used for signatures.
The encryptionCypto element defines the keystore used for encryption.
5. Add the keystore file to the web service jar generated by the web service wizard. Upload the keystore
(client.jks) to IMPACT_HOME/wslib/ then use the jar tool to add it to the web service jar.

cd IMPACT_HOME/wslib/
IMPACT_HOME/sdk/bin/jar -uf sample03.jar ./client.jks

6. Restart the Impact server to load the file changes.


7. Open the Impact policy and locate the callProps:

callProps = NewObject();
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/Sample03_wss.xml";

Add the WSSPolicyFile property with the location of the WS-Policy file:

callProps.WSSPolicyFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/policy.xml";

Worked example
This example shows how to set up a stand-alone Apache Axis2 Rampart server with a Netcool/Impact policy to enable Web Service Security.

Before you begin


For information about the Apache Axis2 Rampart security module, see http://axis.apache.org/axis2/java/rampart/
• Java SDK or JRE 1.6 or later is required. You can use the Impact SDK (IMPACT_HOME/sdk/bin) if you are installing Axis2/Rampart on the same system where Netcool/Impact is installed.
• Ant 1.8 or later is required. You can use the Netcool/Impact Ant package (IMPACT_HOME/ant) if you are installing Axis2/Rampart on the same system where Netcool/Impact is installed.
• Make sure that the Java and Ant executable files are in the system PATH environment variable.

About this task


This example uses Apache Axis2 version 1.7.0 and rampart version 1.7.0.

Procedure
1. Set up Rampart as a stand-alone server.



a. Download the Axis2 binary package from the following URL: http://axis.apache.org/axis2/java/core/download.cgi
b. Download the Rampart binary package from the following URL: https://axis.apache.org/axis2/java/rampart/download.html
c. Unpack both packages. Set the environmental variables:

AXIS2_HOME=<where axis2 package was unpacked>


RAMPART_HOME=<where rampart package was unpacked>

d. Copy all the JAR files from RAMPART_HOME/lib to AXIS2_HOME/lib:

cp –rf $RAMPART_HOME/lib/* $AXIS2_HOME/lib/

e. Copy the Rampart MAR files from $RAMPART_HOME/modules/ to $AXIS2_HOME/repository/modules:

cp $RAMPART_HOME/modules/rahas-1.7.0.mar $AXIS2_HOME/repository/modules/
cp $RAMPART_HOME/modules/rampart-1.7.0.mar $AXIS2_HOME/repository/modules/

f. Change to the RAMPART_HOME/samples/policy directory.

cd $RAMPART_HOME/samples/policy

g. Run the following command to build sample03 application and start the stand-alone server.

ant clean service.03

The command creates all the necessary files and starts a stand-alone application for sample03.
The port number is displayed in the terminal.

Buildfile: /home/netcool/apache/rampart-1.7.0/samples/policy/build.xml
[echo] AXIS2_HOME=/home/netcool/apache/axis2-1.7.9/lib
[echo] /home/netcool/apache/axis2-1.7.9/lib

clean:
[delete] Deleting directory /home/netcool/apache/rampart-1.7.0/samples/policy/build

check.dependency:

service.03:
[mkdir] Created dir: /home/netcool/apache/rampart-1.7.0/samples/policy/build/
service_repositories/sample03
[mkdir] Created dir: /home/netcool/apache/rampart-1.7.0/samples/policy/build/
service_repositories/sample03/services
[mkdir] Created dir: /home/netcool/apache/rampart-1.7.0/samples/policy/build/
service_repositories/sample03/modules
[copy] Copying 3 files to /home/netcool/apache/rampart-1.7.0/samples/policy/build/
service_repositories/sample03/modules
[mkdir] Created dir: /home/netcool/apache/rampart-1.7.0/samples/policy/build/temp
[mkdir] Created dir: /home/netcool/apache/rampart-1.7.0/samples/policy/build/temp/
META-INF
[javac] /home/netcool/apache/rampart-1.7.0/samples/policy/build.xml:170: warning:
'includeantruntime' was not set, defaulting to build.sysclasspath=last; set to false for
repeatable builds
[javac] Compiling 2 source files to /home/netcool/apache/rampart-1.7.0/samples/policy/
build/temp
[copy] Copying 1 file to /home/netcool/apache/rampart-1.7.0/samples/policy/build/
temp/META-INF
[copy] Copying 1 file to /home/netcool/apache/rampart-1.7.0/samples/policy/build/temp
[copy] Copying 1 file to /home/netcool/apache/rampart-1.7.0/samples/policy/build/temp
[copy] Copying 1 file to /home/netcool/apache/rampart-1.7.0/samples/policy/build/temp
[jar] Building jar: /home/netcool/apache/rampart-1.7.0/samples/policy/build/
service_repositories/sample03/services/sample03.aar
[delete] Deleting directory /home/netcool/apache/rampart-1.7.0/samples/policy/build/
temp
[java] [SimpleHTTPServer] Starting
[java] [SimpleHTTPServer] Using the Axis2 Repository /home/netcool/apache/
rampart-1.7.0/samples/policy/build/service_repositories/sample03
[java] [SimpleHTTPServer] Listening on port 8080
[java] [INFO] Deploying module: addressing-1.7.9
- file:/home/netcool/apache/rampart-1.7.0/samples/policy/build/service_repositories/
sample03/modules/addressing-1.7.9.mar

[java] [INFO] Deploying module: rahas-1.7.0 - file:/home/netcool/apache/
rampart-1.7.0/samples/policy/build/service_repositories/sample03/modules/rahas-1.7.0.mar
[java] [INFO] Deploying module: rampart-1.7.0 - file:/home/netcool/apache/
rampart-1.7.0/samples/policy/build/service_repositories/sample03/modules/rampart-1.7.0.mar
[java] [INFO] Deploying Web service: sample03.aar - file:/home/netcool/apache/
rampart-1.7.0/samples/policy/build/service_repositories/sample03/services/sample03.aar
[java] [INFO] Listening on port 8080
[java] [SimpleHTTPServer] Started

The output shows that the server is running on port 8080.


h. Verify the application by going to http://server:8080/axis2/services and viewing the
following output.

Deployed services

sample03

Available operations
-echo

Click the sample03 link to view the WSDL file. The full link to the WSDL file is
http://server:8080/axis2/services/sample03?wsdl. Make a note of the URL address.
Now that the service is up and running, the next step is to create a policy in Netcool/Impact that will
access the endpoint.
2. Create a Netcool/Impact policy and configure Web Service Security.
a. Create a new policy with the web services wizard. When prompted for the WSDL path, enter the
URL from Step 1h, http://server:8080/axis2/services/sample03?wsdl. For the package
name, enter sample03.
If the web service security requires a username/password, you can configure this in Step 5 of the
wizard under the Web Service Security section. For this example, the endpoint does not require
user authentication.
b. The wizard creates the policy with the following code:

//This policy generated by Impact Wizard.

//This policy is based on wsdl file at http://server:8080/axis2/services/sample03?wsdl

log("Start policy 'sample03'...");


//Specify package name as defined when compiling WSDL in Impact
WSSetDefaultPKGName('sample03');

//Specify parameters
EchoDocument=WSNewObject("org.apache.rampart.samples.policy.sample03.EchoDocument");
_Echo=WSNewSubObject(EchoDocument,"Echo");

_Args0 = 'Hello from Impact Server';


_Echo['Args0'] = _Args0;

WSParams = {EchoDocument};

//Specify web service name, end point and method


WSService = 'Sample03';
WSEndPoint = 'http://server:8080/axis2/services/sample03.sample03HttpSoap11Endpoint/';
WSMethod = 'echo';

//Enable web service security


callProps = NewObject();
callProps.LogSoapMessages=true;
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/Sample03_wss.xml";
log("About to invoke Web Service call echo ......");

WSInvokeDLResult = WSInvokeDL(WSService, WSEndPoint, WSMethod, WSParams, callProps);


log("Web Service call echo return result: " +WSInvokeDLResult);

c. To load the WS-Policy file, you must set the WSSPolicyFile property. Locate the callProps
section:

callProps = NewObject();
callProps.LogSoapMessages=true;
callProps.EnableWSS = true;
callProps.WSSRepository= "/opt/IBM/tivoli/impact/dsa/wsdsa/wss";
callProps.WSSConfigFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/Sample03_wss.xml";

Add the following line:

callProps.WSSPolicyFile = "/opt/IBM/tivoli/impact/dsa/wsdsa/wss/conf/policy.xml";

d. The web service wizard also compiles the WSDL file into a JAR file, IMPACT_HOME/wslib/
sample03.jar.

3. After creating the policy, you must update the Axis2 descriptor file. A Rampart WS-Policy file is
used instead of the Axis2 descriptor file.
a. Open the file IMPACT_HOME/dsa/wsdsa/wss/conf/Sample03_wss.xml. The XML file is
generated from the wss.xml.template file and must be customized to suit the requirements
of the sample03 endpoint. Locate and delete the OutflowSecurity and InflowSecurity
parameters.
4. Upload the Rampart policy file to Impact and update it to load passwords from wscb.properties.
a. Copy the Rampart sample policy file from RAMPART_HOME/samples/policy/sample01/
policy.xml and upload it to IMPACT_HOME/dsa/wsdsa/wss/conf
b. Open IMPACT_HOME/dsa/wsdsa/wss/conf/policy.xml and locate the
passwordCallbackClass entry:

<ramp:passwordCallbackClass>org.apache.rampart.samples.policy.sample03.PWCBHandler</
ramp:passwordCallbackClass>

Change the value to com.micromuse.common.util.WSPWCBHandler:

<ramp:passwordCallbackClass>com.micromuse.common.util.WSPWCBHandler</
ramp:passwordCallbackClass>

c. Open the file IMPACT_HOME/dsa/wsdsa/wss/conf/wscb.properties and add the username


and password pairs to it:

num=2
uid.1=service
pwd.1=apache
uid.2=client
pwd.2=apache

5. Add the Rampart keystore to Impact's classpath.


a. Copy the Rampart sample keystore from RAMPART_HOME/samples/keys/client.jks and
upload it to IMPACT_HOME/wslib/
b. Use the jar tool to add the keystore to the web service jar previously compiled by the web service
wizard IMPACT_HOME/wslib/sample03.jar.

cd /opt/IBM/tivoli/impact/wslib
/opt/IBM/tivoli/impact/sdk/bin/jar -uf sample03.jar ./client.jks

6. Restart the Impact server.


7. Run the Web Policy.

Results
The policy log will print the following entry:

Parser log: Web Service call echo return result: <ns:echoResponse


xmlns:ns="http://sample03.policy.samples.rampart.apache.org" xmlns:soapenv="http://www.w3.org/
2003/05/soap-envelope">
<ns:return>Hello from Impact Server</ns:return>
</ns:echoResponse>



Chapter 8. Working with the JMS DSA
You can use the Java Message Service (JMS) data source adapter (DSA) to send and receive JMS
messages from within a policy.
The JMS DSA is installed automatically when you install Netcool/Impact.
For detailed information about connecting WebSphere MQ and JMS DSA, see “Connecting to WebSphere
MQ and JMS DSA” on page 97.

Supported JMS providers


Before you can use the Java Message Service (JMS) data source adapter (DSA) to send and retrieve JMS
messages, you must obtain the correct set of JMS client libraries.
The JMS client libraries are third-party software components that provide the function that is required to
connect to the JMS and JNDI providers in your environment. These libraries are Java JAR files that are
distributed with each JMS application. The JMS DSA is compatible with JMS providers that fully support
the JMS 1.1 specification.
For more information about JMS 1.1, see the Oracle Java website at http://www.oracle.com/technetwork/
java/docs-136352.html. Some supported JMS providers include OpenJMS 0.7.7; BEA WebLogic 8.1;
Oracle Java System Application Server 8 and later; and WebSphere MQ. For information about connecting
WebSphere MQ to a JMS DSA, see “Connecting to WebSphere MQ and JMS DSA” on page 97.

Configuring JMS DSAs to send and receive JMS messages


You must complete the configuration steps before you can use the Java Message Service (JMS) data
source adapter (DSA) to send and retrieve JMS messages.

Procedure
1. Obtain and install the required JMS client libraries.
2. Copy the client JAR files from the JMS client installation directory to the $IMPACT_HOME/dsalib
directory.
3. Restart the Impact Server.
4. Create a JMS data source, and configure it for the JMS source.
For more information about creating a JMS data source, see “JMS data source” on page 86.
5. Handle the incoming JMS messages.
You can handle the incoming JMS messages by using any of these approaches:
• Write JMS policies that use the JMS data source, and the JMS functions.
For more information, see “Writing JMS DSA policies” on page 90.
• Configure the JMSListener service to send JMS events to a policy.
If you use the JMSListener to send JMS messages to your policy, you do not have to use the
ReceiveJMSMessage function to receive them. For more information, see “Handling incoming
messages from a JMS message listener” on page 96.



Setting up OpenJMS as the JMS provider
You can set up OpenJMS as the Java Message Service (JMS) provider for Netcool/Impact.

Procedure
1. Obtain the OpenJMS libraries from the OpenJMS website at http://openjms.sourceforge.net/.
2. To install OpenJMS, follow the procedure in the getting started information that is available on the
OpenJMS website.
3. Copy the OpenJMS client JAR files to the $IMPACT_HOME/dsalib directory.
You can find the OpenJMS client JAR files in the lib subdirectory in the OpenJMS installation
directory.
4. Restart the Impact Server.
5. To start the OpenJMS server, use the startup script that is in the bin subdirectory in the OpenJMS
installation directory.
6. Create a JMS data source, and configure it for OpenJMS.
For more information, see “JMS data source” on page 86.

JMS data source


A Java Message Service (JMS) data source abstracts the information that is required to connect to a JMS
Implementation.
This data source is used by the JMSMessageListener service, the SendJMSMessage, and
ReceiveJMSMessage functions.

JMS data source configuration properties


You can configure the properties for the Java Message Service (JMS) data source.

Table 24. General settings for the JMS data source window
Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.



Table 25. Source settings for the JMS data source window
Window element Description
JNDI Factory Initial Enter the name of the JNDI initial context factory.
The JNDI initial context factory is a Java object
that is managed by the JNDI provider in your
environment. The JNDI provider is the component
that manages the connections and destinations for
JMS.
OpenJMS, BEA WebLogic, and Sun Java
Application Server distribute a JNDI provider as
part of their JMS implementations. The required
value for this field varies by JMS implementation.
For OpenJMS, the value of the property is

org.exolab.jms.jndi.InitialContextFactory

For other JMS implementations, see the related


product documentation.

JNDI Provider URL Enter the JNDI provider URL. The JNDI provider
URL is the network location of the JNDI provider.
The required value for this field varies by JMS
implementation. For OpenJMS, the default value
of this property is tcp://hostname:3035, where
hostname is the name of the system on which
OpenJMS is running. The network protocol, TCP or
RMI, must be specified in the URL string. For other
JMS implementations, see the related product
documentation.

JNDI URL Packages Enter the Java package prefix for the JNDI context
factory class. For OpenJMS, BEA WebLogic, and
Sun Java Application Server, you are not required
to enter a value in this field.

JMS Connection Factory Name Enter the name of the JMS connection factory
object. The JMS connection factory object is
a Java object that is responsible for creating
new connections to the messaging system.
The connection factory is a managed object
that is administered by the JMS provider. For
example, if the provider is BEA WebLogic, the
connection factory object is defined, instantiated,
and controlled by that application. For the
name of the connection factory object for your
JMS implementation, see the related product
documentation.

JMS Destination Name Enter the name of a JMS topic or queue, which is
the name of the remote topic or queue where the
JMS message listener listens for new messages.



Table 25. Source settings for the JMS data source window (continued)
Window element Description
JMS Connection User Name Enter a JMS user name. If the JMS provider
requires a user name to listen to remote
destinations for messages, enter the user name in
this field. JMS user accounts are controlled by the
JMS provider.

JMS Connection Password If the JMS provider requires a password to listen


to remote destinations for messages, enter the
password in this field.

Test Connection Test the connection to the JMS Implementation.


If the test is successful, the system shows the
following message:
JMS: Connection OK

Specifying more JNDI properties for the JMS data source


You can specify more Java Naming and Directory Interface (JNDI) properties by editing the Java Message
Service (JMS) data source.

Procedure
1. Open the JMS data source for editing in a text editor of your choice.
You can find all data sources in the $IMPACT_HOME/etc/ directory. The data source file name
is <servername>_<datasourceName>.ds. <servername> is the name of the Impact Server
instance, and <datasourceName> is the name of your JMS data source as displayed in the data
source editor in GUI.
2. Add your JNDI properties in the following format:

<datasourceName>.JMS.DSPROPERTY.#.NAME=<property>
<datasourceName>.JMS.DSPROPERTY.#.VALUE=<property value>

# is the property number in a sequence of properties; the starting number is 1. For example:

<datasourceName>.JMS.DSPROPERTY.1.NAME=java.naming.factory.initial
<datasourceName>.JMS.DSPROPERTY.1.VALUE=org.exolab.jms.jndi.InitialContextFactory
<datasourceName>.JMS.DSPROPERTY.2.NAME=java.naming.provider.url
<datasourceName>.JMS.DSPROPERTY.2.VALUE=tcp://jndi_host:3035
<datasourceName>.JMS.DSPROPERTY.3.NAME=java.naming.security.principal
<datasourceName>.JMS.DSPROPERTY.3.VALUE=User1
<datasourceName>.JMS.DSPROPERTY.4.NAME=java.naming.security.credentials
<datasourceName>.JMS.DSPROPERTY.4.VALUE=password
<datasourceName>.JMS.NUMDSPROPERTIES=4

The <datasourceName>.JMS.NUMDSPROPERTIES=<number of properties> property specifies
the number of additional properties, 4 in the previous example.
Note: Use the $IMPACT_HOME/bin/nci_crypt utility to encrypt the value of the
java.naming.security.credentials property.
3. Save the changes in the data source, and restart the Impact Server to apply the changes.



JMS message listener
The Java Message Service (JMS) message listener service runs a policy in response to incoming messages
that are sent by external JMS message providers.
The message provider can be any other system or application that can send JMS messages. Each JMS
message listener listens to a single JMS topic or queue. There is one default JMS message listener named
JMSMessageListener. You can create as many listener services as you need, each of which listens to a
different topic or queue.
A JMS message listener is only required when you want Netcool/Impact to listen passively for incoming
messages that originate with JMS message producers in your environment. You can actively send and
retrieve messages from within a policy without using a JMS message listener.

JMS message listener service configuration properties


You can configure the properties for the Java Message Service (JMS) listener service.

Table 26. JMSMessageListener Service configuration window


Window element Description

Service name Enter a unique name to identify the service.

Policy To Execute Select the policy that you created to run in response to incoming
messages from the JMS service.

JMS Data Source JMS data source to use with the service.
You need an existing and valid JMS data source for the
JMS Message Listener service to establish a connection with
the JMS implementation and to receive messages. For more
information about creating JMS data sources, see “JMS data
source configuration properties” on page 86.

Message Selector The message selector is a filter string that defines which
messages cause Netcool/Impact to run the policy specified in the
service configuration. You must use the JMS message selector
syntax to specify this string. Message selector strings are similar in
syntax to the contents of an SQL WHERE clause, where message
properties replace the field names that you might use in an SQL
statement.
The content of the message selector depends on the types and
content of messages that you anticipate receiving with the JMS
message listener. For more information about message selectors,
see the JMS specification or the documentation distributed with
your JMS implementation. The message selector is an optional
property.



Table 26. JMSMessageListener Service configuration window (continued)
Window element Description

Durable Subscription: Enable You can configure the JMS message listener service to use
durable subscriptions for topics that allow the service to receive
messages when it does not have an active connection to the
JMS implementation. A durable subscription can have only one
active subscriber at a time. Only a JMS topic can have durable
subscriptions.
Note: Since a durable connection can have only one active
subscriber at a time, in a cluster configuration during failover and
failback, a delay/pause can be configured. The delay/pause allows
the service to shut down on the other cluster members during
failover/failback.
The delay/pause is configured in the jmslistener properties
file using the durablejmspause property, for example:
impact.<jmslistenerservicename>.durablejmspause=30000.
The durablejmspause property defines the time in
milliseconds, so a value of 30000 defines a pause of 30
seconds.

Client ID Client ID for durable subscription. It defines the client identifier


value for the connection. It must be unique in the JMS
Implementation.

Subscription Name Subscription Name for durable subscription. Uniquely identifies


the subscription made from the JMS message listener to the JMS
Implementation. If this property is not set, the name of JMS DSA
listener service itself is used as its durable subscription name,
which is JMSMessageListener by default.

Clear Queue Clear the messages waiting in the JMSMessageListener queue that
have not yet been picked up by the EventProcessor service. It is
recommended not to do this while the service is running.

Starts automatically when server starts Select to automatically start the service when the server
starts. You can also start and stop the service from the GUI.

Service log (Write to file) Select to write log information to a file.

Writing JMS DSA policies


Java Message Service (JMS) policies send or retrieve JMS messages.
JMS policies use the SendJMSMessage and ReceiveJMSMessage functions, or work with the JMS
message listener service.
In a policy, you use the JMS DSA to perform the following tasks:
• Send messages to a JMS topic or queue
• Retrieve messages from a JMS topic
• Queue or handle incoming messages from a JMS message listener



Sending messages to a JMS topic or queue
You can send messages to a Java Message Service (JMS) topic or queue from within a policy.

Procedure
1. Create and configure a JMS data source
For more information, see “JMS data source” on page 86.
2. Create a message properties context.
For more information, see “Message properties context” on page 91.
3. Create a message body string or context.
For more information, see “Creating a message body string or context” on page 93.
4. Call the SendJMSMessage function and pass the values the JMS data source, the message properties
context, and the specified message body as runtime parameters.
For more information about the syntax of the SendJMSMessage function, see “SendJMSMessage” on
page 91.

SendJMSMessage
The SendJMSMessage function sends a message to the specified destination by using the Java Message
Service (JMS) DSA.
To send the message, you call the SendJMSMessage function and pass the JMS data source, a message
properties context, and the message body as input parameters.

Syntax
The SendJMSMessage function has the following syntax:

SendJMSMessage(DataSource, MethodCallProperties, Message)

Parameters
The SendJMSMessage function has the following parameters.

Table 27. SendJMSMessage function parameters

Parameter Format Description

DataSource String Valid and existing JMS data source.

MethodCallProperties Context Context that contains the message header and other
JMS properties for the message. Custom message
properties are supported.

Message String | Context String or context that contains the body of the
message.

Message properties context


The message properties context specifies runtime parameters for the underlying Java Message Service
(JMS) client method call that retrieves the message when you call the ReceiveJMSMessage function.
You pass this context as a runtime parameter when you call the SendJMSMessage function in a policy.
This message properties context specifies the message header, custom message properties, and the
message selector. The table shows the valid JMS message header values.



Table 28. JMS Message Header Values

Property Description

DeliveryMode Optional. Specifies the JMS delivery mode for the method.
Possible values are PERSISTENT and NON_PERSISTENT.

DisableMessageId Optional. Specifies whether JMS message IDs are disabled.

DisableMessageTimeStamp Optional. Specifies whether JMS message time stamps are


disabled.

JMSCorrelationID Optional. Specifies a JMS correlation ID for the message.

JMSCorrelationIDAsBytes Optional. Specifies a JMS correlation ID for the message as an


array of bytes.

JMSDeliveryMode Optional. Specifies a JMS delivery mode. Possible values are 0 for
persistent mode and 1 for non-persistent mode.

JMSDestination Optional. Specifies a destination for the message in the form of a


JMS-administered object.

JMSExpiration Optional. Specifies an expiration value in milliseconds for the


message. If not specified, value is set by the JMS provider.

JMSMessageID Optional. Specifies a JMS message ID for the message.

JMSPriority Optional. Specifies a JMS priority level for the message. JMS
supports priority levels from 0 to 9, with 9 as the highest.

JMSRedelivered Optional. Specifies whether the message is being redelivered.


Possible values are True or False.

JMSReplyTo Optional. Specifies the name of a JMS destination where replies to


this message are sent.

JMSTimeStamp Optional. Specifies a time stamp for the message in seconds since
the beginning of the UNIX epoch. If not specified, the value is set
by the JMS provider.

JMSType Optional. Specifies a JMS message type for the message. Some
JMS implementations use a message repository to store defined
types of messages. You can use this header value to associate a
particular message with a message type.

Priority Same as JMSPriority.

TimeToLive Optional. Specifies the length of time that a message is retained by


the JMS delivery system before it expires. The default value is 0,
which indicates an unlimited message lifetime.

For more information about the JMS message header, see the documentation that was provided with your
JMS application.



Optionally, you can also specify custom message properties. These properties are user-defined and can
contain any value. Generally, these properties are used to send meta information about messages that is
not otherwise described in the message header.
The following example shows how to create and populate a message properties context:

// Call NewObject to create the new context


MsgProps = NewObject();

// Assign message header values as member variables


MsgProps.TimeToLive = 0;
MsgProps.Priority = 5;
MsgProps.DeliveryMode = "PERSISTENT";

// Assign custom message properties as member variables


MsgProps.Custom1 = "First custom property";
MsgProps.Custom2 = "Second custom property";

Creating a message body string or context


You specify the message body by using a string value or a context, depending on whether you want to
send a text message or a map message.
To specify the body of a text message, you use a string assignment statement in the policy. When you call
SendJMSMessage, you pass this string to the function as a runtime parameter. This example shows how
to assign the body of a text message to a string:

MsgTextBody = "Body content of text message";

To specify the body of a map message, you create a context by using the NewObject function. You assign
one member variable for each name-value pair in the map, where the name of the variable corresponds to
the name for the pair. When you call SendJMSMessage, you pass this context to the function as a runtime
parameter.
This example shows how to create a message body context for a map message. In this example, the
names of values in the map are name, location, and email.

MsgMapBody = NewObject();

MsgMapBody.name = "John Smith";


MsgMapBody.location = "New York City";
MsgMapBody.email = "[email protected]";

Example of sending a map message to a JMS destination


The following example shows how to send a map message to a Java Message Service (JMS) destination
by using the SendJMSMessage function.

// Set JMSDataSource to a valid and existing JMSDataSource in Impact.


// The destination where the message is sent is obtained from the JMSDataSource.
JMSDataSource = "JMSDS1";

// Create a message properties object and populate its


// member variables with message header properties and custom properties
MsgProps = NewObject();
MsgProps.TimeToLive = 0;
MsgProps.color = "green";
MsgProps.Expiration = 2000;
MsgProps.DeliveryMode = "PERSISTENT";
MsgProps.ReplyTo="queue2";

// Specify custom message properties


MsgProps.Custom1 = "Value 1";
MsgProps.Custom2 = "Value 2";

// Create a map message content and populate its member


// variables where each variable and value represent a name/
// value pair for the resulting map
MsgMapBody = NewObject();
MsgMapBody.name = "sanjay";

MsgMapBody.location = "New York City";
MsgMapBody.facility = "Wall St.";

// Call SendJMSMessage and pass the JMS data source,
// the message properties context, and the message map context
SendJMSMessage(JMSDataSource, MsgProps, MsgMapBody);

Example of sending a text message to a JMS destination


This example shows how to send a text message to a Java Message Service (JMS) destination by using the
SendJMSMessage function.

// Set JMSDataSource to a valid and existing JMSDataSource in Impact.


// The destination where the message is sent is obtained from the JMSDataSource.
JMSDataSource = "JMSDS1";

// Create a message properties object and populate its


// member variables with message header properties and custom properties
MsgProps = NewObject();
MsgProps.TimeToLive = 0;
MsgProps.color = "green";
MsgProps.Expiration = 2000;
MsgProps.DeliveryMode = "PERSISTENT";
MsgProps.ReplyTo="queue2";

// Specify custom message properties


MsgProps.Custom1 = "Value 1";
MsgProps.Custom2 = "Value 2";

// Create a text message content


MsgTextBody = "This is the message body";

// Call SendJMSMessage and pass the JMS data source,
// the message properties context, and the message text body
SendJMSMessage(JMSDataSource, MsgProps, MsgTextBody);

Retrieving JMS messages from a topic or queue


You can retrieve messages from a Java Message Service (JMS) topic or queue from within a policy.

Procedure
1. Create and configure a JMS data source
For more information, see “JMS data source” on page 86.
2. Create a message properties context.
For more information, see “Creating a message properties context” on page 95.
3. Call the ReceiveJMSMessage function and pass the values of the JMS data source, and the message
properties context as parameters.
For examples of the ReceiveJMSMessage function usage, see “ReceiveJMSMessage” on page 94.
4. Handle the retrieved message
For more information, see “Handling a retrieved message” on page 95.

ReceiveJMSMessage
The ReceiveJMSMessage function retrieves a message from the specified Java Message Service (JMS)
destination.
To retrieve the message, you call this function and pass a JMS data source, and a message properties
context as input parameters.



Syntax
The ReceiveJMSMessage function has the following syntax:

ReceiveJMSMessage(DataSource, MethodCallProperties)

Parameters
The ReceiveJMSMessage function has the following parameters:

Table 29. ReceiveJMSMessage function parameters

Parameter Format Description

DataSource String Existing, and valid JMS data source.

MethodCallProperties Context Context that contains optional MessageSelector and


Timeout.

Creating a message properties context


The message properties context specifies connection information for the underlying Java Message
Service (JMS) client method call that retrieves the message when you call the ReceiveJMSMessage
function.
You pass this context as a parameter when you call the ReceiveJMSMessage function in a policy. The
following table shows the properties that you can set in the message properties context:

Table 30. Message Properties Context

Property Description

MessageSelector String expression that specifies which message in the topic or queue
you want to retrieve. The message selector syntax is similar to
the contents of an SQL WHERE clause and is defined in the JMS
specification.

Timeout Specifies how long, in milliseconds, the message consumer blocks
while waiting to receive a message. Default value is 0, which makes
the consumer wait indefinitely.

You can create an empty message properties context by passing the NewObject function to the
ReceiveJMSMessage as a parameter.
The following example shows how to create a message properties context.

// Call NewObject to create the next context


MsgProps = NewObject();

// Assign a message selector that filters the message to


// retrieve

MsgProps.MessageSelector = "color = 'green' AND custom2 = '1234543'";

Handling a retrieved message


The ReceiveJMSMessage function uses three variables to store message information that is retrieved
from a Java Message Service (JMS) topic or queue.
Table 1 shows the built-in variables that store the message information:



Table 31. Built-in Message Variables

Variable Description

JMSMessage JMS message body. If the message is a text message, the value
of this variable is a string. If the message is a map message, the
value of this variable is a context where each member variable in the
context corresponds to a name-value pair in the message map.

MessageType If the message is a text message, the value of this variable is a string
"Text". If the message is a map message, the value of this variable is
a string "Map".

JMSProperties Custom JMS message properties that are attached to the message.

This example shows how to handle a retrieved message:

// Call ReceiveJMSMessage and pass the JMS data source and
// message properties context as runtime parameters
ReceiveJMSMessage(JMSDataSource, MsgProps);

// Print the contents of the message to the policy log


Log("Message type: " + MessageType);
Log("Message properties: " + JMSProperties.Custom1);
Log("Message properties: " + JMSProperties.Custom2);

If (MessageType == "Text") {
Log("Message body: " + JMSMessage);
} Else {
Log("Message map value 1: " + JMSMessage.MyValue1);
Log("Message map value 2: " + JMSMessage.MyValue2);
}

Handling incoming messages from a JMS message listener


When a Java Message Service (JMS) message listener receives a message from a JMS destination, it
compares the contents of the message to message selectors specified in its configuration.
If the message matches the message selector, or if no selector is specified, the JMS message listener
puts the message in its queue. The EventProcessor service picks up the message, and sends it to the
policy as an EventContainer.
The JMS message listener uses the same message variables that are used with the
ReceiveJMSMessage function - JMSMessage, MessageType, and JMSProperties - to pass the message
information to the policy.
For more information about these variables, see “Handling a retrieved message” on page 95.
When you handle these variables as set by a JMS message listener, you must reference them by
using the @ notation in an IPL policy, or the dot notation in a JavaScript policy, for example,
EventContainer.MessageType.
This example shows how to handle an incoming message from a JMS message listener by using the @
notation.

// Print the contents of the message to the policy log


Log("Message type: " + @MessageType);
Log("Message properties: " + @JMSProperties.Custom1);
Log("Message properties: " + @JMSProperties.Custom2);

If (@MessageType == "Text") {
Log("Message body: " + @JMSMessage);
} Else {
Log("Message map value 1: " + @JMSMessage.MyValue1);
Log("Message map value 2: " + @JMSMessage.MyValue2);
}
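
In a JavaScript policy, the same fields are read from the EventContainer object with dot notation. The
following minimal sketch assumes the same custom properties and map keys as the example above:

// Print the contents of the message to the policy log (JavaScript)
Log("Message type: " + EventContainer.MessageType);
Log("Message properties: " + EventContainer.JMSProperties.Custom1);
Log("Message properties: " + EventContainer.JMSProperties.Custom2);

if (EventContainer.MessageType == "Text") {
    Log("Message body: " + EventContainer.JMSMessage);
} else {
    Log("Message map value 1: " + EventContainer.JMSMessage.MyValue1);
    Log("Message map value 2: " + EventContainer.JMSMessage.MyValue2);
}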



Example of receiving a map message
This example shows how to use the ReceiveJMSMessage function to receive a map message.
The example uses the map message that was used in “Example of sending a map message to a JMS
destination” on page 93.

// Use an existing and valid JMSDataSource


JMSDataSource = "JMSDS1";

// Create a message properties object and populate its


// member variables with optional parameters like MessageSelector and Timeout
MsgProps = NewObject();
// MessageSelector is used for filtering incoming messages so that messages
// with properties matching the MessageSelector expression are delivered.
MsgProps.MessageSelector = "color = 'green' AND Custom2 = 'Value 2'";

// Timeout must be specified in milliseconds. This parameter specifies how long the
// MessageConsumer blocks to receive a Message. A Timeout of zero makes the
// MessageConsumer wait indefinitely to receive a message.
MsgProps.Timeout = 6000;

// Call ReceiveJMSMessage and pass the JMSDataSource and message properties


ReceiveJMSMessage(JMSDataSource, MsgProps);

// Print the contents of the message to the policy log


Log("Message type: " +MessageType);
Log("Message prop.Custom1: " + JMSProperties.Custom1);
Log("Message prop.Custom2: " + JMSProperties.Custom2);
If (MessageType == "Text") {
Log("Message body: " + JMSMessage);
} Else {
Log("Message map.name: " + JMSMessage.name);
Log("Message map.location: " + JMSMessage.location);
}

The If (MessageType == "Text") statement checks whether the message is a text message
and, if it is, prints the message body to the log.

Connecting to WebSphere MQ and JMS DSA


Netcool/Impact can communicate with WebSphere MQ 7 through the JMS data source.
There are two possible configuration options:
• Option 1: WebSphere MQ client and server and Netcool/Impact all on one machine.
• Option 2: WebSphere MQ client and Netcool/Impact on one machine and WebSphere MQ server on a
separate machine.

Configuration option 1
How to configure the WebSphere MQ client and server and Netcool/Impact all on one machine.

Procedure
1. Configure WebSphere MQ server.
For information, see http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp.
2. Copy the following JAR files from MQ_HOME/java/lib to IMPACT_HOME/dsalib.
• com.ibm.mq.jmqi.jar
• com.ibm.mqjms.jar
• com.ibm.mq.headers.jar
• fscontext.jar
• providerutil.jar
3. Restart the Impact Server so that it picks up the new JAR files.



4. The Netcool/Impact JMS DSA will have the following parameters:
For more information about configuring JMS DSA, see JMS data source configuration properties.
• JNDI Factory initial: com.sun.jndi.fscontext.RefFSContextFactory
• JNDI Provider URL: file:/<PathToBindingDirOnMQClient> for example, file:/C:/
MQClientBindings.
• JMS Connection Factory Name as configured on the WebSphere MQ Server.
• JMS Destination Name as configured on the WebSphere MQ Server.

Configuration option 2
How to connect the WebSphere MQ client and Netcool/Impact on one machine and WebSphere MQ server
on a separate machine.

Procedure
1. Copy the following JAR files from MQ_HOME/java/lib to IMPACT_HOME/dsalib.
• com.ibm.mq.jmqi.jar
• com.ibm.mqjms.jar
• com.ibm.mq.headers.jar
• fscontext.jar
• providerutil.jar
• dhbcore.jar
2. Restart the Impact Server so that it picks up the new JAR files.
3. You must configure the MQSeries® server to use a file system-based JNDI provider. WebSphere MQ
then uses the local file system as a JNDI registry when it registers the JMS resources accessed by the
DSA.
4. You must use "client" mode in your connection factory on the WebSphere MQ server to ensure that the
WebSphere MQ Client communicates through tcp with the WebSphere MQ Server.
For more information, see http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp.
5. Ensure that you have a listener that is configured on your preferred port on the WebSphere MQ server.
6. Create the queues and destinations that you need as specified by the WebSphere MQ documentation.
For information, see http://publib.boulder.ibm.com/infocenter/wmqv7/v7r0/index.jsp.
7. Copy the bindings directory, which is configured on the WebSphere MQ server, to the local file system
of the WebSphere MQ client.
8. The Netcool/Impact JMS DSA will have the following parameters:
For more information about configuring JMS DSA, see JMS data source configuration properties.
• JNDI Factory initial: com.sun.jndi.fscontext.RefFSContextFactory
• JNDI Provider URL: file:/<PathToBindingDirOnMQClient> for example, file:/C:/
MQClientBindings.
• JMS Connection Factory Name as configured on the WebSphere MQ Server.
• JMS Destination Name as configured on the WebSphere MQ Server.
9. If there are any configuration changes on the WebSphere MQ Server, you must repeat step 7
to ensure that the WebSphere MQ Client picks up the configuration changes.



Connecting Netcool/Impact to WebSphere Business Events
About this task
Tip: When installing Netcool/Impact and WebSphere Business Events on the same computer, install
WebSphere Business Events first and make sure it is running before installing Netcool/Impact.
The Netcool/Impact integration with WebSphere Business Events uses the predefined XML file-based
project Netcool_Impact_Integration in $IMPACT_HOME/integrations/wbe. The integration also uses
a predefined connection factory that is called ImpactTopicConnectionFactory and the following two
topics:
The jms/impactTopic topic receives events from Netcool/Impact.
The jms/wbeTopic topic sends events to Netcool/Impact.
Complete the following steps to configure WebSphere Business Events. For more information about
the steps, see the WebSphere Business Events documentation at http://pic.dhe.ibm.com/infocenter/wbevents/
v7r0m1/index.jsp.

Procedure
1. Create a connection factory called ImpactTopicConnectionFactory.
2. Create two new topics that are called jms/impactTopic and jms/wbeTopic.
3. Using the WebSphere Business Events Data Design application, load the project XML file
$IMPACT_HOME/integrations/wbe/Netcool_Impact_Integration.xml. Or you might want to
save the project XML file to the Server Store and create Business Events to send and test events to
the two topics.

Configure Netcool/Impact for WebSphere Business Events integration


To configure Netcool/Impact for WebSphere Business Events integration you must copy JAR files from
WebSphere Business Events into Netcool/Impact.

Procedure
1. Within <WBE HOME>/WAS/runtime copy the two JAR files
com.ibm.ws.sib.client.thin.jms_<version>.jar and com.ibm.ws.ejb.thinclient_<version>.jar, where
<version> is 7.0.0 for WebSphere Business Events version 7.0.1. Paste the two JAR files into
<IMPACT_HOME>/dsalib.
2. Within <IMPACT_HOME>/wlp/user/shared/config/features.xml, comment or remove the
following line:

<feature>wasJmsClient-1.1</feature>

3. Restart the Netcool/Impact server.

Using the WebSphere Business Events integration


The WBE project, provided as an XML format file, is included in Netcool/Impact in
<IMPACT_HOME>/integrations/wbe. The WBE project has pre-configured policies, data sources, and a JMS listener.

Data sources
You must update the data sources with the WebSphere Business Events host name and port details.
• The SendToWBE data source is configured to connect to the jms/impactTopic destination topic.
• The ReceiveFromWBE data source is configured to connect to the jms/wbeTopic destination topic.



Policies
• The WBEPolicy policy includes two library functions that are called the SendEventToWBE function and
the ParseWBEMessage function.
• The WBESend policy uses the SendEventToWBE function to accept a Netcool/Impact object and
create a JMS message. It uses the configured SendToWBE data source to send the JMS message to
WebSphere Business Events. The JMS message, or event, consists of a few fields from the ObjectServer:

MsgObject = NewObject();
MsgObject.Identifier='Test Id';
MsgObject.Node='Test Node';
MsgObject.AlertKey='Key 5';
MsgObject.AlertGroup='Group 5';
MsgObject.Serial=5;
MsgObject.Severity =5;
MsgObject.AdditionalField="Test";
WBEPolicy.SendEventToWBE(MsgObject);

• The WBEReceive policy is attached to the WBEJMSMessageListener listener. The policy uses the
ParseWBEMessage function to receive a JMS message, or event, from WebSphere Business Events
through the listener and converts it to a Netcool/Impact object.

JMS Listener
The JMS listener WBEJMSMessageListener receives messages from jms/wbeTopic and runs the
WBEReceive policy.
The WBE project includes two Touchpoints that are called Netcool Impact Event and Netcool Impact
Action.
• The Netcool Impact Event listens on the jms/impactTopic destination topic for the Netcool/Impact
event over the JMS bus and creates an intermediate object called Netcool Omnibus Event.
• The Netcool Impact Action sends data to Netcool/Impact over the JMS bus through the jms/wbeTopic
destination topic.

Note: The topics and connection factory are optional. Complete the following steps, if you want to use the
existing topics and connection factory.
1. Edit the data sources in the WBE project to reflect the connection factory and topics.
2. Using any text/XML editor, update the Netcool_Impact_Integration XML file in IMPACT_HOME/
integrations/wbe directory to replace the topics and connection factory with the existing
information.

Integrating JMS/TIBCO over SSL


The integration of JMS/TIBCO in SSL requires additional configuration properties.

To integrate JMS/TIBCO over SSL, add the following properties into the $IMPACT_HOME/wlp/usr/
servers/<ServerName>/jvm.options file in Netcool/Impact:

-Dcom.ibm.jsse2.overrideDefaultTLS=true
-Djdk.tls.client.protocols=TLSv1.2
-Dhttps.protocols=TLSv1.2

When you have added these properties into jvm.options you must restart the Impact Server for the
changes to take effect.
Warning: Some older infrastructures may not work with later versions of the JDK. If this is the case, you
have the following two options to make TIBCO/JMS work in SSL mode:



• Option 1: Re-enable 3DES ciphers by modifying the jdk.tls.disabledAlgorithms property
and removing DESede and 3DES_EDE_CBC from the list. This is not ideal because of the security
implications, but it may be necessary when working with older infrastructures.
• Option 2: Upgrade the server to use a modern encryption algorithm.



Chapter 9. Working with the Apache Kafka DSA
You can use the Apache Kafka data source adapter (DSA) to send Kafka messages from within a policy. It
can also be used to read messages from Kafka topics.

The Apache Kafka DSA is installed automatically when you install Netcool/Impact. Third-party Apache
Kafka libraries are shipped in JAR files with Netcool/Impact.

Kafka data source


A Kafka data source abstracts the information that is required to connect to a Kafka Server.
This data source is used by a KafkaMessageListener service and the SendKafkaMessage policy
function.
You specify the Kafka data source configuration details using the Data Model UI. You can specify
additional Kafka configuration details using a properties file.

Kafka data source configuration settings


You can configure the settings for the Kafka data source using the UI.

Table 32. General settings for the Kafka data source window
Window element Description

Data Source Name Enter a unique name to identify the data source.
You can use only letters, numbers, and the
underscore character in the data source name.
If you use UTF-8 characters, make sure that the
locale on the Impact Server where the data source
is saved is set to the UTF-8 character encoding.

Table 33. Source settings for the Kafka data source window
Window element Description
Hostname Hostname of the Kafka server to which you want to
connect.

Port Port that the Kafka server is listening on.

Group ID Name of the group that the consumer joins to read


messages from a given topic or topics.
Consumers within the same group share
messages. If the same messages need to be read
by multiple listeners, those listeners should have
different group IDs.



Table 33. Source settings for the Kafka data source window (continued)
Window element Description
Authentication Method Select the authentication method to use.
The options are None and SASL. See “Setting up
SASL” on page 107.
Note: The following three settings (SASL
Mechanism, Username and Password) are
only applicable if you select SASL as the
Authentication Method. If you select SASL for
Authentication Method, you must specify a SASL
Mechanism but Username and Password are
optional.

SASL Mechanism Select the SASL mechanism to use.


Kafka uses SASL to perform authentication.
Current supported SASL mechanisms are PLAIN
and SCRAM.
Note: If you select SASL for the Authentication
Method, you must specify a SASL Mechanism.

Username This property is only applicable if you select SASL


for the Authentication Method.
If the Kafka provider requires a username to listen
to for messages, enter the username in this field.
Kafka user accounts are controlled by the Kafka
provider.

Password This property is only applicable if you select SASL


for the Authentication Method.
If the Kafka provider requires a password to listen
to remote destinations for messages, enter the
password in this field.

Load from Kafka Props File Check this box to specify that additional Kafka
configuration details should be imported from a
properties file. See “Kafka configuration properties
file” on page 104.

Test Connection Test the connection to the Kafka Server. If the


test is successful, the system shows the following
message:
Connection OK

Kafka configuration properties file


You can specify additional Kafka configuration details (for example, for SSL connectivity) by using a
properties file.
If you want to specify Kafka configuration details you must create a properties file in the etc directory
with the following name format:
<SERVER>_kafka_<Data_Source_Name>.props



Where <SERVER> is the name of the server on which the data source is located and
<Data_Source_Name> is the name of the data source.
For example: NCI_kafka_MyDataSource.props
Note: When specifying Kafka configuration details, you must tick the Load from Kafka Props File box on
the DataModel UI. See “Kafka data source configuration settings” on page 103.
Any properties listed in a loaded Kafka properties file take precedence over those specified in the UI.
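
For illustration, a properties file for a data source named MyDataSource on the server NCI might contain
entries such as the following sketch; the host names and values are placeholders, and the individual
property names are described in the next section:

# Contents of etc/NCI_kafka_MyDataSource.props
bootstrap.servers=kafkahost1.example.com:9092,kafkahost2.example.com:9092
enable.auto.commit=true
max.poll.records=500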

Kafka configuration properties


Broker settings
bootstrap.servers
Type: list.
List of host/port pairs to use for establishing the initial connection to the Kafka cluster.
Consumer settings
key.deserializer
Type: class.
Deserializer class for key that implements the
org.apache.kafka.common.serialization.Deserializer interface.
value.deserializer
Type: class.
Deserializer class for value that implements the
org.apache.kafka.common.serialization.Deserializer interface.
auto.offset.reset
Type: string.
Specifies what to do when there is no initial offset in Kafka or if the current offset does not exist any more
on the server.
enable.auto.commit
Type: boolean.
If set to true, the consumer's offset will be periodically committed in the background.
max.poll.records
Type: int.
The maximum number of records returned in a single call to poll().
max.partition.fetch.bytes
Type: int.
The maximum amount of data the server should return for a fetch request.
Producer settings
key.deserializer
Type: class.
Deserializer class for key that implements the
org.apache.kafka.common.serialization.Deserializer interface.
value.deserializer
Type: class.



Deserializer class for value that implements the
org.apache.kafka.common.serialization.Deserializer interface.
acks
Type: string.
The number of acknowledgments the producer requires the leader to have received before considering a
request complete. This controls the durability of records that are sent.
retries
Type: int.
Setting a value greater than zero causes the client to resend any record whose send fails with a potentially
transient error.
batch.size
Type: int.
The producer attempts to batch records together into fewer requests whenever multiple records are being
sent to the same partition. This configuration controls the default batch size in bytes.
linger.ms
Type: int.
Specifies that a small amount of artificial delay is added before sending out records to reduce the number
of requests sent.
buffer.memory
Type: int.
The total bytes of memory the producer can use to buffer records waiting to be sent to the server.
Security settings
security.protocol
Type: string.
Protocol used to communicate with brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT,
SASL_SSL.
ssl.enabled.protocols
Type: list.
List of protocols that have been enabled for SSL connections.
ssl.keystore.type
Type: string.
The file format of the key store file.
ssl.keystore.location
Type: string.
The location of the key store file.
ssl.keystore.password
Type: string.
The store password for the key store file.
ssl.key.password
Type: string.
The password of the private key in the key store file.



ssl.truststore.type
Type: string.
The file format of the trust store file.
ssl.truststore.location
Type: string.
The location of the trust store file.
ssl.truststore.password
Type: string.
The password for the trust store file.

Setting up SASL
Netcool/Impact supports SASL as an authentication method.
The SASL authentication information entered using the Kafka data source configuration settings panel
in the UI is used to construct the following properties for SASL:
• sasl.mechanism
• sasl.jaas.config
Note: The actual properties file always takes precedence over settings from the UI.
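
For reference, an equivalent SASL configuration expressed directly in the Kafka properties file might look
like the following sketch for the PLAIN mechanism; the user name and password are placeholders:

security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="impactUser" password="impactPassword";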

Setting up a Kafka data source with SSL


Connections to a Kafka data source can be made using SSL. This is true for both None and SASL
Authentication methods.
The Kafka properties required for the SSL connection are set using the Kafka properties file.
See “Kafka configuration properties file” on page 104.
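
For example, a minimal sketch of the SSL-related entries in the properties file might look like the
following; the file locations, password, and protocol version are placeholders that you replace with your
own values:

security.protocol=SSL
ssl.enabled.protocols=TLSv1.2
ssl.truststore.type=JKS
ssl.truststore.location=/opt/IBM/tivoli/impact/etc/security/kafka-truststore.jks
ssl.truststore.password=changeit

If the Kafka broker also requires client authentication, set the ssl.keystore.type,
ssl.keystore.location, ssl.keystore.password, and ssl.key.password properties in the same
way.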

Kafka message listener


The Kafka message listener service runs a policy in response to incoming messages that are sent by
external Kafka message providers.
Each Kafka message listener can listen to one or more Kafka topics. You can create as many listener
services as you need.
A Kafka message listener is only required when you want Netcool/Impact to listen passively for incoming
messages that originate with Kafka message producers in your environment.
Topics in Kafka
To read data from Kafka, Impact uses a KafkaConsumer to subscribe to Kafka topics and receive
messages from those topics.
A Kafka message is made up of key-value pairs. The deserializer setting tells Impact how to read the
key and the value.
A standard list of built-in deserializers is provided with Impact. You can add to these by editing the
file etc/kafka.serializers and placing the associated new JAR file into the lib3p directory,
and then restarting the Impact backend server. On cluster environments this must be done on all cluster
members.
Note: For reasons of persistence, when Impact is configured on OCP, you should add any new JAR files to
dsalib.
If you set a filter, the listener sends for processing only those messages whose key contains the
specified filter.



Deserializer
Messages in a Kafka topic are stored as a serialized key and value pair. When a message is retrieved from
the topic, both the key and value must be deserialized before it can be processed further by a policy.
The Kafka listener offers several in-built deserializers for String, Long, Float, Double, Byte, Short
and List types. The message key and message value can be configured to use separate deserializer
types.

Listener properties
impact.kafka.autoreconnect
This can be set at server level in the etc/<SERVER>_server.props file.
The type is Boolean.
The default is true.
Controls whether the Kafka listener will try to reconnect to a Kafka server if the server goes down.
Note: If the server setting is false, this can be overridden on individual listeners by setting
autoreconnect for the listener.
For example:

impact.kafkamessagelistener.autoreconnect=true

Changing listener properties file settings requires an Impact restart.


kafka.autoreconnect.pollinterval
Related to impact.kafka.autoreconnect, this setting specifies how frequently (in milliseconds) the
listener should try to reconnect.
This can be set at server level in the etc/<SERVER>_server.props file.
The type is Integer.
The default is 120000.
Note: If the server setting for impact.kafka.autoreconnect is false, the listener's reconnect
interval can be overridden on individual listeners by setting autoreconnect.pollinterval for the
listener.
For example:

impact.kafkamessagelistener.autoreconnect.pollinterval=300000

Changing listener properties file settings requires an Impact restart.

Sample policy for a Kafka message listener


The following JavaScript policy can be used in a Kafka message Listener to process the read records. The
policy accesses the Kafka fields in the EventContainer and prints the data to the policy log:

// Read Kafka Record and print to the Policy Log

Log("* New Kafka Message from Topic " + EventContainer.KafkaTopic + "*");


Log("");
Log("Fields with Meta Data for the Record:");
Log("KafkaKey :" + EventContainer.KafkaKey);
Log("KafkaValue :" + EventContainer.KafkaValue);
Log("KafkaRawMessage :" + EventContainer.KafkaRawMessage); // The entire ConsumerRecord
Log("");
Log("Fields with Meta Data for the Record");
Log("KafkaPartition :" + EventContainer.KafkaPartition);
Log("KafkaLeaderEpoch :" + EventContainer.KafkaLeaderEpoch);
Log("KafkaOffset :" + EventContainer.KafkaOffset);



Log("KafkaCreateTime :" + EventContainer.KafkaCreateTime);
Log("KafkaSerializedKeySize :" + EventContainer.KafkaSerializedKeySize);
Log("KafkaSerializedValueSize:" + EventContainer.KafkaSerializedValueSize);

Log("*****************************************************************");

Writing Kafka DSA policies to send messages to a Kafka topic


Kafka DSA policies can send Kafka messages to a Kafka topic.

SendKafkaMessage
To send a Kafka message, you call the SendKafkaMessage function and pass the Kafka data source, the
name of the Kafka topic, the key of the Kafka message, the value of the message itself, and any additional
Kafka properties as required.

Syntax
The SendKafkaMessage function has the following syntax:

SendKafkaMessage(DataSource, Topic, Key, Value, Config)

Parameters
The SendKafkaMessage function has the following parameters.

Table 34. SendKafkaMessage function parameters

Parameter    Format    Description

DataSource   String    Name of the Kafka data source to send the message to.

Topic        String    Name of the Kafka topic to send the message to.

Key          Object    Key of the Kafka message.

Value        Object    Value of the Kafka message.

Config       Object    Object containing any extra request information that
                       you want to add to the request.
                       The object must contain name-value pairs. Valid
                       variables are:
                       • Properties: Any extra Kafka properties to be
                         added to, or overridden from, the data model.
                       • Headers: Any extra Kafka headers to be added to,
                         or overridden from, the data model.

Sample policy
The following sample policy uses the SendKafkaMessage function to send a Kafka message to a topic:

// Set KafkaDataSource to a valid and existing KafkaDataSource in Impact.


// The destination where the message is sent is obtained from the KafkaDataSource.

// Parameters
// 0. Data Source



// 1. Topic
// 2. Key
// 3. Value
// 4. Config

KafkaDataSource = "MyKafkaDataSource";
KafkaTopic= "Tickets";
KafkaKey= "myKey";

// Create a text message content


MsgTextBody = "This is the message body " + GetDate();

// Create a message properties object and populate its


// member variables with message header properties and custom properties
Config=NewObject();
Config.Properties=NewObject();
Config.Headers=NewObject();

Config.Properties['myFirstProperty'] = 'hello';
Config.Headers['Content-Type'] = 'application/json; charset=UTF-8';

// Call SendKafkaMessage
SendKafkaMessage(KafkaDataSource, KafkaTopic, KafkaKey, MsgTextBody,Config);
Log("SendKafkaMessage done.");



Chapter 10. Working with the XML DSA
The XML DSA is a data source adaptor that is used to read and to extract data from any well-formed XML
document.

XML DSA overview


The XML DSA is used to read and extract data from any XML document.
The XML DSA can read XML data from files, strings, and HTTP servers by way of the network (XML over
HTTP). The Xerces DOMParser 2.6 parser is used for the XML DSA.
The XML DSA is installed with Netcool/Impact so you do not need to complete any additional installation
or configuration steps.
Before you can use the XML DSA, you must complete the following tasks:
• Create a set of XML data types that corresponds to the structure of the XML document you want to
read with Netcool/Impact. For more information about creating XML data types, see “Creating XML data
types” on page 113.
• Set up XML data type mappings that show the relationship between an XML data source, an XML
document, and XML data types.
• Write one or more XML DSA policies that read XML data from a file, a string or from an HTTP server over
a network.

XML documents
The DSA considers an XML document to be any well-formed set of XML data that descends from a single
root element.
This document can be in a string, a text file, or on an HTTP server.

XML DTD and XSD files


XML DTD and XSD files contain a document type description.
You must provide an XML DTD or XSD file for each type of XML document that you want the DSA to read.
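
For example, a minimal DTD for the simple alert document used later in this chapter (an illustrative
sketch, not a file shipped with the product) could look like this:

<!ELEMENT alert (node, summary)>
<!ELEMENT node (#PCDATA)>
<!ELEMENT summary (#PCDATA)>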

XML data types


XML data types are Netcool/Impact data types that represent XML documents and their contents.
The DSA uses the following XML data types: super data types and element data types.

Super data types


Super data types represent types of XML documents. The DSA uses one data type for each type of
document that it reads.
A super data type contains a single data item, called the document data item. This data item represents
the instance of the document that the DSA uses. The display name of the document data item is the same
as the name of the super data type. The document data item contains a static link to the element data
item that represents the root level element of the document.



Element data types
Element data types represent the elements in an XML document. The DSA requires one such data type for
each type of XML element.
An element data type contains one field for each attribute in the corresponding XML element. In addition,
the data type contains a field that corresponds to the PCDATA value of the element, if any.
Element data types contain one or more data items, called element data items. Each such data item
represents an instance of the element in the document.
The hierarchical relationship between XML elements is represented at the data item level by static links.
Element data items are statically linked in such a way that each data item contains links to other data
items. The other data items represent the child elements of the corresponding element in the XML
document.
Element namespaces are a convention that is used by the DSA to show that a set of element data types is
related to a single XML document. The DSA uses element namespaces to avoid ambiguity in cases where
more than one type of XML document that is used by the DSA has an element of the same name.

XML configuration files


XML configuration files are text files that store mapping information for XML data types.
The DSA reads the configuration files at startup and uses the information during run time to locate the
DTD or XSD file and data source for each XML document. For more information about XML configuration
files, see “Data type mappings” on page 114.

XML document and data type mapping


The XML DSA provides mapping between an XML document and a set of data types.
The DSA uses the information in an XML DTD or XSD file to understand the structure of the XML data and
to map the data to the corresponding data types.
One aspect of the structure of the XML data is the hierarchical relationship between XML elements.
The DSA uses static links to map this relationship to the XML data types. Each element data item is
linked to its logical child data item by a static link. When you read the XML data in a policy, you use the
GetByLinks function to traverse the resulting structure. You can also use the embedded linking syntax
to traverse the structure.
This example shows a partial XML document, and the linking relationship between the corresponding
element data items.

<XML_alert id="0123456789">
<XML_head>
<XML_sender>IBM</XML_sender>
<XML_subject>Alert</XML_subject>
</XML_head>
<XML_body>
<XML_node>NodeXYZ</XML_node>
<XML_summary>Node not responding</XML_summary>
</XML_body>
</XML_alert>

This figure shows the linking relationship between the corresponding element data items:



Figure 1. Linking relationships between corresponding element data items

Creating XML data types


You must create XML data types to represent the structure of the XML document that you want to read
with Netcool/Impact.
To create the XML data types, you run either the create DTD types script or the create XSD types script,
depending on which type of schema you are using. The create types script creates a super data type
and then reads the XML DTD or XSD file. The create types script creates one element data type for each
type of element that is defined in the file, including the root level element. The script uses the names of
the elements in the DTD or XSD file as the names of the element data types. If you specify an element
namespace, add a prefix to the name of each element data type. The script then uses the command-line
service to insert data types into Netcool/Impact. For more information, see “Create data types scripts” on
page 113.
You can also use the XML DSA wizard to automate creating XML data types. For more information about
XML DSA wizards, see Policy wizards in the online help.
Important: If you create an XML data type in a server cluster, either by using the wizard or the script,
cluster members are updated with the new .type files. The following configuration files are not updated:
• XmlHttpTypes
• XmlFileTypes
The $IMPACT_HOME/dsa/XmlDsa directory is replicated during startup of the secondary cluster members
from the primary server. If you are using the XML DSA wizard, or using the scripts provided, the changes
replicate in real time.

Create data types scripts


The XML DSA provides two scripts that you can run from the command line to create XML data types.
You can find these scripts in the $IMPACT_HOME/dsa/XmlDsa/bin directory. You use the
CreateDtdTypes script to create data types from an XML DTD. The script has the following syntax:

CreateDtdTypes server_name user password dtdFile type_name namespace_prefix



You use the CreateXsdTypes script to create data types from an XML XSD. The script has the following
syntax:

CreateXsdTypes server_name user password xsdFile type_name namespace_prefix

Table 35 on page 114 explains the options that are used with the scripts.

Table 35. Create data type scripts options

Option Description

server_name The name of the Impact Server.

user The name of the Impact Server user.

password The Impact Server user's password.

dtdFile The path and file name of the XML DTD file that describes the XML document.
Relative to the $IMPACT_HOME/dsa/XmlDsa/bin directory.

xsdFile The path and file name of the XML XSD file that describes the XML document.
Relative to the $IMPACT_HOME/dsa/XmlDsa/bin directory.

type_name The name of the resulting super data type.

namespace_prefix The optional prefix added to the names of element data types. This string is
not prefixed to the name of the super data type.

The CreateDtdTypes and CreateXsdTypes scripts replace any colon character in XML element names
with an underscore when you create the data types. For example, if a DTD file contains an element
named netcool:alert, the create DTD types script creates a corresponding element data type named
netcool_alert.
Important: The Impact Server must be up for these scripts to run successfully.
Here is an example of the CreateDtdTypes script usage on UNIX:

./CreateDtdTypes.sh NCI impactadmin netcool ../TOC.dtd XmlStringTOC STEST_
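
The CreateXsdTypes script is run in the same way, with an XSD file in place of the DTD. For example:

./CreateXsdTypes.sh NCI impactadmin netcool ../TOC.xsd XmlXsdFileTOC XSDFTEST_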

Data type mappings


After you create the XML data types, you must set up data type mappings.
A data type mapping is a set of information that shows the relationship between an XML data source,
an XML document, and XML data types. The DSA uses this information to map the contents of an XML
document to the data types in Netcool/Impact. You must set up one data type mapping for each type of
XML document you want Netcool/Impact to read.
Data type mapping information is stored in XML configuration files. The DSA uses the following XML
configuration files:
• XmlFileTypes
• XmlHttpTypes
These files are in the $IMPACT_HOME/dsa/XmlDsa directory.
Note: After you edit these mapping files, you must restart the Impact Server.



Setting up mappings for XML files and strings
For each XML string or file that you want the DSA to read, you must add the mapping information to the
XmlFileTypes file.
Add the following mapping information:
• Name of the super data type
• Path and file name of the corresponding XML DTD/XSD file
• Path and file name of the corresponding XML file (XML files only)
• Namespace prefix that is used for the element data types (optional)
You use the following format to specify mapping information:

XmlDsa.fileTypes.n.property=value

where n is a numeric value that identifies the mapping, property is the name of the mapping property, and
value is the value.
Table 36 shows the mapping properties in the XmlFileTypes file.

Table 36. XmlFileTypes mapping properties

Property Description

typeName Specifies the name of the corresponding super data type.

dtdFile Specifies the path and file name of a corresponding XML DTD or XSD file. The path
can be an absolute path or a path relative to the $IMPACT_HOME directory.

isXsd Boolean variable that specifies whether the schema is defined in XSD or DTD
format. If it is not specified, the default is DTD format.

xmlFile Specifies the path and file name of the corresponding file for XML files. The path is
relative to the $IMPACT_HOME directory. For XML strings, use the hyphen character
as a placeholder.

prefix Specifies the namespace prefix that is used to identify the corresponding element
data types. This property is optional.

This example shows a set of mapping properties for an XML document that is contained in a file.

XmlDsa.fileTypes.1.typeName XML_file_superType
XmlDsa.fileTypes.1.dtdFile dsa/XmlDsa/file.dtd
XmlDsa.fileTypes.1.xmlFile dsa/XmlDsa/file.xml
XmlDsa.fileTypes.1.prefix XML_

This example shows a set of mapping properties for an XML document that is contained in a string.

XmlDsa.fileTypes.2.typeName XML_string_superType
XmlDsa.fileTypes.2.dtdFile dsa/XmlDsa/string.dtd
XmlDsa.fileTypes.2.xmlFile -
XmlDsa.fileTypes.2.prefix XML_

Note: This example uses the hyphen character (-) for the xmlFile property.



The following example shows a set of mapping properties that uses the XmlFileTOC data type with isXsd set
to true. The namespace prefix is FTEST_. This prefix is added to all element data types that are part of
the XML file.

XmlDsa.fileTypes.1.typeName XmlFileTOC
XmlDsa.fileTypes.1.dtdFile dsa/XmlDsa/TOC.xsd
XmlDsa.fileTypes.1.xmlFile dsa/XmlDsa/TOC.xml
XmlDsa.fileTypes.1.prefix FTEST_
XmlDsa.fileTypes.1.isXsd true

Setting up mappings for XML over HTTP


For each XML document that you want the DSA to read over HTTP, you must add the mapping information
to the XmlHttpTypes file.
Add the following mapping information:
• Name of the super data type.
• Base URL for the HTTP server.
• User name, password, and authentication realm (optional). This information is only required if the XML
document is in a password-protected area of the HTTP server.
• Namespace prefix that is used for the element data types (optional).
You use the following format to specify mapping information:

XmlDsa.httpTypes.n.property=value

where n is a numeric value that identifies the mapping, property is the name of the mapping property,
and value is the value.
Table 37 shows the mapping properties in the XmlHttpTypes file.

Table 37. XmlHttpTypes mapping properties

Property Description

typeName Name of the corresponding super data type.

dtdFile Path and file name of the corresponding XML DTD file. Can be an absolute
path, or relative to the $IMPACT_HOME directory.

xsdFile Path and file name of a corresponding XML XSD file. Can be an absolute path,
or relative to the $IMPACT_HOME directory. Used only if the XML schema is an
XSD.

isXsd This Boolean variable specifies whether the schema is defined in XSD or DTD
format. Default is DTD, if not specified.

url Base URL for the HTTP server. The base URL includes the server host name,
and the path where the script or executable file that provides the XML data
is located. You do not need to specify the trailing slash in the base URL.
This URL is combined with the contents of the FilePath parameter to form
the complete URL when you retrieve the XML data in a policy.

user User name valid under HTTP server authentication (optional).

password Password valid under HTTP server authentication (optional).

realm Authentication realm on the HTTP server (optional).



Table 37. XmlHttpTypes mapping properties (continued)

Property Description

prefix Namespace prefix that is used to identify the corresponding element data
types (optional).

connectionsPerHost Number of connections per host. The default is 2. (Optional)

This example shows a set of mapping properties for XML data that is provided by an HTTP server.

XmlDsa.httpTypes.1.typeName XML_http_superType
XmlDsa.httpTypes.1.dtdFile dsa/XmlDsa/http.dtd
XmlDsa.httpTypes.1.url http://localhost:9080/cgi-bin
XmlDsa.httpTypes.1.user jsmith
XmlDsa.httpTypes.1.password pwd
XmlDsa.httpTypes.1.realm primary
XmlDsa.httpTypes.1.connectionsPerHost 5

Reading XML documents


You can read XML documents from within a policy.

Procedure
1. Retrieve the document data item.
You retrieve the data item by calling the GetByFilter function and passing the name of the super
data type and a filter string.
2. Retrieve the root level element data item.
To retrieve the root level element data item, use the GetByLinks function.
3. Retrieve the child element data item.
To retrieve child element data items, you can use successive calls to the GetByLinks function or you
can use the embedded linking syntax.
4. Access attribute values.
To access an element data item's attribute values, reference the corresponding data type fields.

Retrieving the document data item


You retrieve the data item by calling the GetByFilter function and passing the name of the super data
type and a filter string.
The content of the filter string varies depending on whether the data source is an XML string, XML file, or
XML data that is located on an HTTP server.
For XML strings, the filter is the entire XML string that you want to read. For XML files, the filter is an empty
string. For XML over HTTP, the filter string is an expression that specifies the method to use in retrieving
the XML data and the path to a script or executable file that provides the data on the HTTP server. For
more information, see “XML over HTTP” on page 118.
This example shows how to retrieve the document data item that is associated with an XML string, where
the corresponding super type is named XML_string_SuperType:

// Call GetByFilter and pass the name of the super type


// and the filter string

Type = "XML_string_superType";
Filter = "<alert><node>Node1234</node><summary>
Node not responding</summary></alert>";



CountOnly = False;
DocDataItem = GetByFilter(Type, Filter, CountOnly);

This example shows how to retrieve the document data item that is associated with an XML file, where the
corresponding super type is named XML_file_superType:

// Call GetByFilter and pass the name of the super type
// and the filter string

Type = "XML_file_superType";
Filter = "";
CountOnly = False;
DocDataItem = GetByFilter(Type, Filter, CountOnly);

XML over HTTP


For XML over HTTP, the filter string is an expression that specifies the method to use in retrieving the XML
data and the path to a script or executable file that provides the data on the HTTP server.
The XML DSA uses either the GET or POST method to retrieve the XML data. For example:

Operation = 'method' AND FilePath = 'path'

Where method is either GET or POST and path is the location of the script or executable relative to
the base URL. You specify the base URL when you set the mapping information for the document in the
XmlHttpTypes file.
Note: The FilePath specification can include query string values. You can retrieve XML documents from
the HTTP server that are dynamically created depending on values that are sent by Netcool/Impact as
part of the HTTP request.
This example shows how to use an HTTP GET request to retrieve the document data item that is
associated with XML data. In this example, the name of the super data type is XML_http_superType
and the location of the script that provides the XML data is getXMLdoc.pl.

// Call GetByFilter and pass the name of the super type
// and the filter string

Type = "XML_http_superType";
Filter = "Operation = 'GET' AND FilePath = 'getXMLdoc.pl?node=NodeXYZ'";
CountOnly = False;
DocDataItem = GetByFilter(Type, Filter, CountOnly);
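
A POST request uses the same filter syntax, with the Operation value set to POST. The following is a
minimal sketch that assumes the same getXMLdoc.pl script also accepts POST requests:

// Call GetByFilter and pass the name of the super type
// and a filter string that uses the POST method

Type = "XML_http_superType";
Filter = "Operation = 'POST' AND FilePath = 'getXMLdoc.pl'";
CountOnly = False;
DocDataItem = GetByFilter(Type, Filter, CountOnly);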

Retrieving the root level element data item


To retrieve the root level element data item, use the GetByLinks function.
When you call GetByLinks, you must pass the name of the root level element data type, an empty filter
string, and the document data item.
This example shows how to use GetByLinks to retrieve the root level element data item.

// Call GetByLinks and pass the name of the root level element data type,
// an empty filter string, and the document data item

DataTypes = {"XML_alert"};
Filter = "";
MaxNum = "10000";
DataItems = DocDataItem;
RootDataItem = GetByLinks(DataTypes, Filter, MaxNum, DataItems);

Retrieving child element data items


To retrieve child element data items, you can use successive calls to the GetByLinks function or you can
use the embedded linking syntax.
This example shows how to use the linking syntax to retrieve the first child element data item that is
linked to the root level element data item. Where the data type of the child data item is XML_body.

ChildNode = RootDataItem[0].links.XML_body.first;

This example shows how to retrieve an array that contains all child element data items that are linked to
the root level element data item.

ChildNodes = RootDataItem[0].links.XML_body.array;



links is a keyword that is used to retrieve the linked data types that are associated with RootDataItem.
• RootDataItem[0].links returns all the linked data types.
• RootDataItem[0].links.XML_body returns all elements for the linked data type XML_body.
Once the elements are retrieved, you can get an array of the elements by using the following command, and
then loop through every element by using an index.

bodyArray=RootDataItem[0].links.XML_body.array

In addition, RootDataItem[0].links.XML_body is treated as enumerated elements. You can traverse


the data by using the following example.

/* RootDataItem[0].links.XML_body.first gets the first element.
   RootDataItem[0].links.XML_body.last gets the last element.
   RootDataItem[0].links.XML_body.next gets the next element.
   To use the next keyword: */

bodyElement = RootDataItem[0].links.XML_body.next;
while (bodyElement != null && bodyElement != NULL) {
    Log("bodyElement : " + bodyElement);
    // Retrieve the next element; enumerated elements are removed as they are read
    bodyElement = RootDataItem[0].links.XML_body.next;
}

The following policy example shows how to get PCDATA from the XmlFileTOC data type; the example exists in
the XmlFileTestPolicy policy in the XML project. Enumerated elements means that when an element is
retrieved by using next, it is removed from the list and no longer exists.

BookNode = TopNodes[0].links.FTEST_JavaXML_Contents;
Log("BookNode size: " + BookNode.size);
BookNodeLinksTypes=BookNode.links;
Log("BookNodeLinksTypes size : " + BookNodeLinksTypes.size);
Log("BookNodeLinksTypes: " +BookNodeLinksTypes);
Chapters=BookNodeLinksTypes.first.FTEST_JavaXML_Chapter;
Log("Chapters and size: " + Chapters.links.size + " " + Chapters);
index =0;
Chapter= Chapters.first;
while(Chapter != null && Chapter != NULL) {
index = index + 1;
Log("Chapter" +index + ": " + Chapter);
Chapter= Chapters.next;
Topics =Chapter.links.FTEST_JavaXML_Topic;
i = 1;
Topic=Topics.first;
while(Topic != null && Topic != NULL) {
Log("Topic"+i+": " + Topic.PCDATA);
Topic=Topics.next;
i = i +1;
}
}

Accessing attribute values


To access an element data item's attribute values, reference the corresponding data type fields.
This example shows how to log the value of the ID attribute that is associated with the current element
data item:

Log("The message ID is: " + DataItem.id);

This example shows how to log the PCDATA value that is associated with the current element data item:

Log(DataItem.PCDATA);

Sample policies
The DSA provides four sample policies.
• XmlStringTestPolicy
• XmlFileTestPolicy



• XmlHttpTestPolicy
• XmlXsdFileTestPolicy
These policies are configured to use the TOC.dtd, TOC.xsd and TOC.xml files in the
$IMPACT_HOME/dsa/XmlDsa directory.

XmlStringTestPolicy
The XmlStringTestPolicy shows how to use the XML DSA to read data from an XML string.
The policy reads the contents of an XML-formatted string and then prints the data to the policy log. Before
you use this policy, you must run the create DTD types script as follows:

./CreateDtdTypes.sh NCI impactadmin impactpass ../TOC.dtd XmlStringTOC STEST_

You do not need to edit the contents of the XmlFileTypes configuration file. By default, this file contains
the necessary data source mappings. The data type mappings are defined as follows:
This type declaration shows how to define an XmlFile type to parse an XML file. It uses a DTD file because
the "isXsd" property is not defined.

XmlDsa.fileTypes.1.typeName=XmlFileTOC
XmlDsa.fileTypes.1.dtdFile=dsa/XmlDsa/TOC.dtd
XmlDsa.fileTypes.1.xmlFile=dsa/XmlDsa/TOC.xml
XmlDsa.fileTypes.1.prefix=FTEST_

This type declaration shows how to define an XmlFile type to parse an XML file. The difference between
this type and the previous one is that this one uses an XSD file to get the schema information.

XmlDsa.fileTypes.2.typeName=XmlXsdFileTOC
XmlDsa.fileTypes.2.dtdFile=dsa/XmlDsa/TOC.xsd
XmlDsa.fileTypes.2.xmlFile=dsa/XmlDsa/TOC.xml
XmlDsa.fileTypes.2.prefix=XSDFTEST_
XmlDsa.fileTypes.2.isXsd=true

This type declaration shows how to define an XmlString type.

XmlDsa.fileTypes.3.typeName=XmlStringTOC
XmlDsa.fileTypes.3.dtdFile=dsa/XmlDsa/TOC.dtd
XmlDsa.fileTypes.3.xmlFile=-
XmlDsa.fileTypes.3.prefix=STEST_

XmlFileTestPolicy
The XmlFileTestPolicy shows how to use the XML DSA to read data from an XML file.
This policy reads the contents of the TOC.xml file and then prints the data to the policy log. Before you
use this policy, you must run the create DTD types script as follows:

./CreateDtdTypes.sh NCI impactadmin impactpass ../TOC.dtd XmlFileTOC FTEST_

You do not need to edit the contents of the XmlFileTypes configuration file. By default, this file contains
the necessary data source mappings. The following are the data type mappings:

XmlDsa.fileTypes.1.typeName XmlFileTOC
XmlDsa.fileTypes.1.dtdFile dsa/XmlDsa/TOC.dtd
XmlDsa.fileTypes.1.xmlFile dsa/XmlDsa/TOC.xml
XmlDsa.fileTypes.1.prefix FTEST_



XmlHttpTestPolicy
The XmlHttpTestPolicy shows how to use the XML DSA to read data from a location on an HTTP server.
This policy reads the XML data from an HTTP server and then prints it to the policy log. Before you use this
policy, you must run the CreateDtdTypes script as follows:

./CreateDtdTypes.sh NCI impactadmin impactpass ../TOC.dtd XmlHttpTOC HTEST_

You must have CGI enabled on the HTTP server, and you must copy the Perl CGI script onto the HTTP
server.
After you install the script, modify the XmlHttpTypes configuration file to reflect the location of the
script and to include a valid user name and password for the authentication realm, if any.
The following example shows the data type mappings:

XmlDsa.httpTypes.1.typeName XmlHttpTOC
XmlDsa.httpTypes.1.dtdFile dsa/XmlDsa/TOC.dtd
XmlDsa.httpTypes.1.prefix HTEST_
XmlDsa.httpTypes.1.url http://localhost:9080
XmlDsa.httpTypes.1.user John
XmlDsa.httpTypes.1.password Smith
XmlDsa.httpTypes.1.realm basicrealm
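
In the policy itself, the document is retrieved with a filter that names the CGI script relative to the
base URL. The following sketch uses getTOC.pl as a hypothetical script name; substitute the name and path
of the CGI script that you installed:

// Retrieve the TOC document over HTTP (getTOC.pl is a hypothetical script name)
Type = "XmlHttpTOC";
Filter = "Operation = 'GET' AND FilePath = '/cgi-bin/getTOC.pl'";
CountOnly = False;
DocDataItem = GetByFilter(Type, Filter, CountOnly);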

XmlXsdFileTestPolicy
The XmlXsdFileTestPolicy shows how to use the XML DSA to read data from an XML file.
This policy reads XML data returned from a URL and then prints the data to the policy log. Before you use
this policy, you must run the create XSD types script as follows:

./CreateXsdTypes.sh NCI impactadmin impactpass ../TOC.xsd XmlXsdFileTOC XSDFTEST_

You do not need to edit the contents of the XmlFileTypes configuration file. By default, this file contains
the necessary data source mappings. The following are the data type mappings:

XmlDsa.fileTypes.2.typeName=XmlXsdFileTOC
XmlDsa.fileTypes.2.dtdFile=dsa/XmlDsa/TOC.xsd
XmlDsa.fileTypes.2.xmlFile=dsa/XmlDsa/TOC.xml
XmlDsa.fileTypes.2.prefix=XSDFTEST_
XmlDsa.fileTypes.2.isXsd=true



Chapter 11. Working with the SNMP DSA

The SNMP DSA is a data source adaptor that is used to set and retrieve management information stored by
SNMP agents.

SNMP DSA overview


The SNMP DSA is a data source adaptor that is used to set and retrieve management information stored
by SNMP agents. It is also used to send SNMP traps and notifications to SNMP managers.
The SNMP DSA is installed automatically when you install Tivoli Netcool/Impact. You must make sure
that any MIB files that are to be used by the DSA are located in the $IMPACT_HOME/dsa/snmpdsa/
mibs directory when you start the Impact Server. For more information about installing MIB files,
see “Installing MIB files” on page 125. You are not required to perform any additional installation or
configuration steps.
Impact reuses the SNMP sessions it creates to connect to the SNMP agent. If you need to clear the SNMP
session cache in Impact, make changes to the SNMP data source.
You must perform the following tasks when you use the SNMP DSA:
• Create one data source for each SNMP agent that you want to access using the DSA, or create a
single data source and use it to access all agents. For more information about working with SNMP data
sources, see “Working with SNMP data sources” on page 125.
• Create data types that you will use to access variables and tables managed by SNMP agents. For more
information about working with SNMP data types, see “Working with SNMP data types” on page 128.
• Write one or more policies that set or retrieve variables and tables managed by SNMP agents, or that
send SNMP traps and notifications. For more information about SNMP policies, see “SNMP policies” on
page 130.

SNMP data model


An SNMP data model is an abstract representation of SNMP data managed by agents in your environment.
SNMP data models have the following elements:
• SNMP data sources
• SNMP data types

SNMP data sources


SNMP data sources represent an agent in the environment.
The data source configuration specifies the host name and port where the agent is running, and the
version of SNMP that it supports. For SNMP v3, the configuration also optionally specifies authentication
properties.
You can either create one data source for each SNMP agent that you want to access using the DSA, or you
can create a single data source and use it to access all agents. You can create and configure data sources
using the GUI. After you create a data source, you can create one or more data types that represent the
OIDs of variables managed by the corresponding agent.



SNMP data types
SNMP data types are Netcool/Impact data types that specify the structure and content of data associated
with an agent.
The identity of the agent is determined by the data source that is associated with the data type. Each data
type specifies one or more object IDs (OIDs) that reference variables managed by the agent.
The SNMP DSA supports the following categories of data types:
• Packed OID data types
• Table data types
Previous versions of this DSA supported another category of data type called discrete OID data types.
This category was used to reference single variable OIDs. In this version of the DSA, you access single
variables in the exact same way that you access the sets of variables represented by packed OID data
types.
For more information about OIDs and SNMP variables, see the reference documentation for the agent you
want to access using the SNMP DSA.

Packed OID data types


Packed OID data types are data types that reference the OIDs of one or more variables managed by
a single agent. You use this category of data type when you want to access single variables or sets of
related variables. When you create a packed OID data type, you specify the name of the associated data
source, the OID for each variable and options that determine the behavior of the DSA when connecting to
the agent.
For more information about creating packed OID data types, see “Working with SNMP data types” on
page 128.

Table data types


Table data types are data types that reference the OIDs of one or more SNMP tables managed by a single
agent. When you create a table data type, you specify the name of the associated data source, the OID for
the table and options that determine the behavior of the DSA when connecting to the agent.
For more information about creating data types, see “Creating SNMP data types” on page 128.

SNMP DSA process


The SNMP DSA process has the following phases:
• Sending Data to Agents
• Retrieving Data from Agents
• Sending Traps and Notifications to Managers
• Handling Error Conditions
• Handling Timeouts

Sending data to agents


The DSA supports two functions in the Netcool/Impact policy language (IPL) that allow you to send data
to an SNMP agent. These functions are the standard function AddDataItem and the SNMP function
SnmpSetAction.
When Netcool/Impact encounters a call to one of these functions in a Netcool/Impact policy, it assembles
an SNMP SET command using the information specified in the function parameters and passes this
command to the DSA for processing. The DSA then sends the command to the agent.



If the SET command is successful, the agent sends a confirmation message to the DSA and Netcool/
Impact continues processing the policy.

Retrieving data from agents


The DSA supports three functions that allow you to retrieve data from an agent. These functions are the
standard function GetByFilter and the SNMP functions SnmpGetAction and SnmpGetNextAction.
When Netcool/Impact encounters a call to one of these functions in a Netcool/Impact policy, it assembles
an SNMP GET or GETNEXT command using the information specified in the function parameters. It then
passes this command to the DSA for processing. The DSA then sends the command to the agent.
If the GET or GETNEXT command is successful, the agent sends the requested data back to the DSA.
The DSA returns the information to Netcool/Impact, which then stores the information in a policy-level
variable that you can access in subsequent parts of the policy.

Sending traps and notifications to managers


The DSA supports an SNMP function named SNMPTrapAction that you use to send traps or notifications
to an SNMP manager.
When the Netcool/Impact encounters a call to SNMPTrapAction, it assembles an SNMP TRAP command
using the information specified in the function parameters. It then passes this command to the DSA for
processing. The DSA then sends the command to the manager.
If the TRAP command is successful, the manager sends a confirmation message to the DSA and the policy
is processed.

Handling error conditions


If a SET, GET, GETNEXT, or TRAP command sent to an agent or manager is unsuccessful, the DSA returns
an error string to Netcool/Impact that can be printed to the policy log or otherwise handled in the body of
the policy.

Handling timeouts
If an agent or manager does not respond to a SET, GET, GETNEXT, or TRAP command sent by the DSA
within the timeout period specified in the function call or the related SNMP data type, the DSA sets a
timeout message in the error string and returns it to Netcool/Impact. This error string can be handled in
the body of the policy in the same way as any other error message.
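
For example, a policy can test the error string after an SNMP operation before it continues. The
following is a minimal sketch; it assumes only that ErrorString is empty or NULL when the operation
succeeds:

// Check for an error or timeout reported by the SNMP DSA
If (ErrorString != NULL && ErrorString != "") {
    Log("SNMP request failed or timed out: " + ErrorString);
} Else {
    Log("SNMP request completed successfully.");
}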

Installing MIB files


You must make sure that any MIB files that are to be used by the DSA are located in the
$IMPACT_HOME/dsa/snmpdsa/mibs directory when you start the Netcool/Impact server. By default,
this directory contains the RFC1213-MIB and RFC1271-MIB files. Other commonly used MIB files are also
installed with Netcool/Impact. You must copy these, or other MIB files that you provide, to the
$IMPACT_HOME/dsa/snmpdsa/mibs directory before you can use them with the DSA. After you copy a new file
to this directory, you must stop
and restart the Netcool/Impact server.
Note: MIB files are written in ASN.1 notation. (ASN.1 stands for Abstract Syntax Notation 1.) ASN.1 is a
standard notation managed by the ISO (International Organization for Standardization). You must ensure
that no MIB files copied into an Impact installation contain underscores. Underscores ("_") are not valid
content in MIB files.
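
For example, before you restart the server you can check for MIB files that contain underscores with a
command similar to the following (illustrative; adjust the path to match your installation):

grep -l "_" $IMPACT_HOME/dsa/snmpdsa/mibs/*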

Working with SNMP data sources


You use the GUI to perform the following tasks with SNMP data sources:
• Create new data sources



• Edit data sources
• Delete data sources

Creating SNMP data sources

About this task


You can either create one data source for each SNMP agent that you want to access using the DSA, or you
can create a single data source and use it to access all agents.
If you plan to use the standard data-handling functions AddDataItem and GetByFilter to access
SNMP data, you must create a separate data source for each agent. In this scenario, the host name, port,
and other connection information for the agent is encapsulated as part of the data source configuration.
When you make a call to the AddDataItem or GetByFilter function, you pass the name of a data
type associated with the data source and Netcool/Impact uses this information to derive the identity and
location of the agent in the environment.
If you plan to use the SNMP functions that are provided with this release of the DSA, you can create a
single data source and use it to access all agents. In this scenario, the host name and port are passed
as runtime parameters when you call each function. You can dynamically specify the agent during policy
runtime that is based on host name information from incoming ObjectServer events or derived from other
external data sources.
This version of the DSA provides additional support for SNMP v3 authentication. If you are creating a data
source for use with SNMP v3, you must perform additional configuration tasks.

Creating SNMP v1 and v2 data sources


Use this procedure to create an SNMP v1 or v2 data source.

Procedure
1. Log in to the Netcool/Impact GUI using a web browser.
2. Click the Data Sources tab and select SNMP from the Source list.
3. Click the New Data Source button.
The New Data Source dialog box opens.
4. Type a unique name for the data source in the Data Source Name field.
5. If you are creating this data source for use with the standard data-handling functions AddDataItem
and GetByFilter, type the host name or IP address where the agent resides in the Host Name
field and the port in the Port field. If you are creating this data source for use with the new SNMP
functions, you can accept the default values with no changes.
6. Type the name of the SNMP read-community in the Read Community field. The default is public.
7. Type the name of the SNMP write-community in the Write Community field. The default is public.
8. Type a timeout value in seconds in the Timeout field. When the DSA connects to an agent associated
with this data source, it waits for the specified timeout period before returning an error to Netcool/
Impact.
9. Select 1 or 2 from the Version list.
10. Click OK.

Creating SNMP v3 data sources

About this task


To create a data source with SNMP v3 authentication, you specify the configuration properties and
then provide the information required for the agent to authenticate the DSA as an SNMP user. The



authentication parameters can be overridden by calls to the SNMP functions in the Impact Policy
Language.
For information about authentication parameters, see the documentation provided by the SNMP agent
and manager.
To create an SNMP v3 data source:

Procedure
1. Log in to the GUI using a web browser.
2. Click the Data Sources tab and select SNMP from the Source list.
3. Click the New Data Source button.
The New Data Source dialog box opens.
4. Type a data source name, the host name and IP address of the SNMP agent, community strings and
timeout values as specified in the previous section.
5. Select 3 from the Version list.
6. Type the name of an SNMP v3 authentication user in the User field.
7. Select a protocol from the Authentication Protocol list. The default is MD5.
8. Type the password for the authentication user in the Password field.
9. Select a protocol from the Privacy Protocol field.
10. Type a privacy password in the Privacy Password field.
11. Type a context ID in the Context ID field.
12. Type a context name in the Context Name field.
13. Click OK.

Editing SNMP data sources


You can edit the configuration for a data source after you create it. To edit an SNMP data source:

Procedure
1. Log in to the GUI using a web browser.
2. Click the name of the data source in the Data Sources tab. The Edit Data Source window opens.
3. Set the configuration properties for the data source as described in the previous sections.
4. Click OK.

Results
Any changes to the configuration take effect immediately after you finish editing the data source. There is
no need to restart the Impact Server after making a change.

Deleting an SNMP data source

About this task


To delete an SNMP data source:

Procedure
1. Log in to the Netcool/Impact GUI using a web browser.
2. In the Data Sources tab, click the Delete Data Source icon next to the name of the data source you
want to delete.



Working with SNMP data types
You use the GUI to perform the following tasks with SNMP data types:
• Create new data types
• Edit data types
• Delete data types

Creating SNMP data types

About this task


If you plan to use the standard data-handling functions AddDataItem and GetByFilter to access
SNMP data, you must create a separate data type for each set of variables (packed OID data types) or
each set of tables (table data types) that you want to access. In this scenario, the object IDs (OIDs) for the
variables or tables are encapsulated as part of the data type configuration. When you make a call to the
AddDataItem or GetByFilter function, you pass the name of a data type and this information is used
to determine the identity of the variables or table.
If you plan to use the SNMP functions that are provided with this release of the DSA, you can create a
single data type for each data source and use it to access all the variables and tables associated with the
agent. In this scenario, the variable or table OIDs are passed as runtime parameters when you call each
function. You can dynamically specify the OIDs during policy runtime that is based on information from an
external data source.

Creating packed OID data types


Packed OID data types are data types that reference the OIDs of one or more variables managed by
a single agent. You use this category of data type when you want to access single variables or sets of
related variables. When you create a packed OID data type, you specify the name of the associated data
source, the OID for each variable and options that determine the behavior of the DSA when connecting to
the agent.
To create a packed OID data type:
1. Log in to the Netcool/Impact GUI using a web browser.
2. Click the Data Types tab and select an SNMP data source from the Data Source list.
3. Click the New Data Type icon. The New Data Type editor opens.
4. Type a name for the data type in the Data Type Name field.
5. Select an SNMP data source from the Data Source Name field. By default, the data source you chose
in step 2 is selected.
6. Select Packed from the OID Configuration list.
7. If you are creating this data type for use with the standard data-handling functions AddDataItem and
GetByFilter, you must create an attribute on the data type for each variable you want to access. To
create an attribute, click the New Attribute button and specify an attribute name and the OID for the
variable.
If you are creating this data source for use with the new SNMP functions, you do not need to explicitly
create attributes for each variable. In this scenario, you pass the variable OIDs when you make each
function call in the Netcool/Impact policy.
8. Click Save.



Creating table data types
Use this procedure to create a table data type.

Procedure
1. In the data types tab, select an SNMP data source from the list.
2. Click the New Data Type button to open the New Data Type editor.
3. Type a name for the data type in the Data Type Name field.
Important:
The data type name must match the table name that will be queried, for example, ifTable, or
ipRouteTable.
4. Select an SNMP data source from the Data Source Name field. By default, the data source you chose
in step 2 is selected.
5. Select Table from the OID Configuration list.
6. If you are creating this data type for use with the standard data-handling functions AddDataItem and
GetByFilter, you must create a new attribute on the data type for each table you want to access. To
create an attribute, click the New Attribute button and specify an attribute name and the OID for the
table.
Important:
The attributes are the column names in each table. For example, in the following ifTable, the attributes
will be ifIndex, ifDescr and other column names:

Column Names OID


ifIndex .1.3.6.1.2.1.2.2.1.1
ifDescr .1.3.6.1.2.1.2.2.1.2
... ...

If you are creating this data source for use with the new SNMP functions, you do not need to explicitly
create attributes for each table. In this scenario, you pass the table OIDs when you make each function
call in the Netcool/Impact policy.
7. If you want the DSA to retrieve table data from the agent using the SNMP GETBULK command instead
of an SNMP GET, select Get Bulk.
The GETBULK command retrieves table data using a continuous GETNEXT command. This option is
suitable for retrieving data from very large tables.
8. If you have selected Get Bulk, you can control the number of variables in the table for which the
GETNEXT operation is performed using the specified Non-Repeaters and Max Repetitions values.
The Non-Repeaters value specifies the first number of non-repeating variables and Max Repetitions
specifies the number of repetitions for each of the remaining variables in the operation.
9. Click Save.
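
Once a table data type such as ifTable is defined with its column attributes, a policy can retrieve the
table contents. The following is a minimal sketch that assumes the ifTable data type and the ifIndex and
ifDescr attributes shown above, and that GetByFilter returns one data item per table row:

// Retrieve all rows of the ifTable data type and log two of the columns
TypeName = "ifTable";
Filter = "";
CountOnly = False;
Rows = GetByFilter(TypeName, Filter, CountOnly);

Count = 0;
While (Count < Length(Rows)) {
    Log("ifIndex: " + Rows[Count].ifIndex + ", ifDescr: " + Rows[Count].ifDescr);
    Count = Count + 1;
}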

Editing SNMP data types


You can edit the configuration for a data type after you create it. To edit an SNMP data types:

Procedure
1. Log in to the GUI using a web browser.
2. Click the name of the data type in the Data Types tab.
The Edit Data Type window opens.
3. Set the configuration properties for the data type as described in the previous sections.
4. Click OK.



Results
Any changes to the configuration take effect immediately after you finish editing the data type. There is no
need to restart the Impact Server after making a change.

Deleting SNMP data types

About this task


To delete an SNMP data type:

Procedure
1. Log in to the Netcool/Impact GUI using a web browser.
2. In the Data Types tab, click the Delete Data Type button next to the name of the data type you want
to delete.

SNMP policies
You can perform the following tasks related to the SNMP DSA in a policy:
• Set packed OID data on SNMP agents using standard data-handling functions
• Set packed OID data on SNMP agents using SNMP functions
• Set table data on SNMP agents using standard data-handling functions
• Set table data on SNMP agents using SNMP functions
• Retrieve packed OID data on SNMP agents using standard data-handling functions
• Retrieve packed OID data on SNMP agents using SNMP functions
• Send SNMP traps and notifications

Setting packed OID data with standard data-handling functions

About this task


You can use the standard data-handling function AddDataItem to set the value of a single variable
managed by an agent or to set the value of multiple variables.

Setting the value of a single variable


To set the value of a single variable, you create a context, and populate its Oid and Value member
variables. You can also populate the optional HostId and Port member variables. After you populate the
context variables, you call AddDataItem and pass the name of an SNMP data type and the context
as input parameters. If you specified values for the HostId and Port variables in the context, these
override the host and port information as defined in the data type.
To create a context, you call the NewObject function as shown in the following example.

// Call the NewObject function

MyContext = NewObject();

After you create the context, you can set the Oid and Value variables, as shown in the following example.
All member variables of the context must be set as strings.

// Populate the context variables

MyContext.Oid = ".1.3.6.1.2.1.1.4.0";
MyContext.Value = "MyValue";



Oid and Value represent the OID of the variable managed by the agent and its corresponding value.
After you populate the context variables, you can call AddDataItem and pass the name of an SNMP data
type and the context as input parameters, as shown in the following example.

// Call AddDataItem and pass the name of an SNMP data type and the context

AddDataItem("MySnmpType", MyContext);

In this example, the host name and port where the agent is located are specified by the MySnmpType data
type.
If the DSA is unable to successfully send the data to the agent, it stores an error message in the
policy-level variable ErrorString. The following example shows how to print the error message to the
policy log.

// Print any error message to the policy log

Log("Errors: " + ErrorString);

The following example shows how to set the value of a variable managed by an agent, where the
host name and port are specified by the MySnmpType data type. In this example, the variable OID is
.1.3.6.1.2.1.1.4.0 and the value is MyValue.

// Create a new context with the NewObject function

MyContext = NewObject();

// Populate the context variables

MyContext.Oid = ".1.3.6.1.2.1.1.4.0";
MyContext.Value = "MyValue";

// Call AddDataItem and pass the name of an SNMP data type and the context

AddDataItem("MySnmpType", MyContext);

// Print any error message to the policy log

Log("Errors: " + ErrorString);

The following example shows how to set the value of a variable managed by an agent, where the host
name and port specified by the data type are overridden by context variables set in the policy. In this
example, the host is 192.168.1.1 and the port is 161.

// Create a new context with the NewObject function

MyContext = NewObject();

// Populate the context variables

MyContext.Oid = ".1.3.6.1.2.1.1.4.0";
MyContext.Value = "MyValue";
MyContext.HostId = "192.168.1.1";
MyContext.Port = 161;

// Call AddDataItem and pass the name of an SNMP data type and the context

AddDataItem("MySnmpType", MyContext);

// Print any error message to the policy log

Log("Errors: " + ErrorString);

Setting the value of multiple variables


To set the value of multiple variables, you create a context and populate member variables that
correspond to the attributes you configured when you created the corresponding SNMP data type. You
can also populate the optional HostId and Port member variables.



After you populate the context variables, you call AddDataItem and pass the name of the SNMP data
type and the context as input parameters. If you specified values for the HostId and Port variables in
the context, these override the host and port information as defined in the data type.
To create a context, you call the NewObject function as shown in the following example.

// Call the NewObject function

MyContext = NewObject();

After you create the context, you can set the member variables, and the optional variables, as shown in
the following example. All member variables of the context must be set as strings.

// Populate the context variables

MyContext.SysLocation = "New York";


MyContext.SysName = "SYS01";

Here, SysLocation and SysName are attributes that you defined in the configuration for the
corresponding SNMP data type.
After you populate the context variables, you can call AddDataItem and pass the name of an SNMP data
type and the context as input parameters, as shown in the following example.

// Call AddDataItem and pass the name of an SNMP data type and the context

AddDataItem("MySnmpType", MyContext);

In this example, the host name and port where the agent is located are specified in the data type
configuration.
If the DSA is unable to successfully send the data to the agent, it stores an error message in the
policy-level variable ErrorString. The following example shows how to print the error message to the
policy log.

// Print any error message to the policy log

Log("Errors: " + ErrorString);

The following example shows how to set the value of variables managed by an agent, where the host
name and port are specified by the MySnmpType data type.

// Create a new context with the NewObject function

MyContext = NewObject();

// Populate the context variables

MyContext.SysLocation = "New York";


MyContext.SysName = "SYS01";

// Call AddDataItem and pass the name of an SNMP data type and the context

AddDataItem("MySnmpType", MyContext);

// Print any error message to the policy log

Log("Errors: " + ErrorString);

The following example shows how to set the value of a variable managed by an agent, where the host
name and port specified by the data type are overridden by context variables set in the policy. In this
example, the host is 192.168.1.1 and the port is 161.

// Create a new context with the NewObject function

MyContext = NewObject();

// Populate the context variables

MyContext.SysLocation = "New York";



MyContext.SysName = "SYS01";
MyContext.HostId = "192.168.1.1";
MyContext.Port = 161;

// Call AddDataItem and pass the name of an SNMP data type and the context

AddDataItem("MySnmpType", MyContext);

// Print any error message to the policy log

Log("Errors: " + ErrorString);

Setting packed OID data with SNMP functions

Procedure
You can use the SNMP function SnmpSetAction to set the value of a single or multiple variables
managed by an agent.
When you call SnmpSetAction, you pass an SNMP data type, the host name and port of the agent, an
array of OIDs, and the array of values that you want to set. If you are using SNMP v3, you can also specify
the information required to authenticate as an SNMP user.
For more information about SnmpSetAction, see “SNMPSetAction” on page 146.

Example
The following example shows how to set SNMP variables by calling SnmpSetAction and passing the
name of an SNMP data type, an array of OIDs, and an array of values as input parameters. In this example,
the SNMP data type is named SNMP_PACKED.

// Call SnmpSetAction and pass the name of the SNMP data type that contains
// configuration information required to perform the SNMP SET

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = {".1.3.6.1.2.1.1.4.0", ".1.3.6.1.2.1.1.5.0"};
ValueList = {"Value_01", "Value_02"};

SnmpSetAction(TypeName, HostId, Port, VarIdList, ValueList, NULL, NULL,


NULL, NULL, NULL, NULL, NULL, NULL, NULL);

For more examples, see “SNMPSetAction” on page 146.

Retrieving packed OID data from SNMP agents

About this task


Packed OID data types reference the OIDs of one or more variables managed by a single agent. You use
this category of data type when you want to access single variables or sets of related variables.
You can retrieve packed OID data from SNMP agents using one of the following functions:

Procedure
• Standard data-handling functions
• SNMP functions

Retrieving packed OID data with standard data-handling functions


You can use the standard data-handling function GetByFilter to retrieve packed OID data managed by
an agent.



To retrieve the packed OID data, you call GetByFilter and specify the name of an SNMP data type as a
runtime parameter. The data type configuration contains a list of OIDs for the variables whose value you
want to retrieve and attribute names that you can use to reference the values. The data source associated
with the data type specifies the host name and port where the agent is located.
The GetByFilter function returns an array of data items whose first element stores a context where the
member variables represent values retrieved from the agent. You can reference the returned values using
the attribute names that you defined when you created the data type.
If the DSA is unable to successfully retrieve the data, it stores an error message in a member variable on
the context called ErrorString.
The following example shows how to call GetByFilter and specify the name of an SNMP data type. You
can set the Filter parameter to an empty string and CountOnly to False.

// Call GetByFilter and pass the name of an SNMP data type

TypeName = "MySnmpType";
Filter = "";
CountOnly = False;

MySNMPValues = GetByFilter(TypeName, Filter, CountOnly);

The following example shows how to access values returned by the function. In this example,
MySnmpType defines attributes named HostId, SysContact, SysName, and SysLocation.

// Access the member variables of the context returned by GetByFilter

Log("HostId: " + MySNMPValues[0].HostId);
Log("SysContact: " + MySNMPValues[0].SysContact);
Log("SysName: " + MySNMPValues[0].SysName);
Log("SysLocation: " + MySNMPValues[0].SysLocation);

The following example shows how to access an error message returned by the call to GetByFilter.

Log("Errors: " + MySNMPValues[0].ErrorString);

The following complete example shows how to use GetByFilter and handle the values it returns.

// Call GetByFilter and pass the name of an SNMP data type

TypeName = "MySnmpType";
Filter = "";
CountOnly = False;

MySNMPValues = GetByFilter(TypeName, Filter, CountOnly);

// Access the member variables of the context returned by GetByFilter

Log("HostId: " + MySNMPValues[0].HostId);
Log("SysContact: " + MySNMPValues[0].SysContact);
Log("SysName: " + MySNMPValues[0].SysName);
Log("SysLocation: " + MySNMPValues[0].SysLocation);

Log("Errors: " + MySNMPValues[0].ErrorString);

Retrieving packed OID data with SNMP functions


You can use the SNMP function SnmpGetAction to retrieve packed OID data managed by an agent.
When you call SnmpGetAction, you pass an SNMP data type, the host name and port of the agent, and
other parameters. If you are using SNMP v3, you can also specify the information required to authenticate
as an SNMP user.
For more information about SnmpGetAction, see “SNMPGetAction” on page 138.

The following example shows how to use SnmpGetAction. In this example, the variable OIDs are
specified by the SNMP_PACKED data type configuration.

// Call SnmpGetAction and pass the name of the SNMP data type that contains
// configuration information required to perform the SNMP GET

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;

SnmpGetAction(TypeName, HostId, Port, NULL, NULL, NULL, NULL, NULL, NULL, NULL,
NULL, NULL, NULL);

// Print the results of the SNMP GET to the policy log

Count = 0;

While (Count < Length(ValueList)) {
  Log(ValueList[Count]);
  Count = Count + 1;
}

Traversing SNMP trees


You can use the SnmpGetNextAction function to retrieve the values of the next SNMP variables in the
variable tree from an agent. This function is useful when you want to traverse an entire tree, or when
you do not know the OIDs of the subsequent variables in the tree that you want to retrieve.
When you call SnmpGetNextAction, you pass an SNMP data type and the host name and port where the
agent is located. If you are using SNMP v3, you can also specify the information required to authenticate
as an SNMP user. You can also optionally pass a list of OIDs and other information needed to retrieve the
data.
For more information about the SnmpGetNextAction function, see “SNMPGetNextAction” on page 142.
The following example shows how to use SnmpGetNextAction.

// Call SnmpGetNextAction and pass the name of the SNMP data type that contains
// configuration information required to perform the SNMP GETNEXT

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;

SnmpGetNextAction(TypeName, HostId, Port, NULL, NULL, NULL, NULL, NULL,
                  NULL, NULL, NULL, NULL, NULL);

// Print the results of the SNMP GETNEXT to the policy log

Count = 0;

While (Count < Length(ValueList)) {
  Log(VarIdList[Count] + ": " + ValueList[Count]);
  Count = Count + 1;
}

Retrieving table data from SNMP agents

About this task


Table data types reference the OIDs of one or more tables managed by a single agent. You use this
category of data type when you want to access SNMP tables.
You can retrieve table data from SNMP agents using:

Procedure
• Standard data-handling functions
• SNMP functions

Retrieving table data with standard data-handling functions
You can use the standard data-handling function GetByFilter to retrieve table data managed by an
agent.
To retrieve the table data, you call GetByFilter and specify the name of an SNMP data type as a
runtime parameter. The data type configuration contains a list of OIDs for the tables whose value you
want to retrieve and attribute names that you can use to reference the tables. The data source associated
with the data type specifies the host name and port where the agent is located.
The GetByFilter function returns an array of data items whose first element stores a context where the
member variables represent values retrieved from the agent. You can reference the returned values using
the attribute names that you defined when you created the data type.
If the DSA is unable to successfully retrieve the data, it stores an error message in a member variable on
the context called ErrorString.
The following example shows how to call GetByFilter and specify the name of an SNMP data type. You
can set the Filter parameter to an empty string and CountOnly to False.

// Call GetByFilter and pass the name of an SNMP data type

TypeName = "MySnmpType";
Filter = "";
CountOnly = False;

MySNMPValues = GetByFilter(TypeName, Filter, CountOnly);

The following example shows how to access values returned by the function. In this example,
MySnmpType defines attributes named HostId, SysContact, SysName, and SysLocation.

// Access the member variables of the context returned by GetByFilter

Log("HostId: " + MySNMPValues[0].HostId);
Log("SysContact: " + MySNMPValues[0].SysContact);
Log("SysName: " + MySNMPValues[0].SysName);
Log("SysLocation: " + MySNMPValues[0].SysLocation);

The following example shows how to access an error message returned by the call to GetByFilter.

Log("Errors: " + MySNMPValues[0].ErrorString);

The following complete example shows how to use GetByFilter and handle the values it returns.

// Call GetByFilter and pass the name of an SNMP data type

TypeName = "MySnmpType";
Filter = "";
CountOnly = False;

MySNMPValues = GetByFilter(TypeName, Filter, CountOnly);

// Access the member variables of the context returned by GetByFilter

Log("HostId: " + MySNMPValues[0].HostId);
Log("SysContact: " + MySNMPValues[0].SysContact);
Log("SysName: " + MySNMPValues[0].SysName);
Log("SysLocation: " + MySNMPValues[0].SysLocation);

Log("Errors: " + MySNMPValues[0].ErrorString);
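
The following is a hedged sketch only; the data type name MySnmpTableType and the attribute name IfDescr
are hypothetical, and the exact shape of the returned array depends on how you defined the table data type.
It simply iterates over whatever data items GetByFilter returns and logs one attribute from each item:

// Hypothetical table data type and attribute names
MyRows = GetByFilter("MySnmpTableType", "", False);

Count = 0;
While (Count < Length(MyRows)) {
  Log("Item " + Count + ": " + MyRows[Count].IfDescr);
  Count = Count + 1;
}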

Sending SNMP traps and notifications

About this task


You use the SnmpTrapAction function to send a trap (for SNMP v1) or a notification (for SNMP v2) to an
SNMP manager.

To send the trap or notification, you call the function and pass the host name and port where the manager
is located, a list of OIDs and corresponding values for the trap, and other related information. If the trap
or notification is not successful, the function stores an error message in the policy-level ErrorString
variable. You can handle the contents of ErrorString in subsequent parts of the policy.
For more information about the SnmpTrapAction function, see “SnmpTrapAction” on page 150.

Example
The following example shows how to send a trap using the SnmpTrapAction function.

// Call SnmpTrapAction and pass the host name, port, OID list, OID values
// and other required parameters

HostId = "192.168.1.1";
Port = 162;
Version = 1;
Community = "public";

SysUpTime = 1001;

Enterprise = ".1.3.6.1.2.1.11";
GenericTrap = 3;
SpecificTrap = 0;

VarIdList = {".1.3.6.1.2.1.2.2.1.1.0", "sysDescr"};
ValueList = {"2", "My system"};

SnmpTrapAction(HostId, Port, VarIdList, ValueList, Community, Version,
               SysUpTime, Enterprise, GenericTrap, SpecificTrap, NULL);

// Print any errors to the policy log

Log("Errors: " + ErrorString);

The following example shows how to send a notification using the SnmpTrapAction function. In this
example, you set a value for the SnmpTrapOid parameter.

// Call SnmpTrapAction and pass the host name, port, OID list, OID values
// and other required parameters

HostId = "192.168.1.1";
Port = 162;
Version = 2;
Community = "public";

SysUpTime = 1001;

Enterprise = ".1.3.6.1.2.1.11";
GenericTrap = 3;
SpecificTrap = 0;

VarIdList = {".1.3.6.1.2.1.2.2.1.1.0", "sysDescr"};
ValueList = {"2", "My system"};

SnmpTrapOid = ".1.3.6.1.2.4.1.11";

SnmpTrapAction(HostId, Port, VarIdList, ValueList, Community, Version,
               SysUpTime, Enterprise, GenericTrap, SpecificTrap, SnmpTrapOid);

// Print any errors to the policy log

Log("Errors: " + ErrorString);

SNMP functions
The SNMP DSA supports a special set of functions that you can use to send data to and retrieve data
from SNMP agents. You can also use the SNMP functions to send SNMP traps and notifications to SNMP
managers.
The SNMP DSA supports the following functions:

• SnmpGetAction
• SnmpGetNextAction
• SnmpSetAction
• SnmpTrapAction
The SNMP DSA also supports the use of standard data-handling functions as described in “SNMP
policies” on page 130.

SNMPGetAction
The SnmpGetAction function retrieves a set of SNMP variables from the specified agent.
The values are stored in a variable named ValueList. This function operates by sending an SNMP GET
command to the specified agent.
When you call SnmpGetAction, you pass an SNMP data type and, for SNMP v3, any authorization
parameters that are required. To override the agent and variable information specified in the SNMP data
type, you can also optionally pass a host name, a port number, a list of OIDs, and other information
needed to retrieve the data.
Note: If you run an snmpget and an snmpset in the same policy, do not reuse the ValueList
variable for both operations, because of the special usage of ValueList.

Syntax
The following is the syntax for SnmpGetAction:

SnmpGetAction(TypeName, [HostId], [Port], [VarIdList], [Community], [Timeout],
              [Version], [UserId], [AuthProtocol], [AuthPassword], [PrivPassword],
              [ContextId], [ContextName])

Parameters
The SNMPGetAction function has the following parameters.

Table 38. SNMPGetAction function parameters

Parameter Format Description

TypeName String Name of the SNMP data type that specifies the host name, port, OIDs,
and other information needed to retrieve the SNMP data.

HostId String Optional. Host name or IP address of the system where the SNMP
agent is running. Overrides value specified in the SNMP data type.

Port Integer Optional. Port where the SNMP agent is running. Overrides value
specified in the SNMP data type.

VarIdList Array Optional. Array of strings containing the OIDs of SNMP variables to
retrieve from the agent. Overrides values specified in the SNMP data
type.

Community String Optional. Name of the SNMP read community string. Default is public.


Timeout Integer Optional. Number of seconds to wait for a response from the SNMP
agent at first try. As the IBM SNMP API sends subsequent retries, it
uses the following incremental strategy: The second message is sent in
twice the time of the first message, the third message is sent in twice
the time of the second message, and so forth.
The default value for the Timeout property is 15 seconds. The total
amount of time to wait for the SNMP agent to respond before timing
out is determined by the Timeout value and the number of retries that
SNMP API makes.
If Timeout is set to 1 and retries is set to 3, the actual timeout is
15 seconds (this is the initial 1-second wait after the first try, plus the
cumulative wait times after three subsequent retries, namely 2+4+8).
If Timeout is set to 5 and retries is set to 3, the actual timeout is 75
seconds.
If Timeout is set to 15 and retries is set to 3, the actual timeout is 225
seconds.
Note: The default retries is 3. You can set this value
using the impact.snmp.session.retries property in the
$IMPACT_HOME/etc/<ServerName>_server.props file. For
example, the following entry sets retries to 2:
impact.snmp.session.retries=2
If Timeout is set to 1 and retries is set to 2, the actual timeout is 7
seconds.
If Timeout is set to 1 and retries is set to 1, the actual timeout is 3
seconds.
Note:
A property (totaltimeoutsnmp) can be added to the
$IMPACT_HOME/etc/<ServerName>_server.props file to ensure
that the call only blocks until it gets a response or the time expires.
The value for the impact.server.totaltimeoutsnmp property is
set in milliseconds. For example, the following property ensures that
an snmpget request times out after three seconds if a response is not
received.

impact.server.totaltimeoutsnmp=3000

Version Integer Optional. SNMP version number. Possible values are 1, 2 and 3. Default
is 2.

UserId String Required for SNMP v3 authentication. If using SNMP v1 or v2, or using
v3 without authentication, pass a null value for this parameter.

AuthProtocol String Optional. For use with SNMP v3 authentication only. Possible values
are MD5, MD5_AUTH, NO_AUTH, SHA, and SHA_AUTH. NO_AUTH is the
default.


AuthPassword String Optional. For use with SNMP v3 authentication only. Authentication
password associated with the specified SNMP User ID.

PrivProtocol String Optional. Privacy policy to be used with this function. Possible values
are DES, AES, and None. None is the default.
Note: This parameter does not form a part of the function call. It must
be defined before the call to the function.

PrivPassword String Optional. For use with SNMP v3 authentication only. Privacy password
associated with the specified SNMP User ID.

ContextId String Optional. For use with SNMP v3 authentication only. Authentication
context ID.

ContextName String Optional. For use with SNMP v3 authentication only. Authentication
context name.
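
The retry and total-timeout settings that are described for the Timeout parameter are ordinary entries in the
$IMPACT_HOME/etc/<ServerName>_server.props file. The following snippet is an illustration only; the values
shown are examples:

# Reduce the number of SNMP retries from the default of 3 to 2
impact.snmp.session.retries=2

# Give up on a single SNMP request after 3 seconds (value is in milliseconds)
impact.server.totaltimeoutsnmp=3000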

Return Values
When you call SnmpGetAction, it sets the following variable in the policy context: ValueList.
The ValueList variable is an array of strings, each of which stores the value of one variable retrieved
from the SNMP agent. The strings in the array are assigned in the order that the variable OIDs are
specified in the SNMP data type or the VarIdList parameter.

Error handling
If the SNMP operation fails, the Impact policy engine throws an SnmpDSAException. You can handle this
exception using the Handle() function:

// Handle SNMP DSA Exceptions

Handle com.micromuse.dsa.snmpdsa.SnmpDSAException {
  log("ErrorMessage: " + ErrorMessage);
  javaCall("com.micromuse.dsa.snmpdsa.SnmpDSAException", ExceptionMessage, "getCause", null);
  log("MyException is " + ExceptionMessage);
}

Example 1
The following example shows how to retrieve a set of SNMP variables by calling SNMPGetAction and
passing the name of an SNMP data type as an input parameter. In this example, the SNMP data type is
named SNMP_PACKED. The data type configuration specifies the host name and port where the SNMP
agent is running and the OIDs of the variables you want to retrieve.

// Call SNMPGetAction and pass the name of the SNMP data type that contains
// configuration information required to perform the SNMP GET

TypeName = "SNMP_PACKED";

SnmpGetAction(TypeName, "192.168.1.1", 161, null, null, null,
              null, null, null, null, null, null, null);

// Print the results of the SNMP GET to the policy log

Count = 0;

while (Count < Length(ValueList)) {
  Log(ValueList[Count]);
  Count = Count + 1;
}

Example 2
The following example shows how to retrieve a set of SNMP variables by calling SNMPGetAction and
explicitly overriding the default host name, port, and other configuration values set in the SNMP data type.
Example 2 using IPL.

// Call SnmpGetAction and pass the name of the SNMP data type that contains
// configuration information required to perform the SNMP GET

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = {".1.3.6.1.2.1.1.5.0", ".1.3.6.1.2.1.1.6.0"};
Community = "private";
Timeout = 15;

SnmpGetAction(TypeName, HostId, Port, VarIdList, Community,
              Timeout, null, null, null, null, null, null, null);

// Print the results of the SNMP GET to the policy log

Count = 0;

while (Count < Length(ValueList)) {
  Log(ValueList[Count]);
  Count = Count + 1;
}

Example 2 using JavaScript.

// Call SnmpGetAction and pass the name of the SNMP data type that contains
// configuration information required to perform the SNMP GET
TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = [".1.3.6.1.2.1.1.5.0", ".1.3.6.1.2.1.1.6.0"];
Community = "private";
Timeout = 15;
SnmpGetAction(TypeName, HostId, Port, VarIdList, Community,
Timeout, null, null, null, null, null, null,null);
// Print the results of the SNMP GET to the policy log
Count = 0;
while (Count < Length(ValueList)) {
Log(ValueList[Count]);
Count = Count + 1;
}

Example 3
The following example shows how to retrieve a set of SNMP variables using SNMP v3 authentication.
Example 3 using IPL.

// Call SnmpGetAction and pass the name of the SNMP data type that contains
// configuration information required to perform the SNMP GET

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = {".1.3.6.1.2.1.1.5.0", ".1.3.6.1.2.1.1.6.0"};
Community = "private";
Timeout = 15;
Version = 3;
UserId = "snmpusr";
AuthProtocol = "MD5_AUTH";
AuthPassword = "snmppwd";
ContextId = "ctx";

SnmpGetAction(TypeName, HostId, Port, VarIdList, Community,
              Timeout, Version, UserId, AuthProtocol, AuthPassword, null, ContextId, null);

// Print the results of the SNMP GET to the policy log

Count = 0;

while (Count < Length(ValueList)) {
  Log(ValueList[Count]);
  Count = Count + 1;
}

Example 3 using JavaScript.

// Call SnmpGetAction and pass the name of the SNMP data type that contains
// configuration information required to perform the SNMP GET
TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = [".1.3.6.1.2.1.1.5.0", ".1.3.6.1.2.1.1.6.0"];
Community = "private";
Timeout = 15;
Version = 3;
UserId = "snmpusr";
AuthProtocol = "MD5_AUTH";
AuthPassword = "snmppwd";
ContextId = "ctx";
SnmpGetAction(TypeName, HostId, Port, VarIdList, Community,
Timeout, Version, UserId, AuthProtocol, AuthPassword, null, ContextId, null);
// Print the results of the SNMP GET to the policy log
Count = 0;
while (Count < Length(ValueList)) {
Log(ValueList[Count]);
Count = Count + 1;
}

SNMPGetNextAction
The SnmpGetNextAction function retrieves the next SNMP variables in the variable tree from the specified
agent.
It stores the resulting OIDs in a variable named VarIdList, and the resulting values in a variable named
ValueList. The function sends a series of SNMP GETNEXT commands to the specified agent, where each
command specifies a single OID for which the next variable in the tree is to be retrieved.
When you call SnmpGetNextAction, you pass an SNMP data type and, for SNMP v3, any authorization
parameters that are required. To override the agent and variable information specified in the SNMP data
type, you can also optionally pass a host name, a port number, a list of OIDs, and other information
needed to retrieve the data.

Syntax
The following is the syntax for SnmpGetNextAction:

SnmpGetNextAction(TypeName, [HostId], [Port], [VarIdList], [Community],
                  [Timeout], [Version], [UserId], [AuthProtocol], [AuthPassword],
                  [PrivPassword], [ContextId], [ContextName])

Parameters
The SnmpGetNextAction function has the following parameters.

Table 39. SnmpGetNextAction function parameters

Parameter Format Description

TypeName String Name of the SNMP data type that specifies the host name, port,
OIDs, and other information needed to retrieve the SNMP data.


HostId String Optional. Host name or IP address of the system where the SNMP
agent is running. Overrides value specified in the SNMP data type.

Port Integer Optional. Port where the SNMP agent is running. Overrides value
specified in the SNMP data type.

VarIdList Array Optional. Array of strings containing the OIDs of SNMP variables to
retrieve from the agent. Overrides values specified in the SNMP data
type.

Community String Optional. Name of the SNMP read community string. Default is
public.

Timeout Integer Optional. Number of seconds to wait for a response from the SNMP
agent at first try. As the IBM SNMP API sends subsequent retries, it
uses the following incremental strategy: The second message is sent
in twice the time of the first message, the third message is sent in
twice the time of the second message, and so forth.
The default value for the Timeout property is 15 seconds. The total
amount of time to wait for the SNMP agent to respond before timing
out is determined by the Timeout value and the number of retries
that SNMP API makes.
If Timeout is set to 1 and retries is set to 3, the actual timeout is
15 seconds (this is the initial 1-second wait after the first try, plus
the cumulative wait times after three subsequent retries, namely
2+4+8).
If Timeout is set to 5 and retries is set to 3, the actual timeout is 75
seconds.
If Timeout is set to 15 and retries is set to 3, the actual timeout is
225 seconds.
Note: The default retries is 3. You can set this value
using the impact.snmp.session.retries property in the
$IMPACT_HOME/etc/<ServerName>_server.props file. For
example, the following entry sets retries to 2:
impact.snmp.session.retries=2
If Timeout is set to 1 and retries is set to 2, the actual timeout is 7
seconds.
If Timeout is set to 1 and retries is set to 1, the actual timeout is 3
seconds.

Version Integer Optional. SNMP version number. Possible values are 1, 2 and 3.
Default is 2.

UserId String Required for SNMP v3 authentication. If using SNMP v1 or v2, or v3
without authentication, pass a null value for this parameter.


AuthProtocol String Optional. For use with SNMP v3 authentication only. Possible values
are MD5, MD5_AUTH, NO_AUTH, SHA, and SHA_AUTH. NO_AUTH is the
default.

AuthPassword String Optional. For use with SNMP v3 authentication only. Authentication
password associated with the specified SNMP User ID.

PrivProtocol String Optional. Privacy policy to be used with this function. Possible values
are DES, AES, and None. None is the default.
Note: This parameter does not form a part of the function call. It
must be defined before the call to the function.

PrivPassword String Optional. For use with SNMP v3 authentication only. Privacy
password associated with the specified SNMP User ID.

ContextId String Optional. For use with SNMP v3 authentication only. Authentication
context ID.

ContextName String Optional. For use with SNMP v3 authentication only. Authentication
context name.

Error handling
If the SNMP operation fails, the Impact policy engine throws an SnmpDSAException. You can handle this
exception using the Handle() function:

// Handle SNMP DSA Exceptions

Handle com.micromuse.dsa.snmpdsa.SnmpDSAException {
  log("ErrorMessage: " + ErrorMessage);
  javaCall("com.micromuse.dsa.snmpdsa.SnmpDSAException", ExceptionMessage, "getCause", null);
  log("MyException is " + ExceptionMessage);
}

Example 1
The following example shows how to retrieve SNMP variables in the variable tree by calling
SNMPGetNextAction and passing the name of an SNMP data type as an input parameter. In this
example, the SNMP data type is named SNMP_PACKED. The data type configuration specifies the host
name and port where the SNMP agent is running and the OIDs of the variables whose subsequent values
in the tree you want to retrieve.

// Call SNMPGetNextAction and pass the name of the SNMP
// data type that contains configuration information required
// to perform the SNMP GETNEXT

TypeName = "SNMP_PACKED";

SnmpGetNextAction(TypeName, "192.168.1.1", 161, null, null,
                  null, null, null, null, null, null, null, null);

// Print the results of the SNMP GETNEXT to the policy log

Count = 0;

while (Count < Length(ValueList)) {
  Log(VarIdList[Count] + ": " + ValueList[Count]);
  Count = Count + 1;
}

Example 2
The following example shows how to retrieve SNMP variables in the variable tree by calling
SNMPGetNextAction and explicitly overriding the default host name, port, and other configuration
values set in the SNMP data type.
Example 2 using IPL.

// Call SnmpGetNextAction and pass the name of the
// SNMP data type that contains configuration information
// required to perform the SNMP GETNEXT

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = {".1.3.6.1.2.1.1.5.0", ".1.3.6.1.2.1.1.6.0"};
Community = "private";
Timeout = 15;

SnmpGetNextAction(TypeName, HostId, Port, VarIdList, Community,
                  Timeout, null, null, null, null, null, null, null);

// Print the results of the SNMP GETNEXT to the policy log

Count = 0;

while (Count < Length(ValueList)) {
  Log(VarIdList[Count] + ": " + ValueList[Count]);
  Count = Count + 1;
}

Example 2 using JavaScript.

// Call SnmpGetNextAction and pass the name of the
// SNMP data type that contains configuration information
// required to perform the SNMP GETNEXT
TypeName = "ipRouteTable";
HostId = "localhost";
Port = 161;
VarIdList = [".1.3.6.1.2.1.1.5.0", ".1.3.6.1.2.1.1.6.0"];
Community = "public";
Timeout = 15;
SnmpGetNextAction(TypeName, HostId, Port, VarIdList, Community, Timeout, null, null,
null, null, null, null, null);
// Print the results of the SNMP GETNEXT to the policy log
Count = 0;
while (Count < Length(ValueList)) {
Log(VarIdList[Count] + ": " + ValueList[Count]);
Count = Count + 1;
}

Example 3
The following example shows how to retrieve subsequent SNMP variables in the variable tree using SNMP
v3 authentication.
Example 3 using IPL.

// Call SnmpGetNextAction and pass the name of the
// SNMP data type that contains configuration information
// required to perform the SNMP GETNEXT

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = {".1.3.6.1.2.1.1.5.0", ".1.3.6.1.2.1.1.6.0"};
Community = "private";
Timeout = 15;
Version = 3;
UserId = "snmpusr";
AuthProtocol = "MD5_AUTH";
AuthPassword = "snmppwd";
ContextId = "ctx";

SnmpGetNextAction(TypeName, HostId, Port, VarIdList, Community,
                  Timeout, Version, UserId, AuthProtocol, AuthPassword, null,
                  ContextId, null);

// Print the results of the SNMP GETNEXT to the policy log

Count = 0;

while (Count < Length(ValueList)) {
  Log(VarIdList[Count] + ": " + ValueList[Count]);
  Count = Count + 1;
}

Example 3 using JavaScript.

// Call SnmpGetNextAction and pass the name of the
// SNMP data type that contains configuration information
// required to perform the SNMP GETNEXT
TypeName = "ipRouteTable";
HostId = "localhost";
Port = 161;
VarIdList = [".1.3.6.1.2.1.1.5.0", ".1.3.6.1.2.1.1.6.0"];
Community = "public";
Timeout = 15;
Version = 3;
UserId = "snmpuser";
AuthProtocol = "MD5";
AuthPassword = "snmppwd";
PrivProtocol = "DES";
PrivPassword = "privpwd";
SnmpGetNextAction(TypeName, HostId, Port, VarIdList, Community, Timeout, Version,
UserId, AuthProtocol, AuthPassword, PrivPassword, null, null);
// Print the results of the SNMP GET to the policy log
Count = 0;
while (Count < Length(ValueList)) {
Log(VarIdList[Count] + ": " + ValueList[Count]);
Count = Count + 1;
}

SNMPSetAction
The SnmpSetAction function sets variable values on the specified SNMP agent.
This function operates by sending an SNMP SET command to the specified agent.
When you call SNMPSetAction, you pass an SNMP data type, the host name and port of the agent, an
array of OIDs, and the array of values that you want to set. If you are using SNMP v3, you can also include
the information required to authenticate as an SNMP user.
Note: If you run an snmpset and an snmpget in the same policy, do not reuse the ValueList
variable for both operations, because of the special usage of ValueList.

Syntax
The following is the syntax for SNMPSetAction:

SnmpSetAction(TypeName, [HostId], [Port], [VarIdList],
              ValueList, [WriteCommunity], [Timeout], [Version], [UserId], [AuthProtocol],
              [AuthPassword], [PrivPassword], [ContextId], [ContextName])

Parameters
The SNMPSetAction function has the following parameters.

Table 40. SNMPSetAction function parameters

Parameter Format Description

TypeName String Name of the SNMP data type that specifies the host name,
port, OIDs, and other information needed to set the SNMP
data.

HostId String Optional. Host name or IP address of the system where the
SNMP agent is running. Overrides value specified in the SNMP
data type.

Port Integer Optional. Port where the SNMP agent is running. Overrides
value specified in the SNMP data type.

VarIdList Array Array of strings containing the OIDs of SNMP variables to set
on the agent. Overrides values specified in the SNMP data
type.

ValueList Array Array of objects containing the values you want to set. You
must specify these values in the same order that the OIDs
appear either in the SNMP data type or in the VarIdList
variable.
Note: Integer values must not be supplied within quotes, or
they will be treated as strings.

WriteCommunity String Optional. Name of the SNMP write community string.
Note: If the WriteCommunity is not set in the policy, then
the value will be taken from the Write Community field
configured in the SNMP data source.


Timeout Integer Optional. Number of seconds to wait for a response from
the SNMP agent at first try. As the IBM SNMP API sends
subsequent retries, it uses the following incremental strategy:
The second message is sent in twice the time of the first
message, the third message is sent in twice the time of the
second message, and so forth.
The default value for the Timeout property is 15 seconds. The
total amount of time to wait for the SNMP agent to respond
before timing out is determined by the Timeout value and the
number of retries that SNMP API makes.
If Timeout is set to 1 and retries is set to 3, the actual
timeout is 15 seconds (this is the initial 1-second wait after the
first try, plus the cumulative wait times after three subsequent
retries, namely 2+4+8).
If Timeout is set to 5 and retries is set to 3, the actual
timeout is 75 seconds.
If Timeout is set to 15 and retries is set to 3, the actual
timeout is 225 seconds.
Note: The default retries is 3. You can set this value
using the impact.snmp.session.retries property in the
$IMPACT_HOME/etc/<ServerName>_server.props file.
For example, the following entry sets retries to 2:
impact.snmp.session.retries=2
If Timeout is set to 1 and retries is set to 2, the actual
timeout is 7 seconds.
If Timeout is set to 1 and retries is set to 1, the actual
timeout is 3 seconds.

Version Integer Optional. SNMP version number. Possible values are 1, 2 and
3. Default is 2.

UserId String Required for SNMP v3 authentication. If using SNMP v1 or v2,
or using v3 without authentication, pass a null value for this
parameter.

AuthProtocol String Optional. For use with SNMP v3 authentication only. Possible
values are MD5, MD5_AUTH, NO_AUTH, SHA, and SHA_AUTH.
NO_AUTH is the default.

AuthPassword String Optional. For use with SNMP v3 authentication only.
Authentication password associated with the specified SNMP
User ID.

PrivProtocol String Optional. Privacy policy to be used with this function. Possible
values are DES, AES, and None. None is the default.
Note: This parameter does not form a part of the function call.
It must be defined before the call to the function.


PrivPassword String Optional. For use with SNMP v3 authentication only. Privacy
password associated with the specified SNMP User ID.

ContextId String Optional. For use with SNMP v3 authentication only.
Authentication context ID.

ContextName String Optional. For use with SNMP v3 authentication only.
Authentication context name.

Error handling
If the SNMP operation fails, the Impact policy engine throws an SnmpDSAException. You can handle this
exception using the Handle() function:

// Handle SNMP DSA Exceptions

Handle com.micromuse.dsa.snmpdsa.SnmpDSAException {
  log("ErrorMessage: " + ErrorMessage);
  javaCall("com.micromuse.dsa.snmpdsa.SnmpDSAException", ExceptionMessage, "getCause", null);
  log("MyException is " + ExceptionMessage);
}

Example 1
The following example shows how to set SNMP variables by calling SNMPSetAction and passing the
name of an SNMP data type, an array of OIDs, and an array of values as input parameters. In this example,
the SNMP data type is named SNMP_PACKED.
Example 1 using IPL.

// Call SnmpSetAction and pass the name of the
// SNMP data type that contains configuration information
// required to perform the SNMP SET

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = {".1.3.6.1.2.1.1.4.0", ".1.3.6.1.2.1.1.5.0"};
ValueList = {"Value_01", "Value_02"};

SnmpSetAction(TypeName, HostId, Port, VarIdList, ValueList,
              null, null, null, null, null, null, null, null, null);

Example 1 using JavaScript.

// Call SnmpSetAction and pass the name of the
// SNMP data type that contains configuration information
// required to perform the SNMP SET
TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = [".1.3.6.1.2.1.1.4.0", ".1.3.6.1.2.1.1.5.0"];
ValueList = ["Value_01", "Value_02"];
SnmpSetAction(TypeName, HostId, Port, VarIdList, ValueList,
null, null, null, null, null, null, null, null, null);

Example 2
The following example shows how to set SNMP variables using SNMP v3 authentication.

Example 2 using IPL.

// Call SnmpSetAction and pass the name of the
// SNMP data type that contains configuration information
// required to perform the SNMP SET

TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = { ".1.3.6.1.2.1.1.4.0", ".1.3.6.1.2.1.1.5.0"};
ValueList = {"Value_01", "Value_02"};
Community = "private";
Timeout = 15;
Version = 3;
UserId = "snmpusr";
AuthProtocol = "MD5_AUTH";
AuthPassword = "snmppwd";
ContextId = "ctx";

SnmpSetAction(TypeName, HostId, Port, VarIdList, ValueList,
              Community, Timeout, Version, UserId, AuthProtocol,
              AuthPassword, null, ContextId, null);

Example 2 using JavaScript.

// Call SnmpSetAction and pass the name of the
// SNMP data type that contains configuration information
// required to perform the SNMP SET
TypeName = "SNMP_PACKED";
HostId = "192.168.1.1";
Port = 161;
VarIdList = [".1.3.6.1.2.1.1.4.0", ".1.3.6.1.2.1.1.5.0"];
ValueList = ["Value_01", "Value_02"];
Community = "private";
Timeout = 15;
Version = 3;
UserId = "snmpusr";
AuthProtocol = "MD5_AUTH";
AuthPassword = "snmppwd";
ContextId = "ctx";
SnmpSetAction(TypeName, HostId, Port, VarIdList, ValueList,
Community, Timeout, Version, UserId, AuthProtocol,
AuthPassword, null, ContextId, null);

SnmpTrapAction
The SnmpTrapAction function sends a trap (for SNMP v1) or a notification (for SNMP v2) to an SNMP
manager. Sending traps or notifications is not supported for SNMP v3.

Syntax
The following is the syntax for SnmpTrapAction:

SnmpTrapAction(HostId, Port, [VarIdList], [ValueList],
               [Community], [Version], [SysUpTime], [Enterprise],
               [GenericTrap], [SpecificTrap], [SnmpTrapOid])

Parameters
The SnmpTrapAction function has the following parameters.

Table 41. SnmpTrapAction function parameters

Parameter Format Description

HostId String Host name or IP address of the system where the SNMP
manager is running.


Port Integer Port where the SNMP manager is running.

VarIdList Array Optional. Array of strings containing the OIDs of SNMP
variables to send to the manager.

ValueList Array Optional. Array of strings containing the values you want to
send to the manager. You must specify these values in the
same order that the OIDs appear in the VarIdList variable.

Community String Optional. Name of the SNMP write community string. Default
is public.

Version Integer Optional. SNMP version number. Possible values are 1 and 2.
Default is 2.

SysUpTime Integer Optional for SNMP v1. Required for SNMP v2. Number of
milliseconds since the system started. Default is the current
system time in milliseconds.

Enterprise String Required for SNMP v1 only. Enterprise ID.

GenericTrap String Required for SNMP v1 only. Generic trap ID.

SpecificTrap String Required for SNMP v1 only. Specific trap ID.

SnmpTrapOid String Required for SNMP v2. SNMP trap OID. (The OID that
identifies the trap.)

Example 1
The following example shows how to send an SNMP v1 trap to a manager using SnmpTrapAction.

// Call SnmpTrapAction

HostId = "localhost";
Port = 162;
Version = 1;
Community = "public";
SysUpTime = 1001;
Enterprise = ".1.3.6.1.2.1.11";
GenericTrap = 3;
SpecificTrap = 0;
VarIdList = {".1.3.6.1.2.1.2.2.1.1.0", "sysDescr"};
ValueList = {"2", "My system"};

SnmpTrapAction(HostId, Port, VarIdList, ValueList,
               Community, Version, SysUpTime, Enterprise, GenericTrap,
               SpecificTrap, NULL);

Example 2
The following example shows how to send an SNMP v2 notification to a manager using
SnmpTrapAction. SNMP v2 requires that you specify an SNMP trap OID when you call this function.

// Call SnmpTrapAction

HostId = "localhost";

Port = 162;
Version = 2;
Community = "public";
SysUpTime = 1001;
VarIdList = {".1.3.6.1.2.1.2.2.1.1.0", "sysDescr"};
ValueList = {"2", "My system"};
SnmpTrapOid = ".1.3.6.1.2.4.1.11";

SnmpTrapAction(HostId, Port, VarIdList, ValueList,
               Community, Version, SysUpTime, NULL,
               NULL, NULL, SnmpTrapOid);



Chapter 12. Working with the ITNM DSA
The ITNM DSA is a Direct Mode, bi-directional DSA that is used to send queries to the IBM Tivoli Network
Manager (ITNM) application and get the results of those queries.

ITNM DSA overview


The ITNM DSA is a Direct Mode, bi-directional DSA that is used to send queries to the ITNM application
and get the results of those queries.
After you set up Netcool/Impact and install the DSA, you can read the data in a policy using the
GetByFilter function. The DSA can also receive asynchronous messages from ITNM regarding alerts.
The ITNM DSA requires ITNM version 3.8 or higher.

For more information about ITNM hardware and software requirements, see Tivoli
Network Manager IP Edition at http://www-01.ibm.com/support/knowledgecenter/SSSHRK/landingpage/
product_welcome_itnm.html.

Setting up the DSA


The drivers required to connect Netcool/Impact to ITNM version 3.8 and 3.9 are available in
$IMPACT_HOME/integrations/itnm in your Netcool/Impact installation.
• To connect to ITNM 3.8 use, ncp_j_api-3.8.0.50.jar.
• To connect to ITNM 3.9 or higher use, ncp_j_api-3.9.0.32.jar.
For the version of ITNM you want to receive events from, complete the following steps:
1. Copy the appropriate jar file from $IMPACT_HOME/integrations/itnm and place it in
$IMPACT_HOME/dsalib folder.
2. Restart the Netcool/Impact server.
3. If you are running in a clustered mode, repeat the previous steps for each server in the cluster.
To set up the ITNM DSA, complete the following tasks:
1. Edit the precisiondsa.properties file. For more information about this task, see "Editing the DSA
properties file" on page 153.
2. Configure the ITNM Event Listener service for the DSA (optional). For more information about this task,
see "Running the ITNM event listener service for the DSA" on page 154.
3. If you plan to receive asynchronous events from ITNM, start the ITNM Event Listener Service.
A preconfigured data type, data source, and two sample policies are included in Netcool/Impact.

Editing the DSA properties file

Procedure
1. After you set up the DSA and restart the server, edit the precisiondsa.properties
file, which you can find in the directory $IMPACT_HOME/dsa/precisiondsa.

2. Edit the information in the file as required to connect to the ITNM Listener Daemon, following the
instructions in the file.

Running the ITNM event listener service for the DSA
The ITNM event listener service is preconfigured in Netcool/Impact. When the ITNM DSA is set up, you
can log in to Tivoli Netcool/Impact and run the ITNMEventListener service available in the Services node
for the ITNM project. This step is optional. It is only necessary to set up an event listener service if you
want to listen for events asynchronously from IBM Tivoli Network Manager.

About this task


The ITNM Event Listener service monitors a non-ObjectServer event source for events. Such services
typically work with DSAs that allow bidirectional communication with a data source.
To run the ITNM Event Listener service:

Procedure
1. From the Project selection list, select the ITNM project.
2. Click the Services tab.
3. The ITNM Event Listener service is displayed.

4. Enter the required information in the new Event Listener configuration window.
5. If you want to view the preconfigured settings, right-click the service and click Edit.
• Listener Filter: Leave this field blank.
• Policy To Execute: Shows the ITNMSampleListenerPolicy that runs when an event is received
from the IBM Tivoli Network Manager application.

• Direct Mode Class Name: This field is prepopulated with

com.micromuse.dsa.precisiondsa.PrecisionEventFeedSource

• Direct Mode Source Name: This field is prepopulated with a unique name that identifies the data
source, for example, ITNMServer.
6. Close the ITNM Event Listener service tab.
7. To run the service, in the Services tab, select the ITNM Event Listener service and click the Start
Service icon to receive events from IBM Tivoli Network Manager.
For information about IBM Tivoli Network Manager, see the IBM Tivoli Network Manager
documentation available from the following link, https://www.ibm.com/support/knowledgecenter/en/
SSSHRK/product_welcome_itnm.html.

ITNM DSA data type


The ITNM data type is the only one that works with the ITNM DSA.
You cannot rename an ITNM data type.
When the DSA queries the ITNM database, the records are returned as data items of the ITNM data type.
Each field in the records is turned into an attribute of the corresponding data item.
For example, a record can contain fields such as:
• ObjectId
• EntityName
• Address
• Description
• ExtraInfo
To access the values, you can directly access the attributes just like any other data items using the
following command:

log("Description is " + DataItem.Description);

This command prints out the Description field string that was on the ITNM record returned by the
query.

ExtraInfo field
Some fields that are returned by the query to IBM Tivoli Network Manager contain a hierarchy with
subfields. One example in IBM Tivoli Network Manager 3.8 and 3.9 is the ExtraInfo field in the
master.entityByName table, which has subfields such as m_BaseName and DNSName.
If you log the complex field as

log("ExtraInfo is " + DataItem.ExtraInfo);

you get a print out of the fields in ExtraInfo in a long String, like the following example.

ExtraInfo is {m_AccessProtocol=, m_IfDescr=FastEthernet9/6,
m_BaseName=r6509-k02-1.b7199, m_LocalNbrVlan=601, m_IfSpeed=100000000}

Individual fields in ExtraInfo can be extracted by using string parsing in the policy.
In the example the m_BaseName field could be obtained by extracting the portion of the string between
m_BaseName and the following comma using this command:

m_basename = rextract(DataItem.ExtraInfo, ".*m_BaseName=([^,]+).*");

In IBM Tivoli Network Manager 4.1 and 4.1.1, the master.entityByName table has been replaced. You
can access the entities with the ncimCache.entityData table. This table contains several hierarchical
fields similar to ExtraInfo, and the fields can be extracted by using rextract in a similar way as
described for the ExtraInfo field.

Writing policies using the ITNM DSA


The ITNM DSA supports only the GetByFilter function. This function has three components for the
filter argument for this DSA, as described in Table 42 on page 156.

Table 42. ITNM DSA Filter Arguments

Argument Description

Subject This argument specifies the service to which the OQL query is sent.
For MODEL, the value is RIVERSOFT.3.0.MODEL.QUERY.

Query This is the actual query to be sent to the subject described in the previous row. If
this component does not exist, then all the records from the subject will be retrieved.
Make sure that the OQL query contains NO " ' " characters.

Timeout This is the timeout value for getting the results back. It uses the value in the
precisiondsa properties file if you do not specify the timeout value in the filter.

GetByFilter
The GetByFilter function retrieves data items from a data type by using a filter as the query condition.
To retrieve data items using a filter condition, you call GetByFilter and pass the data type name and
the filter string as input parameters. The syntax for the filter string varies depending on whether the data
type is an internal, SQL database, LDAP, or Mediator data type.
GetByFilter returns an array of references to the retrieved data items. If you do not assign the returned
array to a variable, the function assigns it to the built-in DataItems variable and sets the value of the Num
variable to the number of data items in the array.
You can use GetByFilter with internal, SQL database, and LDAP data types. You can also use
GetByFilter with some Mediator data types.
Important: When data items are assigned to the built-in DataItems variable, they are not immediately
updated but are stored in a queue to optimize the number of calls to the database. So, for example, if
you update multiple fields in the DataItems variable there will only be one call to update the underlying
database when a subsequent function call is made. To force all queued updates, call the CommitChanges()
function in your policy. The CommitChanges() function does not take any arguments.
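
As a brief, hedged sketch of the note above (the Admin data type and its Level and Location fields are
taken from the examples later in this section; the NYC01 value is hypothetical):

// Retrieve data items without assigning the result, so they are placed in DataItems
GetByFilter("Admin", "Level = 'Supervisor'", false);

if (Num > 0) {
  // The field update is queued rather than written immediately
  DataItems[0].Location = "NYC01";
  // Force all queued updates to be written to the underlying database
  CommitChanges();
}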

Syntax
The GetByFilter function has the following syntax:

[Array =] GetByFilter(DataType, Filter, [CountOnly])

Parameters
The GetByFilter function has the following parameters.

Table 43. GetByFilter function parameters

Parameter Format Description

DataType String Name of the data type.

Filter String Filter expression that specifies which data items to retrieve from the data
type.

CountOnly Boolean Pass a false value for this parameter. Provided for compatibility with
earlier versions only.

Return value
Array of references to the retrieved data items. Optional.

Examples
The following example shows how to retrieve data items from an internal or SQL database data type.

// Call GetByFilter and pass the name of the data type
// and an SQL database filter expression

DataType = "Admin";
Filter = "Level = 'Supervisor' AND Location LIKE 'NYC.*'";
CountOnly = false;

MyAdmins = GetByFilter(DataType, Filter, CountOnly);

The following example shows how to retrieve data items from an LDAP data type.

// Call GetByFilter and pass the name of the data type
// and an LDAP filter expression

DataType = "Customer";
Filter = "(|(facility=NYC)(facility=NNJ))";
CountOnly = false;

MyCustomers = GetByFilter(DataType, Filter, CountOnly);

The following example shows how to retrieve data items from a Mediator data type.

// Call GetByFilter and pass the name of the data type
// and the Mediator filter expression

DataType = "SWNetworkElement";
Filter = "ne_name = 'DSX1 PNL-01 (ORP)'";
CountOnly = false;

MyElements = GetByFilter(DataType, Filter, CountOnly);

Writing policies to receive events from ITNM


The ITNM Event Listener Service that you optionally configured after installing the DSA is similar to the
OMNIbusEventReader, with the exception that it can asynchronously receive events from ITNM.

Policy Variables
After an event is received, the policy assigned to it is invoked with the variables described in Table 44 on
page 158. The variables are stored in the EventContainer and must be referenced in the policy using
the EventContainer or @ notation. See the ITNMSampleListenerPolicy for an example.

Table 44. Variables Returned by a Policy after Event Received from ITNM

Variable Description

ActionName This variable describes the type of action that is in the update. The possible
values are:
• "REC_DELETE"
• "REC_UPDATE"
• "REC_NEW"
• "DontKnow"

FieldNames This variable gives the names of the fields that are in the CRIV_Record that
is received from ITNM. Since the field names returned in this record are
not known before the policy is executed, a string concatenation of all these
field names, with a delimiter of "##", is used. This is a sample value in the
FieldNames variable:
##Field1##Field2##Field3##Field4, and so on.

Field1 One of the fields in the record returned by ITNM.

Field2 One of the fields in the record returned by ITNM.

Field3 One of the fields in the record returned by ITNM.

Field4 One of the fields in the record returned by ITNM.
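
The following is a minimal, hedged sketch of a listener policy that consumes these variables (the
ITNMSampleListenerPolicy supplied with the DSA is the authoritative example). It assumes only the
variables described in Table 44 and splits FieldNames on its "#" delimiter characters:

// Log the action type, then walk the "##"-delimited list of field names
Log("Action: " + @ActionName);

Names = Split(@FieldNames, "#");
Count = 0;
While (Count < Length(Names)) {
  Log("Field name: " + Names[Count]);
  Count = Count + 1;
}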

Sample policies
The DSA provides the following sample policies:
• ITNMSampleListenerPolicy
• ITNMSamplePolicy

ITNMSampleListenerPolicy
ITNMSampleListenerPolicy.ipl shows how to use the ITNM DSA to read data from an ITNM
Listener. The policy reads the contents of an ITNM formatted string and then prints the data to the Policy
log.

ITNMSamplePolicy
ITNMSamplePolicy.ipl shows how to use the ITNM DSA to read data from an ITNM database. The
policy reads the contents of an ITNM formatted string and then prints the data to the Policy log.

Chapter 13. Working with the socket DSA

The socket DSA is a data source adapter that provides an interface between Tivoli Netcool/Impact and a
socket server.

Socket DSA overview


The socket DSA is a data source adapter that provides an interface between Netcool/Impact and a
socket server. You can use the socket DSA as a generic connector between Netcool/Impact and third-
party entities where dedicated DSAs do not exist. These third-party entities can be any sort of device,
application, or system that provides an interface accessible by a scripting or programming language that
supports network sockets. Such languages include C/C++, Java, and Perl.

Socket server
A socket server is a program that acts as a mediator between a third-party entity and the socket DSA.
You can implement a custom socket server or you can expand and use the sample socket server that
is provided with the DSA. The socket server uses the Berkeley socket protocol to communicate with
Netcool/Impact via a network. The socket server is a required part of a socket DSA solution. For more
information about implementing a custom socket server, see “Implementing a custom socket server” on
page 167.

Data model
The socket DSA data model consists of a data source and set of data types that you define. You must
define one data type for each type of data that you plan to exchange between the socket DSA and the
socket server. For more information, see “Socket DSA data model” on page 160.

Process
At run time, Netcool/Impact uses the socket DSA to send queries to the socket server for information that
is stored or provided by a corresponding third-party entity. The socket server then makes a request to
the entity for the data. When the socket server receives a reply, it forwards the information back to the
DSA. The DSA then populates the socket DSA data types with the data.

Setting up the socket DSA


The socket DSA is installed automatically when you install Netcool/Impact. You are not required to
perform any additional installation or configuration steps.

Writing socket DSA policies


You use the standard Netcool/Impact function AddDataItem to send data to the socket server from
within a policy. You use GetByFilter, GetByKey, and GetByLinks to retrieve data from the server.
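
As a hedged sketch only: the SocketData type and its FirstName, LastName, and Hobby fields are the ones
used by the TestSocketDsa sample policy described later in this chapter, the values shown are hypothetical,
and what the socket server actually does with the data depends on how that server is implemented.

// Build a context and send it to the socket server as a new SocketData item
NewPerson = NewObject();
NewPerson.FirstName = "Dana";
NewPerson.LastName = "Example";
NewPerson.Hobby = "Chess";
AddDataItem("SocketData", NewPerson);

// Read it back by filter
People = GetByFilter("SocketData", "FirstName = 'Dana'", false);
Log("Found " + Num + " matching item(s)");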

Using the sample socket server


The socket DSA provides a sample socket server written in Perl that you can use to test and explore the
function of the DSA. You can also customize the sample server and use it as part of your real world socket
DSA solution.
Note: The best practice is to use the sample socket server to learn about the DSA function before you
attempt to implement a custom server using any other development tools. The sample socket server is
the best way to get started using the socket DSA.

For more information about working with the sample socket server, see “Working with the sample socket
server” on page 164.

Implementing a custom socket server


If you do not want to use the sample socket server that is distributed with the DSA, you can implement
your own using any scripting or programming language that provides access to network sockets. These
languages include Java, C/C++, and Perl. For more information about implementing a custom socket
server, see “Implementing a custom socket server” on page 167.

Socket DSA data model


The socket DSA data model consists of the following elements:
• Socket DSA data source
• Socket DSA data types

Socket DSA data source


The socket DSA data source is named SocketMediatorDataSource.
Netcool/Impact automatically generates a socket DSA data source named SocketMediatorDataSource at
installation time. Configuration properties for this data source should never be changed.

Socket DSA data types


The socket DSA data model consists of a set of data types that you use to represent logical types of data
passed between the DSA and the socket server.
You must analyze the types of data that you plan to exchange between the DSA and the socket
server and define one new data type for each required type of data.
For example, if you are using the socket server to mediate between Netcool/Impact and a network
inventory system, you might define one data type for each network element whose information you want
to access in a policy. These data types might be named Location, Facility, Rack, Port or Card. If
you are using the socket server as an interface between Netcool/Impact and a messaging system, you
might define one data type for each type of message that is to be passed between the DSA and the socket
server. These data types might be named ProblemRequest, ProblemReply, EnhancementRequest
and EnhancementReply.
When you customize the sample socket server or implement your own, you specify how the socket server
handles requests from Netcool/Impact that are related to a particular data type.

Configuring the socket DSA


Procedure
To configure the socket DSA, you must manually set the properties in the DSA properties file.
The DSA properties file is named socketdsa.properties and is located in the $IMPACT_HOME/dsa/
socketdsa directory. Property values must not contain any trailing space characters.
Note: You must stop and restart the Netcool/Impact server after you change the properties file.
This table shows the properties in the DSA properties file.

Table 45. Socket DSA Configuration Properties

Property Description

socketdsa.sockethost Host name of the system where the socket server is
running.

socketdsa.socketport Port number used by the socket server to listen to
incoming requests from the socket DSA.
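
For illustration only, a minimal socketdsa.properties file might look like the following; the host name
and port shown are example values:

socketdsa.sockethost=socketserver.example.com
socketdsa.socketport=5555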

Note: If you want to enable trace logging, you will need to update the
impactserver.log4j.properties file using the following steps:
a. Change the following line:
logger.rolling.level = INFO
to:
logger.rolling.level = DEBUG
b. Add the socketdsa package to the list:
logger.socketdsa.name = com.micromuse.dsa.socketdsa
logger.socketdsa.level= TRACE

Writing socket DSA policies


You can perform the following tasks with the socket DSA from within a Netcool/Impact policy:
• Retrieve data from the socket server by filter
• Retrieve data from the socket server by key
• Retrieve data from the socket server by links
• Send new data to the socket server
The results of these tasks are dictated in large part by how the socket server is implemented. To
understand how a socket server handles these operations, you can review the UserDataInterface.pm
file in the sample socket server. This file demonstrates how a simple socket server responds to requests
to retrieve or add data.
A sample policy is automatically imported into the Netcool/Impact server during installation. The policy is
named TestSocketDsa and demonstrates how to perform all of the functions supported by the DSA.
The sample policy works with the sample socket server that is included in the DSA tar file. For more
information about the sample server, see “Working with the sample socket server” on page 164.

Retrieving data by filter


To retrieve data by filter from the socket server, you call the GetByFilter function and pass it the name
of a socket DSA data type and the filter expression as runtime parameters.
The structure and content of the filter expression used in GetByFilter are specified when you
customize or implement the socket server.
When Netcool/Impact encounters the call to GetByFilter in the policy, it passes the request to the
socket DSA, which in turn passes the name of the data type and the full filter expression to the socket
server. The socket server then analyzes the request and returns a nested array of sets of name/value pairs
to the DSA that fulfill the terms of the specified filter. The DSA uses this nested array to populate the data
items returned by the function in the policy.



The following example shows how to retrieve data by filter from the sample socket server
distributed with the DSA. The code that handles requests to retrieve data by filter is located in the
UserDataInterface.pm file.

Seconds = GetDate();
log("Starting TestSocketDSA with type SocketData at " + LocalTime(Seconds, "HH:mm::ss"));
Type = "SocketData";
Types = {"SocketData"};

// GetByFilter testing
// First, no filter (should get all the items)
log("Testing GetByFilter -- finding all");
Filter = "";
CountOnly = false;
All = GetByFilter(Type, Filter, CountOnly);
i = 0;
while (i < Num) {
    i = i + 1;
    log("All[" + i + "] => " + All[i-1].FirstName);
    // Save All DataItems for later on when we test the Links.
    SocketDataItem = All[i-1];
}

// Provide a filter this time.
log("Testing GetByFilter -- finding Carl");
Filter = "FirstName = 'Carl'";
CountOnly = false;
Carl = GetByFilter(Type, Filter, CountOnly);
if (Num == 1) {
    log("Found Carl! => " + Carl[0].FirstName + " " + Carl[0].LastName + " and his Hobby is: " + Carl[0].Hobby);
} else {
    log("Error: Didn't find Carl!");
}

log("Testing GetByFilter -- finding Nick (bogus entry)");
Filter = "FirstName = 'Nick'";
CountOnly = false;
Nick = GetByFilter(Type, Filter, CountOnly);
if (Num == 1) {
    log("Yikes We found something! => " + Nick[0]);
} else {
    log("Great! We didn't find Nick!");
}

Retrieving data by key


To retrieve data by key from the socket server, you call the GetByKey function and pass the name of a
socket DSA data type and a key expression as runtime parameters.
When Netcool/Impact encounters the call to GetByKey in the policy, it passes the request to the socket
DSA, which in turn passes the name of the data type and the full key expression to the socket server. The
socket server then analyzes the request and returns a set of name/value pairs to the DSA that fulfills the
terms of the specified key expression. The DSA uses these name/value pairs to populate the data items
returned by the function in the policy.
The structure and content of the key expression used in GetByKey are specified when you customize or
implement the socket server.
From the perspective of the socket server, retrieving data by key differs from retrieving data by filter in
that each set of name/value pairs that it returns to the socket DSA can contain an attribute named KEY and
a corresponding value. The socket DSA uses the KEY attribute to populate the key field in the corresponding
Netcool/Impact data type. If there is no attribute named KEY, the socket DSA considers the first name/value
pair returned to represent the key field for a data item.
The following example shows how to retrieve data by key from the sample socket server
distributed with the DSA. The code that handles requests to retrieve data by key is located in the
UserDataInterface.pm file.

Seconds = GetDate();
log("Starting TestSocketDSA with type SocketData at " + LocalTime(Seconds, "HH:mm::ss"));
Type = "SocketData";
Types = {"SocketData"};

// Test GetByKey
log("Testing GetByKey with existing key == Kate");
Key = "Kate";
MaxNum = 1;
Kate = GetByKey(Type, Key, MaxNum);
if (Kate == NULL) {
    log("Error: Didn't find Kate! Num is " + Num);
} else {
    log("Found Kate! => " + Kate[0].FirstName + " Num is " + Num);
}

log("Testing GetByKey with nonexistent key == Judy");
Key = "Judy";
Judy = GetByKey(Type, Key, MaxNum);
if (Num == 0) {
    log("Great! Didn't find Judy! Num is " + Num);
} else {
    log("Yikes! Found Judy! => " + Judy[0].FirstName + " Num is " + Num);
}

Retrieving data by links


To retrieve data by links from the socket server, you call GetByLinks and pass the name of a socket DSA
data type, an optional link filter, the maximum number of data items to return and a data item that the
returned data items are linked to.
When Netcool/Impact encounters the call to GetByLinks in the policy, it passes the request to the
socket DSA. The socket DSA determines the key field in the linked data item and then passes that value
along with the data type name, the link filter, and the maximum number of data items to the socket server.
The socket server then analyzes the request and returns a nested array of sets of name/value pairs to the
DSA that are linked to the specified data item and fulfill the terms of the specified link filter. The
DSA uses this nested array to populate the data items returned by the function in the policy.
The structure and content of the link filter used in GetByLinks are determined when you customize or
implement the socket server.
The following example shows how to retrieve data by links from the sample socket server
distributed with the DSA. The code that handles requests to retrieve data by links is located in the
UserDataInterface.pm file.
Note: The first half of the policy is used to pre-populate object SocketDataItem so that the filter can be
tested.

Seconds = GetDate();
log("Starting TestSocketDSA with type SocketData at " + LocalTime(Seconds, "HH:mm::ss"));
Type = "SocketData";
Types = {"SocketData"};

// GetByFilter testing
// First, no filter (should get all the items)
log("Testing GetByFilter -- finding all");
Filter = "";
CountOnly = false;
All = GetByFilter(Type, Filter, CountOnly);
i = 0;
while (i < Num) {
    i = i + 1;
    log("All[" + i + "] => " + All[i-1].FirstName);
    // Save All DataItems for later on when we test the Links.
    SocketDataItem = All[i-1];
}

// Provide a filter this time.
log("Testing GetByFilter -- finding Carl");
Filter = "FirstName = 'Carl'";
CountOnly = false;
Carl = GetByFilter(Type, Filter, CountOnly);
if (Num == 1) {
    log("Found Carl! => " + Carl[0].FirstName + " " + Carl[0].LastName + " and his Hobby is: " + Carl[0].Hobby);
} else {
    log("Error: Didn't find Carl!");
}

log("Testing GetByFilter -- finding Nick (bogus entry)");
Filter = "FirstName = 'Nick'";
CountOnly = false;
Nick = GetByFilter(Type, Filter, CountOnly);
if (Num == 1) {
    log("Yikes We found something! => " + Nick[0]);
} else {
    log("Great! We didn't find Nick!");
}

Seconds = GetDate();
log("Starting TestSocketDSA with type SocketData at " + LocalTime(Seconds, "HH:mm::ss"));
Type = "SocketData";
Types = {"SocketData"};

// GetByLinks testing
// First, no filter (should get Judy and Brobot)
log("Testing GetByLinks -- finding all links from Jimmy");
Filter = "";
CountOnly = false;
SocketDataItems = {};
SocketDataItems = SocketDataItems + SocketDataItem;
JimmyLinks = GetByLinks(Types, Filter, null, SocketDataItems);
i = 0;
while (i < Num) {
    i = i + 1;
    log("JimmyLinks[" + i + "] => " + JimmyLinks[i-1].FirstName);
}

// Provide a filter this time.
log("Testing GetByLinks -- finding Judy");
Filter = "FirstName = 'Judy'";
CountOnly = false;
Judy = GetByLinks(Types, Filter, null, SocketDataItems);
if (Num == 1) {
    log("Found Judy! => " + Judy[0].FirstName + " " + Judy[0].LastName);
} else {
    log("Error: Didn't find Judy!");
}

Sending data
To send data to the socket server, you call AddDataItem and pass the name of a socket DSA data type
and a context that contains a set of name/value pairs.
When Netcool/Impact encounters the call to AddDataItem, it passes the data type name and the set of
name/value pairs to the socket DSA. The socket DSA sends these to the socket server. The socket server
then analyzes the request and uses the data in the name/value pairs to perform an operation such as
adding a new row to a database or sending a message to a messaging system.
The following example shows how to send data to the sample socket server distributed with the DSA. The
code that handles requests to send data is located in the UserDataInterface.pm file.

// Test AddDataItem
log("Testing AddDataItem -- Adding Hugh Example");
Type = "SocketData";
Jimmy = NewObject();
Jimmy.FirstName = "Hugh";
Jimmy.LastName = "Example";
Jimmy.Hobby = "Ducks";
ObjectToCopy = Jimmy;
AddDataItem(Type, ObjectToCopy);

Working with the sample socket server


The socket DSA provides a sample socket server written in Perl that you can use to test and explore the
function of the DSA. You can also customize the sample server and use it as part of your real world socket
DSA solution.
Note: The best practice is to become familiar with the sample socket server before you attempt to
implement a custom socket server using any other development tools. The sample socket server is the
best way to get started using the Socket DSA.

Setting up the sample socket server

Procedure
The DSA properties file contains settings for the host name and port of the socket server. This file is
named socketdsa.properties and is located in the $IMPACT_HOME/dsa/socketdsa directory. You
must make sure that the properties in this file reflect the actual location of the server.

Sample socket server components


The sample socket server consists of the following components:



• Server.pl, which contains the main server framework and the function required to communicate with
the socket DSA
• UserDataInterface.pm, which contains the data source-facing function of the sample server

Server.pl
Server.pl contains the main server framework and the function required to communicate with the
socket DSA. You can run Server.pl with version 5.8 or later of the Perl interpreter. You also require
Java 1.7 or later. The Server.pl script is designed to work as provided. No additional
customization is required. You can, however, rewrite this script to better suit your needs. To customize the
sample socket server, change the UserDataInterface.pm module.
Server.pl uses the Net::Server module to communicate across a network with the DSA.
Net::Server is a freely available Perl module that provides the core function required to build a server
that communicates with other applications using Internet protocols. Specifically, Server.pl requires the
following modules:
• Net::Server::PreFork
• Net::Server::Proto::TCP
• Switch
• Getopt::Long
For more information about Net::Server, see http://seamons.com/net_server.html.
Server.pl contains the Netcool/Impact-facing function of the sample server. To handle requests from
the socket DSA to return data from or add new data to a data source, it uses calls to functions defined in
the UserDataInterface.pm module.
At initialization, Server.pl binds to the port number specified by the $portnum variable. The default
port number is 22180.
After initialization, Server.pl waits for incoming messages from the socket DSA on the specified port.
The socket DSA initiates each message exchange by sending the string hi to the port where the server is
running. When the server receives the string, it replies with an identical hi message.
The server then waits to receive a request from the socket DSA. Each request starts with a message that
contains the name of the operation to perform. The operation names correspond directly to the function
names GetByFilter, GetByKey, GetByLinks, and AddDataItem. Server.pl responds to this initial
message by requesting additional information from the socket DSA based on the parameters that are
required to perform the operation. The parameters correspond to the parameters passed to the function
from within a Netcool/Impact policy.
For example, when brokering a request for the GetByFilter operation, Server.pl asks the socket DSA
for the name of the data type and the filter string. Server.pl assigns the contents of the replies from the
DSA to the $typename and $filter variables.
When Server.pl has received the parameters required by a particular operation, it calls the
corresponding function defined in UserDataInterface.pm and passes the parameter data that it
received from the socket DSA. UserDataInterface.pm assembles the result set for the request and
returns it to Server.pl, which in turn sends the results back to the DSA.
Server.pl sends the results back to the socket DSA as sets of name/value pairs, where each set
represents a data item and each name/value pair represents a data item field. The format of the results
is a series of messages, where each name and value is sent as a distinct message and an empty string is
sent to signify the end of a data item.

UserDataInterface.pm
UserDataInterface.pm is a Perl module that contains the data source-facing function of the sample
socket server. This module is responsible for acquiring the information requested by the socket DSA from
the underlying vendor software, device or system, and for passing on new information that originated with
Tivoli Netcool/Impact.



By default, UserDataInterface.pm uses sample data hard-coded into the Perl module file. This data
is suitable for use when learning about socket servers and when running the sample policies that are
distributed with the DSA. When you create a custom solution based on the sample socket server, you
modify UserDataInterface.pm so that it works with data sources specific to your environment.
UserDataInterface.pm contains one function for each of the operations supported by the socket
DSA. These functions are GetByFilter, GetByKey, GetByLinks, and AddDataItem. Server.pl calls
these functions when it brokers requests from the socket DSA. For example, when the socket server
receives a request from the DSA to perform a GetByFilter operation, it calls the GetByFilter function
defined in UserDataInterface.pm.
In the case of GetByFilter, GetByKey, and GetByLinks, UserDataInterface receives parameters
that specify the terms of the operation from Server.pl and then returns either a single hash (in the case
of GetByKey) or an array of hashes (in the case of GetByFilter and GetByLinks). In all cases, each
hash represents a single data item, where the name/value pairs that it contains represent data item fields.
In the case of AddDataItem, the function receives parameters that specify the contents of the new data
element and returns a single hash that represents the new data that has been passed to the vendor
software, device, or system.
Note that all these functions require Server.pl to pass the name of an underlying data type. In the
default functions provided with UserDataInterface.pm, the type name is used with select statements
to determine the appropriate operation to perform for each type of data. The data type name can also
be used in a more general way to specify different types of operations that you want the socket server to
perform that are not necessarily associated with underlying data sets.
When you customize the sample socket server, you rewrite one or more of these functions to either return
the appropriate sets of data or send new data to the third-party data source. You can use these functions
to specify any manner of operations, such as calls to Perl database drivers or calls to custom interfaces
that you have written to work with third-party systems.
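The following Perl fragment is a minimal sketch of the general shape of such a function. It is not the code
that is shipped in UserDataInterface.pm: the hard-coded records, the field names, and the filter handling
shown here are illustrative only, and a customized version would typically call a database driver or another
external interface instead.

sub GetByFilter {
    my ($typename, $filter) = @_;
    # Illustrative sample data; replace with calls to your own data source.
    my @items = (
        { KEY => 'Carl', FirstName => 'Carl', LastName => 'Example', Hobby => 'Chess'  },
        { KEY => 'Kate', FirstName => 'Kate', LastName => 'Example', Hobby => 'Hiking' },
    );
    # An empty filter returns every item.
    return @items if $filter eq '';
    # Otherwise, handle a simple filter of the form: FirstName = 'value'
    if ($filter =~ /FirstName\s*=\s*'([^']*)'/) {
        my $wanted = $1;
        return grep { $_->{FirstName} eq $wanted } @items;
    }
    return ();
}

Each element of the returned array is a hash reference whose name/value pairs become the fields of one
data item in the Netcool/Impact policy.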

Running the sample socket server

Procedure
1. Before you run Server.pl, you must modify the first line of the file so that it specifies the location on
the file system where Perl is installed. If you do not modify the first line, you must explicitly invoke the
Perl interpreter when you run the script.
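For example, if the Perl interpreter is installed as /usr/bin/perl (adjust the path for your own system),
the first line of Server.pl reads:

#!/usr/bin/perl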
2. You must also set the PERL5LIB environment variable so that it includes the directory where you
installed the sample server.
For example, if you installed the server in /usr/local/socketdsa/SocketDSAServer/
Server.pl, you can set this variable in bash or sh by entering the following command at a command
prompt:

PERL5LIB=/usr/local/socketdsa; export PERL5LIB

The directory that you specify must be two levels up from Server.pl.
3. To run Server.pl, enter the following command at a command line prompt:

Server.pl -port port_number

where port_number is the port where you want the sample socket server to run. If you do not specify
a port, the server uses 22180, which is the default.
You can also run Server.pl by explicitly invoking the Perl compiler as follows:

perl Server.pl -port port_number



Testing the socket server
The socket DSA provides a command line client that you can use to test the availability of socket servers,
including the sample server provided in the DSA tar file. You use this client to send messages to a socket
server using a simple command line input. The client is named TestClient and is located in the DSA jar
file.

About this task

Procedure
• To test the socket server, enter the following command at a command-line prompt on the system
where you are running Tivoli Netcool/Impact:
java -cp [$Impact_home]/wlp/usr/servers/NCI_A/apps/NCI_A.ear/nci.jar
com.micromuse.dsa.socketdsa.TestClient hostname port
Where hostname is the name of the system where the socket server is running and port is the port
number used by the server.
• To test the availability of a socket server, enter the following string at the command line:

hi

The test client sends this string to the socket server and prints the response. If the socket server is
running correctly, the response will be a hi string identical to the one sent from the command line.

What to do next
You can perform additional testing by entering additional strings at the command line, following the
command sequence documented in the code comments in Server.pl.

Implementing a custom socket server


If you do not want to use the sample socket server that is distributed with the DSA, you can implement
your own using any scripting or programming language that provides access to network sockets. These
languages include Java, C/C++, and Perl.
A custom socket server must perform the following tasks:
• Create a receiver socket and bind to a port
• Wait for DSA connections and create connection-specific sockets
• Perform handshaking with the DSA
• Listen for operation requests from the DSA
• Request the operation parameters from the DSA
• Perform the operations requested by the DSA
• Return operation results to the DSA

Creating a socket
At startup, the custom socket server must create a new socket and bind to the port that it will use for
communication with the socket DSA. You specify this port in the DSA properties file when you configure
the DSA, as described in “Configuring the socket DSA” on page 160.
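The following Perl fragment is a minimal sketch of this step that uses the core IO::Socket::INET module.
The port number is illustrative and must match the value of socketdsa.socketport; the sample Server.pl
instead delegates socket management to the Net::Server module.

use strict;
use warnings;
use IO::Socket::INET;

# Create a listening socket bound to the port that is configured for the socket DSA.
my $listener = IO::Socket::INET->new(
    LocalPort => 22180,     # must match socketdsa.socketport
    Proto     => 'tcp',
    Listen    => 5,         # backlog of pending connections
    ReuseAddr => 1,
) or die "Cannot bind to port 22180: $!";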



Waiting for DSA connections

Procedure
After the socket server creates the receiver socket, it must listen on the port for a connection from the
socket DSA. When a connection arrives, the server must create a new socket to use for communication
specific to that connection.

Performing handshaking with the DSA

Procedure
After the DSA establishes a connection, it sends the greeting string hi to the socket server. The socket
server must reply with its own identical hi message in order for handshaking to be complete.
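Continuing the previous sketch, the following fragment waits for a connection from the DSA and completes
the handshake. It assumes a simple newline-terminated message framing, which is an assumption of this
sketch; consult the sample Server.pl and its inline comments for the exact framing that the DSA expects.

# Wait for the socket DSA to connect, then answer its greeting.
my $conn = $listener->accept() or die "accept failed: $!";
my $greeting = <$conn>;
chomp $greeting if defined $greeting;
if (defined $greeting && $greeting eq 'hi') {
    print $conn "hi\n";    # reply with an identical greeting
}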

Listening for operation requests from the socket DSA


The socket DSA is capable of sending the following operation requests to the socket server:
• GetByFilter
• GetByKey
• GetByLinks
• AddDataItem
After the DSA and the server exchange handshaking messages, the DSA sends an operation request to
the server. The operation request is a message that consists of the name of the operation (for example,
AddDataItem). The socket server must accept this request and determine which tasks to perform based
on the contents of the message.
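A dispatch step along these lines might look like the following sketch; the handler subroutines are
placeholders for code that you provide.

# Read the operation name sent by the DSA and dispatch to a handler.
my $operation = <$conn>;
chomp $operation if defined $operation;

if    ($operation eq 'GetByFilter') { handle_get_by_filter($conn); }
elsif ($operation eq 'GetByKey')    { handle_get_by_key($conn); }
elsif ($operation eq 'GetByLinks')  { handle_get_by_links($conn); }
elsif ($operation eq 'AddDataItem') { handle_add_data_item($conn); }
else                                { warn "Unknown operation: $operation\n"; }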

Requesting operation parameters from the socket DSA


After the socket server has received the operation request from the socket DSA, it must request the
operation parameters from the DSA one at a time in a series of messages. The DSA replies by sending the
parameter values as specified in the call to GetByFilter, GetByKey, GetByLinks or AddDataItem in
the Netcool/Impact policy.
The following table shows the contents of the messages that the socket server must send to the socket
DSA to request the parameters for a GetByFilter operation.

Table 46. GetByFilter Operation Request Messages

Request Message Description

sendtype Requests the name of the data type associated with the operation. This
is the DataType parameter specified in the call to the GetByFilter
function in a Netcool/Impact policy. The DSA returns a string that
contains the data type name.

sendfilter Requests the filter string associated with the operation. This is the
Filter parameter specified in the call to the GetByFilter function
in a Netcool/Impact policy. The DSA returns a string that contains the
filter.

The following table shows the contents of the messages that the socket server must send to the socket
DSA to request the parameters for a GetByKey operation.



Table 47. GetByKey Operation Request Messages

Request Message Description

sendtype Requests the name of the data type associated with the operation.
This is the DataType parameter specified in the call to the GetByKey
function in a Netcool/Impact policy. The DSA returns a string that
contains the data type name.

sendkey Requests the key expression associated with the operation. This is the
Key parameter specified in the call to the GetByKey function in a Netcool/
Impact policy. The DSA returns a string that contains the key.

The following table shows the contents of the messages that the socket server must send to the socket
DSA to request the parameters for a GetByLinks operation.

Table 48. GetByLinks Operation Request Messages

Request Message Description

sendfromtype Requests the name of the source data type associated with the
operation. This is the data type of the first element in the DataItems
parameter specified in the call to the GetByLinks function in a
Netcool/Impact policy. The DSA returns a string that contains the data
type name.

sendfromkey Requests the key of the source data item associated with the operation.
This is the value of the key field of the first element in the DataItems
parameter specified in the call to the GetByLinks function in a Netcool/
Impact policy. The DSA returns a string that contains the key.

sendtotype Requests the name of the target data type associated with the
operation. This is the data type of the first element in the DataTypes
parameter specified in the call to the GetByLinks function in a
Netcool/Impact policy. The DSA returns a string that contains the data
type name.

sendfilter Requests the filter string associated with the operation. This is the
LinkFilter parameter specified in the call to the GetByLinks function
in the Netcool/Impact policy. The DSA returns a string that contains the
filter.

The following table shows the contents of the messages that the socket server must send to the socket
DSA to request the parameters for a AddDataItem operation. Note that AddDataItem returns a set of
name/value pairs to the Netcool/Impact server that represent the contents of the new data item added.

Table 49. AddDataItem Operation Request Messages

Request Message Description

sendtype Requests the name of the data type associated with the operation. This
is the DataType parameter specified in the call to the AddDataItem
function in a Netcool/Impact policy. The DSA returns a string that
contains the data type name.


sendattributes Requests the attributes of the data item associated with the operation.
These are a series of name/value pairs that represent the member
variables in the ContentToCopy parameter specified in the call to
AddDataItem. The DSA returns a series of names and values, each
of which is a separate string. The DSA indicates that there are no more
attributes in the data item by sending an empty string.
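For example, a handler for the GetByFilter operation might request its two parameters as shown in the
following sketch. As with the earlier fragments, the newline-terminated framing is an assumption made for
illustration; the sample Server.pl shows the exact exchange.

sub handle_get_by_filter {
    my ($conn) = @_;

    # Ask the DSA for the data type name and read the reply.
    print $conn "sendtype\n";
    my $typename = <$conn>;
    chomp $typename;

    # Ask the DSA for the filter string and read the reply.
    print $conn "sendfilter\n";
    my $filter = <$conn>;
    chomp $filter;

    # The type name and filter can now be passed to the data-source code,
    # for example a GetByFilter function like the one in UserDataInterface.pm.
    return ($typename, $filter);
}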

Performing operations requested by the DSA

Procedure
After the socket server requests the parameters from the socket DSA, it can perform operations to
retrieve data from or add data to the underlying software, device or system. For example, you can use the
information sent by the socket DSA to query an external database or to send a message on a message
system.

Returning operation results to the DSA

Procedure
After the socket server has performed the requested operation, it can return the results to the DSA. The
results must be returned as a series of messages that describe the contents of the data items resulting
from the operation. The first message in this series is a string that contains the number of data items that
will be returned. Following this are sets of messages that contain name/value pairs that represent data
item fields. The socket server indicates the end of each data item by sending a newline character.
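The following sketch shows this step for results that are held in an array of hash references, with each
name and value written as its own newline-terminated message and an empty message marking the end of each
data item. The framing is an assumption of this sketch; verify the exact format against the sample server.

sub send_results {
    my ($conn, @items) = @_;

    # First message: the number of data items that will be returned.
    print $conn scalar(@items), "\n";

    # Then send each field name and value as separate messages and mark
    # the end of each data item with an empty message.
    foreach my $item (@items) {
        foreach my $name (keys %$item) {
            print $conn "$name\n";
            print $conn "$item->{$name}\n";
        }
        print $conn "\n";
    }
}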
For more details, refer to the sample socket implementation and to the inline comments in the socket
server code.

Socket DSA and socket server connection state


The connection state between the socket DSA and a socket server is affected when either the DSA or the
server goes down during the communication process.
If the socket DSA goes down, the server will stay up. Any communication between the components is
terminated.
If the socket server goes down, the DSA sends messages to the server log that indicate that it cannot
connect to the server. The DSA will then try to reconnect one time before terminating the communication
process with the socket server.
When the socket server is brought back up, the DSA will automatically reconnect the next time it handles
a request for an operation from Netcool/Impact. If it cannot reconnect, it will send a message to the
server log indicating that it was not able to communicate with the socket server.
The socket DSA and sample socket server do not time out connections after a certain length of time.
You can extend the sample socket server to handle timeouts using information in the Net::Server
documentation.

Socket server threading


If you have configured Netcool/Impact to use a multi-threaded event processor, the best practice is to run
the socket server as a multi-threaded application.



The default behavior of Server.pl is to allow multiple threads based on UNIX forking. This function is
provided by the Net::Server module, which provides a flexible set of threading options that you can use
to adapt to your specific implementation.
Recent versions of Perl also provide additional options for managing application threading.
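For example, a server that is based on Net::Server::PreFork can control the size of its pool of pre-forked
worker processes through run-time options such as the following; the package name and option values are
illustrative, so check the Net::Server documentation for the options that apply to your version of the
module.

package MySocketServer;
use base 'Net::Server::PreFork';

# Start the server with an illustrative pool of pre-forked worker processes.
MySocketServer->run(
    port        => 22180,
    min_servers => 2,
    max_servers => 10,
);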



Appendix A. Notices

This information was developed for products and services offered in the U.S.A. IBM may not offer the
products, services, or features discussed in this document in other countries. Consult your local IBM
representative for information on the products and services currently available in your area. Any reference
to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this
document. The furnishing of this document does not give you any license to these patents. You can
send license inquiries, in writing, to:

IBM Director of Licensing


IBM Corporation
North Castle Drive
Armonk, NY 10504-1785 U.S.A.
For license inquiries regarding double-byte (DBCS) information, contact the IBM Intellectual Property
Department in your country or send inquiries, in writing, to:

Intellectual Property Licensing


Legal and Intellectual Property Law
IBM Japan Ltd.
1623-14, Shimotsuruma, Yamato-shi
Kanagawa 242-8502 Japan

The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law:
INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION "AS IS"
WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED
TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A
PARTICULAR PURPOSE.
Some states do not allow disclaimer of express or implied warranties in certain transactions, therefore,
this statement might not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically
made to the information herein; these changes will be incorporated in new editions of the publication.
IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this
publication at any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in
any manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of
the materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose of enabling: (i) the
exchange of information between independently created programs and other programs (including this
one) and (ii) the mutual use of the information which has been exchanged, should contact:

IBM Corporation
2Z4A/101
11400 Burnet Road
Austin, TX 78758 U.S.A.



Such information may be available, subject to appropriate terms and conditions, including in some cases
payment of a fee.
The licensed program described in this document and all licensed material available for it are provided by
IBM under terms of the IBM Customer Agreement, IBM International Program License Agreement or any
equivalent agreement between us.
Any performance data contained herein was determined in a controlled environment. Therefore, the
results obtained in other operating environments may vary significantly. Some measurements may have
been made on development-level systems and there is no guarantee that these measurements will be
the same on generally available systems. Furthermore, some measurement may have been estimated
through extrapolation. Actual results may vary. Users of this document should verify the applicable data
for their specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their
published announcements or other publicly available sources. IBM has not tested those products and
cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM
products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of
those products.
All statements regarding IBM's future direction or intent are subject to change or withdrawal without
notice, and represent goals and objectives only.
All IBM prices shown are IBM's suggested retail prices, are current and are subject to change without
notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to change before the
products described become available.
This information contains examples of data and reports used in daily business operations. To illustrate
them as completely as possible, the examples include the names of individuals, companies, brands, and
products. All of these names are fictitious and any similarity to the names and addresses used by an
actual business enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform
for which the sample programs are written. These examples have not been thoroughly tested under
all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
Each copy or any portion of these sample programs or any derivative work, must include a copyright
notice as follows:
© (your company name) (year). Portions of this code are derived from IBM Corp. Sample Programs. ©
Copyright IBM Corp. _enter the year or years_. All rights reserved.
If you are viewing this information softcopy, the photographs and color illustrations may not appear.

Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
“Copyright and trademark information” at www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered trademarks or
trademarks of Adobe Systems Incorporated in the United States, other countries, or both.



Java and all Java-based trademarks and logos are trademarks or registered trademarks
of Oracle and/or its affiliates.

Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other product and service names might be trademarks of IBM or other companies.



Index

A data type (continued)


table 129
accessibility x data type mapping 112
adding JDBC drivers 7 data types 22, 38
authentication definition of DSAs 1
with plain text password 73 directory names
notation xiv
DSA
B XML 111
books DSAs
see publications ix categories 3
definition 1
even readers 4
C event listeners 4
policies 4
calling WSSetDefaultPKGName 56
categories of DSAs 3
changing character set encoding 8 E
compiler script 44
compiling WSDL files 43 Editing the DSA properties file 153
compiling WSDL files in an Impact split installation 45 education x
configuration properties 104 element data types 112
configuring JDBC connection properties data source 18 Enabling and disabling proxy settings 45
configuring JDBC connection properties JDBC driver 18 encrypt messages 77
configuring JDBC connection properties overview 18 encryption 75, 78, 80
configuring JDBC connections environment variables
impact.jdbc.pool.simplifiedkey property 19 notation xiv
Connecting to WebSphere MQ. 97 even readers 4
conventions event listeners 4
typeface xiii ExtraInfo field 155
create data types scripts 113
creating a message properties context 95 F
Creating a socket 167
Creating an event listener service for the DSA 154 failback 14
creating message body string or context 93 failover
creating message properties context 91 configurations 14
creating RESTful DSA data sources 33, 35 customizing 16
creating UI data provider data sources 21 defaults 15
creating UI data provider data types 22 setting up 15
customer support xi standard 14
customizing fixes
failover 16 obtaining x
function
GetByFilter 156
D ReceiveJMSMessage 94
data items 38 SendJMSMessage 91
data model 3, 21, 33, 37 SnmpGetAction 138
data source 21, 33, 35 SnmpGetNextAction 142
data source adapter SNMPSetAction 146
ITNM 153 WSInvokeDL 50
JMS 85, 103 WSNewArray 48
data sources WSNewEnum 55
JMS 86 WSNewObject 47
Kafka 103, 104 WSNewSubObject 48
LDAP 37 WSSetDefaultPKGName 46
SQL database 10 functions 46
data type

G M
GetByFilter 156 making requests 36
GetByFilter output parameters 23 manuals
see publications ix
Mediator DSAs 3
H message body string or context 93
handing incoming messages from a JMS message listener 96 message integrity 74
handling a retrieved message 95 message properties context 91

I N
integration with third party Web services 69 non-repudiation 74
ITNM DSA data type 155 notation
ITNMSampleListenerPolicy 158 environment variables xiv
ITNMSamplePolicy 158 path names xiv
typeface xiv
nternational character support 41
J
JMS O
data source 86
JMS data source 88 obtaining WSDL files 44
JMS DSA online publications
creating a message properties context 95 accessing ix
creating message body string or context 93 OpenJMS 86
creating message properties context 91 Oracle
handing incoming messages from a JMS message enabling Kerberos authentication 9
listener 96 ordering publications x
handling a retrieved message 95
overview 85 P
retrieving JMS messages from a topic or queue 94
sending messages to JMS topic or queue 91 path names
setting up OpenJMS 86 notation xiv
setting up the JMS DSA 85 Performing handshaking with the DSA 168
writing JMS DSA policies 90 Performing operations requested by the DSA 170
JMS DSA policies plain text password 73
writing 90 policies
JNDI properties 88 sample 66
using editor 69
using wizard 67
K Policy Variables 157
Kafka problem determination and resolution xii
data source 103, 104 process 58
Kafka DSA proxy server RESTful DSA data sources 35
writing Kafka DSA policies 109 proxy server settings 35
Kafka DSA policies publications
writing 109 accessing online ix
ordering x

L
R
LDAP data sources
creating 37 ReceiveJMSMessage 94
LDAP DSA Requesting operation parameters from the socket DSA 168
data items 38 RESTful API 33
data model 37 RESTful data model DSA 33
data types 38 RESTful DSA 33
international character support 41 RESTful DSA data model 33
overview 37 RESTful DSA data source 33, 35, 36
policies 39 retrieving data 25, 29
referrals 40 retrieving JMS messages from a topic or queue 94
retrieving data 39, 40 Retrieving packed OID data with SNMP functions 134
supported LDAP servers 37 Returning operation results to the DSA 170
Listening for operation requests from the socket DSA 168 run Policy 63



run Policy Response 65 Software Support
Running the sample socket server 166 contacting xi
runtime parameters 60 overview x
receiving weekly updates xi
SQL database DSA 7, 8, 18, 19
S SQL database DSAs
Sample policies 158 adding data 12
sending messages calling database functions 13
to JMS topic or queue 91 calling stored procedures 14
SendJMSMessage 91 customizing failover 16
SendKafkaMessage 109 data items 10
service data model 9
JMS message listener 89 data types 10
Kafka message listener 107, 108 deleting data 13
service, failback 14
Kafka message listener 107, 108 failover 14
sample policy 108 failover configurations 14
setting up failover defaults 15
failover 15 list of provided 5
JMS DSA 85 modifying data 12
Web services listener 58 policies 11
Setting up the DSA 153 retrieving data 11
Setting up the sample socket server 164 setting up failover 15
setting useJDBC4ColumnNameAndLabelSemantics 9 standard failover 14
sign messages 77 SSL 67
SNMP DSA super data types 111
data model 123 supported LDAP servers 37
data sources
creating 126 T
deleting 127
editing 127 Testing the socket server 167
data types Tivoli Information Center ix
creating 128 Tivoli technical training x
deleting 130 training
editing 129 Tivoli technical x
functions 137 typeface conventions xiii
policies 130
retrieving packed OID data 133
retrieving table data from SNMP agents 135
U
sending traps and notifications 136 UI data provider data model 21
setting packed OID data with SNMP functions 133 UI data provider data source 21
setting packed OID data with standard data-handling UI data provider data type 22, 23
functions 130 UI data provider data types 22
traversing SNMP trees 135 UI data provider DSA 21, 22
SNMPGetAction 138 UI Data Provider server cache 29
SnmpGetNextAction 142 uidataprovider data source 25, 29
SNMPSetAction 146
SNMPTrapAction 150
SOAP endpoints 62 V
socket DSA
variables
configuring 160
notation for xiv
custom socket servers 167
data model 160
data source 160 W
data types 160
policies 161 Waiting for DSA connections 168
retrieving data by filter 161 Web Service Listener 67
retrieving data by key 162 Web services DSA
retrieving data by link 163 calling WSSetDefaultPKGName 56
sample socket server 164 compiling WSDL files 43
sending data 164 compiling WSDL files in an Impact split installation 45
Socket DSA and socket server connection state 170 creating policies using editor 69
socket server 159 creating policies using wizard 67
Socket server threading 170 examples 56

Web services DSA (continued) XML XSD files 111
functions 46
integration with third party Web services 69
obtaining WSDL files 44
overview 43
policies 55
running the WSDL compiler script 44
sample client 66
sample policies 66
sending messages 55
SOAP endpoints 62
Web services listener 58, 60
WSDL file 63, 65
WSListenerResult 61
Web services listener
process 58
runtime parameters 60
setting up 58
writing policies 60
Web services security
enabling 71
encryption 75, 78, 80
message integrity and non-repudiation with signature
74
sign and encrypt messages 77
user name token authentication 72, 73
WebSphere MQ 97
writing
Web services listener policies 60
Writing policies to receive events from ITNM 157
Writing policies using the ITNM DSA 156
WSDL 59
WSDL file
message 63, 65
WSDL files 45
WSInvokeDL 50
WSListenerException 65
WSListenerResult 61
WSListenerResult variable 61
WSNewArray 48
WSNewEnum 55
WSNewObject 47
WSNewSubObject 48
WSSetDefaultPKGName 46

X
XML configuration files 112
XML documents 111
XML DSA
create data types scripts 113
data type mapping 112
overview 111
reading XML data 117
sample policies 119
XML configuration files 112
XML data types
creating 113
setting up mappings 114
XML documents 111
XML DTD files 111
XML mapping 112
XML XSD files 111
XML DTD files 111



IBM®
