Paul Fearn, Arne Olsson, Larry Bajuk, David Edwards, Peter Glasmacher, Gareth Holl, Istvan Szarka
SG24-5285-00
International Technical Support Organization Integrated Management Solutions Using NetView Version 5.1 February 1999
SG24-5285-00
Take Note! Before using this information and the product it supports, be sure to read the general information in Appendix D, Special Notices on page 391.
First Edition (February 1999)

This edition applies to Tivoli NetView 5.1 for use with the AIX and Windows NT operating systems. Tivoli Framework 3.6, Tivoli Inventory 3.6, Tivoli TEC 3.6, Tivoli Distributed Monitoring 3.6, Tivoli Service Desk 5.0, Tivoli Manager for Network Connectivity 1.0 and Tivoli Maestro 6.0 were used in the integration examples in this redbook.

Comments may be addressed to:
IBM Corporation, International Technical Support Organization
Dept. OSJB Building 045 Internal Zip 2834
11400 Burnet Road
Austin, Texas 78758-3493

When you send information to IBM, you grant IBM a non-exclusive right to use or distribute the information in any way it believes appropriate without incurring any obligation to you.

Copyright International Business Machines Corporation 1999. All rights reserved.

Note to U.S. Government Users - Documentation related to restricted rights - Use, duplication or disclosure is subject to restrictions set forth in GSA ADP Schedule Contract with IBM Corp.
Contents
Preface . . . ix
The Team That Wrote This Redbook . . . ix
Comments Welcome . . . x

Chapter 1. Introduction . . . 1
1.1 NetView 5.1 . . . 1
1.2 Tivoli Integration Pack for NetView (TIPN) . . . 2
1.3 Tivoli Enterprise Console (TEC) . . . 3
1.4 Tivoli Distributed Monitoring . . . 3
1.5 Tivoli Inventory . . . 4
1.6 Tivoli Service Desk (TSD) . . . 4
1.7 Tivoli Manager for Network Connectivity . . . 4
1.8 Tivoli Maestro . . . 5
1.9 Our Environment . . . 5
1.9.1 Installing the Environment . . . 7
1.9.2 Relational Databases and NetView . . . 8
1.9.3 Installing Sybase for the RIM . . . 8
1.9.4 Installing TEC 3.6 . . . 8
1.9.5 Installing Inventory 3.6 . . . 9
1.10 Other Applications . . . 11

Chapter 2. Installation of NetView 5.1 for AIX . . . 13
2.1 Installation Prerequisites . . . 13
2.1.1 Installation Process . . . 13
2.2 Customizing NetView for AIX . . . 19
2.2.1 Discovery of Our Network . . . 19
2.2.2 Polling and SNMP Configuration . . . 27
2.3 Discovery Issues . . . 29
2.3.1 Regulating NetView Traffic . . . 29
2.3.2 Regulating netmon Broadcast Traffic . . . 30
2.4 Configuring and Using the NetView Web Interface for AIX . . . 30
2.4.1 Starting the Web Client . . . 31
2.4.2 Diagnostics . . . 32
2.4.3 Submaps . . . 37
2.4.4 MIB Browser . . . 42
2.4.5 NetView Processes . . . 43
2.4.6 Events . . . 44
2.4.7 NetView Web Interface Online Help . . . 46
2.4.8 NetView Web Server Status . . . 47
2.4.9 NetView Web Interface Problems . . . 48

Chapter 3. Installing NetView 5.1 for NT . . . 51
3.1 NT Configuration . . . 51
3.1.1 Creating the NetView Account Manually . . . 51
3.1.2 NetView Share . . . 52
3.1.3 IP Name Resolution . . . 52
3.1.4 SQL Databases . . . 53
3.2 Prerequisites . . . 54
3.2.1 Hardware . . . 54
3.2.2 Software . . . 54
3.3 Setting Up NetView for NT . . . 55
3.3.1 System Configuration . . . 55
3.3.2 Installation Process . . . 55
3.3.3 Adding NetView Clients . . . 57
3.3.4 Client/Server Relationship . . . 59
3.4 Post Installation Setup . . . 60
3.4.1 Database Setup for Data Collection . . . 61
3.4.2 Discovery of Our Network . . . 62
3.4.3 Setting the Polling Parameters . . . 65
3.4.4 Customizing Maps . . . 67
3.4.5 Diagnosing IP Discovery Problems . . . 67
3.5 Configuring and Using the NetView Web Interface for Windows NT . . . 68
3.5.1 Enabling the NetView Web Interface . . . 68
3.5.2 NetView Web Interface Security . . . 69
3.5.3 Starting a Web Client . . . 70
3.5.4 Submaps . . . 73
3.5.5 SmartSets . . . 81
3.5.6 NetView Event Browser . . . 83
3.5.7 SNMP Collection Browser . . . 86
3.5.8 NetView Object Database Browser Application . . . 88
3.5.9 Diagnostic Applications . . . 89
3.5.10 Remote Execution . . . 90
3.6 New NetView Features . . . 91
3.6.1 Subnet Explorer . . . 91
3.6.2 Service Status Monitoring and SmartSets . . . 93
3.6.3 Event Correlation and Event Browser . . . 93
3.6.4 Online Help . . . 94
Chapter 4. Examples Using NetView Version 5.1 . . . 97
4.1 NetView Version 5.1 - New Features . . . 97
4.1.1 Submap Sorting and Status Filtering . . . 97
4.1.2 The nvsniffer Command . . . 101
4.1.3 The nvwakeup Command . . . 106
4.1.4 The nvdbformat Command . . . 108
4.1.5 The nvdbimport Command . . . 111
4.1.6 The chmod_web_ovw Command . . . 114
4.2 Event Management with NetView . . . 114
4.2.1 Information about Events . . . 114
4.2.2 Event Flow for NetView on AIX . . . 115
4.2.3 Displaying Events . . . 119
4.2.4 Filtering Events . . . 123
4.2.5 Event Correlation . . . 144
4.2.6 Environment Variables . . . 149
4.2.7 Rule Example 1: Existing Interface Up and Interface Down Rule . . . 149
4.2.8 Rule Example 2: Logging Interface Down after Correlation . . . 153
4.3 Network Performance Issues . . . 159
4.3.1 Monitoring Performance and Traffic . . . 160

Chapter 5. NetView and the Mid-Level Manager (MLM) . . . 169
5.1 NetView Hierarchy . . . 169
5.2 Our NetView and MLM Structure . . . 169
5.3 The Central NetView Server . . . 170
5.3.1 Setting the Trap Port . . . 171
5.3.2 Setting the MLM Trap Destination . . . 172
5.3.3 Defining the MLMs Role . . . 172
5.4 NetView MLMs . . . 173
5.4.1 MLM on AIX . . . 173
5.4.2 MLM on Windows NT . . . 178
5.4.3 SNMP/Community Configuration . . . 179
5.5 MLM Configuration . . . 181
5.5.1 Consideration . . . 181
5.5.2 MLM Configuration Application . . . 182
5.5.3 Filtering Traps . . . 189
5.6 Testing the SNMP Configuration . . . 190

Chapter 6. Tivoli Integration Pack for NetView (TIPN) . . . 193
6.1 TIPN Installation Prerequisites . . . 193
6.1.1 Installing the 3.2 Super Patch . . . 194
6.1.2 TIPN Patch Installation . . . 194
6.2 Installing TIPN Components . . . 196
6.2.1 Installing the Network Diagnostic Components . . . 198
6.2.2 Testing the Framework Network Diagnostics . . . 198
6.2.3 Installing the Network Diagnostics for NetView Server . . . 199
6.2.4 Installing Tivoli NetView/TEC Integration for NetView Server . . . 201
6.2.5 Tivoli NetView/Inventory Integration for the NetView Server . . . 203
6.3 Using the TIPN Components . . . 207
6.3.1 Extending the Tivoli Menu . . . 208
6.3.2 Using the Network Diagnostics for NetView Component . . . 211
6.3.3 Using the Tivoli Reports Menu . . . 214
6.3.4 Using the NetView/TEC Integration Component . . . 218
6.3.5 Building New Event Sources Based On Collections . . . 226
6.3.6 Using the NetView/Inventory Integration Component . . . 231

Chapter 7. Integration with the Tivoli Framework . . . 239
7.1 The Scenario . . . 239
7.1.1 Collection Based on Profile Manager Information . . . 240
7.1.2 Status Filtering and Data Collection . . . 240
7.1.3 Setting Up a Procedure . . . 241
7.2 Example 1 - Creating the Profile Manager Structure in NetView . . . 241
7.2.1 Creating a Collection on NetView for AIX . . . 242
7.2.2 Using the MirrorPM Script . . . 242
7.3 Example 2 - Automating Profile Distribution . . . 246
7.3.1 Using the Easy Script . . . 248

Chapter 8. Integration with Distributed Monitoring and the TEC . . . 255
8.1 The Systems Management and Network Management Environment . . . 255
8.1.1 Software Configuration . . . 256
8.1.2 Design Considerations for a Client/Server Application . . . 257
8.1.3 Building the Monitor Profiles . . . 259
8.1.4 Preparing NetView . . . 261
8.1.5 Preparing TEC . . . 262
8.1.6 Building the Rules . . . 264
8.2 Systems Management Client/Server Scenario . . . 273
8.3 Network and Systems Management Integration Scenario . . . 276
8.3.1 Opening a NetView Submap from TEC . . . 279

Chapter 9. Integration with Network Connectivity . . . 281
9.1 Integration Overview . . . 281
9.2 Installation, Startup and Configuration of TFNC . . . 281
9.2.1 Prerequisites . . . 281
9.2.2 Installation . . . 282
9.2.3 Startup . . . 285
9.2.4 Problems Encountered . . . 285
9.2.5 Configuration Files . . . 286
9.3 Using TFNC for Network Correlation . . . 289
9.3.1 Example Scenario . . . 289
9.3.2 TFNC Compared to nvserverd Forwarding . . . 291
9.3.3 Performance of TFNC Root Cause Analysis . . . 297
9.3.4 Example of Automated Action for Root Cause Event . . . 298
9.3.5 Application Management with TFNC . . . 310
9.4 Conclusion . . . 311

Chapter 10. Integration with Tivoli Service Desk . . . 313
10.1 Installation and Configuration . . . 313
10.1.1 Installation . . . 313
10.1.2 Configuration Steps . . . 318
10.1.3 Configuration of Node-Specific Options . . . 323
10.2 Example of Automated Problem Management . . . 325
10.2.1 New Problem Opened and Closed . . . 326
10.2.2 New Problem Diagnosis Using Preview . . . 328
10.2.3 Multiple Calls for the Same Problem Instance . . . 331
10.2.4 Reopened Problem: New Calls for Recurring Problem . . . 331
10.2.5 Observations on Problem Correlation (Auto Close) . . . 333
10.3 NetView Integration . . . 334
10.3.1 List of Options . . . 334
10.3.2 All Problems Option . . . 335
10.4 Conclusion . . . 337

Chapter 11. Integration with Tivoli Maestro . . . 339
11.1 What the Integration Provides . . . 339
11.2 Example Scenario for Maestro . . . 340
11.3 Installation of Maestro/NV . . . 341
11.3.1 Installing Maestro Integration on the NetView Server . . . 342
11.3.2 Installation on Maestro Managed Nodes . . . 345
11.3.3 Maestro/NV Setup . . . 346
11.4 Using Maestro/NV to Manage the Example Job Network . . . 351
11.4.1 Discovery of the Maestro Network . . . 351
11.4.2 Discovery of Maestro Application Nodes . . . 353
11.4.3 Maestro Process Condition Notification . . . 355
11.4.4 Accessing the Maestro Console from NetView . . . 358
11.4.5 Monitoring Job Schedules from NetView . . . 360
Appendix A. Files Used in the Framework Integration Examples . . . 365
A.1 Example 1 - Creating the Profile Manager Structure in NetView . . . 365
A.1.1 MirrorPM.pl . . . 365
A.1.2 DeletePM.pl . . . 367
A.1.3 ITSO.reg - Menu Registration File . . . 367
A.2 Example 2 - Automating Profile Distribution . . . 368
A.2.1 Easy.pl . . . 368
A.2.2 unselect.pl . . . 374
A.2.3 BusinessSet3.txt . . . 375
A.2.4 Endpoint.format . . . 375
A.2.5 Deadnode.format . . . 376
A.2.6 Endpoint.conf . . . 376

Appendix B. TIPN Tables, Views and Queries . . . 377
B.1 TIPN Tables . . . 377
B.2 TIPN Views . . . 379

Appendix C. Files Used in the Network Connectivity Examples . . . 381
C.1 Class Definitions . . . 381
C.1.1 ipfm.baroc . . . 381
C.1.2 itso.baroc . . . 383
C.2 Ruleset Definitions . . . 384
C.2.1 ipfm.rls . . . 384
C.2.2 itso.rls . . . 386
C.3 Custom Scripts . . . 387
C.3.1 wzlogprob Script . . . 387
C.3.2 wlclrprob Script . . . 389
C.4 Compressed ovstatus Listing . . . 389
C.4.1 wzovstatus Script . . . 390
C.4.2 wzovstatus Output . . . 390

Appendix D. Special Notices . . . 391

Appendix E. Related Publications . . . 393
E.1 International Technical Support Organization Publications . . . 393
E.2 Redbooks on CD-ROMs . . . 393
E.3 Other Publications . . . 393

How to Get ITSO Redbooks . . . 395
How IBM Employees Can Get ITSO Redbooks . . . 395
How Customers Can Get ITSO Redbooks . . . 396
IBM Redbook Order Form . . . 397

List of Abbreviations . . . 399

Index . . . 401

ITSO Redbook Evaluation . . . 405
Preface
This redbook will help you install and configure Tivoli NetView V5.1 on both the AIX and Windows NT platforms. Using examples, it shows the base NetView functions and how they can be used for areas such as network monitoring and device and event management. This redbook also contains examples of integration between Tivoli NetView and other Tivoli products such as Tivoli Framework, Tivoli Inventory, Tivoli TEC, Tivoli Manager for Network Connectivity, Tivoli Distributed Monitoring, Tivoli Service Desk and Tivoli Maestro. It will help you tailor solutions where network and systems management applications work together to solve customer business problems. This book provides a valuable addition to the product documentation when implementing a solution and a good reference for I/T architects designing network and systems management solutions. A basic understanding of network and systems management functions is assumed.

management. His areas of expertise include technical consulting on the Tivoli availability set of products, particularly the TEC.

Peter Glasmacher is an IT Architect in Dortmund, Germany. He holds a degree in communications engineering. He joined IBM Germany in 1973. He has worked in various positions including hardware and software support, software development and services covering multiple OS platforms and networking architectures. He is currently a member of Systems Management and Networking Services, a branch of IBM Global Services. He has more than 12 years of experience in the network and systems management arena. His areas of expertise include architectural and design work for systems management, network design and security consulting. He has designed and implemented complex IP networks and systems management solutions in large customer environments.

Gareth Holl is a Tivoli Services Specialist in Sydney, Australia. He holds a degree in Computer Engineering from The University of Wollongong and is currently studying for a Masters in Commercial Law at Macquarie University. Gareth joined IBM two years ago supporting products such as TCP/IP, VTAM and SNA on the MVS platform. He is now focused on system and network management software, including the Tivoli range of products and Nways Campus Managers. Tivoli NetView on NT and AIX is Gareth's primary area of expertise at present.

Istvan Szarka is a Systems Management Specialist with the IBM Global Services Systems Management and Networking Services unit in Budapest, Hungary. He holds a degree in Electrical Engineering from the Technical University of Budapest. He joined IBM Hungary in 1989. He has worked in various positions including hardware and software support and has three years of experience in the systems management field. His areas of expertise include architecture, design and consulting for the Tivoli product line, IBM Nways products and Remedy help desk solutions. Istvan has implemented systems management products in a number of large-scale customer environments.

Thanks to the following people for their invaluable contributions to this project:

Stefan Uelpenich
ITSO, Austin

Jeffrey Snover, Christina Zabeu, Gerry Roy, Michael Ayotte
Tivoli Systems, Austin

George deSocia, Scott Donohoo, Christine Nutt
Tivoli Systems, Raleigh

Joe Steinfeld
Tivoli Systems, Indianapolis
Comments Welcome
Your comments are important to us! We want our redbooks to be as helpful as possible. Please send us your comments about this or other redbooks in one of the following ways:
Fax the evaluation form found in ITSO Redbook Evaluation on page 405 to the fax number shown on the form.
Use the electronic evaluation form found on the Redbooks Web sites:
For Internet users: http://www.redbooks.ibm.com
For IBM Intranet users: http://w3.itso.ibm.com
Chapter 1. Introduction
The purpose of this redbook is to provide examples of how to implement various networking and systems management solutions using Tivoli NetView and the Tivoli core applications. The main focus is on the transition from traditional network management solutions to combined network and systems management solutions. We have divided the book into two sections, as follows:

Section 1 includes:
An introduction to NetView 5.1 and a discussion of the new features
How to install and customize Tivoli NetView for AIX and Windows NT
Examples of using Tivoli NetView for network management

Section 2 includes:
How to install, customize and use the Tivoli Integration Pack for NetView (TIPN) to show the integration of NetView with the Tivoli Framework, Inventory and TEC.
How to integrate NetView with Tivoli Service Desk (TSD) to provide automated creation and deletion of problem management records from NetView.
How to install and use the Tivoli Manager for Network Connectivity (TFNC) application to provide event correlation for a network.
How to integrate NetView with both the TEC and Distributed Monitoring to show how to manage both network devices and applications running on a server.
How to manage the Maestro application from NetView.
How to perform systems management tasks using NetView by displaying the availability of the nodes that will be the target nodes for the distribution of Tivoli profiles.

The following section provides a brief overview of the applications used in this redbook. Additional product-specific information can be found at:

http://www.tivoli.com

The list of management products is as follows:

Tivoli NetView Version 5.1 for AIX and NT
Tivoli NetView Mid-Level Manager (MLM)
Tivoli Integration Pack for NetView (TIPN)
Tivoli Enterprise Console (TEC)
Tivoli Distributed Monitoring
Tivoli Inventory
Tivoli Manager for Network Connectivity (TFNC)
Tivoli Service Desk (TSD)
Tivoli Maestro
The following sections briefly discuss the applications and how we used them for the examples shown in this publication.
the ability to provide both network management and systems management as it becomes more tightly integrated with Tivoli's other management applications. This integration is examined within this redbook to some degree. Combined with its ability to easily effect changes on many devices, a global support infrastructure, and the backing of hundreds of third-party vendors, Tivoli's NetView has become an important tool for the management of complex networks. Further, Tivoli NetView not only enables you to manage your network, it also positions you for planned growth with a complete systems management solution, which is limited only by your imagination.

The new Java-based Web browser interface to Tivoli NetView makes it easy to view dynamically updated network information. Information on node status, object collections, events, object information, NetView process status, and diagnostics (ping, traceroute, demand poll, non-graphical generated MIB applications and MIB browser) is available to anyone with a Web browser and the appropriate security authorization. The topology maps are the same as the NetView GUI on the server.

The latest version of NetView has added new or enhanced functionality in areas such as:

SmartSets/Collections and data collection
Event management, including the browser and correlation functions
Web interface
Attended MLMs

There are other new features that take advantage of a particular operating system, such as the Submap Explorer in NetView for Windows NT.

Integration with Tivoli Inventory
Integration with TEC

With topology integration you can display Tivoli resources, such as managed nodes, on the NetView map and also access functions for these resources from the menus associated with the icons on the map. You can also start the Tivoli desktop directly from the NetView map with a single mouse click.

The integration with Tivoli Inventory is perhaps the most valuable part of TIPN. It makes certain fields from the Tivoli NetView database available in the Tivoli Inventory schema and therefore allows you to access that information from Tivoli applications.

The integration with TEC allows the events from the console to be available in the NetView event display and therefore makes it possible to use NetView event correlation on TEC events. This integration works in exactly the opposite direction from the TEC adapter for NetView, which sends events from NetView to TEC.
Automatic preventive and corrective actions
Over 1500 predefined monitors and responses available out of the box to monitor and control your key computing resources.
No maintenance required. It automatically adapts to changes in network topology. It is very fast and very accurate, even if some events are dropped by the network management station. Tivoli Manager for Network Connectivity collects real-time network topology information and alarm notifications from Tivoli NetView. It relies on the fact that every problem in a networked system generates a particular set of symptoms. Tivoli Manager for Network Connectivity combines these characteristic symptoms with topology information to create a table of unique, problem-identifying codes. The incoming alarm notifications are quickly compared with these codes, enabling fast root-cause determination. This automated analysis reduces the time normally required to identify the cause of network downtime from hours to minutes because it eliminates the need to manually search for the cause of a network failure.
In addition we have included a brief summary of how we set up some of the more important components of our environment. This section can also be used as a checklist to make sure you have all components installed that are needed to show the examples described in the remaining chapters. Figure 1 shows our initial configuration.
Figure 1. Our Initial Environment (network diagram showing rs60008 (9.24.104.30, AIX 4.2), RS600015 (9.24.104.215, AIX 4.3), RS600028 (9.24.104.4, AIX 4.2), WTR05097 (9.24.104.211, NT 4.0), WTR05073 (9.24.104.154, NT 4.0) and WTR05246 (9.24.106.162, NT 4.0), running various combinations of Framework 3.6 as managed nodes, gateway and endpoint, the NetView 5.1 server and client, Inventory 3.6, the TEC 3.6 server and the Sybase RIM host)
Table 1 lists the management software we used and the devices the applications were installed on.
Table 1. List of Software Applications
Application               Server
NetView for AIX           rs600015
NetView for NT            wtr05097
NetView for NT client     wtr05073
Tivoli Framework          rs60008, rs600015, rs600028, hebble, wtr05097, wtr05246, wtr05073
Inventory                 rs600028
Gateway                   hebble
TEC                       rs600028
Sybase                    rs600028
Mid-Level Manager         rs600015
TIPN                      rs60008, rs600015, rs600028
Distributed Monitoring    Several machines as required for the scenarios
Tivoli Service Desk       Existing ITSO TSD environment
Maestro                   rs600033t
Extensions to our environment were required to accommodate the additional Tivoli software and to allow the operation of several scenarios in parallel. New diagrams are provided in the appropriate chapters, which show which machines contain the software. For some of the examples contained in this redbook we used a second NetView server machine called rs600033t. This contained the same installed software and configuration as the rs600015 server.
We installed NetView NT Version 5.1 on both hosts wtr05097 and wtr05073. This could only be done from each NT machine. That is, it is not possible yet to create a NetView server or client on an NT box from the Framework desktop. For installation instructions for each of these applications please see the relevant documentation (including the Release Notes).
We entered the values as shown in Figure 2, indicating that our Sybase server is installed in the /usr/local/sybase directory. The database server ID is SYBASE, as this is the default used during the Sybase installation. The other values for the database user ID and database ID are the defaults used by TEC. We installed the TEC server as well as the event console component on rs600028.

Once the installation of TEC was completed, we executed the script for the creation of the TEC database, which is called cr_tec_db.sh and located in the directory $BINDIR/TME/TEC/sql. When the script asked us where to create the database, we specified that we wanted to create the database on the master device file with the default size of 50MB. Once the script has completed, you can start the event server by typing:

wstartesvr
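As a convenience, the sequence can also be summarized as a short command sketch. This is only an outline of the steps described above; sourcing setup_env.sh and the wstatesvr status check are our own assumptions and are not part of the documented procedure.

. /etc/Tivoli/setup_env.sh       # assumed: source the Tivoli environment to set $BINDIR
cd $BINDIR/TME/TEC/sql
./cr_tec_db.sh                   # prompts for the device; we used the master device with the 50MB default
wstartesvr                       # start the event server
wstatesvr                        # assumed: confirm that the event server is running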
We installed Tivoli Inventory on rs600028, which is also the RDBMS server, so we left the RIM Host field with the default of ALI_host. To verify which machine is the RIM host for Inventory, issue the following command:

wgetrim inventory

Our output is shown below:

RIM Host:
RDBMS User:
RDBMS Vendor:
Database ID:
Database Home:
Server ID:
Instance Home:
The Database ID field and the RDBMS User are set to the default (inventory and tivoli respectively). These are the values that are used by the database creation scripts for Tivoli Inventory. For the database home directory we entered /usr/local/sybase, which is the directory where Sybase is installed. Finally the Server ID was set to SYBASE. We installed the Tivoli Inventory gateway component on hebble, which is the node set up to be the endpoint gateway in our environment. After the installation was completed we ran the database creation scripts for Inventory which are located in $BINDIR/TME/INVENTORY/SCRIPTS/RDBMS.
To run the scripts you must use the isql command for Sybase. This is shown below:

isql -U sa -i tivoli_syb_admin.sql
isql -U tivoli -P tivoli -i tivoli_syb_schema.sql

Next we created the Tivoli Inventory default queries using the script located in $BINDIR/TME/INVENTORY/SCRIPTS/QUERIES. To do this issue the command:

./inventory_queries.sh rs60008-region Inventory

A new query library called Inventory is created in the rs60008-region policy region.
Tivoli Service Desk and NetView can be integrated in such a way that the creation of a problem record (TSD problem ticket) is automated when NetView detects a predefined event in the network. For an example of TSD in action, please see Chapter 10, Integration with Tivoli Service Desk on page 313.

Tivoli Maestro
Maestro can be integrated with NetView to provide systems management of the Maestro application. NetView nodes within a Maestro-specific submap can be used to represent the status of scheduled Maestro jobs and/or Maestro processes. For a Maestro integration scenario, please see Chapter 11, Integration with Tivoli Maestro on page 339.
4. Create the NetView server.
5. Verify the installation.

Tivoli NetView for AIX is designed to be installed and run from within the Tivoli Desktop. The Framework must be installed and configured prior to installing NetView. Please see the release notes for more details.

2.1.1.1 Back Up the Tivoli Database
As with any Tivoli installation, the object database should be backed up prior to any new software being installed. This is done from the Tivoli desktop by selecting Desktop->Backup or by issuing the command wbkupdb. The release notes also suggest backing up the entire file system on the node that is to be the NetView server.

2.1.1.2 Creating a Managed Node
Prior to installing the NetView server, the NetView server node must be defined to Tivoli as a managed node. This allows NetView to use the Framework shared services and object database. The node rs600015 was created as a managed node in the rs60008-region using the standard Create -> ManagedNode menu items. See the TME 10 Framework 3.6 User's Guide, GC31-8433 for further details.

2.1.1.3 Install the NetView Framework Patch
The NetView Framework patch called NVTMP311 adds the NetViewServer and NetViewClient resources to the Framework. These allow the server and client products to be created from the Tivoli desktop. The Framework patch should be installed on both the TMR server, where the root user's desktop is executed, and the NetView server node. The patch was installed by selecting Install -> Install Patch from the Desktop pull-down menu on the Tivoli desktop.
We selected TME 10 NetView Framework Patch - 5.1. The clients we installed were rs600015, our NetView server, and rs60008, the TMR server. To complete the installation of the patch, Tivoli must be stopped and restarted. To do this we closed the desktop and issued the command odadmin reexec all.

2.1.1.4 Create the NetView Server
Prior to creating a NetView server or client, the NetViewServer and NetViewClient resources must be allocated to the policy region where the server and clients will reside. To do this we selected the rs60008-region policy region with the right mouse button and selected the Managed Resources menu item. This started the Set Managed Resources dialog box.
In this dialog box the new resources, NetViewServer and NetViewClient, were moved to the left side, making them managed by the rs60008-region, and the dialog was closed using the Set and Close button. To create the server in the policy region we selected Create->NetViewServer (see Figure 7).

The product to install was NetView Server 5.1 on rs600015. Once the installation completed, the NetView Server icon appeared in the policy region. This completed the basic installation of the NetView server.
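Looking back over the preparation steps in 2.1.1.1 through 2.1.1.3, they can be summarized from the command line roughly as follows. The setup_env.sh sourcing and the wlookup and odadmin odlist checks are our own additions for verification and are not part of the documented procedure.

. /etc/Tivoli/setup_env.sh      # assumed: source the Tivoli environment
wbkupdb                         # 2.1.1.1: back up the Tivoli object database
wlookup -ar ManagedNode         # 2.1.1.2: rs600015 should appear in the list of managed nodes
odadmin reexec all              # 2.1.1.3: stop and restart Tivoli after the Framework patch is installed
odadmin odlist                  # confirm that the object dispatchers came back up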
2.1.1.5 Verify the Installation
There are three tasks that can be performed to verify the installation:

1. Check the install logs.
2. Check that all NetView processes are running.
3. Start the NetView GUI.

Getting a successful install message in the Install Product dialog does not guarantee that the install was completely successful. The installation log files are /tmp/update.log, /tmp/NV*.debug and /tmp/NV*.fail. The key file is /tmp/update.log. This file can be scanned for the following strings: SEV_, ERROR, WARNING. These indicate installation problems and will be accompanied by advice on fixing the problem.

To check that all NetView processes are running we used the Display Status of Daemons function. It is run by selecting the NetView Server icon with the right mouse button and selecting Control -> Display TME10 NetView Status -> Display Status of Daemons (see Figure 9 on page 18).
This starts a confirmation dialog box where you will be asked whether it is OK to continue.

All NetView processes are listed, and all should be in a state of RUNNING. If some processes are not running, the Restart all stopped daemons and Stop all running daemons functions can be selected. The final verification we performed was to start the NetView GUI. From the Tivoli Desktop, using the right mouse button on the NetView Server icon, select Control -> Start user interface. A number of startup options can be specified for the user interface. We left the default settings and selected OK.
The NetView GUI dialogs were displayed along with a Command Output dialog.
The dialog shows the results of starting NetView. One of the lines tells you where the startup errors are logged. In our case the logfile was /nv6000.log. If there are any problems with the GUI startup, this log should be consulted.
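For reference, the log scan and the daemon check can also be done from the command line. The grep patterns and the use of ovstatus below are simply our shorthand for the checks described above, not a documented verification procedure.

egrep "SEV_|ERROR|WARNING" /tmp/update.log     # any matches indicate installation problems
/usr/OV/bin/ovstatus                           # list the state of all NetView daemons
/usr/OV/bin/ovstatus | grep -i not_running     # should produce no output if all daemons are up

If a daemon shows NOT_RUNNING, its last message in the ovstatus output usually indicates why.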
The local segment (9.24.104) is discovered, and this includes a number of routers (POK2210A, WTR6611A), an AS/400 (ralyas4c) and a System/390 (mvs25), which are defined as routers. All these IP addresses are managed (shown as green on the map). The wheat (beige) network segments (such as 9.24.106 and 9.24.105) are unmanaged. Drilling down into these segments will not show any discovered devices. There are a number of ways to discover more of the network:

Manually set the subnet icon to a managed state.
Use a seed file to force NetView to discover certain nodes or networks.

2.2.1.1 Manually Managing Network Objects
To manually manage an object, use the right mouse button on the object and select the Options->Manage Object menu items as shown in Figure 14 on page 21.
The object will change from wheat to blue to indicate the object status is unknown. When the object has been checked by NetView, the icon will change to green (normal status), yellow (marginal status, that is, the object has at least some unsatisfactory condition) or red (critical status, that is, the object cannot be reached by NetView). However, managing the IP map 9.24.105, with IP address 9.24.105.0, does not immediately manage all 9.24.105.* IP addresses. Drilling down into the IP map shows two segments (see Figure 15).
A number of the objects (Segment 1, 8260dmm1 and 8271_1) have a known status (green, yellow or red) while others (WTR6611A, POK2210A and
Segment2) have an unknown status (blue). One of the reasons the blue objects remain unknown is that some of their connections are unmanaged (shown as wheat). The router WTR6611A has an interface in the 9.24.105 domain (9.24.105.1) which is unmanaged; thus the status of the device is unknown from that domain. This is why the icon is shown as blue. However, the IP Internet submap (Figure 13 on page 20) shows the device as normal (green) as the 9.24.104 interface to the router is up and sending a healthy status to NetView. Drilling down into Segment 1 as shown in Figure 16 shows a number of interfaces that are not managed, even though they are in the 9.24.105 domain.
Manually managing every device in a large network would be a very time-consuming job. This is where a seed file can be used to aid discovery.

2.2.1.2 Using a Seed file
A seed file can be used in a number of ways to drive network discovery. These include:

Seeding a number of IP addresses beyond the connectors on the currently managed networks and letting the netmon daemon discover the networks around the seed addresses.
Limiting the discovery to only those nodes listed in the seed file.
Limiting the discovery to a range of nodes using seed file wildcards.

Using the vi editor we created a seed file that will discover the subnets 9.24.105 and 9.24.106 as well as the rest of the 9.24 network:

9.24.105.1   # Find a node in the 9.24.105 domain
9.24.106.1   # Find a node in the 9.24.106 domain
9.24.*.*     # Manage all nodes in 9.24.* domains
First we had to remove the existing nodes that have already been discovered. This is done from the Tivoli NetView Server icon, by selecting Maintain -> Clear databases -> Clear topology database (limited). Once the task has completed, the results are displayed in a Command Output dialog box (see Figure 17 on page 23). This shows the actions taken by the command.
The NetView daemons were restarted using the Control -> Restart all stopped daemons option under the NetView server icon. To force NetView to use the seed file, from the Tivoli NetView Server icon, using the right-hand mouse button select Configure -> Set options for daemons -> Set options for topology, discovery, and database daemons -> Set options for netmon daemon. For the Seed File field we entered /tmp/demo_seedfile.txt.
After clicking OK, a confirmatory dialog box was displayed (see Figure 19). This showed that netmon had been restarted.
We restarted the GUI. The IP map was monitored to see the changes as NetView discovered the network components specified in the seed file. Figure 20 shows the domains 9.24.104, 9.24.105 and 9.24.106 now in a managed state.
Figure 21 shows the contents of the domain 9.24.105 where the subnet is now automatically managed.
We encountered some problems when using the seed file. The seed file was changed a number of times to see which wildcard combinations produced the required network discovery. A number of times the user interface would not completely start and a Daemon Error dialog box would be displayed (see Figure 23).

Restarting the daemon manually also produced the same error. On the command line we ran the ovstatus command. This showed the netmon daemon as NOT_RUNNING with a last message of Error in seed file. We found the following wildcards will not work:

9.24.10?.* - The ? can only be used with hostnames.
9.24.10*.* - The * must match whole numbers, not individual digits. (9.24.*.* is ok.)

We also found that a number of seed file definitions did not produce the results we expected. For this reason we checked that the logic was correct, and that valid IP addresses were listed in the seed file. A generic IP address (for example, 9.24.105.*) only defines the range that managed IP addresses must reside in. The netmon daemon must also be provided with an IP address within the IP address range.
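To keep these rules straight, the following annotated seed file sketch may help. It assumes that a # starts a comment, as in the trailing comments shown earlier; the hostname pattern rs6000? is a hypothetical example of where the ? wildcard is allowed, and the commented-out lines show patterns that netmon rejects.

9.24.105.1      # a specific IP address for netmon to start discovery from
9.24.*.*        # valid: * matches whole address fields
rs6000?         # hypothetical: ? wildcards are only valid in hostnames
# 9.24.10?.*    # invalid: ? cannot be used in numeric address fields
# 9.24.10*.*    # invalid: * must match a whole field, not part of one

After restarting netmon with a changed seed file, running ovstatus netmon and checking its last message is a quick way to confirm whether the file was accepted.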
There are four options, all of which can be set to on or off. The Enable Polling and Discovery Settings option must be set on to access the other three options. If it is not on, the remaining options cannot be selected:

Poll for Status - This sets on polling for node status states. If a node does not respond to a poll within the time specified in the Node Down Delete Interval field in the SNMP Configuration dialog, the node will be deleted from the object database. If this option is not turned on, the associated field in the SNMP Configuration dialog cannot be changed.

Discover New Nodes - This is used with any specified seed file to determine how nodes are found and added to NetView. It is related to the Polling Interval type (fixed or auto-adjusting) in the SNMP Configuration dialog box. Use auto-adjusting polling to generate less polling traffic once most of the network has been discovered.

Poll for Configuration Changes - Turning on this option allows you to specify a value in the Fixed Polling Interval field in the SNMP Configuration dialog box.

2.2.2.2 Configuring SNMP Values
The SNMP configuration dialog is accessed from the Options -> SNMP Configuration menu items within NetView. The dialog is shown in Figure 25 on page 28.
The existing configurations are shown as one of: specific nodes, node groups (specified with a wildcard), communities of nodes, and the default settings. Thus, you can either use the default settings or set specific overrides for nodes or groups of nodes. The values in this dialog are stored in the /usr/OV/conf/ovsnmp.conf_db/* files. These files should not be edited directly. Some SNMP polling parameters can also be set using Tools -> Data Collection & Thresholds: SNMP from within NetView. Figure 26 on page 29 shows the default data collections.
Specific MIB objects may be specified for collection. Each MIB object can have unique polling intervals, thresholds, rearm values and other parameters.
Stop collecting MIB data on some of the MIB objects that are configured to collect data by changing the status to Suspend.
Modify the details of the configured MIB objects by deleting nodes, excluding nodes, or changing polling intervals.
2.4 Configuring and Using the NetView Web Interface for AIX
Configuration and use of the NetView Web server and NetView Web interface are described in the following documents:
TME 10 NetView V5R1 for UNIX Release Notes
TME 10 NetView Web Interface - README

The NetView Web server is based on the Apache Web server and is included and installed when you install the NetView server. The NetView Web server daemon NetViewWebServer is started by default when the NetView server is started. By default this daemon is configured to run on port 80. The port can be changed; the procedure is described in the TME 10 NetView for AIX Version 5 Release 1 Administrator's Guide, SC31-8440. We recommend leaving it at 80, since we experienced problems with the NetView Web interface when we changed the port number.
Web security is now fully integrated with NetView security. It is highly recommended to implement NetView security. See the TME 10 NetView for AIX Version 5 Release 1 Administrator's Guide, SC31-8440 for details on implementing NetView security. You then create Web client user IDs for each NetView operator who will be accessing NetView using a Web browser. These user IDs must be in the Web NetView security group, which is one of the default groups in NetView security administration. Note that once you have turned on NetView security you must stop and restart the NetView Web server daemon from the NetView console in order to activate NetView Web security.
Enter your NetView Web client user ID and password. After successful logon the following NetView Web interface page is displayed. Figure 28 shows that we entered rs600015.
Here you can select README for more information about the TME 10 NetView Web interface. To use the NetView Web interface select Enter or click the left mouse button anywhere on the NetView logo page. This will display the NetView Web interface home page with the main menu.
Figure 29. NetView Web Interface AIX Home Page: The Main Menu
This page displays the functions available on the NetView Web interface. The row of icons at the bottom of the page provides links to the Tivoli corporate home page and the NetView Web interface home page, as well as to the following NetView Web interface functions: SubMap, MIB Browser, Diagnostics, Events and NetView Processes. The Diagnostics, SubMaps and MIB Browser functions always open new windows. The NetView Processes, Events and Help functions do not. The NetView Web interface home page icon can be used to open additional windows from the NetView Processes or Events pages. This can be used if you want to view the NetView Processes and the Events functions at the same time. Throughout the NetView Web interface you can navigate between pages using the browser arrows or click Go to go directly to a page. If you are using a function that opens a new window and have finished with it, just close that window to save resources.
2.4.2 Diagnostics
Select Diagnostics from the NetView Web interface main menu or click the Diagnostics icon at the bottom of the page to open a new diagnostics window. Whenever this initial diagnostics page appears, you must enter the name or the IP address of the node you want to investigate and click the Show Tasks button. Note that the node name is case-sensitive on this page. If the node is not in the NetView object database, you will get the message Unknown Object. In this case, or if you want to work on a different node, you must close this window and open a new diagnostics window for it to work properly.
We entered rs60008 in the Diagnostics for field, and the summary information page was displayed (see Figure 30 on page 33).

The diagnostics pages also have a row of icons below the node address/name field. These are links to the following tasks for this node:

Summary information
Ping
Demand poll
Traceroute
View MIB Application
Show ovw database field information
Show events

Below we briefly describe the available functions:

Ping - Click on the Ping icon to send a single ping from the NetView server to the node you are investigating.

Demand Poll - Click on the Demand Poll icon to start an SNMP demand poll, which will update the MIB variables for this node in the NetView object database.
Traceroute - Click on the Traceroute icon to trace the route from the NetView server to the node you are investigating. The following page is displayed, which shows the trace results as the trace is being done.
View SNMP MIB Applications - This function is only displayed for nodes that support SNMP. Click on the MIB icon to display the SNMP MIB Applications page. You can select the MIB variables you wish to query and display from the menu in the selection field.
Figure 33. NetView Web Interface AIX: Diagnostics - View SNMP MIB Applications Selection Menu
Once you have selected the variables you wish to look at, click on the Start Query button. The following page shows a query of the interfaces for this node.
Figure 34. NetView Web Interface AIX: Diagnostics - View SNMP MIB Applications
Show ovw Field Information - Click on the Field Information icon to display a subset of the information stored in the NetView ovw object database for this node.
Figure 35. NetView Web Interface AIX: Diagnostics - ovw Field Information
Show Events - Click on the Events icon to display the events that have been collected for this node. You can scroll through the list on this page.
2.4.3 Submaps
The submap function of the NetView Web interface has been greatly improved. It is now faster, more robust and has basically the same view as the native NetView AIX GUI. An exception is still the segment view, where the nodes are displayed in rows without any connections.

Note: The submaps function is resource-intensive. In our case we were using Netscape Navigator V4.06 on Windows NT workstations with 128MB RAM. A number of times the Web browser froze. In order to avoid this we closed all other applications, especially any memory-intensive applications.

To start the submap function, select SubMaps from the NetView Web interface main menu or click the SubMaps icon at the bottom of the page. This will open a new NetView submap window. As with the native NetView AIX GUI, the root map is displayed.
37
Once you select a node, various buttons on the right side become active, depending on the node selected. Select the IP Internet icon by single clicking on it to work with the IP subnet.
38
The NetView Web interface submap is almost identical to the native NetView IP map application. If there are any changes in the network status, then this will be reflected on this map dynamically. However, there are no set operations available from the submap except for Acknowledge and Unacknowledge. It now supports complex submaps of up to 350 objects. It is not possible to zoom in or out on these displays. On this view you can select hosts or subnets. If you select a host, you can then view the adapters or go directly to the diagnostics function for this host by clicking the appropriate button. If you select a subnet, the view function will display the subnet submap while diagnostics will display a diagnostics field view window for this subnet. In this case we selected a subnet segment on the IP Internet submap and clicked on View.
39
From here you can select hosts or segments. If you select a host, you can use view to display its adapters or go to the diagnostics function. If you select a segment, the view function will display the hosts submap for this segment while diagnostics will display the diagnostics field view page in a new window for this segment. Figure 40 on page 41 shows the subnet view for 9.24.104.
40
Selecting the node and clicking View displays the adapter page.
41
You can select any adapter and click Diagnostics or just double-click on any adapter and a new diagnostics window is opened for this node. The operations are the same as described before in 2.4.2, Diagnostics on page 32.
42
2.4.4 MIB Browser
Enter a node name or its IP address. Then move up or down the MIB tree using the Up Tree or Down Tree buttons. When you reach a variable that has a value, the Start Query button becomes active. Click the Start Query button and the results appear in the output window.
43
2.4.5 NetView Processes
You can view the details of a single process by single-clicking the icon above the process of interest; netmon is shown in Figure 45.
Figure 45. NetView Web Interface AIX: NetView Process Status - Selected Process
2.4.6 Events
Besides displaying events for a single node within the diagnostics function, you can also launch a more sophisticated event browser by clicking Events on the NetView Web interface main menu.
44
Figure 46. NetView Web Interface AIX: Events Filtering Ruleset Selection
Select the filtering ruleset and then click the View button. The events filtered by the ruleset are displayed; the following page shows a view of all forwarded events. On this page you can also filter events by severity, or change the ruleset and click View to get a modified listing.
45
To look at an event in more detail, select the event by clicking on it; all the buttons at the bottom of the page then become active. Click the Show Node button to go directly to the NetView Web interface diagnostics function. A new diagnostics window is opened with the node address filled in and the summary information displayed. Note that the row of icons is added to the bottom of the page.
After many attempts to solve the problem we changed the port back to 80 for the NetView Web server and disabled the other Web server. There were no more socket problems after this was done.
2.4.9.2 Browser Issues
As stated before, we used Netscape Navigator V4.06 as our Web browser running on Windows NT on an IBM Netfinity 3000 with 128MB RAM. Many times the browser stopped responding or froze completely and could not be closed down. In these situations we had to use the Task Manager to end the Netscape Navigator task. There were basically two situations where this happened:
1. When changing pages too fast, in other words before the browser finished displaying a page.
2. When using the Submap function.
When using the NetView Web interface it is recommended to allow the entire page to load before selecting another option or page. In most cases this avoided any browser difficulties. If the browser is not responding, first try to reload the page. If it is a page with a Java applet running, hold Shift and then select Reload in order to reload the Java applet too. If this does not help, try closing the page, especially when multiple pages are open. If you cannot do this, then you have to end the browser task. Try doing this from the task bar, but if there is no response to the close command you will have to use the Task Manager to end the task. If you stop and start the browser many times and still have problems, shut down and restart your machine. At the same time make sure the NetView server is running and that there is an active NetView GUI application running on this server. Remember that it is the map of this GUI that is published by the NetView Web server. The GUI must be running on the NetView server machine, not on a client. Next make sure the NetView Web server daemon NetViewWebServer is up and running. This daemon is started by default when NetView is started on the server, but it is not stopped or started together with the other NetView daemons. You can view the status and stop and start the daemon from the NetView console using the Administer and NetView Web Interface pull-down menus.
3.1 NT Configuration
When installing NetView, the installation procedure creates a local account on the machine for the NetView service. The user performing the installation must have sufficient local administrator privileges to create the account. The NetView account has local administrator privileges and the right to log on as a service. If your account administration policy requires that a domain account be used for the NetView service account, be sure that this account is created with sufficient local privileges to act as an administrator and to log on as a service. Incorrect local authority can cause the NetView service to fail on startup. For performance reasons, it is not advisable to install NetView on a domain controller or backup domain controller. If you do this anyway, the installation will try to create a NetView account on the local machine. This is acceptable on a primary domain controller, but NT does not allow you to create an account, local or domain, on a backup domain controller. To install NetView on a backup domain controller, first create a NetView domain account.
51
If you are required to enter a password, make sure you use the NetView server's account password, as the client is attempting to connect to the server using the NetView (administrator) account that was created during the installation process.
52
(computer name). Currently, NetView derives the NetBIOS name from the resolved IP address. In our situation, 9.24.106.162 resolves using DNS to wtr05246.itso.ral.ibm.com, and thus the NetBIOS name is assumed to be WTR05246. If the NetBIOS name is MYNODE instead of WTR05246, the following message is displayed from the client installation:
Unable to connect to NetView Server sharename \\NETVIEW\WTR05246
To avoid this problem, if the NetBIOS name is different, add an alias in the hosts file (\WINNT\system32\drivers\etc\Hosts) similar to the following:
9.24.106.162 MYNODE
You should add aliases on the server for each client whose computer name is not the same as the one derived from the IP name. IP address and name resolution is important for communication between the NetView client and the server and can cause problems if unexpected resolutions occur. To aid in diagnosing such problems, use the \usr\ov\bin\host.exe utility to see how NetView is resolving IP addresses and names. If successful, the host command displays the name resolution from the address and the reverse resolution from the name. Check to see that the IP addresses and names are what you expect. The example below shows the output from our host command:
Even with MYNODE as the NetBIOS name, resolution was successful because of the entry we made to the hosts file. Some DNSs will have the facility to keep their own record of your IP address, fully qualified name, and NetBIOS name and thus not require the additional entries in the hosts file, but it is usually good practice to update the hosts file anyway. Using the /s switch, the host.exe utility can use net send to send a pop-up box to the machine to verify that the NetBIOS name is for the correct machine.
host.exe [/s] [IP address | IP name | NetBIOS name]
With no parameters, the host utility gets the local address and resolves it. This can be used to verify IP addresses for all clients.
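For example, using the hypothetical alias added above (the address and names are illustrative only), you might run:
host.exe 9.24.106.162
host.exe /s MYNODE
The first command displays the forward and reverse resolution for that address; the second also uses net send to pop up a message on MYNODE so you can confirm that the NetBIOS name belongs to the machine you expect.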
53
3.2 Prerequisites
In the following section we summarize the prerequisites for the NetView product. AIX Version 5 Release 1 Installation and Configuration Guide, SC31-8442 along with the relevant release notes should be checked at this stage. The release notes for NetView can be found on the NetView CD-ROM. The prerequisites are documented in the readme file provided.
3.2.1 Hardware
The prerequisites below are contained in the release notes for NetView for NT. They are the minimum requirements:
CPU: Intel PC or Alpha PC. For Intel PCs, Pentium 90 minimum.
Memory: 48MB of RAM for Intel PCs, 64MB of RAM for Alpha PCs.
Paging space: 128MB (minimum).
File system: NTFS partition or a FAT partition that supports long file names.
LAN: Network connection.
Video: SVGA graphics card and monitor (minimum 800x600 pixels x 16 colors). 1024x768 is recommended.
We installed NetView on an IBM Netfinity 3000 server that was configured as follows:
CPU with 300MHz Intel processor
128MB of RAM
256MB paging space
4 gigabytes of disk space
Token-ring network card
IBM P200 monitor
3.2.2 Software
The software requirements for client, single user, or server mode, as well as for the Web Client Service, are:
Windows NT Version 4.0
TCP/IP protocol installed and configured
SNMP Service installed and configured
To use NetView via a Web client, additional software is required:
Windows NT Server Version 4.0 (with Service Pack 3)
Internet Information Server (IIS) 3.0 with Active Server Pages (ASP)
The installation process for the Web client software will check that both the Internet Information Server (IIS) and Active Server Pages (ASP) packages have been installed. The NetView Web Client is currently supported with:
Internet Explorer V4.0 or higher
Netscape V4.0 or higher
54
At this point you have to decide the type of NetView installation that is appropriate, keeping in mind your overall client/server configuration. The following are the choices:
1. Single User: It will be a stand-alone server without any clients. It will not interact with other NetView servers.
2. Server: This will be your regional NetView server and you will be connecting one or several clients to it. This option is also appropriate for an environment where you intend to forward events/traps to a central UNIX NetView server.
3. Client: You have already installed one NetView server and you want to install clients that will interact with an existing NetView server.
Type in the password for the NetView account. This is the account mentioned earlier in 3.1, NT Configuration on page 51. This password will be needed for the client installation discussed in section 3.3.3, Adding NetView Clients on page 57.
You can decide the scope of the discovery at this stage. This can be changed after the installation process is complete by invoking Server Setup from the NetView Console or by selecting Server Setup from the NetView Administration folder on the Windows Start menu.
56
With Version 5.1 of NetView, several community names can be configured in addition to the default one from within the SNMP Service control panel. It is also possible to add or change the community names after installation by using the Server Setup function. Before the setup application terminates you will be prompted to reboot the system. After the reboot the daemons start automatically. The daemons can also be started from the NetView Console, which you open by clicking Start->Programs->NetView->NetView Console.
You should select Client to install a NetView client for an existing server.
57
The account password is the same password entered for the NetView account on the server during its installation. You will be required to reboot your system before being able to use the NetView client. Before using the client, ensure everything is working properly by checking the client status on the server using Server Setup -> Client/Server.
3.3.4.3 Tips about Deleting Objects
Here are some tips about deleting objects that you may find helpful:
If you delete a submap object, all the child objects are deleted, and therefore those symbols disappear from all client maps as well.
If you delete a map, only the map is deleted; no objects are deleted.
If you delete the last symbol on any map for an object, the object is also deleted.
If you cut a symbol but exit before doing a paste, the symbol is deleted. That is, the cut is converted to a delete.
3.3.4.4 Time Synchronization between NetView Server and Clients
NetView attempts to keep the system time of the clients and server in synchronization. This is important for maintaining the propagated map actions, such as manage, unmanage, acknowledge, and unacknowledge, on each client's map. By default, the time is checked when the NetView Console starts up and every hour after that. You can change when NetView checks the time by setting the following system variable in the System dialog on the Control Panel:
TIMESYNC_INTERVAL=3600 (seconds)
By default, the tolerance is 1 second. You can change this by setting the following system variable:
TIMESYNC_VARIANCE=1 (seconds)
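For example, to have NetView check the clocks every 30 minutes and tolerate a drift of up to 5 seconds, you could set the two variables as follows (the values shown are illustrative only):
TIMESYNC_INTERVAL=1800
TIMESYNC_VARIANCE=5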
3.3.4.5 Client/Server Map Propagation The client/server model for the maps consists of a centralized object and topology database, but a localized map database. This means that you can expect all maps to share object and topology properties including object status and configuration attributes. Because the map database is localized, you can customize each client map differently. Only the propagation of adding and deleting nodes is supported. Use the getservermap.bat utility to manually distribute customized maps.
Note
For migration procedure and prerequisites please read the release notes of NetView for Windows NT located on the CD-ROM.
60
Advanced Menu from the Options pull-down menu. You will need to restart the NetView Console before the change takes effect.
From the Server Setup window you can view the status of all the daemons. From here you can also start and stop the daemons. If you select a particular daemon, in most cases you will be presented with further configuration options specific to that daemon.
61
By clicking on the Client/Server tab, it is possible to check which of the NetView clients are currently connected and also identify potential problems if a particular client is missing from the list.
62
The alternative to automatic network discovery is the use of a seed file. You can define the management region where the discovery process is to begin. The generation of the topology map can begin at any host name(s) or IP address(es) of SNMP nodes within your domain. To use a seed file for discovery, select the Discovery tab within the Server Setup window. For best results, you should add a known network device to the file for example, a local router. When specifying a network device, use the IP address if possible because the discovery will not work if the hostname cannot be resolved in the situation where the DNS is unavailable for some reason.
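A minimal seed file might look something like the following sketch; the addresses are illustrative, an entry beginning with ! excludes a range (as described below), and the exact wildcard and range syntax should be verified against the NetView documentation:
# Start discovery from a known local router
9.24.104.1
# Exclude a subnet we do not want to see on the submap
!9.24.106.*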
63
You may also use the seed file to limit discovery to a range of nodes or even to exclude certain nodes. If your environment is built up of connected networks and you don't want to see a specific subnet in your NetView submap, you can exclude it by specifying that network range starting with a !. Dynamic Host Configuration Protocol (DHCP) nodes can also be managed by a seed file. When working in a DHCP environment, you can define the DHCP network range. This is useful because DHCP clients frequently change their IP address, so one physical machine might otherwise appear as several logical objects on the NetView submap. To aid the discovery process, the community names that are configured throughout your network should be provided to the NetView netmon process as shown below:
Edit/add community names or select another file containing your community names
Figure 67. Specify/Edit the Community Names File within NetView for NT
Over time, the netmon daemon discovers all new nodes on the network that are one hop away from the management system. One hop includes all the nodes in a subnet, up to the management system's side of the router.
65
The general polling options for daemons are shown in Figure 69.
You can set the polling intervals of individual network devices by using Options from the menu bar and then selecting SNMP. From here click New to add new nodes or Properties to edit existing node definitions.
66
If you are planning to dump the whole topology database, that is, not specifying a node, it would be a good idea to pipe the output to a file. Please see the man pages or Tivoli manuals for more options and the format of the information produced. Another way to determine whether the symbol status is accurate is to examine the output of a netmon trace. From this trace, you can determine whether a ping is
lost or even if netmon pinged a node in the first place. Use the following commands to start and stop a netmon trace of ICMP echo requests, replies and timeouts on both AIX and NT. To start tracing:
netmon -M 3
To stop tracing:
netmon -M 0
The trace file is located in \usr\OV\log\netmon.trace. Please refer to the TME 10 NetView Diagnosis Guide, LY43-0066 for an alternative that might be better suited to your environment.
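On AIX you can also watch the trace as it is being written, using a standard operating system command rather than a NetView one:
tail -f /usr/OV/log/netmon.trace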
3.5 Configuring and Using the NetView Web Interface for Windows NT
Configuration and use of the TME 10 NetView Web interface for Windows NT is described in the following documents:
TME 10 NetView for NT Release Notes V5R1
TME 10 NetView Web Interface for Windows NT - README V5R1
We installed the products using an administrator account on a system that had Windows NT Server 4.0 installed, in the following order:
1. Install and configure SNMP Manager (Windows NT Server 4.0 CD-ROM)
2. Install Microsoft Internet Information Server (IIS) 2.0 (Windows NT Server 4.0 CD-ROM)
3. Run Windows NT FixPack 3 (Windows NT FixPack 3 CD-ROM)
4. Install TME 10 NetView Server for Windows NT V5.1 (NetView for NT CD-ROM)
5. Upgrade Microsoft IIS to 3.0 (Windows NT FixPack 3 CD-ROM)
6. Install Microsoft Active Server Pages (Windows NT FixPack 3 CD-ROM)
7. Run Windows NT FixPack 3 (Windows NT FixPack 3 CD-ROM)
8. Install TME 10 NetView Web Interface for Windows NT by running nvwebkit from the NetView for NT CD-ROM. Do not change the default target destination of <NetView server drive>:\usr\ov.
After this process, shut down and restart the server.
68
Select WWW for the NetView Server system, wtr05073 in our case, in the Internet Service Manager window. Then select Properties --> Service Properties.
69
Clear the Allow Anonymous check box under Password authentication in this window (it is checked by default) and click Apply. You can select the Advanced tab to be more specific, granting or denying access by specific IP address.
70
Enter your Windows NT user ID and password. After successful logon the following NetView Web interface logo page is displayed.
You can select README for more information about the TME 10 NetView Web interface for Windows NT. To use the NetView Web interface select Enter or click anywhere on the NetView logo. This will display the NetView Web interface for NT home page with the main menu.
71
This page displays the functions that are available on the NetView Web interface for NT. There have been some changes from Version 5.0: Collections are now called SmartSets; MIB Applications is now within Diagnostics when accessed from the SubMaps or SmartSets functions; and SNMP Collect, Objects and Remote Execution are added. There are nine functions available from this page, which are described below:
1. Submaps: Submap browser with limited graphics
2. SmartSets: Faster access to groups of nodes or resources such as critical nodes
3. Event browser
4. Collected SNMP data browser
5. Object database browser application
6. Diagnostic applications
7. SNMP MIB browser
8. Remote execution of NetView commands
9. Display NetView process status
Throughout the NetView Web interface you can navigate between pages in the usual way using the browser arrows, or click Go to go directly to a page. If you are using a function that opens a new window and have finished, just close that window to save resources.
72
3.5.4 Submaps
To start the Submap function, select SubMaps from the NetView Web interface main menu or click the SubMaps icon on the navigation menu to display the root map.
The following description applies to the NetView Web interface for NT submap pages in general. You can change the sort order of the objects displayed in the submap by making a selection from the Submap Order pull-down menu. When you position the cursor over an object icon, additional information is displayed for this object. The default is IP Address. You can change what is displayed by making a selection from the Property Tip pull-down menu. There is a status summary across the top of the page. Each status has a check box and count. The check box acts as a filter. If it is checked, the objects with this status are displayed on the submap. If it is cleared, they are not displayed (filtered out). This is useful when there are too many objects on a given display and you want to reduce the number of objects displayed for clarity. To the right of the status summary is a count by type: Abnormal, All, Managed, or Normal. Above the status summary is a line displaying the name of the open map followed by a navigation path to the current submap page in the format:
<Map name>:Root-<submap1>-<submap2>-...<current submap>
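For example, for the token-ring segment submap visited later in this chapter, the path might read as follows (the map name default is illustrative):
default:Root-IP Internet-9.24.104-9.24.104.Segment2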
All the submaps except for the current submap are also links to the corresponding submap page. You can use these links to jump to the given submap, but be aware that these are not jumps back to previous pages but a new page is displayed for the selected submap. If you use the browser Back and Forward buttons for navigation, you will have to press them 2 to 4 times to show the previous/next page. In our experience it was quicker to click on the browser (Netscape) Go option and select the page (submap) to go to it directly. You can also go directly to
the NetView Web interface home page by clicking Go and selecting NetView on <NetView Server Name>.
The object icon and label are usually different links: The icon is usually a link to a lower level submap. If the icon is an adapter, then the icon is a link to the object field information page for this adapter, which is displayed in a new window. The label is usually a link to the object field information page for the object. If the object is a host, the label is a link to the diagnostics function.
3.5.4.1 SmartSets
Click on the SmartSets icon on the root submap to display the SmartSets submap page.
74
This displays all the smartset collections in submap format. If you want to see this in list format, click the SmartSets icon in the navigation menu or select SmartSets from the NetView Web Interface main menu. This takes you directly to the SmartSets function. By selecting a smartset, Routers for example, the Smartsets-Routers submap page is displayed.
75
All the discovered routers are displayed on this page. If you click one of the router icons, the next lower submap page is displayed showing its adapters.
Figure 80. NetView Web Interface NT: Submaps - SmartSets - Router - Adapters
To view the object field information for a given adapter, select either its icon or its label. This displays the object field information for the selected adapter in a new window.
76
Figure 81. NetView Web Interface NT: Submaps - SmartSets - Router - Interfaces - Show Fields
If you click on one of the router labels, the diagnostics window is opened and the summary information page is displayed.
Figure 82. NetView Web Interface NT: SmartSets - Node Diagnostics - Summary Information
This page, as well as all the following diagnostics pages, has a row of icons below the node name field. The icons are links to the following diagnostics functions:
Summary Information
Ping
Demand Poll
Trace Route
View MIB Applications
Show ovw Field Information
Go To <host name> home page (only displayed if the host is a Web server)
All of these diagnostics functions are described in more detail in 3.5.9, Diagnostic Applications on page 89, except for View MIB Applications, which is currently only available on this page.
3.5.4.2 View MIB Applications
Click on the View MIB Applications icon to display the MIB applications page.
Figure 83. NetView Web Interface NT: SmartSets - Diagnostics - MIB Applications
Select a MIB application from the field pull-down menu to display the desired information. Click Query to launch the MIB application. The results are displayed on this page.
78
Figure 84. NetView Web Interface NT: SmartSets - Diagnostics - MIB Applications - IP Addresses
3.5.4.3 IP Internet
Click on the IP Internet icon from the root submap to display the IP Internet submap page. The objects are simply displayed in rows with no network connections.
79
Figure 86. NetView Web Interface NT: Submaps - Location Container Submap
On this submap we can choose between hosts and segments. We selected the 9.24.104 subnet icon.
Select the token-ring segment 9.24.104.Segment2 icon to see Figure 88 on page 81.
80
3.5.5 SmartSets
Select SmartSets from the NetView Web interface main menu or click the SmartSets icon on the navigation menu to display a list of the NetView SmartSets.
81
You can scroll through this list and select a smartset to look at by clicking on its name. A simple submap page is displayed with the objects arranged in rows.
Figure 90. NetView Web Interface NT: SmartSets - Router SmartSet Map
82
Single clicking on the node icon or its label opens a new diagnostics window and the summary information page is displayed for this node.
The initial event browser page is in list format. There are buttons at the bottom for moving through the list: beginning of list, previous page, next page, end of list. You can update the list by clicking Requery. To set up filters you must first go to the form view. You can do this by either selecting one of the events by clicking its number or by clicking Form View.
83
Figure 92. NetView Web Interface NT: Event Browser - Form View
You can return to list format by clicking List View or go to the filter criteria page by clicking Filter.
Figure 93. NetView Web Interface NT: Event Browser - Filter Criteria
84
Enter your filter criteria on this page and click Apply. A form view page is displayed with the first event that matches the filter criteria.
Figure 94. NetView Web Interface NT: Event Browser - Filter Results Form View
You can browse the filtered events on this form view page, one at a time, by using the buttons at the bottom: first, previous, next, last. It is more practical to go back to the list format by clicking List View. You can update the list based on the filter criteria at any time by clicking Requery. On the other hand, if you want to go back to viewing all events, click All Records.
Figure 95. NetView Web Interface NT: Event Browser - Filter Results List View
85
Figure 96. NetView Web Interface NT: SNMP Data Browser - List View
Figure 97. NetView Web Interface NT: SNMP Data Browser - Form View
You can return to list format by clicking List View or go to the filter criteria page by clicking on Filter.
86
Figure 98. NetView Web Interface NT: SNMP Data Browser - Filter Criteria
Enter your filter criteria on this page. First select a MIB variable from the pull-down menu, then enter any related field information and click Apply. A form view page is displayed with the first SNMP data record that matches the filter criteria.
Figure 99. NetView Web Interface NT: SNMP Data Browser - Filtered List View
87
To set a new filter or return to viewing all of the collected SNMP data records, you have to go back to the form view and make your selections there.
Figure 100. NetView Web Interface NT: Object Database Browser Application Options
Select an object type, selector field and selector operator from the choices in the pull-down menus. Enter a value to be compared to the selector field. In this example we want to look at all nodes where the IP Hostname contains the value itso. Click Query NetView to display the table.
88
Figure 101. NetView Web Interface NT: Object Database Browser Application Results
A table is displayed if at least one object in the database matches the selection criteria.
89
This page as well as all the following diagnostics pages have a row of icons below the node address/name field. The icons are links to the following diagnostics tasks for this node. Note that these tasks do not open new windows.
90
Select the task you wish to execute remotely from the pull-down menu, enter required arguments if applicable and click Run Program. The results are displayed on the page.
91
In the NetView submap window, select the Window menu and then Exploring IP Internet from the pull-down menu. This will open the subnet explorer as shown below. Note that the Exploring option contains the value from the previous time the Explorer function was used, so you may see a value other than IP Internet.
92
We can display the object's attributes using the following views:
IP Address View
System Configuration View
Availability View
Last Event View
Default Route View
IP Configuration View
TCP Configuration View
TCP Connection View
SNMP View
Tivoli View
Web View
Interface View
You can sort the objects by their IP address in the submap. This is useful when you need to find a node contained in a submap that has a large number of icons present. It will also show gaps in the IP address range. You can also filter nodes by node status. To access this function, select View from the menu bar and then Filter By Status as shown in Figure 109 on page 99.
98
For example, in the following figure we have selected to show only the nodes that have critical status.
99
This function is not dynamic. A hidden symbol that changes status will remain hidden, and visible symbols that change to a filtered status remain displayed.
4.1.1.1 Submap Sorting on NT
To access this function select Submap from the menu bar and then Sort By from the pull-down menu as shown in Figure 111.
You can sort the objects by their IP address in the submap. This is useful when you need to find a node contained in a submap that has a large number of icons present. It will also show gaps in the IP address range. You can also filter nodes by node status. On Windows NT you will use the Filter tool bar for status filtering. The status colors you select will be hidden on the submap (Figure 112 on page 101). For example, in the following figure we have selected to show only the nodes that have critical status.
100
This function is not dynamic. A hidden symbol that changes status will remain hidden, and visible symbols that change to a filtered status remain displayed.
101
collection. A configuration file is used to control the behavior of the nvsniffer command. This file is located in /usr/OV/conf/nvdbtools/nvsniffer.conf. The following figure shows the default file. The format of the configuration file is:
<Name of database field> <TCP port> <Collection Name>
In our example we modified the file to discover the following services (Telnet, FTP, NetView, and LotusNOTES) and create these collections.
isTelnet 23 Telnet_Servers
isFTP 21 FTP_Servers
isNetView 8879 NetViewServers
isLotusNotes 1352 LotusNotes
Figure 114. Nvsniffer Configuration File
In its simplest form you can run the command as shown below:
nvsniffer -n rs60008.itso.ral.ibm.com
Added Collection: [Telnet_Servers]
Added Collection: [FTP_Servers]
Added Collection: [NetViewServers]
Added Collection: [LotusNotes]
Inspecting [rs60008.itso.ral.ibm.com]: isTelnet IsFTP
Inspection Complete.
Figure 115. Nvsniffer output screen
If we look at the object properties for rs60008 they show the fields that have been modified for the managed object. Click on the object using the right mouse button and select Tools->Display Object Information.
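You can make a similar check from the command line with ovobjprint, which is used elsewhere in this chapter; the sketch below assumes the object's selection name is its host name and simply filters the output for the fields that nvsniffer sets:
ovobjprint -s rs60008.itso.ral.ibm.com | egrep "isTelnet|isFTP"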
103
4.1.2.2 The nvsniffer Command on Windows NT On NetView/NT you need to have the NT AT service registered and running since the nvsniffer command adds entries to the AT scheduler. By inspecting the AT service, you find a default entry for nvsniffer.
In addition, every time you use nvsniffer on NT with its -r parameter to schedule nvsniffer on a regular basis, an AT entry will be added. The command has two modes at execution: Discover mode Discover new services and status check services already discovered. Status mode Status check services already discovered. Services that have already been discovered by nvsniffer will be monitored for status changes automatically during a discovery run of nvsniffer. The default amount of time that a service object can exist in a critical state is the same value as that used by netmon for deleting nodes that have been down for X days (where X is between 1 and 1000). You can set this value using the Options -> Polling dialog. A new command, \usr\ov\bin\nvsetservicestatus, can be used for setting (or overriding) the status of service objects discovered by nvsniffer. Enter nvsetservicestatus -? at a command prompt for the details of the arguments required for this command.
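For example, from a command prompt on the NT server you could run the following (a sketch only; at is the standard NT scheduler command, not part of NetView):
at
nvsetservicestatus -?
The first command lists the scheduled AT entries, including the default nvsniffer entry; the second displays the arguments accepted by nvsetservicestatus.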
104
A configuration file is used to control the behavior of the nvsniffer command. This file is located in:
/usr/ov/conf/nvsniffer.conf.
The following figure shows the content of this file in our example.
isTELNET|23|TELNETServers|TELNET Server|||* isFTP|21|FooSmartSet|Foo Server||c:\usr\ov\bin\customFtpStatusTest.dll|* isNetView|8879,1663|NetViewServers|NetView Server|||* isDNS|53|DNSServers|DNS Server|||* isLotusNotes|1352|LotusNotes|Lotus Notes Server|||*
Figure 119. nvsniffer Configuration File on NT
Please note that the Windows NT implementation lets you define more than one TCP/IP port for testing and gives you the option to invoke a special service-testing application (Discover Test). For detailed information on how to create the Custom Service Discovery and Status Checking application, please read the nvsniffer.readme file in the /usr/ov/doc directory. The nvsniffer command will create both the fields and the collections that you specify. It then walks the database and, for all IP nodes that are managed, attempts to connect to the TCP/IP port specified. If the connection is successful, the field is set to TRUE for that node and the node populates the corresponding SmartSet. In its simplest form, we ran the command as shown below:
nvsniffer -n rs60008.itso.ral.ibm.com
105
By default this screen will also contain all the default NetView SmartSets. If we look at the object properties for rs60008 they show the fields that have been modified for the managed object. Select Object->Object Properties, and select the Capabilities tab.
Note: A compliant adapter card has to be installed for this function, and the function has to be enabled in the BIOS. Read the adapter card's installation guide for the procedure. The command can be executed from NetView using either the command line interface or the NetView menu. To execute nvwakeup from the command line, use the following syntax:
nvwakeup [[-m Mcast Addr -t ttl] | [-b Bcast Addr]] [-c] [-p port] [-n nodeList] [-a macList]
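For example, to wake the machines in a node list using a broadcast address, you might run the following (the address and node name are purely illustrative):
nvwakeup -b 9.24.104.255 -n wtr05095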
4.1.3.1 The nvwakeup Command on AIX To use the nvwakeup command from the NetView GUI select Tools from the menu bar and then Wake-Up from the pull-down menu as shown in the following figure.
107
4.1.3.2 The nvwakeup Command on Windows NT To use the nvwakeup command from the NetView GUI select Tools from the menu bar and then Wired for Mgmt -> Wake-Up from the pull-down menu as shown in the following figure.
108
selection criteria are for a given object and what properties of that object should be written to the report; the format of the report can also be configured.
4.1.4.1 The nvdbformat Command on AIX
In this example we make a report on the discovered routers in the network. (All routers known by NetView will be listed with their hostname.) We used the routers.format file, which is supplied with NetView and can be found in the following directory:
/usr/OV/conf/nvdbtools
The report is sent to standard output (to get it into a file, redirect the output).
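For example, assuming the template is referenced by the full path shown above (the output file name is arbitrary), a run on the AIX server might look like this:
nvdbformat -f /usr/OV/conf/nvdbtools/routers.format > routers.txt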
4.1.4.2 The nvdbformat Command on Windows NT This example shows how you can create a report on the discovered routers in the network. (All routers known by NetView will be listed with their hostname.) The report used the routers.format file, which is supplied with NetView and can be found in the following directory:
\usr\ov\databases\templates
109
This example shows the flexibility of output formatting. We used the interfaces.format template file to generate HTML output and redirected the output of the nvdbformat command to a file. The HTML file can then be loaded by an Internet browser for viewing. The following command was used to generate the HTML file:
nvdbformat -f interfaces.format > interfaces.htm
This file was used when we executed the nvdbimport command shown in Figure 130. Note the -v parameter in the command, which causes all messages to be written to the standard output.
Using the ovobjprint -s 9.24.106.152 command we can check the results of nvdbimport.
4.1.5.2 The nvdbimport Command on Windows NT
The template file is supplied with NetView and can be found in the following directory: \usr\ov\databases\templates. The contents of the businessset1.import
file used in our example with our modifications is shown in Figure 132 on page 113:
# Business Set 1 Sample File
# --------------------------
#
# This file can be used to load many objects into the
# SmartSet BusinessSet1 by setting the objects
# field isBusinessSet1 to TRUE.
#
# IP Hostname,isBusinessSet1
wTr05095,TRUE
# Following line is provided as an Example
#
# hera.dev.tivoli.com,TRUE
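An illustrative invocation might look like the following sketch; -v is the verbose option described above, while -f as the flag naming the import file is an assumption that should be checked against the nvdbimport documentation:
nvdbimport -v -f businessset1.import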
The nvdbimport command was issued with the verbose message option and we got the following messages on the standard output:
To see the effects of the command we can use the NetView console to access the object capabilities as shown in Figure 134 on page 114 or we can use the ovobjprint -s 9.24.106.152 command.
113
where:
-a Allows non-root users to start the GUI with Web function on.
-d Allows only root users to start the GUI with Web function on (default).
Events are messages that are sent by applications such as netmon (that is, internal management events) or in response to a request to get or set information sent to a management agent. Traps are unsolicited messages sent by SNMP agents to SNMP network management stations; for example, an SNMP agent sends traps to the Tivoli NetView program. The term events is used to cover both events and traps for the remainder of this section. NetView uses two types of events:
Map events - Notifications issued because a user or application does something that affects the status of the current map or of the NetView graphical interface.
Network events - Messages sent by an agent to one or more managers to provide notification of an occurrence affecting a network object. Network events are normally generated when an SNMP agent detects a change in a network device. The change may be that the device is functioning inconsistently, that its status has changed, or that its configuration has changed. Events may also be sent when the SNMP agent receives a trap from one of its devices or a preset network activity threshold is exceeded.
[Figure: NetView event flow, showing traps from agents and NV/390 arriving at trapd (with trapd.log, trapd.conf and ovsnmp.conf_db) and being distributed to netmon, tralertd, pmd (with ovesmd, ovelmd, orsd, the ovevent.log and the ORS database), nvcorrd, nvserverd and the server and client nvevents displays with their filters, and to actionsvr and nvpagerd with the nvaction and nvpagerd logs; the object and topology databases and other applications are also shown.]
115
The daemons and how they process the events are detailed in the following sections. The key files involved are highlighted. For further details see TME 10 NetView for AIX Version 5 Release 1 Administrator's Guide, SC31-8440.
4.2.2.1 The trapd Daemon
The trapd daemon receives and distributes traps and events. It receives traps from agents and internal processes. To receive traps from agents it uses a socket (/usr/OV/sockets/trapd.socket) and the nvtrapd_trap service (defined in /etc/services). By default it listens on TCP and UDP port 162. Other processes register themselves with trapd to receive events from it. The netmon, nvcorrd, pmd and tralertd daemons receive events from trapd. Internal traps are generated by netmon and sent to trapd for distribution to the other daemons. All events are written to the trapd.log file (/usr/OV/log/trapd.log). This logging can be turned off or sent to a different file from the Tivoli desktop (see Figure 136). A number of other settings can also be changed, such as the tracing file, trap forwarding and the TCP/UDP ports used.
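As a quick check on AIX that the trap port is in use, you can use a standard operating system command (not a NetView command) and look for the entries ending in .162 in the output:
netstat -an | grep 162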
The trapd.conf file (/usr/OV/conf/C/trapd.conf) contains definitions for the handling of SNMP traps, including how to format trap log entries and what action to take, if any, when a trap is received. The trapd daemon uses these formats when logging the message to the trapd.log file. The formats are also used to present messages in the NetView Event Cards or List. This file is maintained using NetView Options->Event Configuration->Trap Customization->SNMP.
116
4.2.2.2 The pmd Daemon
The postmaster daemon supports the external communications for all applications and processes, using CMOT or SNMP. It receives events from trapd and forwards them to the ovesmd (event sieve agent) daemon. As incoming events have no addressing information, pmd must use its locator function to route the outbound requests. It consults the data in the object registration database, which includes the agent locations and the protocol used to access them. This is done via the orsd daemon, which maintains the consistency of the database. The startup parameters for orsd can be changed using the Tivoli desktop. The pmd daemon startup parameters are also maintained through the Tivoli desktop. Figure 137 shows the dialog used to change the parameters. The daemon must be restarted after the parameters are changed.
4.2.2.3 The ovesmd and ovelmd Daemons
The ovesmd daemon receives events from pmd. The pmd daemon has attached routing and protocol information but doesn't do anything with it. The ovesmd daemon uses the routing information to distribute events throughout the network based on the filters in effect for a particular application or user. The daemon also forwards events on to the ovelmd (event log agent) daemon for logging. It stores SNMP traps, CMIS events and the event log configuration values in ovevent.log (/usr/OV/log/ovevent.log). This file, and its backup, are binary files that are the source of information for the dynamic and historical event displays. For example, on startup the nvevents daemon checks the ovevent.log for any events since its last execution.
4.2.2.4 The netmon Daemon
This daemon polls SNMP agents to discover network topology. When changes to the topology, configuration or status are discovered on an agent by the polling, an SNMP trap is sent to trapd with the details. It also sends events to the
topology daemon (ovtopmd) to update the topology database. This event may also be forwarded to the object database daemon (ovwdb) to update the object database. The ipmap application uses the topology database for building or updating the maps in the graphical interface. The object database is used for the describe operations by selecting Modify/Describe -> Objects from the pull-down menu. The netmon daemon startup parameters are maintained through the Tivoli desktop. Figure 138 shows the dialog used to change the parameters.
4.2.2.5 The nvcorrd Daemon
The correlation daemon processes events according to rulesets. It receives events from trapd, correlates or compares the events to the event processing decisions and actions defined in rulesets, and forwards them to registered applications (actionsvr, nvserverd for client GUIs, and nvevents for the server GUI).
4.2.2.6 The nvevents and nvserverd Daemons
The nvevents daemon displays events in the main window Control Desk in either the Event Cards or List presentation format. The display can be restricted by using filters, which are defined using the Options -> Filter Controls pull-down menu item in the Control Desk. For the NetView server, events are sent directly from nvcorrd. For NetView clients, events are sent from nvcorrd to the nvserverd daemon, which then distributes them to the individual client nvevents daemons. The nvserverd daemon provides
the coordination of actions against events across all clients. The nvserverd daemon communicates with the client workstations on port 1665.
4.2.2.7 The actionsvr and nvpagerd Daemons
When an action is to be processed in an event correlation rule, the nvcorrd daemon passes the action to the action server daemon. The action server manages the actions, starting a child process, while the nvcorrd daemon continues to process the event correlation ruleset. All actions requested and the events that caused those actions are logged in the nvaction logs (/usr/OV/log/nvaction.alog and nvaction.blog). The nvpagerd daemon controls the routing of page commands to the physical hardware. Page commands can be issued from the command line using the nvpage command or from event correlation rulesets (via actionsvr). All paging actions are logged in the pagerd.log file (/usr/OV/log/pagerd.log).
119
The display can be changed to list view by either:
Selecting the View -> List menu option
Placing the cursor over an event, clicking the right mouse button to display the work menu, and selecting the Switch Card <--> List work menu option
Double-clicking an event from the list view shows the event attributes in a single card. Use File -> Exit to close the window and return to the list view.
120
4.2.3.2 Starting the NT Event Browser With the NT version of the product, the Event Browser is not automatically started. To start it, select Monitor -> Events -> All from the NetView main menu. The NT Event Browser only comes in list form as shown in Figure 141.
121
4.2.3.3 Changing the AIX Control Desk Display
Apart from the List <--> Card views, there are no options for changing the way the data is displayed on the Control Desk. Unlike the NT NetView Event Browser, the events cannot be sorted or have different attributes displayed. There are a number of settings in the /usr/OV/app-defaults/Nvevents file; see the file and the TME 10 NetView for AIX Version 5 Release 1 Administrator's Guide, SC31-8440 for the setting options. The automatic update of the display can be turned off by selecting the Freeze option at the bottom of the Control Desk dialog.
4.2.3.4 Changing the NT Event Browser Display
There are a number of display settings that can be changed on the NT Event Browser display, including:
General settings (Options -> Settings) such as the maximum number of events to display, how frequently to refresh the display, and commands to run when events are received.
Colors (Options -> Colors) of the events on the display.
Columns (Options -> Columns) to display. Figure 142 shows the columns that can be selected for display on the Event Browser.
The automatic update of the display can be turned off by selecting the View -> Pause Display menu option. The Statistics, Toolbar and Status bar can be turned on/off using the appropriate option under the View menu item.
4.2.3.5 Performing Actions in the AIX Control Desk
A number of actions can be performed on a selected event from the menu. These include:
Create (File -> Create Report) or view (File -> View Report) a report on all or a selection of events.
Clear Selected (Edit -> Clear Selected) removes the event from this workspace alone, from all workspaces for this user, or from all users.
Highlight node on map (Options -> Highlight node on map...) highlights the node on a topology map. When highlighted, the node name is shown in white lettering with a black background.
Browse MIB (Options -> Browse MIB...) starts the MIB browser application for that node.
Event Description (Options -> Event Description...) starts the Event Configuration application for this event. It provides further information on the enterprise and event IDs.
Change Severity (Options -> Change Severity...) allows operators to change the severity of the event on all their workspaces or on all workspaces of all users.
Perform Additional Actions (Options -> Additional Actions) provides a number of actions that can be run against a selection of, or all, events. These actions include printing reports and sorting the events. Note that the sort doesn't apply to the Control Desk event display.
Search (Search -> By Criteria... and Search -> By Filter...) for events and set them as selected in the Control Desk event display. If searching by filter, a new workspace can be created or events can be selected from an existing workspace. Workspaces are discussed below.
A number of system-wide actions can also be set, including ringing a bell when an event is received and suppressing traps from unmanaged nodes.
4.2.3.6 Performing Actions in the NT Event Browser
The NT Event Browser provides a number of actions in common with its AIX counterpart, as well as a number of unique actions. The actions include:
Highlight node on map (Event -> Show Node on Map...) highlights the node on a topology map. When highlighted, the node name is shown in white lettering with a black background.
Show Node Properties (Event -> Node Properties...) displays all of the attributes of the NetView node object for the node in the event.
Event Details (Event -> Event Details...) shows all the attributes of the event.
Ping Node (Event -> Ping Node...) pings the node and displays the results in a dialog box.
Run Command (Event -> Run Command) runs the command defined in the Options -> Settings dialog with the required attributes from the event. For example, the default command is nvecho %d, which displays the event description in a dialog box.
A number of other actions can be run, including graphing event traffic and showing the nodes with the most events. There is no search facility in the NT Event Browser.
123
While the concept of filtering is the same between the NT and AIX versions of NetView, the implementation is vastly different and is therefore discussed in separate sections.
4.2.4.1 Filtering on AIX NetView
The AIX version of NetView uses the term workspace to describe the window that displays events. The default workspace shows all events. Workspaces can be static or dynamic. A static workspace results from a select or create operation, or from an order operation performed on the contents of a dynamic workspace. It is a snapshot of events. The following menu options will produce static workspaces:
Search -> By Criteria with the create new workspace option
Search -> By Filter with the create new workspace option
Create -> Static Workspace
If you use one of the Search options and do not create a new workspace, the events resulting from the search are highlighted in the current workspace. A dynamic workspace displays real-time events as they are received by NetView. Filters and event correlation rules can be used to display selected events. The default workspace is a dynamic workspace. Within NetView, filters are defined as Simple or Compound. Simple filters are expressions that include SNMP criteria and can stand alone. Compound filters are expressions that are composed of several simple filter expressions. The simple expressions can be grouped and combined with the logical operators AND, OR and NOT. The following sections describe adding and using a simple filter to create a dynamic workspace that displays only Node Down and Interface Down events from all nodes.
4.2.4.2 Creating a Simple Filter on AIX NetView
To create a filter use the Options -> Filter Control menu options (see Figure 143 on page 125).
124
Figure 144 shows the Filter Control dialog box. Displayed are the current filter file (/usr/OV/filters/filter.samples) and the available filters in that file.
Before adding the new filter you may want to change to a different input file. You do this by selecting the File List button. Figure 145 on page 126 shows the dialog box used to change the Filter file. We selected the ITSOdemo.filter file in the /usr/OV/filters directory.
125
Clicking OK returns you to the Filter Control dialog, which now shows the filters from the ITSOdemo.filter file. Selecting the Start Filter Editor button shows the Filter Editor dialog (see Figure 146).
The existing filters are shown in the bottom portion of the dialog. The options on this dialog are:
Display - Display the currently selected filter.
Add Simple - Add a simple filter to this filter file.
Add Compound - Add a compound filter to this filter file.
Delete - Remove the currently selected filter from this filter file.
Modify - Edit the currently selected filter.
Copy to File - Copy the currently selected filter to another filter file.
In our example, we wanted to add a simple filter to an existing filter file (ITSOdemo.filter). Thus we selected the Add Simple... button. This displayed the Simple Filter Editor dialog as shown in Figure 147.
A Filter Name must be specified and adding a description is advisable. This dialog allows you to specify one or more of the following types of filters:
EVENT IDENTIFICATION, which allows you to restrict events by Enterprise name/object ID.
OBJECT IDENTIFICATION, which allows you to restrict events by hostname or IP address.
TIME RANGE, which allows you to specify date and time ranges for events.
THRESHOLD, which allows you to restrict events by a frequency over a period of time. You could use this to show only the first three matching events, or to show events only once five have been sent in an interval.
We want to restrict events to the Node Down and Interface Down events, so we were only concerned with the EVENT IDENTIFICATION portion of the screen. We selected the Events Equal to Selected option and clicked the Add/Modify... button. The Enterprise Specific Trap Selection dialog (see Figure 148 on page 128) is displayed.
127
Select a specific Enterprise Name/Object ID. The Available Trap Types field will show all configured events. In our example we have two traps: Node Down (Generic=6, Specific=58916865) and Interface Down (Generic=6, Specific=58916867). To select them we had to:
Select ALL in the Enterprise Name/Object ID field.
Enter 58916865 in the Specific Trap Number field and click the Add To List button. (This adds an entry to the Selected Trap Types field.)
Repeat this process for the trap 58916867.
When OK is selected, the dialog is closed and control returns to the Simple Filter Editor dialog. It now has the two new traps listed. (The top of this dialog is shown in Figure 149.)
128
Selecting OK from this dialog returns control to the Filter Editor dialog as shown in Figure 150.
The dialog is closed by selecting the Close button. This returns control to the Control Desk display. 4.2.4.3 Creating a Dynamic Workspace on AIX NetView At this stage the events in the Control Desk have not changed. To use the filter we have to search by filter (menu option Search -> By Filter) or create a dynamic workspace (menu option Create -> Dynamic Workspace). We chose to create a dynamic workspace (see Figure 151).
129
This dialog is used to specify what events will be displayed on the workspace. You can select one or more of the following: a correlation rule, category, severity, source, a string contained in the event or a filter. We wanted to use the newly created filter so we selected the Filter Activation... button to display the Filter Control dialog (see Figure 153).
This is the same dialog used for creating filters except that the Start Filter Editor button cannot be selected.
130
To use an existing filter:
1. Ensure the correct input file is being used.
2. Select the filter(s) to be used from the Available Filters in File field.
3. Click the Activate button to transfer the filter(s) to the Active Filters List.
4. Click Close to return to the Control Desk.
The new Dynamic Workspace is shown in the Control Desk display. It defaults to Card View. When changed to List View the display looks like Figure 154.
The two workspaces are indicated by the icons on the left of the window. The Events icon is for the default workspace and the IF_or_Node_Down icon is for the new workspace. You can also save the new workspace by selecting the File -> Save As menu option. A dialog box will prompt you for a file name (see Figure 155).
However, once it is saved it becomes a static workspace as only the events are saved, not the workspace definition. This workspace can be later loaded using the File -> Load... menu option.
4.2.4.4 Creating a Compound Filter on AIX NetView
The process for creating a compound filter is basically the same as creating a simple filter. A compound filter consists of multiple simple filters combined with logical (AND or OR) conditions. Prior to creating the compound filter we created another simple filter, Local_Nodes, to show only events from the local subnet (9.24.104). The only difference between this and the IF_or_Node_Down filter (Figure 147 on page 127) is that objects are specified rather than event IDs. Objects can be specified by hostname, IP address or IP address with wildcards. We specified a wildcard of 9.24.104.* to get events from all objects in the 9.24.104 subnet. In the Filter Editor dialog, shown in Figure 156, the two simple filters we need, IF_or_Node_Down and Local_Nodes, are displayed.
To create the compound filter, select the Add Compound... button. This brings up the Compound Filter Editor dialog (see Figure 157 on page 133).
A filter name must be entered. (The OK and Save As... buttons will remain grayed out until one is entered.) A description should also be entered. To create the filter expression, you select a number of simple (or other compound) filters and between each one select one of the AND or OR buttons. Select the Get Filter... button to add a filter. This brings up the Get Filter Dialog as shown in Figure 158.
This dialog allows you to select a filter to use. We selected the IF_or_Node_Down filter and clicked OK. This returned control to the Compound Filter Editor dialog as shown in Figure 159.
Figure 159. Compound Filter Editor Dialog with One Filter Added
The IF_or_Node_Down filter is displayed in the Filter Expression field. Selecting the AND button places a && after the line in the Filter Expression field. The Get Filter function is used to get the Local_Nodes filter. This results in the Filter Expression shown in Figure 160.
After this, the dialog was closed by clicking OK. This can be a powerful tool for giving operators part of a network or specific event types to manage. These filters can also be combined with correlation rules to automate actions.
4.2.4.5 Saving and Restoring the AIX Workspace Environment
The static and dynamic workspaces can be saved when NetView is shut down and reloaded when it is restarted. To do this you need to edit the Nvevents file in /usr/OV/app-defaults. This file is an ASCII text file containing all of the nvevents settings. It contains three settings for workspace saving and reloading:
nvevents.loadEnvOnInit - Tells nvevents to load the saved environment on initialization. It defaults to false and must be set to true before workspaces are loaded.
nvevents.saveEnvOnExit - Tells nvevents to save the workspace environment on shutdown. It defaults to false. When set to true, it will save the workspace definitions into $HOME/NvEnvironment/Workspaces. If there are any static workspaces to be saved, they will also be saved in this directory and referenced in the Workspaces file. This setting changes the menu option Options -> Save Environment (see Figure 161).
nvevents.considerStaticWrkSpcs - Tells nvevents whether to consider static (or loaded) workspaces when saving and restoring. If it is left at the default of false, no static workspace definitions will be saved on shutdown or loaded at startup. If set to true, static (or loaded) workspaces will be saved on shutdown and loaded at startup.
These settings can be set at a global level by editing the file /usr/OV/app-defaults/Nvevents. To set them for specific users, copy the file to $HOME/.Xdefaults. The following examples show how these settings could be used:
If you want operators to get the same dynamic workspaces every time they start up, irrespective of how they changed their session, you could:
Code a specific Workspaces file in each user's NvEnvironment directory.
Use a global Nvevents file with the options:
nvevents.loadEnvOnInit True
nvevents.saveEnvOnExit False
nvevents.considerStaticWrkSpcs False
If you want users to get back what they had before NetView was shut down, you would use a global Nvevents file with the options:
nvevents.loadEnvOnInit True
nvevents.saveEnvOnExit True
nvevents.considerStaticWrkSpcs True
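As a minimal sketch of the per-user case described above (paths and setting names are taken from this section; copy the exact resource-file syntax from your existing Nvevents file):
# Give one operator a private copy of the nvevents resources and enable
# full save/restore of the workspace environment for that user only.
cp /usr/OV/app-defaults/Nvevents $HOME/.Xdefaults
# Then edit the three workspace settings in $HOME/.Xdefaults:
#   nvevents.loadEnvOnInit         True
#   nvevents.saveEnvOnExit         True
#   nvevents.considerStaticWrkSpcs True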
Note: The new settings will not take effect until NetView is shut down and restarted.
4.2.4.6 Filtering on NT NetView
Unlike the AIX version of NetView, the NT version doesn't have Simple and Compound filters. All filters are treated the same and are edited using the same dialogs. All filters can be accessed from the Select Filters pull-down list on the Event Browser window. This is shown in Figure 162.
Figure 162. Select Filters Pull-Down List on the Events Browser Dialog
As no custom filters have been defined at this stage, the Select Filters pull-down list shows all of the supplied filters. These include:
All Events - Displays all events except for those with the log only category.
Application Alert Events - Displays events with the application alert category.
Error Events - Displays events with the error category.
Events with Notes - Displays events with notes.
Events with Owners - Displays events with owners.
Recent Critical Events - Displays critical events received within the last hour.
Service Events - Displays all events (same as All Events).
Status Events - Displays events with the status category.
Topology Events - Displays events with the network topology category.
Custom Filter - Does not use a predefined filter. This option can be used to create a new filter.
To select one of the predefined filters, click on it in the pull-down list using the left mouse button. A progress status dialog is displayed. We selected the Recent Critical Events item and the result is shown in Figure 163.
Figure 163. Event Browser with Recent Critical Events Filter Applied
4.2.4.7 Creating Filters on NT NetView
This section describes creating custom filters on NT NetView. The following examples were used:
1. Defining a filter for all Interface Down and Node Down events.
2. Creating a SmartSet and using the SmartSet to filter events for routers.
You can create a new filter in one of the following ways:
Selecting the Custom Filter item from the Select Filters pull-down menu
Selecting Filter -> Set from the menu
We did the latter to create the first filter. The menu option selection is shown in Figure 164 on page 138.
Filters can be specified with the following criteria:
By node or SmartSet. You can specify a wildcarded node name or point to a SmartSet of nodes. A blank node selection means all nodes.
By trap. A number of traps can be selected, and the filter can include or exclude these traps. The default is all traps.
By time. This can include events before a time, after a time or within a period (for example, within one hour). The default is no time restrictions.
By event with note or with/without owners.
By severity, category and source. These criteria are available when the Options button is selected.
To implement the first filter (that is, Interface Down and Node Down events), the two traps were selected from the trap list (use Ctrl to select the second trap). Then the Include Selected Traps item was selected from the pull-down list. All other attributes were left at their default settings. To save the filter, the Save As button was selected. This started the Save As dialog as shown in Figure 166.
By default the filters are saved in the /usr/OV/filters directory. All filters must be in the same directory. Our first filter was called IF or Node Down. Selecting the Save button saves the filter definition in the specified file and returns control to the Filter dialog. Closing this dialog returns control to the Event Browser. Selecting the Select Filters pull-down list now shows the new filter in the list of available filters (Figure 167 on page 140).
Figure 167. Modified Select Filters Pull-Down List On The Event Browser Dialog
We can also create a SmartSet and then use this SmartSet with any of the pre-defined or user filters. SmartSets are an NT NetView equivalent of collections used in AIX NetView. To create a SmartSet select Submap -> New SmartSet... from the NetView main window. This is shown in Figure 168.
This selection starts the Find dialog. This dialog has three tabs: Simple, Advanced and SmartSets. The Simple tab is used first and is shown in Figure 169 on page 141.
Individual conditions are specified on this dialog. The settings we chose to demonstrate filtering using SmartSets were as follows:
Find By: Other Properties
Type: Router
The next step is to add this definition to the Advanced tab window. This is done by clicking on the Add to Advanced> button. This switches you to the Advanced tab of the Find dialog (see Figure 170 on page 142).
The Advanced tab of the Find dialog is used to combine a number of simple conditions together (with AND or OR). It shows the code for the condition we entered on the previous screen. If we were going to add further conditions, we would select the <Add Condition button. Once we returned to this dialog, the new condition(s) would be appended to the first with AND/OR statements. We chose to stay simple and only use one condition, so we selected the Create SmartSet button. This brings up the New SmartSets dialog (see Figure 171) so the definition can be saved.
Selecting OK brings up the SmartSets tab of the Find dialog (see Figure 172 on page 143).
The new SmartSet is highlighted. The Cancel button is used to exit this dialog. We can now define the filter using the new SmartSet. Figure 173 shows the Filter dialog with the isRouter SmartSet selected from the SmartSets pull-down list.
At this point you could select the types of conditions that you would like to combine with the SmartSet to achieve the desired filtering. Again, the Save As... button is used to save the filter definition. Once the dialog is closed, the Event Browser shows the results of the new filter. Since we didn't change any other settings, all traps for routers will be shown (see Figure 174).
Figure 174. Browser Showing All Traps for the Routers in Our SmartSet
4.2.5.1 The Rules Editor, Rulesets and Templates The Rules Editor can be started by selecting Tools->Ruleset Editor... from the NetView pull-down menu or issuing the nvrsEdit command. The first method is shown in Figure 175.
When the Ruleset Editor is started, two windows are displayed:
The Ruleset dialog, which initially shows the Default.rs ruleset
The Templates dialog, which shows all templates that can be used to build rules
The Rules Editor uses the term node to refer to a node in a graphical rule; this should not be confused with network nodes. The Ruleset dialog is shown in Figure 176 on page 146.
This dialog is showing the default ruleset (/usr/OV/conf/rulesets/Default.rs). The only node in this ruleset is the Event Streams node, often called the pizza icon. This icon represents the stream of events from the trapd daemon. It is the starting point for any ruleset. Double-clicking this node shows the configuration options for the ruleset. The default action for events in any ruleset is Block; this means that if a node condition fails, the event will be blocked (or dropped from further processing). The other option, Pass, is the reverse of this. To build a ruleset, you will need the node templates, event attribute values and environment variables. The node templates are shown on the Templates dialog in Figure 177.
The icons on the Templates dialog represent nodes that are used for building rules. The nodes can be a decision node or an action node. These nodes are linked in a rule. As a general rule, if a decision node fails, processing does not continue to the next linked node(s). The nodes are as follows:
Event Attributes - Compares any attribute of the incoming event to a literal value. You can use this node to check for events generated by a particular device.
Trap Settings - Specifies a specific trap to be processed, identified by a pair of generic and specific trap numbers.
Thresholds - Checks for repeated occurrences of the same trap or of traps with one or more attributes in common. You can use this node to forward an event after receiving a specific number of the same event within a specific time period. Use this node with the Trap Settings node to identify a specific trap number.
Pass on Match - Compares some attribute of the event being processed with the attribute of all traps received in a specified period of time. Multiple attribute comparisons (up to ten) can be defined. You can stop processing when the first incoming event matches the criteria you defined, or you can continue processing to find matches in all incoming events received in the specified time period.
Reset on Match - Compares some attribute of the event being processed with an attribute of all traps received in a specified period of time. This node is similar to the Pass on Match node, except that if a match is found, the event is not passed on to the next node in the ruleset and processing stops.
Set State - Sets the correlation state of an object in the NetView object database. The current state is updated in the corrstat1 field in the object database. The previous value in the corrstat1 field is moved to the corrstat2 field. This process continues until the current state and as many as four previous states are stored in the object database.
Compare MIB Variable - Compares the current value of a MIB variable against a specified value. When a trap is processed by this node, the ruleset processor issues an SNMP GET request for the specified MIB value.
Set MIB Variable - Issues an SNMP SET command to set the value of a variable in the MIB representing any network resource.
Check Route - Checks for communication between two network nodes and forwards the event based on the availability of this communication. For example, you can use this node to check the path from the manager to a device before forwarding a node down trap.
Forward - Forwards the event to applications that have registered to receive the output of a ruleset. A trap that is processed through this node is marked so that it will not be handled by the default processing action specified for this rule.
Block Event Display - Prevents events from being forwarded to the Event Display application. Use this node if you have changed the default processing action to pass (forward) events to the Event Display application and you do not want to forward events that meet specific conditions.
Resolve - Forwards a message to all registered applications, such as the Event Display application, when a previous event has been resolved. The receiving application determines how to handle a trap that has been forwarded from this node. You can use the Resolve node to delete an interface or node down event from the Event Display application when an interface or node up event is received.
Query Database Field - Compares a value from the NetView object database to a literal value or to a value contained in the incoming event. For example, you can use this node to check whether the originating device is a router.
Set Database Field - Sets the value of any non-boolean NetView object database field. Fields that have TRUE or FALSE values cannot be changed.
Query Database Collection - Tests whether a node is a member of the specified collection. This allows rules to take advantage of collections of nodes and thus not have to hardcode nodes into rulesets.
Query Global Variable - Queries the value of a global variable that has been previously set using the Set Global Variable node.
Set Global Variable - Sets a variable for use within the ruleset itself. For example, use this node to set a flag whose value will be checked later in the ruleset using the Query Global Variable node.
Override - Overrides the object status or severity assigned to a specific event and updates applications that have registered to receive the output of the ruleset. The Event Display application is registered to receive the output. For example, you can use this node to change the severity to Major when a node down event is received for a router.
Action - Specifies the action to be performed when an event is forwarded to this node. Actions are operating system commands, shell scripts, executables or NetView commands.
Inline Action - Specifies the action to be performed when an event is forwarded to this node. Unlike a command specified in an Action node, a command specified in an Inline Action node is not sent to the actionsvr daemon. Instead, the command is executed immediately, and processing continues to the next node if the action's return code matches the return code you specify within the specified time period.
Pager - Issues a call to a pager that has been defined in a NetView user profile. The paging utility must be configured before this node will work.
4.2.5.2 Event Attribute Values
These nodes are used to build rules. A number of the nodes require the use of event attribute values. These are the following:
Severity - Specifies the severity of the event, such as Critical.
Category - Specifies the type of event, such as a status event.
Source - Specifies the internal component of NetView that generated the event, such as netmon.
EnterpriseID - Specifies the enterprise that sent the trap.
Origin - Specifies the host name generating the trap. The management station (in our case rs600015) is the origin for traps generated as a result of NetView polling operations.
Generic - Specifies the generic trap value defined by SNMP.
Specific - Specifies the specific trap value defined by SNMP.
sysObjectID - Specifies the MIB object describing the agent's hardware, software, and so forth.
sysUpTime - Specifies the MIB system up time since the agent was started.
Community Name - Specifies the community name.
1 to 50 - Trap variables, which vary by enterprise or trap. The NetView internal traps use variable bindings 1 through 5. The NetView traps are detailed in the TME 10 NetView for AIX Version 5 Release 1 Administrator's Guide, SC31-8440.
These event attributes are also made available to action scripts as environment variables (for example, $NVE, $NVA, $NVG, $NVS, $NVT, $NVC and $NVATTR_1 through $NVATTR_n, as used in the script in Figure 192 on page 158). These environment variables are frequently used with the Action node and the Pager node scripts. For example, an Action node may be calling a trouble ticket application with parameters; the $NV* variables would be used to build the trouble ticket parameters. The following sections describe, by example, the use of the template nodes, event attribute values and environment variables.
Opening a ruleset from the Ruleset dialog displays the Open dialog as shown in Figure 179 on page 150.
The sample rulesets are in the /usr/OV/conf/rulesets directory. The sampcorrIuId.rs file is selected and the dialog closed with the OK button. The loaded ruleset is shown in Figure 180.
This ruleset correlates interface events: for an Interface Up event it will find any corresponding Interface Down events from the same host and close them. This is implemented in the rule by:
Using two Trap Settings nodes to filter out anything but the Interface Up and Interface Down events.
Using a Pass on Match node to pass through any Interface Down events from the last ten minutes whose hostname equals the hostname from the Interface Up event.
Using a Resolve node to discard the Interface Down events that were passed through from the Pass on Match node.
The node functions and dialogs are explained in detail in the next few paragraphs. The dialog boxes for each node are opened by double-clicking on the node.
The dialog box for the top Trap Settings node is shown in Figure 181.
Figure 181. Trap Settings Node Dialog for Interface Down Event
As with all nodes in a ruleset, if the node fails, rule processing ends. If a node passes the event, the event is passed to the next node. In this case we are checking the event against a set of Trap Settings. The netView6000 Enterprise Name is selected giving a list of specific events. The IBM_NVIDWN_EV (Specific 58916867) event is highlighted and the Comparison Type is Equal To. So the node passes if the incoming event is equal to a netView6000 event with a specific trap number of 58916867 (that is, Interface Down). The same logic applies to the other Trap Settings node, shown in Figure 180 on page 150. This means that only Interface Down and Interface Up events are sent to the Pass on Match node (see Figure 182 on page 152).
The dialog lists the comparisons made between the two events. There is one comparison: is attribute 2 from Event 1 equal to attribute 2 from Event 2? You can change the attributes by clicking the Select... button to the right. Attribute 2 is variable binding 2 for NetView internal traps, that is, the hostname. The ruleset dialog (Figure 180 on page 150) has two arrows from the Trap Settings nodes, labelled Input 1 and Input 2. Thus the Interface Down event will be Event 1 from Input 1 and the Interface Up event will be Event 2 from Input 2. The event retention is the time the first event (Interface Down) will be held to wait for the second event (Interface Up). If an attribute match is found between the two events within the specified period of time (10 minutes), processing for the first event continues to the next node in the ruleset (the Resolve node). Otherwise the node fails and the event gets the default action (Display Event). The match on multiple events flag can be set to Yes or No. If it were set to Yes, all Interface Down events that match in the ten-minute period would be sent to the Resolve node. It is set to No, so only the first Interface Down event will be resolved. This completes the first example. The second example will expand on it.
The second example adds an Inline Action node to the ruleset. The node is added by selecting it from the Templates window; this is the same procedure for adding any node to a ruleset. Selecting the node starts the Inline Action dialog box (see Figure 184 on page 154).
The fields that can be entered on this dialog are:
The command to be run, in this case /usr/local/bin/logevents.sh.
The Wait Interval field, which is used to specify a delay before the command is run.
The command exit code comparison, which specifies how the exit code is to be used. In this case we want Exit Code equal to zero to be the valid condition.
Clicking OK adds this node to the ruleset (Figure 185).
The new node has to be placed between the Pass on Match node and the Resolve node. This is done by deleting the existing line joining the two and adding lines to insert the new node. Figure 186 shows the menu options, Edit -> Delete Line, to remove a line.
When this option is selected, the mouse cursor changes to an eraser symbol. This is placed over the line to be deleted and the left mouse button is clicked. To add a new line, the Edit -> Connect Two Nodes menu option is used. The first node is clicked and the cursor changes symbol; the second node is clicked and a line is drawn between the two nodes. This process is repeated to join the Inline Action and Resolve nodes. Following this, the layout should be refreshed using the Edit -> Refresh Layout menu option; the ruleset is then redrawn. The result is shown in Figure 187 on page 156.
Before the new ruleset is used, it must be saved. If you use the File -> Save menu options, you will overwrite the existing ruleset. The other option is to use File -> Save As. In the Save As dialog (Figure 188), the new ruleset name is entered (logevents.rs).
The ruleset is now ready to be used. The final section of this chapter shows how to load and test the ruleset under AIX NetView.
4.2.8.1 Loading and Testing Rulesets in AIX NetView
Rulesets can be used for a number of functions in NetView, including building dynamic workspaces. To create a dynamic workspace with a ruleset, follow the steps listed in 4.2.4.3, Creating a Dynamic Workspace on AIX NetView on page 129. Instead of specifying a filter, you can specify a ruleset. This is done by clicking on the Rules List button at the top of the Dynamic Workspace dialog (see Figure 152 on page 130). The appropriate ruleset is specified in the Correlation File Selection dialog (Figure 189).
When the Correlation File Selection dialog and Dynamic Workspace dialogs are closed, the new workspace will be shown. The new ruleset can be tested by selecting the Diagnose -> Send event to trapd daemon... option from the NetView server icon in the Tivoli desktop. The Send events to trapd daemon... dialog is shown in Figure 190.
The event details are entered on the dialog, including:
Number of events to be sent
Type of event
Node name for event
Event source character
Event text description
Event data
If the ... button next to the Type of Event field is selected, a list of available choices is presented in a dialog (see Figure 191).
An Interface Down event and an Interface Up event were generated on node edwardsd. The Interface Down event is now gone from the IF_UP_DOWN_Log workspace. Before it was resolved, however, the inline action ran the script /usr/local/bin/logevents.sh. This script is shown in Figure 192.
#!/bin/sh
#
# Log Interface Up and Interface Down events
#
LOGFILE="/tmp/logevents.log"
echo "---------------------------------------------" >> $LOGFILE
echo Enterprise ID: $NVE >> $LOGFILE
echo Agent Address: $NVA >> $LOGFILE
echo Generic Trap: $NVG >> $LOGFILE
echo Specific Trap: $NVS >> $LOGFILE
echo Time Stamp: $NVT >> $LOGFILE
echo Community Name: $NVC >> $LOGFILE
echo VarBind Attribute 1: $NVATTR_1 >> $LOGFILE
echo VarBind Attribute 2: $NVATTR_2 >> $LOGFILE
echo VarBind Attribute 3: $NVATTR_3 >> $LOGFILE
echo VarBind Attribute 4: $NVATTR_4 >> $LOGFILE
echo VarBind Attribute 5: $NVATTR_5 >> $LOGFILE
echo >> $LOGFILE
exit 0
Figure 192. logevents.sh Script
It dumps the environment variables into the /tmp/logevents.log file. The last entry in this file is as shown in Figure 193 on page 159.
Enterprise ID: 1.3.6.1.4.1.2.6.3.1
Agent Address: rs600015.itso.ral.ibm.com
Generic Trap: 6
Specific Trap: 58916867
Time Stamp: Wednesday 18 November 1998 19:42:59
Community Name: public
VarBind Attribute 1: 2
VarBind Attribute 2: edwardsd
VarBind Attribute 3: edwardsd is down
VarBind Attribute 4:
VarBind Attribute 5: topo_db
Figure 193. Logged Interface Down Event
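While testing, the same log can also be watched from an AIX shell; a small sketch using the path and node name from the example above:
tail -f /tmp/logevents.log
# or show only the entries logged for one node:
grep "VarBind Attribute 2: edwardsd" /tmp/logevents.log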
Ensure that the community names for all gateways between the selected nodes are correct. To monitor interfaces or interface traffic on a selected node, the node must support SNMP.
If input and output data on interface traffic is abnormally high, you might have a routing problem, an overloaded interface, or an overloaded gateway. In this case, select Test -> Locate Route to see if there are any alternate routes.
With this menu option you can make a quick check on the traffic coming in or going out on all interfaces of a given node. You can quickly check how the data is changing by clicking the Restart button.
MIB Browser
With the MIB Browser you can read all MIB values of a given node, one MIB object at a time. To retrieve special MIB objects, be sure the node-specific MIB is loaded. Select Tools -> MIB -> Browser from the menu to launch the MIB browser. The MIB browser can retrieve various incoming and outgoing traffic information about the node and its interfaces. An example of interface information for a 2210 router is shown in Figure 197 on page 161.
Data Collection
To see trends, for example incoming frames on a given interface, you can create a data collection. The collected data can be viewed in table format or in a graph. We show an example of creating such a data collection on an interface of a router and displaying the data in a graph. Select Tools -> MIB -> Collect Data from the menu. The Collection Wizard will start up. Select the MIB object you want to collect information about.
After selecting the given MIB you can proceed by clicking on the Next button to select the node to collect the data from.
Select the node and click on the OK button. After setting up the collection it will have the status of To Be Collected. It is not yet collecting data, only marked to be started.
To make the collection start gathering data you will have to restart the SNMPCollect daemon.
After restarting the daemon the status of the collection changes to Collecting.
After a short period of time click on the Data button. This opens a new window with the collected data. Click on the Apply button to refresh the data.
To see the collected data in graph format, click on the Graph button. In the graph window, all data is shown as it is collected, and the display is refreshed automatically.
The collection runs until it is turned off or deleted. If a collection is no longer needed, remember to disable it, since the data collection process increases network traffic.
MIB Tool Builder
To monitor performance and traffic on other vendors' devices, or to monitor MIB variables other than MIB-2, create MIB applications by selecting Tools -> MIB -> Tool Builder from the menu. In the example below, we built a tool called SPEED to retrieve the port speeds of a given node.
Figure 205. Tool Wizard - Defining Basic Parameters for Custom Built Tool on NT
We called the new tool SPEED. We want to execute it from the menu path Monitor -> Other -> Speed. Click on the Next button to select the MIB objects.
We selected the ifSpeed MIB object to be retrieved from the selected nodes. Click on the OK button to continue. Figure 207 on page 166 shows the created tool in the Tool Wizard.
Now we can try the tool by selecting a node and executing the tool, which can be found under Monitor -> Other -> Speed. The results of the SPEED custom-built tool can be seen in Figure 209 on page 167.
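The same ifSpeed values can also be retrieved outside the GUI by walking the standard MIB-2 object .1.3.6.1.2.1.2.2.1.5 (ifSpeed). The sketch below assumes a generic command-line snmpwalk tool is available; the option syntax varies between implementations, so treat the flags as an assumption and check your tool's usage:
# net-snmp style walkers:
snmpwalk -v1 -c public <node> .1.3.6.1.2.1.2.2.1.5
# older walkers typically take: snmpwalk <node> <community> <object-id>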
[Network diagram: the central NetView Server 5.1 with Framework 3.6 and the Mid-Level Manager for AIX runs on rs600015 (9.24.104.215, AIX 4.2); an unattended MLM and two routers connect the subnets 9.24.104.0, 9.24.105.0 and 9.24.106.0.]
During our setup and testing we also had a couple of NetView clients, on separate stand-alone AIX and NT boxes. Although this has no real bearing on the operation of MLMs in a multi-managed NetView environment, it is important to realize that clients consume valuable resources; the number of clients you use should therefore be balanced against the available resources, as per the NetView prerequisites.
In our environment it probably makes no sense to have the MLM daemon operating on the central server, because we do not have a large network to manage and thus do not expect a great volume of SNMP traps. However, we set it up this way to outline the steps required to configure it correctly in a typical customer network environment.
Figure 211. MLM and NetView Server Listening to Different Ports for SNMP Traps
Use smit to configure the port for the NetView server by selecting the following menu options: Communications Applications and Services -> TME 10 NetView -> Configure -> Set options for daemons -> Set options for event and trap processing daemons -> Set options for trapd daemon. Then configure the following options to suit your environment. In our case, the port used to receive traps was changed to 165. The queue size can be increased up to 4096 if a large volume of incoming traps is expected.
Port used to receive snmp traps over udp:[165]
Port used to receive snmp traps over tcp:[165]
Set Trapd connected applications queue size:[500]
Please note that the MLM listens on port 162 by default and this cannot be changed. After making the necessary changes to the ports, cycle both the NetView server and the MLM daemon using the following commands. Make sure you have closed down all NetView GUIs before stopping the NetView daemons; otherwise, database corruption could occur:
1. Stop the NetView daemons using the command ovstop.
2. Stop the Mid-Level Manager daemon using the command:
/usr/etc/smmlm stop
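Only the stop commands are listed above. As a sketch of the complete cycle on a machine that runs both the NetView server and the MLM (ovstart, ovstatus and netstat are standard commands; restarting midmand with smmlm is described in 5.4.1.4):
ovstop                  # stop the NetView daemons (all GUIs already closed)
/usr/etc/smmlm stop     # stop the MLM daemon (midmand)
/usr/etc/smmlm          # restart midmand with the saved startup options
ovstart                 # restart the NetView daemons
ovstatus trapd          # confirm trapd is running again
netstat -an | grep 165  # optional: confirm a listener on the new trapd port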
Figure 212. MLM Trap Forwarding Port Set to That of the NetView Server
In a similar fashion to configuring the port used to receive SNMP traps, we can also use smit to configure trap forwarding. As an alternative (and recommended, as it will highlight any problems), performing a demand poll from the central server to the MLM being configured will set the MLM's trap destination port to that of the NetView server. Note that in the case where the NetView server and the MLM reside on the same machine, the demand poll will be performed and sent to the same location:
Figure 213. Demand Poll the MLM to Set Trap Destination Port
To define the role of the MLM, use smit again and select Communications Applications and Services -> TME 10 NetView -> Configure -> Set options for daemons -> Set options for topology, discovery, and database daemons -> Set options for netmon daemon. Then the following options can be altered:
Use Systems Monitor MLM for polling?      yes
Use Systems Monitor MLM for discovery?    no
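Depending on how the change is applied, the netmon daemon may need to be recycled before it starts delegating polling to the MLM. A sketch using the standard NetView daemon-control commands (check whether your SMIT panel already restarts the daemon for you):
ovstop netmon
ovstart netmon
ovstatus netmon    # verify that netmon is running again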
On the Tivoli desktop select the policy region with the NetView server. Use the right mouse button on the NetView server icon to display the pop-up menu and select Administer MLM from the submenu.
Select Install/Control the Mid-Level Manager (MLM) on a node... and the related window is displayed. The required fields on this window are marked with an asterisk.
The default operation is to install with a community file (file ovsnmp.conf from the NetView server). Other operations include alternate install options, install PTFs, working with flags (start options), MLM status check and starting and stopping MLMs. Required fields for all operations are the MLM host name/IP address and its root password.
Figure 216. MLM for AIX Install: Tivoli Desktop - Install/Startup Options
The minimum required fields are the full path name of the source install image (on the NetView server) and the remote directory for the install image (on the target MLM node). You can change any of the other install/startup options. Select OK and the MLM will be installed. Once installation is complete, the midmand daemon is automatically started with the given startup options.
5.4.1.3 MLM for AIX Remote Install from NetView
You can also install MLM for AIX remotely from an active NetView GUI, specifically from a NetView submap. Select the remote AIX node which is to be the target of the MLM installation. From the NetView Administer pull-down menu, select TME 10 NetView MLM.
Select Install/Control MLM from map and the Node and Password window is displayed.
Figure 218. MLM for AIX Install: NetView - Node Name and Password
The node name is already filled in. Enter the node's root password and click OK.
On this screen the node name is fixed, along with the password that was entered on the previous screen. Here you can select the remote operation. The default operation is to install with a community file. The remote operations list is the same as for the Tivoli desktop MLM install procedure, which includes alternate installation options, installing PTFs, working with flags (start parameters), MLM status check, and starting and stopping the MLM. When doing an installation, be sure to verify that the two required fields, the full path name of the source install image and the remote directory for the install image on the target MLM node, are correct. You can change any of the other install/startup options. Select OK to continue and install the MLM.
If you select the SMIT option from the TME 10 NetView MLM submenu, a SMIT window is opened as a shortcut. You can then select either MLM local operations or Remote operations from this screen. Select Remote operations to display the remote operations menu. Select Install/Control the Mid-Level Manager (MLM) on a remote node from this menu and the MLM password window is displayed, but this time there is no host name/IP address. You have to enter the host name/IP address and its password. Next the MLM install/control screen is displayed and the procedure is the same as described above.
5.4.1.4 Starting and Stopping the MLM for AIX
You can start and stop the MLM daemon (midmand) from a Tivoli desktop, a NetView GUI, or the AIX command line.
On the Tivoli desktop, select the policy region with the NetView server and select Administer MLM --> Install/Control the Mid-Level Manager (MLM) on a node... from the NetView server submenu. Select the MLM host and enter its root password on the Install/Control MLM window. To stop the MLM, select the Stop midmand operation. To start the MLM, first verify the startup options in the window and then select one of the Start with... operations to start the daemon.
From a NetView GUI, select the MLM node on a submap, then select Administer --> TME 10 NetView MLM --> Install/Control MLM from map. After entering the root password for the MLM node, select Stop midmand from the Remote operation option menu to stop the MLM. To start the MLM, first verify the startup options in the window and then select one of the Start with... operations to start the daemon.
You can use the AIX command line to stop and start the midmand daemon on an MLM node. Use the smmlm command in the /usr/etc directory. To stop the MLM, issue the command smmlm stop. To start the MLM, issue the command smmlm to restart midmand with the saved startup options. You can also use flags with this command to change the startup options.
5.4.1.5 MLM for AIX Status
You can verify that the MLM is running by checking the status of the midmand daemon from a Tivoli desktop or a NetView GUI.
On the Tivoli desktop, select the policy region with the NetView server and select Administer MLM --> Install/Control the Mid-Level Manager (MLM) on a node from the NetView server submenu. Select the Status operation, select the MLM host, and enter its root password on the Install/Control MLM window.
From a NetView GUI, select the MLM node on a submap, then select Administer --> TME 10 NetView MLM --> Install/Control MLM from map and enter the root password for the MLM node. Then select Status from the Remote operation option menu.
5.4.2.2 Starting and Stopping MLM for NT
Starting the MLM for NT is simply a matter of starting the SNMP Service, which is usually started at system startup. By using the midmand.exe utility, the SNMP Service can be started and the MLM extension agent's configuration changed. The MLM for NT configuration can be changed by using a different configuration file and issuing the following command, where the file midmand.config can be substituted with any other appropriate configuration file:
midmand -c "midmand.config"
Note that the \Program Files\Tivoli\MLM\log\smMlmCurrent.config file will be used as a default where no other file is specified.
One of the problems we encountered when trying to get the MLM for NT to receive traps was a conflict with another NT process that also listened for traps on port 162. Other products, such as IBM Nways RouteVision Manager (an application to manage IBM's 8274 switches), have their own SNMP Trap Service that listens on port 162. We had to disable this function in the Nways software.
5.4.2.3 MLM for NT Status
You can verify that the MLM is running by checking the status of the SNMP service and the MLM Extension Agent from the command line. Issue the command midmand -status. The following shows the output:
midmand -status
The SNMP Service is running
The MLM extension agent is running
The MLM extension agent is using configuration file: "C:\Program Files\Tivoli\MLM\log\smMlmCurrent.config"
The snmpd configuration file, /etc/snmpd.conf, must exist on each AIX MLM to allow NetView to talk to the MLM. The following is an extract from this file:
# FILE: /etc/snmpd.conf
community public
community private 127.0.0.1 255.255.255.255 readWrite
community tivoli 9.24.104.215 255.255.255.255 readWrite
community system 127.0.0.1 255.255.255.255 readWrite 1.17.2
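If this file is edited on an AIX MLM node, the snmpd daemon has to re-read it before the new community name takes effect. A minimal sketch using the standard AIX SRC commands (this step is not part of the extract above, so verify it for your environment):
refresh -s snmpd
# or, if a refresh is not sufficient:
stopsrc -s snmpd && startsrc -s snmpd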
In the above definitions, you can see that we have set a community name of tivoli and allowed the host 9.24.104.215 to have readWrite privileges and thus make requests of this snmpd agent. No other host can use this community name for this snmpd agent. Note that a community name of public exists with no hostnames or permissions specified. This effectively gives any host readOnly access for the community name of public.
5.4.3.2 SNMP Configuration of the NetView Server
The setup here affects outgoing SNMP requests, whereas the SNMP setup of the system discussed above affects SNMP requests coming into the machine. From the NetView GUI, select Options -> SNMP Configuration and the following screen will be displayed:
NetView checks the rules defined on this screen, starting from the top (that is, Specific Nodes) and moving down to the default SNMP configuration, to determine the SNMP settings to use for a node. For our setup, we defined a read and write community name of tivoli for the host of the NetView server (rs600015, or 9.24.104.215). This was necessary because NetView will be issuing SNMP sets to the MLM, which resides on the same machine, and the MLM has been set up with read/write access for a community name of tivoli.
When the NetView server sends SNMP requests to other hosts (which don't appear as Specific Nodes in its SNMP Configuration table), it will use the default, or Global Default as we have it defined. This default has a read community name of public but no write community specified; the write community, by default, is set to public. Each MLM should be included in the NetView server's SNMP Configuration table as an individual host in the Specific Nodes section. The file /usr/OV/conf/ovsnmp.conf contains the community name definitions. This ovsnmp.conf file should not be edited manually, as it is simply a copy of the information contained in the SNMP configuration database. The MLM Configuration Application, smconfig, uses the ovsnmp.conf file.
5.5.1 Consideration
Status polling is usually frequent, typically every 5 to 15 minutes, in order to have a current picture of the network status. In a large network of thousands of nodes spanning WANs, these status polls can become a large overhead, especially at critical nodes such as central routers and relatively low-speed WAN links. Therefore, it is good practice to assign this function to an MLM node on each subnet, whether local or remote, to do the actual status polling and send topology changes to the central NetView server. On the other hand, the rate of new node discovery is very high when NetView is first started and then usually slows down in most networks. NetView has a very
efficient discovery algorithm that can be set to fixed intervals or set to dynamic. The dynamic time interval varies depending on the number of new nodes per discovery cycle: if many new nodes are discovered, the interval is shortened; if few or no new nodes are discovered, the interval is increased. The maximum dynamic interval in our stable network was 24 hours. Node configuration changes are usually infrequent; the usual interval for checking is between 1 and 7 days, scheduled at a time of low traffic. Therefore, common practice is to leave new node discovery and node configuration checking with NetView and move the node status polling and trap filtering to the MLMs.
From the MLM managers collection submap select the MLM node, ntmgt2 in our case, and then start the smconfig application from the NetView menu bar by selecting Tools --> MLM Configuration Application.
If the Agent Configuration window has a list of options and the message area has a message stating that an MLM has been detected, you have access to an active MLM. If the Agent Configuration window is empty and the message area contains the warning TME 10 NetView Agent(s) not found. Community name could be invalid for this Name or IP Address, first check that the host name or IP address and the community name are correct. If these are correct, then verify that the MLM daemon is running.
Some of the options listed are used to set configuration options and the others are for displaying information. The most important options to check are the trap reception and trap destination settings. To view an option, select it and click View/Modify. The following can be set using the GUI:
The MLM Administration Table is an area where administrators can write, update and delete notes and messages; it is a notepad for the MLM.
The MLM Alias Table contains a list of the aliases known to this MLM. An alias is a group of up to 24 nodes or aliases (nested aliases). Use Start Query to see the contents of an alias and how it resolves in the case of nested aliases. Alias definitions can be sent from the central NetView, or created at the MLM node. Aliases are used in defining thresholds, filters, trap destinations and status monitoring. You can define and modify alias groups, but you should not modify the ones sent by NetView itself. These are labeled NV<NetView IP Address>.<sequence number>.
MLM Data Collection Settings contains the startup options of the data collection file, midmand.col by default, which can be modified here.
MLM Data Collections Log is used to view the data collected by the MLM as a result of the thresholds that were defined.
The MLM Filter Table is used to add and modify trap filtering for the MLM. All traps that are received by the MLM pass through this rule table. If there is a match, then the defined action is taken, that is, forward or discard the trap. If no match is found, then the default action is taken as defined under MLM Trap Reception Settings.
MLM Node Discovery Settings is used to view and set the new node discovery options for the MLM if this has been enabled at the central NetView as described in section 5.3.3, Defining the MLM's Role on page 172. Note that some of the options are not available for MLM for NT, for example sensor discovery. See the MLM for NT release notes for details.
The MLM Node Discovery Table lists the nodes discovered by this MLM if node discovery is enabled.
MLM Program Description lists details about the installed MLM program.
MLM Program Log Settings contains the startup options of the MLM log file, midmand.log by default, which can be modified here.
The MLM Status Monitor Table contains a list of the aliases used for status monitoring by this MLM. If node discovery is off, the aliases sent by NetView are listed here. You can query an alias to see the host resolution and the polling parameters. These are initially taken from the NetView SNMP Configuration tables and can be modified here.
The MLM Network Interface Status Table contains a list of the nodes being monitored by the MLM. You can select a node and click Start Query to display details about the status of this node.
The MLM Threshold and Collection Table is used to add and modify thresholds and the actions to be taken when a threshold is exceeded. Usually these are logged in the data collection log.
The MLM Threshold Arm Info Table lists the states of the thresholds defined for this MLM, including information about nodes that exceeded the thresholds.
The MLM Trap Destination Table lists the hosts to which the MLM will send traps. The host listed here is the central NetView node. This information was initially set when NetView detected that the MLM was running on this node, either at node discovery or at a demand poll/configuration check. You can add additional destination NetViews to this table if required. The MLM will send traps that it generates due to a status change, node discovery, or threshold event. It will also forward traps that it has received and that have not been filtered out by the filtering rules.
There is an additional filter for each destination: a mask that filters against the generic trap numbers. The mask is an 8-bit number, and the generic trap number is the bit position. A 1 in the position means send; a 0 means filter. The mask 11111110 (254) lets all traps through. For example, you can filter out all authentication failure traps by using the mask 11110110 (246). The following table shows the generic traps and how masks are made.
Table 2. MLM Config Trap Destination: Generic Trap Mask
Generic Trap   Trap Name            Bit Position   11111110 (254)   11110110 (246)
0              Cold start           0              1                1
1              Warm start           1              1                1
2              Link up              2              1                1
3              Link down            3              1                1
4              Authentic. failure   4              1                0
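If you prefer to compute the decimal mask value rather than read it from the table, the bit string can be converted directly, remembering that the leftmost bit corresponds to generic trap 0. A small sketch using bc:
echo "ibase=2; 11110110" | bc    # blocks only authentication failure: prints 246
echo "ibase=2; 11111110" | bc    # lets all traps through: prints 254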
MLM Trap Log Settings contains the startup options of the MLM trap log file, smtrap.log by default, which can be modified here.
MLM Trap Log is used to view the traps that have been received by the MLM.
MLM Trap Reception Settings is used to verify and modify trap reception and the default filtering action. The default filter action can be set to either sendTraps or blockTraps. Make sure that trap reception is enabled for both TCP and UDP protocols. If either of these has a status of Port Busy, then you must check and disable the application using the required port number. When you have solved the problem, check again to see that trap reception is set to enabled and that the status is also enabled.
The current configuration of the MLM is saved in the smMlmCurrent.config file on the MLM node and is used by default when the MLM is started. Changes made during operation are stored here. The full path to the file depends on the MLM node operating system:
AIX: /var/adm/smv2/log/smMlmCurrent.config
NT: <system drive>:\Program Files\Tivoli\MLM\log\smMlmCurrent.config
You can save a copy of the configuration file from the MLM Configuration application and then use this file at a later time when starting the MLM; for example, save a configuration file as a backup before making major changes. To do this, select File --> Save MLM Configuration/Reinitialize from the MLM Config menu bar. Then enter the full MLM configuration file name where the configuration is to be saved, select True from the Save selection menu, and click Apply.
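Before making major changes it can also be worth keeping a plain file copy of the current configuration outside the MLM Configuration application; a simple sketch for an AIX MLM (the path is from the list above, the backup file name is our own choice):
cp /var/adm/smv2/log/smMlmCurrent.config /var/adm/smv2/log/smMlmCurrent.config.bak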
Figure 229 shows the types of filtering that can be performed at the Mid-Level Manager. Here we set up an MLM alias name for all our PCs connected on the managed segment. From here we can filter any traps generated for these devices, such as Node Up. For tips on what to enter, select the Context Help button, then click on a specific field. Other filter options include:
Enterprise - Match against any enterprise ID, such as NetView6000
Generic/Specific Expression - Match against generic and specific trap identifiers
Variable Expression - Match against information contained in the trap
The traps received by the MLM can be viewed by using the MLM Configuration application and selecting the MLM Trap Log option. Click Refresh to update the display. You can see the node up trap for 9.24.105.122 (specific trap 12 is status up).
Next we can see the trap reception at the central NetView node, rs600015, by viewing its /usr/OV/log/trapd.log file. In the following figure you can see both the traps coming in from the MLM for node 9.24.105.122 and the two traps generated by NetView so that netmon will apply the status changes to the maps.
Next we can open the NetView Events display and see the actual events.
Once the event is registered, the status color changes for the node on the maps. Another file you can check is /usr/OV/log/netmon.trace. While the MLM is active, netmon will not monitor (ping) the subnet 9.24.105.
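Both files mentioned above can be checked quickly from an AIX shell on the NetView server; a sketch using the node and subnet from our example (the exact content of netmon.trace depends on the tracing options in effect):
tail -f /usr/OV/log/trapd.log | grep 9.24.105.122    # watch traps arriving for the MLM-managed node
grep 9.24.105 /usr/OV/log/netmon.trace | tail        # look for (the absence of) polling entries for that subnet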
If you are running TME 10 Framework Version 3.2, the following patch is required (it is not needed for Framework Version 3.6):
TME 10 Framework 3.2 Super Patch
If you wish to use the TIPN package in conjunction with the Tivoli Enterprise Console, you need to install the following patches, regardless of which supported Framework version you are using:
TME 10 Enterprise Console Patch 3.1-TEC-0012
TME 10 Enterprise Console Patch 3.1-TEC-0030
If your NetView runs under Windows NT, you need to install additional enabling software for NetView NT V5.1. You can find the required files under the NVNT directory on the TIPN CD-ROM:
Tivoli NetView Server 5.1 Enabler for Windows NT
Tivoli NetView Client 5.1 Enabler for Windows NT
Finally, you need to install the TIPN components themselves, depending on what integration services you would like to use:
Tivoli Framework Network Diagnostics for NetView Server
Tivoli Framework Network Diagnostics for NetView Client
Tivoli Framework Network Diagnostics
Tivoli NetView/TEC Integration Adapter for NetView Server
Tivoli NetView/TEC Integration Adapter for NetView Client
Tivoli NetView/Inventory Integration Adapter for NetView Server
Tivoli NetView/Inventory Integration Profile
The TIPN documentation states the Super Patch is included on the NetView 5.1 CD. This is not the case. The Super Patch was distributed as a separate CD.
We suggest that you install the TEC patches first. From the Tivoli Desktop menu, select Install->Install Patch and set the path to the patch directory using the Set Media or Set Media & Close buttons. You will find the patches on the Tivoli Integration Package for NetView CD-ROM under the directory /PATCHES. The patch, 3.1-TEC-0012, must be installed on the TMR server and the NetView server. In our environment the nodes were rs600015 and rs60008 (see Figure 233).
Next we installed the patch 3.1-TEC-0030 on the TMR server and the NetView server.
Once you have installed the TEC patches, proceed by installing the TIPN patch on all managed nodes, including the TMR server. This patch adds the necessary resources and name registry entries. You will find the TIPN patch in the same directory as the TEC patches. We suggest you install the patch on your TMR server first and then check for the new resources introduced by the TIPN patch. To verify the installation, issue the wlookup command to see if the required resources are successfully installed. Assuming your TMR server runs on a UNIX machine, you can enter wlookup -R. You should find three new entries called:
netstat
ping
traceroute
The output from our TMR server on rs60008 is shown in Figure 234.
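The same check can be done in one line (wlookup -R as above; the egrep pattern simply lists the three expected resource names):
wlookup -R | egrep 'netstat|ping|traceroute'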
After you successfully install the TIPN patch onto your TMR server, you need to install the patch on all managed nodes.
The TIPN components appeared as shown in Figure 235. We started installing the components as shown in Table 4.
Table 4. TIPN Installation of Components
Component: Tivoli Framework Network Diagnostics
Functions: CLI network diagnostics

Component: Tivoli Framework Network Diagnostics for NetView Server
Functions: Launch Tivoli desktop from NetView; discover/highlight managed resources; generate Web-based reports; access CLI network diagnostics from NetView

Component: Tivoli Framework Network Diagnostics for NetView Client
Functions: Same as above. You need to install the diagnostics for NetView Server first.

Component: Tivoli NetView/Inventory Integration Adapter for NetView Server
Functions: Gives network administrators access to system information stored in Inventory repositories. Allows Tivoli NetView information to be exported into Inventory repositories.

Component: Tivoli NetView/Inventory Integration Profile
Functions: Create and distribute NetView/Inventory profiles

Component: Tivoli NetView/TEC Integration for NetView Server
Functions: Adds the TEC Events menu option to the Monitor branch of the NetView pull-down menu

Component: Tivoli NetView/TEC Integration for NetView Client
Functions: Adds the TEC Events menu option to the Monitor branch of the NetView pull-down menu
Using the wrtraceroute command as shown in Figure 237, we used our Windows NT managed node wtr05097 as the starting point to issue a remote traceroute to a station that is more than one hop away from our network segment. The second wrtraceroute command had our AIX TMR server as the starting point.
The TIPN package will add a number of registration files to NetView and extend the NetView object database definitions by adding some fields. The installation will execute a number of NetView configuration steps, so you should shut down all EUIs prior to installing this product. Select Continue Install to install the component.
6.2.3.1 Testing the NetView Diagnostics for NetView Server
To test the package, you will need to bring up a NetView EUI. You should find a new menu entry called Tivoli on the NetView menu bar.
Launch the Tivoli desktop from NetView by selecting Tivoli->Tivoli Desktop->Launch Tivoli Desktop. Shortly after you make the selection, the Tivoli desktop will appear. To test Tivoli Network Diagnostics, we selected a running node and chose Tivoli->Tivoli Network Diagnostics->Remote netstat from the menu. The result is shown in Figure 240.
The remote netstat command may take a while to show up, because a netstat -a command is executed on the remote node by default.
We installed the TEC Integration package on the NetView server (rs600015).
6.2.4.1 Testing Tivoli NetView/TEC Integration
In order to check the installation of the TEC Integration package, you need to start the NetView EUI. The Monitor->Events menu entry now has a new submenu called TEC Events. This new menu entry allows you to display TEC events from NetView. To test this function, select a node from a NetView map and select Monitor->Events->TEC Events. If TEC has any events originating from that node, they will be displayed as shown in Figure 242 on page 203.
202
The NetView/Inventory Integration component consists of two subcomponents:
Tivoli NetView/Inventory Integration Adapter for NetView Server
Tivoli NetView/Inventory Integration Profile
The subcomponents are installed on different machines due to the different functions they provide. The integration adapter is the component that retrieves information about NetView discovered objects from the NetView server.
203
The integration profile is required to distribute profiles to NetView servers that will export information from NetView to the Inventory database. Before starting the installation, Tivoli Inventory must be installed and set up on the TMR server, the NetView server and the nodes to which you want to distribute the inventory profiles. You do not have to install all the TIPN components to use the NetView/Inventory integration. For instructions on how to set up your Inventory environment, refer to the TME 10 Inventory 3.6 User's Guide, GC31-8381.
6.2.5.1 Installation of NetView/Inventory Integration Adapter
The first step in the installation process is to install the Tivoli NetView/Inventory Integration Adapter for NetView Server. The product is installed with the Tivoli Framework product installation function. Start your Tivoli desktop and select Desktop -> Install -> Install Product. Select Tivoli NetView/Inventory Integration Adapter and the node(s) the product will be installed on. In our lab environment, it was installed on the NetView server rs600015. Select Continue Install and finish the installation.
6.2.5.2 Installation of NetView/Inventory Integration Profile
After successful installation of the integration adapter on the NetView server, we can install the integration profile on the TMR server, the RIM host and any other nodes that we need to distribute profiles to. In our case these are the TMR server rs60008 and the RIM host rs600028. The installation is done in the same way as for the integration adapter. Start by selecting the Desktop -> Install -> Install Product menu items from the desktop. Select the Tivoli NetView/Inventory Integration Profile and select the TMR server, the RIM host and any managed nodes as required.
6.2.5.3 NetView/Inventory Integration Configuration
A script is provided with TIPN that extends the Tivoli Inventory schema. It adds views as well as tables to the schema. Once this extension is performed, the data gathered from NetView can be treated like any other data managed by Tivoli Inventory. The first step is to create the tables and the views by running a script; which script you run depends on the RDBMS installed in your environment. Our RIM is connected to a Sybase RDBMS, so we used the Sybase version. All the tables and views are created in the same database as the existing Tivoli Inventory tables and views. Execute the script on the RIM host. The following is the command we used to create the tables and views:
isql -U tivoli -P tivoli -i nvinv_syb_schema.sql -o nvinv_syb_schema.log
For the list of tables and views that will be created, see Appendix B, TIPN Tables, Views and Queries on page 377. The SQL scripts are located in $BINDIR/TMF/TIPN.
204
The tables and views are created under the user tivoli (the user ID for the RIM). The output of the SQL script is directed to the log file nvinv_syb_schema.log. With the tables and views created, we added queries to the desktop. The query library is created with a script called nvinv_create_queries.sh. The script needs to be executed on a managed node on which the NetView/Inventory Integration Profile subcomponent is already installed. In our case that is the TMR server or the RIM host rs600028. The command used for the creation is the following:
./nvinv_create_queries.sh rs60008-region
The parameter of the command is the name of the region in which we would like to create the NetView query library. Also check that your TMR roles and the region's managed resources give you sufficient access rights.
This command will also create the icon in the given policy region.
205
Figure 244. The Policy Region after the Query Library Creation
With the query library created, we have configured our systems to use the NetView/Inventory integration component.
6.2.5.4 Testing the Inventory Integration
In this section we give you tips on what to check to confirm correct installation of the component.
Subcomponent installation: Check the product install window for any errors and check that the NetView/InventoryProfile resource was created. From the Tivoli desktop, click on a region with the right mouse button. From the object menu select Managed Resources. You should see the NetView/InventoryProfile resource on the right side in the Available Resources field.
Tables and views creation: Check the execution log file with the following command:
more nvinv_syb_schema.log
As you view the file you should not see any errors besides the following lines:
Cannot drop the table <table_name>, because it doesn't exist in the system catalogs.
These errors are due to the fact that, before a table or view is created, a drop command is executed to delete any previous version. Since this is the first time we executed the script on this node, it could not find any previous version of the table or the view. Another method of verification is to use an SQL select statement on the last table or view that was created. To do this, log on to the RDBMS with the tivoli user ID:
isql -U tivoli -P tivoli
206
go
The command should not give any errors, and you should see the table column names. To exit isql, enter exit at the prompt. With this procedure you can be sure that all tables were created. If the database is full, not all the tables may have been created; you can check the space usage of your database with the following command:
sp_spaceused
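As an illustration, a complete check might look like the following session. This is only a sketch: NV_NODE is one of the TIPN views referenced later in this chapter, the 1> and 2> prompts are the normal isql prompts, and the query output is omitted.

isql -U tivoli -P tivoli
1> select * from NV_NODE
2> go
1> sp_spaceused
2> go
1> exit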
Query Library: Go to the region where the library was created and open the library to see what queries have been created, and compare these with the content of the script.
Inventory Menu in NetView: Start the NetView console and from the menu bar select Tivoli. You should see the new Tivoli Inventory Queries item on the drop-down menu list, as shown in Figure 245.
Table 6. Object Database Fields Added by TIPN

Field (Type): Content
isTivoliServer: True/False
Tivoli Server Label: server's short name
Tivoli Server Port: port number
isTMAGateway: True/False
TMA Gateway Label: gateway's short name
TMA Gateway OID: OID
TMA Gateway Port (Integer32): gateway's listening port
isTMAEndpoint (Boolean): True if endpoint
TMA Endpoint Label (StringType): endpoint's name
TMA Endpoint OID (StringType): endpoint's OID
isTivoliMn (Boolean): True/False
Tivoli ManagedNode Label (StringType): managed node name
Tivoli ManagedNode OID (StringType): OID
Tivoli ManagedNode Port (Integer32): listening port
isTivoliPcMn (Boolean): True/False
Tivoli PcManagedNode Label (StringType): PcManagedNode name
Tivoli PcManagedNode OID
Tivoli PcManagedNode Port
Tivoli PcManagedNode Proxy
Tivoli Interp
Tivoli os_name
Tivoli os_version
Tivoli os_release
208
MenuBar "Tivoli" _T { <60> "Tivoli Desktop" <50> "Tivoli Network Diagnostics" <40> "Tivoli Discovery" <30> "Tivoli Reports" <25> "Tivoli Web Access" <20> "Tivoli Locate" <26> "Tivoli Odadmin" }
Figure 246. The New Menu Entry in Tivoli.reg
Locate the MenuBar "Tivoli" definition. This is the section where we will insert a menu entry for the odadmin commands. Figure 246 shows the new entry. We entered Tivoli Odadmin for both the label and the menu reference. You can enter any name or reference, but they must match the other changes you make to the Tivoli.reg file. The next step in extending the Tivoli menu is to define the actual menu entries and the commands you would like to execute.
Menu "Tivoli Odadmin" { "Odadmin odlist" f.action "odlist"; "Odadmin stats" f.action "odstats"; } Action "odlist" { Command " /usr/OV/bin/xnmappmon -cmd odadmin odlist"; } Action "odstats" { MinSelected 1; SelectionRule (isTivoliMn); Command "/usr/OV/bin/xnmappmon \ -commandTitle \" odadmin status of ${OVwSelection1} \" \ -cmd odadmin stats \ $(odadmin odlist |grep ${OVwSelection1} \ |awk { if ( $1 < 1000 ) { print $1} else { print $2} } )"; }
Figure 247. Submenu and Command Entries for odadmin
Figure 247 shows the two examples. First, define a submenu using the Menu keyword. The label for this submenu must match the entry you defined inside the MenuBar Tivoli definition. This submenu contains two entries: the first one is Odadmin odlist and the second one is Odadmin stats. Each of these entries has an action assigned, which you can see in the remaining lines. Actions are defined using the Action keyword. The label of the action must match the label you assigned to the f.action clause in your menu. Finally, inside the Action block, you need to specify all the information necessary to carry out your function. The second example is somewhat more complicated. It delivers the remote status of a managed node using the odadmin command. If we want to get information from a specific node, we need to specify that node. The best way to achieve this is to select it on a submap. So, the first entry in the action block is MinSelected 1, which causes this submenu entry to be grayed out if you don't have a valid node selected on a NetView submap.
209
The odadmin stats command only gives information about managed nodes. During the TIPN installation, a few object database fields were added by TIPN. You can find a list of the added fields in Table 6 on page 207. One of these fields is called isTivoliMn and is set for all Tivoli managed nodes discovered by NetView. To limit the command to managed nodes only, we use this field in the SelectionRule keyword. The last entry we need to specify is the command to be executed. The odadmin stats command requires the dispatcher number of the managed node as its only parameter. Unfortunately, NetView is not aware of Tivoli dispatcher assignments, but it can provide us with a hostname. Using the hostname, you can extract the dispatcher number with the odadmin odlist command and use the result as input to the odadmin stats command. After you change your Tivoli.reg profile, you need to exit the NetView GUI and restart it in order to read the registration files. Back in the NetView GUI, you will see the new menu entries when you click the Tivoli menu bar. Note that as long as no node is selected, the odadmin stats entry is grayed out. Figure 248 shows the output from the odadmin odlist command executed from the NetView menu.
Next we executed the odadmin stats command. The node needs to be selected and this node must be of type isTivoliMn. Selecting one of our managed nodes rs60028 activated the menu entry. The result of the odadmin stats command is shown in Figure 249 on page 211.
210
211
In addition, the discovery process analyzes the discovered resources and builds a number of collections containing the discovered resources in NetView. After the execution of the name registry discovery, you should have a set of new collections residing in your collections map, similar to Figure 251 on page 213.
212
If you have a look into one of the collections and restart the discovery process, you will notice the collection being rebuilt. We believe those collections are static, and you can only update them by re-running the Name Registry Discovery process. You can find the command that gets executed whenever you issue the Discovery menu option in /usr/OV/bin/. Its name is lstmr and it accepts an optional parameter. The arguments are TMR or TMA, where:
TMA - List TMA Gateways and Endpoints
TMR - List TMR ManagedNodes and PcManagedNodes
Our output for the lstmr command is shown below:
Tivoli Server
  instance label           =
  instance object          =
  oserv port               =
  interpreter type         =
  host interface           =
  operating system name    =
  operating system release =
  operating system version =
Figure 252. Name Registry Discovery Initiated from the Command Line
So, if you want your collections updated on a regular basis, we suggest you add a cron job (or an at job on Windows NT) that executes the lstmr command.
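For example, a crontab entry along the following lines on the NetView server would rebuild the collections every night at 2:00 a.m. (the schedule and the output redirection are just suggestions):

0 2 * * * /usr/OV/bin/lstmr >/dev/null 2>&1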
213
Note
Starting lstmr without parameters performs a complete name registry discovery. Issuing the command with one of the two optional parameters discovers the requested subset of your Tivoli resources. The command will first remove all the objects from your collections and only the requested information will be shown in the collection.
After the necessary preparations, you can make selections from the reports list. When you select an item on the menu, the Internet browser is invoked and the report is loaded. On UNIX platforms the only supported browser is Netscape; on Windows NT it can be Netscape or Internet Explorer. (The default browser is always invoked.) The data displayed in the reports is not updated unless you do a new discovery and a new report generation.
Table 7. Tivoli Reports
214
To generate reports, select the Tivoli Reports -> Generate Reports menu item. When you invoke this function, a window is opened and you will be able to see each table as it is created by the nvdbformat command. When the Restart button is highlighted, the execution has finished. Scroll down in the window to look for any errors encountered during execution. In the last line you should see the following: Done Generating Tivoli Reports.
Now we are ready to look at the reports; select the report Endpoints by Segment. Our Internet browser is started and the report file epXseg.html is loaded. All the generated report files are stored in the /usr/OV/tmp directory. In the loaded file you will find links; the first set of links is in the segment short list. By clicking on a segment, the browser jumps to the detailed information about that segment's endpoints.
215
From the IP Address column clicking on the link will load the NetView Field Information page shown in Figure 256 on page 217. (The NetView Web interface should be operational on the server to use these links.)
216
When selecting the link under the Hostname column the NetView Diagnostics page for the selected node will be loaded (Figure 257 on page 218).
217
6.3.4.1 Forwarding Events from NetView to TEC
To forward events from NetView to TEC, you need to perform a few configuration steps on both sides:
Configure NetView to send events to TEC
Import the NetView baroc file into the current rulebase used by TEC
Before you start sending events to TEC, you need to decide which events you would like to forward. You might not want to send every event from a large network to your enterprise console. In other words, we need to provide a filter. To apply a filter to the stream of events that go from NetView to TEC, you use the NetView Ruleset Editor. To demonstrate event forwarding, we define a simple rule that only sends Node Up and Node Down events to TEC. To start the Ruleset Editor, select Tools->Ruleset Editor from the NetView menu or enter nvrsEdit from a command line. In the edit window, click File->New. Then drag two Trap Settings blocks and one Forward block from the Templates window into your ruleset.
After connecting the blocks (or nodes) you will get a graphic representation of your ruleset (see Figure 258).
219
When you drag the Trap Settings node into the ruleset, the configuration window opens automatically and presents you with a dialog as in Figure 259. For one of the nodes, select Enterprise NetView 6000 1.3.6.1.4.1.2.6.3 and event IBM_NVDWN_EV Specific 58916865. For the other node, select the same enterprise and event IBM_NVNUP_EV 58916864. Before you save the ruleset, you need to change the behavior of the event stream source.
220
Using the right-hand mouse button click on the event stream node that represents all the events NetView is receiving and choose Edit. The default action for the event stream is normally set to Pass. With the default settings, the ruleset would forward all incoming events to TEC except the two we defined. So, set the default action to Block.
You can now activate the forwarding to TEC. Open the NetView SMIT menus by entering smitty nv6000. Then select Configure->Configure event forwarding to T/EC in the SMIT dialog. We entered the information as shown in Figure 262. Enter your event
221
server hostname and select your filter rule by pressing F4 and choosing the ruleset.
Note
If you want to edit a ruleset used for TEC forwarding later on, you need to stop and restart the NetView daemon nvserverd in order to activate your changes. To stop and start the daemon on a UNIX machine, enter ovstop nvserverd; ovstart nvserverd.
NetView will now forward events to TEC, but they are not displayed until the baroc file containing the NetView slots is added to your rulebase. The file is located in /usr/OV/conf and is called nvserverd.baroc. Import this file into the active rulebase using the Tivoli GUI or the command line as follows:
wimprbclass nvserverd.baroc <rulebase>
wcomprules <rulebase>
wloadrb -u <rulebase>
Note: The baroc file tecad_ov.baroc was already installed in our rulebase. In addition, you might have a look at /usr/OV/conf/nvserverd.rls. This is a TEC ruleset that correctly correlates Node Up/Down and Interface Up/Down events. To test the forwarding of events, you can issue a Node Down event using the event command: /usr/OV/bin/event -e NDWN_EV.
This will generate a NetView event, which then shows up in TEC as shown in Figure 263. If the nvserverd.rls TEC rule is activated and a Node Up event is sent using the command /usr/OV/bin/event -e NUP_EV, TEC will close the previous Node Down event.
6.3.4.2 Display a NetView Submap Based on a Selected TEC Message
TIPN allows you to display a NetView submap based on a selected TEC message. To activate this feature, you need to set up the following:
1. The dispsub process, which handles TEC submap requests, must be up and running. Normally this process is launched at NetView start-up. If not, you can start the process by clicking Administer->Start Application->dispsub from the NetView menu.
2. An X server must be running and the DISPLAY variable must be correctly set to your workstation.
Locate your TEC Event Groups dialog. Click Options->NetView Connections as shown in Figure 264.
Enter the name of your NetView server, press Enter to activate and click on Set & Close. If you use specific maps, you need to specify the NetView maps. We use the default map and do not need to specify a map in the NetView Connections Dialog.
223
Note
The NetView connection must be specified either as a fully qualified name, for example, rs600015.itso.ral.ibm.com or as an IP address in dotted decimal format, for example, 192.168.1.3. This finishes the TEC configuration and we can open a submap.
If you now select a TEC message (sent from NetView) and select Event->NetView Submap..., a NetView submap with the object that caused the TEC event highlighted will be displayed, as shown in Figure 266. In case it does not work as expected and the submap does not show up, verify that the dispsub process is running. If not, it can be started from the pull-down menu by selecting Administer->Start Application->dispsub. TEC sends the request to display a submap to NetView in the form of an SNMP trap. By default, this trap is configured as Log Only, which means it does not appear in the NetView event console. If you need to verify that the trap is correctly sent by TEC, you can reconfigure the event definition. In the NetView menu, select Options->Event Configuration->Trap Customization: SNMP ....
224
As in Figure 267, select netView6000 in the Enterprise list and look for the IBM_DISPSUB (59179073) event definition. Select the event and click on Modify to modify the event.
225
In the Modify Event dialog, change the Event Category field from Log Only to Application Alert events. In order to get all passed parameters displayed, add the $* parameter to the Event Log Message field. Select OK then Apply and Cancel to activate the changes.
When we now display a submap based on a TEC message the event appears in the events window (see Figure 269).
226
Figure 270 shows the TME_Managed collection and our nodes. To build a new event group for this collection, we executed the following command:
collToEg -c TME_Managed
From the Tivoli desktop, right-click the EventServer icon. Select Event Groups. The new event group should appear.
227
Click Edit in the Event Group Management menu. The event group consists of all the nodes residing in our TME_Managed collection.
Finally, assign the event group to a TEC console. Click on the TEC console icon with the right mouse button and select Assign Event Groups and move the event group to the assigned group window.
228
6.3.5.1 Execute NetView Tasks from TEC To complete the integration between TEC and NetView, there are a number of tasks that can be executed in order to get NetView-related information.
In Figure 275, we selected a TEC message and selected Task->Execute on Selected Event from the TEC menu bar.
229
The TEC Tasks dialog as shown in Figure 276 is displayed. Here we selected NetView T/EC Task which shows the available tasks. We selected TCP connections and clicked on Select task. Finally we launched the task by clicking on the Execute button. The result is shown in Figure 277, which shows all the TCP connections from that node.
230
For profile distribution the administrator should also have either the super, senior or admin role. To create a profile we first opened the rs60008-region.
In addition to the standard objects, the NetView_Queries query library and the NetView server region exist. These are created during the installation process. When we installed Tivoli Inventory, we also created a profile manager called Inventory. We will create all our profiles within the Inventory profile manager.
231
First we verified the managed resource settings for the region by selecting Properties -> Managed Resources from the menu bar. If you have the NetView/InventoryProfile resource in the available resources window, select it and with the arrow buttons transfer it to the Current Resources window and click on Set and Close.
Next, open the profile manager and create a profile for the NetView/Inventory component. The only subscriber should be the managed node where you have NetView installed. To create a profile select the Create -> Profile menu items from the menu bar.
From the list select NetViewInventoryProfile. We entered NetView Inventory in the label box and clicked Create & Close.
232
To open the profile double-click on the profile icon using the left mouse button.
To make selections on the collectable data click on Add. This will open a new window (see Figure 283 on page 234).
233
For our initial example, we want to read all available information from Tivoli NetView into Tivoli Inventory. To do so, we first click on Interfaces and then mark all available entries in the Fields available for Export section.
Then we click the right arrow button and click on Add. This creates the first entry in the NetView Inventory Profile.
234
We repeated these steps for Networks, Nodes and Segments and then clicked on Close. Four entries have been added to the NetView Inventory Profile, as shown in Figure 286.
We have made the selection of the data to be collected. Now we can select the NetView servers we want the data to be collected from. The data collection or profile distribution can be done in two ways: by drag and drop or from the menu options. Before distributing the profile, we should set the distribution defaults. This is done by selecting Customize from the profile object's menu, which opens the profile. From the profile window we click on Profile on the menu bar and then select Distribution Defaults. For the distribution defaults, select the options
235
as shown in Figure 287. This will guarantee that all endpoints will receive the profile.
Start the distribution process by selecting the NetView Inventory profile and selecting the subscriber (rs600015), which is the NetView server.
Select Profile Manager -> Distribution to start the distribution to the subscriber. Once this process completes, the data will be collected and written to the RDBMS server database. We are ready to query the data. For the queries we can use the TIPN-supplied queries. Select the NV_NODE query. This query creates a report on all NetView discovered nodes and shows all the information defined in the NV_NODE view (see B.2, TIPN Views on page 379).
236
To execute the query there are two possibilities: 1. Select the query icon by clicking on it with the right mouse button and from the object menu select Run Query.
2. Double-click on the icon with the left mouse button and the Edit Query window will open. Besides running the query you have the flexibility to quickly modify the query statements.
The output of the NV_NODES query is shown in Figure 291 on page 238.
237
There are certain things you should consider when exporting NetView data. First of all, if you select all the available data for the profile (as we did in our example), the export profile should be used rarely, because in a live environment a large amount of data will move from the NetView server to the Inventory server. This process affects both servers and the network. We recommend using this type of profile for the first population of the tables after installation and occasionally to update basic node data. For frequent use, it is more practical to select only data that changes often, such as IP status, or other data that it is crucial to keep up-to-date.
238
Function          Platform        Node
TMR Server        AIX             rs60008
                  AIX
                  Sybase/AIX
To see what can be simplified in the process of distribution, let's look at where the given information resides. First we take a look at the criteria set up by the administrator.
Information pertaining to a large number of nodes can be queried quickly with the nvdbformat command mentioned in 4.1.4, The nvdbformat Command on page 108. The command gives flexibility in querying data about nodes and the report format can be tailored to the given situation.
To set up such a mirrored profile manager structure in NetView, we need to mark the nodes so we can create a NetView collection for them. The available ovw fields we can use, either standard fields or the ones created by the TIPN application, are of type Boolean, that is, either True or False. Therefore we cannot pass text values, such as the profile manager name, to these fields. The MirrorPM script is provided in MirrorPM.pl on page 365. This script is used to create the NetView collections, and the logic is described below (a short shell sketch of the same steps follows the list):
1. We collected all our profile managers using the following command:
wlookup -a -L -r ProfileManager
241
2. For each profile manager we retrieved its subscribers list and saved it to a file with the same name as the profile manager by issuing the command:
wgetsub @Development
3. Based on the subscribers list we created the collection for each profile manager using the smartsetutil NetView for NT command:
smartsetutil a "name" "desc" "rule"
The name of the collection and the description are the same in our example. The following is an example of a rule:
"(IN \"rs60008 wtr05095\")"
Please note that the rule is further parsed by the smartsetutil command, hence the backslash and quotation mark combination is critical to the collection definition. Using this type of rule for a collection doesn't allow you to create empty collections. If we have an empty profile, a collection will not be created.
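A minimal shell sketch of the three steps above is shown below. This is only an illustration: the real logic (including error handling) lives in MirrorPM.pl, and the sketch assumes that profile manager names contain no spaces, that wlookup prints one entry per line with the name in the first column, and that wgetsub prints bare subscriber names.

for pm in $(wlookup -a -L -r ProfileManager | awk '{ print $1 }')
do
    subs=$(wgetsub @$pm)                # subscriber list of this profile manager
    [ -z "$subs" ] && continue          # skip profile managers with no subscribers
    rule="(IN \"$(echo $subs)\")"       # collection rule, as in the example above
    smartsetutil a "$pm" "$pm" "$rule"  # name and description = profile manager name
done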
242
The script queries the profile manager structure of the TMR server. The structure used in our example is shown in Figure 294. During execution, two profile managers were discovered with no subscribers. These are not acted on by the script.
After execution, the SmartSets for our profile managers were created with the nodes (subscribers) defined. Figure 295 on page 244 shows the configuration from the TMR perspective.
243
During the execution, the MirrorPM script displays the name of the profile manager under investigation. While processing the collections, the following basic errors could be seen:
NO subscribers - The collection will not be created.
Add failed - The collection already exists.
To recreate or update the collections, you must delete all the profile manager collections, or only the ones that need to be updated, prior to executing the MirrorPM script. Please note that when executing the script, all standard environment variables should be set for NetView and the Framework. Refer to TME 10 NetView for AIX Version 5 Release 1 Installation and Configuration, SC31-8442 and TME 10 Framework 3.6 Planning and Installation Guide, SC31-8432 for how to set them.
7.2.2.1 Running the Script from NetView
The same commands can be invoked from the menu structure of NetView. To achieve this, copy the registration file itso.reg to the \usr\OV\registration\c subdirectory and start the NetView console. The itso.reg file is provided in Appendix A.1.3, ITSO.reg - Menu Registration File on page 367. On the menu bar of the console you should see a new item called ITSO to the right of the Help item.
244
When you select ITSO on the menu bar, you will get a drop-down menu called Example 1. Selecting it gives you two functions. The first one is our MirrorPM.pl function; the second is a new function that deletes the collections. When you select the first function, Mirror PM structure, a window is opened. Within the window you will see all the messages sent by MirrorPM. At the end of the execution you will see the message Finished execution. At this point you can click on Close.
When selecting the second function Remove PM Structure, you will see the names of the collections being deleted. The deletion is based on the list of profile manager names supplied by the Tivoli Framework. The DeletePM script is provided in Appendix A.1.2, DeletePM.pl on page 367.
245
These examples can be further improved by adding new functions. The following could be added to the MirrorPM script:
Schedule it for daily execution so the collections are always up-to-date.
Add special naming convention handling.
The import file is created from two files. One file is a so-called header file (Businessset3.txt). This contains information like selection name and field name definition. The data (node list) part is added from another file which is created by the wgetsub (get subscribers) command and the field value (TRUE) is added to each line by the script. When the marking is done we can create the collection. For the rule we will use the following criteria:
isBusinessSet3 = TRUE and isTMAEndpoint = TRUE
We decided to use these criteria because we are targeting only LCF endpoints for our distribution. So the collection will be a subset of our profile manager subscribers. To look at all of the subscribers you could use the standard
246
isBusinessSet3 SmartSet collection. This is created at installation time by NetView. To achieve the best results you will need to use the lstmr command before creating the collection. The command will set the Tivoli fields (isTMAEndpoint, etc.) to their most recent status. At this point we are almost ready to distribute our profile. Before doing so we will do a check on the IP status of the nodes. To accomplish this we use the nvdbformat command with two format files:
nvdbformat -f endpoint.format nvdbformat -f deadnode.format
These format files generate two lists from the original subscriber list:
1. Endpoint.format generates a list of nodes ready for distribution.
2. Deadnode.format generates a list of unavailable nodes.
From the output of nvdbformat, the script compiles the argument list for the wdistrib command and executes it. The script uses the wdistrib command with the following syntax to distribute inventory profiles, for example:
wdistrib @InventoryProfile:UpdateAssetData @Endpoint:wtr05095
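With more than one ready endpoint, the script simply appends additional subscriber arguments to the same command; for instance (the endpoint names here are borrowed from the distribution example later in this chapter):

wdistrib @InventoryProfile:UpdateAssetData @Endpoint:wtr05100 @Endpoint:wtr05086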
Finally, after the distribution, we check whether there were any errors during distribution of the profile. If any node failed during distribution, its status is changed to User1. The status is set with the NetView nvstatustrap command, which has the following syntax:
nvstatustrap User1 <failed node>
For example: nvstatustrap User1 wtr05097
The status change is visible only when the object's setting allows it. It must be set to Set for this Symbol Only. If it is not set for the given object symbol, the graphical notification will not be seen, but the event collection will have the status change information. Note: An object can have several symbols in different collections and IP submaps. It is not sufficient to set it for only one of them if you want to see the same status in different submaps for the given node.
247
To clean up the collections (delete the distribution SmartSet) and reset the field values a separate script is provided called unselect.pl. The source code for the unselect script is provided in Appendix A.2.2, unselect.pl on page 374.
Filename: Comment
Easy.pl: Main script file.
unselect.pl: Deletes the collection (distribution) and resets the field values for the isBusinessSet3 field.
endpoint.format: Parameter file for endpoints in normal status.
deadnode.format: Parameter file for endpoints in not normal status.
Businessset3.txt: Header for the import file.
ProfileName: Text file containing the name of the profile to be distributed. The file is located in the data subdirectory.
The script source files and data files can be found in Appendix A.2, Example 2 Automating Profile Distribution on page 368. Before starting Easy.pl we must issue the following Tivoli Framework command in a separate window in the itso directory:
wtailnotif -g Inventory > data\error
248
This command collects all the distribution messages in the file named error; this file will be parsed by the main script. The command is automatically terminated by the main script, so it has to be issued again before each execution. To start the inventory profile distribution, we need the name of the profile manager (its endpoint subscribers will be our target nodes) and the name of the profile we want to distribute. The syntax of the command is the following:
perl Easy.pl <ProfileManager name>.CF
The .CF extension is used with the profile manager name as an input argument (full selection name). The name of the profile is read from the file ProfileName located in the Data subdirectory.
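Putting the pieces together, one run could look like the following. The profile manager name Development is borrowed from the earlier wgetsub example and is only an assumption; note that the wtailnotif command must be started first, in a separate window, and is terminated by the Easy script when it finishes:

wtailnotif -g Inventory > data\error
perl Easy.pl Development.CF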
During the execution, a couple of temporary files are created. These files can be used for debugging errors or for documenting the distribution. The files are overwritten each time the batch file is invoked. All of the files used and created are normal ASCII files and are located in the Data subdirectory. Table 10 shows a list of the files created by the Easy script:
Table 10. Files Created by the Easy Script
Nodes to be marked for selection (input for nvdbimport)
List of endpoints with NORMAL status
List of endpoints with NOT NORMAL status
Subscribers list for the profile manager, with the TRUE flag value added for each entry (data part for nvdbimport)
Description of the Distribution collection
Error messages created by wdistrib
List of all subscribers of the ProfileManager
Process information from the wtailnotif command
249
In addition to the temporary files the Distribution collection is also created and populated with the subscribers (only TMA endpoints).
The execution messages are sent to the display. The first such message is the input argument. The next message lists all the subscribers of the profile manager, including the non-TMA endpoints. After the list, you will be notified about each processing step as it starts. At the distribution phase (executing WDISTRIB.....) it waits, since the command needs a little time to execute. After the distribution, it shuts down the wtailnotif command and starts collecting the failed nodes. If any are found, they are written to the display. After parsing the error file, execution is finished. To see an audit log of the distribution, we can look at the error file.
250
In Figure 302, only two nodes (wtr05100, wtr05086) are listed as targets and one endpoint is missing (wtr05095). This is because wtr05095 was not switched on at the time of distribution and was automatically discarded. (In Figure 300 on page 249 you can see all the subscribers listed. rs600015 is a managed node, so it is excluded from the distribution due to its type.) Besides listing the failed nodes on the display during execution, the Easy script sets the status of the nodes, so in the collection you can easily see which ones had problems.
In Figure 303 we can see our distribution results. The normal status (wtr05100) shows the node is OK and the distribution succeeded. The User1 status (wtr05086) shows that the distribution failed for some reason. The same results can be seen using the submap explorer in Figure 304 on page 252. The explorer gives you quick access to other information such as why wtr05095 is in marginal state (node down).
251
The object symbol wtr05095 has four child submaps: one interface and three service submaps (isFOO Server, isTMAEndpoint, isWEB Server). In this example these services were created using nvsniffer in status mode with a custom configuration file, Endpoint.conf. This file is supplied to help you identify TMA-ready endpoints. The conf file contains the following:
isTMAEndpoint|9494|TMA_Endpoint|TMAEndpoint|||*
The nvsniffer refreshed the TMAEndpoint service status to critical, but left the other two in Normal state. The interface status was automatically set by netmon to critical, so the overall status that was propagated is marginal. When we are ready to do a new distribution, we need to clean up the Distribution collection. We can do this with the unselect script, which resets everything that the easy script has changed (collection and field values). To execute the script do the following:
perl unselect.pl
252
The script displays messages about its progress. It also displays which profile manager was used to create the collection. After completion, the isBusinessSet3 collection should be empty as well. This process could be enhanced even further to include:
Added capability to distribute software profiles
Added capability to handle hierarchical profile structures
Sending events to TEC or Service Desk if errors are detected
Using the other isBusinessSet fields to allow concurrent distributions
Building the function into the menu system of NetView using registration files
More detailed service status checking and correlation
A good practice is to execute the Tivoli name registry discovery and nvsniffer (in status mode) before doing distributions. The nvsniffer command, as mentioned in this chapter and in 4.1.2, The nvsniffer Command on page 101, can detect LCF endpoint service availability to help you narrow down the failed profile distribution cases. To really benefit from it, its results must be correlated with the other service statuses (such as IP status) of the same node. This is worth the effort, because failed executions are a waste of time.
253
254
NetView and Distributed Monitoring send their events to Tivoli TEC, whose rules engine consolidates them for display on the TEC console.
Tivoli NetView provides us with network-related information by polling nodes in the network and handling all the SNMP traps in the network.
255
Distributed Monitoring is used to define, distribute and execute the application monitors to the managed nodes. The Sentry engine executes locally in the managed node and uses the monitoring schedule defined in the monitor. Distributed monitors can use various notification methods such as sending TEC events and creating pop-up windows on the administrator's desktop. In the following example we used these two notification options as a result of a monitor triggering on a certain condition. The third part of the scenario is the Tivoli Enterprise Console (TEC). Information from NetView and Distributed Monitoring will be sent to the TEC Event Server. The TEC Event Server is responsible for accepting TEC events and storing those events in a relational database. The TEC Event Server will apply filters against each incoming event and execute rules using information in the events. We use TEC rules to correlate application and network events to provide a management solution for a client/server application. The last component in this environment is the TEC console. The TEC console allows you to view TEC events and the console can be customized to allow administrators to have a customized view based on their responsibilities.
rs60008: AIX 4.2, TMR server, Tivoli Framework 3.6, Tivoli Distributed Monitoring 3.6
rs600028: AIX 4.2, managed node, Sybase, TEC Server 3.1, TEC Console, TIPN, Tivoli Distributed Monitoring 3.6
rs600015: AIX 4.2, managed node, NetView 5.1/TIPN, Tivoli Distributed Monitoring 3.6
256
Our sample application is FTP, where many clients use the server application ftpd. FTP is of course just an example, but it shows how a similar solution could be built for a real business application.
The application server is rs600027, an AIX machine running ftpd; the application clients are the Windows NT workstations hebble and wtr05097, running FTP.EXE.
257
automatically see that users have been successful in reconnecting to the server application when problems have been corrected. These are some very common design considerations for a client/server application. We show how these can be addressed using standard Tivoli products and utilizing the integration capabilities available in these standard products. To solve these requirements we use standard events from NetView and Distributed Monitoring. These are the standard events we required for the simple example.
Table 11. NetView Events
Class: OV_Node_Down
Both groups of events can be uniquely identified by, first, the class, which defines the type of event (application or network), and second, the origin, which specifies the physical node where the event occurred. We needed an additional event class to allow us to create the notification events. We called this new event class APPL_NOTIFY, and the origin in these events is the IP address of the server.
Table 13. New Event Class
Class: APPL_NOTIFY
The following table summarizes the rules we have to define to solve the management requirements of our client/server example.
Table 14. Rules

1. Client_Application_Down - Simple rule; event classes universal_application and APPL_NOTIFY. Condition: origin equal to the IP address of the clients and severity is MINOR. Actions: set status to ACK; create a new event with severity WARNING informing the administrator of a potential server problem.
2. Client_Application_UP - Simple rule; event classes universal_application and APPL_NOTIFY. Condition: origin equal to the IP address of the clients and severity is HARMLESS. Actions: set status to ACK; generate a new event in the APPL_NOTIFY class informing the administrator that the client has reconnected to the server.
3. Delete_Prev_Events - Compound rule; event class universal_application. Condition: origin is equal to that of the current event. Action: cancel the current event.
4. APPL_NOTIFY - Compound rule; event class APPL_NOTIFY. Condition: origin is equal to that of the current event. Action: cancel the current event.
5. App_Down_By_NetView - Event class OV_Node_Down. Condition: origin equal to the application server. Action: create an event in the universal_application class with severity CRITICAL.
Before implementing the rules we must create the Tivoli Distributed Monitoring profile and application monitors.
258
Figure 310 on page 260 shows the monitors in the AppMon profile. The Application Status monitor is available in the Universal Monitor collection of Tivoli Distributed Monitoring, which must be installed. When we installed the Universal Monitor collection, the install dependencies didn't show that anything else would be installed, yet we found that the collection must be installed for the monitor to be available; so, at the time of writing this book, the installation dependencies for this package are a little confusing.
259
As Figure 311 shows, we set the application status monitor for the client FTP application to trigger a response level of critical once the application becomes unavailable. This is mapped to a TEC event with severity MINOR. When the FTP application becomes available it triggers a WARNING response level, which is mapped to a TEC event with severity HARMLESS.
As shown in Figure 312 on page 261, we set the application status monitor for the server ftpd application to trigger a response level of CRITICAL once the application becomes unavailable. This is mapped to a TEC event with severity CRITICAL. As soon as the monitor triggers, we want a pop-up window on the administrator's desktop. When the ftpd application becomes available, it triggers a WARNING response level, which is mapped to a TEC event with severity HARMLESS. The monitoring schedule is set to 1 minute for both the FTP and the ftpd monitors. In a production environment this interval may not be appropriate, but we wanted to get results as fast as possible when we tested the monitors.
Don't forget to save and distribute your profile to the defined subscribers. To test these monitors, you need to establish an ftp connection between the nodes you subscribed to your monitors. You don't need to transfer any files. Just open the connection, wait at least a minute and close the ftp session. A pop-up should appear, which tells you the ftpd server application became unavailable.
261
TEC_CLASS: APPL_NOTIFY ISA EVENT DEFINES { source: default = "Application"; severity: default = CRITICAL; msg: default = "Client/Server Problem:FTP Down"; }; END
Figure 314. The baroc Definition for the APPL_NOTIFY Class
To activate the new class, we created a new rulebase and imported this baroc file. Don't forget to compile the rulebase, load it and restart the event server. We need to create an event source and an event group for our example. Right-click on the EventServer icon and select Sources... from the pull-down menu. The T/EC source list window appears. Enter values for the name and the label. It's a good idea to always use uppercase letters for labels. Enter the name and click on the Add Source button. Then save it by clicking Save & Close.
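If you prefer the command line over the GUI for the rulebase steps, a sequence along the following lines can be used. This is only a sketch: the rulebase name appl_rb and the file name appl_notify.baroc are placeholders, and the rulebase is assumed to exist already.

wimprbclass appl_notify.baroc appl_rb    # import the new class definition
wcomprules appl_rb                       # compile the rulebase
wloadrb -u appl_rb                       # load it into the event server
wstopesvr                                # restart the event server to activate the changes
wstartesvr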
262
Then add a new event group for the event class we have created and include SENTRY and nvserverd events. Figure 316 shows the filter definition.
263
Finally, assign the new event group called Application to the TEC event console. The resulting TEC console with a NetView event is shown in Figure 317.
In the rule set dialog, select Rule Set from the menu bar and then New Ruleset... from the pull-down menu. Enter a name for the new rule set in the Set Name box and press Enter.
264
Then select the new ruleset and choose Edit Ruleset... from the pull-down menu. This should give you an empty dialog as in Figure 320. Starting with this dialog, you can define the first simple rule. Select Rule from the menu bar and then New Rule->Simple... from the pull-down menu.
Give the rule the name Client_Application_Down in the Description field and then click on Event Class... and select Universal_Application from the list of event classes that pops up.
265
The next thing to do is provide a condition for the rule. Click on the Conditions... button in the dialog. The conditions dialog appears and you should click on origin in the Available Attribute(s) list and select in list from the Relation list box. Enter the nodes that are client workstations using this server application. In our scenario these were 9.24.104.86 (hebble) and 9.24.104.211 (wtr05097).
Enter the nodes in the Edit Value field and press Enter.
266
The second condition in this rule checks that the severity of the incoming Distributed Monitoring event is MINOR before the action is issued.
To add the new condition click on the Add button and then on OK to close the window. This new rule requires an action definition to specify what should be done once an event matches the rule. In the Simple Rule window click on the Actions... button. This will open the Actions window as shown in Figure 324.
267
Select When event is received from the When to Run list box and add the Set status action. Set the status to ACK to collect all events from affected clients in the ACK status. This is useful if the administrator wants to keep track of all affected clients. The administrator could just select all TEC events with ACK status on the TEC console and have a complete list of the affected clients. The second action is to run a command that creates a new event informing us that clients are now experiencing problems. Select Launch a command in the Add list box on the right side of the window. In the file browser window that appears, select the wpostemsg command. The wpostemsg command requires additional parameters, and you can input them by clicking on the Edit Arguments... button. The Edit Arguments dialog is shown in Figure 325. In this dialog we create the new TEC event informing the administrator that there is a potential problem with the server application running on rs600027. This TEC event has severity WARNING and uses the new TEC class APPL_NOTIFY. The origin is set to the IP address of the server where ftpd is running. Since we don't specify a message in this event, the default message from the baroc file will be used (refer to Figure 313 on page 262).
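The equivalent wpostemsg invocation from a shell might look like the following. This is only a sketch: the origin value is a placeholder for the server's IP address, and the source name Application is an assumption based on the default in the baroc class definition (Figure 314):

wpostemsg -r WARNING origin=<server IP address> hostname=rs600027 APPL_NOTIFY Application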
Click OK to close the dialog. In the same fashion enter all the other rules according to Table 14 on page 258. The following sections show the output of our definitions using the TEC Rule Builder. These are provided for completeness and allow you to reference these rules when we execute the scenario to understand the various steps of the scenario. 8.1.6.1 Client_Application_UP Rule The Client_Application_UP rule is used to inform us that the clients are now able to work again. It is a simple rule that uses the universal_application class and the conditions are that the origin must match our clients and the Distributed Monitoring monitor Event is HARMLESS. Refer to Figure 311 on page 260 where
268
you see that the HARMLESS event is created when the client FTP application becomes available again.
The actions defined for this rule are the set status to ACK and to run the wpostemsg to create a new TEC event that tells the administrator that the client/server application is now working again. This is accomplished by passing the required parameters to the wpostemsg command as shown in Figure 327. This new event will have severity HARMLESS and the message text is Client/Server OK:FTP. The hostname and origin are both representing the ftpd server. We are using the APPL_NOTIFY event class.
8.1.6.2 Delete Previous Events Rule This is a compound rule that will do correlation with the objective of only keeping the most current events for a resource open on the administrator's TEC console.
269
This rule is therefore defined as a compound rule, which has a correlation action defined. The event class used is universal_application and the correlation type is defined as universal_application cancels universal_application events if universal_application origin is equal to universal_application origin. The cancel action means the previous event is closed on the TEC Console. The effect of this is to make sure that the TEC administrator only sees events with the most current status of events from Distributed Monitoring monitors.
8.1.6.3 APPL_Notify Rule This rule is equivalent to the Delete_Prev_Events rule but it is used to handle the events we create in the APPL_NOTIFY class. This rule will ensure that the administrator only has TEC events with the most current status on the TEC console. The correlation type is defined as APPL_NOTIFY cancels APPL_NOTIFY events. The condition is again that the origin must match the origin of the currently open event in this class. The cancel action will in effect close the previous event on the TEC console.
270
8.1.6.4 App_Down_By_NetView This rule is used to show the integration with NetView. It uses the OV_Node_Down Event for the application server and generates an application down event. The function of this rule is to show the administrator that there is a network problem (that is, the OV_Node_Down event) and to create the universal_application event saying that the server component of the application is not accessible. When there is a network problem the Distributed Monitoring events from the server cannot be sent to TEC so the universal_application event must be created locally on the TEC server.
The parameters to wpostemsg will generate a CRITICAL event with the message text "Universal - Generated by NODE DOWN". The origin and the hostname are specified as the IP address and hostname of the machine where the server application is running. The source is specified as SENTRY and the class is universal_application.
When all five rules have been defined, you have a rule set as shown in Figure 332.
Finally, close the ruleset and save your results by selecting Save in the TEC Rule Bases window. To activate your rules, you will need to compile the rule base and load it. Use your preferred method to do so. Figure 333 on page 273 shows the output when we compiled the rulebase using the graphical user interface. As you can see, the only rule set in our rulebase is Appl_rule1.rls.
272
273
The server application is our primary focus, since many clients will be affected if it experiences a problem. To force an error on the server application, we issued kill -9 <process number> for the ftpd process. The Distributed Monitoring monitor will discover when the ftpd daemon becomes unavailable and send a CRITICAL event to TEC. Refer to Figure 312 on page 261 for the definition of the Distributed Monitoring monitor. This CRITICAL event will arrive in TEC and the rules we created will be applied to it. The Delete_Prev_Events rule will be executed. This rule closes the previous TEC event in class universal_application if the origin is the same. Figure 335 shows the result on the TEC console, where the CRITICAL event closed the previous HARMLESS event.
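On AIX, forcing the error can be as simple as the following (the process number is whatever ps reports on your system):

ps -e | grep ftpd          # find the process number of the ftpd daemon
kill -9 <process number>   # force the server application down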
In Figure 312 on page 261 you can also see that we defined a pop-up notification for when the server application becomes unavailable. This pop-up is shown in Figure 336 and it can be used to alert the administrator about this severe problem.
The administrator now knows that the server application is down, but the clients affected by this error are not known to the administrator. In our scenario we had just entered ftp rs600027 on workstation hebble. This prompts for the user ID, so we entered our user ID and pressed Enter. Since the server application is no longer available, we will not get a response, and the Distributed Monitoring monitor for the client FTP application will discover that the client application has become unavailable. Refer to Figure 311 on page 260, where you can see that a TEC event with severity MINOR is generated. This event will arrive in TEC and the Client_Application_Down rule will be applied to it. In this rule we set the status to ACK and we also run wpostemsg. The wpostemsg command generates an additional event in the APPL_NOTIFY class with severity WARNING. It allows us to tell the administrator which clients are affected and which server application they are trying to use. The resulting TEC console is shown in Figure 337.
At this point we assume that the server application has been corrected and restarted. In the case of ftp, which we use as an example, the ftpd daemon is restarted when a client issues the ftp rs600027 command. We issued this command from workstation hebble, and the following happens on the server side: The Distributed Monitoring monitor for ftpd will discover that the ftpd daemon is up and available. Refer to Figure 312 on page 261, where you see the HARMLESS event being generated when ftpd is up and available. The Delete_Prev_Events rule will be applied to this HARMLESS event, and it will cancel the open CRITICAL event for ftpd. Since the client is now able to access the server application, the following occurs: The Distributed Monitoring monitor for FTP will discover that the client becomes available and will send a HARMLESS event to TEC (refer to Figure 311 on page 260). The Delete_Prev_Events rule will be applied to this HARMLESS event, and it will cancel the open MINOR event for FTP. The Client_Application_UP rule will be executed, and it will generate a HARMLESS event in the APPL_NOTIFY class. Refer to Figure 326 on page 269, where you can see this event being defined, and note the message Client/Server OK:FTP in that event. This message will be displayed on the TEC console, informing the administrator that the client has been able to connect to the server again. The HARMLESS event generated by the Client_Application_UP rule will also cause the correlation rule for events in the APPL_NOTIFY class to be executed. This rule is called APPL_NOTIFY and it will cancel the currently open WARNING event in the APPL_NOTIFY class. The resulting TEC console display is shown in Figure 338.
As the administrator could see that the network problem caused the server application unavailability, we now want to correct the network problem. In our example we just connect rs600027 back to the network, and the OV_Node_Down event will be closed when NetView discovers that rs600027 is back up. In Figure 340 on page 278 you can see that the OV_Node_Down event has been closed from the TEC console. The application events are still open, since just fixing the network problem may not correct the server application. Since the network is now working, the events generated by Distributed Monitoring will now arrive in TEC. As shown in Figure 340, the CRITICAL event tells us that the server application ftpd is not running. The MINOR and WARNING events are displayed since the client hebble was unable to connect or timed out during the network problem.
To clear the application events we need to reissue the ftp rs600027 command from hebble, just as we did in the application scenario. Since we used ftp in our example this is required, but other applications may work differently. The important point here is that the administrator was able to determine that, after fixing the network problem, the application may still be down. When the server application becomes available we want to make sure that the client can connect to the server and show the HARMLESS event that tells the administrator that everything is running and that clients are able to perform their business tasks. The result is shown in Figure 341 on page 279.
9.2.1 Prerequisites
The prerequisites are listed in the Users Guide and the Release Notes. For AIX there are components of two software APARs, IX62253 and IX73729, that must be applied to the NetView server. If you do not apply these APARs, the installation will fail in the final stages. These APARs can be applied to AIX 4.1.5, 4.2.0 or 4.2.1. Our NetView server was running 4.2.0 and we found that applying these patches was not a simple task due to a number of patch prerequisites. Our advice is to upgrade the NetView server to 4.2.1 before applying the APARs. There is no mention of rebooting the AIX node after applying the APARs, but we found we had to. This may have been due to applying a number of other 4.2.1 components rather than the ones in the APARs.
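Whether the APARs are already on the NetView server can be checked with the standard AIX instfix command, for example:

instfix -ik IX62253    # reports whether all filesets for this APAR are installed
instfix -ik IX73729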
9.2.2 Installation
In our environment, the TEC and NetView server products were on different nodes. Thus there were three steps in the installation:
1. Install the TFNC module on the NetView server using the Tivoli installation process.
2. Run a post-installation script on the NetView server to complete the installation.
3. Add the TFNC TEC components on the TEC server.
These steps are detailed in the following sections.

9.2.2.1 Installation of TFNC Module
The TFNC module is installed using the standard Tivoli installation process. From the desktop, the File -> Install -> Install Product menu options are selected. This starts the Install Product dialog (Figure 342).
After pointing at the correct media, the Tivoli Manager for Network Connectivity product should be listed. The node to install on is the NetView server (in this case rs600015). After the Install and Close button is clicked, the Product Install dialog is displayed (Figure 343 on page 283).
Installation of this module only adds some binaries to the NetView server. The installation is started by selecting the Continue Install button. When it completes, the dialog is closed using the Close button.

9.2.2.2 Post-Installation Script
Following the product install, the post_ipfm_user.sh script must be run. It performs three functions:
It installs a number of tasks into Tivoli that can be used to manage TFNC.
It can add the TFNC TEC class and rule definitions to a TEC rulebase.
It registers the TFNC processes with NetView (or OpenView), which means they will be started and stopped with the other NetView processes.
The script and parameters we used were:
$BINDIR/smarts/conf/tivoli/post_ipfm_user.sh -t TFNC -v nv
IPFM Tasks in TFNC and classes NOT installed.
Static Registration Utility Completion
Static Registration Utility Completion
Static Registration Utility Completion
The options used were:
-t TFNC  Add the task libraries to the policy region named TFNC.
-v nv  Register the TFNC processes with NetView (nv).
If the TEC server was also on the NetView node, the -r <rulebase> option would be used. As can be seen in Figure 344 on page 283, the script:
Adds the tasks in the TFNC policy region.
Does not install the rules and classes.
Uses the ovaddobj command to add the three NetView processes for the TFNC Broker, the TFNC server/adapter, and the TEC adapter.

9.2.2.3 TEC Configuration
The TFNC module has two TEC files: ipfm.baroc with the class definitions and ipfm.rls with the rules. These must be copied over to the TEC server and installed into the current rulebase before the TFNC module can be used. For our installation we created a new rulebase, called TFNC, and imported the two ipfm files into it. The commands used were:
1. wcrtsrc -l "TFNC" INCHARGE
   Define the new event source INCHARGE and label it TFNC for the event source display.
2. wcrtrb -d /usr/local/Tivoli/ITSO/rulebases/TFNC TFNC
   Create the new rulebase, TFNC, in the /usr/local/Tivoli/ITSO/rulebases/TFNC directory.
3. wcprb Default TFNC
   Copy all TEC_CLASSES, TEC_RULES and TEC_TEMPLATES from the Default rulebase to the new TFNC rulebase.
4. wimprbclass ipfm.baroc TFNC
   Import the ipfm.baroc class definition file into the TFNC rulebase. This was run from the /usr/local/Tivoli/ITSO/rulebases/TFNC/TEC_CLASSES directory after the file was copied there from rs600015.
5. wimprbrules ipfm.rls TFNC
   Import the ipfm.rls ruleset file into the TFNC rulebase. This was run from the /usr/local/Tivoli/ITSO/rulebases/TFNC/TEC_RULES directory after the file was copied there from rs600015.
6. wcomprules TFNC
   Compile the new rulebase.
7. wloadrb TFNC
   Load the new rulebase into the event server so it will be used when the server is restarted.
8. wstopesvr and wstartesvr
   Restart the event server to use the new rulebase.
Figure 345 on page 285 shows the Source Groups dialog with the new TFNC Source Group icon.
As TFNC has not been started, there are no events belonging to the source yet.
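Collected into a single sequence, the rulebase setup described above looks like this (directory paths as used in our environment; the ipfm.baroc and ipfm.rls files are assumed to have already been copied from rs600015 into the TEC_CLASSES and TEC_RULES directories):

wcrtsrc -l "TFNC" INCHARGE
wcrtrb -d /usr/local/Tivoli/ITSO/rulebases/TFNC TFNC
wcprb Default TFNC
cd /usr/local/Tivoli/ITSO/rulebases/TFNC/TEC_CLASSES
wimprbclass ipfm.baroc TFNC
cd /usr/local/Tivoli/ITSO/rulebases/TFNC/TEC_RULES
wimprbrules ipfm.rls TFNC
wcomprules TFNC
wloadrb TFNC
wstopesvr
wstartesvr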
9.2.3 Startup
There are three components of TFNC to be started:
TFNC Broker
TFNC server and adapter
TEC adapter
These processes are all registered with NetView and can be started by either:
Restarting all NetView processes using the ovstop and ovstart commands.
Starting each process in turn. This is done by:
1. Starting the TFNC Broker using the ovstart brstart command.
2. Starting the TFNC server and adapter using the ovstart sm_ipfm command.
3. Starting the TEC adapter using the ovstart tecad_ipfm command.
Once these are started, the correlation begins and events will start to appear on the TEC console. If the NetView adapter is also running, there will be duplicate events, with different class names, on the TEC console. The next section details the problems encountered when installing this module.
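For reference, the individual startup steps can be run and verified from the command line; ovstatus is the standard NetView command for checking registered processes:

ovstart brstart                      # TFNC Broker
ovstart sm_ipfm                      # TFNC server and adapter
ovstart tecad_ipfm                   # TEC adapter
ovstatus brstart sm_ipfm tecad_ipfm  # confirm all three are running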
We found that the ovors file (/usr/OV/conf) had entries for both network adapters. We ran the reset_ci command and this fixed the problem.
The grouping.conf file is not discussed, as it is unlikely to be customized. The remaining three files are discussed in the following sections. There is also a configuration file for the TFNC TEC adapter, $BINDIR/smarts/conf/tivoli/tecad_ipfm.conf. This file is a standard TEC adapter file. See the TME 10 Enterprise Console Adapters Guide Version 3.6, SC31-8507 for details.

9.2.5.1 The Flapping Configuration File
Flapping is when a physical interface is oscillating between an available state and an unavailable state. TFNC is configured to detect flapping and generate the appropriate event to TEC. Otherwise, TFNC may not be able to determine a root cause event. The flapping.conf configuration file defines the flapping policies. By default there are a number of flapping policies defined: the Default-Flapping-Policy, the Long-Flapping-Policy and the Short-Flapping-Policy. These are assigned to the different types of nodes in the grouping.conf file. The default grouping.conf file assigns the Short-Flapping-Policy to the Router-Group, the Long-Flapping-Policy to the Host-Group and the Default-Flapping-Policy to the Default-Group. The parameters in each flapping policy section are:
maxStableTime  Specifies the amount of time, in seconds, TFNC waits for a physical interface to stop flapping before it clears the physical interface flapping event.
minStableTime  Specifies the amount of time, in seconds, TFNC waits between flapping bursts before clearing a physical interface flapping event.
stableThreshold  Specifies the coefficient TFNC uses to determine whether to send a physical interface event or clear an existing flapping event.
minTraps  Specifies the minimum number of SNMP link-down events TFNC should receive before it sends a physical interface flapping event.
How these parameters are applied also depends on the TFNC polling interval, which is the fastestMonitor parameter set in the server.conf file. An example of flapping and how the parameters are used is in the Users Guide.

9.2.5.2 The Server Configuration File
The server configuration file, server.conf, is used by the TFNC server. The server uses network topology to compute problem signatures, which are events that uniquely identify each problem. TFNC compares incoming events to pre-computed problem signatures to identify root cause problems. The following configuration parameters are explained in the Users Guide and can be changed:
resyncInterval  (Server-Control section) Defines how often the server compares its topology with the topology of the network manager, that is, NetView. The default setting is 1800 seconds (30 minutes). If the network topology is changing frequently, this value may have to be reduced so that TFNC has an accurate view of the topology; however, this carries performance overheads (CPU and network traffic). If the network is relatively stable, this setting could be increased.
correlationInterval  (Correlator section) Defines how often the TFNC server correlates incoming events to see if they match problem signatures. The default is 60 seconds. Decreasing the interval reports results more quickly but is more likely to give intermediate results before the genuine root cause, and it uses more system resources, particularly with a large network topology. Increasing the interval may lengthen the time taken to produce the root cause problem but reduces the number of intermediate results. The netmon polling interval is also relevant to this interval. If the netmon polling interval is greater than the correlation interval, correlation will sometimes be performed without current netmon information. For example, if the netmon polling interval is 5 minutes and the correlation interval is 1 minute, then only 20% of the correlations will be performed with the most up-to-date information, increasing the likelihood of intermediate results.
fastestMonitor  (Correlation section) Defines how often the TFNC server checks the properties of managed objects in its topology. This is normally set to 80-90% of the correlationInterval setting.
createPartitions  (Partitioning section) Defines whether partitions are created or not. A partition represents a group of connected managed devices that are only connected to the rest of the network by an unmanaged device (such as an unmanaged router). This is used to correlate unmanaged devices with managed devices.
epsilon  (Partitioning section) Defines the minimum number of managed devices a group must contain before TFNC creates a partition to represent it. The default is zero.
The most common tuning to be performed on TFNC would involve the netmon polling interval and the correlationInterval and fastestMonitor parameters. We found in the tests that determining the root cause may take some time if the network is large, so tuning may be a worthwhile exercise to ensure the root cause is determined as soon as possible.

9.2.5.3 The Adapter Configuration File
The adapter configuration file, adapter.conf, is used by the TFNC adapter (not the TEC adapter). The following configuration parameters are explained in the Users Guide and can be changed:
ovtopodump  (ovTopology-Program section) Defines the topology source. By default it is /usr/OV/bin/ovtopodump. The parameter could be used to specify a different topology source program or to specify filters for the topology source program.
fileName  (ovTopology-File section) Defines an alternative topology source. If topology information is held in a file, the file name can be specified here. The topology information must be in the same format as produced by ovtopodump with the -rl arguments. If this parameter is used, the TopologyDriver parameter must specify ovTopology-File-Driver.
TopologyDriver  (Adapter-Drivers section) Determines the source of the topology information for TFNC. If the (default) ovTopology-Driver value is set, TFNC will get its information from the ovtopodump program. If ovTopology-File-Driver is set, TFNC will use the fileName value.
There are a number of other parameters in our adapter.conf file that are not documented in the Users Guide. The descriptions listed are from our adapter configuration file. The parameters are:
fileName  (TopologyImport-File section) Specifies an additional source of the topology in the import format.
eventFilter  (Filters section) Specifies event filters for OpenView events.
EventDriver  (Adapter-Drivers section) No information is available for this parameter.
InitDriver  (Adapter-Drivers section) Used to make TFNC well-behaved.
enableColorChange  (Adapter-Drivers section) Set to TRUE to enable colors in OpenView maps.
enableNotifications  (Adapter-Drivers section) Set to FALSE to disable root cause notifications sent back to OpenView.
These parameters may be related to older, or OpenView, versions of the product. They may work with NetView but changing the values in the file may not be supported.
(The figure illustrates the event flow: within NetView, trapd feeds events to nvserverd and to the TFNC Adapter; nvserverd forwards events to T/EC through the forwardall.rs ruleset, while the TFNC Server and TEC Adapter send TFNC events to T/EC, where TEC_CLASSES/ipfm.baroc and TEC_RULES/ipfm.rls process them for display on the T/EC Console.)
Events are received by trapd and forwarded to applications registered with it. In our environment, this includes the nvserverd daemon and TFNC. The nvserverd daemon forwards events to the NetView Event Console using the forwardall.rs ruleset (that is, forward all events). It also sends events to TEC using the forwardall.rs ruleset. Within TEC, the NetView events from nvserverd are defined in the tecad_ov.baroc and tecad_hpov.baroc files. The rules associated with the nvserverd events (that is, the OV_* events) are in the ov_defaults.rls TEC ruleset file. On the TFNC side of the diagram, the TFNC adapter receives the events coming from trapd. The TFNC server performs the correlation based on the network topology. The TEC adapter formats events into TEC event format and sends them to TEC. Within TEC, the TFNC events are defined in the ipfm.baroc file. The rules associated with these events (that is, the IC_* events) are in the ipfm.rls TEC ruleset file. The TEC NetView adapter (tecad_ov) was not used, as it produces the same results as nvserverd forwarding. The network elements used to test TFNC are shown in Figure 347 on page 291.
(Figure 347 shows the network used to test TFNC: NT_RV1, the 9.24.105 network and the 9.24.105.70 and 192.168.254.1 interfaces, and the two routers 2210_local (192.168.253.1) and 2210_remote (192.168.253.2) connected by a frame relay (F/R) link.)
There are two 2210s, 2210_local and 2210_remote, connected via a WAN link. On the other side of the 2210_remote are an 8271 Ethernet switch and a Windows NT server. All tests were performed with the polling settings of: Polling Interval: 2 minutes, Retry: 3, Time-out: 13.0 minutes. None of the polling was forced by performing demand polling or forced pinging of the nodes. This allowed us to measure the time it took for TFNC to present a steady state. This was the time between the first nvserverd trap displaying on TEC and the correct TFNC event displaying on TEC. This gave us an indication of how long it took TFNC to recognize, by correlation, the true problem and send it to TEC.
The frame relay cable at 192.168.253.1 was disconnected. The IP Internet submap changed to that shown in Figure 349 on page 293.
This shows all components from the 192.168.253.1 interface to the 192.168.254.112 NT server as down. For comparison purposes, the TFNC events were shown in a different TEC window to the nvserverd events. Figure 350 shows some TFNC events before a steady state was achieved.
IC_SEGMENT_UNAVAILABLE for the 192.168.253.1 segment
IC_ROUTER_UNAVAILABLE for 2210_remote
Both of these events are in a status of OPEN. As TFNC thinks it has established a root cause, it sends the appropriate IC_ event to TEC. What it thinks is the root cause will change as it receives all of the traps from trapd. Thus it thought that the 192.168.253.1 segment being unavailable was the root cause until it received more traps, and then it thought that the 2210_remote router being unavailable was the root cause. Following this it closed the segment unavailable event. This process continues until a steady state is reached. This steady state is shown in Figure 351.
TFNC has determined the root cause to be IC_INTERFACE_UNAVAILABLE for 192.168.253.1 (which is correct). The other IC_ events have been closed, so the only open event represents the true root cause. By comparison Figure 352 on page 295 shows the nvserverd events for the same interruption.
Figure 352. nvserverd Events for Failure in NetView Events TEC Display
There are eleven events of varying severity. The two CRITICAL events give a clue to the problem, but without looking at a network diagram, you cannot tell which is the root cause. The root cause is actually the OV_IF_Down event with a MINOR severity at the bottom of the display.

9.3.2.2 Interface Up
The network cable was re-connected. Eventually the steady state was represented by TFNC events as shown in Figure 353.
Again TFNC has sent intermediate events when it thought that it had the root cause of the problem as it processed traps from trapd. The IC_INTERFACE_UNAVAILABLE for the 192.168.254.1 interface was sent when some node and interface up events were processed but some were still outstanding. The steady state was reached when IC_ events were closed. Note that the only correlation done within TEC for the events is changing the status from OPEN to CLOSED. This is achieved by TFNC sending an INCHARGE_PROBLEM_CLEAR event with slot values that match those of the open IC_ event. The ipfm.rls ruleset will find an open IC_ event that matches the slot values and closes it. Thus all of the true correlation is performed within the TFNC server process on the NetView node. Figure 354 shows the nvserverd events at steady state.
This shows the correlation provided by the ov_default.rls ruleset. The only correlation it provides is to CLOSE open OV_Node_Up and OV_Interface_Up events. The OV_Network_Critical and OV_Segment_Critical events are not correlated, although this could be done by changing the rules in the ov_default.rls ruleset. There can be no correlation between these events to determine the root cause, as there is no knowledge of network topology in TEC. Thus correlation is just providing a means to clean up events when the problem is resolved.

9.3.2.3 Comparison
We have seen that TFNC provides the only true correlation to give the root cause of the problem. In so doing it discards the events that are irrelevant. The final factor to compare is the time taken to get to the steady state. It took between two and three minutes for all nvserverd events to arrive at the TEC console. For TFNC to show the IC_INTERFACE_UNAVAILABLE event as the root cause took ten minutes. The corresponding time for a service restored steady state was eight minutes. This is a very long time for a small network. During this time, the system appears to be in a steady state as the nvserverd events do not change. So while TFNC is providing much more useful information, it is taking some time to do so. Any automation based on the TFNC events, such as problem logging, would have to include a timer so that the final root cause event is used rather than an intermediate event. The timer setting would have to be measured, as it changes with the number of network interfaces involved, as shown in the following section.
The table summarizes five test scenarios. For each scenario it lists the Down and Up cases, the intermediate IC_ events TFNC generated, and the time taken to reach a steady state. The intermediate events recorded include IC_INTERFACE_UNAVAILABLE (192.168.253.1, 192.168.253.2, 192.168.254.1 and 9.24.105.70), IC_SEGMENT_UNAVAILABLE (192.168.253.Segment1, FrameRelaySegment), IC_HOST_UNAVAILABLE (192.168.254.112) and IC_ROUTER_UNAVAILABLE (2210_remote and 2210_local); the recorded times to steady state range from 4 minutes to 19 minutes.
Two observations can be made from the results shown in the table. The first observation is that the time TFNC takes to decide on the root cause event increases with the number of interfaces on the other side of the failing device. The last two scenarios are for the same network device failing, but the time TFNC takes to establish that this is an IC_ROUTER_UNAVAILABLE ranges from four minutes to ten minutes, depending on the tuning of the TFNC server parameters. The notification of the service being restored also varies widely, between seven and nineteen minutes. The times taken to produce and clear the root cause are significantly improved by tuning the correlationInterval and fastestMonitor values in the server.conf file. If you plan to use the root cause event to perform an action, such as logging a problem, then you should set a ruleset timer for an interval by which all other intermediate events will have been closed. The second observation is that there is no pattern to the intermediate events generated by TFNC when the service is going down or being restored. The order depends on which traps it receives and in which order, which may change.
and itso.rls respectively. The customized TEC baroc and ruleset files are listed in Appendix C, Files Used in the Network Connectivity Examples on page 381, along with the scripts used. For further details on Tivoli Service Desk, see:
Problem Management Using Tivoli Service Desk and the TEC, SG24-5301
Tivoli Service Desk Tivoli Problem Management Network System Management Gateway Network Administrators Guide, GC31-5178 (formerly known as ExpertView 5.0 Network System Administrators Guide)

9.3.4.1 Automatically Opening a Problem in Service Desk
To log a problem with Service Desk we carry out the following steps:
1. When the TFNC event arrives at TEC, we wait a period to determine that it is the root cause event.
2. If it is still open after the period, a script is run to log the details in Service Desk.
3. When the call to Service Desk ends, it returns the new problem number, which is sent back to TEC in a special TEC event.
4. This TEC event tells TEC to put the problem number into the appropriate slot in the original TFNC event. This is used later for closing the problem.
5. If the problem logging fails, an event is sent to the TEC console.
This flow is shown in Figure 355.
Figure 355. Flow for TFNC Event Logging a Problem in Service Desk
The rest of this section details how this flow was achieved. When a TFNC event, such as IC_HOST_UNAVAILABLE, is processed by the rulebase, it is put on a timer. The code for this is shown in Figure 356.
rule: wait_root_cause: (
    event: _event of_class INCHARGE_PROBLEM
        where [ status: equals OPEN ],
    reception_action: (
        set_timer(_event, 600, '')
    )
).
The set_timer predicate puts the event processing on hold for ten minutes (600 seconds). After this time the following timer rule is invoked (Figure 357).
timer_rule: check_root_cause: (
    event: _event of_class INCHARGE_PROBLEM
        where [ status: equals OPEN ],
    action: (
        sprintf(_evstr, '%#d', [_event]),
        exec_program(_event,
            '/usr/local/Tivoli/ITSO/scripts/wzlogprob',
            '%s', [_evstr], 'NO')
    )
).
If the event is still open after ten minutes, it is deemed to be the root cause event. All other TFNC events would have been closed within ten minutes. For this event, the event ID (_event variable) is converted to a string using the sprintf function. The wzlogprob script is used to log the call in Service Desk, and the event ID, as a string, is one of the key parameters. The wzlogprob script uses the EVProb command to log a problem with Service Desk. This is the command line interface to Service Desk. Figure 358 on page 301 shows the variable declaration and EVProb call in the wzlogprob script. The complete script is provided in Appendix C.3.1, wzlogprob Script on page 387.
# Setup variables needed for EVProb call
OUTFILE=/tmp/EVProb.out
GTWY="wtr05368"
MOBJ=`echo $EVENT_CLASS | dd ibs=15 count=1 2>/dev/null`
UNID=$EV_KEY
CCDE="TEC"
SEVR=2
SYST="Network"
COMP="TFNC"
ITEM=$EVENT_CLASS
DESC="$ic_class_name $ic_instance_name is $ic_event_name"
ARGS="CALL_CODE:$CCDE;SEVERITY:$SEVR;"
ARGS=$ARGS"SYSTEM:$SYST;COMPONENT:$COMP;ITEM:$ITEM;"
ARGS=$ARGS"DESCRIPTION:$DESC"
# *********************************************************************
# log a problem using EVProb
# *********************************************************************
EVProb -h $GTWY -n $MOBJ -x $UNID -a "$ARGS" > $OUTFILE 2>&1
RC=$?
The parameters used in the EVProb call are:
-h <gateway-host>  The host name of the Service Desk Gateway Module. For our environment this was wtr05368.
-n <managed-object>  The name of the managed object. We used the class name, such as IC_HOST_UNAVAILABLE.
-x <ext-prob-id>  The external problem identifier. This is used to uniquely identify the problem record. Without it, Service Desk would update the first existing record it found with the same managed object ID (that is, class name). We use the event ID that was passed as a parameter from the TEC rule.
-a <args>  The list of problem-specific arguments. These include the call code, severity, system, component, item and description fields used in our script. There are a number of others that are listed in the references.
The stdout and stderr from the EVProb call are redirected to a file, /tmp/EVProb.out. For a successfully logged problem, this file contains the problem number. The format is shown in Figure 359.
If the EVProb call fails, an event (ITSO_Prob_Log_Failed) will be sent back to TEC with the error text in the msg slot. If the EVProb call succeeds, the problem number will be returned to TEC using an ITSO_Add_Prob_Nbr event. The code for this is shown in Figure 360.
# If the EVProb failed (rc!=0) send a fail event and exit
# otherwise, get the problem number
if [ $RC -ne 0 ]; then
    MSG=`cat $OUTFILE`
    wpostemsg msg="$MSG" ITSO_Prob_Log_Failed ITSO
    exit 1
else
    PNBR=`awk '{print $5}' $OUTFILE`
fi
# *********************************************************************
# Tell the TEC event that a problem was logged
# The problem number is returned from the EVProb call above
# *********************************************************************
wpostemsg ev_event=$EVENT_CLASS \
    ev_hostname=$hostname \
    ev_probnbr=$PNBR \
    ITSO_Add_Prob_Nbr ITSO
These two events are defined in the itso.baroc file (see Appendix C.1.2, itso.baroc on page 383). The rules to process the ITSO_Add_Prob_Nbr event are shown in Figure 361.
rule: set_problem_number: (
    event: _itsoev of_class ITSO_Add_Prob_Nbr
        where [
            status: equals OPEN,
            ev_event: _ev_event,
            ev_hostname: _ev_hostname,
            ev_probnbr: _ev_probnbr
        ],
    reception_action: (
        all_instances(
            event: _event of_class _ev_event
                where [
                    hostname: equals _ev_hostname,
                    status: outside [CLOSED]
                ],
            _itsoev - 600 - 0
        ),
        bo_set_slotval(_event, sd_trouble_ticket, _ev_probnbr),
        re_mark_as_modified(_event, _),
        commit_action
    ),
This rule will look for the original TFNC event that caused the problem to be logged. The slot values that identify this original event are passed from the wzlogprob script to the ruleset in the ev_* slots. When the original event is found, the sd_trouble_ticket slot value is set to the problem number. This is done using the bo_set_slotval predicate. The problem number is passed from the wzlogprob script in the ev_probnbr slot. The re_mark_as_modified predicate tells TEC to update its display. This means that the Service Desk problem number is now held in the sd_trouble_ticket slot and may be viewed from the TEC console. The following section shows an example of automatically logging a problem.
9.3.4.2 Example of Automatically Logging a Problem
For this example, an IC_HOST_UNAVAILABLE event was generated. This was done by removing the cable connecting 192.168.254.112 to the 8271_remote device (see Figure 347 on page 291). After the event appeared in the TEC console and the timer had elapsed, a problem was logged in Service Desk. Figure 362 shows the Service Desk gateway log.
This shows a previous problem being opened, updated and closed. It also shows the problem for the current IC_HOST_UNAVAILABLE event being opened. It is problem number 00002094. The problem list in Tivoli Problem Management is shown in Figure 363.
The new problem number 00002094 is at the top of the list. It is shown in an OPEN status with a number of the problem attributes. Selecting the Resume button brings up the Problem Status dialog for this problem (Figure 364 on page 304).
This dialog shows a number of the fields that were passed in the EVProb call from the wzlogprob script: Severity is 2: Important, Critical from the SEVERITY: argument. System is Network from the SYSTEM: argument. Component is TFNC from the COMPONENT: argument. Item is IC_HOST_UNAVAIL from the ITEM: argument. Note that this has been truncated to 15 characters. Most Service Desk attributes have a maximum length of 15. The remaining fields have been created by Service Desk when the problem was logged. Selecting the Calls tab shows the calls currently assigned to this problem (Figure 365 on page 305).
There is only one call associated with this problem, the one that opened the problem. The dialog shows the remaining arguments passed with the EVProb call, including the description (DESCRIPTION: argument). A number of attributes, such as LocationID and CalledID, are defaults or derived from passed arguments. The problem would then be managed from within Service Desk. The TEC operator can check the problem number by opening the event in TEC. This is done by double-clicking the event or by selecting the event and clicking on the View Message... button. Figure 366 on page 306 shows the message view for the IC_HOST_UNAVAILABLE event.
The last slot value is sd_trouble_ticket and it contains the problem number. The next sections show the automatic closing of problems when the TFNC event is closed.

9.3.4.3 Automatically Closing a Problem in Problem Management
TFNC sends a special event, INCHARGE_PROBLEM_CLEAR, to close all other TFNC events. We modified the rules associated with this event so that:
1. When this event is received and it finds the matching open TFNC event, it checks to see if the sd_trouble_ticket slot contains a problem number.
2. If it does, a script is run to close the problem record in Service Desk.
3. If the close fails, an event is sent to the TEC console.
This flow is shown in Figure 367 on page 307.
Figure 367. Flow for TFNC Event Closing a Problem in Service Desk
The rest of this section details how this flow was achieved. When an INCHARGE_PROBLEM_CLEAR event is received, it is processed by the rules shown in Figure 368.
reception_action: (
    all_instances(
        event: _ic_event of_class INCHARGE_PROBLEM
            where [
                ic_class_name: equals _ic_class_name,
                ic_instance_name: equals _ic_instance_name,
                ic_event_name: equals _ic_event_name,
                sd_trouble_ticket: _sd_trouble_ticket,
                status: outside [CLOSED]
            ],
        _event - 86400 - 0
    ),
    change_event_status(_ic_event, CLOSED),
    _sd_trouble_ticket \= '',
    sprintf(_evstr, '%#d', [_ic_event]),
    exec_program(_ic_event,
        '/usr/local/Tivoli/ITSO/scripts/wzclrprob',
        '%s', [_evstr], 'NO')
),
The rules will do the following:
Find any open TFNC event (INCHARGE_PROBLEM) that matches the _ic_* slot values and was logged in the last 24 hours (86400 seconds).
Change the status of the event to CLOSED.
Check to see if the sd_trouble_ticket slot is not empty. If it is not empty, the event ID is converted to a string (for matching the problem in Service Desk) and the wzclrprob script is run to close the problem.
The wzclrprob script is similar to the wzlogprob script. The code related to closing the problem is shown in Figure 369. The complete script is provided in Appendix C.3.2, wzclrprob Script on page 389.
# Setup variables needed for EVProb call
OUTFILE=/tmp/EVProb.out
GTWY="wtr05368"
MOBJ=`echo $EVENT_CLASS | dd ibs=15 count=1 2>/dev/null`
UNID=$EV_KEY
DESC="$ic_class_name $ic_instance_name is now ok"
# *********************************************************************
# add a call to a problem using EVProb
# *********************************************************************
EVProb -h $GTWY -n $MOBJ -x $UNID -m CLOSE -a "DESCRIPTION:$DESC" > $OUTFILE 2>&1
RC=$?
The call to EVProb is similar to the call used to log the problem. There is an extra parameter, -m, that determines the problem action; when it is not specified, it defaults to an open action, so to close a problem you must specify -m CLOSE. The detailed list of arguments used in the logging call is not required. As with wzlogprob, the results of the call are checked. If the call failed, an event (ITSO_Prob_Close_Failed) is sent to the TEC console with the EVProb output in the msg slot. The problem number is left in the closed TFNC event for future cross-reference.

9.3.4.4 Example of Automatically Closing a Problem
For this example, the previous IC_HOST_UNAVAILABLE event was closed by reconnecting the cable. When the TFNC event in TEC changes to a CLOSED status, the problem is closed in Service Desk. The gateway log is shown in Figure 370.
The last entry shows the problem from the IC_HOST_UNAVAILABLE event, number 00002094, being closed. The Problem List dialog is shown in Figure 371.
Problem 00002094 is now showing as closed. If the Resume button was selected, the problem could be re-opened. We selected the View button to see the Problem Status dialog (Figure 372).
All the fields are the same as when the problem was logged, except for the Status field. The status message has been generated by Service Desk. It states that EVProb reported IC_HOST_UNAVAIL restored, external problem ID 539068424. This means that the service to the managed object, in this case the unavailable host, has been restored. The Calls tab shows the details from the close call (Figure 373 on page 310).
The description is as specified in the wzclrprob script. This completes the examples showing integration of TFNC with Service Desk. The next section discusses combining TFNC-based network management and application management.
Console Rule Builders Guide Version 3.6, SC31-8508) but this may be unworkable with many routers and thousands of network devices.
Thus combined network and application management is not easily achievable with the current version of TFNC.
9.4 Conclusion
This chapter has shown the NetView integration provided by the Tivoli Manager for Network Connectivity module. It provides correlation of network events by using the network topology in NetView and generating root cause events for TEC. We have shown how to install, start up and configure the product. We have also shown the use of the product. This has included:
A comparison of TFNC events against events sent to TEC and correlated from the NetView event forwarding (nvserverd).
An analysis of the performance of TFNC root cause analysis over different network complexities.
An example of automated actions for the TFNC root cause events. The example included problem logging and clearing in Tivoli Service Desk.
A discussion about combining application management with the network management provided by TFNC.
We found the Tivoli Manager for Network Connectivity to be a very useful product for network management. It was easy to install and did not require any configuration to produce valuable information. A concern is the time taken to produce the root cause event, but this can be tuned, and the time taken would be less than the time needed to sift through the events that would have been produced if TFNC was not used.
Problem Management Using Tivoli Service Desk and the TEC, SG24-5301
TEC Implementation Examples, SG24-5216
Tivoli Service Desk Installation Guide, GC31-5167 (formerly known as SA-Expertise for ESM 5.0 Installation Guide)
Tivoli Service Desk Tivoli Problem Management Network System Management Gateway Network Administrators Guide, GC31-5178 (formerly known as ExpertView 5.0 Network System Administrators Guide)
The Software Artistry Expert Advisor has been renamed to Tivoli Problem Management. Since we used an existing Service Desk environment, some of the figures show the Expert Advisor name.
10.1.1 Installation
The installation of Tivoli Service Desk is documented in Problem Management Using Tivoli Service Desk and the TEC, SG24-5301 which should be referenced prior to installing the NSM Commands.
10.1.1.1 install.esm
Installation involves loading and mounting the CD. The /cdrom/unix/install.esm script is run to perform the installation. The installation process is shown in the following figures (Figure 374 on page 314 through Figure 379 on page 316).
The first step of the installation is entering the product authorization key. If this is incorrectly entered, the installation aborts.
------------------------------------------------------------------------------
SA-EXPERTISE for ESM 5.0.1
------------------------------------------------------------------------------
SA-EXPERTISE for ESM : Product Selection List
1  SA-ESMBuild
2  SA-Expert Advisor
3  SA-Expert Advisor Server
4  SA-Expert Distributed Data Manager Client
5  SA-Expert Distributed Data Manager Server
6  SA-Expert Mail Agent
7  SA-Expert View Gateway
8  SA-Expert View NSM Commands
The product to be installed is 8 - SA-Expert View NSM Commands. This is entered at the prompt.
------------------------------------------------------------------------------
SA-Expert View NSM Commands 5.0.1
------------------------------------------------------------------------------
SA-Expert View NSM Commands : Option Selection List
2   HP Network Node Manager Integration Files
3   System Manager Integration Files
4   IBM NetView/AIX Integration Files
5   SunNet Manager Integration Files
6   Cabletron Spectrum Integration Files
7   Update Workstation Configuration
8   ESM Installation Guide
9   Gateway Administrator Guide
10  Network Administrator Guide
11  Network Specialist Guide
12  ExpertView Extensions to EA Guide
13  Technical Reference Guide
14  ExpertView NSM Unix Man Pages

Enter options for installation (2-14) [2 7] 4 7
Enter product installation directory [/usr/lpp/sai/evcmds]
The options we install are: 4 - IBM NetView/AIX Integration Files and 7 - Update Workstation Configuration. These are entered as shown in the figure. The install program then asks for the install directory. We took the default of /usr/lpp/sai/evcmds.
SA-Expert View NSM Commands
  IBM NetView/AIX Integration Files
  Update Workstation Configuration
(EOF):
Ready to install [yes]
The install program displays the options we selected. When we press Return at the (EOF): prompt, the Ready to install prompt is displayed. To confirm and continue we press Return.
------------------------------------------------------------------------------
SA-Expert View NSM Commands 5.0.1
------------------------------------------------------------------------------
Installing.
Installing NetView for AIX interface...
Extracting files from archive aixnv.tar...
Enter the TCP/IP host name of the ExpertView gateway [rs600033t] wtr05368
Creating symbolic links...
Registering objects with NetView for AIX...
Adding fields to NetView for AIX database...
Updating cshrc.esm and profile.esm configuration files...
Updating cshrc.esm and profile.esm configuration files...
Extracting files from archive evw.tar...
Creating SA-Script "parse" file parse_evw...
Creating SA-Script "run" file evw_wwprobs...
Creating SA-Script "run" file evw_wwnodes...
Creating SA-Script "run" file evw_npquery...
Adding product section to saiapp.ini ...
Product installation SUCCESSFUL.
This figure shows the steps the install process goes through. The aixnv.tar file contains all of the bin, conf, fields, help, lrf, and registration files. The process prompts for the gateway hostname. The host we used was wtr05368. After entering the hostname, the installation continues. The remaining messages indicate the installation steps. These include a number of customizations to NetView that are covered in the next section. The last comment says that the install is adding product details to the saiapp.ini file. This file is a Windows ini file and is not used on UNIX.
------------------------------------------------------------------------------
SA-EXPERTISE for ESM 5.0.1
------------------------------------------------------------------------------
Updating cshrc.esm and profile.esm configuration files...
Updating cshrc.esm and profile.esm configuration files...
Display Message Log file [yes]
-- SA-Expert View NSM Commands
Installing NetView for AIX interface...
Extracting files from archive aixnv.tar...
OVInstall:Customizing configuration files...
Registering objects with OpenView...
Adding fields to NetView for AIX database...
Updating cshrc.esm and profile.esm configuration files...
Updating cshrc.esm and profile.esm configuration files...
Extracting files from archive evw.tar...
Creating SA-Script "parse" file parse_evw...
Creating SA-Script "run" file evw_wwprobs...
Creating SA-Script "run" file evw_wwnodes...
Creating SA-Script "run" file evw_npquery...
Adding product section to saiapp.ini ...
Updating cshrc.esm and profile.esm configuration files...
Updating cshrc.esm and profile.esm configuration files...
(EOF):
Display Error Log file [yes]
-- SA-Expert View NSM Commands
(EOF):
After the install, the message and error log files can be viewed by accepting the default options at the prompts as shown above. NetView must be recycled after the installation to use the NSM Module functions.

10.1.1.2 install.esm NetView Customization
The NetView customization includes:
Adding a field to the NetView database.
Registering the NSM Module daemons with NetView.
Updating the cshrc.esm and profile.esm files that contain environment variable definitions needed to run NSM Module commands.
Extracting some dialog and command definitions from the evw.tar file and creating scripts.
These are detailed below.
Additional Field in the NetView Database A new file, /usr/OV/fields/C/ExpertView, is added and used to define a new field to the NetView database. The file is shown in Figure 380 on page 317.
New Registration File The installation creates a new registration file, /usr/OV/registration/C/ExpertView. This file defines the menu options added to the NetView maps, which give access to some Service Desk functions. Addition of Daemons to NV The NSM module includes two daemons: EVEventd and EVQueryd. The EVEventd process is the event handler. It registers with trapd to receive certain traps, performs limited correlation and sends problem log/update requests to Service Desk. The EVQueryd process manages the integration between the NetView pull-down menus and Service Desk. The install process registers these daemons with NetView.
Figure 381 shows the two lrf files.
rs600033t:/usr/OV/lrf > cat EVEventd.lrf
# lrf for ExpertView event daemon
EVEventd:/usr/lpp/sai/evcmds/nvaix/bin/EVEventd:OVs_YES_START:ovwdb,trapd:-h,wtr05368 /usr/OV/conf/C/nodes.conf:OVs_WELL_BEHAVED::
rs600033t:/usr/OV/lrf > cat EVQueryd.lrf
# lrf for ExpertView query daemon
EVQueryd:/usr/lpp/sai/evcmds/nvaix/bin/EVQueryd:OVs_YES_START:ovwdb,::OVs_WELL_BEHAVED::
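Because EVEventd and EVQueryd are registered as well-behaved NetView daemons, they can be checked and cycled with the standard NetView commands once NetView has been recycled, for example:

ovstatus EVEventd EVQueryd   # confirm the NSM daemons are running
ovstop EVEventd EVQueryd     # stop them if necessary
ovstart EVEventd EVQueryd    # ovstart starts the ovwdb and trapd prerequisites first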
Environment Variable Files The install process creates two environment files: cshrc.esm for C shell environments and profile.esm for Bourne/Korn/Posix shell environments.
Figure 382 shows the profile.esm file.
rs600033t:/usr/lpp/sai > pg profile.esm
PATH=$PATH:/usr/lpp/sai/evcmds/nvaix/bin:/usr/lpp/sai/esmbin
export PATH
LIBPATH=$LIBPATH:/usr/lpp/sai/evcmds/nvaix/bin
export LIBPATH
SAIPATH=$SAIPATH:/usr/lpp/sai/evcmds:/usr/lpp/sai/esmicons
export SAIPATH
This file may not contain all entries you need. We found that some LIBPATH and PATH entries were missing. We had to add:
/usr/lpp/sai/aixASE/bin to LIBPATH.
/usr/lpp/sai/aixASE/lib to LIBPATH.
/usr/lpp/sai/aixASE/bin to PATH.
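One way to make these additions, following the format of the shipped profile.esm shown above (the equivalent cshrc.esm changes would use setenv), is:

LIBPATH=$LIBPATH:/usr/lpp/sai/aixASE/bin:/usr/lpp/sai/aixASE/lib
export LIBPATH
PATH=$PATH:/usr/lpp/sai/aixASE/bin
export PATH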
The value must match that used for the rest of the Service Desk installation.

10.1.2.2 Creation of Nodes List
For the automatic problem logging function to work, the nodes to be managed must be defined. When the NSM module was installed, the isManagedByEV field was added to the database. If this field is set to TRUE for a node, the node will be managed by the NSM module. This field can be set by the EVSetFlag program. It uses a filter condition in the nodes.conf configuration file to determine which nodes should have the field set. To show the managed nodes, a NetView collection can be defined. The steps to define the managed nodes and create a collection are detailed in the following sections.

10.1.2.3 Building nodes.conf
The nodes.conf file can be found in $SAI_ROOT/evcmds/<NMP>/conf/C, where <NMP> is the network management product. Our file is in /usr/lpp/sai/evcmds/nvaix/conf/C. The default nodes.conf file is shown in Figure 384 on page 319.
FALSE

(*
TRUE
*)

(*
(* Basic network infrastructure is managed by ExpertView *)
isHub OR isRouter OR isBridge OR isRepeater
*)

(*
(* Example of more complicated expression *)
vendor = "IBM" OR ( isPrinter AND vendor = "Hewlett-Packard" )
*)
This file shows four different filters:
1. The first filter has the entry: FALSE. This means no nodes will have the isManagedByEV field set. This is the active value. (The other three are commented out.)
2. The second filter has the entry: TRUE. This will turn on the isManagedByEV field for all nodes.
3. The third filter has the entry: isHub OR isRouter OR isBridge OR isRepeater. This will turn on the isManagedByEV field for only those node types.
4. The final filter shows a more complex rule with the entry: vendor = "IBM" OR (isPrinter AND vendor = "Hewlett-Packard"). The isManagedByEV field will only be set for IBM nodes or printers from Hewlett-Packard.
For the examples in this chapter, we were only interested in the routers, so we set the file to have only one entry: isRouter.
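As a sketch, using the comment style of the default file shown above, our router-only nodes.conf therefore reduces to a single active expression:

(* Only routers are managed by the NSM module *)
isRouter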
Execute EVSetFlag Once the nodes.conf file is set up, the EVSetFlag program is run. It does not produce any output if successful. Once the gateway configuration is complete (see the next section), the NSM module will create problems in Service Desk based on the managed nodes list.
The true indication that this program has worked comes when you create a collection in NetView.
Create a Collection A collection is created in the standard way. The Add Collection dialog is shown in Figure 385 on page 320.
The Definition 1 field shows the definition required to show the NSM module managed nodes. It contains a boolean check for isManagedByEV=TRUE. The resulting SERVICE_DESK collection is shown in Figure 386 on page 321.
It shows all routers that NetView on rs600033t is managing.

10.1.2.4 Gateway Configuration Option
The final configuration is done on the gateway node. In our case this was wtr05368. To open the Gateway System Configuration dialog, select Edit -> System Configuration... from the Gateway menu bar. There are tabs for:
General Options
Options for Opening a problem
Options for Closing a problem
Callback Options
Error Options
Terminology Options
We are only concerned with the General and Open options.

10.1.2.5 General Options
The General options screen is shown in Figure 387 on page 322.
The options specified on this screen are:
The user ID to use for logging problems. Problems for our system will be logged against the user EXAV.
The host to query for node status. This is the NetView node, which is rs600033t in our case.
The character EVProb will use to separate arguments. This was left as the semi-colon.
When to automatically escalate a problem. In our case it is set to escalate when three faults are logged.
When to automatically compress a problem. In our case it is set to compress after ten minutes.
The EVTrap Group Name to apply. We are not using any trap groups, so nothing is specified.
Selecting the Open tab brings up the Open Options screen.

10.1.2.6 Open Options
The Open Options screen is shown in Figure 388.
This screen sets the default values for opening a problem or call through the gateway. The values we have specified are:
Caller, the person or process logging the call, is NETV.
Problem Type is COMMUNICATIONS.
Call Code is Incoming Call.
Default Severity is Important, Critical, which is 2.
System is Network.
Component is Router.
A default Item is not defined.
An incoming call will also notify the node's contact on automatic problem open.
This completes the standard installation and configuration. Calls should now be logged by the NSM module. Node-specific options can also be specified. This is covered in the following section.
This dialog shows a number of default values that can be set for this node. However, this dialog does not allow the Node Specific Problem Defaults to be set. To access the additional dialog, you must enter the Network Nodes dialog from the Gateway dialog. Select the Edit -> Network Nodes menu option. This will present a list of network nodes (that is, the entries in the EV_NODE_INFO table) as shown in Figure 390.
Selecting a node and selecting the Edit button opens the Network Nodes dialog, as shown in Figure 391.
This dialog is basically the same as the one from NetView (Figure 389 on page 324) but it has the additional Problem Configuration button. Selecting this button opens the Problem Configuration dialog as shown in Figure 392 on page 325.
This is very similar to the Gateway System Configuration dialog (Figure 387 on page 322) that was used to set the default values for all problems being logged through the gateway. This dialog sets the default values for all problems logged for this node. These settings override the gateway system configuration settings. This concludes the configuration that can be done for the NSM module and completes the Installation and Configuration section of this chapter. The next section of the chapter shows an example of automated problem management from the NSM module.
NSM gateway host, so it was easier to monitor both the NSM gateway log and the Problem Management window.
Figure 393. TSD New Problem: NSM Gateway Log (Problem 2113)
Figure 394 shows that alarms have been issued for the new problem.
Select Problem --> Work with Problem to display the problem inquiry window (Figure 395 on page 327).
This dialog is used to specify search criteria for the problem list display. You can just select the OK button to list all problems and then make a selection, or you can narrow the search. In this case we entered the problem number, since it is known, and selected the OK button to display the Problem List window (Figure 396).
As we specified the problem to list, it is the only one shown and it is selected. If there had been multiple problems listed, we would select the one we want to work with. To work with the selected problem we click on the Resume button to display the Problem Status window (Figure 397 on page 328).
From this window we can view the details of the problem, all calls concerning this problem (initial or recurring opens from NetView) and the audit trail. In our case we fixed the problem by reconnecting the WAN cable to the router, typed in the solution in the status area, and selected both check boxes (Notify all Contacts and Make Active Solution). In this way the solution is registered and all users who have been either working or forwarding this problem are informed that the problem is closed. To complete the resolution of the problem, the Resolve button is clicked. This will close the problem in the Problem Management application.
As there is existing history for similar problems, we can use Service Desk's diagnosis facilities to speed the resolution of this problem. In a customer site, this feature would be particularly useful. To use the diagnosis facility for this problem, we click the Preview button (in the Diagnosis group of buttons). A search of the Problem Management database is performed and the number of matches found is displayed on the Diagnosis buttons. In this case there are two matches for Common Problems (C/P). To see the problems that have similar details to our problem, we click on the C/P (Common Problems) button. The common problems are shown in Figure 399 on page 330.
Each problem listed contains details and solutions that may assist in resolving the new problem. In our case the second one is indeed the solution needed, so we select it and click the Activate button to use this solution. This copies the solution to our problem and returns us to the Problem Status window for the new problem (Figure 400). This will not perform the action described in the solution. Actions still have to be taken to resolve the problem.
As with the previous example, we select both the Notify all Contacts and Make Active Solution check boxes and then click the Resolve button to finish and close the problem.
Figure 401. TSD: Problem 2116 Opened - Gateway Log with Multiple Calls
Opening the problem (see Figure 402) and selecting the Calls tab shows both calls. The two calls were logged in close sequence, one for the WAN link, one for the LAN link of the 2210_REMOTE.
works from the last time a problem was closed for the affected node and is used to determine if a problem should be reopened. Figure 403 shows the gateway log for a re-opened problem.
The last line on the screen is the entry for problem 2117 being re-opened. This is for the WAN adapter on 2210_LOCAL dropping out of service. When we open problem 2117, we see that the problem is OPEN again (Figure 404).
We select the Calls tab to see the calls linked to the problem (Figure 405 on page 333).
There are three separate calls. The first one corresponds to the initial problem generation and the other two are reopenings. We can use the Audit Trail facility to get a better idea of what has gone on. When we select the Audit Trail tab, we see the audit trail for this problem (Figure 406).
This shows how the problem was opened, closed and then re-opened.
Figure 407. TSD Problem: NSM Gateway Log - Problem 2115 Closed Automatically
Here we see a message on the NSM gateway log closing problem 2115. This was received from the NetView host after the WAN cable was reconnected. This automatically closed the open problem in Problem Management.
The menu options are:
Open Problems for Node - List all open problems for the NetView node that is currently selected on the submap.
All Problems for Node - List all problems for the NetView node that is currently selected on the submap.
Node Configuration - Access the Node Configuration dialogs to specify the node-specific problem logging defaults.
Node/Problem Query - Open the Problem search parameters dialog (Figure 395 on page 327).
Work With Problems - Open the problem list for all problems (that is, not restricted to node).
Work With Nodes - Open the Node Configuration dialogs for all nodes.
The Node Configuration is described in 10.1.3, Configuration of Node-Specific Options on page 323. The following section shows the All Problems option. All other options are similar, but just start different Service Desk Problem Management (Advisor) dialogs. The menu options can also be run from the command line. See the NSM Gateway EventView registration file (/usr/OV/registration/C) for command line equivalents to the menu options.
The two problems associated with 2210_LOCAL are listed (numbers 2097 and 2110). Selecting problem 2097 and clicking the Edit button opens the problem details dialog, that is, the Problem Status Notebook dialog, as shown in Figure 410.
This is basically the same dialog as can be accessed from the Service Desk Advisor screens. Compare it to Figure 404 on page 332. This dialog has the node listed and it does not have the check boxes that Figure 404 has. There are differences in the other dialogs available from NetView, but these are trivial. However, the key here is that much of the problem management can be run by the NetView operator from a NetView window.
336
10.4 Conclusion
This chapter has discussed the NetView to Service Desk integration. We have detailed the installation of the module and used a number of examples to show the two mechanisms for logging problems from NetView events:
EVProb, a command line interface
The NSM Gateway (ExpertView) NSM commands, which provide basic correlation of events and automatic problem logging
The integration also provides access to Problem Management from the NetView menus. Thus, in a NetView-centric environment, Problem Management can be integrated very neatly. In a TEC-centric environment, use of the EVProb command can also provide a neat integration.
337
338
339
Figure 411. Our Maestro Network (labels: Maestro; Tivoli: TEC, Inventory Slave & NV Agent; RIM Host; rs600028)
All three nodes have Maestro running. The TMR server node (rs60008) is also the Maestro master node. The scenario involved using Maestro to back up the Tivoli software. This involved a coordinated backup across the three nodes, including a database backup (wbkupdb) on rs60008 and a backup of the /usr/local/Tivoli directories on all three nodes. Tivoli must be shut down for these backups to occur, so it cannot be used to monitor and manage the backup process. Thus the NetView console was used to manage the scheduled task. Prior to installing Maestro/NV, a job network was created to perform the backup function. This is shown in Figure 412 on page 341:
340
Figure 412. Tivoli Backup Job Network (TIVBU_BKUPDB and TIVBU_SHUTDOWN on rs60008, TIVBU_TIVHOMEBU on rs60008, rs600015, and rs600028, and TIVBU_STARTUP)
The jobs are as follows:
TIVBU_BKUPDB - This job performs a backup of the Tivoli object database using the wbkupdb command. This job is the first in the schedule and is started by a timer.
TIVBU_SHUTDOWN - This job uses the odadmin shutdown all command to stop Tivoli on all nodes. This job is triggered by the successful completion of the BKUPDB job.
TIVBU_TIVHOMEBU - This job takes a backup of the Tivoli home directory, /usr/local/Tivoli, on each of the three nodes. The TIVHOMEBU job on rs60008 is triggered by the successful completion of the SHUTDOWN job. On the other nodes, it is triggered by a timer. All three TIVHOMEBU jobs are dependent on the successful completion of the SHUTDOWN job on rs60008.
TIVBU_STARTUP - This job restarts Tivoli on all nodes by using the odadmin start all command. It is triggered by the TIVHOMEBU job on rs60008, but it won't start executing until all three TIVHOMEBU jobs have completed.
There are three separate schedules defined, one for each node. All are called TIVBU. This batch network was defined and tested prior to installing Maestro/NV. The next section describes the Maestro/NV installation.
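For reference, the following is a minimal sketch of what the scripts behind these four jobs might look like. Only the wbkupdb, odadmin shutdown all, and odadmin start all commands come from the description above; the script names, the use of tar for the file system backup, and the /backup target directory are illustrative assumptions.

# tivbu_bkupdb.sh (assumed name) - back up the Tivoli object database
#!/bin/sh
. /etc/Tivoli/setup_env.sh      # source the Tivoli environment
wbkupdb                         # back up the object database (options omitted)

# tivbu_shutdown.sh (assumed name) - stop Tivoli on all nodes
#!/bin/sh
. /etc/Tivoli/setup_env.sh
odadmin shutdown all

# tivbu_tivhomebu.sh (assumed name) - back up the Tivoli home directory locally
#!/bin/sh
tar -cf /backup/tivoli_home.tar /usr/local/Tivoli   # tar and /backup are assumptions

# tivbu_startup.sh (assumed name) - restart Tivoli on all nodes
#!/bin/sh
. /etc/Tivoli/setup_env.sh
odadmin start all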
341
All installations use the /usr/local/maestro/OV/customize script. You can view this script if you want to see what it actually does, and it can also be run with a -noinst option so that the changes can be reviewed without being applied. If there are problems with the installation, it can be backed out using the /usr/local/maestro/OV/decustomize script.
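As a quick illustration (the exact invocations are an assumption based on the option names above), a review-only run and a back-out might look like this:

# Review what customize would change, without installing anything
/bin/sh /usr/local/maestro/OV/customize -noinst

# Back out the NetView integration changes if necessary
/bin/sh /usr/local/maestro/OV/decustomize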
[maestro@:/usr/local/bin] > conman shutdown
MAESTRO for UNIX (AIX)/CONMAN 6.0 (3.36.1.44.1.14)
(C) Tivoli Systems Inc. 1998
(X) Installed for Tivoli Systems Inc. under group DEFAULT.
Schedule (Exp) 11/23/98 (#10) on RS600015.  Batchman LIVES.  Limit: 10, Fence: 0
shutdown
[maestro@:/usr/local/bin] >
2. As user root (for the remaining steps), the customize script was run: /bin/sh /usr/local/maestro/OV/customize. The results of the customization are shown below.
rs600015:/tmp > /bin/sh /usr/local/maestro/OV/customize
modifying files for maestro in /usr/local/maestro/OV/..
Copying the appropriate application registration file.
Copying sample filters
Copying the field registration
Compiling the field registration
Making the help directories
Copying the dialog help texts
Copying the task help texts
Copying the function help texts
Copying the local registration
ovaddobj - Static Registration Utility
Successful Completion
Adding OVW path to maestro .profile
maestro .profile was modified
the previous version is in /usr/local/maestro/OV/../.profile.old
Figure 414. Customize for NetView Server Output
The NetView integration tasks shown in Figure 414 are as follows:
Create a Maestro.app registration file in directory /usr/OV/registration/C.
Add filters to the /usr/OV/filters directory.
Create Maestro.fields in the directory /usr/OV/fields/C.
Register the Maestro daemon, Unison_Maestro_Manager (mdemon), with the NetView local registration using the ovaddobj command.
Alter the .profile file for the user maestro to add the /usr/OV/bin directory to the PATH.
Adding traps to trapd.conf
Trap uTtrapReset has been added.
Trap uTtrapProcessReset has been added.
Trap uTtrapProcessGone has been added.
Trap uTtrapProcessAbend has been added.
Trap uTtrapXagentConnLost has been added.
Trap uTtrapJobAbend has been added.
Trap uTtrapJobFailed has been added.
Trap uTtrapJobLaunch has been added.
Trap uTtrapJobDone has been added.
Trap uTtrapJobUntil has been added.
Trap uTtrapJobCancel has been added.
Trap uTtrapSchedAbend has been added.
Trap uTtrapSchedStuck has been added.
Trap uTtrapSchedStart has been added.
Trap uTtrapSchedDone has been added.
Trap uTtrapSchedUntil has been added.
Trap uTtrapGlobalPrompt has been added.
Trap uTtrapSchedPrompt has been added.
Trap uTtrapJobPrompt has been added.
Trap uTtrapJobRecovPrompt has been added.
Trap uTtrapLinkDropped has been added.
Trap uTtrapLinkBroken has been added.
Trap uTtrapDmMgrSwitch has been added.
Figure 415. Customize for NetView Server - Adding Traps
Figure 415 shows the traps that are added to the trapd configuration file (/usr/OV/conf/C/trapd.conf). These are the traps for the various Maestro status changes. The names give an indication of each trap's purpose. For example, the uTtrapJob* traps are for Maestro managed jobs. There are Maestro job traps for: abended, failed, launched, done, until passed, cancelled, prompt sent, and recovery prompt.
343
IMPORTANT
Be sure to add an appropriate entry to each maestro cpus rhost
This could be "<manager> <user>" if a user other than maestro will be running the management station.
Or "<manager>" if the maestro user will be managing from the management station.
see documentation on remsh and .rhosts for further information.
will install the agent reporting to rs600015
changing ownership and permissions on agent
Processing the agent files
/etc/snmpd.peers has been replace the old file is /etc/snmpd.peers.old
/etc/snmpd.conf has been replace the old file is /etc/snmpd.conf.old
Recompile the defs for this machine
/etc/mib.defs has been replace the old file is /etc/mib.defs.old
Refreshing snmpd
0513-095 The request for subsystem refresh was completed successfully.
Setting up the agent files
Copying the configuration files.
/usr/local/maestro/OV/../StartUp has been replaced
changing ownership and permissions on programs
You should now restart Netview/Openview and Maestro
Figure 416. Customize for NetView Server Output
Figure 416 shows the final configuration tasks. The IMPORTANT note details changes to the .rhosts file. This is covered in 11.3.3, Maestro/NV Setup on page 346. The remaining changes are for the configuration of SNMP and the agent for Maestro. These include:
Creating a new copy of the /etc/snmpd.peers file to configure the new smux agent.
Creating a new copy of the /etc/snmpd.conf file to add a new smux agent and define the destination node for traps.
Using the mosy command to compile the /usr/samples/snmpd/smi.my SMI file and the Maestro.mib MIB file into an snmpinfo object file, /usr/local/maestro/MAGENT.defs. This is then copied over the /etc/mib.defs file, which adds the new Maestro MIB.
Refreshing the SNMP daemon with the new definitions.
Setting the owner and permissions for:
/usr/local/maestro/bin/magent
/usr/local/maestro/*.conf
/usr/local/maestro/OV/maestroEvents and decustomize (the uninstall script)
/usr/local/maestro/bin/mdemon and muser
Altering the /usr/local/maestro/StartUp file to start up the Maestro/NV agent (magent).
A sketch of the equivalent manual MIB-compilation commands follows.
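For reference, a rough manual equivalent of the MIB compilation and SNMP refresh steps above might look like the following on AIX; this is a sketch of what the customize script automates, not a transcript of the script itself.

# Compile the SMI file and the Maestro MIB into an snmpinfo object file
mosy -o /usr/local/maestro/MAGENT.defs /usr/samples/snmpd/smi.my Maestro.mib

# Keep the old definitions, then replace /etc/mib.defs with the new file
cp /etc/mib.defs /etc/mib.defs.old
cp /usr/local/maestro/MAGENT.defs /etc/mib.defs

# Refresh the SNMP daemon so it picks up the new configuration
refresh -s snmpd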
344
3. The Maestro processes were restarted using /usr/local/maestro/StartUp. See Figure 418 for the output from this command.
4. The Maestro/NV daemon (mdemon) was started using ovstart Unison_Maestro_Manager.
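A simple way to confirm that the daemon registered in step 4 is running under NetView control is the standard ovstatus command (shown here as an illustration):

# Check the status of the Maestro/NV daemon registered with ovaddobj
ovstatus Unison_Maestro_Manager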
rs600028:/ > /bin/sh /usr/local/maestro/OV/customize -manager rs600015
modifying files for maestro in /usr/local/maestro/OV/..
will install the agent reporting to rs600015
changing ownership and permissions on agent
Processing the agent files
/etc/snmpd.peers has been replace the old file is /etc/snmpd.peers.old
/etc/snmpd.conf has been replace the old file is /etc/snmpd.conf.old
Recompile the defs for this machine
/etc/mib.defs has been replace the old file is /etc/mib.defs.old
Refreshing snmpd
0513-095 The request for subsystem refresh was completed successfully.
Setting up the agent files
Copying the configuration files.
/usr/local/maestro/OV/../StartUp has been replaced
You should now restart Netview/Openview and Maestro
Figure 417. Customize Managed Node Output
The output shows the results of installing the SNMP agent for Maestro. As with the server setup, the StartUp file is modified to start the new process (magent).
3. The Maestro processes were restarted using /usr/local/maestro/StartUp. The results of the StartUp command are shown in Figure 418.
rs600028:/ > /usr/local/maestro/StartUp
MAESTRO for UNIX/STARTUP 6.0 (C) Unison Software Inc. 1997
UNISON UNIX (AIX)/NETMAN 6.0 (3.25.1.13.1.7)
(C) Tivoli Systems Inc. 1998
Program patch revision: 3.25.1.13.1.7
Netman installed under group DEFAULT. [12100.57]
MAESTRO for UNIX (AIX)/MAGENT 6.0 (1.66.1.8.1.8)
(C) Tivoli Systems Inc. 1998
(X) Installed for Tivoli Systems Inc. under group .
Figure 418. Maestro StartUp Output
345
The second step is to add this user to the Maestro security file. The Maestro UNIX User's Guide V6.0, GC31-5136 implies that on UNIX nodes the root user has access to everything. However, when we checked the security file (using the dumpsec command), we found that only the maestro user was contained in the file. The file had to be modified to add root and uploaded using the makesec command. This is described in detail in the Maestro UNIX User's Guide V6.0, GC31-5136.
11.3.3.2 Defining Maestro Maps in NetView
The installation notes say that you should use the File -> Describe Map... option from NetView to change the map settings. We found that you cannot change some of the settings on the default map this way, so we created a second NetView map called Maestro by selecting File -> New Map, as shown in Figure 419 on page 347.
346
This option opens the New Map dialog (see Figure 420).
347
After a map name is entered, the Maestro - Unison Software(c) item is selected from the Configurable Applications list. Next the Configure For This Map button is clicked. This starts the Configuration dialog (see Figure 421).
The Enable Maestro for this map option must be set to true. All other options, which define the commands run under the Maestro menu items, are left at their default values. To complete the addition of the new map, click on OK. The new map called Maestro is shown in Figure 422 on page 349.
348
The new icon labelled Tivoli Systems Inc. represents the new addition.
11.3.3.3 Loading the Maestro MIB
The new MIB, Maestro.mib, is added in the same way as any other MIB. This is done by selecting Options -> Load/Unload MIBs -> SNMP from the NetView menu (see Figure 423).
349
The new MIB is not in the list displayed on this dialog, so select Load. This starts the Load MIB From File dialog (Figure 425).
Enter the MIB file name /usr/OV/snmp_mibs/Maestro.mib in the MIB Files to Load field, then click on OK. A message, Loading Maestro.mib MIB, will appear in the messages field. When control returns to the Load/Unload MIBs dialog, the Maestro.mib entry is the last one in the list. Exit by clicking on Cancel.
11.3.3.4 Restart Maestro on All Systems
Once the changes have been made, Maestro has to be restarted. To do this, log on to the Maestro master node, rs60008 in our case, and issue the command conman start @. The @ symbol is the Maestro wildcard for all nodes.
350
After the installation of Maestro/NV this map has an icon labelled Tivoli Systems, Inc. (c). This application symbol represents all discovered Maestro networks. The color of the icon represents the aggregate status of all the Maestro networks. Opening this symbol shows all the Maestro networks as shown in Figure 427 on page 352.
351
Figure 427. Maestro Networks Submap with RS60008 Maestro Network Icon
There is only one Maestro network in this application submap, labelled RS60008:Maestro. If there were multiple Maestro networks there would be multiple icons, each labelled with the specific Maestro network name. The icon color represents the status of all CPUs and links that comprise the Maestro network. Opening the Maestro network icon shows all CPUs and links in the network (Figure 428).
352
This figure is equivalent to the diagram of our Maestro network (Figure 411 on page 340). There are three Maestro nodes, with the center one (rs60008) being the master node. If there were more slave nodes, the slaves would be arranged in a star pattern around the master node. Each node symbol represents the job scheduling on that CPU. The color represents the status of the job scheduling. If a trap is received indicating a change in status of a job scheduling component, the icon color will be changed. The links represent the Maestro CPU links with the color representing the status of the CPU link. There are no submaps to these maps. The Maestro process information is found from the node symbol under the main IP submap.
Double-clicking on this icon opens the rs60008 submap (see Figure 430 on page 354).
353
This submap shows three icons:
The Maestro CPU icon, as shown on the Maestro network submap.
The Maestro software on the CPU. This icon represents all Maestro processes running on the node; its color indicates the aggregate status of all the monitored processes on the Maestro CPU.
The physical interface for this node.
Opening the Maestro software icon shows all Maestro processes on the node (see Figure 431 on page 355).
354
This node, rs60008, has the standard Maestro daemons: batchman, jobman, magent, mailman and netman. As it is also the Maestro master it has writer daemons for each slave. There is an icon for each daemon. The color of the icon indicates the status of the process. The magent and netman icons are raised. Clicking on the NETMAN symbol performs the startup action on the CPU. Clicking on the MAGENT symbol starts the magent process on the CPU. The next sections show an example of managing processes with these submaps.
355
The MAILMAN icon is red, indicating that the process has abended. The BATCHMAN and JOBMAN icons are yellow, indicating that the processes are stopped. The processes can be restarted by:
Running the conman start command on the node (or from the Maestro master)
Using the pull-down menus that were added with the Maestro/NV installation
Selecting an icon with the right mouse button shows the pull-down menu (see Figure 433 on page 357).
356
Under the Maestro option are the following items:
View - Opens a child submap for a Maestro/NV symbol.
Master conman - Runs the conman program on the Maestro master CPU. This can be used to run the start command.
Acknowledge - Acknowledges the status of selected Maestro/NV symbols. This resets the icon to green but doesn't resolve the problem. You don't need to use this option, as once the processes are running again and polled, the symbol will be set back to a normal status.
Conman - Runs conman on the selected CPU(s).
Start - Issues a conman start command for the selected CPU(s).
Down (stop) - Issues a conman stop command for the selected CPU(s).
Start Up - Executes the startup script on the selected CPU(s).
Re-discover - Locates new agents and new Maestro objects.
Remove Network - Removes the Maestro objects from NetView.
Many of these items are grayed out when selecting this menu for the process icon. This is because many options are not relevant to a Maestro process. One of the grayed-out items is the start command, which we need to use. To access the start command we need to go up a level and select the rs60008 CPU icon as shown in Figure 434 on page 358.
357
The Start menu item can now be used to restart all Maestro processes on the CPU. The next section shows how to access Maestro from within NetView.
358
Selecting the Schedules icon starts the Schedules dialog (see Figure 437 on page 360).
359
Here we can see our three example schedules, all called TIVBU, and some other schedules. The three TIVBU schedules are in a HOLD state. The RS60008#TIVBU schedule is waiting for the timer to expire at 12:15 and the other two are waiting for the RS60008#TIVBU to trigger them and for the timer to expire (also at 12:15). This interface and the Maestro topology submap can be used to monitor schedule execution. This is shown with the continuation of the example in the next section.
360
This indicates problems with jobs or schedules on two nodes: rs60008 and rs600015. Opening the Maestro master console from either red icon shows the Schedule dialog (see Figure 439).
Figure 439. Maestro Abended and Stuck Schedules in the Schedules Dialog
The RS60008#TIVBU schedule is showing as STUCK because it is waiting for the RS600015#TIVBU schedule to finish. This is why the rs60008 icon is red. The RS600015#TIVBU schedule is in an ABEND state. To see more detail, the schedule is selected and the right mouse button brings up the Schedule menu (see Figure 440 on page 362).
361
Selecting the Jobs... option starts the Jobs dialog (see Figure 441).
This schedule only has one job, TIVBU_TIVHOMEBU, which is in an ABEND state. As this is the only job and the job options are set to stop when the job abends, the schedule is also in an ABEND state. This is why the NetView icon for this node (in the Maestro topology) is set to red. After fixing the problem with the job, it is restarted. See the Maestro UNIX User's Guide V6.0, GC31-5136 for details of resolving and restarting jobs. The schedule after the restart is shown in Figure 442 on page 363.
362
After the job runs successfully, the RS600015#TIVBU and RS60008#TIVBU schedules complete successfully (see Figure 443).
Note that when the schedules complete successfully after initial failures, the NetView icons are not restored to a normal green state. The display looks the same as Figure 438 on page 361. This is because the Maestro/NV integration doesn't change the symbol state on the incoming success traps. This behavior could be configured by a user.
363
364
#--------------------------------------------------------------------------# # This script will mirror the Tivoli Framework Profile Manager hierarchy # # to the Tivoli NetView SmartSets. Each SmartSet name and its members will # # comply with the ProfileManager name and its subscribers. # #--------------------------------------------------------------------------# # # Create list of the available ProfileManagers # system("wlookup -a -L -r ProfileManager > data/pr_mgr.lst"); # # Open file and start to read it line by line # open(filehandle, "data/pr_mgr.lst"); $_ = ""; while ($_ == eof(filehandle))
Figure 444. MirrorPM.pl Part 1
365
{ $_ = <filehandle> until (/\n/ || eof(filehandle)); chop; chop; print "\ProfileManager: $_ \n"; $pm = $_; # # Translate spaces and slashes to underscore in profilemanager names! # tr" /"__"; # # for each Profile manager create a subscribers list # (Name of each file is the name of the Profile manager # (Remember to put @ before each ProfileManager name.) # system("wgetsub @$pm \> data/$_ "); open(file2, "data/$_"); $old = $_; $_ = ""; $var = "\"(IN \\\""; while ($_ == eof(file2)) { $_ = <file2> until (/\n/ || eof(file2)); chop; $var = $var.$_." "; $_ = ""; } chop($var); close(file2); $_ = $old; $var = $var."\\\")\""; # # Create the SmartSets for each profile manager and load # it with the subscribers # # SmartSetUtil a name desc rule # # ex: smartsetutil a "nv" "nv" "(IN "rs600015 rs600028")" # if ($var ne "\"(IN \\\\\")\"") {system("smartsetutil a $_ $_ $var ")} else {print " NO subscribers, Collection NOT created. \n"}; $_ = ""; } close(filehandle);
Figure 445. MirrorPM.pl Part 2
366
A.1.2 DeletePM.pl
This script will delete the Tivoli Framework profile manager hierarchy from the Tivoli NetView SmartSets.
#--------------------------------------------------------------------------#
#                                                                           #
# This script will delete the Tivoli Framework Profile Manager hierarchy    #
# from the Tivoli NetView SmartSets.                                        #
#                                                                           #
#--------------------------------------------------------------------------#
#
# Create list of the available ProfileManagers
#
system("wlookup -a -L -r ProfileManager > data/pr_mgr.lst");
#
# Open file and start to read it line by line
#
open(filehandle, "data/pr_mgr.lst");
$_ = "";
while ($_ == eof(filehandle))
{
    $_ = <filehandle> until (/\n/ || eof(filehandle));
    chop;
    chop;
    print "\ProfileManager: $_ \n";
    #
    # for each Profile manager delete its smartset
    #
    system("smartsetutil D $_ ");
}
close(filehandle);
A.1.3 ITSO.reg - Menu Registration File
The itso.reg file was used to define the commands to NetView and allowed us to run these commands from the NetView menu.
367
Application "Distribution" { Description { "profile distribution" } MenuBar <05> "ITSO" _I { <08> "Example One" _O f.menu "A"; } Menu "A" { <20> "Mirror PM Structure" _M f.action "Mirror"; <10> "Remove PM Structure" _R f.action "Delete"; } Action "Mirror" { Command "/usr/OV/bin/appmon.exe \ -commandTitle \"Mirroring PM Structure\" \ -commandHeading \"MirrorPM\" \ -cmd /usr/OV/itso/mirror.bat"; } Action "Delete" { Command "/usr/OV/bin/appmon.exe \ -commandTitle \"Mirroring PM Structure\" \ -commandHeading \"Delete Mirror\" \ -cmd /usr/OV/itso/umirror.bat"; } }
368
#--------------------------------------------------------------# # # # Modify # # # # This script will prepare the node subscribers list for the # # nvdbimport. # # # #--------------------------------------------------------------# $var = @ARGV[0]; if ($var eq "") { print " NO ProfileManager (Collection) selected \n"; exit(0); }; # # Chop off .CF in case it is used from menu with selection # chop($var); chop($var); chop($var); print "\ ProfileManager: $var \n"; print "\ Subscribers: \n"; # # Get all subscribers (including NON Endpoints # system("wgetsub \"@\"$var \> data/$var"); open(filein, "data/$var"); open(fileout, ">data/endpoint.txt"); $_ = ""; while ($_ == eof(filein)) { $_ = <filein> until (/\n/ || eof(filein)); chop; $var = $_.",TRUE\n"; printf (fileout $var); print "\ $_ \n"; $_ = ""; }
Figure 448. Easy.pl Part 1
369
close(filein); close(fileout); #--------------------------------------------------------------# # # # Copy # # # # This script will prepare the IMPORT file # # Header + data file = Import file # # # #--------------------------------------------------------------# print "\ Preparing IMPORT file \n"; open(file1, "businessset3.txt"); open(file2, "data/endpoint.txt"); open(file3, ">data/businessset3.import"); $_ = ""; while ($_ == eof(file1)) { $_ = <file1> until (/\n/ || eof(file1)); print(file3 $_); $_ = ""; } $_ = ""; while ($_ == eof(file2)) { $_ = <file2> until (/\n/ || eof(file2)); print(file3 $_); $_ = ""; } close(file1); close(file2); close(file3);
370
#--------------------------------------------------------------# # # # system # # # # This script will create the collection and discard nodes # # that are not switched on # # # #--------------------------------------------------------------# print "\ executing NVDBIMPORT..... \n"; system("nvdbimport -f data/businessset3.import \> NUL"); print "\ creating SmartSet..... \n"; system("smartsetutil a \"Distribution\" @ARGV[0] \"(\"isBusinessSet3\" = \"TRUE\") && (\"isTMAEndpoint\" = \"TRUE\")\" "); print "\ collecting Available nodes..... \n"; system("nvdbformat -f endpoint.format \> data/endpoint.lst"); print "\ collecting NOT active nodes..... \n"; system("nvdbformat -f deadnode.format \> data/deadnode.lst"); #--------------------------------------------------------------# # # # Distribution # # # # This script will prepare the parameters for wdistrib command # # # #--------------------------------------------------------------# open(filein, "data/ProfileName"); $_ = <filein> until (/\n/ || eof(filein)); chop; $profile = $_; print "\ Profile selected for distribution: $profile \n"; close(filein); open(filein, "data/endpoint.lst"); $_ = ""; $var = ""; while ($_ == eof(filein)) { $_ = <filein> until (/\n/ || eof(filein)); chop; if ($_ ne "") { $var = $var."@Endpoint:".$_." "; $_ = ""; } } close(filein);
Figure 450. Easy.pl Part 3
371
# # Example: # wdistrib @InventoryProfile:Inventory @Endpoint:wtr05095 # print "\ executing WDISTRIB..... \n"; system("wdistrib @InventoryProfile:$profile $var "); sleep(5); #--------------------------------------------------------------# # # # Kill wtailnotif # # # # The script will stop wtailnotif. # # # #--------------------------------------------------------------# print " \ .....shutting Down wtailnotif..... \n"; system("ntprocinfo | grep wtailnotif \> data/notif"); open(filein, "data/notif"); $var = ""; read(filein, $var, 4); if ($var ne "") { system("ntprocinfo -k $var"); }; close(filein); #--------------------------------------------------------------# # # # Error parsing # # # # The script will parse the error file for failed nodes, # # it will execute nvstatustrap for them to show the # # execution status in the smartsets. # # # #--------------------------------------------------------------# print "\ collecting FAILED nodes..... \n"; $var = 0; $cnt = 5; while($var != 1) { $var = open(filehandle, "data/error"); print "\ .....waiting to open error file..... \n"; sleep(5); $cnt = $cnt - 1; if( $cnt == 0)
Figure 451. Easy.pl Part 4
372
{ $var = 1; print "\ cant open error file. skipping error parsing. \n"; } } $_ = ""; while ($_ == eof(filehandle)) { $_ = <filehandle> until (/\n/ || eof(filehandle)); chop; $var = $_; $var = substr($var, -7); chop($var); # # Does the msg contain any failed messages for us? # if ( $var eq "failed" ) { # # get the start position of last parameter in error msg # $nof = rindex($_, ""); # # get the end position of last parameter in error msg # $nol = rindex($_, ""); # # get the target node name in error msg # $var = substr($_, $nof+1, ($nol-$nof-1)); # # change target to User1 status in collection # print "\ $var \n"; system("nvstatustrap User1 $var wtr05097 \> NUL"); }; $_ = ""; } close(filehandle); print "\ Finished processing.....END \n"; exit(0);
Figure 452. Easy.pl Part 5
373
A.2.2 unselect.pl
The unselect script resets the environment set up by the Easy script.
#--------------------------------------------------------------# # This script will prepare the node subscribers list for the # # nvdbimport. # #--------------------------------------------------------------# # print "\ Deleting SmartSet: Distribution \n"; system("smartsetutil g \"Distribution\" \> \"data/description\""); open(filein1, "data/description"); open(fileout, ">data/endpoint.txt"); $_ = ""; $_ = <filein1> until (/\n/ || eof(filein1)); chop; read(filein1, $var, 14); $_ = <filein1> until (/\n/ || eof(filein1)); chop; chop; chop; chop; $var = $_; print "\ collection belongs to: $var \n"; $_ = ""; close(filein1); open(filein2, "data/$var"); $_ = ""; while ($_ == eof(filein2)) { $_ = <filein2> until (/\n/ || eof(filein2)); chop; $var = $_.",FALSE\n"; print(fileout $var); $_ = ""; } close(filein2); close(fileout); print "\ Preparing IMPORT file..... \n"; # # copy businessset3.txt + data/endpoint.txt data/businessset3.import \> NUL open(file1, "businessset3.txt"); open(file2, "data//endpoint.txt"); open(file3, ">data//businessset3.import"); $_ = ""; while ($_ == eof(file1)) { $_ = <file1> until (/\n/ || eof(file1)); print(file3 $_); $_ = "";
Figure 453. unselect.pl Part 1
374
} $_ = ""; while ($_ == eof(file2)) { $_ = <file2> until (/\n/ || eof(file2)); print(file3 $_); $_ = ""; } close(file1); close(file2); close(file3); print "\ unselecting nodes..... \n"; system("nvdbimport -f data/businessset3.import \> NUL"); print "\ deleting collection..... \n"; system("smartsetutil D \"Distribution\" "); print "\ Processing finished.....END \n"; exit(0);
Figure 454. unselect.pl Part 2
375
376
Table NV_NODES
Column Names: Selection_Name, IP_Hostname, IP_Status, isPrinter, isIPRouter, vendor, isComputer, isConnector, isBridge, isRouter, isPC, isIP, isMLM, isSNMPSupported, SNMP_sysDescr, SNMP_sysLocation, SNMP_sysContact, SNMP_sysObjectID, SNMP_Agent, isTivoliMN, isTivoliPcMn, isTMA, isTME, isTMAGateway, isTMAEndpoint, Tivoli_Interp
377
Table NV_INTERFACES
Column Names: Selection_Name, IP_Address, IP_Subnet_Mask, IP_Status, isCard, isInterface, isIP, SNMP_ifType, SNMP_ifPhysAddr, SNMP_ifDescr, TopM_Network_ID, TopM_Segment_ID, TopM_Node_ID
Table NV_SEGMENTS
Column Names: Selection_Name, IP_Status, isLocation, isNetwork, isInternet, isSegment, isBusSegment, isStarSegment, isTokenRingSegment, isFDDIRingSegment, isSerialSegment, isIP, TopM_Network_ID
Table NV_NETWORKS
Column Names: Selection_Name, IP_Address, IP_Subnet_Mask, IP_Status
378
Column Names (continued): IP_Network_Name, isLocation, isNetwork, isInternet, isSegment, isBusSegment, isStarSegment, isTokenRingSegment, isFDDIRingSegment, isIP
Views
Description:
All columns selected from NV_NODES table.
All columns selected from NV_INTERFACES table.
All columns selected from NV_SEGMENTS table.
All columns selected from NV_NETWORKS table.
All columns selected from tables NV_NODES and NV_INTERFACES and selection names from tables NV_SEGMENTS and NV_NETWORKS. The links between the tables are: NV_INTERFACES.TopM_Node_ID = NV_NODES.Selection_Name, NV_INTERFACES.TopM_Network_ID = NV_NETWORKS.Selection_Name, and NV_INTERFACES.TopM_Segment_ID = NV_SEGMENTS.Selection_Name.
379
View: NV_INVENTORY_VIEW
Description: All columns selected from tables NV_NODES and NV_INTERFACES and the Inventory tables INVENTORYDATA and NETWORK_NODE. The links between the tables are: NV_INTERFACES.IP_Address = NETWORK_NODE.NETWORK_NODE_ADDRESS, NV_INTERFACES.TopM_Node_ID = NV_NODES.Selection_Name, NETWORK_NODE.HARDWARE_SYSTEM_ID = INVENTORYDATA.HARDWARE_SYSTEM_ID, NV_INTERFACES.TopM_Network_ID = NV_NETWORKS.Selection_Name, and NV_INTERFACES.TopM_Segment_ID = NV_SEGMENTS.Selection_Name.
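As an illustration of how this view can be used, the following sketch queries NV_INVENTORY_VIEW with the Sybase isql command. The user ID, password, server name, and choice of columns are our own assumptions; only the view name and column names come from the tables above.

# Query the combined NetView/Inventory view (user, password, and server are placeholders)
isql -Utivoli -Ppassword -SSYBASE <<EOF
select IP_Hostname, IP_Address
from NV_INVENTORY_VIEW
go
EOF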
380
381
TEC_CLASS: INCHARGE_PROBLEM_CLEAR ISA EVENT DEFINES { source: default = "INCHARGE"; severity: default = WARNING; ic_class_name: STRING, default = "", dup_detect=YES; ic_instance_name: STRING, default = "", dup_detect=YES; ic_event_name: STRING, default = "", dup_detect=YES; ic_event_certainty: REAL, default = 1.0; }; END TEC_CLASS: IC_HOST_UNAVAILABLE ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END TEC_CLASS: IC_ROUTER_UNAVAILABLE ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END TEC_CLASS: IC_SEGMENT_UNAVAILABLE ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END TEC_CLASS: IC_INTERFACE_UNAVAILABLE ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END TEC_CLASS: IC_PARTITION_UNAVAILABLE ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END TEC_CLASS: IC_SWITCH_UNAVAILABLE ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END
382
TEC_CLASS: IC_HUB_UNAVAILABLE ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END TEC_CLASS: IC_PHYSICAL_IF_UNAVAILABLE ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END TEC_CLASS: IC_PHYSICAL_IF_FLAPPING ISA INCHARGE_PROBLEM DEFINES { source: default = "INCHARGE"; sub_source: default = "IPFM"; severity: default = CRITICAL; }; END #######################################################
C.1.2 itso.baroc
This file contains the three classes used to:
Add a problem number to an existing TFNC record
Flag a failed attempt to open a problem
Flag a failed attempt to close a problem
383
#######################################################
# itso.baroc - ITSO custom TEC Classes
#
# (c) Copyright 1998 Tivoli Systems, an IBM company
#######################################################
TEC_CLASS:
    ITSO_Add_Prob_Nbr ISA EVENT
    DEFINES {
        source: default = "TEC";
        severity: default = HARMLESS;
        ev_event: STRING;
        ev_hostname: STRING;
        ev_probnbr: STRING;
    };
END

TEC_CLASS:
    ITSO_Prob_Log_Failed ISA EVENT
    DEFINES {
        source: default = "TEC";
        severity: default = CRITICAL;
    };
END

TEC_CLASS:
    ITSO_Prob_Close_Failed ISA EVENT
    DEFINES {
        source: default = "TEC";
        severity: default = MINOR;
    };
END
#######################################################
384
/* Rule: dup_incharge_problem
   This rule suppresses any duplicate IC_ events.
*/
rule: dup_incharge_problem:
(
    event: _event of_class INCHARGE_PROBLEM,
    reception_action:
    (
        first_duplicate(
            _event,
            event: _dup_ic_ev
              where [ status: outside ['CLOSED'] ],
            _event - 86400 - 0
        ),
        commit_rule,
        add_to_repeat_count(_dup_ic_ev, 1),
        drop_received_event
    )
).

/* Rule: incharge_reset
   This rule sets an open IC_ event to closed. If there is a trouble
   ticket open for this event, it will be closed.
*/
rule: incharge_reset:
(
    event: _event of_class INCHARGE_PROBLEM_CLEAR
      where [
        status: equals 'OPEN',
        ic_class_name: _ic_class_name,
        ic_instance_name: _ic_instance_name,
        ic_event_name: _ic_event_name
      ],
    reception_action:
    (
        all_instances(
            event: _ic_event of_class INCHARGE_PROBLEM
              where [
                ic_class_name: equals _ic_class_name,
                ic_instance_name: equals _ic_instance_name,
                ic_event_name: equals _ic_event_name,
                sd_trouble_ticket: _sd_trouble_ticket,
                status: outside ['CLOSED']
              ],
            _event - 86400 - 0
        ),
        change_event_status(_ic_event, 'CLOSED'),
        _sd_trouble_ticket \== '',
        sprintf(_evstr, '%#d', [_ic_event]),
        exec_program(_ic_event,
            '/usr/local/Tivoli/ITSO/scripts/wzclrprob',
            '%s', [_evstr], 'NO')
    ),
    reception_action:
    (
        drop_received_event
    )
).

/* Rule: wait_root_cause
   This rule sets a timer to wait and see if the event is still open
   after a period. If it is, then it must be the root cause.
*/
385
rule: wait_root_cause:
(
    event: _event of_class INCHARGE_PROBLEM
      where [ status: equals 'OPEN' ],
    reception_action:
    (
        set_timer(_event, 600, '')
    )
).

/******************************************************************/
/***********************    TIMER RULES    ***********************/
/******************************************************************/

/* Timer Rule: check_root_cause
   This rule will check for the event still being open. If it is,
   then it must be the root cause, so log a problem with Service Desk.
*/
timer_rule: check_root_cause:
(
    event: _event of_class INCHARGE_PROBLEM
      where [ status: equals 'OPEN' ],
    action:
    (
        sprintf(_evstr, '%#d', [_event]),
        exec_program(_event,
            '/usr/local/Tivoli/ITSO/scripts/wzlogprob',
            '%s', [_evstr], 'NO')
    )
).

/* **** EOF **** */
C.2.2 itso.rls
This file is only concerned with updating the sd_trouble_ticket slot in a TFNC event when an ITSO_Add_Prob_Nbr event is received. There are a number of Prolog code features included in the ruleset, including:
The bo_set_slotval predicate, which changes a slot value. This can be used instead of the place_change_request template so that change rules are not invoked.
The re_mark_as_modified predicate, which tells T/EC that the slot value has changed. This should be used if the bo_set_slotval predicate is used.
These features should be used with care.
386
/***************************************************************
 itso.rls: custom ITSO rules
***************************************************************/

/* set_problem_number: when a request arrives to add a trouble ticket
   number to an event, find the event and set the value.
*/
rule: set_problem_number:
(
    event: _itsoev of_class ITSO_Add_Prob_Nbr
      where [
        status: equals 'OPEN',
        ev_event: _ev_event,
        ev_hostname: _ev_hostname,
        ev_probnbr: _ev_probnbr
      ],
    reception_action:
    (
        all_instances(
            event: _event of_class _ev_event
              where [
                hostname: equals _ev_hostname,
                status: outside ['CLOSED']
              ],
            _itsoev - 600 - 0
        ),
        bo_set_slotval(_event, sd_trouble_ticket, _ev_probnbr),
        re_mark_as_modified(_event, _),
        commit_action
    ),
    reception_action:
    (
        drop_received_event
    )
).

/* **** EOF **** */
387
# Setup the Tivoli environment variables
. /etc/Tivoli/setup_env.sh
# Add Service Desk directories so EVProb can run
export PATH=$PATH:/usr/lpp/sai/evcmds/sysmgr/bin
export LIBPATH=$LIBPATH:/usr/lpp/sai/evcmds/sysmgr/bin:/usr/lpp/sai/aixASE/bin
# Setup variables needed for EVProb call
OUTFILE=/tmp/EVProb.out
GTWY="wtr05368"
MOBJ=`echo $EVENT_CLASS | dd ibs=15 count=1 2>/dev/null`
UNID=$EV_KEY
CCDE="TEC"
SEVR=2
SYST="Network"
COMP="TFNC"
ITEM=$EVENT_CLASS
DESC="$ic_class_name $ic_instance_name is $ic_event_name"
ARGS="CALL_CODE:$CCDE;SEVERITY:$SEVR;"
ARGS=$ARGS"SYSTEM:$SYST;COMPONENT:$COMP;ITEM:$ITEM;"
ARGS=$ARGS"DESCRIPTION:$DESC"
# *********************************************************************
# log a problem using EVProb
# *********************************************************************
EVProb -h $GTWY -n $MOBJ -x $UNID -a "$ARGS" > $OUTFILE 2>&1
RC=$?
# If the EVProb failed (rc!=0) send a fail event and exit
# otherwise, get the problem number
if [ $RC -ne 0 ]; then
   MSG=`cat $OUTFILE`
   wpostemsg msg="$MSG" ITSO_Prob_Log_Failed ITSO
   exit 1
else
   PNBR=`awk '{print $5}' $OUTFILE`
fi
# *********************************************************************
# Tell the T/EC event that a problem was logged
# The problem number is returned from the EVProb call above
# *********************************************************************
wpostemsg ev_event=$EVENT_CLASS \
   ev_hostname=$hostname \
   ev_probnbr=$PNBR \
   ITSO_Add_Prob_Nbr ITSO
exit 0
388
C.3.2 wzclrprob Script
This script is called from the ipfm.rls ruleset to close a logged problem.
#!/bin/sh
######################################################################
# wzclrprob: Add a call to an existing SD problem saying the service
#            has been restored.
######################################################################
# *********************************************************************
# Define Variables
# *********************************************************************
# Get event key (passed as parameter)
EV_KEY=$1
# Add Service Desk directories so EVProb can run
export PATH=$PATH:/usr/lpp/sai/evcmds/sysmgr/bin
export LIBPATH=$LIBPATH:/usr/lpp/sai/evcmds/sysmgr/bin:/usr/lpp/sai/aixASE/bin
# Setup variables needed for EVProb call
OUTFILE=/tmp/EVProb.out
GTWY="wtr05368"
MOBJ=`echo $EVENT_CLASS | dd ibs=15 count=1 2>/dev/null`
UNID=$EV_KEY
DESC="$ic_class_name $ic_instance_name is now ok"
# *********************************************************************
# add a call to a problem using EVProb
# *********************************************************************
EVProb -h $GTWY -n $MOBJ -x $UNID -m CLOSE -a "DESCRIPTION:$DESC" > $OUTFILE 2>&1
RC=$?
# If the EVProb failed (rc!=0) send a fail event and exit
if [ $RC -ne 0 ]; then
   MSG=`cat $OUTFILE`
   wpostemsg msg="$MSG" ITSO_Prob_Close_Failed ITSO
   exit 1
fi
exit 0
389
C.4.1 wzovstatus Script
This is the source for the wzovstatus script:
#!/bin/sh
####################################################################
# wzovstatus: compressed ovstatus listing
####################################################################
# Define fixed length variables for output
typeset -L22 xobj
typeset -L12 xsta
typeset -L6  xpid
# Format output from ovstatus
/usr/OV/bin/ovstatus | while read xparm xval1 xval2 xval3
do
   if [[ $xparm = "object" ]]
   then
      xobj=$xval3
      xsta=""
      xpid=""
      xmsg=""
   fi
   if [[ $xparm = "state:" ]]
   then
      xsta=$xval1
   fi
   if [[ $xparm = "PID:" ]]
   then
      xpid=$xval1
   fi
   if [[ $xparm = "last" ]]
   then
      xmsg=`echo $xval2 $xval3`
   fi
   if [[ $xparm = "exit" ]]
   then
      echo "$xobj $xpid $xsta $xmsg"
   fi
done
exit 0
C.4.2 wzovstatus Output
The following lines are an extract of the wzovstatus output. The columns are:
Process name
Process number
State, RUNNING or NOT_RUNNING
Last message
nvcold           21642   RUNNING
snmpCollect      41128   RUNNING
nvlockd          23944   RUNNING
gtmd             22972   RUNNING
nvot_server      22206   RUNNING
cmld             26054   RUNNING
ahmclp           26898   RUNNING
ahmdbserver      36884   RUNNING
netmon           30694   RUNNING
ems_sieve_agent  23468   RUNNING
ems_log_agent    39360   RUNNING
390
391
Any pointers in this publication to external Web sites are provided for convenience only and do not in any manner serve as an endorsement of these Web sites. The following document contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples contain the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to the names and addresses used by an actual business enterprise is entirely coincidental. Reference to PTF numbers that have not been released through the normal distribution process does not imply general availability. The purpose of including these reference numbers is to alert IBM customers to specific information relative to the implementation of the PTF when it becomes available to each customer according to the normal IBM PTF distribution process. The following terms are trademarks of the International Business Machines Corporation in the United States and/or other countries:
AIX, IBM, NetView, RS/6000
The following terms are trademarks of other companies: C-bus is a trademark of Corollary, Inc. Java and HotJava are trademarks of Sun Microsystems, Incorporated. Microsoft, Windows, Windows NT, and the Windows 95 logo are trademarks or registered trademarks of Microsoft Corporation. PC Direct is a trademark of Ziff Communications Company and is used by IBM Corporation under license. Pentium, MMX, ProShare, LANDesk, and ActionMedia are trademarks or registered trademarks of Intel Corporation in the U.S. and other countries. Tivoli, Tivoli Enterprise Console, Tivoli Management Framework, Tivoli Reporter, Tivoli Plus, Tivoli Manager, Tivoli ADSM, Tivoli OPC, Tivoli Maestro, and Tivoli Ready are trademarks of Tivoli Systems Inc, an IBM Company. UNIX is a registered trademark in the United States and other countries licensed exclusively through X/Open Company Limited. Other company, product, and service names may be trademarks or service marks of others.
392
TME 10 Cookbook for AIX Systems Management and Networking Applications, SG24-4867
TME 10 Deployment Cookbook: Inventory and Company, SG24-2120
TME 10 Inventory 3.2: New Features and Database Support, SG24-2135
An Introduction to Tivoli NetView for OS/390 V1R2, SG24-5224
TEC Implementation Examples, SG24-5216
Problem Management Using Tivoli Service Desk and the TEC, SG24-5301
393
TME 10 NetView for AIX Version 5 Release 1 Diagnosis Guide, LY43-0066 (available to IBM-licensed customers only)
Tivoli Integration Pack for NetView User's Guide, GC32-0286
TME 10 Framework 3.6 Planning and Installation Guide, SC31-8432
TME 10 Framework 3.6 User's Guide, GC31-8433
TME 10 Framework 3.6 Reference Manual, SC31-8434
Maestro UNIX User's Guide V6.0, GC31-5136
Maestro NMPL User's Guide V6.0, SC31-5138
Tivoli Manager for Network Connectivity User's Guide, GC32-0301
TME 10 Inventory 3.6 User's Guide, GC31-8381 (must be ordered with software product)
Tivoli Service Desk: Tivoli Problem Management Network System Management Gateway Network Administrator's Guide, GC31-5178
Tivoli Service Desk Installation Guide, GC31-5167
394
PUBORDER to order hardcopies in the United States Tools Disks To get LIST3820s of redbooks, type one of the following commands:
TOOLCAT REDPRINT TOOLS SENDTO EHONE4 TOOLS2 REDPRINT GET SG24xxxx PACKAGE TOOLS SENDTO CANVM2 TOOLS REDPRINT GET SG24xxxx PACKAGE (Canadian users only)
To register for information on workshops, residencies, and redbooks, type the following command:
TOOLS SENDTO WTSCPOK TOOLS ZDISK GET ITSOREGI 1998
REDBOOKS Category on INEWS Online send orders to: USIB6FPL at IBMMAIL or DKIBMBSH at IBMMAIL Redpieces For information so current it is still in the process of being written, look at "Redpieces" on the Redbooks Web Site (http://www.redbooks.ibm.com/redpieces.html). Redpieces are redbooks in progress; not all redbooks become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the information out much quicker than the formal publishing process allows.
395
1-800-IBM-4FAX (United States) or (+1) 408 256 5422 (Outside USA) ask for: Index # 4421 Abstracts of new redbooks Index # 4422 IBM redbooks Index # 4420 Redbooks for last six months On the World Wide Web Redbooks Web Site IBM Direct Publications Catalog Redpieces For information so current it is still in the process of being written, look at "Redpieces" on the Redbooks Web Site (http://www.redbooks.ibm.com/redpieces.html). Redpieces are redbooks in progress; not all redbooks become redpieces, and sometimes just a few chapters will be published this way. The intent is to get the information out much quicker than the formal publishing process allows. http://www.redbooks.ibm.com http://www.elink.ibmlink.ibm.com/pbl/pbl
396
397
398
List of Abbreviations
APM - Agent Policy Manager
ARP - Address Resolution Protocol
ASP - Active Server Pages
CMIS - Common Management Information Services
CMOT - CMIP Over TCP/IP Transport
DNS - Domain Name Server
IBM - International Business Machines Corporation
ICMP - Internet Control Message Protocol
IIS - Internet Information Server
ITSO - International Technical Support Organization
MLM - Mid-Level Manager
NSM - Network and Systems Management
SNMP - Simple Network Management Protocol
TEC - Tivoli Enterprise Console
TFNC - Tivoli Manager for Network Connectivity
TIPN - Tivoli Integration Pack for NetView
TMR - Tivoli Management Region
TSD - Tivoli Service Desk
399
400
Index A
abended or failed jobs 339 access the Network Nodes dialog from NetView 323 accessing the Maestro Console from NetView 358 actionsvr 119 adding a field to the NetView database 316 addition of an entry to the services file 318 administer MLM 174 agent configuration 183 Agent Policy Manager 181 all problems for node 335 App_Down_By_NetView 271 APPL_Notify Rule 270 application monitoring example 257 application status monitor 259 assign event groups 228 attended MLM 169 authentic SNMP message 180 default values for opening a problem 323 defining Maestro maps in NetView 346 defining the MLMs role 172 delete previous events rule 269 DeletePM.pl 367 demand poll the MLM 172 disable netmon polling 181 discover new nodes 169 discovering Tivoli managed resources from NetView 211 discovery 19 discovery of Maestro application nodes 353 discovery of the Maestro network 351 display NetView submap based on selected TEC message 222 display TEC events from NetView 193, 202 dispsub process 223 dispsub trap 225 Distributed Monitoring 3 Distributed Monitoring pop-up 274 Distributing NetView Inventory Profiles 236 dynamic workspace 129
B
backup of the Tivoli home directory 341 backup of the Tivoli object database 341 broken link between the Maestro CPUs 339 building new event sources based on collections building nodes.conf 318 building the Monitor profiles 259 BusinessSet3.txt 375 226
E
Easy.pl 368 edit arguments to commands in rule 268 edit event group filters 263 edit rules 264 edit ruleset 265 enable Maestro for this map 348 enable polling 27 enabling the NetView Web interface 68 Endpoint.conf 376 Endpoint.format 375 event group management 228 events 44, 114 EVEventd 326 EVEventd process 317 EVGetNodes script 323 EVProb command line interface 313 EVQueryd process 317 EVSetFlag program 318 example of automated problem management 325 execute EVSetFlag 319 execute NetView tasks from TEC 229 ExpertView NSM commands 313 extending the Tivoli Menu 208 extension to the SNMP service 178
C
central NetView server 170 Client_Application_UP rule 268 collToEg 227 compound rule 269 condition in rule 266 configuration of node specific options 323 configuration of the gateway 318 configure NetView to send events to TEC 219 configure trap forwarding 172 configuring event forwarding to TEC 262 configuring SNMP 27 conman shutdown 342 correlate application and network events 256 correlation action 270 create a collection 319 create an event source and a event group 262 creating reports about Tivoli resources 211 creating the NetView server 15 creation of a nodes list 318 cshrc.esm 317 customizing NetView 19
F
failed Maestro processes 339 filter options 190 filtering events 123 filtering traps 189 finding Tivoli resources in the network forward traps 170
D
data types selected for collection 235 Deadnode.format 376 default SNMP configuration 180
211
401
G
Gateway Configuration option generate reports 193, 214 generic trap mask 188 321
I
IBM NetView/AIX integration files 315 import the NetView baroc file 219 install with community file 174, 177 install.esm script 314 install/control MLM from map 176 installation of NetView/Inventory integration adapter 204 Installation of NetView/Inventory integration profile 204 installation on Maestro managed nodes 345 installation prerequisites 13 Installation Process 13 installing Inventory 9 installing Maestro integration on the NetView server 342 installing TEC 8 installing the environment 7 installing TIPN components 196 installing Tivoli NetView/TEC integration for Netview server 201 integration with Distributed Monitoring and the TEC 255 integration with Tivoli Maestro 339 integration with Tivoli Service Desk 313 Inventory 4 Inventory integration 206 inventory_queries.sh 11 ipfm.baroc 381 ipfm.rls 384 isql 11, 204 itso.baroc 383 ITSO.reg - menu registration file 367 itso.rls 386
J
job network 340 job or schedule status changes 339 jobs selection from schedules 362
L
launch the Tivoli desktop 193 list TMA gateways and endpoints 213 list TMR ManagedNodes and PcManagedNodes load the MLM extension 178 load/unload MIBs 350 loading Maestro MIB 349, 350 local network status polling 170 213
Maestro specific sub-maps 339 Maestro to back up the Tivoli software 340 Maestro/NV setup 346 make active solution 330 management solution for a client/server application 255 managing the Maestro Network 351 menu options added to the NetView maps 317 MIB applications 78 MIB browser 42 Mid-Level Managers 169 midmand daemon 175 midmand.config 179 MirrorPM.pl 365 MLM 11 MLM administration table 183 MLM alias table 183 MLM configuration 181 MLM configuration application 182 MLM data collection settings 184 MLM data collections log 184 MLM filter table 184, 189 MLM for AIX local install 173 MLM for AIX remote install from NetView 175 MLM for AIX remote install from Tivoli desktop 173 MLM for AIX status 178 MLM for NT status 179 MLM Managers submap 182 MLM network interface status table 186 MLM node discovery settings 184 MLM node discovery table 185 MLM on AIX 173 MLM on the central server 170 MLM on Windows NT 178 MLM program log settings 185 MLM seed file 181 MLM status monitor table 185 MLM threshold and collection table 187 MLM threshold arm info table 187 MLM trap destination 172 MLM trap destination table 181, 187 MLM trap generation 190 MLM trap log 188 MLM trap log settings 188 MLM trap reception settings 188 mlmsetup.exe 178 monitor the status of Tivoli resources 211 monitoring job schedules from NetView 360 monitoring schedule 261 multiple calls for the same problem 331
N
NetBIOS 52 netmon 117 netmon.trace 192 netstat -a command 201 NetView 5.1 1 NetView account 51 NetView and MLM structure 169 NetView and the Mid-Level-Manager 169 NetView database fields added by TIPN 207
M
Maestro as an application from the NetView console 339 Maestro master node 340 Maestro master/slave topology 339 Maestro process condition notification 355 Maestro schedule 360 Maestro security file 346
402
NetView Framework Patch 14 NetView Inventory profile 234 NetView MLMs 173 NetView query library 205 NetView submap with the highlighted object 224 NetView to Service Desk integration 337 NetView Web Interface 30 NetView Web Interface Online Help 46 NetView/Inventory integration configuration 204 NetViewInventoryProfile as a Managed Resource 232 network and systems management integration scenario 276 network performance 29 new map dialog 347 Node Configuration 335 node information selection for NetView Inventory Profile 235 Node/Problem Query 335 notify all contacts 330 NSM Gateway Log 326 NSM Module managed nodes 320 NV_NODE query 236 nv_syb_schema.sql 204 nvcorrd 118 nvdbformat 247 nvdbimport 111 nvevents 118 nvgenrpts.sh 214 nvinv_create_queries.sh 205 nvinv_syb_schema.log 206 nvmlmsetup.exe 178 nvpagerd 119 nvrsEdit 219 nvserverd 118, 222, 281 nvserverd.baroc 222 nvsniffer 104 nvwakeup 106
R
regional managers 169 register the Maestro daemon 343 registering the NSM Module daemons with NetView Relational Databases 8 remote network diagnostics 193 remote ping between managed nodes 199 response level 260 restart Maestro on all systems 350 restart the Mid-Level Manager 171 restarts Tivoli on all nodes 341 RIM 8 run query 237 316
S
SA-Expert View NSM commands 314 saiapp.ini 315 seed file 22 Sentry engine 255 Service Desks diagnosis facilities 329 SERVICE_DESK Collection 321 setting the NetView connection in TEC 223 Setting the Trap Port 171 setting the Trap Port 171 setting the trap port 171 shutdown Maestro 342 SmartSets 74, 81 smconfig application 181 SMIT option 177 smmlm start 171 smmlm stop 171 smMlmCurrent.config 189 smtrap.log 190 SNMP agents on the Maestro nodes 339 SNMP collection 86 SNMP Community Configuration 179 SNMP configuration 27 SNMP configuration of the NetView server 180 SNMP configuration of the system 179 SNMP Set and Get requests 179 snmpd.conf 179 sql select statement 206 standard events from NetView 258 start Maestro processes 339 starting and stopping MLM for AIX 177 starting and stopping MLM for NT 179 starting the Web client 31 status filtering 240 status of the Maestro processes 339 status polling for specific subnets 169 stop Maestro processes 339 stop midmand 177 stop the daemons 171 stop the Mid-Level manager daemon 171 submap sorting 97 submap with Maestro icons 354 submaps 37, 73 systems management client/server scenario 273
O
odadmin odlist command 208 odadmin stats command 208 open problems for node 335 outgoing SNMP requests 180 ovelmd 117 ovesmd 117 ovstart 171 ovstart Unison_Maestro_Manager 345 ovstop 171
P
perform name registry discovery 211 perform NetView tasks from the TEC console 193 permissions for the public community 179 pmd 117 polling 27 polling parameters 65 preview option of the Diagnosis function 328 profile.esm 317
403
T
TEC 3 TEC Event Server 256 TEC rule builder 264 tecad_ov.baroc 222 testing the Framework Network Diagnostics 198 testing the NetView Diagnostics for NetView Server 200 testing the Network Diagnostics 198 Testing the SNMP Configuration 190 TFNC 281 TIMESYNC_INTERVAL 60 TIMESYNC_VARIANCE 60 TIPN 2 TIPN installation prerequisites 193 TIPN Patch Installation 194 TIPN tables 377 TIPN Tivoli Reports Menu 214 TIPN view 379 Tivoli database 14 Tivoli discovery 193 Tivoli Distributed Monitoring events 258 Tivoli Enterprise Console 3 Tivoli Framework Network Diagnostics 193 Tivoli Framework Network Diagnostics for NetView 193 Tivoli Integration Pack 2 Tivoli Integration Pack for NetView 193 Tivoli Maestro Network Management Platform Integration for Unix 339 Tivoli Manager for Network Connectivity 4 Tivoli menu 200 Tivoli NetView Client 5.1 Enabler for Windows NT 194 Tivoli NetView Inventory Integration 193 Tivoli NetView Server 5.1 Enabler for Windows NT 194 Tivoli NetView TEC Integration Adapter 193 Tivoli NetView/Inventory integration adapter for NetView server 203 Tivoli NetView/Inventory integration for NetView server 203 Tivoli NetView/Inventory Integration Profile 203 Tivoli NetView/TEC Integration 202 tivoli registration file 208 Tivoli Service Desk 4 tivoli_syb_admin.sql 11 tivoli_syb_schema.sql 11 TME 10 Enterprise Console Patch 3.1-TEC-0012 194 TME 10 Enterprise Console Patch 3.1-TEC-0030 194 TME 10 Framework 3.2 Super Patch 194 TME 10 Framework TIPN patch 193 TME 10 NetView MLM submenu 177 TMR 7 traceroute 34 trapd 116 trapd.log file 192 TSD new problem 327
updating cshrc.esm and profile.esm files 316 use Systems Monitor MLM for discovery 173 use Systems Monitor MLM for polling 173 using the NetView/Inventory integration component 231 using the NetView/TEC integration component 218 using the Network Diagnostics for NetView component 211 using the TIPN components 207 using the Tivoli Reports 214
V
verifying the installation 17
W
Wake on LAN 193 wcomprules 222 Web Interface Security 69 wimprbclass 222 wlclrprob Script 389 wloadrb 222 work with nodes 335 work with problems 335 work with the selected problem 327 wrnetstat 193 wrping 193 wrtraceroute 193 wrtraceroute command 199 wzlogprob script 387 wzovstatus script 390
U
unattended MLM 169 Universal Monitors collection 259 unselect.pl 374
404
405