TransNav Management System Documentation. Product Overview Guide
Release TN4.2.x
Publication Date: October 2008
Document Number: 800-0005-TN42 Rev. B
FCC Compliance
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to
Part 15 of the FCC Rules. This equipment generates, uses, and can radiate radio frequency energy and, if not
installed and used in accordance with the installation instructions, may cause harmful interference to radio
communications.
Canadian Compliance
This Class A digital apparatus meets all requirements of the Canadian Interference-Causing Equipment
Regulations. Cet appareil numérique de la classe A respecte toutes les exigences du Règlement sur le
matériel brouilleur du Canada.
Japanese Compliance
This is a Class A product based on the standard of the Voluntary Control Council for Interference by
Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio
disturbance may occur, in which case, the user may be required to take corrective actions.
Contents
About this Document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Index-1
Traverse System Product Documentation
The Traverse® system product documentation set includes the documents described in the table below.

Table 1 Traverse System Product Documentation

Traverse Product Overview
  Description: This document provides a detailed overview of the Traverse system. It also includes engineering and planning information.
  Target audience: Anyone who wants to understand the Traverse system and its applications.

Traverse Installation and Configuration
  Description: This document provides required equipment, tools, and step-by-step procedures for:
  • Hardware installation
  • Power cabling
  • Network cabling
  • Node power up
  • Node start-up
  Target audience: Installers, field, and network engineers.
TraverseEdge System Product Documentation
The TraverseEdge™ 100 User Guide includes the sections described in the table below.

Table 2 TraverseEdge 100 System Product Documentation

Product Overview
  Description: This section provides a detailed overview of the TraverseEdge system.
  Target audience: Anyone who wants to understand the TraverseEdge system and its applications.

Description and Specifications
  Description: This section includes engineering and planning information.
  Target audience: Field and network engineers.

Installation and Configuration
  Description: This section identifies required equipment and tools and provides step-by-step procedures for:
  • Hardware installation
  • Power cabling
  • Network cabling
  • Node power up
  • Node start-up
  Target audience: Installers, field, and network engineers.

Provisioning the Network
  Description: This section provides step-by-step procedures for provisioning a TraverseEdge network using the TransNav management system. Also see the TransNav Management System Product Documentation.
  Target audience: Network engineers, provisioning, and network operations center (NOC) personnel.

Creating TDM Services
  Description: This section provides step-by-step procedures for creating TDM services on a TraverseEdge network using the TransNav management system. Also see the TransNav Management System Product Documentation.
  Target audience: Network engineers, provisioning, and network operations center (NOC) personnel.

Creating Ethernet Services
  Description: This section provides step-by-step procedures for creating Ethernet services on a TraverseEdge network using the TransNav management system. See the TransNav Management System Product Documentation.
  Target audience: Network engineers, provisioning, and network operations center (NOC) personnel.

Appendices
  Description: This section provides installation and provisioning checklists, compliance information, and acronym descriptions.
  Target audience: Installers and anyone who wants reference information.
TransNav Management System Product Documentation
The TransNav™ management system product documentation set includes the documents described in the table below.

Table 3 TransNav Management System Product Documentation

TransNav Management System Product Overview
  Description: This document provides a detailed overview of the TransNav management system. It includes hardware and software requirements for the management system. It also includes network management planning information.
  Target audience: Anyone who wants to understand the TransNav management system.

TransNav Management System Server Guide
  Description: This document describes the management server component of the management system and provides procedures and troubleshooting information for the server.
  Target audience: Field and network engineers, provisioning, and network operations center (NOC) personnel.

TransNav Management System GUI Guide
  Description: This document describes the graphical user interface, including installation instructions and logon procedures.
  Target audience: Field and network engineers, provisioning, and network operations center (NOC) personnel.
Operations Documentation
The document below provides operations and maintenance information for Turin’s TransNav managed products.

Node Operations and Maintenance
  Description: This document identifies required equipment and tools. It also provides step-by-step procedures for:
  • Alarms and recommended actions
  • Performance monitoring
  • Equipment LED and status
  • Diagnostics
  • Test access (SONET network only)
  • Routine maintenance
  • Node software upgrades
  • Node hardware upgrades
  Target audience: Field and network engineers.
Information Mapping
Traverse, TransNav, and TraverseEdge 100 system documentation uses the Information Mapping format which presents information in small units or blocks. The beginning of
an information block is identified by a subject label in the left margin; the end is
identified by a horizontal line. Subject labels allow the reader to scan the document and
find a specific subject. Its objective is to make information easy for the reader to
access, use, and remember.
Each procedure lists the equipment and tools and provides step-by-step instructions
required to perform each task. Graphics are integrated into the procedures whenever
possible.
If You Need Help
If you need assistance while working with Traverse products, contact the Turin Networks Technical Assistance Center (TAC):
• Inside the U.S., toll-free: 1-866-TURINET (1-866-887-4638)
• Outside the U.S.: 916-348-2105
• Online: www.turinnetworks.com/html/support_overview.htm
TAC is available 6:00AM to 6:00PM Pacific Time, Monday through Friday (business
hours). When the TAC is closed, emergency service only is available on a callback
basis. E-mail support (24-hour response) is also available through:
[email protected].
Calling for Repairs
If repair is necessary, call the Turin Repair Facility at 1-866-TURINET (866-887-4638) for a Return Material Authorization (RMA) number before sending the unit. The RMA
number must be prominently displayed on all equipment cartons. The Repair Facility is
open from 6:00AM to 6:00PM Pacific Time, Monday through Friday.
When calling from outside the United States, use the appropriate international access
code, and then call 916-348-2105 to contact the Repair Facility.
Contents
Chapter 1
Overview
What Is the TransNav Management System?. . . . . . . . . . . . . . . . . . . . . . . . . 1-1
TransNav Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Client Workstation Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Management Server Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Node Agent Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
TransNav Management System Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Interoperability with Third-party Management Systems . . . . . . . . . . . . . . . . . 1-4
Autodiscovery and Pre-provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Simultaneous Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Reliability, Availability, and Serviceability (RAS) . . . . . . . . . . . . . . . . . . . . . . . 1-5
Chapter 2
Network Management Features
Fault and Event Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Alarm Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Data Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Flexible Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Flexible Scoping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Clearing Alarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Configuration Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Equipment Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Pre-provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Service Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Secondary Server Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Accounting Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Performance Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Role-based Access Control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Domain Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Node Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Node Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
System Log Collection and Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Report Generation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
General Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Data Set Snapshots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12
Chapter 3
User Interfaces
Access to User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
Graphical User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Map View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Shelf View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15
Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
Domain Level CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
Node Level CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
TL1 Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
List of Figures
Figure 1-1 TransNav Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Figure 1-2 Map View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Figure 1-3 Shelf View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
List of Tables
Table 1-1 Accessing the TransNav Management System. . . . . . . . . . . . . . . 1-13
Chapter 1
Overview
What Is the TransNav Management System?
The TransNav management system is an advanced element and subnetwork management system designed for comprehensive management of the Traverse network consisting of Traverse, TraverseEdge, and TransAccess products. The Java™-based software smoothly integrates into existing automated and manual operations support system (OSS) infrastructure.
The multi-level management architecture applies the latest distributed and evolvable
technologies. These features enable you to create and deploy profitable new services,
as well as transition gracefully to a more dynamic and data-centric, multi-service
optical transport network.
The TransNav management system consists of an integrated set of software
components that reside on the server(s), the client workstations, and individual nodes.
• Client Workstation Application, page 1-2. Provides the user interface for
managing the network. The management system supports a graphical user interface
(GUI), a command line interface (CLI), and a TL1 interface.
• Management Server Application, page 1-2. Communicates with the nodes and
the servers, as well as provides classical element management FCAPS
functionality (fault, configuration, accounting, performance, and security), policy
management, reporting, and system administration.
• Node Agent Application, page 1-3. Resides on the control card and maintains a
persistent database of management information for specific nodes. It also controls
the flow of information between the management server and specific nodes.
TransNav Software Architecture
The TransNav management system is an all Java-based, highly-integrated system that uses the identical architecture on the Traverse network nodes and the management server(s). The architecture leverages the Java Dynamic Management Kit (JDMK).
All communication between nodes and the server, or between the client application and the server, uses the Java Remote Method Invocation (RMI) system over TCP/IP. The server also uses RMI internally between the JDMK servers and JDMK clients.
Information flows southbound – from the user on the client workstation, to the Session
Manager, to the application server, to the Traverse Node Gateway Client inside the
management server, and finally down to the Traverse Node Gateway Agent embedded
in the node – via RMI over TCP/IP.
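Because every hop in this southbound path is an RMI call over TCP/IP, a minimal Java RMI sketch may help make the mechanism concrete. The SessionManager interface, the host name, and the registry entry below are illustrative assumptions only, not TransNav code.

    import java.rmi.Remote;
    import java.rmi.RemoteException;
    import java.rmi.registry.LocateRegistry;
    import java.rmi.registry.Registry;

    // Hypothetical remote interface a management server could expose to clients.
    interface SessionManager extends Remote {
        String openSession(String userName) throws RemoteException;
    }

    public class RmiClientSketch {
        public static void main(String[] args) throws Exception {
            // Locate the RMI registry on the management server host (host name is an assumption).
            Registry registry = LocateRegistry.getRegistry("transnav-server.example.com", 1099);
            // Look up the remote object and invoke it; the call travels as RMI over TCP/IP,
            // the same mechanism as the southbound flow described above.
            SessionManager manager = (SessionManager) registry.lookup("SessionManager");
            System.out.println(manager.openSession("operator1"));
        }
    }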
Client Workstation Application
The client workstation application provides the user interface for managing the network. The TransNav management system supports GUI, CLI, and TL1 interfaces. See Figure 1-1 TransNav Software Architecture for a graphical representation of the client workstation application.
The client workstation application communicates with the session manager on the management server. Download the GUI application from the management server, or simply telnet to the management server, to access the CLI or TL1.
Management Server Application
The management server application communicates with nodes and provides classical element management FCAPS functionality (fault, configuration, accounting, performance, and security), as well as policy management, reporting, and system administration. See Figure 1-1 TransNav Software Architecture for a graphical representation of the management server application.
Security management, logging, and external interfaces to upstream applications are all
implemented in the upper level session management component on the management
server. These functions are implemented as a JDMK server and are responsible for
servicing both the GUI client applet and the northbound interfaces. Enhanced security
is achieved using Functional Groups to provide RBAC (Role-based Access Control)
functionality.
A separate SNMP agent, also implemented as a JDMK server, supports SNMP traps (fault management) for simplified version control. The SNMP agent works with the fault management application card.
The agent on the node passes node-level data to the management server via RMI over
TCP/IP. On the management server, the Node Gateway Controller receives the
information and pre-processes it. The Node Gateway Controller then passes the
pre-processed information to the management functions within the application server.
The application server is responsible for persistence at the server side and, to this end,
manages the entire interface with the underlying SQL database.
Each TransNav management system supports up to eight servers; one server is designated as the Primary server, and the remaining servers are designated as Secondary servers. The Primary server actively manages the network. The Secondary servers passively view the network but cannot perform any management operations that would change the state of the network. Any Secondary server can be promoted to the Primary server role in case of failure or maintenance. The switch in server roles requires user intervention.
Node Agent Application
Each node has a redundant control card with a persistent relational database management system that records provisioning, alarm, maintenance, and diagnostic information for the node. See Figure 1-1 TransNav Software Architecture for a graphical representation of the node agent application.
Each control card uses Java agents (M-Beans [management beans]) to communicate
with Java applications on the management server and synchronize data between the
server and the nodes it manages.
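M-Beans follow the standard Java Management Extensions pattern. The sketch below, with assumed names and example values rather than the actual TransNav agent code, shows how a standard MBean is declared and registered so that a remote manager can read its attributes.

    // NodeStatusMBean.java: standard MBean interface (attributes a manager can read remotely)
    public interface NodeStatusMBean {
        String getNodeId();
        int getActiveAlarmCount();
    }

    // NodeStatus.java: the bean implementation, returning example values only
    public class NodeStatus implements NodeStatusMBean {
        public String getNodeId() { return "Node1"; }       // example value
        public int getActiveAlarmCount() { return 0; }       // example value
    }

    // NodeAgentSketch.java: register the bean so a manager can query it by object name
    import java.lang.management.ManagementFactory;
    import javax.management.MBeanServer;
    import javax.management.ObjectName;

    public class NodeAgentSketch {
        public static void main(String[] args) throws Exception {
            MBeanServer server = ManagementFactory.getPlatformMBeanServer();
            server.registerMBean(new NodeStatus(), new ObjectName("example.node:type=NodeStatus"));
        }
    }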
TransNav Management System Features
The TransNav management system provides comprehensive management for both the nodes and for the connections between nodes through the Intelligent Control Plane. This specifically includes efficient integration of management plane and control plane functions, and policy-based management.
The TransNav management system features include:
• Interoperability with Third-party Management Systems, page 1-4
• Autodiscovery and Pre-provisioning, page 1-4
• Simultaneous Users, page 1-4
• Scalability, page 1-4
• Reliability, Availability, and Serviceability (RAS), page 1-5
Autodiscovery and Pre-provisioning
Each node uses a process called autodiscovery to learn the addresses of all equipment in its control plane domain. Commission the node using the CLI and enter the host name or IP address of the gateway node(s). The management system then discovers and manages all the nodes in the domain without requiring any other preprovisioned information.
The TransNav management system supports preprovisioning which allows
provisioning functions independent of service activation. The effectiveness of
preprovisioning depends upon effective traffic engineering to ensure network capacity
is available upon activation. Upon installation, a node is discovered automatically and
the management server forwards the preprovisioned information to the node.
Simultaneous Users
The number of simultaneous user sessions is configurable on the server (MaxNoOfUserSessions). The default is 20 simultaneous users. The management system does not otherwise restrict the number of simultaneous users by software licensing or system configuration parameters; customer usage patterns may allow more simultaneous users with reasonable response time than specified.
One GUI session, one CLI session, or one TL1 session counts as a simultaneous user. Up to 10 simultaneous users can log into a node-level CLI session.
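Purely as an illustration of the configurable limit described above (the class and method names are assumptions, not TransNav code), a server-side session check against MaxNoOfUserSessions could look like this:

    // Illustrative sketch of a session-limit check; GUI, CLI, and TL1 sessions all count.
    public class SessionLimit {
        private final int maxNoOfUserSessions;   // configured limit, default 20
        private int activeSessions;

        public SessionLimit(int maxNoOfUserSessions) {
            this.maxNoOfUserSessions = maxNoOfUserSessions;
        }

        public synchronized boolean tryOpenSession() {
            if (activeSessions >= maxNoOfUserSessions) {
                return false;                     // reject: configured limit reached
            }
            activeSessions++;
            return true;
        }

        public synchronized void closeSession() {
            if (activeSessions > 0) activeSessions--;
        }
    }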
Scalability
Turin works with customers to specify configurations that support the required scalability. The TransNav management system supports:
• 1 to 8 TransNav servers. One server is designated the Primary server; the remaining servers are Secondary servers.
• Up to 200 Traverse nodes and simultaneous users per server, based on specific user behaviors, by:
  – Selecting a multi-processor server with the potential capacity to support the estimated maximum requirements, and adding CPUs, memory, and disk capacity as needed.
  – Distributing various components of the management system over multiple servers.
Reliability, Availability, and Serviceability (RAS)
Turin works closely with customers to configure hardware and software to achieve desired levels of high availability for their Sun Solaris server-based TransNav system deployments. This includes supporting secondary network operation centers for disaster recovery. Our goal is to achieve exceptional service reliability and availability in a cost-effective manner.
Chapter 2
Network Management Features
Fault and Event Management
The TransNav management system graphical user interface (GUI) enables each technician to open multiple Alarm windows. The number of windows is limited only by effective use of the workstation’s screen area and the client workstation system resources, such as memory and CPU load.
If technicians have their nodes grouped, clicking a node group in the navigation tree or
clicking a node group map displays only the alarms associated with that node group.
This includes nodes and node groups within the parent-level node group.
In the GUI, windows and dialog boxes have the following characteristics:
Alarm Data
The system provides a count of the number of outstanding alarms by severity level.
This information is available at a network level as well as for each individual node.
Data Sequence
Each user can specify the sequence in which data fields will appear for each window.
Flexible Filtering
The user can determine what data appears in the selected fields for each separate Alarm
window.
Flexible Scoping
The user can determine which nodes and equipment appear in the selected fields for
each separate Alarm window.
Sorting
When a column heading (e.g., “severity”) is selected, the Alarm window is sorted by
that category.
Clearing Alarms
Only a node clears alarms. Alarms cleared at the node and received by the management system are automatically marked as cleared and added to the display. The user can also set the retention duration of cleared alarm messages in the server alarm database and the alarm display.
Graphical buttons and a context menu provide the following options:
• Acknowledge the alarm.
• Select a detailed alarm view that allows the user to view alarm details in addition to
adding comments.
• Set filters that allow the user to include or exclude alarms from specific sources
from being displayed in the Alarm window.
• Open a new Alarm window.
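The filtering and sorting behavior described above can be pictured with a small, hedged Java sketch; the Alarm record and its fields are illustrative assumptions and not the TransNav data model.

    import java.util.Comparator;
    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical alarm entry as it might appear in an Alarm window.
    record Alarm(String source, int severity, String description) {}

    public class AlarmViewSketch {
        // Flexible filtering: keep only alarms from sources the user has included.
        static List<Alarm> filterBySource(List<Alarm> alarms, List<String> includedSources) {
            return alarms.stream()
                    .filter(a -> includedSources.contains(a.source()))
                    .collect(Collectors.toList());
        }

        // Sorting: order by the selected column heading, here severity (highest first).
        static List<Alarm> sortBySeverity(List<Alarm> alarms) {
            return alarms.stream()
                    .sorted(Comparator.comparingInt(Alarm::severity).reversed())
                    .collect(Collectors.toList());
        }
    }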
Configuration Management
Use the TransNav management system for all configuration management requirements:
• Equipment Configuration, page 1-8
• Pre-provisioning, page 1-9
• Service Provisioning, page 1-9
• Secondary Server Support, page 1-9
• Report Generation, page 1-11
Equipment Configuration
After a node is installed and activated, it discovers its specific components and forwards that information to the management system. The system, in turn, populates its databases and builds the graphical representation of the equipment. The Intelligent Control Plane automatically discovers the network and forwards that information to the management plane, which creates the network topology map.
Use node-level CLI for initial system commissioning. For detailed information, see the
Traverse Installation and Commissioning Guide, Section 11—Node Start-up and
Commissioning Procedures, Chapter 1—“Node Start-up and Commissioning,”
page 11-1.
The TransNav management system supports Telcordia CLEI™ (Common Language®
Equipment Identifier) codes per GR-485-CORE. These are encoded on individual
cards.
Pre-provisioning
The TransNav management system supports complete preprovisioning of all nodes.
Preprovisioning facilitates rapid turn-up of new nodes and node expansions as well as
support for planning and equipment capital control. Preprovisioning of customer
services enables the service provider to efficiently schedule provisioning work
independent of service activation.
The management system stores the parameters of the service request and sends them to
the Intelligent Control Plane upon activation. If the management system is unable to
complete activation, it provides appropriate alarms including insight into the nature of
the inability to complete provisioning and activation of the service. The effectiveness
of preprovisioning depends upon effective traffic engineering to ensure that network
capacity is available upon activation.
Service Provisioning
The TransNav management system provides end-to-end provisioning of services and requires minimal input from the user. Alternatively, the user can set the constraints (each hop and time slot) of a service. You can provision a service using any of the following methods:
• Graphical user interface
• Script language (typical for batch provisioning)
• Domain-level CLI interface
Secondary Server Support
The TransNav management system supports one Primary server and up to seven Secondary servers in the network. The Primary server actively manages the network, while the Secondary servers passively view the network but do not perform any management operations that would change the network. If the Primary server fails or is scheduled for maintenance, any Secondary server can be manually changed to take the Primary server role.
Critical information on the Secondary servers is synchronized with the network
elements automatically in real time. This includes current provisioning, service state,
alarm and event information from the Traverse nodes. To synchronize PM data,
Domain user login profiles, user references and roles, customer records, alarm
acknowledgement and annotations, reports, report templates and schedules, the
Primary server database must be exported and then imported to the Secondary server
database. Depending on the network size, the import process takes between one and
five minutes.
Manual synchronization should be performed on a Secondary server database before it
is promoted to a Primary server role. For detailed information on promoting a
Secondary server, see the TransNav Management System Server Guide,
Section 2—Management Server Procedures, Chapter 3—“Server Administration
Procedures,” or the TransNav Management System CLI Guide, Chapter 2—“CLI
Quick Reference.”
Accounting Management
Accounting data for all services is based primarily on performance management data and is transmitted from the nodes to the management system.
Using this data, the service provider can track service levels and ensure that traffic
complies with service level agreements (SLAs). SLA monitoring enables the service
provider to create a billing opportunity and to charge a premium for the guaranteed
level of service.
Performance Management
Nodes collect performance management data and forward it to the Primary management server to store in the database. The data is processed in two ways:
• The service provider’s management system administrator can set threshold
crossing alert limits. The threshold crossing alert appears as an event on the GUI
Events tab.
• The TransNav management system on the Primary server provides basic reports.
The data can be exported for analysis and graphical presentation by applications
such as Microsoft® Excel.
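As a rough sketch of the threshold crossing alert mechanism (the method names and the example counter are assumptions, not Turin code), the check amounts to comparing a collected value against the administrator-set limit and emitting an event text when it is exceeded:

    public class ThresholdCheckSketch {
        // Returns an alert message when the collected value exceeds the configured limit,
        // or null when no alert is needed. In the real system the alert would appear as
        // an event on the GUI Events tab.
        static String checkThreshold(String counterName, long value, long thresholdLimit) {
            if (value > thresholdLimit) {
                return "TCA: " + counterName + " value " + value
                        + " exceeded threshold " + thresholdLimit;
            }
            return null;
        }

        public static void main(String[] args) {
            // Example counter name and values only.
            System.out.println(checkThreshold("ES-L", 120, 100));
        }
    }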
Role-based Access Control
Security management enables the network administrator to create and manage user accounts with specific access privileges.
Access control on the management system is through a combination of functional groups and access groups for domain users, and through access groups for node users.
Domain Users
A domain user can only belong to one functional group at a time. With the exception of
administrators, functional groups are user-defined combinations of pre-defined access
groups and specific nodes. Domain users in a functional group who have Administrator
roles can access all of the system resources, including user management. They assign
access privileges of other domain users to a set of system features (access groups) and
resources (nodes) with user-defined functional groups. Security applies to both the GUI
and the CLI. For more information on domain security, see the TransNav Management
System GUI Guide, Section 2—Administrative Tasks, Chapter 1—“Managing Server
Security,” page 2-1.
Node Users
The management system has several pre-defined access groups for node users. Any
node user can be in one or more access groups. Within the access groups, access is
cumulative; a user who is in two access groups has the privileges of both access groups.
See the TransNav Management System GUI Guide, Section 2—Administrative Tasks,
Chapter 2—“Managing Node Security,” page 2-11 for more information on node
security.
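The cumulative rule for node users, where a user in several access groups gets the union of their privileges, can be sketched as follows; the group and privilege names are illustrative only and not the pre-defined TransNav access groups.

    import java.util.EnumSet;
    import java.util.List;
    import java.util.Set;

    // Hypothetical privilege set for illustration.
    enum Privilege { VIEW_ALARMS, PROVISION_SERVICES, ADMINISTER_USERS }

    public class AccessGroupsSketch {
        // Access is cumulative: the effective privileges are the union across all groups.
        static Set<Privilege> effectivePrivileges(List<Set<Privilege>> accessGroups) {
            Set<Privilege> result = EnumSet.noneOf(Privilege.class);
            for (Set<Privilege> group : accessGroups) {
                result.addAll(group);
            }
            return result;
        }

        public static void main(String[] args) {
            Set<Privilege> viewers = EnumSet.of(Privilege.VIEW_ALARMS);
            Set<Privilege> provisioners = EnumSet.of(Privilege.PROVISION_SERVICES);
            // A user in both groups has the privileges of both groups.
            System.out.println(effectivePrivileges(List.of(viewers, provisioners)));
        }
    }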
Node Administration
The TransNav management system provides the following capabilities to support efficient remote administration of nodes:
• Software management and administration
The GUI interface allows users to view an entire network, a group of nodes, or a specific node. Groups of nodes can be set up in a hierarchical fashion and can be associated with specific geographical maps that coincide with each node group.
System Log Collection and Storage
The TransNav management system collects a broad array of information that is stored in the server database for reporting and analysis.
The following list represents data that can be extracted from the server database:
• All user actions from the domain-level GUI or CLI or through the node-level CLI
• Alarm and event history including performance management threshold crossing
alerts:
– Equipment configuration history
– Node equipment alarm log
• Security logs:
– User list denoting each user’s profile
– Sign-on/sign-off log
– Failed log-on attempts
• Performance management data
Report Generation
All reports can be printed or exported as text-formatted, comma-delimited files.
General Reports
The TransNav management system allows a set of pre-defined reports to be either
scheduled or executed on demand. These reports encompass such functions as:
• Equipment inventory
• Historical alarms
• Historical events
• Performance monitoring and management
• Resource availability
• Service availability
• Domain service
Reports can be set to run once, hourly, daily, weekly, or monthly.
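As a hedged sketch of scheduled report execution (the report itself is a placeholder Runnable, and the scheduling service shown is standard Java rather than the TransNav scheduler), an hourly report might be wired up like this:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class ReportSchedulerSketch {
        public static void main(String[] args) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            // Placeholder for one of the pre-defined reports, e.g. equipment inventory.
            Runnable equipmentInventoryReport =
                    () -> System.out.println("Generating equipment inventory report...");
            // Run once immediately, then repeat every hour; daily, weekly, and monthly
            // schedules would simply use a different period.
            scheduler.scheduleAtFixedRate(equipmentInventoryReport, 0, 1, TimeUnit.HOURS);
        }
    }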
Chapter 3
User Interfaces
Introduction
The TransNav management system supports the following user interfaces:
• Access to User Interfaces, page 1-13
• Graphical User Interfaces, page 1-14
• Command Line Interface, page 1-16
• TL1 Interface, page 1-17
Access to User Interfaces
The following table lists the different access methods you can use to connect to a TransNav management server.

Table 1-1 Accessing the TransNav Management System
(Columns: Management System Interface, Access Method)
Graphical User Interfaces
The GUI supports domain-level operators and administrators who are located in a network operations center or in a remote location. There is no GUI at the node level.
The GUI allows domain-level personnel to perform a wide range of provisioning and
monitoring tasks for a single node, groups of nodes, or a network of nodes attached to a
specific server. Users can only see those nodes to which they have security access
rights.
There are two main views in the GUI:
• Map View, page 1-14
• Shelf View, page 1-15
See the TransNav Management System GUI Guide for detailed descriptions of the
GUI. See the TransNav Management System Server Guide for information on saving
background images.
Map View
Map View displays all of the node groups and discovered nodes for a server when you
first start the GUI from that server. From Map View, you can see and manage all the
nodes, node groups, links between the nodes, and network services. The graphic area
displays a background image (usually a map of physical locations of the nodes) and
icons representing the nodes. This initial background image is the Network Map view.
Each node group can have a different background image associated with it; this is the
Group Map.
Each domain user can group the nodes to which they have access in order to more
easily manage their areas of responsibility. They can also add node groups within
existing node groups. The node groups appear in the server network navigation tree.
[Figure 1-2 Map View: callouts identify the menu bar, alarm summary tree, network navigation tree, currently selected object, and context-sensitive tabs.]
The alarm summary tree gives you visibility at a glance to network alarms. If you select a node group, only alarms associated with that node group display.
The network navigation tree shows you the node groups and node networks attached to
the server in an outline format in alphanumeric order. Node groups display first, then
nodes. In Map View, clicking a node group or a node displays the node group or node
name on the top and bottom bars of the window. To view the nodes in a node group,
double-click the Group icon in Map View or expand the node group in the navigation
tree. In Shelf View, right-clicking a node in the navigation tree or double-clicking the node in Map View displays a graphical representation of the node and related information; you can see which object (card or port) you have selected by the white rectangle around the object and the name that displays on the top and bottom bars of the window.
The context-sensitive tabs provide server, node group, or node information on alarms,
events, configuration information, protection, services, and service groups.
Double-click a node group to display the node groups and nodes associated with it.
Click a node to display node-specific information. Click anywhere on the map to
display network information specific to the server.
Shelf View
Shelf View displays all of the cards in a node and their associated ports. You can
navigate to Shelf View in the following ways:
• Click the node in Map View, then select Show Shelf View from the View menu.
• Double-click the node in Map View.
• Right-click a node in Map View and select Show Shelf View.
• Right-click a node name in the Navigation Tree and select Show Shelf View.
[Figure 1-3 Shelf View: callouts identify the menu bar, BITS clock, port LED status or alarm indicators, and the context-sensitive tab screen.]
Command Line Interface
You can also access the TransNav management system using a command line interface (CLI). The CLI has these features:
• Command line editing: Use backspace and cursor keys to edit the current line and
to call up previous lines for re-editing and re-submission.
• Hierarchical command modes: Organization of commands into modes with
increasingly narrow problem domain scope.
• Context-sensitive help: Request a list of commands for the current context and
arguments for the current command, with brief explanations of each command.
Domain Level CLI
Use domain-level commands from the TransNav management server to perform network commissioning, provisioning, synchronizing, and monitoring tasks.
Domain-level commands affect multiple nodes in a network and include:
• Setting the gateway node
• Configuring network links
• Creating performance monitoring templates and alarm profiles
• Creating protection rings and services
• Generating reports
Accessing the domain-level CLI also gives you access to the node-level CLI through
the node command.
Node Level CLI
Use node-level CLI commands to perform commissioning, provisioning, or monitoring
tasks on any node on the network. Node-level commands affect only one node in the
network.
TL1 Interface
The TransNav management system supports a TL1 interface to the management servers
and to individual nodes. Currently, the TransNav management system supports a subset
of TL1 commands.
Turin supports these node and network management tasks through the TL1 interface:
• Fault and performance management (including test access and report generation)
• Equipment configuration and management
• Protection group configuration and management
• Security management
For information on TL1 and how to use the TL1 interface, see the TransNav
Management System TL1 Guide.
Contents
Chapter 1
TransNav Management System Requirements
Management System Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
TransNav Network Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Intelligent Control Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Control Plane Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Management Gateway Nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Sun Solaris Platform for TransNav Management Server . . . . . . . . . . . . . . . . 2-3
Windows Platform for TransNav Management Server . . . . . . . . . . . . . . . . . . 2-5
TransNav GUI Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Chapter 2
TransNav Management System Planning
Recommended Procedure to Create a Network . . . . . . . . . . . . . . . . . . . . . . . 2-7
Chapter 3
IP Address Planning
IP Addresses in a TransNav Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
IP Addressing Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
IP Networks and Proxy ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
In-Band Management with Static Routes . . . . . . . . . . . . . . . . . . . . . . . . 2-11
Out-of-Band Management with Static Routes . . . . . . . . . . . . . . . . . . . . . 2-12
Out-of-Band Management with no DCC Connectivity . . . . . . . . . . . . . . . 2-12
TraverseEdge 50 and TransAccess Mux . . . . . . . . . . . . . . . . . . . . . . . . 2-12
Quality of Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
Proxy ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-14
In-Band Management with Static Routes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-15
In-Band Management with Router and Static Routes . . . . . . . . . . . . . . . . . . . 2-16
In-Band Management of CPEs Over EOP Links . . . . . . . . . . . . . . . . . . . . . . 2-17
Out-of-Band Management with Static Routes. . . . . . . . . . . . . . . . . . . . . . . . . 2-19
Chapter 4
Network Time Protocol (NTP) Sources
NTP Sources in a Traverse Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-21
Daylight Saving Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-21
NTP Sources on a Ring Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
NTP Sources on a Linear Chain Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
List of Figures
Figure 2-1 Management System Deployment . . . . . . . . . . . . . . . . . . . . . . . . 2-2
List of Tables
Table 2-1 Sun Solaris Requirements, TransNav Management Server . . . . . 2-3
Table 2-2 Windows Requirements, TransNav Management Server . . . . . . . 2-5
Table 2-3 TransNav GUI Application Requirements . . . . . . . . . . . . . . . . . . . 2-6
Table 2-4 Network Configuration Procedure and References . . . . . . . . . . . . 2-7
Table 2-5 IP Address Node Connectivity Parameters . . . . . . . . . . . . . . . . . . 2-10
Chapter 1
TransNav Management System Requirements
Introduction
The TransNav management system software package contains both server and client
workstation applications. The server functions communicate with the nodes and
maintain a database of topology, configuration, fault, and performance data for all
nodes in the network. The client workstation application provides the user interface for
managing the network.
Use the requirements listed in the following sections to help you determine the
management system requirements for your network.
• Management System Deployment, page 2-2
• TransNav Network Management, page 2-2
• Sun Solaris Platform for TransNav Management Server, page 2-3
• Windows Platform for TransNav Management Server, page 2-5
• TransNav GUI Application, page 2-6
Management System Deployment
The TransNav management system software package contains server applications, client workstation applications, and agent applications that reside on the node.
[Figure 2-1 Management System Deployment: client requests flow from the client workstation applications to the management server, and server responses flow back to the clients.]
TransNav Network Management
In addition to the management system applications, the TransNav management system uses the following Traverse software components:
Intelligent Control Plane
An Intelligent Control Plane is a logical set of connections between TransNav-managed
network elements through which those network elements exchange control and
management information. This control and management information can be carried
either in-band or out-of-band.
• See Chapter 3—“IP Address Planning,” Quality of Service, page 2-13 for an
example and description of IP quality of service routing protocol.
• See Chapter 3—“IP Address Planning,” Proxy ARP, page 2-14 for information on using the proxy address resolution protocol.
• See Chapter 3—“IP Address Planning,” In-Band Management with Static
Routes, page 2-15 for an example and a detailed description.
• See Chapter 3—“IP Address Planning,” Out-of-Band Management with Static
Routes, page 2-19 for an example and a detailed description.
Sun Solaris Platform for TransNav Management Server
This table lists the minimum requirements for a Sun Solaris system TransNav management server.

Table 2-1 Sun Solaris Requirements, TransNav Management Server

Hardware
  Hard Drives: Up to 100 nodes: 73 GB of hard disk space (RAID controller optional; more disk space if a hot-spare is desired or if more storage is desired for log files). Up to 200 nodes: 146 GB of hard disk space (RAID controller optional; more disk space if a hot-spare is desired or if more storage is desired for log files).
  Network: Two 10/100Base-T Ethernet cards. One card connects to the Data Communications Network (DCN), and the other card connects to the Local Area Network (LAN) connecting the client workstations.

Software
  Management System Software: Obtain the latest version of the TransNav management system software in the Software Downloads section on the Turin Infocenter. Access the Infocenter at www.turinnetworks.com. User registration is required. Contact your Turin Sales Support group.
Windows Platform for TransNav Management Server
This table lists the minimum requirements for a Windows platform TransNav management server.

Table 2-2 Windows Requirements, TransNav Management Server

Hardware
  Disk Backup System: Required if unable to back up the TransNav database to a server on the network.
  Network: One or two 10/100BaseT Ethernet cards. One Ethernet Network Interface Card (NIC) connects to the Data Communications Network (DCN). The second optional Ethernet NIC connects to the Local Area Network (LAN) connecting the client workstations.

Software
  Management System Software: Latest version of the TransNav management system software provided by the Turin Networks, Inc., Technical Assistance Center. Obtain the latest version of the TransNav management system software in the Software Downloads section on the Turin Infocenter. Access the Infocenter at www.turinnetworks.com. User registration is required.
TransNav GUI Application
A client workstation is required to access the TransNav management server from the graphical user interface (GUI). Turin recommends installing the application directly on the client workstation for faster initialization, operation, and response time.

Table 2-3 TransNav GUI Application Requirements
1 The GUI application has not been tested on the Sun i386 or Intel-based LINUX configurations.
Chapter 2
TransNav Management System Planning
Introduction
This chapter includes the following information on creating and managing a network
using the TransNav management system:
• Recommended Procedure to Create a Network, page 2-7
Table 2-4 Network Configuration Procedure and References

Step 4: Add routes for the node-ips to the management server.
  Reference: This step depends on the server platform (Solaris or Windows) and local site practices. Contact your local site administrator.

Step 6: Initialize, then start, the server. Start the Primary server first, then initialize and start the Secondary servers.
  Reference: TransNav Management System Server Guide, Section 2—Management Server Procedures, Chapter 3—“Server Administration Procedures,” page 2-23

Step 8: Start the user interface and discover the nodes in the network.
  References: TransNav Management System GUI Guide, Section 1—Installation and Overview, Chapter 3—“Starting the Graphical User Interface,” page 1-17; Traverse Provisioning Guide, Section 1—Configuring the Network, Chapter 2—“Discover the Network,” page 1-3; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide, Section 4—Configuring the Network, Chapter 1—“Configuring the Network,” page 4-1; TransAccess 200 Mux User Guide

Step 9: Configure timing options for the network.
  References: Traverse Provisioning Guide, Section 1—Configuring the Network, Chapter 4—“Configuring Network Timing,” page 1-13; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide, Section 4—Configuring the Network, Chapter 2—“Configuring Network Timing,” page 4-9; TransAccess 200 Mux User Guide
Chapter 3
IP Address Planning
Introduction
This chapter includes the following information on creating and managing a network
using the TransNav management system:
• IP Addresses in a TransNav Network, page 2-9
• IP Addressing Guidelines, page 2-11
• Quality of Service, page 2-13
• Proxy ARP, page 2-14
• In-Band Management with Static Routes, page 2-15
• In-Band Management with Router and Static Routes, page 2-16
• In-Band Management of CPEs Over EOP Links, page 2-17
• Out-of-Band Management with Static Routes, page 2-19
IP Addresses in a TransNav Network
The network management model (in-band or out-of-band) determines the IP address requirements of the network. A TransNav-managed network requires a minimum of two separate IP network addresses:
• The IP address assigned to the Ethernet interface on the back of the shelf
(bp-dcn-ip) determines the physical network.
• The IP address assigned to the node (node-ip) is used by the management server
to manage the network.
Assign the relevant IP addresses through the CLI during node commissioning.
Table 2-5 IP Address Node Connectivity Parameters

node-id
  Required: On every node.
  Description: A user-defined name of the node. Enter alphanumeric characters only. Do not use punctuation, spaces, or special characters.
  Turin recommendation: Use the site name or location.

node-ip
  Required: On every node.
  Description: This parameter specifies the IP address of the node. This address is also known as the Router ID in a data network environment.
  In a non-proxy network, Turin recommends that this address be the same as the bp-dcn-ip. If it is not equal to the bp-dcn-ip, it must be on a different IP network. Turin recommends that the node-ips for all nodes in one network be on the same IP network.
  In a proxy network, the node-ips for all nodes in one network must be on the same IP network. This IP address has the following characteristics:
  • For the proxy node, proxy-arp is enabled; the bp-dcn-ip and the node-ip must be the same IP address.
  • For the other nodes in the proxy network, the node-ip must be in the same subnetwork as the bp-dcn-ip address of the proxy node.
  Turin recommendation: 10.100.100.x, where x is between 1 and 254. Use a unique number for each network node. In a proxy network, depends on network plan and site practices.

bp-dcn-ip
  Required: On each node that is connected or routed to the management server or on any node with a subtended device.
  Description: This parameter specifies the IP address assigned to the Ethernet interface on the back of the node.
  In a non-proxy network, Turin recommends that this address be the same as the node-ip. If it is not equal to the node-ip, it must be on a different IP network.
  Enter an IP address if this node is connected to the management server (either directly or through a router) or to a TransAccess product.
  In a proxy network on a proxy node, the bp-dcn-ip and the node-ip must be the same IP address.
  Turin recommendation: Use a different subnet for each site. In a proxy network, depends on network plan and site practices.

bp-dcn-mask
  Required: For each bp-dcn-ip.
  Description: Enter the appropriate address mask of the bp-dcn-ip address.
  Turin recommendation: Depends on site practices.

bp-dcn-gw-ip
  Required: For each bp-dcn-ip.
  Description: If the node is connected directly to the management server, this address is the IP gateway of the management server. If there is a router between the management server and this node, this address is the IP address of the port on the router connected to the Ethernet interface on the back of the Traverse node.
  Turin recommendation: Depends on site practices.

ems-ip
  Required: If there is a router between this node and the management server.
  Description: This address is the IP address of the TransNav management server. This IP address must be on a separate network from any node-ip and gcm-{a | b}-ip.
  For in-band management, this address must be on or routed to the same network as the bp-dcn-ip of the management gateway node (the node with the physical connection to the management server).
  For out-of-band management, this address must be connected or routed to all bp-dcn-ip addresses.
  Turin recommendation: Depends on site practices.

ems-gw-ip
  Required: For each ems-ip.
  Description: This address is the IP address of the port on the router connected to the Ethernet interface on the back of the Traverse shelf. This address is the same address as bp-dcn-gw-ip.
  Turin recommendation: Depends on site practices.

ems-mask
  Required: For each ems-ip.
  Description: Required if there is a router between the node and the management server. This address is the address mask of the IP address on the management server (ems-ip).
  Turin recommendation: Depends on site practices.

proxy-arp
  Required: On the node acting as proxy server for the IP subnet.
  Description: Enable this parameter if this node is to be used as the proxy server for the IP subnet. The bp-dcn-ip and the node-ip of the proxy node must be the same IP address. Once you plan the network with one node as the proxy, you cannot arbitrarily re-assign another node to be the proxy ARP server.
  Turin recommendation: Depends on network plan and site practices.
• For all other nodes in the network, the node-id and the node-ip are the only
required commissioning parameters.
• The management server must be able to communicate with all node-ip addresses.
– Add routes to the management server using the node-ip, the address mask of
the bp-dcn-ip, and bp-dcn-ip of the node that is connected to the management
server.
– The IP address of the management server must be on or routed to the same
network as the bp-dcn-ip of the management gateway node.
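The non-proxy addressing rule above, that a node-ip either equals the bp-dcn-ip or sits on a different IP network, can be checked with a small sketch. The values below come from the in-band example later in this chapter; the class itself is illustrative, not Turin code.

    public class AddressPlanSketch {
        // Convert a dotted-quad IPv4 address to an int so the mask can be applied.
        static int toInt(String ip) {
            String[] p = ip.split("\\.");
            return (Integer.parseInt(p[0]) << 24) | (Integer.parseInt(p[1]) << 16)
                    | (Integer.parseInt(p[2]) << 8) | Integer.parseInt(p[3]);
        }

        static boolean sameNetwork(String a, String b, String mask) {
            int m = toInt(mask);
            return (toInt(a) & m) == (toInt(b) & m);
        }

        public static void main(String[] args) {
            // Node 1 from the in-band example: node-ip and bp-dcn-ip are on separate networks.
            String nodeIp = "10.100.100.1", bpDcnIp = "172.168.0.2", mask = "255.255.255.0";
            boolean valid = nodeIp.equals(bpDcnIp) || !sameNetwork(nodeIp, bpDcnIp, mask);
            System.out.println("node-ip/bp-dcn-ip plan valid: " + valid);  // true for this example
        }
    }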
Quality of Service
The IP QoS (IP Quality of Service) routing protocol enables a Traverse node to broadcast its forwarding table over the backplane for the data control network (bp-dcn-ip), thus improving the quality of service over the backplane DCN Ethernet interface. Setting up static routes on intermediate routers between the Traverse management gateway element and the TransNav management server is no longer necessary. Existing traffic engineering and security capabilities are not changed.
When IP QoS is enabled on the management gateway node during commissioning, the user configures an access control list (ACL) to block or allow traffic originated by certain IP hosts or networks, based on the source IP address of received packets. Received packets are filtered, classified, metered, and put in queue for forwarding.
The ACL searches received IP address packets for the longest prefix match of the
source IP address. When the address is found, it is dropped or forwarded according to
the ACL settings (permit or deny). If no instruction is present in the ACL, the packet is
forwarded.
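The lookup just described, longest prefix match on the source address followed by permit, deny, or forward by default, is sketched below. This is an illustration of the behavior only, not the Traverse implementation, and the entries in main are example addresses.

    import java.util.ArrayList;
    import java.util.List;

    public class AclSketch {
        record AclEntry(int network, int prefixLen, boolean permit) {}

        private final List<AclEntry> entries = new ArrayList<>();

        void addEntry(int network, int prefixLen, boolean permit) {
            entries.add(new AclEntry(network, prefixLen, permit));
        }

        // Returns true if the packet should be forwarded.
        boolean forward(int sourceIp) {
            AclEntry best = null;
            for (AclEntry e : entries) {
                int mask = e.prefixLen() == 0 ? 0 : (-1 << (32 - e.prefixLen()));
                boolean matches = (sourceIp & mask) == (e.network() & mask);
                if (matches && (best == null || e.prefixLen() > best.prefixLen())) {
                    best = e;                           // keep the longest (most specific) match
                }
            }
            return best == null || best.permit();       // no matching entry: forward by default
        }

        public static void main(String[] args) {
            AclSketch acl = new AclSketch();
            // Example entries: deny 192.168.30.0/24 but permit the more specific host /32.
            acl.addEntry(0xC0A81E00, 24, false);
            acl.addEntry(0xC0A81E06, 32, true);
            System.out.println(acl.forward(0xC0A81E06));  // true: longest match permits
            System.out.println(acl.forward(0xC0A81E01));  // false: /24 deny applies
            System.out.println(acl.forward(0x0A646401));  // true: no match, forwarded by default
        }
    }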
Outgoing IP address packets are prioritized as either High Priority or Best Effort and
put in queues for forwarding. The queue size for outgoing address packets is set by the
percent of available bandwidth.
[Figure TN 00155: IP QoS is enabled on the management gateway node between the EMS server on the IP network and the Traverse network.]
See the TransNav Management System GUI Guide, Chapter 1—“Creating and
Deleting Equipment Using Preprovisioning,” Node Parameters, page 3-3 for detailed
information about setting up IP Quality of Service in a TransNav-managed network.
Proxy ARP
Proxy address resolution protocol (ARP) is the technique in which one host, usually a
router, answers ARP requests intended for another machine. By faking its identity, the
router accepts responsibility for routing packets to the real destination. Using proxy
ARP in a network helps machines on one subnet reach remote subnets without
configuring routing or a default gateway. Proxy ARP is defined in RFC 1027.
[Figure TN 00156 Proxy ARP example: the EMS server (IP 172.168.0.2, gateway 172.168.0.1, mask 255.255.255.0) reaches the Traverse network through Node1, the proxy ARP node, whose node-ip and bp-dcn-ip are both 172.140.0.2. The other nodes (Node2, Node3, and the TE-100 nodes NodeA, NodeB, and NodeC) have node-ips in the same 172.140.0.x subnetwork as the proxy node, and Node2 connects an optional subtending TransAccess Mux.]
In-Band Management with Static Routes
In-band management with static routes means the management server is directly connected by static route to one node (called the management gateway node), and the data communications channel (DCC) carries the control and management data.
In this simple example, the TransNav management server (EMS server) is connected to a management gateway node (Node 1) using the Ethernet interface on the back of the shelf. The server communicates to the other nodes in-band using the DCC.
[Figure TN 00157 In-band management with static routes: the EMS server connects to the management gateway node Node1 (node-ip 10.100.100.1, bp-dcn-ip 172.168.0.2, bp-dcn-gw-ip 172.168.0.1) and reaches the other nodes in-band over the DCC. Nodes 2 through 6 have node-ips 10.100.100.2 through 10.100.100.6; Node2 (bp-dcn-ip 172.168.1.2, bp-dcn-gw-ip 172.168.1.1) connects an optional subtending TransAccess Mux (IP 172.168.1.3, gateway 172.168.1.2), and subtending TE-100 nodes are managed in-band through the same gateway.]
In this example, to get the management server to communicate to all nodes, add routes
on the server to the node-ip of each node. The server communicates with the nodes
using the bp-dcn-ip of the management gateway node (Node 1). Note that all IP
addresses on Node 1 (node-ip and bp-dcn-ip) are in separate networks.
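As a platform-neutral sketch (the actual route-add command syntax depends on whether the server runs Solaris or Windows, and this is not Turin tooling), the routes implied by this example can be listed as node-ip, mask, gateway triples:

    import java.util.List;

    public class RouteListSketch {
        public static void main(String[] args) {
            // node-ips from the in-band example above; the gateway is the bp-dcn-ip of Node 1.
            List<String> nodeIps = List.of("10.100.100.1", "10.100.100.2", "10.100.100.3",
                    "10.100.100.4", "10.100.100.5", "10.100.100.6");
            String mask = "255.255.255.0";
            String gateway = "172.168.0.2";   // bp-dcn-ip of the management gateway node
            // Print one <node-ip> <mask> <gateway> triple per route to add on the server.
            for (String nodeIp : nodeIps) {
                System.out.println(nodeIp + " " + mask + " " + gateway);
            }
        }
    }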
Node 2 has a subtending TransAccess Mux (either a TA155 or a TA200) connected by
Ethernet. The bp-dcn-ip address is necessary to connect the TransAccess system. The
bp-dcn-ip of this node must be in a separate network from the bp-dcn-ip on Node 1.
At Node 3, the node-id and the node-ip are the only required commissioning
parameters. However, Node 3 also has subtending TraverseEdge 100 network managed
in-band through the management gateway node. The IP address requirements are the
same as for the Traverse platform.
See the topic IP Addresses in a TransNav Network, page 2-9 for detailed information
about assigning IP addresses in a TransNav-managed network.
In-Band Management with Router and Static Routes
In this example, the management server is connected by static route to a router that, in turn, is connected to the management gateway node (Node 1). The server communicates to the other nodes in-band using the DCC.
[Figure TN 00158 In-band management with router and static routes: the EMS server (IP 172.169.0.10, gateway 172.169.0.1, mask 255.255.255.0) holds a route for each node-ip (10.100.100.1 through 10.100.100.6, mask 255.255.255.0) via the router port 172.169.0.1, and the router holds a route for each node-ip via the bp-dcn-ip of Node1 (172.168.0.2). Node1 is commissioned with node-ip 10.100.100.1, bp-dcn-ip 172.168.0.2, bp-dcn-gw-ip 172.168.0.1, ems-ip 172.169.0.10, and ems-gw-ip 172.168.0.1. Node2 (bp-dcn-ip 172.168.1.2) connects an optional subtending TransAccess Mux, and subtending TE-100 nodes are also shown.]
In this example, to enable the management server to communicate with each node, add routes on the server to the node-ip of each node. The gateway through which the management server communicates with the nodes is the IP address of the port on the router connected to the server (Port IP A).
At the router, add a route for each node-ip using the bp-dcn-ip of the management gateway node (Node 1) as the gateway.
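For illustration only, the additions described above might look like the following, assuming a Solaris-based management server and a Cisco IOS-style router; the figure expresses the same routes with a 255.255.255.0 mask, while host routes are shown here:

On the EMS server (gateway is Router Port IP A):
   route add -host 10.100.100.1 172.169.0.1
   route add -host 10.100.100.2 172.169.0.1

On the router (next hop is the bp-dcn-ip of Node 1):
   ip route 10.100.100.1 255.255.255.255 172.168.0.2
   ip route 10.100.100.2 255.255.255.255 172.168.0.2

Repeat for the remaining node-ip addresses (10.100.100.3 through 10.100.100.6).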
See the topic IP Addresses in a TransNav Network, page 2-9 for detailed information
about assigning IP addresses in a TransNav-managed network.
In-Band Management of CPEs Over EOP Links
In this example, the management server is connected by static route to a router that, in turn, is connected to the management gateway node (Node 1). The server communicates with the other nodes in-band using the DCC, including the node that has CPE devices attached (Node 3). IP packets from the CPE devices are forwarded through the node over electrical cards to EOP links on the EoPDH cards, and then through the Ethernet Control Channel interface (ECCI) for forwarding over the system by Traverse Ethernet services.
[Figure TN 00160: In-band management of CPEs over EOP links. The EMS server (IP 172.169.1.10) reaches the Traverse network through a router; the route 10.100.100.0 255.255.255.0 172.169.0.1 (the Traverse network via the router port) is added to the EMS server. Key values shown in the figure:
• Node 1 (management gateway node) with its bp-dcn and ems route parameters; Node 2: node-ip 10.100.100.2; Node 3: node-ip 10.100.100.3, with an EoPDH card in Slot 8 and CPEs attached through electrical cards
• ECC interface gateways entered on the GCM: ecci-gw-ip 192.168.20.1 / ecci-gw-mask 255.255.255.0 (routes packets to Slot 5) and ecci-gw-ip 192.168.30.1 / ecci-gw-mask 255.255.255.0 (routes packets to Slot 8)
• CPE-ip addresses in the 192.168.20.x and 192.168.30.x ranges (for example, 192.168.20.2 and 192.168.30.2)]
In the above example, add routes on the management server to the node-ip of each node that has CPEs attached. This allows IP packets from the CPEs to be transmitted over the Traverse system. The server communicates with all of the nodes over a static route using the bp-dcn-ip of the management gateway node (Node 1) as the gateway address.
At Node 3, the node-id and node-ip are required commissioning parameters, as are the CPE-ip addresses of each CPE device. A default ECC interface gateway IP address (ecci-gw-ip) must also be configured on each CPE device to allow all IP packets to be sent through the electrical card to the ECC interface on the node. Node 3 must have an EoPDH card with an EOP port set up. Each EOP port is a member port on the ECC interface. The VLAN tag of each ECCI member port corresponds to the management VLAN of the attached CPE device, thus providing the interface between the CPEs and the management system through an ECC interface.
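For illustration only, on a CPE device that supports standard IP routing commands (a Linux-based CPE is assumed here; the actual procedure depends on the CPE vendor), the default gateway pointing at the ecci-gw-ip from the figure could be set as follows:

   ip route add default via 192.168.20.1

CPEs in the 192.168.30.x range (reached through Slot 8 in the figure) would use 192.168.30.1 instead.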
The EoPDH cards are connected by EOP links through the electrical cards to the CPEs
as shown below.
See the topic IP Addresses in a TransNav Network, page 2-9 for detailed information
about assigning IP addresses in a TransNav-managed network.
Out-of-Band Management with Static Routes
Out-of-band management with static routes means that the management server is directly connected by static route to each node through the Ethernet interface on the back of each shelf. In this example, the management server communicates with each node directly or through a router.
[Figure TN 00159: Out-of-band management with static routes. The EMS server and Node 1 connect to a router (Port IP A 172.168.0.1); Node 2 and Node 3 are reached through separate IP networks and routers (Port IP D 172.170.0.2, Port IP E 172.182.0.1, Port IP F 172.169.0.2, Port IP G 172.171.0.1). Key values shown in the figure:
• Routes added to the router connected to the server, in the form <node-ip> <mask> <Router Port IPs F & D>: 10.100.100.2 255.255.255.0 172.169.0.2 and 10.100.100.3 255.255.255.0 172.170.0.2
• Routes added to the remote routers, in the form <node-ip> <mask> <Node2/Node3 bp-dcn-ip>: 10.100.100.2 255.255.255.0 172.171.0.2 and 10.100.100.3 255.255.255.0 172.182.0.2
• Node 1: node-ip 10.100.100.1, bp-dcn-ip 172.168.0.3, bp-dcn-gw-ip 172.168.0.1, bp-dcn-mask 255.255.255.0, ems-ip 172.168.0.2, ems-gw-ip 172.168.0.1, ems-mask 255.255.255.0
• Node 2: node-ip 10.100.100.2, bp-dcn-ip 172.171.0.2, bp-dcn-gw-ip 172.171.0.1, bp-dcn-mask 255.255.255.0, ems-ip 172.168.0.2, ems-gw-ip 172.171.0.1, ems-mask 255.255.255.0
• Node 3: node-ip 10.100.100.3, bp-dcn-ip 172.182.0.2, bp-dcn-gw-ip 172.182.0.1, bp-dcn-mask 255.255.255.0, ems-ip 172.168.0.2, ems-gw-ip 172.182.0.1, ems-mask 255.255.255.0
• TransAccess Mux (subtending Node 2): Name TransAccess, IP 172.171.0.3, Gateway 172.171.0.2, Mask 255.255.255.0, Trap-1 10.100.100.2]
Add a route on the management server using the bp-dcn-ip of Node 1 as the gateway. Add separate routes to the node-ip of Node 2 and Node 3 using the IP address of the port on the router connected to the server (Port IP A) as the gateway address.
At each router in the network, an administrator must add a route to the node-ip of the
nodes.
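For illustration only, assuming a Solaris-based management server and Cisco IOS-style routers, the routes described above might look like this; the addresses are the example values from the figure:

On the EMS server:
   route add -host 10.100.100.1 172.168.0.3
   route add -host 10.100.100.2 172.168.0.1
   route add -host 10.100.100.3 172.168.0.1

On the router attached to Node 2 (the router attached to Node 3 is configured the same way, with 172.182.0.2 as the next hop):
   ip route 10.100.100.2 255.255.255.255 172.171.0.2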
At Node 2, the bp-dcn-ip can be in the same network as the TransAccess Mux
connected to it.
See the topic IP Addresses in a TransNav Network, page 2-9 for detailed information
about assigning IP addresses in a TransNav-managed network.
Chapter 4
Network Time Protocol (NTP) Sources
Introduction
This chapter includes the following information on NTP sources in a Traverse network:
• NTP Sources in a Traverse Network, page 2-21
• NTP Sources on a Ring Topology, page 2-22
• NTP Sources on a Linear Chain Topology, page 2-22
NTP Sources in a Traverse Network
Network Time Protocol provides an accurate time-of-day stamp for performance monitoring and for alarm and event logs. Turin recommends using the TransNav management system server as the primary NTP source if you do not already have an NTP source defined. If no primary NTP source is configured, the TransNav system defaults to the TransNav server as the primary NTP source. A secondary NTP server IP address is optional.
Depending on the topology, configure a primary NTP source and a secondary NTP source for each node in a network.
• For ring topologies, see NTP Sources on a Ring Topology, page 2-22.
• For linear chain topologies, see NTP Sources on a Linear Chain Topology, page 2-22.
NTP Sources on a Ring Topology
Turin recommends using the adjacent nodes as the primary and secondary NTP sources in a ring configuration. Use the Management Gateway Node (MGN) or the node closest to the MGN as the primary source and the other adjacent node as the secondary source. The following example shows NTP sources in a ring topology.
[Figure: NTP sources on a ring topology. The management server connects to Node 1 (the MGN); Nodes 1, 2, and 3 form the ring. Node 2 is configured with NTP1 = Node 1 and NTP2 = Node 3.]
In the above example, the MGN selects the management server as the primary NTP
server and does not select a secondary server. At Node 2, you would configure the
primary server as Node 1 (the MGN), and the secondary server as Node 3.
NTP Sources on a Linear Chain Topology
On a linear chain topology, Turin recommends using the upstream node as the primary NTP source and the management server as the secondary NTP source.
In the following example, Node 1 (the MGN) selects the management server as the primary NTP server and does not select a secondary server. At Node 2, you would configure Node 1 as the primary NTP server and the management server as the secondary source.
[Figure: NTP sources on a linear chain topology. The management server connects to Node 1, the management gateway node; Nodes 2, 3, and 4 are chained downstream from Node 1.]
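Applying the same rule to the rest of the chain shown above gives the following assignments (a worked example based on the recommendation, not additional required configuration):
• Node 1 (MGN): primary NTP source = management server; no secondary source
• Node 2: primary = Node 1, secondary = management server
• Node 3: primary = Node 2, secondary = management server
• Node 4: primary = Node 3, secondary = management server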
Index

A
Access groups
   see Role-based Access Control
Accounting data
   basis, 1-10
Administration
   data collection, 1-11
   nodes, 1-10
   reports, 1-11
Alarms
   GUI windows, 1-7
   node group, 1-7, 1-15
Auto-discovery
   intelligent control plane, 1-8

C
CLI
   commands
      description, 1-16
Configuration
   management
      equipment, 1-8
      multiple servers, 1-9
      preprovisioning, 1-9
      service provisioning, 1-9
Control
   RBAC, see Role-based Access Control
Control module
   preprovisioning, 1-9
   remote restore, 1-11

D
Dataset snapshots, 1-12
Daylight Saving Time
   support, 2-23
Domain
   security
      see Role-based Access Control

E
Event
   management, 1-7

F
Fault
   management, 1-7

G
Graphical user interface
   description, 1-14
   fault and event management, 1-7
   hardware requirements, 2-6
   menu bar, 1-14
   performance management, 1-10
   shelf view, 1-15
   software requirements, 2-6
   views
      map view, 1-14
      navigation tree, 1-15
      network map, 1-14
GUI, see Graphical user interface

H
Hardware
   requirements
      GUI application, 2-6
      Sun Solaris server, 2-3
      Windows, 2-5

I
Intelligent control plane
   auto-discovery, 1-8
   connectivity
      node, 1-3
      service, 1-11
Interoperability
   third party management systems
      SNMP traps, 1-4
      TL1 interface, 1-4
IP address
   requirements, 2-11

M
Management
   plane
      equipment configuration, 1-8
   server
      primary, 1-3, 2-2
      secondary, 1-3, 2-2
   system
      dataset snapshots, 1-12
      fault management, 1-7
      reports, 1-11
      security, Role-based Access Control, 1-10

R
Report
   types, 1-11
Reports
   dataset snapshots, 1-12
Role-based Access Control
   access groups, 1-10
   functional groups, 1-3, 1-10
   security
      domain, 1-10
      node, 1-10
      server, 1-10

S
Scalability, see System
Secondary server, see Servers
Security management, see Role-based Access Control
Server
   parameter
      MaxNoOfUserSessions, 1-4
Servers
   function
      primary, 1-9