
Turin Networks Inc.

TransNav Management System


Documentation

Product Overview Guide

Release TN4.2.x
Publication Date: October 2008
Document Number: 800-0005-TN42 Rev. B
FCC Compliance
This equipment has been tested and found to comply with the limits for a Class A digital device, pursuant to Part 15 of the FCC Rules. This equipment generates, uses, and can radiate radio frequency energy and, if not installed and used in accordance with the installation instructions, may cause harmful interference to radio communications.
Canadian Compliance
This Class A digital apparatus meets all requirements of the Canadian Interference-Causing Equipment Regulations. Cet appareil numérique de la classe A respecte toutes les exigences du Règlement sur le matériel brouilleur du Canada.
Japanese Compliance
This is a Class A product based on the standard of the Voluntary Control Council for Interference by
Information Technology Equipment (VCCI). If this equipment is used in a domestic environment, radio
disturbance may occur, in which case, the user may be required to take corrective actions.

International Declaration of Conformity


We, Turin Networks, Inc., declare under our sole responsibility that the Traverse platform (models Traverse 2000, Traverse 1600, and Traverse 600), to which this declaration relates, is in conformity with the following standards:
EMC Standards
EN55022, EN55024, CISPR-22
Safety Standards
EN60950, CSA C22.2 No. 60950, AS/NZS 3260
IEC 60950 Third Edition. Compliant with all CB scheme member country deviations.
Following the provisions of the EMC Directive 89/336/EEC of the Council of the European Union.
Copyright © 2008 Turin Networks, Inc.
All rights reserved. This document contains proprietary and confidential information of Turin Networks,
Inc., and may not be used, reproduced, or distributed except as authorized by Turin Networks. No part of this
publication may be reproduced in any form or by any means or used to make any derivative work (such as
translation, transformation or adaptation) without written permission from Turin Networks, Inc.
Turin Networks reserves the right to revise this publication and to make changes in content from time to time
without obligation on the part of Turin Networks to provide notification of such revision or change. Turin
Networks may make improvements or changes in the product(s) described in this manual at any time.
Turin Networks Trademarks
Turin Networks, the Turin Networks logo, Traverse, TraverseEdge, TransAccess, TransNav, and Creating
The Broadband Edge are trademarks of Turin Networks, Inc. or its affiliates in the United States and other
countries. All other trademarks, service marks, product names, or brand names mentioned in this document
are the property of their respective owners.
Government Use
Use, duplication, or disclosure by the U.S. Government is subject to restrictions as set forth in FAR 12.212
(Commercial Computer Software-Restricted Rights) and DFAR 227.7202 (Rights in Technical Data and
Computer Software), as applicable.
TransNav Product Overview Guide

Contents
About this Document . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii

Section 1 Overview and Features


Chapter 1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Chapter 2
Network Management Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Chapter 3
User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13

Section 2 Management System Planning


Chapter 1
TransNav Management System Requirements . . . . . . . . . . . . . . . . . . . . . . 2-1
Chapter 2
TransNav Management System Planning. . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
Chapter 3
IP Address Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
Chapter 4
Network Time Protocol (NTP) Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-23

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Index-1


About this Document

Introduction
This section contains the following documentation topics:


• Traverse System Product Documentation
• TraverseEdge System Product Documentation
• TransNav Management System Product Documentation
• Operations Documentation
• Information Mapping
• If You Need Help
• Calling for Repairs
Refer to “What’s New in the Documentation?” to review the new and changed features
for this release.

Traverse System Product Documentation
The Traverse® system product documentation set includes the documents described in the table below.

Table 1 Traverse System Product Documentation

Traverse Product Overview
  Description: Provides a detailed overview of the Traverse system. It also includes engineering and planning information.
  Target audience: Anyone who wants to understand the Traverse system and its applications.

Traverse Installation and Configuration
  Description: Provides required equipment, tools, and step-by-step procedures for:
  • Hardware installation
  • Power cabling
  • Network cabling
  • Node power-up
  • Node start-up
  Target audience: Installers, field engineers, and network engineers.

Traverse Provisioning
  Description: Provides step-by-step procedures for provisioning a network of Traverse nodes using the TransNav management system. See the TransNav Management System Product Documentation.
  Target audience: Network engineers, provisioning personnel, and network operations center (NOC) personnel.


TraverseEdge System Product Documentation
The TraverseEdge™ 100 User Guide includes the sections described in the table below.

Table 2 TraverseEdge 100 System Product Documentation

Product Overview
  Description: Provides a detailed overview of the TraverseEdge system.
  Target audience: Anyone who wants to understand the TraverseEdge system and its applications.

Description and Specifications
  Description: Includes engineering and planning information.
  Target audience: Field and network engineers.

Installation and Configuration
  Description: Identifies required equipment and tools and provides step-by-step procedures for:
  • Hardware installation
  • Power cabling
  • Network cabling
  • Node power-up
  • Node start-up
  Target audience: Installers, field engineers, and network engineers.

Provisioning the Network
  Description: Provides step-by-step procedures for provisioning a TraverseEdge network using the TransNav management system. Also see the TransNav Management System Product Documentation.
  Target audience: Network engineers, provisioning personnel, and network operations center (NOC) personnel.

Configuring Equipment
  Description: Provides step-by-step procedures for configuring card and interface parameters of a TraverseEdge using the TransNav management system. Also see the TransNav Management System Product Documentation.
  Target audience: Network engineers, provisioning personnel, and NOC personnel.

Creating TDM Services
  Description: Provides step-by-step procedures for creating TDM services on a TraverseEdge network using the TransNav management system. Also see the TransNav Management System Product Documentation.
  Target audience: Network engineers, provisioning personnel, and NOC personnel.

Creating Ethernet Services
  Description: Provides step-by-step procedures for creating Ethernet services on a TraverseEdge network using the TransNav management system. See the TransNav Management System Product Documentation.
  Target audience: Network engineers, provisioning personnel, and NOC personnel.

Appendices
  Description: Provides installation and provisioning checklists, compliance information, and acronym descriptions.
  Target audience: Installers and anyone who wants reference information.


TransNav Management System Product Documentation
The TransNav™ management system product documentation set includes the documents described in the table below.

Table 3 TransNav Management System Product Documentation

TransNav Management System Product Overview
  Description: Provides a detailed overview of the TransNav management system. Includes hardware and software requirements for the management system, as well as network management planning information.
  Target audience: Anyone who wants to understand the TransNav management system.

TransNav Management System Server Guide
  Description: Describes the management server component of the management system and provides procedures and troubleshooting information for the server.
  Target audience: Field and network engineers, provisioning personnel, and network operations center (NOC) personnel.

TransNav Management System GUI Guide
  Description: Describes the graphical user interface, including installation instructions and logon procedures, and describes every menu, window, and screen a user sees in the graphical user interface.
  Target audience: Field and network engineers, provisioning personnel, and NOC personnel.

TransNav Management System CLI Guide
  Description: Includes a quick reference to the command line interface (CLI), as well as comprehensive lists of both the node-level and domain-level CLI commands.
  Target audience: Field and network engineers, provisioning personnel, and NOC personnel.

TransNav Management System TL1 Guide
  Description: Describes the syntax of the TL1 language in the TransNav environment. Also defines all input commands and expected responses for retrieval commands, as well as autonomous messages that the system outputs due to internal system events.
  Target audience: Field and network engineers, provisioning personnel, and NOC personnel.


Operations Documentation
The document below provides operations and maintenance information for Turin's TransNav-managed products.

Table 4 Operations Documentation

Node Operations and Maintenance
  Description: Identifies required equipment and tools and provides step-by-step procedures for:
  • Alarms and recommended actions
  • Performance monitoring
  • Equipment LEDs and status
  • Diagnostics
  • Test access (SONET network only)
  • Routine maintenance
  • Node software upgrades
  • Node hardware upgrades
  Target audience: Field and network engineers.

Information Mapping
Traverse, TransNav, and TraverseEdge 100 system documentation uses the Information Mapping format, which presents information in small units or blocks. The beginning of an information block is identified by a subject label in the left margin; the end is identified by a horizontal line. Subject labels allow the reader to scan the document and find a specific subject. The objective is to make information easy for the reader to access, use, and remember.
Each procedure lists the required equipment and tools and provides step-by-step instructions for performing each task. Graphics are integrated into the procedures whenever possible.

If You Need Help
If you need assistance while working with Traverse products, contact the Turin Networks Technical Assistance Center (TAC):
• Inside the U.S., toll-free: 1-866-TURINET (1-866-887-4638)
• Outside the U.S.: 916-348-2105
• Online: www.turinnetworks.com/html/support_overview.htm
The TAC is available 6:00 AM to 6:00 PM Pacific Time, Monday through Friday (business hours). When the TAC is closed, emergency service only is available on a callback basis. E-mail support (24-hour response) is also available through: [email protected].

Calling for Repairs
If repair is necessary, call the Turin Repair Facility at 1-866-TURINET (1-866-887-4638) for a Return Material Authorization (RMA) number before sending the unit. The RMA number must be prominently displayed on all equipment cartons. The Repair Facility is open from 6:00 AM to 6:00 PM Pacific Time, Monday through Friday.
When calling from outside the United States, use the appropriate international access code, and then call 916-348-2105 to contact the Repair Facility.


When shipping equipment for repair, follow these steps:


1. Pack the unit securely.
2. Enclose a note describing the exact problem.
3. Enclose a copy of the invoice that verifies the warranty status.
4. Ship the unit PREPAID to the following address:
Turin Networks, Inc.
Turin Repair Facility
Attn: RMA # ________
1415 North McDowell Blvd.
Petaluma, CA 94954 USA



Section 1: Overview and Features
Management System Overview

Contents
Chapter 1
Overview
What Is the TransNav Management System?. . . . . . . . . . . . . . . . . . . . . . . . . 1-1
TransNav Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-1
Client Workstation Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Management Server Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Node Agent Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
TransNav Management System Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-3
Interoperability with Third-party Management Systems . . . . . . . . . . . . . . . . . 1-4
Autodiscovery and Pre-provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Simultaneous Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-4
Reliability, Availability, and Serviceability (RAS) . . . . . . . . . . . . . . . . . . . . . . . 1-5

Chapter 2
Network Management Features
Fault and Event Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Alarm Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Data Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-7
Flexible Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Flexible Scoping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Sorting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Clearing Alarms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Configuration Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Equipment Configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-8
Pre-provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Service Provisioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Secondary Server Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-9
Accounting Management. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Performance Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Role-based Access Control. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Domain Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Node Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
Node Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-10
System Log Collection and Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Report Generation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
General Reports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-11
Data Set Snapshots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-12


Chapter 3
User Interfaces
Access to User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-13
Graphical User Interfaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Map View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Shelf View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-15
Command Line Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16
Domain Level CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
Node Level CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17
TL1 Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-17

List of Figures
Figure 1-1 TransNav Software Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 1-2
Figure 1-2 Map View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-14
Figure 1-3 Shelf View . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1-16

List of Tables
Table 1-1 Accessing the TransNav Management System. . . . . . . . . . . . . . . 1-13



Section 1: Overview and Features

Chapter 1
Overview

Introduction
This chapter describes the TransNav management system:


• What Is the TransNav Management System?, page 1-1
• TransNav Software Architecture, page 1-1
• Client Workstation Application, page 1-2
• Management Server Application, page 1-2
• Node Agent Application, page 1-3
• TransNav Management System Features, page 1-3

What Is the TransNav Management System?
The TransNav management system is an advanced element and subnetwork management system designed for comprehensive management of the Traverse network, consisting of Traverse, TraverseEdge, and TransAccess products. The Java™-based software integrates smoothly into existing automated and manual operations support system (OSS) infrastructure.
The multi-level management architecture applies the latest distributed and evolvable
technologies. These features enable you to create and deploy profitable new services,
as well as transition gracefully to a more dynamic and data-centric, multi-service
optical transport network.
The TransNav management system consists of an integrated set of software
components that reside on the server(s), the client workstations, and individual nodes.
• Client Workstation Application, page 1-2. Provides the user interface for
managing the network. The management system supports a graphical user interface
(GUI), a command line interface (CLI), and a TL1 interface.
• Management Server Application, page 1-2. Communicates with the nodes and the servers, and provides classical element management FCAPS functionality (fault, configuration, accounting, performance, and security), policy management, reporting, and system administration.
• Node Agent Application, page 1-3. Resides on the control card and maintains a
persistent database of management information for specific nodes. It also controls
the flow of information between the management server and specific nodes.

TransNav Software Architecture
The TransNav management system is an all Java-based, highly integrated system that uses the identical architecture on the Traverse network nodes and the management server(s). The architecture leverages the Java Dynamic Management Kit (JDMK) and an implementation of Java Management Extensions (JMX) to provide an efficient client-server architecture.

Figure 1-1 TransNav Software Architecture

All communication between nodes and the server or between the client application and
the server uses the Java Remote Method Invocation (RMI) system over TCP/IP. The
server also uses RMI internally between the JDMK servers and JDMK clients.
Information flows southbound – from the user on the client workstation, to the Session
Manager, to the application server, to the Traverse Node Gateway Client inside the
management server, and finally down to the Traverse Node Gateway Agent embedded
in the node – via RMI over TCP/IP.
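
As a rough illustration of this pattern only, the sketch below shows a minimal Java RMI lookup of the kind the architecture description implies. The NodeGateway interface, its method, the registry name, and the host are all invented for the sketch and are not the actual TransNav API.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;

// Hypothetical remote interface; the real TransNav interfaces are not published.
// Any method exposed over RMI must be declared to throw RemoteException.
interface NodeGateway extends Remote {
    String getNodeStatus(String nodeName) throws RemoteException;
}

public class GatewayClient {
    public static void main(String[] args) throws Exception {
        // Connect to the RMI registry on the management server (over TCP/IP),
        // look up the remote object by name, and invoke it as if it were local.
        Registry registry = LocateRegistry.getRegistry("mgmt-server.example.com", 1099);
        NodeGateway gateway = (NodeGateway) registry.lookup("TraverseNodeGateway");
        System.out.println(gateway.getNodeStatus("node-1"));
    }
}
```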

Client Workstation Application
The client workstation application provides the user interface for managing the network. The TransNav management system supports GUI, CLI, and TL1 interfaces. See Figure 1-1 TransNav Software Architecture for a graphical representation of the client workstation application.
The client workstation application communicates with the session manager on the management server. Download the GUI application from the management server, or simply telnet to the management server to access the CLI or TL1.

Management Server Application
The management server application communicates with nodes and provides classical element management FCAPS functionality (fault, configuration, accounting, performance, and security), as well as policy management, reporting, and system administration. See Figure 1-1 TransNav Software Architecture for a graphical representation of the management server application.


Security management, logging, and external interfaces to upstream applications are all
implemented in the upper level session management component on the management
server. These functions are implemented as a JDMK server and are responsible for
servicing both the GUI client applet and the northbound interfaces. Enhanced security
is achieved using Functional Groups to provide RBAC (Role-based Access Control)
functionality.
A separate SNMP agent, also implemented as a JDMK server, supports SNMP traps
(fault management) for simplified version control. The SNMP agent works with the
fault management application card.
The agent on the node passes node-level data to the management server via RMI over
TCP/IP. On the management server, the Node Gateway Controller receives the
information and pre-processes it. The Node Gateway Controller then passes the
pre-processed information to the management functions within the application server.
The application server is responsible for persistence at the server side and, to this end,
manages the entire interface with the underlying SQL database.
Each TransNav management system supports up to eight servers; one server is designated as the Primary server, and the remaining servers are designated as Secondary servers. The Primary server actively manages the network. The Secondary servers passively view the network but cannot perform any management operations that would change the state of the network. Any Secondary server can be promoted to the Primary server role in case of failure or maintenance. The switch in server roles requires some degree of user intervention.

Node Agent Application
Each node has a redundant control card with a persistent relational database management system that records provisioning, alarm, maintenance, and diagnostic information for the node. See Figure 1-1 TransNav Software Architecture for a graphical representation of the node agent application.
Each control card uses Java agents (M-Beans [management beans]) to communicate
with Java applications on the management server and synchronize data between the
server and the nodes it manages.
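
The M-Bean mechanism mentioned here is standard JMX. The sketch below shows the general shape of a JMX standard MBean and its registration; the NodeInfo bean, its attributes, and the object name are hypothetical examples, not the actual TransNav agent code.

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// Standard MBean convention: the management interface name is the
// implementation class name plus the "MBean" suffix.
interface NodeInfoMBean {
    String getNodeName();
    int getActiveAlarmCount();
}

public class NodeInfo implements NodeInfoMBean {
    public String getNodeName() { return "node-1"; }
    public int getActiveAlarmCount() { return 0; }

    public static void main(String[] args) throws Exception {
        // Register the MBean so a remote manager (for example, over RMI)
        // can read its attributes.
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        server.registerMBean(new NodeInfo(),
                new ObjectName("transnav:type=NodeInfo,node=node-1"));
        System.out.println("NodeInfo MBean registered");
    }
}
```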

TransNav Management System Features
The TransNav management system provides comprehensive management of both the nodes and the connections between nodes through the Intelligent Control Plane. This specifically includes efficient integration of management plane and control plane functions, and policy-based management.
The TransNav management system features include:
• Interoperability with Third-party Management Systems, page 1-4
• Autodiscovery and Pre-provisioning, page 1-4
• Simultaneous Users, page 1-4
• Scalability, page 1-4
• Reliability, Availability, and Serviceability (RAS), page 1-5


Interoperability with Third-party Management Systems
The TransNav management system supports other telecommunications management network functions at the network management layer, the service management layer, and the business management layer through a variety of northbound interfaces. The management system provides options to support the following interfaces:
• Forwarding of SNMP traps to SNMP network management systems for integrated higher-layer fault management
• Domain-level and node-level CLI via scripts
• TL1 alarm and performance management forwarding from the management server
• TL1 equipment and protection group configuration and test access

Autodiscovery and Pre-provisioning
Each node uses a process called autodiscovery to learn the addresses of all equipment in its control plane domain. Commission the node using the CLI and enter the host name or IP address of the gateway node(s). The management system then discovers and manages all the nodes in the domain without requiring any other preprovisioned information.
The TransNav management system supports preprovisioning, which allows provisioning functions to proceed independent of service activation. The effectiveness of preprovisioning depends upon effective traffic engineering to ensure network capacity is available upon activation. Upon installation, a node is discovered automatically, and the management server forwards the preprovisioned information to the node.
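
Conceptually, discovery seeded from one or more gateway nodes behaves like a breadth-first walk of the control plane topology. The sketch below illustrates the idea only; it assumes a simple in-memory neighbor map and is not the actual autodiscovery algorithm.

```java
import java.util.ArrayDeque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.Set;

public class DiscoverySketch {
    // Conceptual model: starting from the gateway node(s), visit each node's
    // control plane neighbors until the whole domain has been seen.
    static Set<String> discover(List<String> gateways, Map<String, List<String>> neighbors) {
        Set<String> discovered = new LinkedHashSet<>(gateways);
        Queue<String> toVisit = new ArrayDeque<>(gateways);
        while (!toVisit.isEmpty()) {
            String node = toVisit.remove();
            for (String peer : neighbors.getOrDefault(node, List.of())) {
                if (discovered.add(peer)) {  // true only the first time we see it
                    toVisit.add(peer);
                }
            }
        }
        return discovered;
    }

    public static void main(String[] args) {
        Map<String, List<String>> topology = Map.of(
                "gw", List.of("a", "b"),
                "a", List.of("c"),
                "b", List.of(),
                "c", List.of());
        System.out.println(discover(List.of("gw"), topology)); // [gw, a, b, c]
    }
}
```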

Simultaneous Users
The number of simultaneous user sessions is configurable on the server (MaxNoOfUserSessions); the default is 20 simultaneous users. The management system does not otherwise restrict the number of simultaneous users, either by software licensing or by system configuration parameters. Customer usage patterns may allow more simultaneous users with reasonable response time than specified.
One GUI session, one CLI session, or one TL1 session counts as one simultaneous user. Up to 10 simultaneous users can log into a node-level CLI session.
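
As a rough model of the MaxNoOfUserSessions limit described above, a counting guard such as the following could cap concurrent sessions. This is an illustrative sketch only, not the server's implementation.

```java
import java.util.concurrent.Semaphore;

public class SessionLimiter {
    // One permit per allowed session; 20 matches the documented default.
    private final Semaphore slots;

    public SessionLimiter(int maxSessions) {
        this.slots = new Semaphore(maxSessions);
    }

    // A GUI, CLI, or TL1 session each takes one slot.
    public boolean openSession() {
        return slots.tryAcquire();
    }

    public void closeSession() {
        slots.release();
    }

    public static void main(String[] args) {
        SessionLimiter limiter = new SessionLimiter(20);
        System.out.println("session opened: " + limiter.openSession());
        limiter.closeSession();
    }
}
```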

Scalability
Turin works with customers to specify configurations that support the required scalability. The TransNav management system supports:
• 1 to 8 TransNav servers. One server is designated the Primary server; the remaining servers are Secondary servers.
• Up to 200 Traverse nodes and simultaneous users per server, based on specific user behaviors, by:
  – Selecting a multi-processor server with the potential capacity to support the estimated maximum requirements, and adding CPUs, memory, and disk capacity as needed.
  – Distributing various components of the management system over multiple servers.


Reliability, Availability, and Serviceability (RAS)
Turin works closely with customers to configure hardware and software to achieve desired levels of high availability for their Sun Solaris server-based TransNav system deployments. This includes supporting secondary network operations centers for disaster recovery. Our goal is to achieve exceptional service reliability and availability in a cost-effective manner.



Section 1: Overview and Features

Chapter 2
Network Management Features

Introduction
The TransNav management system provides classical element management functionality (FCAPS: fault, configuration, accounting, performance, and security), plus policy management, reporting, and system administration. This chapter describes:
• Fault and Event Management, page 1-7
• Configuration Management, page 1-8
• Secondary Server Support, page 1-9
• Accounting Management, page 1-10
• Performance Management, page 1-10
• Role-based Access Control, page 1-10
• Node Administration, page 1-10
• System Log Collection and Storage, page 1-11
• Report Generation, page 1-11

Fault and Event Management
The TransNav management system graphical user interface (GUI) enables each technician to open multiple Alarm windows. The number of windows is limited only by effective use of the workstation's screen area and by client workstation system resources, such as memory and CPU load.
If technicians have their nodes grouped, clicking a node group in the navigation tree or clicking a node group map displays only the alarms associated with that node group. This includes nodes and node groups within the parent-level node group.
In the GUI, windows and dialog boxes have the following characteristics:

Alarm Data
The system provides a count of the number of outstanding alarms by severity level.
This information is available at a network level as well as for each individual node.
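
A per-severity count of outstanding alarms, as described above, reduces to a simple grouping operation. The sketch below illustrates the idea with invented severity labels; it is not the management system's code.

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class AlarmSummary {
    public static void main(String[] args) {
        // Hypothetical outstanding alarms, each tagged with a severity label.
        List<String> severities = List.of("CRITICAL", "MAJOR", "MAJOR", "MINOR", "CRITICAL");

        // Count outstanding alarms per severity level.
        Map<String, Long> counts = new TreeMap<>();
        for (String s : severities) {
            counts.merge(s, 1L, Long::sum);
        }
        System.out.println(counts); // {CRITICAL=2, MAJOR=2, MINOR=1}
    }
}
```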

Data Sequence
Each user can specify the sequence in which data fields will appear for each window.


Flexible Filtering
The user can determine what data appears in the selected fields for each separate Alarm
window.

Flexible Scoping
The user can determine which nodes and equipment appear in the selected fields for
each separate Alarm window.

Sorting
When a column heading (e.g., “severity”) is selected, the Alarm window is sorted by
that category.

Clearing Alarms
Only a node clears alarms. Alarms received by the management system are
automatically marked as cleared and added to the display. The user can also set the
retention duration of cleared alarm messages in the server alarm database and the alarm
display.
Graphical buttons and a context menu provide the following options:
• Acknowledge the alarm.
• Select a detailed alarm view that allows the user to view alarm details in addition to
adding comments.
• Set filters that allow the user to include or exclude alarms from specific sources
from being displayed in the Alarm window.
• Open a new Alarm window.

Configuration Management
Use the TransNav management system for all configuration management requirements:
• Equipment Configuration, page 1-8
• Pre-provisioning, page 1-9
• Service Provisioning, page 1-9
• Secondary Server Support, page 1-9
• Report Generation, page 1-11

Equipment Configuration
After a node is installed and activated, it discovers its specific components and forwards that information to the management system. The system, in turn, populates its databases and builds the graphical representation of the equipment. The Intelligent Control Plane automatically discovers the network and forwards that information to the management plane, which creates the network topology map.
Use node-level CLI for initial system commissioning. For detailed information, see the
Traverse Installation and Commissioning Guide, Section 11—Node Start-up and
Commissioning Procedures, Chapter 1—“Node Start-up and Commissioning,”
page 11-1.
The TransNav management system supports Telcordia CLEI™ (Common Language®
Equipment Identifier) codes per GR-485-CORE. These are encoded on individual
cards.


Pre-provisioning
The TransNav management system supports complete preprovisioning of all nodes.
Preprovisioning facilitates rapid turn-up of new nodes and node expansions as well as
support for planning and equipment capital control. Preprovisioning of customer
services enables the service provider to efficiently schedule provisioning work
independent of service activation.
The management system stores the parameters of the service request and sends them to the Intelligent Control Plane upon activation. If the management system is unable to complete activation, it raises appropriate alarms, including insight into why provisioning and activation of the service could not be completed. The effectiveness of preprovisioning depends upon effective traffic engineering to ensure that network capacity is available upon activation.

Service Provisioning
The TransNav management system provides end-to-end provisioning of services and requires minimal input from the user. Alternatively, the user can set the constraints (each hop and time slot) of a service. You can provision a service using any of the following methods:
• Graphical user interface
• Script language (typical for batch provisioning)
• Domain-level CLI interface

Secondary Server Support
The TransNav management system supports one Primary server and up to seven Secondary servers in the network. The Primary server actively manages the network, while the Secondary servers passively view the network but do not perform any management operations that would change the network. If the Primary server fails or is scheduled for maintenance, any Secondary server can be manually changed to take the Primary server role.
Critical information on the Secondary servers is synchronized with the network elements automatically, in real time. This includes current provisioning, service state, and alarm and event information from the Traverse nodes. To synchronize PM data, domain user login profiles, user preferences and roles, customer records, alarm acknowledgements and annotations, reports, and report templates and schedules, the Primary server database must be exported and then imported into the Secondary server database. Depending on the network size, the import process takes between one and five minutes.
Manual synchronization should be performed on a Secondary server database before it
is promoted to a Primary server role. For detailed information on promoting a
Secondary server, see the TransNav Management System Server Guide,
Section 2—Management Server Procedures, Chapter 3—“Server Administration
Procedures,” or the TransNav Management System CLI Guide, Chapter 2—“CLI
Quick Reference.”


Accounting Management
Accounting data for all services is based primarily on performance management data and is transmitted from the nodes to the management system.
Using this data, the service provider can track service levels and ensure that traffic
complies with service level agreements (SLAs). SLA monitoring enables the service
provider to create a billing opportunity and to charge a premium for the guaranteed
level of service.

Performance Management
Nodes collect performance management data and forward it to the Primary management server for storage in the database. The data is processed in two ways:
• The service provider’s management system administrator can set threshold
crossing alert limits. The threshold crossing alert appears as an event on the GUI
Events tab.
• The TransNav management system on the Primary server provides basic reports.
The data can be exported for analysis and graphical presentation by applications
such as Microsoft® Excel.
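
A threshold crossing alert reduces to comparing a collected performance counter against a configured limit. The following sketch illustrates the idea only; the PmSample record, the counter name, and the limit are invented for the example.

```java
public class ThresholdCheck {
    // Hypothetical PM record: a counter value collected over an interval.
    record PmSample(String node, String counter, long value) {}

    // An alert is raised when the sample reaches or crosses the configured limit.
    static boolean crossesThreshold(PmSample sample, long limit) {
        return sample.value() >= limit;
    }

    public static void main(String[] args) {
        PmSample sample = new PmSample("node-1", "ES", 120); // errored seconds
        long limit = 100;
        if (crossesThreshold(sample, limit)) {
            System.out.println("Threshold crossing alert: " + sample.node()
                    + " " + sample.counter() + " = " + sample.value());
        }
    }
}
```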

Role-based Access Control
Security management enables the network administrator to create and manage user accounts with specific access privileges.
Access control on the management system is through a combination of functional
groups and access groups for domain users, and through access groups for node users.

Domain Users
A domain user can belong to only one functional group at a time. With the exception of administrators, functional groups are user-defined combinations of pre-defined access groups and specific nodes. Domain users in a functional group who have Administrator roles can access all of the system resources, including user management. They assign other domain users access privileges to a set of system features (access groups) and resources (nodes) through user-defined functional groups. Security applies to both the GUI and the CLI. For more information on domain security, see the TransNav Management System GUI Guide, Section 2—Administrative Tasks, Chapter 1—“Managing Server Security,” page 2-1.

Node Users
The management system has several pre-defined access groups for node users. Any
node user can be in one or more access groups. Within the access groups, access is
cumulative; a user who is in two access groups has the privileges of both access groups.
See the TransNav Management System GUI Guide, Section 2—Administrative Tasks,
Chapter 2—“Managing Node Security,” page 2-11 for more information on node
security.
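
Because access is cumulative, a node user's effective rights are the set union of the rights of all assigned access groups. The sketch below illustrates this with hypothetical privilege names; the real pre-defined access groups are documented in the GUI Guide.

```java
import java.util.EnumSet;
import java.util.List;
import java.util.Set;

public class NodeUserAccess {
    // Hypothetical privilege names; the actual pre-defined access groups differ.
    enum Privilege { VIEW_ALARMS, PROVISION_SERVICES, MANAGE_EQUIPMENT }

    // A node user's effective rights are the union of all assigned groups.
    static Set<Privilege> effectivePrivileges(List<Set<Privilege>> groups) {
        Set<Privilege> effective = EnumSet.noneOf(Privilege.class);
        for (Set<Privilege> group : groups) {
            effective.addAll(group);
        }
        return effective;
    }

    public static void main(String[] args) {
        Set<Privilege> viewers = EnumSet.of(Privilege.VIEW_ALARMS);
        Set<Privilege> provisioners = EnumSet.of(Privilege.PROVISION_SERVICES);
        // A user in both groups holds the privileges of both.
        System.out.println(effectivePrivileges(List.of(viewers, provisioners)));
    }
}
```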

Node Administration
The TransNav management system provides the following capabilities to support efficient remote administration of nodes:
• Software management and administration. The GUI allows users to view an entire network, a group of nodes, or a specific node. Groups of nodes can be set up in a hierarchical fashion and can be associated with specific geographical maps that coincide with each node group.


• Synchronization of the node and management system databases. The management system database is a superset of each node's database and eliminates the need for remote backup and restore of the node itself. The database on each node is synchronized with the management server database, based on user-defined policies.
• Equipment alarm and event history analysis
• Remote restore of the database on the node for disaster recovery in the event of:
– A failure of both control cards or a major central office (CO) catastrophe.
– A major, unpredictable service provider network failure that creates
uncertainty about the general state of node databases.
The TransNav management system has a local persistent database on the
fault-protected control cards that protects against a single control card failure. A major
advantage of the Intelligent Control Plane automatic mesh service setup and restoration
mechanism is to maintain service connectivity.

System Log Collection and Storage
The TransNav management system collects a broad array of information that is stored in the server database for reporting and analysis. The following list represents data that can be extracted from the server database:
• All user actions from the domain-level GUI or CLI or through the node-level CLI.
• Alarm and event history including performance management threshold crossing
alerts:
– Equipment configuration history
– Node equipment alarm log
• Security logs:
– User list denoting each user’s profile
– Sign-on/sign-off log
– Failed log-on attempts
• Performance management data

Report Generation
All reports can be printed or exported as text-formatted, comma-delimited files.
General Reports
The TransNav management system allows a set of pre-defined reports to be either
scheduled or executed on demand. These reports encompass such functions as:
• Equipment inventory
• Historical alarms
• Historical events
• Performance monitoring and management
• Resource availability
• Service availability
• Domain service
Reports can be set to be run once, hourly, daily, weekly, and monthly.
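
Export as a text-formatted, comma-delimited file amounts to writing one line per row with comma-separated columns. The sketch below illustrates the format with an invented equipment inventory report; the columns are not the actual report schema.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ReportExport {
    public static void main(String[] args) throws IOException {
        // Hypothetical equipment inventory rows: node, slot, card type.
        List<String[]> rows = List.of(
                new String[] {"node-1", "1", "OC-48"},
                new String[] {"node-1", "2", "DS3"});

        StringBuilder csv = new StringBuilder("Node,Slot,CardType\n");
        for (String[] row : rows) {
            csv.append(String.join(",", row)).append('\n');
        }
        // Write the comma-delimited report to a file.
        Files.writeString(Path.of("inventory.csv"), csv.toString());
        System.out.println(csv);
    }
}
```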


Data Set Snapshots


The TransNav management system also provides a simple form of reporting that
produces a file based on a set of information that is currently displayed in the GUI. For
example, the GUI displays active alarms in a dialog box. The set of active alarms is a
data set; the windowing capability of the GUI presents as much of this data set as
possible in the display’s dialog box, allowing the user to scroll to view more of the data
set. The management system allows the user to print, or save to a file, any data that the
system can display in a dialog box. (Note: This is different from the “screen capture”
function of the client workstation’s operating system that captures only the data set
information visible in the dialog box.)



Section 1: Overview and Features

Chapter 3
User Interfaces

Introduction
The TransNav management system supports the following user interfaces:
• Access to User Interfaces, page 1-13
• Graphical User Interfaces, page 1-14
• Command Line Interface, page 1-16
• TL1 Interface, page 1-17

Access to User Interfaces
The following table lists the different access methods you can use to connect to a TransNav management server.

Table 1-1 Accessing the TransNav Management System

TransNav GUI
  • Installed client application (recommended)
  • Local connection to node and remote connection (DCC bytes) to a management server
  • Installed application on a Citrix server

TransNav CLI
  • Telnet to a management server
  • Local connection to node and remote connection (DCC bytes) to a management server

TransNav TL1
  • Local connection to the management system and telnet to a node

Node CLI
  • Local connection to the node
  • Local connection to the node and remote login to a different node in the domain

Node TL1
  • Telnet to the management system and connect to a node
  • Local connection to the node


Graphical User Interfaces
The GUI supports domain-level operators and administrators who are located in a network operations center or in a remote location. There is no GUI at the node level.
The GUI allows domain-level personnel to perform a wide range of provisioning and
monitoring tasks for a single node, groups of nodes, or a network of nodes attached to a
specific server. Users can only see those nodes to which they have security access
rights.
There are two main views in the GUI:
• Map View, page 1-14
• Shelf View, page 1-15
See the TransNav Management System GUI Guide for detailed descriptions of the
GUI. See the TransNav Management System Server Guide for information on saving
background images.

Map View
Map View displays all of the node groups and discovered nodes for a server when you
first start the GUI from that server. From Map View, you can see and manage all the
nodes, node groups, links between the nodes, and network services. The graphic area
displays a background image (usually a map of physical locations of the nodes) and
icons representing the nodes. This initial background image is the Network Map view.
Each node group can have a different background image associated with it; this is the
Group Map.
Each domain user can group the nodes to which they have access in order to more
easily manage their areas of responsibility. They can also add node groups within
existing node groups. The node groups appear in the server network navigation tree.

Figure 1-2 Map View (callouts: menu bar, alarm summary tree, network navigation tree, currently selected object, and context-sensitive tabs)

The menu bar is context-sensitive. Commands display as available (highlighted) or unavailable (grayed out), depending on the selected object. The server network alarm summary tree gives you at-a-glance visibility of network alarms. If you select a node group, only alarms associated with that node group display.
The network navigation tree shows you the node groups and node networks attached to the server, in outline format and in alphanumeric order. Node groups display first, then nodes. In Map View, clicking a node group or a node displays the node group or node name on the top and bottom bars of the window. To view the nodes in a node group, double-click the Group icon in Map View or expand the node group in the navigation tree. To open Shelf View, right-click a node in the navigation tree or double-click the node in Map View; a graphical representation of the node and related information displays. You can see which object (card or port) you have selected by the white rectangle around the object and the name that displays on the top and bottom bars of the window.
The context-sensitive tabs provide server, node group, or node information on alarms,
events, configuration information, protection, services, and service groups.
Double-click a node group to display the node groups and nodes associated with it.
Click a node to display node-specific information. Click anywhere on the map to
display network information specific to the server.

Shelf View
Shelf View displays all of the cards in a node and their associated ports. You can
navigate to Shelf View in the following ways:
• Click the node in Map View, then select Show Shelf View from the View menu.
• Double-click the node in Map View.
• Right-click a node in Map View and select Show Shelf View.
• Right-click a node name in the Navigation Tree and select Show Shelf View.


Figure 1-3 Shelf View (callouts: menu bar, BITS clock, port LED status or alarm indicators, context-sensitive tab screen, and currently selected object)

The menu bar is context-sensitive. Commands are displayed as available (highlighted) or unavailable (grayed out), depending on the selected object.
You can see which object you have selected by the white rectangle around the object in
the graphic and the name displayed on the top and bottom bars of the window.
Context-sensitive tabs (in the bottom half of the screen) provide information on alarms,
events, configuration information, protection, and services. In Shelf View, these tabs
provide single node, card, or port information. Click a card to display card-specific
information. Click a port to display port-specific information. Click an external clock
to display external clock timing information.
A shortcut menu also exists for Shelf View. For more information, see TransNav
Management System GUI Guide, Section 1—Installation and Overview,
Chapter 4—“Graphical User Interface General Description,” page 1-34.

Command Line Interface
You can also access the TransNav management system using a command line interface (CLI). The CLI has these features:
• Command line editing: Use backspace and cursor keys to edit the current line and
to call up previous lines for re-editing and re-submission.
• Hierarchical command modes: Organization of commands into modes with
increasingly narrow problem domain scope.
• Context-sensitive help: Request a list of commands for the current context and
arguments for the current command, with brief explanations of each command.


• Command completion: Enter a command or argument's left-most substring and view a list of possible allowable completions. Abbreviate any command or argument to its left-most unique substring (for many commands, one character).
• Context-sensitive prompt: The prompt for each command displays the current
command mode.
You can access a single node or a network of nodes using the CLI.
See the TransNav Management System CLI Guide for detailed information on the
command line interface.
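
Left-most-substring completion, as described above, can be pictured as filtering the command set by prefix: a unique match allows the abbreviation, while multiple matches are listed. The commands in this sketch are invented placeholders, not TransNav CLI commands.

```java
import java.util.List;
import java.util.stream.Collectors;

public class CompletionSketch {
    // Return all known commands that start with the typed prefix.
    static List<String> complete(String prefix, List<String> commands) {
        return commands.stream()
                .filter(c -> c.startsWith(prefix))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<String> commands = List.of("show", "set", "status", "quit");
        System.out.println(complete("s", commands)); // [show, set, status]: ambiguous
        System.out.println(complete("q", commands)); // [quit]: unique, so "q" suffices
    }
}
```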

Domain Level CLI
Use domain-level commands from the TransNav management server to perform network commissioning, provisioning, synchronizing, and monitoring tasks. Domain-level commands affect multiple nodes in a network and include:
• Setting the gateway node
• Configuring network links
• Creating performance monitoring templates and alarm profiles
• Creating protection rings and services
• Generating reports
Accessing the domain-level CLI also gives you access to the node-level CLI through
the node command.

Node Level CLI
Use node-level CLI commands to perform commissioning, provisioning, or monitoring
tasks on any node on the network. Node-level commands affect only one node in the
network.

TL1 Interface
The TransNav management system supports a TL1 interface to the management servers
and to individual nodes. Currently, the TransNav management system supports a subset
of TL1 commands.
Turin supports these node and network management tasks through the TL1 interface:
• Fault and performance management (including test access and report generation)
• Equipment configuration and management
• Protection group configuration and management
• Security management
For information on TL1 and how to use the TL1 interface, see the TransNav
Management System TL1 Guide.



Section 2: Management System Planning

Contents
Chapter 1
TransNav Management System Requirements
Management System Deployment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
TransNav Network Management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Intelligent Control Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-2
Control Plane Domain . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Management Gateway Nodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Sun Solaris Platform for TransNav Management Server . . . . . . . . . . . . . . . . 2-3
Windows Platform for TransNav Management Server . . . . . . . . . . . . . . . . . . 2-5
TransNav GUI Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6

Chapter 2
TransNav Management System Planning
Recommended Procedure to Create a Network . . . . . . . . . . . . . . . . . . . . . . . 2-7

Chapter 3
IP Address Planning
IP Addresses in a TransNav Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-9
IP Addressing Guidelines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
IP Networks and Proxy ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-11
In-Band Management with Static Routes . . . . . . . . . . . . . . . . . . . . . . . . 2-11
Out-of-Band Management with Static Routes . . . . . . . . . . . . . . . . . . . . . 2-12
Out-of-Band Management with no DCC Connectivity . . . . . . . . . . . . . . . 2-12
TraverseEdge 50 and TransAccess Mux . . . . . . . . . . . . . . . . . . . . . . . . 2-12
Quality of Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13
Proxy ARP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-14
In-Band Management with Static Routes . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-15
In-Band Management with Router and Static Routes . . . . . . . . . . . . . . . . . . . 2-16
In-Band Management of CPEs Over EOP Links . . . . . . . . . . . . . . . . . . . . . . 2-17
Out-of-Band Management with Static Routes. . . . . . . . . . . . . . . . . . . . . . . . . 2-19

Chapter 4
Network Time Protocol (NTP) Sources
NTP Sources in a Traverse Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-21
Daylight Saving Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-21
NTP Sources on a Ring Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
NTP Sources on a Linear Chain Topology . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-22

List of Figures
Figure 2-1 Management System Deployment . . . . . . . . . . . . . . . . . . . . . . . . 2-2


Figure 2-2 IP Quality of Service . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-13


Figure 2-3 Traverse Node Enabled as a Proxy ARP Server. . . . . . . . . . . . . . 2-14
Figure 2-4 TransNav Management System In-Band Management . . . . . . . . 2-15
Figure 2-5 In-Band Management with Router and Static Routes . . . . . . . . . . 2-16
Figure 2-6 In-Band Management of CPEs Over EOP Links . . . . . . . . . . . . . . 2-17
Figure 2-7 Connecting CPEs through EOP Links . . . . . . . . . . . . . . . . . . . . . . 2-18
Figure 2-8 TransNav Management System Out-of-Band Management . . . . . 2-19
Figure 2-9 NTP Sources on a Ring Topology . . . . . . . . . . . . . . . . . . . . . . . . . 2-22
Figure 2-10 NTP Sources on a Linear Chain Topology . . . . . . . . . . . . . . . . . . 2-22

List of Tables
Table 2-1 Sun Solaris Requirements, TransNav Management Server . . . . . 2-3
Table 2-2 Windows Requirements, TransNav Management Server . . . . . . . 2-5
Table 2-3 TransNav GUI Application Requirements . . . . . . . . . . . . . . . . . . . 2-6
Table 2-4 Network Configuration Procedure and References . . . . . . . . . . . . 2-7
Table 2-5 IP Address Node Connectivity Parameters . . . . . . . . . . . . . . . . . . 2-10



Section 2: Management System Planning

Chapter 1
TransNav Management System Requirements

Introduction
The TransNav management system software package contains both server and client
workstation applications. The server functions communicate with the nodes and
maintain a database of topology, configuration, fault, and performance data for all
nodes in the network. The client workstation application provides the user interface for
managing the network.
Use the requirements listed in the following sections to help you determine the
management system requirements for your network.
• Management System Deployment, page 2-2
• TransNav Network Management, page 2-2
• Sun Solaris Platform for TransNav Management Server, page 2-3
• Windows Platform for TransNav Management Server, page 2-5
• TransNav GUI Application, page 2-6


Management System Deployment
The TransNav management system software package contains server applications, client workstation applications, and agent applications that reside on the node.

Figure 2-1 Management System Deployment (client workstations exchange requests and responses with the management system server host, which communicates over the data communications network with the network nodes)

Each TransNav management system supports up to eight servers; one server is designated as the Primary server, and the remaining servers are designated as Secondary servers. The Primary server actively manages the network. The Secondary servers passively view the network but cannot perform any management operations that would change the state of the network. Any Secondary server can be promoted to the Primary server role in case of failure or maintenance. The switch in server roles requires some degree of user intervention.
The server applications communicate with the nodes and maintain a database of
topology, configuration, fault, and performance data for all nodes. The client
workstation application provides the user interface for managing the network (GUI or
CLI). The agent application resides on the node control card and maintains a persistent
database of management information for the node. It also controls the flow of
information between the management server and the node itself.

TransNav Network Management
In addition to the management system applications, the TransNav management system uses the following Traverse software components:

Intelligent Control Plane
An Intelligent Control Plane is a logical set of connections between TransNav-managed
network elements through which those network elements exchange control and
management information. This control and management information can be carried
either in-band or out-of-band.
• See Chapter 3—“IP Address Planning,” Quality of Service, page 2-13 for an
example and description of IP quality of service routing protocol.
• See Chapter 3—“IP Address Planning,” Proxy ARP, page 2-14 for information on using the proxy address resolution protocol.
• See Chapter 3—“IP Address Planning,” In-Band Management with Static
Routes, page 2-15 for an example and a detailed description.
• See Chapter 3—“IP Address Planning,” Out-of-Band Management with Static
Routes, page 2-19 for an example and a detailed description.


Control Plane Domain
A control plane domain is a set of nodes completely interconnected by the intelligent control plane. One TransNav management system can manage up to 200 nodes in a single control plane domain.
Domain management includes tasks such as:
• Setting the gateway node
• Configuring network links
• Creating performance monitoring templates and alarm profiles
• Creating protection rings and services
• Generating reports

Management Gateway Nodes
The TransNav management server connects to nodes over the service provider’s
TCP/IP data communications network. The management system accesses a network
through one or more nodes that are designated as management gateway nodes (MGN).
For in-band management, only one node is connected to the management server.
Therefore, there is one MGN in a network that is managed in-band.
For out-of-band management, each node is connected to the management server either
directly or through a router. Each node is considered a MGN.

Sun Solaris Platform for TransNav Management Server
This table lists the minimum requirements for a Sun Solaris TransNav management server.

Table 2-1 Sun Solaris Requirements, TransNav Management Server

Hardware
• System: Up to 100 nodes: 2 UltraSPARC IIIi CPU processors (1.5 GHz). Up to 200 nodes: 2 UltraSPARC IV CPU processors (1.6 GHz).
• Memory (RAM): Up to 100 nodes: 4 GB, 2 MB cache. Up to 200 nodes: 8 GB, 4 MB cache.
• Hard Drives: Up to 100 nodes: 73 GB of hard disk space. Up to 200 nodes: 146 GB of hard disk space. (RAID controller optional; more disk space if a hot-spare is desired or if more storage is desired for log files.)
• CD-ROM Drive: Internal or external.
• Backup System: Internal is optional; SAN (Storage Area Network) is recommended.
• Network: Two 10/100Base-T Ethernet cards. One card connects to the Data Communications Network (DCN); the other card connects to the Local Area Network (LAN) connecting the client workstations.

Software
• Operating Environment: Sun Solaris 8, 9, or 10. Solaris 8 recommended patch cluster: Generic_108528-15 or later (July 29, 2002); for pre-TN3.1 releases only. Solaris 9 recommended patch cluster: date stamp of July 7, 2004. Bash shell.
• Management System Software: Obtain the latest version of the TransNav management system software from the Software Downloads section of the Turin Infocenter at www.turinnetworks.com (user registration is required), or contact your Turin Sales Support group.
• PDF Viewer: Adobe® Acrobat® Reader® 7.0 or 8.0 for Windows; 7.0.8 for Solaris. Distributed on the documentation CD, or download the application for free from Adobe’s site at www.adobe.com.


Windows Platform for TransNav Management Server
This table lists the minimum requirements for a Windows platform TransNav management server.

Table 2-2 Windows Requirements, TransNav Management Server

Hardware
• System: Up to 100 nodes: PowerEdge 1850, 3.0 GHz. Up to 200 nodes: PowerEdge 6850, 3.6 GHz.
• Memory (RAM): Up to 100 nodes: 4 GB, 2 MB cache. Up to 200 nodes: 8 GB, 4 MB cache.
• Hard Drives: Up to 100 nodes: 73 GB of hard disk space. Up to 200 nodes: 146 GB of hard disk space.
• CD-ROM Drive: Internal or external.
• Monitor: Server only: high resolution 15-inch (1024 x 768). Server and client: high resolution 21-inch (1280 x 1024).
• Disk Backup System: Required if unable to back up the TransNav database to a server on the network.
• Network: One or two 10/100Base-T Ethernet cards. One Ethernet Network Interface Card (NIC) connects to the Data Communications Network (DCN); the second, optional Ethernet NIC connects to the Local Area Network (LAN) connecting the client workstations.

Software
• Operating Environment: Windows 2000 Service Pack 2; Windows XP Professional Service Pack 2; or Windows Server 2003 (Microsoft client licenses are not required for clients to connect to TransNav software running on the Microsoft Windows 2003 Server platform). Microsoft Windows Vista is supported for the TransNav Client only.
• Management System Software: Latest version of the TransNav management system software, provided by the Turin Networks, Inc. Technical Assistance Center. Obtain the latest version from the Software Downloads section of the Turin Infocenter at www.turinnetworks.com (user registration is required).
• PDF Viewer: Adobe® Acrobat® Reader® 7.0 or 8.0 for Windows; 7.0.8 for Solaris. Distributed on the documentation CD, or download the application for free from Adobe’s site at www.adobe.com.
• FTP Server Application: To distribute TransNav software to network elements. Turin recommends WAR FTP for Windows; download the application for free from www.warftp.org.
• Telnet Server Application: To access the TransNav management server remotely.
• Compression Software: Turin recommends the popular compression application WinZip. See www.winzip.com.

TransNav GUI Application
A client workstation is required to access the TransNav management server from the graphical user interface (GUI). Turin recommends installing the application directly on the client workstation for faster initialization, operation, and response time.

Table 2-3 TransNav GUI Application Requirements

Hardware
• CPU: Sun SPARC (Solaris version independent) workstation¹, or a Windows PC capable of running Windows 2000 Professional, Windows XP Professional, Windows 2003 Server, or Windows Vista.
• Memory (RAM): Up to 100 nodes: 4 GB. Up to 200 nodes: 8 GB.
• Hard Drive Space: 73 GB or more recommended.
• Monitor: High resolution 21-inch (1280 x 1024) monitor or high resolution laptop.
• CD-ROM Drive: Internal or external.
• Network: One 10/100Base-T Ethernet card.

Software
• Operating Environment: Any of the following: Sun Solaris 8, 9, or 10 (Sun Solaris 8 for pre-TN3.1 releases only); Microsoft Windows NT v4 Service Pack 6 or 6a; Microsoft Windows 2000 Service Pack 2; Microsoft Windows XP Professional Service Pack 2; or Microsoft Windows Vista (limited to the TransNav Client running on Microsoft Vista).
• PDF Viewer: Adobe® Acrobat® Reader® 7.0 or 8.0 for Windows; 7.0.8 for Solaris. Distributed on the documentation CD, or download the application for free from Adobe’s site at www.adobe.com.
• Compression Software: Turin recommends the popular compression application WinZip. See www.winzip.com.

¹ The GUI application has not been tested on Sun i386 or Intel-based Linux configurations.



Section 2: Management System Planning

Chapter 2
TransNav Management System Planning

Introduction
This chapter includes the following information on creating and managing a network
using the TransNav management system:
• Recommended Procedure to Create a Network, page 2-7

Recommended Procedure to Create a Network
Use these steps as a guideline to create a TransNav-managed network.

Table 2-4 Network Configuration Procedure and References

1. Create a network plan.
Reference: Traverse Product Overview Guide; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide; TransAccess 200 Mux User Guide; TransNav Management System Product Overview Guide

2. Assign IP addresses to the management server(s) and network elements.
Reference: TransNav Management System Product Overview Guide, Section 2—Management System Planning, Chapter 3—“IP Address Planning,” page 2-9

3. Set a management server as the primary NTP server.
Reference: TransNav Management System Server Guide, Section 2—Management Server Procedures, Chapter 1—“Creating the Management Servers,” page 2-5

4. Add routes for the node-ips to the management server.
Reference: This step depends on the server platform (Solaris or Windows) and local site practices; contact your local site administrator.

5. Install the TransNav management system software.
Reference: TransNav Management System Server Guide, Section 1—Installation and Description

6. Initialize, then start, the server. Start the Primary server first, then initialize and start the Secondary servers.
Reference: TransNav Management System Server Guide, Section 2—Management Server Procedures, Chapter 3—“Server Administration Procedures,” page 2-23

7. Install, connect, and commission nodes and peripheral equipment according to the network plan.
Reference: Traverse Installation and Commissioning Guide; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide; TransAccess 200 Mux User Guide

8. Start the user interface and discover the nodes in the network.
Reference: TransNav Management System GUI Guide, Section 1—Installation and Overview, Chapter 3—“Starting the Graphical User Interface,” page 1-17; Traverse Provisioning Guide, Section 1—Configuring the Network, Chapter 2—“Discover the Network,” page 1-3; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide, Section 4—Configuring the Network, Chapter 1—“Configuring the Network,” page 4-1; TransAccess 200 Mux User Guide

9. Configure timing options for the network.
Reference: Traverse Provisioning Guide, Section 1—Configuring the Network, Chapter 4—“Configuring Network Timing,” page 1-13; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide, Section 4—Configuring the Network, Chapter 2—“Configuring Network Timing,” page 4-9; TransAccess 200 Mux User Guide

10. Create protection groups.
Reference: Traverse Provisioning Guide, Section 3—Creating Protection Groups; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide, Section 4—Configuring the Network; TransAccess 200 Mux User Guide

11. If necessary, configure equipment, cards, and interfaces.
Reference: Traverse Provisioning Guide, Section 2—Configuring TDM Equipment; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide; TransAccess 200 Mux User Guide

12. Add peripheral equipment to the user interface and configure the equipment.
Reference: Traverse Provisioning Guide, Section 2—Configuring TDM Equipment, Chapter 4—“Creating a TransAccess 200 Mux,” page 2-43

13. Create services or other applications.
Reference: Traverse Provisioning Guide; TraverseEdge 50 User Guide; TraverseEdge 100 User Guide; TransAccess 200 Mux User Guide



Section 2: Management System Planning

Chapter 3
IP Address Planning

Introduction
This chapter includes the following information on creating and managing a network
using the TransNav management system:
• IP Addresses in a TransNav Network, page 2-9
• IP Addressing Guidelines, page 2-11
• Quality of Service, page 2-13
• Proxy ARP, page 2-14
• In-Band Management with Static Routes, page 2-15
• In-Band Management with Router and Static Routes, page 2-16
• In-Band Management of CPEs Over EOP Links, page 2-17
• Out-of-Band Management with Static Routes, page 2-19

IP Addresses in a TransNav Network
The network management model (in-band or out-of-band) determines the IP address requirements of the network. A TransNav-managed network requires a minimum of two separate IP network addresses:
• The IP address assigned to the Ethernet interface on the back of the shelf (bp-dcn-ip) determines the physical network.
• The IP address assigned to the node (node-ip) is used by the management server to manage the network.


Assign the relevant IP addresses through the CLI during node commissioning.
Table 2-5 IP Address Node Connectivity Parameters

node-id
Required on every node. A user-defined name of the node. Enter alphanumeric characters only; do not use punctuation, spaces, or special characters.
Turin recommendation: Use the site name or location.

node-ip
Required on every node. This parameter specifies the IP address of the node. This address is also known as the Router ID in a data network environment.
In a non-proxy network, Turin recommends that this address be the same as the bp-dcn-ip; if it is not equal to the bp-dcn-ip, it must be on a different IP network. Turin recommends that the node-ips for all nodes in one network be on the same IP network.
In a proxy network, the node-ips for all nodes in one network must be on the same IP network. This IP address has the following characteristics:
• For the proxy node, proxy-arp is enabled; the bp-dcn-ip and the node-ip must be the same IP address.
• For the other nodes in the proxy network, the node-ip must be in the same subnetwork as the bp-dcn-ip address of the proxy node.
Turin recommendation: 10.100.100.x, where x is between 1 and 254; use a unique number for each network node. In a proxy network, depends on the network plan and site practices.

bp-dcn-ip
Required on each node that is connected or routed to the management server, or on any node with a subtended device. This parameter specifies the IP address assigned to the Ethernet interface on the back of the node.
In a non-proxy network, Turin recommends that this address be the same as the node-ip; if it is not equal to the node-ip, it must be on a different IP network. Enter an IP address if this node is connected to the management server (either directly or through a router) or to a TransAccess product.
In a proxy network, on the proxy node the bp-dcn-ip and the node-ip must be the same IP address.
Turin recommendation: Use a different subnet for each site. In a proxy network, depends on the network plan and site practices.

bp-dcn-mask
Required for each bp-dcn-ip. Enter the appropriate address mask of the bp-dcn-ip address.
Turin recommendation: Depends on site practices.

bp-dcn-gw-ip
Required for each bp-dcn-ip. If the node is connected directly to the management server, this address is the IP gateway of the management server. If there is a router between the management server and this node, this address is the IP address of the port on the router connected to the Ethernet interface on the back of the Traverse node.
Turin recommendation: Depends on site practices.

ems-ip
Required if there is a router between this node and the management server. This address is the IP address of the TransNav management server. This IP address must be on a separate network from any node-ip and gcm-{a | b}-ip.
For in-band management, this address must be on, or routed to, the same network as the bp-dcn-ip of the management gateway node (the node with the physical connection to the management server). For out-of-band management, this address must be connected or routed to all bp-dcn-ip addresses.
Turin recommendation: Depends on site practices.

ems-gw-ip
Required for each ems-ip. This address is the IP address of the port on the router connected to the Ethernet interface on the back of the Traverse shelf. This address is the same address as bp-dcn-gw-ip.
Turin recommendation: Depends on site practices.

ems-mask
Required for each ems-ip. The address mask of the IP address on the management server (ems-ip).
Turin recommendation: Depends on site practices.

proxy-arp
Required on the node acting as proxy server for the IP subnet. Enable this parameter if this node is to be used as the proxy server for the IP subnet. The bp-dcn-ip and the node-ip of the proxy node must be the same IP address. Once you plan the network with one node as the proxy, you cannot arbitrarily re-assign another node to be the proxy ARP server.
Turin recommendation: Depends on the network plan and site practices.

IP Addressing Guidelines

IP Networks and Proxy ARP
On the proxy node:
• The Proxy ARP parameter must be enabled on the management gateway node. In Map View, click a node, click the Config tab, and change the value in Proxy ARP to enabled.
• The bp-dcn-ip and the node-ip of the proxy node must be the same IP address.
In a proxy network, all of the node-ip addresses must be in the same subnetwork as the bp-dcn-ip of the proxy node. Once you plan the network with one node as the proxy, you cannot arbitrarily re-assign another node to be the proxy ARP server. These rules lend themselves to a quick sanity check, as the sketch below shows.
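The following Python sketch is a planning aid only; it is not part of the TransNav software, and the helper name is hypothetical. It uses the standard ipaddress module to check that every node-ip in a proxy network falls inside the proxy node's backplane DCN subnet.

```python
import ipaddress

def check_proxy_plan(proxy_bp_dcn_ip, bp_dcn_mask, node_ips):
    """Return a list of violations of the proxy addressing rules
    (an empty list means the plan satisfies the subnet rule)."""
    # On the proxy node the node-ip must equal the bp-dcn-ip, so the
    # DCN subnet is derived directly from that shared address.
    subnet = ipaddress.ip_network(
        f"{proxy_bp_dcn_ip}/{bp_dcn_mask}", strict=False)
    problems = []
    for node_id, node_ip in node_ips.items():
        if ipaddress.ip_address(node_ip) not in subnet:
            problems.append(
                f"{node_id}: node-ip {node_ip} is outside {subnet}")
    return problems

# Addresses matching the proxy example later in this chapter (Figure 2-3):
plan = {"Node1": "172.140.0.2", "Node2": "172.140.0.3",
        "Node3": "172.140.0.4", "NodeA": "172.140.0.5"}
print(check_proxy_plan("172.140.0.2", "255.255.255.0", plan))  # -> []
```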

In-Band Management with Static Routes
General guidelines to assign IP addresses in a TransNav network managed in-band with static routes are:
• Turin recommends that all node-ip addresses be in a physically non-existent (virtual) IP network.
• For the node connected to the management server (either directly or through a router), all IP addresses provisioned on the node MUST be in separate networks.
• For all other nodes in the network, the node-id and the node-ip are the only required commissioning parameters.
• The management server must be able to communicate with all node-ip addresses:
– Add routes to the management server using the node-ip, the address mask of the bp-dcn-ip, and the bp-dcn-ip of the node that is connected to the management server (see the sketch below).
– The IP address of the management server must be on, or routed to, the same network as the bp-dcn-ip of the management gateway node.
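As a concrete illustration of that rule, this short Python sketch (a planning aid, not a Turin-supplied tool) prints one destination/mask/gateway triplet per node-ip, with the management gateway node's bp-dcn-ip as the gateway. Translate each line into the route-add command appropriate for your server platform.

```python
# Static routes the EMS server needs for in-band management:
# destination = node-ip, mask = the bp-dcn mask, gateway = the
# bp-dcn-ip of the management gateway node (MGN).
MGN_BP_DCN_IP = "172.168.0.2"   # example value from Figure 2-4
BP_DCN_MASK = "255.255.255.0"
NODE_IPS = ["10.100.100.1", "10.100.100.2", "10.100.100.3",
            "10.100.100.4", "10.100.100.5", "10.100.100.6"]

for node_ip in NODE_IPS:
    print(f"{node_ip}  {BP_DCN_MASK}  {MGN_BP_DCN_IP}")
```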

Out-of-Band Management with Static Routes
General guidelines to assign IP addresses in a TransNav network managed out-of-band with static routes are:
• Turin recommends that all node-ip addresses be in a physically non-existent (virtual) IP network.
• Each node is connected to the management server through an IP network. All IP addresses provisioned on one node are in separate networks.
• The management server must be able to communicate with all node-ip addresses:
– Add routes using the node-ip, the address mask of the bp-dcn-ip, and the IP address of the port on the router that is connected to the management server.
– The IP address of the management server must be connected or routed to all bp-dcn-ip addresses.

Out-of-Band Management with No DCC Connectivity
If there is no DCC connectivity between individual nodes, each node must still communicate with the node-ip of the other nodes in the network. In this case, create routes at the relevant IP routers for all node-ips in the network.

TraverseEdge 50 and TransAccess Mux
The node to which the TraverseEdge 50 or TransAccess Mux is connected must have the backplane IP address information provisioned:
• bp-dcn-ip: For in-band management, this address must be in a separate network from the bp-dcn-ip of the node that is connected to the management server.
• bp-dcn-gw-ip: This address is in the same subnetwork as the bp-dcn-ip of this node.
• bp-dcn-mask: The address mask of the bp-dcn-ip of this node.
The IP address of the TransAccess Mux has the following characteristics:
• IP address: This IP address can be on the same subnetwork as the node bp-dcn-ip.
• Gateway: This IP address is the bp-dcn-ip of the node.
• Mask: This mask is the address mask of the bp-dcn-ip of the node.
• Trap-1: This address is the bp-dcn-ip of the node to which it is connected.
These settings are fully determined by the host node's backplane addressing, as the sketch below illustrates.
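A minimal Python sketch of that derivation (illustrative only; the function and parameter names mirror the documentation rather than any real configuration API, and the Mux IP itself is assumed to be chosen by the planner from the node's bp-dcn subnet):

```python
def transaccess_settings(node_bp_dcn_ip, node_bp_dcn_mask, mux_ip):
    """Derive TransAccess Mux management settings from the host
    node's backplane DCN addressing, per the bullets above."""
    return {
        "IP": mux_ip,               # on the same subnetwork as bp-dcn-ip
        "Gateway": node_bp_dcn_ip,  # the node's bp-dcn-ip
        "Mask": node_bp_dcn_mask,   # the node's bp-dcn-mask
        "Trap-1": node_bp_dcn_ip,   # traps are sent to the attached node
    }

# Example values from the in-band example (Figure 2-4, Node 2):
print(transaccess_settings("172.168.1.2", "255.255.255.0", "172.168.1.3"))
```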


Quality of Service
The IP QoS (IP Quality of Service) routing protocol enables a Traverse node to broadcast its forwarding table over the backplane for the data control network (bp-dcn-ip), thus improving the quality of service over the backplane DCN Ethernet interface. Setting up static routes on intermediate routers between the Traverse management gateway element and the TransNav management server is no longer necessary. Existing traffic engineering and security capabilities are not changed.
When IP QoS is enabled on the management gateway node during commissioning, an access control list (ACL) can be configured to block or allow traffic originated by specific IP hosts or networks. Received packets are filtered, classified, metered, and put in a queue for forwarding.
The ACL searches received packets for the longest prefix match on the source IP address. When a matching entry is found, the packet is dropped or forwarded according to the ACL setting (permit or deny). If no matching entry is present in the ACL, the packet is forwarded.
Outgoing IP packets are prioritized as either High Priority or Best Effort and put in queues for forwarding. The queue size for outgoing packets is set as a percentage of the available bandwidth.
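The ACL decision just described is longest-prefix matching on the source address, with a default of forwarding when no entry matches. A Python sketch of that logic follows (illustrative only; the on-node ACL is configured through the management interfaces, not through code like this):

```python
import ipaddress

def acl_decision(src_ip, acl):
    """acl: list of (network_str, 'permit' | 'deny') entries.
    Find the most specific entry covering the source address."""
    src = ipaddress.ip_address(src_ip)
    best = None
    for net_str, action in acl:
        net = ipaddress.ip_network(net_str)
        if src in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, action)
    # No matching entry: the packet is forwarded.
    return "forward" if best is None or best[1] == "permit" else "drop"

acl = [("10.100.0.0/16", "deny"), ("10.100.100.0/24", "permit")]
print(acl_decision("10.100.100.7", acl))  # /24 permit wins -> forward
print(acl_decision("10.100.50.1", acl))   # only /16 deny matches -> drop
print(acl_decision("192.0.2.1", acl))     # no match -> forward
```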

[Figure 2-2 IP Quality of Service: the EMS server reaches the Traverse network through an IP network; IP QoS is enabled on the management gateway node's DCN port (Port IP A).]

See the TransNav Management System GUI Guide, Chapter 1—“Creating and
Deleting Equipment Using Preprovisioning,” Node Parameters, page 3-3 for detailed
information about setting up IP Quality of Service in a TransNav-managed network.


Proxy ARP
Proxy address resolution protocol (ARP) is the technique in which one host, usually a router, answers ARP requests intended for another machine. By faking its identity, the router accepts responsibility for routing packets to the real destination. Using proxy ARP in a network helps machines on one subnet reach remote subnets without configuring routing or a default gateway. Proxy ARP is defined in RFC 1027.

[Figure 2-3 Traverse Node Enabled as a Proxy ARP Server: the EMS server (172.168.0.2) reaches proxy node Node1 (node-ip = bp-dcn-ip = 172.140.0.2, Proxy ARP enabled) through an IP network; the subtending Traverse and TE-100 nodes all have node-ips on the same 172.140.0.x network, and a TransAccess Mux subtends Node2 on the 172.182.1.x network.]

In this example network, the EMS server communicates through an IP network to Node 1. Node 1 (the proxy node) learns all the IP addresses of the nodes in the subtending network and takes responsibility for routing packets to and from the correct destinations.
The EMS server keeps the IP-to-network-address mapping found in the reply in a local cache and uses it for later communication with the nodes. The proxy node can proxy addresses for any Traverse node, TraverseEdge node, or TransAccess Mux equipment connected to it.
In a proxy network, all of the node-ip addresses must be in the same subnetwork as the bp-dcn-ip of the proxy node. On the proxy node, the Proxy ARP parameter is enabled, and the bp-dcn-ip and the node-ip must be the same IP address. Once you plan the network with one node as the proxy, you cannot arbitrarily re-assign another node to be the proxy ARP server.


In-Band Management with Static Routes
In-band management with static routes means the management server is directly connected by static route to one node (called the management gateway node), and the data communications channel (DCC) carries the control and management data.
In this simple example, the TransNav management server (EMS server) is connected to a management gateway node (Node 1) using the Ethernet interface on the back of the shelf. The server communicates with the other nodes in-band using the DCC.

[Figure 2-4 TransNav Management System In-Band Management: the EMS server (172.168.0.10, gateway 172.168.0.1) carries a route to each node-ip (10.100.100.1 through 10.100.100.6, mask 255.255.255.0) via the bp-dcn-ip of Node1 (172.168.0.2); Node2 (bp-dcn-ip 172.168.1.2) connects a subtending TransAccess Mux (172.168.1.3), and Node3 subtends a TE-100 network managed in-band.]

In this example, to get the management server to communicate with all nodes, add routes on the server to the node-ip of each node. The server communicates with the nodes using the bp-dcn-ip of the management gateway node (Node 1). Note that all IP addresses on Node 1 (node-ip and bp-dcn-ip) are in separate networks.
Node 2 has a subtending TransAccess Mux (either a TA155 or a TA200) connected by Ethernet. The bp-dcn-ip address is necessary to connect the TransAccess system. The bp-dcn-ip of this node must be in a separate network from the bp-dcn-ip on Node 1.
At Node 3, the node-id and the node-ip are the only required commissioning parameters. However, Node 3 also has a subtending TraverseEdge 100 network managed in-band through the management gateway node. The IP address requirements are the same as for the Traverse platform.
See the topic IP Addresses in a TransNav Network, page 2-9 for detailed information
about assigning IP addresses in a TransNav-managed network.


In-Band Management with Router and Static Routes
In this example, the management server is connected by static route to a router that, in turn, is connected to the management gateway node (Node 1). The server communicates with the other nodes in-band using the DCC.
[Figure 2-5 In-Band Management with Router and Static Routes: the EMS server (172.169.0.10) carries a route to each node-ip via router port A (172.169.0.1); the router carries a route to each node-ip via the bp-dcn-ip of Node1 (172.168.0.2); Node1 is commissioned with ems-ip 172.169.0.10 and ems-gw-ip 172.168.0.1, and Node2 connects a subtending TransAccess Mux (172.168.1.3).]

In this example, to get the management server to communicate with each node, add routes on the server to the node-ip of each node. The gateway through which the management server communicates with the nodes is the IP address of the port on the router connected to the server.
At the router, add the routes for each node-ip using the gateway bp-dcn-ip of the
management gateway node (Node 1).
See the topic IP Addresses in a TransNav Network, page 2-9 for detailed information
about assigning IP addresses in a TransNav-managed network.


In-Band Management of CPEs Over EOP Links
In this example, the management server is connected by static route to a router that, in turn, is connected to the management gateway node (Node 1). The server communicates with the other nodes in-band using the DCC, including the node that has CPE devices attached (Node 3). The IP packets from CPE devices are forwarded through the node over electrical cards to EOP links on the EoPDH cards, and then through the Ethernet Control Channel interface (ECCI) for forwarding over the system by Traverse Ethernet services.

[Figure 2-6 In-Band Management of CPEs Over EOP Links: the EMS server carries routes for the Traverse network (10.100.100.0/24) and the CPE networks (192.168.0.0/16) via router port A (172.169.0.1); CPE devices (192.168.20.x and 192.168.30.x) attach to Node3 through EoPDH cards in Slots 5 and 8, with per-slot ecci-gw-ip/ecci-gw-mask entries (192.168.20.1/255.255.255.0 and 192.168.30.1/255.255.255.0) entered on the GCM to route packets to each slot.]

In the above example, add routes on the management server to communicate with the node-ip of the nodes that have CPEs attached. This allows IP packets from the CPEs to be transmitted over the Traverse system. The server communicates with all the nodes over a static route using the bp-dcn-ip of the management gateway node (Node 1).
At Node 3, the node-id and node-ip are required commissioning parameters, as are the CPE-ips of each CPE device. A default ECC interface gateway IP address (ecci-gw-ip) must also be configured on each CPE device to allow all IP packets to be sent through the electrical card to the ECC interface on the node. Node 3 must have an EoPDH card with an EOP port set up. Each EOP port is a member port on the ECC interface. The VLAN tag of each ECCI member port corresponds to the management VLAN of the attached CPE device, thus providing the interface between the CPEs and the management system using an ECC interface.


The EoPDH cards are connected by EOP links through the electrical cards to the CPEs
as shown below.

Figure 2-7 Connecting CPEs through EOP Links

See the topic IP Addresses in a TransNav Network, page 2-9 for detailed information
about assigning IP addresses in a TransNav-managed network.


Out-of-Band Management with Static Routes
Out-of-band management with static routes means that the management server is directly connected by static route to each node by the Ethernet interface on the back of each shelf. In this example, the management server communicates with each node directly or through a router.
[Figure 2-8 TransNav Management System Out-of-Band Management: the EMS server (172.168.0.2, gateway 172.168.0.1) reaches Node1 directly via its bp-dcn-ip (172.168.0.3) and reaches Node2 (bp-dcn-ip 172.171.0.2) and Node3 (bp-dcn-ip 172.182.0.2) through routers, each of which carries a route to the corresponding node-ip; a TransAccess Mux (172.171.0.3) subtends Node2 on the same network as its bp-dcn-ip.]

Add a route to the management server using the bp-dcn-ip of Node 1. Add separate
routes to the node-ip of Node 2 and Node 3 using the IP address of the port on the
router connected to the server (Port IP A) as the gateway address.
At each router in the network, an administrator must add a route to the node-ip of the
nodes.
At Node 2, the bp-dcn-ip can be in the same network as the TransAccess Mux
connected to it.
See the topic IP Addresses in a TransNav Network, page 2-9 for detailed information
about assigning IP addresses in a TransNav-managed network.



Section 2: Management System Planning

Chapter 4
Network Time Protocol (NTP) Sources

Introduction
This chapter includes the following information on managing a Traverse network:
• NTP Sources in a Traverse Network, page 2-21
• NTP Sources on a Ring Topology, page 2-22
• NTP Sources on a Linear Chain Topology, page 2-22

NTP Sources in a Traverse Network
Network Time Protocol (NTP) provides an accurate time-of-day stamp for performance monitoring and for alarm and event logs. Turin recommends using the TransNav management system server as the primary NTP source if you do not already have an NTP source defined. If no primary NTP source is configured, the TransNav system defaults to the TransNav server as the primary NTP source. A secondary NTP IP server address is optional.
Depending on the topology, configure a primary NTP source and a secondary NTP source for each node in a network:
• For ring topologies, see NTP Sources on a Ring Topology, page 2-22.
• For linear chain topologies, see NTP Sources on a Linear Chain Topology, page 2-22.

Daylight Saving Time
As part of a United States federal energy conservation effort, Daylight Saving Time (DST) starts three weeks earlier and ends one week later than in years prior to 2007. Certain telecommunications products contain the ability to synchronize to a network clock or automatically change their time stamp to reflect time changes. Each device may handle the recent change in DST differently.
All dates displayed in the TransNav management system CLI for alarms, upgrade times, events, and performance monitoring (PM) include the new DST as of Release TN3.1.x. The TraverseEdge 100 system CLI includes the new DST as of Release TE3.2.


NTP Sources on a Ring Topology
Turin recommends using the adjacent nodes as the primary and secondary NTP sources in a ring configuration. Use the management gateway node (MGN) or the node closest to the MGN as the primary source and the other adjacent node as the secondary source. The following example shows NTP sources in a ring topology.

[Figure 2-9 NTP Sources on a Ring Topology: Node 1, the management gateway node, uses the management server as its primary NTP server; Node 2 uses Node 1 (NTP1) and Node 3 (NTP2); Node 3 uses Node 2 (NTP1) and Node 4 (NTP2); Node 4 uses Node 3 (NTP1) and Node 1 (NTP2).]

In the above example, the MGN selects the management server as the primary NTP
server and does not select a secondary server. At Node 2, you would configure the
primary server as Node 1 (the MGN), and the secondary server as Node 3.

NTP Sources on a Linear Chain Topology
On a linear chain topology, Turin recommends using the upstream node as the primary NTP source and the management server as the secondary NTP source.
In the following example, Node 1 (the MGN) selects the management server as the primary NTP server and does not select a secondary server. At Node 2, you would configure Node 1 as the primary NTP server and the management server as the secondary source.

[Figure 2-10 NTP Sources on a Linear Chain Topology: Node 1, the management gateway node, uses the management server as its primary NTP server; Nodes 2, 3, and 4 each use the upstream node (Node 1, 2, and 3 respectively) as NTP1 and the management server as NTP2.]
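Both recommendations reduce to a simple assignment rule. The Python sketch below is a planning aid only (a hypothetical helper, not part of the TransNav software): the MGN takes the management server as its primary source with no secondary; on a ring, every other node takes its two neighbors; on a chain, every other node takes its upstream neighbor plus the management server.

```python
def ntp_sources(nodes, topology, server="Management Server"):
    """Assign (primary, secondary) NTP sources per the
    recommendations above. nodes[0] is the MGN."""
    plan = {nodes[0]: (server, None)}  # MGN: server primary, no secondary
    for i in range(1, len(nodes)):
        if topology == "ring":
            # Neighbors: the node nearer the MGN is the primary source.
            plan[nodes[i]] = (nodes[i - 1], nodes[(i + 1) % len(nodes)])
        else:  # linear chain
            plan[nodes[i]] = (nodes[i - 1], server)
    return plan

ring = ntp_sources(["Node 1", "Node 2", "Node 3", "Node 4"], "ring")
print(ring["Node 4"])   # ('Node 3', 'Node 1'), matching Figure 2-9

chain = ntp_sources(["Node 1", "Node 2", "Node 3", "Node 4"], "chain")
print(chain["Node 4"])  # ('Node 3', 'Management Server'), per Figure 2-10
```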



Index

A
Access groups, see Role-based Access Control
Accounting data, basis, 1-10
Administration
  data collection, 1-11
  nodes, 1-10
  reports, 1-11
Alarms
  GUI windows, 1-7
  node group, 1-7, 1-15
Auto-discovery, intelligent control plane, 1-8

C
CLI, commands description, 1-16
Configuration management
  equipment, 1-8
  multiple servers, 1-9
  preprovisioning, 1-9
  service provisioning, 1-9
Control, RBAC, see Role-based Access Control
Control module
  preprovisioning, 1-9
  remote restore, 1-11

D
Dataset snapshots, 1-12
Daylight Saving Time, support, 2-23
Domain security, see Role-based Access Control

E
Event management, 1-7

F
Fault management, 1-7

G
Graphical user interface
  description, 1-14
  fault and event management, 1-7
  hardware requirements, 2-6
  menu bar, 1-14
  performance management, 1-10
  shelf view, 1-15
  software requirements, 2-6
  views: map view, 1-14; navigation tree, 1-15; network map, 1-14
GUI, see Graphical user interface

H
Hardware requirements
  GUI application, 2-6
  Sun Solaris server, 2-3
  Windows, 2-5

I
Intelligent control plane
  auto-discovery, 1-8
  connectivity: node, 1-3; service, 1-11
Interoperability, third party management systems
  SNMP traps, 1-4
  TL1 interface, 1-4
IP address, requirements, 2-11

M
Management plane, equipment configuration, 1-8
Management server
  primary, 1-3, 2-2
  secondary, 1-3, 2-2
Management system
  dataset snapshots, 1-12
  fault management, 1-7
  hardware requirements: GUI application, 2-6; Sun Solaris server, 2-3; Windows, 2-5
  reports, 1-11
  security, Role-based Access Control, 1-10
  server software requirements: GUI application, 2-6; Sun Solaris, 2-3; Windows, 2-5
  software components, 1-1
Map view
  group map, 1-14
  network map, 1-14
MaxNoOfUserSessions, see Server parameter

N
Navigation tree, GUI, 1-15
Network planning
  creation process, 2-9
  IP addresses, 2-11, 2-13
  NTP sources, 2-23
Node security, see Role-based Access Control

P
Primary server, see Servers
Proxy ARP, 2-16

R
Report types, 1-11
Reports, dataset snapshots, 1-12
Role-based Access Control
  access groups, 1-10
  functional groups, 1-3, 1-10
  security: domain, 1-10; node, 1-10; server, 1-10

S
Scalability, see System
Secondary server, see Servers
Security management, see Role-based Access Control
Server parameter, MaxNoOfUserSessions, 1-4
Servers
  function: primary, 1-9; secondary, 1-9
  import time, 1-9
  multiple, 1-9
Shelf view, GUI, 1-15
Software requirements
  GUI application, 2-6
  Sun Solaris server, 2-3
  Windows, 2-5
System
  interoperability, 1-4
  scalability, 1-4
  simultaneous users, 1-4

T
TL1 interface, description, 1-17

U
Users
  simultaneous, 1-4
  MaxNoOfUserSessions, 1-4


Visit our website at:
www.turinnetworks.com

Release TN4.2.x
TransNav Management System
Documentation
800-0005-TN42
