R20.0 IQ NOS Overview Guide
Operating System
Overview Guide
Release 20.0
Version 001
Document ID 1900-001600
Infinera Corporation
140 Caspian Court
Sunnyvale, California 94089
www.infinera.com
- Please refer to the Infinera Customer Web Portal for the most recent version of this document. -
Copyright
Copyright © 2019 Infinera Corporation. All rights reserved.
This Manual is the property of Infinera Corporation and is confidential. No part of this Manual may be reproduced for any purposes
or transmitted in any form to any third party without the express written consent of Infinera.
Infinera makes no warranties or representations, expressed or implied, of any kind relative to the information or any portion thereof
contained in this Manual or its adaptation or use, and assumes no responsibility or liability of any kind, including, but not limited to,
indirect, special, consequential or incidental damages, (1) for any errors or inaccuracies contained in the information or (2) arising
from the adaptation or use of the information or any portion thereof including any application of software referenced or utilized in the
Manual. The information in this Manual is subject to change without notice.
Trademarks
Infinera, Infinera Intelligent Transport Networks, IQ NOS, FlexILS, DTN-X, DTN, ATN, FastSMP, FlexCoherent, What the Network
Will Be, iWDM, Enlighten and logos that contain Infinera are trademarks or registered trademarks of Infinera Corporation in the
United States and other countries.
All other trademarks in this Manual are the property of their respective owners.
Infinera DTN-X, DTN, FlexILS, Cloud Xpress, XT, and ATN Regulatory Compliance
FCC Class A
This device complies with Part 15 of the FCC rules. Operation is subject to the following two conditions: (1) this device may not
cause harmful interference, and (2) this device must accept any interference received, including interference that may cause
undesired operation. Modifying the equipment without Infinera's written authorization may result in the equipment no longer
complying with FCC requirements for Class A digital devices. In that event, your right to use the equipment may be limited by FCC
regulations, and you may be required to correct any interference to radio or television communications at your own expense.
DOC Class A
This digital apparatus does not exceed the Class A limits for radio noise emissions from digital apparatus as set out in the
interference-causing equipment standard titled “Digital Apparatus,” ICES-003 of the Department of Communications.
Cet appareil numérique respecte les limites de bruits radioélectriques applicables aux appareils numériques de Classe A prescrites
dans la norme sur le matériel brouilleur: "Appareils Numériques," NMB-003 édictée par le Ministère des Communications.
Class A
This is a Class A product based on the standard of the VCCI Council. If this equipment is used in a domestic environment, radio
interference may occur, in which case, the user may be required to take corrective actions.
Warning
This is a class A product. In a domestic environment this product may cause radio interference in which case the user may be
required to take adequate measures.
FDA
This product complies with the DHHS Rules 21CFR 1040.10 and 1040.11, except for deviations pursuant to Laser Notice No. 50,
dated June 24, 2007.
Contents
About this Document ... 17
    Objective ... 18
    Audience ... 19
    Document Organization ... 20
    Documents for Release 20.0 ... 21
    Conventions ... 25
    Technical Assistance ... 26
    Documentation Feedback ... 27
Objective
This guide provides an introduction and reference to the Infinera IQ Network Operating System, which runs
on the DTN-X, DTN, Optical Amplifier, XT, and FlexILS nodes and enables network-wide intelligent control
and operations.
Audience
The primary audience for this guide includes network architects, network planners, network operations
personnel, and system administrators who are responsible for deploying and administering the Intelligent
Transport Network. This guide assumes that the reader is familiar with the following topics and products:
■ Basic inter-networking terminology and concepts
■ Dense Wavelength Division Multiplexing (DWDM) technology and concepts
Document Organization
The following list describes each chapter in this guide.
■ Introduction—Provides an introduction to the Infinera IQ Network Operating System. This chapter also includes a list of hardware and software features.
■ Configuration and Management on page 3-1—Provides an overview of the extensive equipment inventory, management, and configuration capabilities supported by IQ NOS.
■ Service Provisioning on page 4-1—Provides an overview of the service provisioning capabilities of IQ NOS network elements that allow users to engineer user traffic data transport routes.
■ Performance Monitoring and Management on page 5-1—Provides an overview of the performance monitoring capabilities of IQ NOS network elements.
■ Security and Access Management on page 6-1—Provides an overview of the user management and security features of IQ NOS network elements.
■ Software Configuration Management on page 7-1—Provides an overview of IQ NOS software and database image management.
■ IQ NOS GMPLS Control Plane Overview on page 8-1—Provides an overview of the GMPLS control plane architecture that enables automated end-to-end management of transport capacity across the Infinera Intelligent Transport Network.
■ IQ NOS Management Plane Overview on page 9-1—Provides an overview of the management plane communications path for IQ NOS network elements.
■ DTN-X Service Capabilities—Lists the service provisioning and diagnostic capabilities for each service type supported by the DTN-X.
■ XT Service Capabilities on page B-1—Lists the service provisioning and diagnostic capabilities for each service type supported by the XT(S)-3300 and XT(S)-3600.
Conventions
The table below lists the conventions used in this guide.
Technical Assistance
Customer Support for Infinera products is available 24 hours a day, 7 days a week (24x7). For
information or assistance with Infinera products, contact the Infinera Technical Assistance Center
(TAC) using any of the methods listed below:
■ Email: [email protected]
■ Telephone:
□ Direct within United States: 1-408-572-5288
□ Outside North America: +1-408-572-5288
□ Toll-free within United States: +1-877-INF-5288 (+1-877-463-5288)
□ Toll-free within Germany/France/Benelux/United Kingdom: 00-800-4634-6372
□ Toll-free within Japan: 010-800-4634-6372
■ Infinera corporate website: http://www.infinera.com
■ Infinera Customer Web Portal: https://support.infinera.com
Please see the Infinera Customer Web Portal to view technical support policies and procedures, to
download software updates and product documentation, or to create/update incident reports and
RMA requests.
Documentation Feedback
Infinera strives to constantly improve the quality of its products and documentation. Please submit
comments or suggestions regarding Infinera Technical Product Documentation using any of the following
methods:
■ Submit a service request using the Infinera Customer Web Portal
■ Send email to: [email protected]
■ Send mail to the following address:
Attention: Infinera Technical Documentation and Technical Training
Infinera Corporation
140 Caspian Court
Sunnyvale, CA 94089
When submitting comments, please include the following information:
■ Document name and document ID written on the document cover page
■ Document release number and version written on the document cover page
■ Page number(s) in the document on which there are comments
Introduction
The Infinera Intelligent Transport Network architecture includes intelligent embedded control software
called the IQ Network Operating System (IQ NOS), which operates on the DTN-X, DTN, Optical Amplifier, XT, and
FlexILS nodes. The IQ NOS software provides reliable and intelligent interfaces for the Operation,
Administration, Maintenance, and Provisioning (OAM&P) tasks performed by operating personnel and
management systems. IQ NOS also includes an intelligent Generalized Multiprotocol Label Switching
(GMPLS) control plane architecture, which provides automated end-to-end service provisioning, and a
management plane architecture, which provides reliable and redundant communication paths for
management traffic between the management systems and the network elements.
IQ NOS supports the following features:
■ Operates on DTN-X, DTN, Optical Amplifier, XT, and FlexILS nodes
■ Standards-based operations and information model (ITU-T, TMF 814, Telcordia).
■ Extensive fault management capabilities including current alarm reporting, alarm reporting
inhibition, hierarchical alarm correlation, configurable alarm severity assignment profile, event
logging, environmental alarms, and export of alarm and event data.
■ Network diagnostics capabilities including digital path and digital section level loopbacks, client-side
loopbacks, circuit-level pseudo-random binary sequence (PRBS31) generation and detection, and trail
trace identifier (TTI) and synchronous optical network (SONET)/synchronous digital hierarchy (SDH) J0
monitoring and insertion at the tributaries.
■ Automatic equipment provisioning and equipment pre-provisioning.
■ Fully automated network topology discovery including physical topology and service topology
views.
■ Robust end-to-end automated circuit routing and provisioning utilizing GMPLS routing and signaling
protocols. Highlights of this feature include the ability to pre-configure circuits, optional selection of
SNC path utilizing constraint based routing, and optional specification of the channel/sub-channel
number within an optical carrier group (OCG) for a subnetwork connection (SNC).
■ Flexible software and configuration database management including remote software upgrade/
rollback, configuration database backup and restore, and bulk File Transfer Protocol (FTP)
transfers.
■ Analog performance monitoring at every node, digital performance monitoring at DTNs and DTN-Xs,
and native client signal performance monitoring at tributaries.
■ Supports Network Time Protocol (NTP) to synchronize the timestamps on all alarms, events and
performance monitoring (PM) data across the network.
■ GR-815-CORE based security administration and support for Remote Authentication Dial-In User
Service (RADIUS).
■ Hitless software upgrades.
■ Multi-chassis configurations utilizing the nodal control ports (NC ports or NCT ports, depending on
the chassis type).
■ Redundant control plane communication paths utilizing two control modules.
■ Redundant management plane communication paths utilizing Gateway Network Element (GNE)
and Management Proxy services.
■ Telcordia compliant TL1 for operations support system (OSS) integration.
■ Open integration interfaces including the TL1 interface and CSV formatted flat files that can be
exported using secure FTP.
Fault Management
IQ NOS provides extensive fault monitoring and management capabilities that are modeled after
Telcordia and ITU standards. All these capabilities are independent of the client signal payload type and
provide the ability to identify, correlate and correct faults based on actual digital and optical performance
indicators, leading to quicker problem resolution. Additionally, IQ NOS communicates all state and status
information of the network element automatically and asynchronously to the other network elements
within the Intelligent Transport Network and to all the registered management applications, thus
maintaining synchrony within the network.
IQ NOS provides the following fault management capabilities to help users in managing and maintaining
the network element:
■ Alarm Surveillance on page 2-2
■ Automatic Laser Shutdown (ALS) on page 2-11
■ Optical Layer Defect Propagation (OLDP) on page 2-16
■ Optical Loss of Signal (OLOS) Soak Timers on page 2-18
■ Software Controlled Power Reduction on page 2-23
■ Optical Ground Wire (OPGW) on page 2-24
■ Electronic Equalizer Gain Control Loop on page 2-25
■ Event Log on page 2-26
■ Maintenance and Troubleshooting Tools on page 2-27
■ Syslog on page 2-70
Alarm Surveillance
Alarm Surveillance functions include:
■ Detection of defects in the Infinera network elements and the incoming signals (see Defect
Detection on page 2-2).
■ Declaration of defects as failures (see Failure Declaration on page 2-2).
■ Reporting failures as alarms to the management applications (see Alarm Reporting on page 2-3).
■ Masking low level or lower order alarms in the presence of high level or higher order alarms (see
Alarm Masking on page 2-5).
■ Reporting alarms through local alarm indicators (see Local Alarm Summary Indicators on page 2-6).
■ Configuring alarm reporting (see Alarm Configuration on page 2-6).
■ Isolating network faults utilizing Automatic Laser Shutdown feature (see Automatic Laser Shutdown
(ALS) on page 2-11).
■ Ability to configure the behavior of client tributaries in case the tributary is locked or faulted (see
Tributary Disable Action on page 3-41)
■ Ability to configure the encapsulated client disable action for certain TIMs and TAMs (see
Encapsulated Client Disable Action on page 3-46)
Defect Detection
IQ NOS detects and clears all hardware and software defects within the system. A defect is defined as a
limited interruption in the ability of an item to perform a required function. The detected defects are
analyzed and localized to the specific network site, network element, facility (or incoming signal) and
circuit pack. On detecting certain defects, such as defects in the incoming signal, IQ NOS transmits
maintenance signals to the upstream and downstream network elements indicating successful
localization of the defect. On termination of defects, IQ NOS stops transmitting maintenance signals. See
Automatic Laser Shutdown (ALS) on page 2-11 for more details.
The detection of facility defects, such as OLOS, AIS, BDI, etc., and transmission of maintenance signals
to the upstream and downstream network elements is in compliance with Telcordia and ITU
specifications.
Failure Declaration
Defects associated with facilities (incoming signals) are soaked for a pre-defined period before they are
declared as failures. This measure prevents spurious failures from being reported. When a defect is
detected on a facility, it is soaked for 2.5 seconds (±1 second) before the corresponding failure is
declared. Similarly, when a facility defect clears, it is soaked for 12.5 seconds (±2 seconds) before the
corresponding failure is cleared. This prevents premature clearing of the failure.
Defects associated with hardware equipment, environmental alarms, and temperature-related alarms are
not soaked. The failure condition is declared as soon as the defect is detected. Similarly, the failure
condition is cleared as soon as the defect is cleared.
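The soak behavior above can be sketched as a small state tracker. This is an illustrative model only: the class and method names are ours, not part of IQ NOS, and the intervals are the nominal values from the text.

```python
RAISE_SOAK_S = 2.5   # a facility defect must persist this long before the failure is declared
CLEAR_SOAK_S = 12.5  # a cleared defect must stay clear this long before the failure is cleared

class FacilityDefectSoaker:
    """Tracks one facility defect and declares/clears its failure after soaking.

    Hardware, environmental, and temperature-related defects are not soaked,
    so they would bypass this logic entirely.
    """
    def __init__(self):
        self.defect_present = False
        self.failure_declared = False
        self.transition_time = None  # timestamp of the last defect raise/clear

    def on_defect(self, present: bool, now: float) -> None:
        """Record a raise or clear of the underlying defect."""
        if present != self.defect_present:
            self.defect_present = present
            self.transition_time = now

    def poll(self, now: float) -> bool:
        """Re-evaluate the soak timers; return True if a failure is declared."""
        if self.transition_time is not None:
            elapsed = now - self.transition_time
            if self.defect_present and not self.failure_declared and elapsed >= RAISE_SOAK_S:
                self.failure_declared = True
            elif not self.defect_present and self.failure_declared and elapsed >= CLEAR_SOAK_S:
                self.failure_declared = False
        return self.failure_declared
```

For example, a defect raised at t=0 is still not a failure at t=1, becomes one at t=3, and after clearing at t=5 the failure persists until the 12.5-second clear soak expires.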
Alarm Reporting
IQ NOS reports the hardware and software failures as alarms. Detection of a failure condition results in
an alarm being raised which is asynchronously reported to all the registered management applications.
The clearing of a failure results in clearing the corresponding alarm, which is again reported
asynchronously to all the registered management applications. IQ NOS stores the outstanding alarm
conditions locally and they are retrievable by the management applications. Thus, at any given time users
see only the current standing alarm conditions.
Alarm reporting also depends on the administrative state (see Administrative State on page 3-36) of
the managed object, the presence of other failure conditions, and the user configuration, as described
below:
■ Administrative State—Alarms are reported when the administrative state of a managed object and
its ancestor objects are unlocked. When the administrative state of an object or any of its ancestor
objects are locked or in maintenance, alarms are not reported (except for the Loopback related
alarms). IQ NOS also supports alarms that indicate when a managed object is put in the locked or
maintenance administrative state. The severity of these alarms can be customized via the ASPS
feature (see Alarm Severity Profile Setting (ASPS) on page 2-9).
■ Alarm Hierarchy—An alarm is reported only if no higher priority alarms exist for the managed
object. Thus, only alarms corresponding to the root cause of the fault condition are reported. This
capability prevents too many alarms being reported for a single fault condition (see Alarm Masking
on page 2-5).
■ User Configuration—IQ NOS provides users the ability to selectively inhibit alarm reporting (see
Alarm Reporting Control (ARC) on page 2-7).
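The three conditions above can be combined into a single reporting check. The sketch below is a simplified model under assumed names (`ManagedObject`, `arc_inhibited`, a numeric `priority`); it is not Infinera's object model, and it omits the loopback-alarm exception noted above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ManagedObject:
    name: str
    admin_state: str = "unlocked"          # "unlocked", "locked", or "maintenance"
    parent: Optional["ManagedObject"] = None
    arc_inhibited: bool = False            # True when ARC has disabled reporting

@dataclass
class Alarm:
    cause: str
    priority: int                          # higher value = closer to the root cause

def is_alarm_reportable(obj: ManagedObject, alarm: Alarm, outstanding: dict) -> bool:
    """Apply the three reporting conditions: admin state, hierarchy, ARC."""
    # 1. Administrative state: the object and every ancestor must be unlocked.
    node = obj
    while node is not None:
        if node.admin_state != "unlocked":
            return False
        node = node.parent
    # 2. Alarm hierarchy: suppress if a higher-priority alarm exists on this object.
    if any(a.priority > alarm.priority for a in outstanding.get(obj.name, [])):
        return False
    # 3. User configuration: ARC may inhibit reporting on this object.
    return not obj.arc_inhibited
```

A circuit pack alarm is thus masked by a higher-priority alarm on the same object, and suppressed entirely if its parent chassis is locked.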
IQ NOS reports each alarm with sufficient information, as described below, so that the user can take
appropriate corrective actions to clear the alarm. For a detailed description of all the parameters of alarms
reported to the management applications, refer to the GNM Fault Management and Diagnostics Guide.
■ Alarm Category—This information isolates the alarm to a functional area of the system (see Alarm
Category on page 2-4 for the list of supported alarm types).
■ Alarm Severity—This information indicates the level of degradation that the alarm causes to service
(see Alarm Severity on page 2-5 for the list of supported severities). This information is reported
within the NTFCNCDE parameter in TL1 notifications.
■ Probable Cause—This information describes the probable cause of the alarm. This is a short
description of the detected problem. A detailed description is provided as Probable Cause
Description.
■ TL1 Condition Type—This field is analogous to the probable cause, except that the condition type
string is in accordance with the GR-833-CORE standard. This information is reported within the
CONDTYPE parameter in TL1 notifications.
Alarm Category
IQ NOS categorizes the alarms into the following types:
■ Facility Alarm—Alarms associated with the line and tributary incoming signals. For example: OLOS,
LOF, and AIS.
■ Equipment Alarm—Alarms associated with hardware failures. For example: Equipment Failure, and
Equipment Unreachable.
■ Communications Alarm—Alarms associated with communication failures within the network
element and between network elements. For example: No Communication with OSC Neighbor
(LOC OSC).
■ Software Processing Alarm—Alarms associated with software processing errors. For example,
Software Upgrade Has Failed, and Persistence Space Less Than 2%-Critical.
■ Environmental Alarm—Alarms caused by the change in the state of the environmental alarm input
contact.
Alarm Severity
Each alarm, TCA, and TCC generated by IQ NOS has one of the following default severity levels:
■ Critical—Indicates that a service affecting condition has occurred and an immediate corrective
action is required. This severity is reported, for example, when a managed object is rendered out-
of-service by a failure and must be restored to operation in order to recover lost system
functionality.
■ Major—Indicates that a service affecting condition has developed and an urgent corrective action is
required. This severity is reported, for example, when there is a severe degradation in the capability
of the managed object and full capability must be restored in order to recover lost system
functionality.
■ Minor—Indicates the existence of a non-service affecting fault condition and that corrective action
should be taken in order to prevent a more serious (for example, service affecting) fault. Such a
severity is reported, for example, when the detected alarm condition is not currently degrading the
capacity of the managed object.
■ Warning—Indicates the detection of a potential or impending service affecting fault, before any
significant effects have been felt. Action should be taken to further diagnose (if necessary) and
correct the problem in order to prevent it from becoming a more serious service affecting fault.
Note: This severity level maps to the non-alarmed standing condition in TL1.
With the exception of Warning, the alarm severity levels are referred to as the notification code in
GR-833-CORE, and are reported as such in TL1 notifications.
Users can customize the severity associated with an alarm, TCA, or TCC through the management
applications (see Alarm Severity Profile Setting (ASPS) on page 2-9.)
Alarm Masking
IQ NOS provides an alarm masking feature that complies with, and extends, GR-253 Section 6.2.1.8.2
and GR-474 Section 2.2.2.1. The network element masks (suppresses) higher layer alarms associated
with the same root cause as a lower level alarm. This prevents logs and management applications from
being flooded with redundant information. Suppression is based on a logical hierarchy. For instance,
when a network element experiences an Optical Transport Section (OTS) - Optical Loss of Signal (OLOS)
failure, the network element will report the OLOS-OTS alarm, but the associated Band - OLOS, Channel -
Loss of Frame (LOF), and Band - Optical Power Received (OPR) Out of Range – Low (OORL) alarms,
and all other associated lower layer alarms, are suppressed. These conditions are still retrievable by
request.
The masked condition is neither reported to the management applications nor recorded in the alarm table.
For individual alarm descriptions and the alarm masking hierarchy, refer to the DTN and DTN-X Alarm
and Trouble Clearing Guide or the GNM Fault Management and Diagnostics Guide.
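The masking rule can be illustrated with the OTS example above. The hierarchy table below contains only that one illustrative entry with invented alarm identifiers; the product's full masking tree is documented in the guides just cited.

```python
# Illustrative masking hierarchy: a higher-layer alarm (key) suppresses the
# lower-layer alarms (values) that share its root cause. Identifiers are ours.
MASKS = {
    "OLOS-OTS": {"OLOS-BAND", "LOF-CHANNEL", "OPR-OORL-BAND"},
}

def reported_alarms(active: set) -> set:
    """Return the alarms actually reported: active alarms minus those masked
    by another active alarm. Masked conditions stay retrievable on request."""
    masked = set()
    for alarm in active:
        masked |= MASKS.get(alarm, set()) & active
    return active - masked
```

So when the OTS-level OLOS is active, only it is reported; the associated band and channel alarms are suppressed but remain in the active set.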
Note: Bay-level LEDs are supported on DTCs, MTCs, and XTC-4s only.
■ Chassis Level Visual Alarm Indicators—These indicators provide the summary of the outstanding
alarm conditions of the chassis. A chassis level visual alarm indicator is lit if there is at least one
corresponding outstanding alarm condition within the chassis.
■ Chassis Level Office Alarm Indicators—As described in Office Alarms, the network elements
provide alarm output contacts to support chassis level visual and audio indication of critical, major
and minor alarms. As described in Alarm Cutoff, ACO buttons and ACO LEDs are also supported.
■ Card Level Visual Indicators—All circuit packs include LEDs to indicate the card status.
■ Port Level Indicators—These indicators are provided for each tributary port and line port.
Alarm Configuration
The following features are used to customize the alarm reporting to the management applications and
interfaces:
■ Alarm Reporting Control (ARC) on page 2-7 (see below)
■ Alarm Severity Profile Setting (ASPS) on page 2-9
■ Customizable Timer-Based Alarms on page 2-9
■ Power Draw Alarm on page 2-9
Note: The TL1 commands used to control the Alarm Reporting option are OPR-ARC (operate ARC)
and RLS-ARC (release ARC). The OPR-ARC command is used to disable alarm reporting, and the
RLS-ARC command is used to re-enable alarm reporting. See the DTN and DTN-X TL1 User Guide
for more information on configuring ARC via the TL1 interface.
Note: Although it is possible to use ARC to suppress OLOS alarms on newly installed tributary
interfaces whose services have not yet been turned up, it may be more convenient to use the
Automatic In-Service (AINS) feature. The AINS feature automatically suppresses alarms on a
tributary until the entity is fault-free for a configured time period, at which time the tributary is declared
to be “In-Service”. Unlike the ARC feature, the AINS feature automatically puts tributary interfaces
into service once all faults are cleared. For more information on AINS, see Automatic In-Service
(AINS) on page 3-40.
When Alarm Reporting is turned off for a managed object, the reporting of the alarms, events, and TCAs/
TCCs for the specified entity are stopped for all the management interfaces. Although the managed
object may be detecting alarms such as OLOS, the alarms are not transmitted to any client, or reported to
the management applications. Turning off Alarm Reporting also suppresses status indicators, such as
LEDs and audio/visual indicators. When Alarm Reporting is disabled for a managed object, alarms are
also inhibited for all the contained and supported managed objects. For example, when alarm reporting is
inhibited for the chassis object, alarm reporting is inhibited for all the circuit pack objects within that
chassis. See Managed Objects on page 3-3 for the description of the managed objects and relationship
between them.
The inhibited alarms are logged in the event log and are retrievable through the TL1 Interface. Note that
the DNA and GNM will not retrieve this information.
When Alarm Reporting is disabled for a managed object, the default ARC behavior is to maintain all pre-
existing alarms for the managed object; these alarms are cleared as usual when the alarm condition no
longer exists. However, this behavior can be re-configured on the network element to cause pre-existing
alarms on an object to be cleared when Alarm Reporting is disabled on that object. In this case, once
Alarm Reporting is re-enabled, existing alarms (including pre-existing alarms that are still outstanding) will
be reported. This switch is configured on a per-node basis, and the behavior of the two settings (the
default Leave Outstanding Alarms and the override Clear Outstanding Alarms) is shown in Figure 2-1:
ARC Behavior (Leave Outstanding Alarms vs. Clear Outstanding Alarms) on page 2-8 below.
Figure 2-1 ARC Behavior (Leave Outstanding Alarms vs. Clear Outstanding Alarms)
Note that the ARC behavior is the same for alarm events that are raised during the ARC period (Scenario
#1 and Scenario #2), regardless of whether ARC is set to Leave Outstanding Alarms or Clear
Outstanding Alarms.
■ When alarm conditions are raised and cleared during the ARC period (Scenario #1), the alarms are
not reported to the management interfaces.
■ When alarm conditions are raised during the ARC period but are not cleared during the ARC period
(Scenario #2), the alarms are reported to the management interfaces only at the end of the ARC
period, and the clearing event is reported to the management interfaces when the alarm is cleared.
However, the ARC behavior is different when alarm events are raised before the beginning of the ARC
period (Scenario #3 and Scenario #4), depending on whether ARC is set to Leave Outstanding Alarms or
Clear Outstanding Alarms:
■ When ARC is configured to Leave Outstanding Alarms, any pre-existing alarms will remain
outstanding and a clearing event will be reported to the management interfaces when the alarm
condition is cleared. In Scenario #3 the clearing event happens during the ARC period, and in
Scenario #4 the clearing event happens after the ARC period.
■ When ARC is configured to Clear Outstanding Alarms, any pre-existing alarms are cleared when
Alarm Reporting is disabled and a clearing event is sent to the management interfaces at the start
of the ARC period. If the alarm is cleared during the ARC period, the management interfaces will
not receive another clearing event. If the alarm is still outstanding at the end of the ARC period, the
management interfaces will receive a new alarm event for the alarm, and then will receive a
clearing event when the alarm is cleared.
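The four scenarios can be summarized as an event trace. The following toy model (function name, flags, and event labels are ours) maps each scenario to two inputs: whether the alarm was raised before the ARC period began, and whether it cleared during the ARC period.

```python
def arc_trace(raised_before_arc: bool, cleared_during_arc: bool, mode: str) -> list:
    """Return the notifications the management interfaces would see for one
    alarm across an ARC period. mode is "LEAVE" or "CLEAR", mirroring the
    Leave/Clear Outstanding Alarms settings."""
    events = []
    if raised_before_arc and mode == "CLEAR":
        # Pre-existing alarm is cleared as soon as reporting is disabled.
        events.append("clear@arc-start")
    if raised_before_arc and mode == "LEAVE":
        # Pre-existing alarm stays outstanding; only its eventual clear is sent.
        events.append("clear@condition-clear")
    if not raised_before_arc:
        if cleared_during_arc:
            pass  # Scenario #1: raised and cleared inside ARC -> nothing reported
        else:
            # Scenario #2: reported at ARC end, cleared when the condition clears.
            events += ["raise@arc-end", "clear@condition-clear"]
    elif mode == "CLEAR" and not cleared_during_arc:
        # Pre-existing alarm still outstanding at ARC end: re-raised, then cleared.
        events += ["raise@arc-end", "clear@condition-clear"]
    return events
```

Note that for alarms raised during the ARC period the trace is the same for both modes, matching the behavior described for Scenarios #1 and #2.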
Note: The severity is modified per object type, and not on a per managed object basis. For example,
when the severity of OLOS of an OCG termination point is modified, the new severity is applied to
OLOS alarms reported by all OCG termination points.
Note: The severity of an environmental alarm is assigned by the user when the alarm is provisioned.
The ASPS feature cannot be used to modify the provisioned severity of environmental alarms.
However, the severity of an environmental alarm can be changed from the Alarm Input Contact
window in the management applications.
ASPS allows the user to configure protection switching actions as alarms (see Protection Switch Alarm
Reporting on page 4-136).
Note: This applies only to newly-installed or re-seated modules; if these modules are cold reset the
XCM/IMM does not interfere with the reboot.
For DTC, MTC, and OTC, the user can configure the ideal maximum electrical power draw (in Watts) for
the chassis (see MTC/DTC Chassis Power Control on page 3-55). This power draw limit is compared
against the total maximum (worst-case) power draw for all of the equipment provisioned (or pre-
provisioned) in the chassis, and the chassis raises an alarm if the sum of the power values for the
provisioned/pre-provisioned equipment in the chassis exceeds the user-configured maximum power limit.
This feature is especially useful when a chassis is deployed in a co-location environment where “rented
power” limits may be enforced by the co-location service provider.
Note: This feature does not limit power draw, but instead provides a configurable alarm if the system
equipment is calculated to exceed the user-configured maximum.
Note: The chassis has no means for reporting its actual current draw, so instead, the user-configured
maximum power draw limit is compared against the sum of the maximum power draw values for the
equipment currently provisioned (or pre-provisioned) in the chassis.
When provisioning a new piece of equipment in a chassis, the equipment’s estimated power draw is
added to the estimate of the total power draw for the chassis. If the newly computed power consumption
exceeds the user-configured maximum power draw value, the chassis raises a “Power Draw”
(PWRDRAW) alarm.
The Power Draw (PWRDRAW) alarm is cleared when:
■ The user increases the configured maximum power draw value for the chassis to a value that is
equal to or greater than the total estimated power draw value.
■ Pre-provisioned or provisioned equipment is deleted (or removed and then deleted, in the case of
provisioned equipment) from the network element's database. The network element will then re-
evaluate the estimated power draw. If the estimated power draw value is equal to or less than the
configured maximum power draw value, the Power Draw alarm is cleared.
See Power Draw of Equipment on page 3-52 for more information about configuring the power draw
settings for a chassis.
Note: BDI-OTS and FDI-OTS conditions are not exposed in the user management interfaces; they
are detected and used internally by the system for ALS.
When the fiber is recovered, the OTS OLOS condition clears (recall that the OSC signal does not shut
down in the ALS state, so once the fiber is recovered, both ends receive the OSC from the far end
and the OTS OLOS condition is cleared). Once the OTS OLOS condition clears, the C-band laser
automatically turns back on, clearing the BDI-OTS signal sent toward the upstream node. The upstream
node receives the C-band signal with no BDI-OTS signal and therefore turns on its own C-band laser,
which clears the C-band OLOS at the near end. The link is now in the normal state.
Note: For SLTE links, which operate without the OSC (see Network Applications in #unique_60/
unique_60_Connect_42_dtn_and_dtnx_sdg), once ALS is triggered there is no automatic way for the
link to recover. ALS on the link must be manually disabled and then re-enabled. Alternatively, ALS
can be permanently disabled for SLTE links in order to support faster recovery from link failures. To
enable this feature, contact an Infinera Technical Assistance Center (TAC).
Note that there is specialized ALS behavior for the following types of modules/configurations:
■ Raman Amplifier Modules (RAM-1, RAM-2-OR, and REM-2), see ALS with Raman Modules
(RAM-1, RAM-2-OR, and REM-2) on page 2-13.
■ Booster Amplifier/Preamplifier configurations, see ALS for Booster Amplifier/Preamplifier
Configurations on page 2-12.
■ IAMs and IRMs, see ALS with IAMs and IRMs on page 2-15.
field trials testing to measure the power levels in one direction when a single (i.e., uni-directional) fiber cut
is present. In order to disable ALS, the user must have a user account specifically configured with
“Restricted Access” privileges.
To prevent users from disabling ALS on modules with Raman amplifiers, a user with Network
Administrator privileges can set the network element’s ALS Administration Policy to “block.” When the
network element’s ALS Administration Policy is set to “block,” the network element does not allow users
to disable ALS on modules with Raman functionality: RAMs, REMs, ORMs, and IRMs. This setting does
not change the behavior for ALS on BMMs, IAMs, or OAMs. The default setting is “do not block,” which
means that users are allowed to disable ALS on modules with Raman amplification.
For SLTE configurations, BMMs configured to SLTE mode and IAMs configured for SLTE or SLTE_TLA
mode support ALS disabling in order to allow the system to continue operating after a break in fiber
connectivity. ALS can be disabled in one of two modes:
■ Timer based—ALS may be disabled for a finite period of time. In this mode, a timer is set and ALS
is disabled until the expiration of the timer.
■ Permanent—ALS is permanently disabled, meaning the laser is on and will continue to transmit
even in the presence of ALS triggers that would otherwise shut down the laser. ALS functionality is not
supported and never triggered. ALS-related configuration settings are ignored for the IAM.
Note: Contact an Infinera Technical Assistance Center (TAC) for assistance in permanently disabling
ALS.
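The two disable modes above can be modeled as follows. All names here are hypothetical; this is a behavioral sketch, not an actual IQ NOS control interface.

```python
# Illustrative sketch of the two ALS-disable modes described above
# (timer-based vs. permanent). Hypothetical class; not an IQ NOS API.
import time

class AlsControl:
    def __init__(self):
        self._reenable_at = None   # None => no timer-based disable active
        self._permanent = False

    def disable_timed(self, seconds):
        """Timer-based: ALS stays disabled until the timer expires."""
        self._reenable_at = time.monotonic() + seconds

    def disable_permanent(self):
        """Permanent: laser stays on; ALS triggers are ignored entirely."""
        self._permanent = True

    def als_enabled(self):
        if self._permanent:
            return False
        if self._reenable_at is not None:
            if time.monotonic() < self._reenable_at:
                return False
            self._reenable_at = None   # timer expired: ALS re-enabled
        return True

als = AlsControl()
als.disable_timed(0.01)
assert not als.als_enabled()   # within the timer window
time.sleep(0.02)
assert als.als_enabled()       # timer expired, ALS active again
```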
■ ALS will also trigger when the OTS patch cable between the BMM2C and the OAM/ORM
preamplifier is broken (the preamp EDFA will be muted to an eye-safe level of 10dBm or less).
■ There is no ALS trigger if the OSC patch cable between the BMM2C and the OAM/ORM
preamplifier is removed.
Note: This section describes the behavior for RAMs (RAM-1, RAM-2-OR, and REM-2). ORM modules
behave similarly to BMMs and OAMs, as discussed in the previous section. Also note that IRMs have
a different behavior than the RAMs. IRMs are discussed in the next section (see ALS with IAMs and
IRMs on page 2-15).
Note: The RAM-1, RAM-2-OR, and REM-2 are not supported for configurations with IAM-1.
Because of their high power levels, RAMs generate a significant amount of Amplified Spontaneous
Emission (ASE) noise, so the system can’t rely on detecting out-of-range C-band and OSC signal powers
for ALS, which are used for ALS in non-Raman systems. For nodes that use RAMs, ALS is instead
implemented via a dedicated 1610nm pilot laser on the counter-pump Raman modules (RAM-1s and
RAM-2-ORs only; the REM-2 module can detect but not generate a pilot tone). The pilot laser output is
launched co-propagating with the payload signal, and is modulated to produce one of two tone signals
that facilitate the link shutdown and restoration processes:
■ Remote Receive Fault (RRF)—Used to notify the RAM in the far end of the link of a fiber break in
the opposite fiber span as detected by the near-end receiver. This prompts the far-end RAM to turn
off its pumps.
■ Normal (NRM)—Used to notify the RAM in the far end of the link to turn on its pumps (if and when it
detects the tone).
These tones are generated by the RAM-1 or RAM-2-OR module at the near end of the link and detected
by the corresponding RAM-1 or RAM-2-OR at the far end of the link (see Figure 2-2: Pilot Lasers in
RAMs on page 2-14).
Note: The pilot tone resides at 1610nm on the same fiber as the OSC and the OCGs. No additional
fiber is required to carry the pilot tone.
Based on the detection of the pilot tones, three ALS states are defined:
■ NoSignal—No ALS tone detected. ALS event will be triggered.
■ RemoteRxFault—RRF tone detected, indicating ALS event is detected by the upstream amplifier.
ALS event will be triggered.
■ Normal—NRM tone detected. No ALS event present.
The pilot lasers will detect all fiber breaks occurring in the main fiber spans between the two RAM
modules. However, they are incapable of detecting fiber breaks in the local fiber spans between each
BMM/OAM/ORM/IAM-2 and RAM pair. For this purpose the RAM-1 and RAM-2-OR modules will rely on
C-band and OSC optical power detection from the BMM/OAM/ORM/IAM-2. ASE interference is not an
issue here since the pump lasers are located at the far end of the link.
Based on the detection of the BMM/OAM/ORM/IAM-2 C-band and OSC signals, an additional ALS state is
defined:
■ LocalRxFault—No C-band or OSC signal detected, indicating a fiber break in the local span. ALS
event will be triggered.
The LocalRxFault state has precedence over the other three states. While in this state the RAM module
will ignore any detected pilot tones.
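The four-state resolution described above, including the precedence of LocalRxFault, can be sketched as a small decision function. This is an illustrative model of the documented behavior, not Infinera firmware logic.

```python
# Hypothetical sketch of the four RAM ALS states described above.
# LocalRxFault (local-span break detected via C-band/OSC power loss)
# takes precedence, and while active any detected pilot tone is ignored.

def als_state(local_cband_or_osc_ok, pilot_tone):
    """pilot_tone is "NRM", "RRF", or None (no tone detected)."""
    if not local_cband_or_osc_ok:
        return "LocalRxFault"          # pilot tones ignored in this state
    if pilot_tone == "NRM":
        return "Normal"
    if pilot_tone == "RRF":
        return "RemoteRxFault"
    return "NoSignal"

def als_triggered(state):
    # Only the Normal state leaves the pumps running.
    return state != "Normal"

assert als_state(True, "NRM") == "Normal"
assert als_state(True, "RRF") == "RemoteRxFault"
assert als_state(True, None) == "NoSignal"
assert als_state(False, "NRM") == "LocalRxFault"   # tone ignored
assert als_triggered(als_state(False, "NRM"))
```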
Note: The four ALS states apply only to the RAM-1 and RAM-2-OR modules. For links which
incorporate a REM-2 module, there is a control line sent via the backplane to allow the RAM-2-OR
module to turn on or off the REM-2 pump lasers. This dictates that a span that requires both a
RAM-2-OR and REM-2 module must have these modules in the same chassis.
Note: The module type at each end of a link must match: Both modules must be IAMs or both
modules must be IRMs. It is not supported to have a link with an IRM at one end and an IAM at the
other end.
Note: For information on supported interconnectivity of IAM-1, IAM-2, and IRM, see FlexILS Optical
Line Amplifier - Network Applications in #unique_60/unique_60_Connect_42_dtn_and_dtnx_sdg.
IAMs and IRMs utilize the chassis backplane for ALS functionality. Therefore, note the following
requirements for configurations with IAMs/IRMs:
■ For ROADM configurations (see FlexILS Reconfigurable Optical Add/Drop Multiplexer (ROADM)
and DTN-X with ROADM - Node Configurations in #unique_60/
unique_60_Connect_42_dtn_and_dtnx_sdg), which use both an FRM and an IAM or IRM for each
direction, the IAM/IRM must be in the same MTC-9/MTC-6 chassis as its associated FRM, and the
band PTP of the IAM/IRM must be associated to the band PTP of the FRM.
■ For FlexILS Optical Line Amplifier configurations (see Network Applications in #unique_60/
unique_60_Connect_42_dtn_and_dtnx_sdg), which use an IAM or IRM for each direction, both
amplifier modules (which can be IAMs, IRMs, or one of each) must be in the same MTC-9/MTC-6
chassis, and the band PTPs of the two modules must be associated with each other.
Link-level optical layer defects are communicated using the overhead bits on the OSC. The IAM/IRM/
FRM-4D/FRM-20X receives information on upstream faults on the overhead bits of the incoming OSC.
The outbound IAM/IRM/FRM-4D/FRM-20X injects the required fault bits on the OSC overhead before
transmitting the OSC. Local faults are suppressed based on the fault bits received from the upstream
node. Optical layer alarms and status are thus transmitted from head-end node to tail-end node.
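The inject-and-suppress behavior above can be sketched with set operations. This is a simplified illustration of the documented principle, assuming hypothetical fault names; the actual OSC overhead bit layout is not shown in this guide.

```python
# Illustrative sketch of OLDP fault propagation over the OSC overhead:
# each node forwards upstream fault bits, injects locally detected
# faults, and suppresses local alarms already explained by upstream
# fault bits. Fault names are hypothetical examples.

def outgoing_overhead(received_bits, local_faults):
    """Forward upstream fault bits and inject locally detected faults."""
    return received_bits | local_faults

def alarms_to_report(received_bits, local_faults):
    """Report only local faults not already flagged by the upstream node."""
    return local_faults - received_bits

upstream = {"FDI-OTS"}                    # fault flagged by upstream node
local = {"FDI-OTS", "OLOS"}               # faults detected locally
assert outgoing_overhead(upstream, local) == {"FDI-OTS", "OLOS"}
assert alarms_to_report(upstream, local) == {"OLOS"}   # FDI-OTS suppressed
```

This is how status propagates hop by hop from head-end to tail-end while avoiding duplicate alarms for a single upstream fault.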
The table below lists the OLDP faults and the layer(s) that support each fault (an “X” indicates support):
■ IAM-B-ECXH2
■ IRM-B-ECXH1
■ FRM-9D-R-8-EC
■ FRM-4D-B-3-EC (when configured in Standalone with OSC Slot Operating Mode)
■ FRM-20X-R-EC (when configured in Standalone with OSC Slot Operating Mode)
Note: * For the indicated modules, support of the C-band OLOS soak timer depends on the specific
circuitry of the module. To verify whether the module supports C-band OLOS soak timer:
■ For TL1, run a RTRV-EQPT command on the module and note the value of the
CBANDSOAKCAPABLEFW response parameter: TRUE indicates soak timer is supported,
FALSE indicates soak timer is not supported.
■ For GNM/DNA, open the Span properties of the module. For modules that support the soak
timer, the Span/C-Band tab will have the OLOS Soak Time drop-down menu.
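For the TL1 path, the check reduces to locating the CBANDSOAKCAPABLEFW parameter in the RTRV-EQPT response. The sketch below is illustrative; the sample response string is a simplified stand-in, not the exact IQ NOS TL1 output format.

```python
# Illustrative parse of the CBANDSOAKCAPABLEFW response parameter from a
# RTRV-EQPT command. The sample text is a simplified stand-in for a TL1
# response, not the exact IQ NOS output format.
import re

def soak_timer_supported(tl1_response):
    """TRUE => C-band OLOS soak timer supported; FALSE or absent => not."""
    m = re.search(r"CBANDSOAKCAPABLEFW=(TRUE|FALSE)", tl1_response)
    return m is not None and m.group(1) == "TRUE"

sample = '"1-A-3::TYPE=BMM2C-16-CH,CBANDSOAKCAPABLEFW=TRUE:"'
assert soak_timer_supported(sample)
assert not soak_timer_supported('"1-A-4::CBANDSOAKCAPABLEFW=FALSE:"')
```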
Note: The C-band OLOS soak timer values are set as listed above in order to meet the Class 1M
laser hazard level rating.
Note: For BMMs with mid-stage amplification, C-band OLOS soak timer applies to both stages of the
receive EDFA. However, if there is a glitch in the mid-stage fiber which results in OLOS condition, the
DCF OLOS alarm may not be suppressed.
■ BMM2C-16-CH
■ BMM2-8-CEH3
■ BMM2-8-CH3-MS
■ BMM2-8-CXH2-MS
■ BMM2P-8-CEH1
■ BMM2P-8-CH1-MS
■ BMM-4-CX1
■ BMM-4-CX2-MS
■ BMM-4-CX3-MS
■ BMM-8-CXH2-MS
■ BMM-8-CXH3-MS
Note the following for the OCG OLOS soak timer functionality:
■ The BMM OCG OLOS soak timer is implemented on BMM OCGs only, not on line module OCGs
nor GAM OCGs. Therefore, if there is a fiber glitch between a GAM-1 and a line module (a DLM,
XLM, ADLM, or AXLM in Gen1 mode), Auto-discovery will be retriggered between the line module
and the GAM-1, thus impacting traffic until the Auto-discovery is completed. In addition, the soak
timer is not supported on mid-stage (DCF port) fibers, nor on the optical channel between an LM-80
and a CMM.
■ The BMM OCG OLOS soak timer can be set from 0 to 60 seconds, and it is recommended to set
a uniform value for all BMM OCGs on a system in order to most easily manage the soak timer
values. The following values are recommended:
□ For add/drop OCGs: 10 seconds
□ For Optical Express OCGs: 20 seconds
■ If the BMM OCG OLOS soak timer is configured when an OLOS condition is already present, the
changes will take effect only during a subsequent occurrence of OLOS.
■ The BMM OCG OLOS soak timer is not honored when a fiber glitch occurs during a warm reset of
the BMM.
longer data outage that would be created if Auto-discovery was restarted immediately. During the soak
time, the node will defer the SCG OLOS alarm reporting and Automated Gain Control will continue to
perform null sequencing; the node will not make any gain commitments in the link. By default, the SCG
OLOS soak timer is set to 0 seconds (disabled).
Note the following for the SCG OLOS soak timer functionality:
■ The SCG OLOS soak timer can be set from 0 to 60 seconds, and it is recommended to set a
uniform value for all SCGs on a system in order to most easily manage the soak timer values. The
following values are recommended:
□ For add/drop SCGs: 10 seconds
□ For FRM to FRM SCGs: 20 seconds
■ If the SCG OLOS soak timer is configured when an OLOS condition is already present, the
changes will take effect only during a subsequent occurrence of OLOS.
■ The SCG OLOS soak timer is not honored when a fiber glitch occurs during a warm reset of the
FRM.
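The effect of both OCG and SCG soak timers reduces to the same rule: the OLOS alarm is reported only if the condition outlasts the configured soak time. The sketch below is illustrative; the durations are examples, not specifications.

```python
# Sketch of the OLOS soak-timer behavior described above: alarm
# reporting is deferred for the soak time, so a short fiber glitch that
# clears within the window never raises the alarm. Values illustrative.

def alarm_raised(olos_duration_s, soak_time_s):
    """The OLOS alarm is reported only if the condition outlasts the soak time."""
    return olos_duration_s > soak_time_s

assert not alarm_raised(olos_duration_s=2, soak_time_s=10)   # glitch absorbed
assert alarm_raised(olos_duration_s=15, soak_time_s=10)      # sustained break
assert alarm_raised(olos_duration_s=1, soak_time_s=0)        # soak disabled (default)
```

This is why the recommended 10-second (add/drop) and 20-second (express/FRM-to-FRM) values trade alarm latency for immunity to brief glitches.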
Note: Because Software Controlled Power Reduction relies on the software of the associated
modules in order to function, the EDFAs are not muted if the control plane is not accessible at the
time of the fiber cut, such as in the following scenarios:
■ The controller module is removed or cold rebooted
■ The base BMM2P-8-CH1-MS is warm rebooted
■ The preamplifier is warm rebooted (receive direction only)
■ The expansion BMM2P-8-CEH1 is warm rebooted (transmit direction only)
Note: In any of the above conditions, do not disconnect the DCF fiber nor the patch cable fiber
between the base and expansion BMM2P.
Software Controlled Power Reduction does take effect in the case of a controller module warm reboot
or a controller module switchover.
By default both of these parameters are disabled. To change the configuration for either of these
parameters, the associated optical channel CTP or carrier CTP must be in the maintenance or locked
administrative state. Changing either of these parameters is service-affecting.
Note: Do not configure Aggressive Tracking nor Rapid Recovery during warm reboot of the line
module.
Note: Do not configure Aggressive Tracking nor Rapid Recovery unless consulted to do so by an
Infinera Technical Assistance Center (TAC) resource.
Note: This feature is disabled by default. Enabling this feature can cause a minor decrease in
performance for PM-QPSK modulation format.
Note: Do not configure Electronic Equalizer Gain Control Loop unless consulted to do so by an
Infinera Technical Assistance Center (TAC) resource.
Note: For AOLM, SOLM, AOLX, SOLX, AOLM2, SOLM2, AOLX2, and SOLX2, these parameters are
on the OCH CTP of the line module. For OFx, these parameters are on the Carrier CTP of the
modules.
■ Steady State Control—Optimizes the steady state component of Automated Gain Control (AGC) for
the optical channel/carrier. The steady state optimization is non-service-affecting. When enabled,
the line module immediately optimizes the steady state control for the optical channel/carrier.
■ Coarse Tuning Control—Optimizes coarse tuning on the associated optical channel to put AGC in
the desired range. The coarse tuning optimization is service-affecting for the associated optical
channel/carrier, and requires that steady state control is also enabled. When coarse tuning is
enabled, the optical channel coarse tuning is adjusted upon subsequent re-acquisition (note that
the optimization does not take place immediately; re-acquisition must be triggered for changes to
take place).
Note the following for the Electronic Equalizer Gain Control Loop feature:
■ Do not configure Steady State Control nor Coarse Tuning Control during warm reboot of the line
module.
■ Before configuring the Steady State Control or Coarse Tuning Control, the associated optical
channel/carrier CTP must be administratively locked.
■ Steady State Control must be enabled before enabling Coarse Tuning Control.
■ After enabling Coarse Tuning Control, run a Reset Rx operation on the optical carrier/channel CTP
(in TL1, this is performed via the OPR-RESETRX command).
Note: This will restart the receive acquisition and will be service affecting for the associated
OCHCTP/carrier. It is recommended to perform this operation in a planned maintenance
window.
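The ordering constraints above (lock the CTP first, enable Steady State Control before Coarse Tuning Control, then trigger re-acquisition with Reset Rx) can be captured in a small model. This is a hypothetical sketch of the documented rules, not an IQ NOS API.

```python
# Hypothetical model of the Electronic Equalizer Gain Control Loop
# configuration rules described above. Class and attribute names are
# illustrative only.

class OchCtp:
    def __init__(self):
        self.locked = False         # administrative lock on the CTP
        self.steady_state = False
        self.coarse_tuning = False

    def enable_steady_state(self):
        if not self.locked:
            raise RuntimeError("CTP must be administratively locked first")
        self.steady_state = True    # optimization takes effect immediately

    def enable_coarse_tuning(self):
        if not self.locked:
            raise RuntimeError("CTP must be administratively locked first")
        if not self.steady_state:
            raise RuntimeError("enable Steady State Control first")
        self.coarse_tuning = True   # effective only after next re-acquisition

ctp = OchCtp()
try:
    ctp.enable_coarse_tuning()      # wrong order: rejected
except RuntimeError:
    pass
ctp.locked = True
ctp.enable_steady_state()
ctp.enable_coarse_tuning()          # allowed; follow with Reset Rx (OPR-RESETRX)
assert ctp.steady_state and ctp.coarse_tuning
```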
Event Log
IQ NOS provides an historical event log that tracks all significant events in the system (including alarms)
and stores the events in a wrap-around buffer. Management interface sessions can retrieve the full
history and track ongoing events in real time. Synchronization is maintained between the connected
management interfaces and the network element. If a session communication failure occurs, the
reconnected management interface can query the events that occurred during session failure.
IQ NOS records the following types of events in the event log:
■ Alarm related events, which include alarm raise and clear events.
■ PM data thresholding related events, which include threshold crossing condition raise and clear
events.
■ Threshold crossing alerts as described in PM Thresholding on page 5-4.
■ Managed object creation and deletion events triggered by user actions.
■ Security administration related events triggered by user actions.
■ Network administration events triggered by user actions to upgrade software, downgrade software,
restore database, etc.
■ Attribute value change events triggered by the user actions to add or delete managed objects, or
change attribute values of managed objects.
■ State change events indicating the state changes of a managed object triggered by user action
and/or changes in the operation capability of the managed object.
Event logs are stored in the persistent storage on the network element so that events persist across
controller module reboots or switchovers. Users can export the event log information in TSV format using
management applications.
Note: Attribute value change events are also stored in the event log. The attribute value change
events are not persisted across controller module reboots or switchovers.
Following are some of the important information stored for each event log record:
■ The managed object that generated the event.
■ The time at which IQ NOS generated the event.
■ The event type indicating the event category, including:
□ Update Event, which includes managed object create and delete events.
□ Report Event, which includes security administration related events, network administration
related events, audit events, and threshold crossing alerts (TCA).
□ Condition, which includes alarm raise and clear events, non-alarmed conditions, and
threshold crossing condition events.
Refer to the DTN and DTN-X Alarm and Trouble Clearing Guide for a list of events recorded in the event
logs for Infinera nodes.
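A wrap-around event log of the kind described above can be sketched with a bounded buffer. Record fields and the buffer size are illustrative; the actual IQ NOS record format is not documented here.

```python
# Minimal sketch of a wrap-around (ring-buffer) event log: when the
# buffer is full the oldest records are overwritten, and each record
# carries the managed object, timestamp, and event type. Field names
# and capacity are illustrative.
from collections import deque
import time

class EventLog:
    def __init__(self, capacity=4):
        self._buf = deque(maxlen=capacity)   # old entries drop off when full

    def record(self, managed_object, event_type, detail):
        self._buf.append({
            "object": managed_object,
            "time": time.time(),
            "type": event_type,              # Update Event / Report Event / Condition
            "detail": detail,
        })

    def retrieve_since(self, t):
        """Lets a reconnected session query events missed during a failure."""
        return [e for e in self._buf if e["time"] >= t]

log = EventLog(capacity=3)
for i in range(5):
    log.record(f"obj-{i}", "Condition", "alarm raise")
assert len(log.retrieve_since(0)) == 3       # oldest two wrapped out
assert log.retrieve_since(0)[0]["object"] == "obj-2"
```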
Note: When PRBS generation, PRBS monitoring, and loopbacks are performed on a port, the
administrative status of the port is set to Maintenance. In such cases, the operational status of the
associated cross-connects will be reflected as out-of-service although no cross-connect related
alarms are reported.
Loopbacks
Note: Unless specifically noted otherwise, all references to “line module” will refer interchangeably to
either the DLM, XLM, ADLM, AXLM, SLM, AXLM-80, ADLM-80 and/or SLM-80 (DTC/MTC only) and
AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, and/or SOLX2 (XTC only). The term “LM-80”
is used to specify the LM-80 sub-set of line modules and refers interchangeably to the AXLM-80,
ADLM-80 and/or SLM-80 (DTC/MTC only). Note that the term “line module” does not refer to TEMs,
as they do not have line-side capabilities and are used for tributary extension.
Loopbacks are used to test newly created circuits before running live traffic or to logically locate the
source of a network failure on existing circuits. Loopbacks provide a mechanism where the signal under
test (either the user signal or the test pattern signal such as PRBS) is looped back at some location on
the network element in order to test the integrity and validity of the signal being looped back. Since
loopbacks affect normal data traffic flow, they must be invoked only when the associated facility is in
administrative maintenance state.
IQ NOS provides access to the loopback capabilities in the Infinera nodes, independent of the client
signal payload type. The loopbacks can be enabled or disabled remotely through the management
applications. The following sections describe the loopbacks supported to test each section of the network,
as well as the various hardware components along the data path:
■ Loopbacks Supported on the XTC on page 2-28
■ Loopbacks Supported on the DTC/MTC on page 2-39
■ Loopbacks Supported on the XT on page 2-44
Note: Loopbacks are not supported on an OTUk/ODUk when PRBS generation is enabled in either
direction (facility or terminal) on the ODUk.
Note: When a Client Tributary Facility Loopback is operated on an OTUk, the ODUk facility does not
support alarms nor PMs on its incoming signal.
Client Tributary Facility Loopback—A loopback is performed on the TIM/TIM2/MXP wherein the tributary
port Rx signal is looped back to the Tx on the TIM/TIM2/MXP. (The test signal will continue on its
provisioned path in addition to being looped back toward the originating point of the signal.) This loopback
test verifies the operation of the tributary side optics in the TOM and TIM/TIM2/MXP.
Figure 2-4 Client Tributary Facility Loopback (XTC-4/XTC-10 TIMs and OLx Example)
Figure 2-5 Client Tributary Facility Loopback (XTC-10 TIM2 and OFx-1200 Example)
Client Tributary Terminal Loopback—A loopback performed on the TIM/TIM2 wherein the signal is
received from the far-end node into the local node, and is transmitted through the local node switch fabric
and into the TIM/TIM2 where the signal is looped back and sent back out through the switch fabric and to
the far-end node.
Note: The system-wide behavior for client tributary terminal loopbacks can be configured so that the
laser on the client interface will be shut off to prevent the test signal from continuing to the client
equipment and the signal will only be looped back toward the originating point of the signal.
Otherwise, the default behavior is that the test signal will continue out the client interface in addition to
being looped back toward the originating point of the signal.
Figure 2-8 Client Tributary Terminal Loopback (XTC-10 TIM2 and OFx-1200 Example)
Figure 2-9 Client Tributary Terminal Loopback (XTC-4/XTC-10 TIM and OLx Example)
A loopback performed on the MXP wherein the signal is received from the far-end node into the local
node, and is transmitted through the 200G mapper on the MXP and looped back through the OTN multi-
service processor and sent out to the far-end node.
ODUk Facility Loopback—The incoming signal from the client interface (either from the OTM/OTM-1200
on the local node, or from the line side/far-end node, as in Figure 2-21: Ethernet Interface Loopbacks
(PXM only) on page 2-39) is looped back in the switch fabric (in the OXM), then sent back out to the
client interface.
Figure 2-12 ODUk Facility Loopback (from the OTM) (XTC-4/XTC-10 Example)
Figure 2-13 ODUk Facility Loopback (from the OTM) (XTC-2/XTC-2E Example)
The incoming signal from the client interface (corresponding to an MXP-400) from a local node is looped
back through the 200G mapper and then sent back out to the client interface.
Figure 2-15 ODUk Facility Loopback from the OTM-1200 XTC-10 Example
Figure 2-16 ODUk Facility Loopback (from Line Side) (XTC-4/XTC-10 Example)
Figure 2-17 ODUk Facility Loopback (from Line Side) (XTC-2/XTC-2E Example)
Figure 2-18 ODUk Facility Loopback (from line side): XTC-10 with OFx-1200
The incoming signal from the client interface (corresponding to an MXP-400) from the line side/far-end
node is looped back through the 200G mapper and then sent back out to the client interface.
Figure 2-19 ODUk Facility Loopback (from Line Side) (MXP in XTC-2/XTC-2E Example)
Note: Client Tributary Facility Loopbacks are not supported at the 10G Fibre Channel services when
the corresponding DTP is configured for PRBS generation or monitoring.
Tributary Digital Transport Frame (DTF) Path Terminal Loopback—A loopback performed on the line
module or TEM circuit, wherein the cross-point switch on the line module or TEM loops back the received
client signal towards the TAM. (The test signal will continue on its provisioned path in addition to being
looped back toward the originating point of the signal.) This loopback verifies the operation of the tributary
side optics as well as the adaptation of the Tributary DTF into electrical signals performed in the TOM and
TAM and the cross-point switch on the line module or TEM.
Figure 2-23 Tributary Digital Transport Frame (DTF) Path Terminal Loopback
Client Tributary Terminal Loopback—A loopback performed on the TAM wherein the electrical signal
received from the OCG line is looped back to the OCG line transmit side in the TAM. This loopback
verifies the OCG line side optics on the line module, the DTF and FEC Mapper/Demapper in the line
module as well as the cross-point switch.
Note: The system-wide behavior for client tributary terminal loopbacks can be configured so that the
laser on the client interface will be shut off to prevent the test signal from continuing to the client
equipment and the signal will only be looped back toward the originating point of the signal.
Otherwise, the default behavior is that the test signal will continue out the client interface in addition to
being looped back toward the originating point of the signal.
Line DTF Path Facility Loopback—A loopback performed on the line module wherein the cross-point
switch on the line module loops back the received line DTF signal towards the OCG line. This loopback
verifies the line DTF connectivity and the DTF encapsulation performed in the line module.
In addition to the above loopbacks, the DTN and DTN-X also supports loopbacks on the Digital Channel
(DCh) and the Tributary DTF Path on the TAM-2-10GT and DICM-T-2-10GT:
■ DCh Client Facility Loopback (TAM-2-10GT and DICM-T-2-10GT only)—A loopback is performed
on the TAM/DICM wherein the tributary port Rx is looped back to the Tx on the TAM-2-10GT. This
loopback test verifies the operation of the tributary side optics in the TOM and TAM-2-10GT. The
TAM-2-10GT may reside within a line module or TEM.
■ DCh Client Terminal Loopback (TAM-2-10GT and DICM-T-2-10GT only)—A loopback performed
on the TAM/DICM wherein the electrical signal received from the OCG line is looped back to the
OCG line transmit side in the TAM/DICM. This loopback verifies the OCG line side optics on the
line module, the DTF and FEC Mapper/Demapper in the line module as well as the cross-point
switch.
■ Tributary DTF Path Facility Loopback (TAM-2-10GT and DICM-T-2-10GT only)—A loopback
supported by the TAM/DICM and performed on the line module or TEM circuit, wherein the
cross-point switch on the line module or TEM loops back the client signal received on the
TAM/DICM towards the DTN network. This loopback verifies the operation of
the tributary side optics as well as the adaptation of the Tributary DTF into electrical signals
performed in the TOM and TAM and the cross-point switch on the line module or TEM.
The figure below shows the DCh loopbacks supported by the TAM-2-10GT and DICM-T-2-10GT.
Tributary ODUk Loopback—A loopback is performed on the client so that packets received on the client
tributary ODUk are sent back towards the connected customer equipment. This loopback is only
supported on XT(S)-3600.
Line Loopback—A loopback is applied on the client so that packets received from the line side are sent
back towards the line.
OCG/SCG Loopback—A loopback is applied at the OCG level (for XT-500S) and the SCG level (for XT-500F)
so that all packets received on all client Ethernet interfaces are sent back towards the connected
customer equipment.
SCG Loopback—A loopback is applied at the SCG level so that all packets received on all client Ethernet
interfaces are sent back towards the connected customer equipment.
The DTN-X supports PRBS tests on the following services originating on the XTC:
■ For ODUk switching services on the XTC, the DTN-X supports PRBS generation and monitoring for
both the facility and terminal directions (see ODU Switching on page 4-46).
■ For OTUk client services on the XTC, the DTN-X supports PRBS generation and monitoring for
both the facility and terminal directions (see Transparent Transport for OTN Services on page 4-
44 for information on OTUk with FEC transport).
■ For ODU multiplexing services on the XTC, the DTN-X supports PRBS generation and monitoring
for both the facility and terminal directions (see ODU Multiplexing on page 4-48).
■ For non-OTN services that are encapsulated in an ODUk wrapper, the DTN-X supports PRBS
generation and monitoring in the terminal direction only (see Transparent Transport for Non-OTN
Services on page 4-43 for information on which non-OTN services are supported). Note the
following for PRBS support for non-OTN services that are encapsulated in an ODUk wrapper:
□ PRBS tests are not supported in the facility direction.
□ PRBS is supported only for services encapsulated in an ODUk wrapper (e.g., ODU2,
ODU2e, ODU4, etc.); PRBS is not supported for services encapsulated in an ODUki wrapper
(e.g., ODU2i, ODUflexi, etc.). See DTN-X Network Mapping on page 4-52 for a list of the
payloads for which ODUk network mapping is supported.
□ PRBS tests are supported only for the segments in which the non-OTN service is
encapsulated in the ODUk wrapper.
■ For OC-768 and STM-256 clients on the TIM-1-40GM, the XTC supports the following PRBS tests:
□ Tributary PRBS—A PRBS signal is generated (transmitted) by the Infinera OC768/STM-256
tributary towards the client network side and is monitored (received) by the OC768/STM-256
tributary in the customer equipment or the test set connected to the Infinera tributary.
□ Line PRBS—A PRBS signal is generated (transmitted) by the Infinera OC768/STM-256
tributary towards the Infinera network side and is monitored (received) by the tributary at the
far-end TIM-1-40GM. When Line (terminal side) PRBS monitoring is enabled on an endpoint,
the LINE-PRBS-OOS alarm is raised if the PRBS signal is out of sync, or if there is not a
cross-connect/SNC present on the endpoint.
■ For 1GbE, OC-48, and STM-16 clients on the TIM-16-2.5GM, the XTC supports the following
PRBS tests:
□ Tributary PRBS—A PRBS signal is generated (transmitted) by the Infinera 1GbE/OC-48/
STM-16 tributary towards the client network side and is monitored (received) by the 1GbE/
OC-48/STM-16 tributary in the customer equipment or the test set connected to the Infinera
tributary.
Figure 2-39: PRBS Tests Supported by the XTC on page 2-53 shows the PRBS support for services on
the XTC.
Figure 2-40: Tributary and Line PRBS tests on the XTC (TIM-1-40GM/TIM-16-2.5GM) on page 2-54
shows tributary and line PRBS tests on the XTC.
Figure 2-40 Tributary and Line PRBS tests on the XTC (TIM-1-40GM/TIM-16-2.5GM)
Note: PRBS generation and PRBS monitoring can be enabled on an ODUk simultaneously, as
long as both the generation and monitoring are enabled in the same direction (facility or
terminal). PRBS monitoring can be enabled in two directions on the same ODUk as long as
PRBS generation is not enabled in either direction for the ODUk.
■ For ODU Multiplexing services on the XTC, only one of the following diagnostics is supported at a
time:
□ ODUk Facility Loopback
□ OTUk Client Tributary Facility Loopback
□ ODUk Client Facility PRBS generation
□ ODUj Client Facility PRBS generation
□ ODUj Client Terminal PRBS generation
Note: PRBS generation can be supported simultaneously in both the facility and terminal
directions on the ODUj facility if no other diagnostics are enabled (i.e., loopback on the OTUk/
ODUk/ODUj or PRBS generation on the ODUk). Simultaneous PRBS generation in the facility
and terminal directions is supported for ODUj PRBS tests only.
■ Also for ODU Multiplexing services on the XTC, note the following:
□ Client Facility PRBS generation is supported on only one ODUk/ODUj on the port.
□ Client Terminal PRBS generation is supported on only one ODUj on the port.
□ Client Facility PRBS monitoring is supported on only one ODUk/ODUj on the port.
□ Client Terminal PRBS monitoring is supported on only one ODUj on the port.
■ When PRBS generation is enabled in the terminal direction for ODUk services on the TIM-1-40G or
TIM-1-100G, the OTUk/ODUk facility does not support alarms nor PMs on its incoming signal
(receive direction PMs/alarms). The tributary physical termination point (PTP) will continue to
correctly report PMs, but the tributary PTP will not generate an alarm in case of optical loss of
signal (OLOS).
■ For OTUk transport services, PRBS tests are not supported in the terminal direction. PRBS tests
are supported in the facility direction for OTUk transport services.
■ For ODUk Client Terminal PRBS monitoring, the TERM-PRBS-OOS alarm is suppressed if there is
no cross-connect or SNC on the ODUk. Note that this behavior is different from the behavior for
endpoints on the DTC/MTC: When Line (terminal side) PRBS monitoring is enabled for endpoints
on the DTC/MTC, the LINE-PRBS-OOS alarm is raised in the absence of a cross-connect/SNC on
the endpoint.
Note the following for PRBS support for OC-768 and STM-256 clients on the TIM-1-40GM:
■ PRBS is not supported on the OC-768 and STM-256 client if a loopback is enabled on the client or
on the associated ODU3.
■ Tributary PRBS generation and Line PRBS generation can be enabled simultaneously on the
OC-768/STM-256, provided that ODUk Client Terminal PRBS generation is not enabled on the
associated ODU3.
■ Tributary PRBS generation on the OC-768/STM-256 and ODUk Client Terminal PRBS generation
on the associated ODU3 can be enabled simultaneously, provided that Line PRBS generation is
not enabled on the OC-768/STM-256.
There are several types of PRBS tests (see Figure 2-41: PRBS Tests Supported by the DTC/MTC on
page 2-56 through Figure 2-43: PRBS Tests Supported by TAM-2-10GT and DICM-T-2-10GT on page 2-
58):
■ Client PRBS test (supported only for OC-768 and STM-256 interfaces)—A PRBS signal is
generated (transmitted) by the Infinera OC-768/STM-256 tributary towards the client network side
and is monitored (received) by the OC-768/STM-256 tributary in the customer equipment or the test
set connected to the Infinera tributary.
■ Tributary (facility side) PRBS test (supported only for OTUk, SONET, and SDH interfaces on the
TAM-8-2.5GM, TAM-2-10GM, and DICM-T-2-10GM)—A PRBS signal is generated (transmitted) by
the Infinera tributary towards the client network side and is monitored (received) by the tributary in
the customer equipment or the test set connected to the Infinera tributary.
■ Line (terminal side) PRBS test (supported only for OTUk, SONET, and SDH interfaces on the
TAM-8-2.5GM, TAM-2-10GM, and DICM-T-2-10GM)—A PRBS signal is generated (transmitted) by
the Infinera tributary towards the Infinera network side and is monitored (received) by the tributary
at the far-end TAM-8-2.5GM, TAM-2-10GM, or DICM-T-2-10GM. When Line (terminal side) PRBS
monitoring is enabled on an endpoint, the LINE-PRBS-OOS alarm is raised if the PRBS signal is
out of sync, or if there is not a cross-connect/SNC present on the endpoint.
Note: Line PRBS test is not supported on OTUk clients that are configured for service type adaptation
(see OTN Adaptation Services on page 4-21).
Note: For OTUk clients on the TAM-2-10GM and DICM-T-2-10GM, Line PRBS generation must be
disabled and re-enabled upon either failure or recovery of the client signal.
■ DTF Section-level PRBS test—A PRBS signal is generated by the near-end line module and it
is monitored by the adjacent nodes. This test verifies the quality of the digital link between two
adjacent nodes.
■ DTF Path-level PRBS test—A PRBS signal is generated by the near-end TAM and it is
monitored at the far-end TAM where the digital path is terminated. This test verifies the quality
of the end-to-end digital path. Historical performance monitoring data is collected for PRBS
sync errors and PRBS errors on the Tributary DTF Path.
Note: DTF Path-level PRBS test is not supported on the TAM-8-1G. The TAM-8-1G does support the
GbE Client Termination Point tests described in GbE Client Termination Point Tests on page 2-59.
Note: When configuring DTF Path-level PRBS test between TAM-8-2.5GMs, the TOMs must be
physically present in the TAM-8-2.5GMs for PRBS to be generated. If the generating TOM is pre-
provisioned but not physically present, the PRBS signal will not be sent and so the DTP on the
monitoring TOM will report a PRBS-OOS alarm and the PRBS Error and PRBS Sync Err PM
counters will increment.
In addition to the above PRBS tests, the system also supports specialized PRBS tests on the line-side
Digital Channel (DCh) of the LM-80:
■ Digital Channel (DCh) PRBS test (LM-80 only)—A PRBS signal is generated by the near-end
LM-80 and it is monitored by the far-end LM-80. This test verifies the functioning of the optical
channel between two LM-80s (see Figure 2-42: DCh Line PRBS Test Supported by the LM-80 on
page 2-57).
Note: Digital Channel PRBS tests are not supported for 20Gbps wavelengths (PM-BPSK modulation
format). If a DCh PRBS test is enabled on a PM-BPSK wavelength, traffic will be impacted in the
adjacent DCh in the LM-80 optical channel. For LM-80 wavelengths that use PM-BPSK modulation,
use the DTF Path-level PRBS test on the TAM.
Lastly, the system also supports specialized PRBS tests on the TAM-2-10GT and DICM-T-2-10GT:
■ Digital Channel (DCh) Section-level PRBS test (TAM-2-10GT and DICM-T-2-10GT only)—A PRBS
signal is generated by the near-end TAM/DICM and it is monitored by the far-end TAM/DICM. This
test verifies the functioning of the Digital Channel between two TAM-2-10GTs or DICM-T-2-10GTs
installed in the customer network and provider network (see Figure 2-43: PRBS Tests Supported
by TAM-2-10GT and DICM-T-2-10GT on page 2-58).
■ Tributary DTF Path-level PRBS test (TAM-2-10GT and DICM-T-2-10GT only)—A PRBS signal is
generated by the near-end TAM/DICM and it is monitored by the far-end TAM/DICM. This test
verifies the digital path, such as in a Layer 1 OPN in a provider network (see Figure 2-43: PRBS
Tests Supported by TAM-2-10GT and DICM-T-2-10GT on page 2-58).
See Rules for Performing PRBS Tests on a Tributary DTF Path (TAM-2-10GT and DICM-T-2-10GT) on
page 2-58 for information on performing PRBS tests on the TAM-2-10GT and DICM-T-2-10GT.
Rules for Performing PRBS Tests on a Tributary DTF Path (TAM-2-10GT and DICM-T-2-10GT)
The following rules apply to generating and monitoring PRBS test signals on a Tributary DTF Path for
TAM-2-10GT and DICM-T-2-10GT:
■ These rules are applicable only for 2.5G DTPs.
■ PRBS generation and monitoring on a 2.5G DTP cannot be enabled unless all the DTPCTPs on
that facility are in maintenance. That is, to enable PRBS on 1-A-3-T1-1-1, each of 1-A-3-T1-1-1,
1-A-3-T1-1-2, 1-A-3-T1-1-3, and 1-A-3-T1-1-4 (if they are present) must be in maintenance.
■ The administrative state of any 2.5G DTP can be set to unlocked only if PRBS generation and
monitoring are disabled on all the DTPs on that facility.
■ If PRBS generation/monitoring (and the maintenance state) is set from the template and the
DTPCTP object is then created, PRBS will be enabled only if the above-mentioned rules are
satisfied. Otherwise, the created DTP object will be put into maintenance.
■ If the PRBS (generation/monitoring) is enabled on the first DTP and a second facility is created
later, it will be forced to maintenance state irrespective of template configuration.
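As an illustrative sketch only, the rules above can be expressed as simple checks. The Dtp class and attribute names below are assumptions for illustration, not the actual IQ NOS managed-object model.

```python
# Illustrative model of the 2.5G DTP PRBS rules; class and attribute
# names are hypothetical, not actual IQ NOS managed-object names.
class Dtp:
    def __init__(self, aid, in_maintenance=False, prbs_enabled=False):
        self.aid = aid
        self.in_maintenance = in_maintenance
        self.prbs_enabled = prbs_enabled

def can_enable_prbs(facility_dtps):
    # PRBS may be enabled only if every DTPCTP on the facility is in maintenance.
    return all(d.in_maintenance for d in facility_dtps)

def can_unlock(facility_dtps):
    # A DTP may be set to unlocked only if PRBS is disabled on all DTPs
    # on the facility.
    return not any(d.prbs_enabled for d in facility_dtps)

# Example: all four DTPCTPs on facility 1-A-3-T1-1 are in maintenance.
dtps = [Dtp(f"1-A-3-T1-1-{i}", in_maintenance=True) for i in range(1, 5)]
```

For example, `can_enable_prbs(dtps)` holds only while all four DTPCTPs remain in maintenance; taking any one of them out of maintenance makes the check fail.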
On the DTN-X, 8GFC and 10GFC services on the TIM-5-10GX and TIM-5-10GM support both test signal
generation and monitoring.
Note: Since the Fibre Channel test signal affects normal data traffic flow, it must be used only when
the associated facility is in administrative maintenance state.
Note: Since the GbE test signal affects normal data traffic flow, it must be used only when the
associated facility is in administrative maintenance state.
For DTC/MTC clients, the GbE CTP test signal is available only for the 1GbE ports on the TAM-8-1G and
TAM-8-2.5GM; for 10GbE ports on the TAM-2-10GM and DICM-T-2-10GM; for 40GbE ports on the
TAM-1-40GE and TAM-1-40GR; and for 100GbE ports on the TAM-1-100GE and TAM-1-100GR.
For XTC clients, GbE CTP test signals are supported by all Ethernet ports.
Note: GbE CTP test signal monitoring (both tributary and line) are disabled for TIM-1-100GM and
TIM-1-100GX when in 100GbE-ODU4-4i-2ix10V mode.
external test set is required to monitor the signal. Alternatively, PCS faults and PCS PMs can be
used to verify the end-to-end data path.
■ The TAM-1-100GE, TAM-1-100GR, TAM-1-40GE, and TAM-1-40GR do not support monitoring of
line-side GbE client termination point tests. These TAMs can generate tributary-side GbE client
termination point test signals, but an external test set is required to monitor the signal.
■ The TAM-8-2.5GM, TAM-1-100GE, TAM-1-100GR, TAM-1-40GE, and TAM-1-40GR do not support
monitoring of tributary-side GbE client termination point tests. These TAMs can generate tributary-
side GbE client termination point test signals, but an external test set is required to monitor the
signal.
■ For monitoring of GbE client termination point tests for TIM-5-10GM, XICM-T-5-10GM,
TIM-5B-10GM, and TIM-5-10GX, note that PM counts for both Test Signal Sync Errors and Test
Signal Out of Sync Errors will increment at the same time.
■ For information on generating and monitoring GbE CTP test signals on the TAM-8-1G, see the
following section, Rules for Performing 1GbE Client Termination Point Tests on the TAM-8-1G on
page 2-60.
Rules for Performing 1GbE Client Termination Point Tests on the TAM-8-1G
The ports on the TAM-8-1G are divided into two physical port sets: {1a, 1b, 2a, 2b} and {3a, 3b, 4a, 4b}.
The following rules apply to generating and monitoring the test signals on the TAM-8-1G port sets:
Monitoring GbE Test Signals:
■ Only one port in each port set can be configured to monitor the line-side test signal at any one time.
■ Only one port in each port set can be configured to monitor the tributary-side test signal at any one
time.
■ Although it is not possible to configure two ports in a port set to monitor the same side (tributary or
line) at the same time, it is possible to configure one port in a port set to monitor one side while
another port (or even the same port) monitors the other side. That is, one port can monitor the
tributary side while another port in the port set monitors the line side.
Generating GbE Test Signals:
■ Any number of ports in a port set can simultaneously generate a line-side test signal, even while
one or more ports in the same port set are generating tributary-side test signals.
■ Any number of ports in a port set can simultaneously generate a tributary-side test signal, even
while one or more ports in the same port set are generating line-side test signals.
Monitoring and Generating GbE Test Signals at the Same Time:
■ It is possible for a port to monitor the test signal from one direction at the same time that one or
more ports in the port set are generating test signals in the same direction. That is:
□ It is possible for a port to monitor the tributary-side test signal when one or more ports in the
port set are generating tributary-side test signals.
□ It is possible for a port to monitor the line-side test signal when one or more ports in the port
set are generating line-side test signals.
■ It is not possible for a port to monitor the test signal from one direction when one or more ports in
the port set are generating test signals in the other direction. That is:
□ It is not possible for a port to monitor the tributary-side test signal when one or more ports in
the port set are generating line-side test signals.
□ It is not possible for a port to monitor the line-side test signal when one or more ports in the
port set are generating tributary-side test signals.
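As a hedged sketch only, the monitoring and generation rules above can be modeled as checks over a port set's state. The PortSetState class and the side names below are illustrative assumptions, not an Infinera API.

```python
# Illustrative model of the TAM-8-1G GbE test-signal rules; class and
# side names are assumptions, not an Infinera API.
PORT_SETS = ({"1a", "1b", "2a", "2b"}, {"3a", "3b", "4a", "4b"})

class PortSetState:
    def __init__(self):
        # Ports currently monitoring/generating on each side of this port set.
        self.monitoring = {"tributary": set(), "line": set()}
        self.generating = {"tributary": set(), "line": set()}

    def can_monitor(self, port, side):
        other = "line" if side == "tributary" else "tributary"
        if self.monitoring[side] - {port}:
            return False  # only one port per set may monitor a given side
        if self.generating[other]:
            return False  # cannot monitor one side while the other side is generating
        return True

    def can_generate(self, port, side):
        other = "line" if side == "tributary" else "tributary"
        # Any number of ports may generate, but generation on one side must
        # not invalidate an active monitor on the other side.
        return not self.monitoring[other]
```

For example, a second tributary-side monitor in the same port set is rejected, while same-direction generation alongside a monitor is allowed.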
Trace Messaging
Trace messaging is a non-intrusive diagnostic tool that provides Trail Trace Identifier (TTI) functionality to
allow detection and validation of peer nodes/devices connected over the fiber. If a mismatch is detected
between the transmitted/expected TTI and the received TTI, an alarm is raised to indicate the mismatch.
The XT supports TTI (Transmit TTI, Expected TTI, and Received TTI) on the following layers:
■ OCG layer (for XT-500S)
■ SCG layer (for XT-500F)
■ Digital Wrapper layer (for XT-3300 and XTS-3300)
The DTN-X and DTN support the trace messaging functions as described below (see Figure 2-44: Trace
Messaging on page 2-62):
■ Trace messaging at the SONET/SDH J0 on the tributary ports.
The DTN-X and DTN provide the capability to monitor and transmit J0 messages received from
the client equipment. This capability enables the detection of misconnections between the client
equipment and the DTN/DTN-X. The DTN/DTN-X can monitor 1-, 16-, and 64-byte J0 trace
messages. The DTN/DTN-X can either transparently pass on the J0 message, or it can receive and
then overwrite the incoming J0 message before transmitting the message toward the client
interface. The J0 message can be configured to comply with either the ITU standard or the
ANSI GR-253 standard.
Note: For OC-768 and STM-256 services on the TIM-1-40GM, transparent J0 trace messaging
is supported; J0 overwrite is not supported.
Note: J0 trace messaging is not supported for SONET/SDH services on the TIM-16-2.5GM.
■ Trail trace identifier (TTI) trace messaging at the DTF Section and DTF Path.
The DTN supports DTF Section trace messaging to detect any misconnections between the DTNs
within a digital link, and DTF Path trace messaging is utilized to detect any misconnections in the
DTN circuit path along the Intelligent Transport Network. The DTF trace messaging is independent
of the client signal payload type.
■ TTI trace messaging on ODUk, ODUki, OTUk, and OTUki.
The DTN-X and DTN support trace messaging to detect any misconnections between the DTN-Xs
within a digital link. Both DTN and DTN-X support TTI for ODUk and OTUk; the DTN-X supports
TTI for ODUki and OTUki for services on the XTC.
■ TTI trace messaging on the optical channel for DLM, XLM, ADLM, AXLM, and SLM.
The optical channel on the line modules supports TTI monitoring and insertion between line modules
over a fiber. This test verifies the functioning of the optical channel between two line modules
installed in the network.
■ J1 Path trace messaging over the OSC.
BMMs, OAMs, and ORMs on DTN-X and DTN support J1 Path trace messaging in order to
discover and continuously monitor the link connectivity between adjacent neighbor network
elements. J1 Path trace messages are continuously transmitted over the OSC, using the format “/
<NodeID>/<OTS TP AID>” (64 characters in length, padding unused bytes with ASCII null
characters, and terminated with “<CR><LF>”). BMMs, OAMs, and ORMs support J1 Path trace,
even when the OSC IP address is not configured, when GMPLS is disabled, or when the
BMM/OAM/ORM is put in the maintenance or locked state. J1 Path trace information will not be
available when there is a fiber cut (OTS OLOS condition) or an OSC Loss of Communication condition,
or if the BMM/OAM/ORM is pre-provisioned. J1 Path trace is not supported on RAMs, as RAMs do
not terminate the OSC.
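As a minimal sketch of the J1 Path trace format described above, the message can be built as follows. This assumes the <CR><LF> terminator occupies the final two of the 64 bytes, as in the common 64-byte trace format; the node ID and AID values used in the example are hypothetical.

```python
def build_j1_trace(node_id: str, ots_tp_aid: str) -> bytes:
    """Build a J1 Path trace message: '/<NodeID>/<OTS TP AID>', padded
    with ASCII NUL characters and terminated with <CR><LF>, 64 bytes total."""
    body = f"/{node_id}/{ots_tp_aid}".encode("ascii")
    if len(body) > 62:
        raise ValueError("J1 trace body exceeds the 62 bytes available")
    # Pad unused bytes with NULs, leaving room for the CR LF terminator.
    return body.ljust(62, b"\x00") + b"\r\n"

# Hypothetical example values:
message = build_j1_trace("NODE1", "1-A-1")
```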
In addition to the above messaging capabilities, the system also supports trace messaging on the line-
side Digital Channel (DCh) of the LM-80:
■ Digital Channel (DCh) TTI messaging (LM-80 only)—The digital channel on the LM-80 supports TTI
monitoring and insertion between LM-80s over a fiber. This test verifies the functioning of the digital
channel between two LM-80s installed in the network (see Figure 2-45: DCh Trace Messaging
Supported by the LM-80 on page 2-63).
Lastly, the system also supports specialized TTI messaging on the TAM-2-10GT and DICM-T-2-10GT:
■ TTI messaging at the Digital Channel (DCh) Section level (TAM-2-10GT and DICM-T-2-10GT only;
see Figure 2-46: Trace Messaging Supported by the TAM-2-10GT and DICM-T-2-10GT on page 2-
63).
The TTI on the Digital Channel level supports TTI monitoring and insertion between TAMs/DICMs
over a fiber.
■ TTI messaging at the Tributary DTF Path level (TAM-2-10GT and DICM-T-2-10GT only; see Figure
2-46: Trace Messaging Supported by the TAM-2-10GT and DICM-T-2-10GT on page 2-63).
The DTF Path-level TTI supports monitoring and insertion between TAMs/DICMs, such as over a
Layer 1 OPN.
Note: TTI insertion toward the client side is not supported, and the tributary-side TTI cannot be
monitored when the line-side TTI transmission is enabled.
The DTN-X supports the following SAPI and DAPI directions for the following entities:
■ Tributary ODUk terminal SAPI/DAPI (towards the network).
■ Line ODUk terminal and facility SAPI/DAPI (towards the tributary and towards the network), when
the ODUk is in non-intrusion mode.
■ Line ODUki facility SAPI/DAPI (towards the network).
■ Line OTUki facility SAPI/DAPI (towards the network).
Note: The DTN allows a maximum of 3 TCM CTPs per side (facility side and terminal side). As a
result, the node allows up to 6 total ODUkT CTPs for a given ODUk client CTP. For example, the
user can activate TCM IDs 1, 4, and 6 on the FAC side and TCM IDs 2, 3, and 5 on the TERM side
(but no more than three TCM CTPs per side).
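The per-side limit in the note above can be sketched as a simple check. The side names and state layout below are illustrative, not the actual management model.

```python
def can_activate_tcm(active_by_side, side, tcm_id):
    """Allow at most 3 active TCM CTPs per side (up to 6 total per ODUk CTP)."""
    if tcm_id in active_by_side[side]:
        return False  # this TCM ID is already active on this side
    return len(active_by_side[side]) < 3

# Partial state on the way to the note's example: TCM IDs 1 and 4 active on
# the FAC side, and 2, 3, and 5 active on the TERM side.
state = {"FAC": {1, 4}, "TERM": {2, 3, 5}}
```

Here activating TCM ID 6 on the FAC side is still allowed (it would be the third), while any further activation on the TERM side is rejected.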
Note: Path loss checks are not supported for SCG ports that are associated with an optical cross-
connect or an optical SNC.
In addition to running a path loss check on an individual port, path loss checks can be initiated on a per-
module level or on a per-node level:
■ Per module—A path loss check is initiated on an FRM or FSM, so that a path loss check is run for
each SCG on the module that supports the path loss check operation (listed in the table below).
Per-module initiation is supported from management interfaces. Note the following for per-module
path loss checks:
□ The per-module path loss check is not supported if any SCG on the module is already
running a path loss check.
□ Once a per-module path loss check is initiated, it cannot be aborted.
□ For FSMs, if the FSM is equipped with an FSE, path loss checks will also be run on any
applicable SCGs on the FSE. (Note that the per-module path loss checks cannot be initiated
directly on an FSE, but must be initiated on the FSM that contains the FSE.)
■ Per node—A path loss check is initiated on a nodal level, so that a path loss check is run on each
SCG that supports the path loss check operation for all the FRMs, FSMs, and FSEs on the node.
Per-node initiation is supported on GNM and DNA. Per-node path loss check initiation is supported
by all node types that support the MTC-9/MTC-6 chassis.
The table below lists the SCG connections for which path loss check is supported. The figures below the
table illustrate the different path loss checks supported for the different configurations.
Figure 2-47: Path Loss Check for FRM-9D to FSP-C/FMP-C Connectivity on page 2-66 shows an
example of FRM-9D to FSP-C/FMP-C connectivity.
The following path loss check is supported for this type of configuration:
■ FRM-9D System SCG port to FSP-C or FMP-C (via loopback on the FSP-C/FMP-C)
Figure 2-48: Path Loss Check for FSM/FSE to FSP-S to FRM-9D Connectivity on page 2-67 shows an
example of FSM/FSE to FSP-S to FRM-9D connectivity.
Figure 2-48 Path Loss Check for FSM/FSE to FSP-S to FRM-9D Connectivity
The following path loss checks are supported for this type of configuration:
■ FSM line SCG port to FRM-9D system SCG port
■ FRM-9D system SCG port to FSM line SCG port
■ FSE line SCG port to FRM-9D system SCG port
■ FRM-9D system SCG port to FSE line SCG port
■ FSM line SCG port (loopback at the FSP-S)
■ FSE line SCG port (loopback at the FSP-S)
■ FRM-9D system SCG port (loopback at the FSP-S)
Figure 2-49: Path Loss Check for FRM-9D/FRM-20X to FSP-E to FRM-9D/FRM-20X Connectivity on
page 2-67 shows an example of FRM-9D/FRM-20X to FSP-E to FRM-9D/FRM-20X connectivity.
Figure 2-49 Path Loss Check for FRM-9D/FRM-20X to FSP-E to FRM-9D/FRM-20X Connectivity
The following path loss checks are supported for this type of configuration:
■ FRM-9D/FRM-20X (labeled “A”) system SCG port to FRM-9D/FRM-20X (labeled “B”) system SCG
port
■ FRM-9D/FRM-20X (labeled “B”) system SCG port to FRM-9D/FRM-20X (labeled “A”) system SCG
port
The following path loss checks are supported for this type of configuration:
■ FSE expansion SCG port to FSM base SCG port
■ FSM base SCG port to FSE expansion SCG port
Note: For path loss check from FSM base SCG port to FSE expansion SCG port, an external power
source is required; therefore, the FSM tributary (add/drop) SCG port must be associated with an
AOFM/AOFX/SOFM/SOFX, and there should be no LOS condition on the FSM tributary port.
Furthermore, the FSM tributary (add/drop) SCG port must also be set to the “Path Loss Check
Source” mode (TRAFFICMOD=PATHLOSSCHECKSOURCE in TL1). At any given time, only one of
an FSM’s add/drop channels can be configured as a path loss check source.
the OTDM ports can be connected to the same IAM-2/IRM. The connected IAM-2/IRM modules
can be on the same chassis as the OTDM, or on a different chassis.
■ OTDR tests can be started on any one of the four OTDM ports, provided that the OTDM is not in
the Locked administrative state.
■ If one port of an OTDM is running a test, the other ports cannot start a new test.
■ For tests where the OTDM OTDR PTP port is associated with an Amplifier OTDR PTP on an IAM-2/
IRM, the Internal Spool Length of the IAM-2/IRM is compensated for in the test.
■ OTDR tests are aborted in case of control module switchover, physical removal of the OTDM, cold/
warm reboot of the OTDM, or chassis disconnectivity from the node controller.
Syslog
The Syslog feature supports standards-based autonomous notification services. It can be
used for troubleshooting and by analytics applications. Syslog is supported on the following IQ NOS
chassis types:
■ XTC-10, XTC-4, XTC-2 and XTC-2E
■ MTC-9 and MTC-6
■ XT-3300 and XT-3600
■ DTC
The following figure shows an example Syslog deployment scenario. In a GNE-SNE setup, a Syslog
server (Syslog host) sends requests for Syslog messages to the network element. The network element
responds to this request by sending autonomous Syslog notifications based on the request. Up to
three Syslog servers can be configured. In the current release, Splunk v6.5.1 has been certified as the
Syslog server.
The rsyslogd server is available by default with the Linux OS. Starting with Release 19.0, TLS is
supported as a Syslog transport protocol along with UDP. TLS enables log sources to receive encrypted
Syslog events from network elements.
For the Rsyslog server, below are the details of the certificates:
■ Local Certificates - server_certificate.p12
■ Peer Certificates - CaDer.p7b
For Syslog Server, below are details of the certificates:
■ CAFile: ca.crt
■ CertFile: Server.crt
■ KeyFile: Server.key
Syslog Message
A typical Syslog message includes information to identify the origination, destination, timestamp, and the
reason why the log is sent. The logs also have a severity level field to indicate the importance of the
message. For more information, refer to the DTN and DTN-X Alarm and Trouble Clearing Guide.
An example Syslog notification message is shown below.
1 2017-02-07T010:40:52.0Z 10.220.70.195 - - ALARM [notification@21296
SourceName="/SIM070191001/CXOCGPTP=1-L1" EventSubType="ALARM" EventType=
"Condition" LogId="3030" LogType="Event" UserInformation="" PerceivedSeverity=
"Minor" AssertedSeverity="Minor" ProbableCause="TIM-OCG" Category="Facility"
ServiceAffecting="NSA" AdditionalText="" ProbableCauseDescription="OCG TTI
Mismatch" CircuitIdInfo="" Location="NearEnd" CurrentThreshold="0" Direction=
"Receive" SentryId="0" MoId="/SIM070191001/ALARM=CXOCGPTP%1-L1%TIM-OCG"
UniqueID="FAC0471" arcEnabledAlarm="false" FaultConditions="8192" SnmpIndex=
"4194305" AlarmCorrelationId="3855" NotificationId="3855"] BOM TIM-OCG-OCG
TTI Mismatch
Note: Every alarm raised by the network element can be identified by the Unique ID field’s value in
the Syslog message. For details of the alarm descriptions and associated troubleshooting
procedures, refer to the DTN and DTN-X Alarm and Trouble Clearing Guide, which describes all
events and alarms indexed by Unique ID.
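As a hedged sketch, the structured-data portion of a notification like the one above can be parsed with a few lines of Python. The field names follow the example message; this is an illustrative helper, not an Infinera-supplied parser, and the condensed sample message is hypothetical.

```python
import re

def parse_alarm_fields(message: str) -> dict:
    """Extract key="value" pairs from the [notification@...] block of a
    Syslog notification message."""
    block = re.search(r"\[notification@\d+\s(.*?)\]", message, re.DOTALL)
    if not block:
        return {}
    return dict(re.findall(r'(\w+)="([^"]*)"', block.group(1)))

# Condensed, hypothetical sample based on the example notification above.
sample = ('1 2017-02-07T10:40:52.0Z 10.220.70.195 - - ALARM [notification@21296 '
          'SourceName="/SIM070191001/CXOCGPTP=1-L1" PerceivedSeverity="Minor" '
          'UniqueID="FAC0471"] BOM TIM-OCG-OCG TTI Mismatch')
fields = parse_alarm_fields(sample)
```

The extracted Unique ID (here FAC0471) can then be used to look up the alarm in the DTN and DTN-X Alarm and Trouble Clearing Guide.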
IQ NOS provides the following configurations to manage the Infinera network elements. For information
on node configurations and applications, see the DTN and DTN-X System Description Guide.
■ Equipment Management and Configuration on page 3-2
■ Migrating BMM based line systems to FRM based line systems on page 3-60
■ Migrating a DTN or Optical Amplifier to a DTN-X on page 3-58
Note: It is highly recommended to delete all pre-provisioned chassis and verify the reachability of all
chassis before initiating a software upgrade.
The PM data for the modules in a chassis are stored by the controller module on that same chassis.
■ If an Expansion Chassis has redundant controller modules, PM data is replicated on both the active
and standby controller modules.
■ If an Expansion Chassis does not have redundant controller modules, PM data is replicated to the
active controller module on the Main Chassis, and also to the standby controller module on the
Main Chassis (if the Main Chassis has redundant controller modules).
□ If the non-redundant controller module on the Expansion Chassis is replaced, the new
controller module will download the PM data from the active controller module on the Main
Chassis.
□ If a redundant controller module is subsequently installed on the Expansion Chassis, the PM
data will be deleted from the controller modules on the Main Chassis.
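The replication behavior described above can be sketched as follows. The controller-module labels are illustrative only.

```python
def pm_replication_targets(expansion_has_redundant_cms: bool,
                           main_has_redundant_cms: bool) -> list:
    """Return where an Expansion Chassis's PM data is held, per the rules above."""
    if expansion_has_redundant_cms:
        # Replicated on both controller modules of the Expansion Chassis.
        return ["expansion-active", "expansion-standby"]
    # Otherwise held on the Expansion Chassis's active controller module and
    # replicated to the Main Chassis's active controller module, plus the
    # Main Chassis's standby controller module if one is present.
    targets = ["expansion-active", "main-active"]
    if main_has_redundant_cms:
        targets.append("main-standby")
    return targets
```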
Managed Objects
IQ NOS defines software abstractions of all the hardware equipment, physical ports, and logical termination
points, referred to as managed objects, which are administered through the management applications.
Managed objects are modeled after the ITU-T and TMF general information modeling standards, which
provide an intuitive and convenient means of referencing the managed objects.
The figures below illustrate the most commonly used managed objects in each type of Infinera chassis
and the hierarchical relationship between them.
As shown, there are three major categories: hardware equipment, physical ports, and logical termination
points, which represent the termination of signals. Users can create and delete the equipment managed
objects while the physical port and logical termination points are automatically created with default
attributes when the parent equipment managed object is created. Users can modify the attributes of the
auto-created managed objects through the management applications. Note that multi-chassis network
elements are managed as single objects.
User operations, such as modifying the administrative state (see Administrative State on page 3-36) and
modifying the alarm reporting state (see Alarm Reporting Control (ARC) on page 2-7), of a given
managed object impact the behavior of the corresponding contained and supported/supporting managed
objects. For example, when a user modifies the administrative state of a BMM to locked, the service state
of the contained and supported managed objects, DCF, C-band, OCG, OSC, GMPLS link, etc., is
changed to out-of-service. Similarly, when ARC is enabled on a BMM, alarm reporting is inhibited for all
the corresponding contained and supported managed objects. The following figures show the hierarchy of
managed objects for the different chassis and configurations:
■ Figure 3-1: Managed Objects and Hierarchy (DTN-X) on page 3-5 shows the hierarchy of
managed objects on a DTN-X.
■ Figure 3-2: Managed Objects and Hierarchy (DTN-X with ODU Multiplexing) on page 3-6 shows
the hierarchy of managed objects on a DTN-X when using ODUk switching.
■ Figure 3-3: Managed Objects and Hierarchy (DTN-X with PXM) on page 3-7 shows the hierarchy
of managed objects on a DTN-X with PXM.
■ Figure 3-4: Managed Objects and Hierarchy (DTN-X with OFx) on page 3-8 shows the hierarchy
of managed objects on a DTN-X with AOFX/AOFM/SOFM/SOFX.
■ Figure 3-5: Managed Objects and Hierarchy (DTN-X with 100G VCAT) on page 3-9 shows the
hierarchy of managed objects on a DTN-X when using 100G virtual concatenation (VCAT).
■ Figure 3-6: Managed Objects and Hierarchy (MTC-9/MTC-6) on page 3-10 shows the hierarchy of
managed objects on an MTC-9/MTC-6 chassis with FSM, FMM-F250, FRM-9D, IAM, IRM.
■ Figure 3-7: Managed Objects and Hierarchy (MTC-9/MTC-6 with FRM-4D) on page 3-11 shows
the hierarchy of managed objects on an MTC-9/MTC-6 chassis with FRM-4D.
■ Figure 3-8: Managed Objects and Hierarchy (MTC-9/MTC-6 with OPSM) on page 3-12 shows the
hierarchy of managed objects on an MTC-9/MTC-6 chassis with OPSM.
■ Figure 3-9: Managed Objects and Hierarchy (DTC/MTC with Line Modules) on page 3-13 shows
the hierarchy of managed objects on a DTC/MTC with line modules.
■ Figure 3-10: Managed Objects and Hierarchy (DTC/MTC with LM-80s) on page 3-14 shows the
hierarchy of managed objects on a DTC/MTC with LM-80s. (Note that line modules and LM-80s
can be combined in the same chassis, but for simplicity Figure 3-9: Managed Objects and
Hierarchy (DTC/MTC with Line Modules) on page 3-13 shows only line modules and Figure 3-10:
Managed Objects and Hierarchy (DTC/MTC with LM-80s) on page 3-14 shows only LM-80s.)
■ Figure 3-11: Managed Objects and Hierarchy (Base/Expansion BMM2 on DTC/MTC) on page 3-
15 shows the managed objects on the BMM2 expansion/base modules.
■ Figure 3-12: Managed Objects and Hierarchy (OTC) on page 3-15 shows the hierarchy of
managed objects on an OTC.
■ Figure 3-14: Managed Objects and Hierarchy (XT-500S/XT-500F) on page 3-17 shows the
hierarchy of managed objects on an XT-500S/XT-500F.
■ Figure 3-15: Managed Objects and Hierarchy (XT(S)-3300) on page 3-18 shows the hierarchy of
managed objects on an XT(S)-3300.
Figure 3-2 Managed Objects and Hierarchy (DTN-X with ODU Multiplexing)
Figure 3-5 Managed Objects and Hierarchy (DTN-X with 100G VCAT)
Figure 3-9 Managed Objects and Hierarchy (DTC/MTC with Line Modules)
■ The Intelligent Transport Network topology, including Physical Topology and Service Provisioning
topology (see Network Topology on page 8-3)
■ The optical data plane connectivity (see Optical Data Plane Auto-discovery on page 3-20)
IQ NOS maintains the inventory of all the automatically discovered resources, as described above, and
also the user provisioned services which includes:
■ Cross-connects provisioned using Manual Cross-connect Provisioning mode (including
Channelized cross-connects and Associations)
■ Circuits provisioned using Dynamically Signaled SNC Provisioning mode (including Channelized
SNCs and sub-SNCs)
■ Cross-connects that are automatically created while creating circuits utilizing Dynamically Signaled
SNC Provisioning mode (including Channelized cross-connects and Associations)
■ Protection groups that have been provisioned
Refer to DTN Service Provisioning on page 4-2 and DTN-X Service Provisioning on page 4-33 for
more details.
Multi-chassis Discovery
IQ NOS provides the ability to automatically detect multiple chassis in Infinera nodes, along with detailed
information for each chassis, including:
■ Label name
■ CLEI code
■ Product ordering name (PON)
■ Manufacturing part number
■ Serial number
■ Hardware version
■ Manufacturing date
■ Internal temperature
■ Rack name
■ Provisioned serial number
■ Location in rack
■ Alarm Cutoff (ACO) state (enabled or disabled)
■ Chassis-level alarm reporting (enabled or disabled)
Note: For configurations with an FRM-9D/FRM-20X or FBM with an Infinite Capacity Engine 4 module
(i.e. XT(S)-3300/OFx-1200/XT(S)-3600/MXP-400) in Open Wave configuration, auto-discovery is not
supported between the Open Wave ICE 4 line module or network element and the FRM or FBM.
However, auto-discovery and power control loops are supported between FRMs and FBMs.
Note: For configurations with an FBM or FRM-9D/FRM-20X with an Infinite Capacity Engine 4 module
(i.e. XT(S)-3300/OFx-1200/XT(S)-3600/MXP-400) in SCG Line System Mode, Release 18.2 supports
native auto-discovery bypass between the ICE 4 module and the FRM/FBM but supports a power
control loop between them when the following configurations are made:
■ The Infinite Capacity Engine 4 module (i.e. XT(S)-3300/OFx-1200/XT(S)-3600/MXP-400) is in
SCG Line System Mode
■ FBM is in Active Line Operating Mode
■ FRM-9D/FRM-20X/FBM SCG Interface type is set as Infinera Wave
For more information, see Bypass Native Auto-discovery.
OCG-based Auto-discovery
Note: Unless specifically noted otherwise, all references to “line module” will refer interchangeably to
either the DLM, XLM, ADLM, AXLM, SLM, AXLM-80, ADLM-80 and/or SLM-80 (DTC/MTC only) and
AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, and/or SOLX2 (XTC only). The term “LM-80”
is used to specify the LM-80 sub-set of line modules and refers interchangeably to the AXLM-80,
ADLM-80 and/or SLM-80 (DTC/MTC only). Note that the term “line module” does not refer to TEMs,
as they do not have line-side capabilities and are used for tributary extension.
Note: Unless specifically noted otherwise, all references to the BMM will refer to either the BMM,
BMM2C, BMM2, BMM2P, BMM1H, and/or BMM2H interchangeably.
Note: When a module is in the Auto-discovery process, there will be a delay in the retrieval of the
module’s performance monitoring data. The delay may be up to 15 seconds.
Note: Auto-discovery is not supported for line modules in Open Wave configuration, neither between
Open Wave line modules, nor between an Open Wave line module and a BMM (see Open Wave Line
Module Configuration - Network Applications in #unique_60/
unique_60_Connect_42_dtn_and_dtnx_sdg).
Note: Auto-discovery is not supported for XT(S)-3300 line modules in Open Wave configuration, nor
between an FBM and FRM-20X (see Open Wave Line Module Configuration - Network Applications
in #unique_60/unique_60_Connect_42_dtn_and_dtnx_sdg).
The DTN and DTN-X support Auto-discovery for the following types of optical connections:
■ BMM-to-BMM connection (Optical Express): A front-accessible optical patch cord is connected
from the Optical Carrier Group (OCG) port on one BMM to the OCG port on the other BMM. Optical
Express allows optical pass-through of one or more OCGs through a node. Auto-discovery is
supported for connections between BMM2s, between BMM2Ps, and between BMM2Cs, but it is not
supported for Optical Express connections between Gen 1 BMMs (manual configuration is required
for Optical Express between Gen 1 BMMs). Auto-discovery is also not supported for Optical
Express connections in which one or both BMMs are set to SLTE mode. For more information on
Optical Express connections, see Optical Express on page 4-27.
Note: For Optical Express connections between BMMs, the BMM OCG can be locked without
affecting traffic. However, for Optical Express connections between BMM2s, between BMM2Ps, and
between BMM2Cs, locking the BMM2/BMM2P/BMM2C OCG re-triggers Auto-discovery, thus
impacting traffic. Make sure that BMM2/BMM2P/BMM2C OCGs are unlocked so that Auto-discovery
can complete and traffic is restored.
■ Line module-to-BMM connection: A front-accessible optical patch cord is connected from the OCG
port on a BMM to the OCG port on a line module to carry the 100Gbps OCG (or 500Gbps OCG for
DTN-X) signal between the BMM and line module.
□ For GAM connections, Auto-discovery is supported only in the multiplex direction (from the
line module to the GAM to the BMM2/BMM2P).
■ Connections between a TOM on a DTN and a TOM on an ATN. The DTN supports Auto-discovery
for optical connections between a TOM on a TAM-2-10GT on a DTN and a TOM on a SIM-GT or
SIM-GMT on an ATN.
Infinera network elements support the Auto-discovery of line module-to-BMM connections, for both single
and multi-chassis configurations. Auto-discovery detects misconnections between the modules,
including:
■ Connecting a line module to a wrong OCG port on the BMM. For example, connecting a line
module with an OCG3 output to an OCG5 input port on a BMM.
■ Connecting a line module to a BMM in conflict with the pre-provisioned association of the BMM and
line module. For example, pre-provisioning an OCG3 port on a BMM to be associated with the line
module in slot 4, but then incorrectly connecting the fiber to the line module in slot 3 (though it may
support OCG3).
On detecting a misconnection, alarms are reported so that the user can correct the connectivity. Also, the
line module is prohibited from transmitting optical signals towards the BMM to prevent the misconnection
from interfering with the other operational line modules. In addition, the operational state of the line
module OCG is changed to disabled.
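The misconnection handling above can be sketched as a simple check. This is an illustrative sketch only; the function and parameter names (check_line_module_connection, expected_assoc, discovered) are hypothetical and not the actual IQ NOS implementation.

```python
# Hypothetical sketch of the misconnection checks described above;
# names are illustrative, not an IQ NOS API.

def check_line_module_connection(expected_assoc, discovered):
    """Return a list of (slot, reason) alarms for misconnected line modules.

    expected_assoc: {BMM OCG port: pre-provisioned line module slot}
    discovered:     {BMM OCG port: (line module slot, line module OCG number)}
    """
    alarms = []
    for ocg_port, (slot, lm_ocg) in discovered.items():
        if lm_ocg != ocg_port:
            # e.g., an OCG3 output cabled to the OCG5 input port on the BMM
            alarms.append((slot, "line module OCG does not match BMM OCG port"))
        elif expected_assoc.get(ocg_port) not in (None, slot):
            # Cabled to a different slot than the pre-provisioned association
            alarms.append((slot, "conflicts with pre-provisioned association"))
    return alarms

# OCG3 output from the module in slot 3 cabled into the BMM's OCG5 port:
print(check_line_module_connection({5: 4}, {5: (3, 3)}))
```

On a real node, each reported alarm would also cause the line module transmitter to be prohibited and the line module OCG operational state to be set to disabled, as described above.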
The optical data plane Auto-discovery involves control message exchanges between the active controller
module in the Main Chassis and the BMMs and line modules, in addition to the control message
exchange between the line module and BMM over the optical data path. The optical data plane Auto-
discovery requires the control plane to be available. The following protocol limitations can prevent
proper detection of a line module-to-BMM misconnection:
■ While Auto-discovery is in progress, there is a 5-second window during which a BMM will not
discover any re-cabling performed by the user. Therefore, do not perform re-cabling while Auto-
discovery is in progress. (The BMM and line module automatically initiate optical data plane
Auto-discovery upon certain events.)
■ If users inadvertently connect an incorrect high power signal to the OCG port on a BMM (for
example connecting the line port output to the OCG input port on a BMM), it could impact traffic on
the other operational OCG ports on the BMM.
■ The Auto-discovery procedure requires that the connectivity between the BMM and line module be
bidirectional. In other words, the transmit and receive pair of a given OCG port on a BMM must be
connected to the transmit and receive pair of the same line port of the line module. If this is not
done properly, active traffic will be impacted.
■ The BMM may not detect a misconnection if the fiber is re-cabled under the following conditions,
during which the control messages pertaining to the Auto-discovery could be lost. Refrain from re-
cabling during these conditions:
□ BMM is rebooting
□ BMM is shut down
□ BMM is unplugged
FlexILS Auto-discovery
Note: The following terminology is used in this document for FlexChannel line modules:
■ "OFx" is used to refer to all FlexChannel line modules collectively (AOFM-500, SOFX-500B,
AOFM-100, etc.)
■ "OFx-500" is used to refer to FlexChannel 500G modules (AOFM-500, AOFX-500B,
SOFX-500, etc.)
■ "OFx-100" is used to refer to FlexChannel 100G modules (AOFM-100 and AOFX-100)
■ "OFx-1200" is used to refer to FlexChannel 1200G modules (AOFM-1200, AOFX-1200,
SOFM-1200, SOFX-1200, etc.)
■ "SOFx" is used to refer only to Submarine FlexChannel modules (SOFX-500, SOFX-500B,
SOFX-1200, etc.)
■ "AOFx" is used to refer only to non-Submarine FlexChannel modules (AOFM-500, AOFX-100,
etc.)
Where there are further differences in behavior/support, the exact module type(s) will be mentioned
specifically.
This section describes the optical connections for which Auto-discovery is supported for FlexILS nodes.
Note the following for Auto-discovery for connections on FlexILS nodes:
■ Unlike Auto-discovery for connections between OCG-based modules, FlexILS modules initiate
Auto-discovery only after an optical service (optical SNC or optical cross-connect) is provisioned
over the connection.
Note: To enable faster traffic recovery, starting from Release 10.1 Auto-discovery does not trigger if
an optical cross-connect or optical SNC is deleted and subsequently re-created, as long as the
service is re-created within 60 minutes of when it was deleted. (In the previous release, Auto-
discovery was triggered immediately upon re-creation.)
■ For configurations with FRM-9D and FMP-C, Auto-discovery is not supported for connections
between the FRM-9D and FlexChannel line modules (OFx-500) when connecting via an FMP-C. In
this case, the FlexChannel line module’s SCG must be configured for passive multiplexing mode.
■ For configurations with an FRM-9D/FRM-20X or FBM with an Infinite Capacity Engine 4 module
(i.e. XT(S)-3300/OFx-1200/XT(S)-3600) in Open Wave configuration, auto-discovery is not
supported between the Open Wave ICE 4 line module or network element and the FRM or FBM.
However, auto-discovery and power control loops are supported between FRMs and FBMs.
■ For configurations with an FBM or FRM-9D/FRM-20X with an Infinite Capacity Engine 4 module
(i.e. XT(S)-3300/OFx-1200/XT(S)-3600) in SCG Line System Mode, Release 18.2 supports native
auto-discovery bypass between the ICE 4 module and the FRM/FBM but supports a power control
loop between them when the following configurations are made:
□ The Infinite Capacity Engine 4 module (i.e. XT(S)-3300/OFx-1200/XT(S)-3600) is in SCG
Line System Mode
□ FBM is in Active Line Operating Mode
□ FRM-9D/FRM-20X/FBM SCG Interface type is set as Infinera Wave
For more information, see Bypass Native Auto-discovery on page 3-31.
■ For configurations with FMM-C-12 to FlexROADM Broadcast Module (FBM) to FRM-20X (i.e., the
FMM-C-12 Line Operating Mode is set to Passive Modeling Mode), auto-discovery is not supported
between the FMM-C-12 and the FRM-20X.
■ For configurations with FMM-C-12 and FRM-20X through an FSP-C (i.e., the FMM-C-12 Line
Operating Mode is set to Active), auto-discovery between the FMM-C-12 (line OUT port) and the
FRM-20X (System IN port) is supported.
■ Auto-discovery with FlexChannel line modules is unidirectional: from the OFx-500 to the FSM/FRM,
for example. Auto-discovery is not supported in the opposite direction (from the FSM/FRM towards
the OFx-500).
The figure below shows the Auto-discovery supported between OFx-500, FSM/FSE, and FRM-9D:
The figure below shows the Auto-discovery supported between OFx-500 and FRM-9D:
■ From the OFx-500 to the FRM-9D (system IN) port
The figure below shows the Auto-discovery supported between OFx-500 and FRM-4D:
■ From the OFx-500 to the FRM-4D (system IN) port
Figure 3-19 Auto-discovery for FRM-9D to FRM-9D (via FSP-E): Sample express between two FRM-9Ds
The figure below shows the Auto-discovery supported between FRM-4D in express connections:
■ From the system OUT port on the FRM-4D labeled “A” to the system IN port on the FRM-4D
labeled “B”.
■ From the system IN port on the FRM-4D labeled “A” to the system OUT port on the FRM-4D
labeled “B”.
Note: Express connections between FRM-4Ds require two fibers to connect each system IN port to
the system OUT port on the other FRM-4D. Auto-discovery cannot complete for express connections
between FRM-4Ds unless both fibers are connected.
Note: For any pair of FRM-4Ds, only one express connection is supported between the two
FRM-4Ds. For example, if a pair of FRM-4Ds has an express connection between their System 6
ports, the same pair of FRM-4Ds cannot support an express connection between their System 5
ports.
The figure below shows the Auto-discovery supported between OFx-500, FMM-F250, and FRM-4D:
■ From the OFx-500 to the FMM-F250 (add/drop IN) port
■ From the FMM-F250 (line OUT) port to the FRM-4D (system IN) port
The figure below shows the Auto-discovery supported between OFx-500, FMM-F250, FSP-C, and
FRM-9D:
■ From the OFx-500 to the FMM-F250 (add/drop IN) port
■ From the FMM-F250 (line OUT) port to the FRM-9D (system IN) port
The figure below shows the Auto-discovery supported between OFx-100, FMM-C-5, and FRM-4D:
■ From the OFx-100 to the FMM-C-5 (add/drop IN) port
■ From the FMM-C-5 (line OUT) port to the FRM-4D (system IN) port
The figure below shows the Auto-discovery supported between OFx-100, FMM-C-5, BPP, and FRM-4D:
■ From the OFx-100 to the FMM-C-5 (add/drop IN) port
■ From the FMM-C-5 (line OUT) port to the BPP (system IN) port
■ From the BPP (line OUT) port to the FRM-4D (system IN) port
Note: Because the BPP is a passive device, the BPP connections between the FMM-C-5 and the
FRM-4D must be manually provisioned:
■ If the BPP connections are not manually provisioned, Auto-discovery will complete between the
FMM-C-5 and the FRM-4D but the passive BPP will not be automatically discovered.
■ If the BPP connections are manually configured but the actual fiber is connected directly
between the FMM-C-5 and the FRM-4D, Auto-discovery has no way to detect that the fiber isn’t
connected through the BPP and Auto-discovery between the FMM-C-5 and the FRM-4D will
complete with no errors.
■ If the connection between the FMM-C-5 and the BPP System port is provisioned (the user has
configured the Provisioned Neighbor TP on the BPP to the AID of the FMM-C-5 Line AID), but
the connection between the BPP Line port and the FRM-4D is not provisioned, a
misconnection alarm will be reported on the FRM-4D System port and Auto-discovery will not
complete.
■ If the connection between the BPP Line port and the FRM-4D is provisioned (the user has
configured the Passive Provisioned Neighbor TP on the FRM-4D to the AID of the BPP Line
AID), but the connection between the FMM-C-5 and BPP System Port is not provisioned, a
misconnection alarm will be reported on the FRM-4D System port and Auto-discovery will not
complete.
The figure below shows the Auto-discovery supported between OFx-100, FMM-C-5, FSP-C, and
FRM-9D/FRM-20X:
■ From the OFx-100 to the FMM-C-5 (add/drop IN) port
■ From the FMM-C-5 (line OUT) port to the FRM-9D/FRM-20X (system IN) port
Figure 3-25 Auto-discovery for OFx-100, FMM-C-5, FSP-C, and FRM-9D/FRM-20X (Example with
FRM-9D)
The figure below shows the Auto-discovery supported between the FMM-C-12 and the FRM-9D:
■ From the FMM-C-12 (line OUT) port to the FRM-9D (system IN) port
Note: Auto Discover Neighbor is only applicable for FRM-9D/FRM-20X and FBMs with SCG
Interface type as Infinera Wave and is not supported for other SCG Interface types such as
Manual Mode 2, SLTE Manual etc.
■ Power Control Loop is enabled automatically when Auto Discover Neighbor is set to the disabled state.
Note: "OFx-100" is used to refer to FlexChannel 100G modules (AOFM-100 and AOFX-100).
This section describes the optical connections for which Auto-discovery is supported for configurations
with OFx-100, FMM-C-5, and BMM.
Note: Auto-discovery with FlexChannel line modules is unidirectional: From the OFx-100 to the FMM-
C-5 and from the FMM-C-5 to the BMM. Auto-discovery is not supported in the opposite direction
(there is no Auto-discovery from the FMM-C-5 towards the OFx-100, nor from the BMM towards the
FMM-C-5).
Figure 3-27: Auto-discovery for OFx-100, FMM-C-5, and BMM on page 3-32 shows the Auto-discovery
supported between OFx-100, FMM-C-5, and BMM:
■ From the OFx-100 to the FMM-C-5 (add/drop IN) port
■ From the FMM-C-5 (line OUT) port to the BMM (OCG IN) port
Note: Contact the Technical Assistance Center (TAC) before configuring target power offset.
The DTN-X and DTN use various modulation formats depending on the line modules on the system:
■ For configurations with DLMs, XLMs, ADLMs, AXLMs, and/or SLMs, a BMM uses OOK (On-Off
Keying) modulation.
■ For configurations with AOLMs, AOLM2s, AOLXs, AOLX2s, SOLMs, SOLM2s, SOLXs, SOLX2s,
and LM-80s that support different modulation formats (see System Data Plane Functions in
#unique_60/unique_60_Connect_42_dtn_and_dtnx_sdg), a BMM may be transmitting across the
line a combination of differently modulated channels (OOK, PM-QPSK, PM-BPSK, etc.).
If all channels use the same modulation format, the BMM can use the same launch power for all of the
channels when transmitting the channels across the optical transport section (OTS). However, if a BMM is
transmitting channels that use different modulation formats, the co-existence of different modulation
schemes in the same OCG can present a problem: The reach of a phase-modulated signal such as PM-
QPSK can be impaired by neighboring channels that use OOK modulation if all channels have equal
channel power. This problem can be corrected by modifying the launch power for various channels or
OCGs based on link design. For this reason, the user can configure the target power offset on a per-OCG
basis on the BMM and on a per-optical channel (OCH) basis on the LM-80:
■ On the OCG level, the power of the ingress signal to the BMM on that OCG is reduced by the
amount defined by the target power offset value configured on the BMM OCG. Different offset
values (within the supported range) can be provisioned for every OCG within an OTS.
Note: When target power offset on the BMM OCG is changed from a more negative to a less
negative or zero offset (e.g., from -4dB to 0dB), the “OPR-OOR-L” and “Power Adjustment
Incomplete” alarms may be reported for the duration of the time it takes to adjust the power to
incorporate the new offset value.
■ On the OCH level, the power of the ingress signal to the CMM from the LM-80 OCH is reduced by
the amount defined by the target power offset value configured on the CMM OCH PTP. Thus, the
power of individual channels within an OCG can also be offset. Different channel offset values
(within the supported range) can be provisioned for every LM-80 OCH within an OCG.
Infinera’s Automated Gain Control (AGC) relies on a minimum power received (and thus a minimum
number of channels) in order to detect a signal and complete Auto-discovery of optical connections as
well as to accurately perform calculations for amplifier gain settings. Therefore, AGC requires:
■ Per OCG: At least 2 channels
■ Per OTS:
□ For ILS1 and ILS2: At least 8 channels
□ For FlexILS: At least 2 channels (4 slices = 50GHz band equivalent power)
■ From an expansion BMM to its associated base BMM: At least 8 channels
Note: Release 18.1 supports Infinera's automated gain control for C-Band traffic only. Automated
gain control for L-Band signals is not supported in this release and will lead to errors in the gain
control calculations.
Note: For LM-80s, if an optical channel (OCH PTP) is in the locked state, that channel does not count
towards the minimum channel count.
Even if the above requirements are satisfied, it is possible that the launch power over the line is reduced
due to target power offset, thus dropping the launch power below what would be expected for two
channels, and this may also prevent Auto-discovery from completing and/or cause erroneous gain
calculations.
With the introduction of target power offset, it becomes important to consider the number of “effective
channels” carried by the OCG and also by the OTS.
For example, an OCG might contain two channels, but if a target power offset has been applied to the
OCG and/or to any of the channels in the OCG, the number of effective channels may be less than the
minimum requirement of two channels. Furthermore, because an LM-80’s channel power can be offset by
both the OCH target power offset and by the containing OCG target power offset, it is important to note
that each channel can support a maximum of -4dB total target power offset. For example, if the BMM
OCG target power offset is set to -3dB, the channels in that OCG can support a maximum CMM OCH
target power offset of -1dB, for a total target power offset of -4dB.
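The -4dB cap on the combined offsets amounts to a simple arithmetic check, sketched below. The function name is hypothetical; this is not an IQ NOS API.

```python
# Illustrative check of the -4dB total target power offset cap described
# above; the names are hypothetical, not an IQ NOS API.
MAX_TOTAL_OFFSET_DB = -4.0

def och_offset_allowed(ocg_offset_db, och_offset_db):
    """True if the combined BMM OCG + CMM OCH offset stays within -4dB."""
    return (ocg_offset_db + och_offset_db) >= MAX_TOTAL_OFFSET_DB

print(och_offset_allowed(-3.0, -1.0))  # -4dB total is still allowed
print(och_offset_allowed(-3.0, -2.0))  # -5dB total exceeds the cap
```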
The table below shows how the total target power offset applied to OCG reduces the number of effective
channels in the OTS. For example, if an OCG has three channels and is configured with a target power
offset of -2dB, the effective channel count will be 1.9, which does not meet the minimum channel
requirement per OCG. The table indicates with an asterisk (*) the target power offset values that will not
meet the minimum channel requirement per OCG.
Target Power Offset  Number of Actual Channels
Value (dB)           1     2     3     4     5     6     7     8     9     10
 0                   1.0*  2.0   3.0   4.0   5.0   6.0   7.0   8.0   9.0   10.0
-1                   0.8*  1.6*  2.4   3.2   4.0   4.8   5.6   6.4   7.1   7.9
-2                   0.6*  1.3*  1.9*  2.5   3.2   3.8   4.4   5.0   5.7   6.3
-3                   0.5*  1.0*  1.5*  2.0   2.5   3.0   3.5   4.0   4.5   5.0
-4                   0.4*  0.8*  1.2*  1.6*  2.0   2.4   2.8   3.2   3.6   4.0
Similarly, the table below shows how the target power offset applied to channels at the LM-80 OCH PTP
level affects the effective channel count in the OCG. For example, if an OCG has three channels and each
channel is configured with a target power offset of -2dB, the effective channel count will be 1.9, which
does not meet the minimum channel requirement per OCG. The table indicates with an asterisk (*) the
target power offset values that will not meet the minimum channel requirement per OCG.
Table 3-2 Effective Channels as a Result of LM-80 OCH PTP Target Power Offset
Target Power Offset  Number of Actual Channels
Value (dB)           1     2     3     4     5     6     7     8     9     10
 0                   1.0*  2.0   3.0   4.0   5.0   6.0   7.0   8.0   9.0   10.0
-1                   0.8*  1.6*  2.4   3.2   4.0   4.8   5.6   6.4   7.1   7.9
-2                   0.6*  1.3*  1.9*  2.5   3.2   3.8   4.4   5.0   5.7   6.3
-3                   0.5*  1.0*  1.5*  2.0   2.5   3.0   3.5   4.0   4.5   5.0
Because of this, it is important to ensure that the total number of active channels included in each OCG,
OTS, and between base/expansion BMMs meets the minimum channel requirement, taking into account
the total target power offset for the OCG and for all LM-80 channels in the OCG, and also taking into
account any optical channels on the LM-80 that are in the locked state (because locked channels do not
count towards the number of effective channels in the OCG):
■ The effective channel count in an OTS is the sum of the channel counts from each OCG.
■ The effective channel count in a CMM OCG is the sum of the channel counts from each
LM-80/CMM OCH.
For example, if there are five LM-80 OCH channels in a CMM OCG:
■ LM-80 OCH Channel 1 (configured in the corresponding CMM OCH) is assigned a target offset of
-3dB. The effective channel count for this channel is 0.5.
■ LM-80 OCH Channel 2 (configured in the corresponding CMM OCH) is assigned a target offset of
0dB. The effective channel count for this channel is 1.
■ LM-80 OCH Channel 3 (configured in the corresponding CMM OCH) is assigned a target offset of
-3dB. The effective channel count for this channel is 0.5.
■ LM-80 OCH Channel 4 (configured in the corresponding CMM OCH) is assigned a target offset of
0dB. The effective channel count for this channel is 1.
■ LM-80 OCH Channel 5 (configured in the corresponding CMM OCH) is assigned a target offset of
0dB, but the LM-80 OCH is in the locked state. The effective channel count for this channel is 0.
■ Total effective channel count in the CMM/BMM OCG will be 0.5 + 1 + 0.5 + 1 + 0 = 3.
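The effective-channel arithmetic above follows the standard dB-to-linear power conversion, 10^(offset/10). The sketch below reproduces the five-channel example; the function names are illustrative, not an IQ NOS API.

```python
# Sketch of the effective-channel arithmetic described above. The
# 10 ** (dB / 10) linear-power conversion reproduces the table values;
# the function names are illustrative.

def effective_channel(total_offset_db, locked=False):
    """Effective-channel contribution of one OCH given its total offset.
    Locked channels contribute 0, as described above."""
    return 0.0 if locked else 10 ** (total_offset_db / 10.0)

def ocg_effective_count(channels):
    """Sum the contributions of (offset_db, locked) pairs in one OCG."""
    return sum(effective_channel(off, locked) for off, locked in channels)

# The five-channel CMM OCG example above:
ocg = [(-3.0, False), (0.0, False), (-3.0, False), (0.0, False), (0.0, True)]
count = ocg_effective_count(ocg)
print(round(count, 1))  # 3.0
print(count >= 2)       # meets the minimum of 2 effective channels per OCG
```

The same conversion generates the table rows: three channels at -2dB give 3 x 10^(-0.2), which rounds to 1.9.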
Equipment Configuration
IQ NOS supports two modes of equipment configuration as described in the following sections:
■ Equipment Auto-configuration on page 3-35
■ Equipment Pre-configuration on page 3-35
In both cases, the termination points are automatically created after the circuit pack is configured.
Equipment Auto-configuration
As described in System Discovery and Inventory on page 3-18, IQ NOS automatically discovers the
equipment installed in the network element, enabling users to bring up circuit packs without manual
configuration. Auto-configuration is performed when a circuit pack is installed in a slot that is not yet
configured, neither pre-configured (see below) nor auto-configured. IQ NOS discovers the installed circuit
pack and also creates and configures the corresponding circuit pack managed object using default
configuration parameters. The default administrative state of an automatically created circuit pack is
unlocked so that the circuit pack can start operation without manual configuration. However, users can
modify this default state through the management applications.
Note: In case of an XT-500 chassis in breakout mode, the TOM managed object is created with a
default locked administrative state. The port should then be manually unlocked to enable breakout
mode.
Once a slot is populated and the circuit pack auto-configuration completes, the slot is configured and any
attempt to replace the circuit pack with a different circuit pack type will raise an alarm. To enable auto-
configuration of a different circuit pack in the same slot, the circuit pack configuration for the slot must first
be deleted through management applications.
In the case of a multi-chassis system, the Main chassis must be auto-configured; you may not manually
create it through a management interface. In contrast, Expansion Chassis may not be auto-configured;
you must manually create them through the management interface.
Equipment Pre-configuration
IQ NOS supports circuit pack pre-configuration where users can configure the slots to house a specific
circuit pack before physically installing it in the chassis. Such slots are displayed as pre-configured but
unpopulated through the management applications. For multi-chassis systems, only the Expansion
Chassis may be pre-configured.
When the circuit pack is installed in a pre-configured slot, the circuit pack becomes operational using pre-
configured data.
Once a slot is pre-configured for a circuit pack type, insertion of a different circuit pack type causes the
network element to generate an equipment mismatch alarm.
State Modeling
IQ NOS implements state modeling that meets the needs of all the supported management
applications and interfaces, and communicates the comprehensive state of the equipment and
termination points. IQ NOS state modeling complies with TMF814 and GR-1093 to meet the TL1
management interface standards.
Note: The TL1 agent software provides the appropriate translation of the node state model to reflect
the GR-1093 based TL1 state model.
IQ NOS defines a standard state model for all the managed objects, which includes equipment as well as
termination points as described in Managed Objects on page 3-3. IQ NOS defines the following states:
■ Administrative State—Represents the user’s operation on an equipment or termination point
(referred to as a managed object). See Administrative State on page 3-36.
■ Operational State—Represents the ability of the managed object to provide service. See
Operational State on page 3-39.
■ Service State—Represents the current state of the managed object, which is derived from the
administrative state and operational state. See Service State on page 3-40.
Administrative State
The administrative state enables the user to allow or prohibit the managed object from providing service.
The administrative state of the managed object can be modified only by the user through the
management applications. Also, a change in the administrative state of a managed object results in an
operational state change of the contained and supported managed objects. However, the administrative
states of the contained and supported managed objects are not changed.
Note: IQ NOS supports alarms that indicate when an entity is put in the locked or maintenance
administrative state. The severity of these alarms can also be customized via the ASPS feature (see
Alarm Severity Profile Setting (ASPS) on page 2-9).
Unlocked State
The managed object in unlocked state is allowed to provide services. Using management applications,
users can change the state of a managed object to unlocked state from either locked state or
maintenance state. This action results in the following behavior:
■ If there are any outstanding alarms on the managed object they are reported.
■ PM values for the managed object will be collected and reported as valid.
■ The managed object is available to provide services (provided its operational state is enabled).
However, if there is a corresponding redundant managed object that is active, the unlocked
managed object will be placed into standby mode (e.g., MCM).
Maintenance State
A managed object in the maintenance state is available for maintenance operations, such as trace
messaging, loopbacks, PRBS testing, GbE CTP test signals, etc. Users can change the state of a
managed object to the maintenance state from either locked state or unlocked state. This action
results in the following behavior:
■ All outstanding alarms are cleared on the managed object and all its dependent equipment and
facility objects. All new alarm reporting and alarm logging for the managed object and all the
dependent equipment and facility objects are suppressed until the managed object is
administratively unlocked again. (For example, if a line module is in maintenance, all outstanding
alarms are cleared on the line module, TIM, TAM, TOM, etc. and also on the facility objects like line
module OCG, Optical Channel, DTF Path, Client CTP, Tributary PTP, OTUki, ODUki, etc.)
■ PM values will be marked invalid for managed objects in the maintenance state and all of the
dependent facility objects.
■ Users can perform service-impacting maintenance operations, such as loopback tests, PRBS tests,
etc., without having any alarms reported.
■ The operational and service state of all contained and supported managed objects are modified;
the operational state is changed to disabled and the service state is changed to the OOS-MT (out-
of-service-maintenance) state.
Locked State
The managed object is available for service affecting provisioning, such as modifying attributes or
deleting objects. Users can change the administrative state of a managed object to the locked state from
either unlocked state or maintenance state through all management applications except for the TL1
Interface. In the TL1 Interface, users can change the administrative state of a managed object to the
locked state only from the unlocked state. Changing the administrative state to the locked state results in
the following behavior:
■ Depending on the type of managed object, the locked state may or may not provide services to
users:
□ For all traffic-carrying modules (BMM, DLM, AOLX, AXLM-80, TIM, TAM, TOM, OAM, ORM,
RAM, etc.), the managed object does not provide services to users, meaning that any traffic
on the module will be affected when the module is locked.
□ For the expansion BMM2P-8-CEH1, transmit traffic is affected when the module is locked.
For the other expansion BMMs (BMM2-8-CEH3 and BMM2H-4-B3), traffic is not affected
when the module is locked.
□ For CMMs and LM-80s:
Locking the CMM will affect service on both of the CMM’s OCGs.
Locking a single OCG on a CMM will affect the service on the locked OCG, but not on
the CMM’s other OCG.
Locking a single optical channel physical termination point (OCH PTP) on an LM-80
will affect service on the locked OCH PTP, but not on the LM-80’s other OCH PTP.
Locking an OCH PTP on the CMM does not affect service.
□ Locking the SCM card shuts down the Idler lasers. The Idler lasers will turn on only when the
associated SCM is in the unlocked state.
□ For facilities that can be locked, such as optical channel, OCG, OTUki, Tributary DTF, Line
DTF Path, etc., the facility can be put in the locked state without preventing traffic with the
following exceptions:
For BMM2/BMM2P OCGs involved in Optical Express connections, traffic will be
impacted when the facility is put in the locked state.
For OCGs on line modules (AXLMs, SLMs, AOLXs, SOLMs, etc.), the OCG can be put
in the locked state without impacting traffic on the OCG. However, new SNCs cannot
be provisioned using OCGs in the locked state since the associated TE link will be
down. Manual cross-connects can still be provisioned on line module OCGs in the
locked state.
For SNCs, the associated signaled cross-connects will be deleted and traffic will be
impacted when the SNC is put in the locked state.
□ For Tributary PTPs:
The payload/service type can be changed when the Tributary PTP is in the locked
state.
For all payload types, traffic on the Tributary PTP will be impacted when the facility is
put in the locked state.
Furthermore, the behavior of the Tributary PTP in the locked state is determined by
the Tributary Disable Action setting of the Tributary PTP:
If the Tributary Disable Action is set to Disable Laser, the tributary laser will be
shut down (applicable for all payloads).
If the Tributary Disable Action is set to Send AIS, AIS will be sent (applicable for
SONET/SDH only; this setting does not apply to GbE, Clear Channel, Fibre
Channel, etc.).
■ All outstanding alarms are cleared on the managed object and all its dependent equipment and
facility objects. No new alarms are reported on this object, nor for its dependent equipment and
facility objects.
■ PM values will be marked invalid for managed objects in the locked state and all of the dependent
facility objects.
■ The service state of all the contained and supported managed objects is modified; the service
state is changed to the OOS (out-of-service) state.
■ The operational state of this managed object is not changed, since the operational state is
determined by the object’s ability to provide service.
Note: Once the OSC is in the in-service, normal (IS-NR) state, it is not affected by the state of the
BMM/OAM/ORM. The OSC will remain IS-NR even if the administrative state of the
BMM/OAM/ORM is set to maintenance or locked.
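The administrative state transitions described in this section can be summarized in a small table of allowed moves. This is an illustrative model only (the names are hypothetical, not an IQ NOS API); it captures the rule that every interface can lock from unlocked or maintenance, except TL1, which can lock only from unlocked.

```python
# Illustrative model of the administrative state transition rules
# described in this section; not the actual IQ NOS implementation.

ALLOWED_FROM = {
    "unlocked":    {"locked", "maintenance"},
    "maintenance": {"locked", "unlocked"},
    "locked":      {"unlocked", "maintenance"},  # TL1 restricts this case
}

def can_transition(current, target, via_tl1=False):
    """Check whether a user-requested administrative state change is allowed."""
    if target == "locked" and via_tl1:
        # The TL1 interface allows locking only from the unlocked state
        return current == "unlocked"
    return current in ALLOWED_FROM.get(target, set())

print(can_transition("maintenance", "locked"))               # True
print(can_transition("maintenance", "locked", via_tl1=True)) # False
```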
Operational State
The operational state indicates the operational capability of a managed object to provide its services. It is
determined by the state of the hardware and software, and by the state of the supporting/containing
object; it is not configurable by the user. Two operational states are defined:
■ Enabled—The managed object is able to provide service. This typically indicates that the
corresponding hardware is installed and functional.
■ Disabled—The managed object cannot provide all or some services. This typically indicates that
the corresponding hardware has detected some faults or is not installed. For example, when a
provisioned circuit pack is removed, the operational state of the corresponding managed object
becomes disabled.
Each operational state may be further characterized by the following operational state qualifiers that
indicate an operational state due to the operational state of related objects, such as the supporting/
containing (ancestor) object:
■ Ancestor Unavailable, Supporting Unavailable, Related Object Unavailable—The managed object
is Unavailable.
■ Ancestor Locked, Supporting Locked, Related Object Locked—The managed object is Locked.
■ Ancestor Maintenance, Supporting Maintenance, Related Object Maintenance—The managed
object is in Maintenance state.
■ Ancestor Faulted, Supporting Faulted, Related Object Faulted—The managed object is faulted.
■ Ancestor Inhibited, Supporting Inhibited, Related Object Inhibited—The managed object is
Inhibited.
Service State
The service state represents the current functional state of the managed object which is dependent on the
operational state and the administrative state of the object and its ancestors. The following states are
defined:
■ In-service (IS)—Indicates that the managed object is functional and providing services. Its
operational state is enabled and its administrative state is unlocked.
■ Out-of-service (OOS)—Indicates that the managed object is not providing normal end-user services
because its operational state is disabled, the administrative state of its ancestor object is locked, or
the operational state of its ancestor object is disabled.
■ Out-of-service Maintenance (OOS-MT)—Indicates that the managed object is not providing normal
end-user services, but it can be used for maintenance test purposes. Its operational state is
enabled and its administrative state is maintenance.
■ Out-of-service Maintenance, Locked (OOS-MT, Locked)—Indicates that the managed object is not
providing normal end-user services, but it can be used for maintenance test purposes. Its
operational state is enabled and its administrative state is locked.
■ Automatic In-Service (AINS)—Indicates that the managed object will go automatically in-service
when associated tributary PTP faults are cleared (see Automatic In-Service (AINS) on page 3-40,
below).
When the valid signal timer expires, the Tributary PTP and any associated client CTP and/or SNC is
declared in service. If the fault condition re-occurs while the timer is active, the equipment/termination
point transitions to the “out-of-service:AINS” state and the valid signal timer is reset to its configured
value.
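The relationship between the operational, administrative, and service states described above can be expressed as a simple derivation rule. The following Python fragment is purely illustrative; the function and state names are assumptions for this sketch, not part of IQ NOS:

```python
# Illustrative derivation of the service state of a managed object from its
# operational state, administrative state, and ancestor state, per the
# definitions above. Names and structure are hypothetical.

def service_state(operational, administrative, ancestor_locked=False,
                  ancestor_disabled=False):
    """Return the derived service state of a managed object."""
    # OOS: the object is disabled, or an ancestor is locked or disabled.
    if operational == "disabled" or ancestor_locked or ancestor_disabled:
        return "OOS"
    # The object is enabled from here on.
    if administrative == "unlocked":
        return "IS"
    if administrative == "maintenance":
        return "OOS-MT"
    if administrative == "locked":
        return "OOS-MT,Locked"
    raise ValueError("unknown administrative state: %s" % administrative)
```

For example, an enabled object whose ancestor is administratively locked derives the OOS state even though its own administrative state is unlocked.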
Listed below are the conditions under which AINS can be enabled:
■ If the tributary port is in the locked or maintenance state and the associated client CTP is not in the
maintenance state.
■ If the tributary port is in unlocked state and the tributary port or the associated client CTP has a
fault on it.
■ If the tributary port is already-provisioned with AINS disabled, there must be a fault condition
present on the tributary port or the corresponding Client CTP. On an attempt to enable AINS when
there is no such fault condition, an error message is displayed.
Note: Once AINS is enabled for a port, the associated client CTP cannot be changed to the
maintenance state.
The AINS state is supported for both local and remote SNCs and sub-SNCs. AINS state is not supported
for channelized SNCs.
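The first two AINS enablement conditions above can be summarized as a predicate. This Python sketch is illustrative only; the parameter names and state values are assumptions, not the product's API:

```python
# Hypothetical restatement of the first two AINS-enable eligibility
# conditions listed above.

def can_enable_ains(port_admin, port_has_fault, ctp_in_maintenance,
                    ctp_has_fault):
    # Locked or maintenance port: allowed unless the associated client CTP
    # is in the maintenance state.
    if port_admin in ("locked", "maintenance"):
        return not ctp_in_maintenance
    # Unlocked port: allowed only when the port or its client CTP is faulted.
    if port_admin == "unlocked":
        return port_has_fault or ctp_has_fault
    return False
```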
Starting in Release 20.0, SNC Fail and CSF (Client Signal Failure) handling under the AINS condition
behaves as follows, for both an unprotected SNC configuration with a CSF on the T-ODUk and a
protected SNC configuration with a CSF on the L-ODUk:
■ The SNCFAIL alarm is not reported on detecting a CSF if AINS is enabled on the
corresponding tributary port of the SNC endpoint.
■ The SNCFAIL alarm is reported on detecting a CSF if AINS is disabled on the corresponding
tributary port of the SNC endpoint.
This feature is supported under all of the conditions listed below:
■ Supported for ODUs created through GMPLS-based SNCs
■ Supported for both Local and Remote SNCs and all types of Network Mappings, Adaptations and
SNC endpoints.
■ The masking of the SNCFAIL alarm is applicable even if the CSF is used as a Protection Switch trigger.
Note: In addition, the DTN-X supports tributary disable action upon detection of a forwarded error;
see Forward Defect Triggering of Tributary Disable Action on page 3-45.
Note: For electrical transmit TOMs (TOM-1.485HD-TX and TOM-1.4835HD-TX), this setting is
called Disable Transmitter; see below.
Disable Transmitter (Applicable to electrical transmit TOMs) When disabled, the electrical transmit TOM turns off its
transmitter (this is the equivalent of Laser Off, but applicable only to electrical transmit TOMs).
Note: Because the transmitter on electrical transmit TOMs cannot be disabled, when an
electrical TOM is configured for Disable Transmitter setting, the TOM will transmit all zeros.
Note: AIS-L generation is not supported for OC48/STM16 on TIM-16-2.5GM. For these
services, the Generate Generic AIS option can be used instead.
Note: AIS-L generation is not supported for OC-3 and STM-1 payloads on the TAM-8-2.5GM.
It is recommended to use the Disable Laser setting for these payloads on the TAM-8-2.5GM.
Generate Generic AIS (Applicable to SONET and SDH interfaces on TIM-16-2.5GM only) When tributary disable
action is in effect, the tributary sends a generic AIS signal.
Send All Zeros (Applicable to Fibre Channel interfaces on TAMs only) When tributary disable action is in effect,
the tributary sends all zeros in the entire frame, which results in a PCS Loss of Sync alarm on the
downstream equipment or test set.
Do Nothing (Applicable to 10G DTF clients on the TAM-2-10GT and DICM-T-2-10GT and 1GFC-CC/1.0625G
and 2GFC-CC/2.125G clients on the TAM-8-2.5GM) Laser continues to transmit and DTF framing
is kept intact even when transmit DTPs are faulted.
Send LF (Applicable to 100GbE clients on TAM-1-100GR/TAM-1-100GE, 40GbE clients on TAM-1-40GR/
TAM-1-40GE, and for 10GbE and 1GbE signals on TAM-2-10GM/TAM-8-2.5GM/DICM-T-2-10GM
on a DTC/MTC; and for all Ethernet clients on the XTC and XT) When tributary disable action is in
effect, the tributary sends a local fault (LF) signal towards the connected client equipment upon
receiving network-side faults or de-encapsulated faults originating from the far-end client
equipment.
See Send Local Fault Signal on page 3-44 for more details.
Insert Idle Signal (Applicable to Fibre Channel interfaces, for Ethernet interfaces on TAM-8-2.5GM, TAM-2-10GM,
DICM-T-2-10GM, TAM-1-40GE, TAM-1-40GR, TAM-1-100GE, and TAM-1-100GR, and for all
Ethernet clients on the XTC) When disabled, the tributary sends an idle signal.
Send NOS (Applicable to Fibre Channel interfaces on TAMs, 8G Fibre Channel on TIM-5-10GM/TIM-5-10GX,
and 2G/4G Fibre Channel on the TIM-16-2.5GM) When disabled, the tributary sends an NOS (Not
Operational Primitive Sequence) signal.
The DTN-X and DTN support a configurable setting on the tributary physical termination point that
triggers the configured disable action on the tributary when a post-FEC Bit Error Rate - Signal Failure
(BER-SF) condition is present on the Tributary DTF Path or ODUk Path. By default, this setting is
disabled (meaning that the tributary disable action does not take place when the Tributary DTF Path
BER-SF condition is present). The user can enable this feature on a per-tributary basis.
A standing condition Laser Shutdown Active (LS-ACTIVE) is reported by the network element and will be
cleared when the laser turns back on. The LS-ACTIVE condition is masked by the conditions Equipment
Fail (EQPTFAIL), Improper Removal (IMPROPRMVL), and ALS Disabled (ALS-DISABLED).
The TIM-1-100GE and TIM-1-100GE-Q with the Tributary Disable Action set to LaserOff/Disable Laser support
setting a Recovery Tributary Disable Action to be used when there are toggling Rx faults. In case of such faults on
these TIMs, it is recommended to set the Recovery Tributary Disable Action to Send IDLE/Send LF.
Note: In case of DTN-X network elements, the LOS alarm on the downstream line side clears when
the Tributary Disable Action of the upstream peer DTN-X port is set to IDLE. LOS is raised if it is set to
Turn Off Laser or Send LF.
The following are the supported Recovery Tributary Disable Action values:
■ Send IDLE: The tributary interface sends an idle signal
■ Send Local Fault: The tributary sends a local fault (LF) signal towards the connected client
equipment upon receiving network-side faults or de-encapsulated faults originating from the
far-end client equipment.
■ None: The recovery tributary disable action is disabled.
Note: For TAM-2-10GM and DICM-T-2-10GM, this scenario is applicable only when handling
the native Ethernet client signal in ingress and egress directions of the network.
■ 10GbE LAN incoming signal adapted to OTN via OTN Adaptation (see OTN Adaptation Services
on page 4-21)—During a signal fail condition of the network side of the 10GbE LAN client signal
(e.g., loss of signal, loss of sync, DTP-AIS, etc.), the failed 10GbE LAN signal is replaced by a
stream of 66B blocks, with each block carrying two local fault sequence ordered sets (as specified
in [IEEE 802.3]). This replacement signal is then mapped into the OPU2e/OPU1e, as specified in
the G.709/G.Sup43. The 10GbE LAN to OTN Adaptation case is only applicable to TAM-2-10GM
and DICM-T-2-10GM. The sending of local fault signal is accomplished by editing the 10GbE
properties so that the encapsulated signal disable action is set to Send LF (see Encapsulated
Client Disable Action on page 3-46).
■ 1GbE LAN (standard client handling, without OTN Adaptation)—During a signal fail condition of the
network side of the 1GbE client signal (e.g., loss of signal, loss of sync, DTP-AIS, etc.), the failed
1GbE signal is replaced by a stream of 10B blocks, with /v/ error propagation signal (as specified in
[IEEE 802.3]). The sending of local fault signal is accomplished by editing the tributary’s disable
action to Send LF.
Note: For TAM-8-2.5GM, this scenario is applicable only when handling the native Ethernet
client signal in ingress and egress directions of the network.
Note: XT(S)-3300 supports Send LF tributary disable action. A forwarded GbE Local
Fault triggers an LF signal on these tributaries.
■ For Fibre Channel (e.g., native Fibre Channel client transport service), forwarded FC Not
Operational Primitive Sequence (de-encapsulated NOS in the network to client direction) will trigger
tributary disable action as follows:
□ For tributaries with Disable Laser tributary disable action, turn off laser (LOL)
□ For tributaries with Insert Idle Signal tributary disable action, send idle signal
□ For tributaries with Send NOS tributary disable action, send "Not Operational" signal.
Note: For 8G Fibre Channel services on TIM-5-10GM/TIM-5-10GX, the Not Operational Primitive
Sequence (NOS) signal in the transmit direction cannot trigger the tributary disable action.
Therefore, for 8GFC services configured for Send NOS, Forward Defect Triggering must be
disabled.
Note: For 10G Fibre Channel services on TIM-5-10GM/TIM-5-10GX, Forward Defect Triggering
must be enabled.
Figure 3-28: Example Scenario for Forward Defect Triggering of Tributary Disable Action on page 3-46
shows an example scenario for a tributary disable action triggered by a forward defect indication.
Figure 3-28 Example Scenario for Forward Defect Triggering of Tributary Disable Action
The Encapsulated Client Disable Action on the DTN-X is used to define the content of the OPUk sent by
a DTN-X toward the Infinera network in case of an ingress client interface failure of a SONET, SDH, or
Ethernet signal (including locking of the client interface).
Table 3-4: TIM Support of Encapsulated Client Disable Action on page 3-47 lists the TIMs that support
Encapsulated Client Disable Action, and shows the supported values and behavior based on the service
type and TIM.
Note: Before Release 15.3, the default values for Encapsulated Client Disable Action for all services
types was “No Replace.” In Release 15.3 and above, the default values are updated to those shown
in the table below. The setting for Encapsulated Client Disable Action for any existing services will not
be impacted by an upgrade to Release 15.3. However, for any new services created in Release 15.3
and above, if no value is specified for Encapsulated Client Disable Action, the node will apply the
default setting as shown in the table below.
For DTN, the Encapsulated Client Disable Action specifies the replacement signal type for the
encapsulated client interface upon egress from the DTN network in case of a signal fail condition from the
network side of the client signal.
Encapsulated Client Disable Action is supported only for SONET/SDH adaptation services, Ethernet
adaptation services, and ODUk transport services (see OTN Adaptation Services on page 4-21 and
ODUk Transport on page 4-23). The following TAMs and service types support Encapsulated Client
Disable Action:
RS-FEC is supported on any of the QSFP28 type TOMs supported on the XT (i.e., TOM-100G-Q-SR4
and TOM-100G-Q-LR4) and is enabled by default on the TOMs plugged into the XT-500S-100.
■ RxOnly—In this operation mode, the LLDP agent only receives LLDP frames on the port and does
not transmit LLDP frames to the client. The remote system information is gathered and stored in an
SNMP MIB (LLDP-MIB-v2).
The chassis supports a “Power Draw” PM measurement (when PM collection is enabled on the chassis). The
power draw PM is supported for real time and historical (15 minute/24 hour) PM, and data is collected for
minimum, maximum, and average values.
Note: If a chassis exceeds its configured maximum power draw value, it raises the Power Draw
alarm, but does not power down or take any further action. See Power Draw Alarm on page 2-9 for
information about the behavior of the power draw alarm.
Note: The maximum power draw value is not retrieved for fan modules, as the power draw of two fans
is integrated into the power draw estimate for the chassis (along with two PEMs, the IO Panel, etc.).
Likewise, the maximum power draw value is not retrieved for TOMs, as the power draw value of
TOMs is integrated into the power draw estimate for each TAM (assuming maximum number of
TOMs).
Note: The XTC-10 must be set to Unmanaged 3rd Party power supply mode when the chassis is
configured to use a third-party AC-based power supply.
Note: The XTC-10 can be configured for an Unmanaged 3rd Party power supply mode only when the
XTC-10 chassis is administratively locked or in maintenance mode. The node will not allow
Unmanaged 3rd Party power supply mode if a PEM is detected by the node via the SCSI cables.
(There are no restrictions when setting the power supply mode from Unmanaged 3rd Party to Native.)
Note the following for an XTC-10 chassis configured for Unmanaged 3rd Party power supply mode:
■ All input voltage monitoring and power redundancy must be verified by the user since the system
will no longer monitor power input. This means that the user must ensure the following:
□ The third party power supply provides the correct input (operating voltage, wattage, etc.).
□ The third-party power supply provides redundancy. The node will not report the PWR-PROT-FAIL
(chassis power redundancy lost) alarm.
■ The XTC-10 chassis does not support the Chassis Power Control feature; this feature will be set to
disabled. This means that when a new module is inserted into the chassis, the system will allow the
module to initialize without checking for available power, and without reporting the PWRCTRL-INIT
(Power Control) alarm. As a result, the chassis may draw more power than what is available
from the third-party power supply.
■ The maximum available power information will not be available for the chassis.
■ The Power Draw (PWRDRW) alarm behavior is the same for both power supply modes (the
chassis will assert the PWRDRW alarm if the chassis’ estimated power requirements exceed the
user-configured maximum power draw threshold for the chassis). See Power Draw Alarm on page
2-9) for more information on maximum power draw settings.
If the available power is not sufficient, the IMM will not allow the module to fully power up; the module remains in a reset state and
consumes a minimal amount of power. Once available power increases sufficiently, the IMM will
automatically power up modules in the reset state.
This applies only to newly-installed or re-seated modules; if these modules are cold reset, the IMM does
not interfere with the reboot.
Note that the PEMs on the MTC-9 use an external circuit breaker with possible ratings of 15A or 20A. For
site configurations where a 20A circuit breaker is installed, the user can configure the circuit breaker
rating on the MTC-9 with possible values of 15A (default) or 20A. The maximum power supported by an
MTC-9 will vary depending on the circuit breaker used by the chassis (as indicated by the circuit breaker
rating value configured on the chassis). Note the following about changing the circuit breaker rating:
■ Increasing the circuit breaker rating from 15A to 20A is allowed without any restrictions. Increasing
the circuit breaker rating will increase the available power on the MTC-9 chassis from 600 watts to
800 watts, so modules held in the reset state will be powered up.
■ Decreasing the circuit breaker rating from 20A to 15A is allowed only if the new available power is
greater than the power draw from the currently installed equipment, and if the new available power
is also greater than the configured maximum power draw. This is done to prevent power draw from
exceeding the maximum available power from the PEMs.
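The circuit breaker rating rules above can be expressed as a simple check. The sketch below assumes the stated MTC-9 figures (600 watts at 15A, 800 watts at 20A); the function and parameter names are hypothetical, not product code:

```python
# Illustrative validation of an MTC-9 circuit breaker rating change, per the
# rules above: increasing to 20 A is always allowed; decreasing to 15 A is
# allowed only if the new available power exceeds both the installed
# equipment draw and the configured maximum power draw.

AVAILABLE_POWER_W = {15: 600, 20: 800}  # assumed per the text

def can_set_breaker_rating(new_rating_a, installed_draw_w, max_draw_cfg_w):
    """Return True if the circuit breaker rating change is permitted."""
    if new_rating_a == 20:
        # Increasing the rating is allowed without restriction.
        return True
    new_available = AVAILABLE_POWER_W[new_rating_a]
    return new_available > installed_draw_w and new_available > max_draw_cfg_w
```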
■ Chassis has a temperature or fan fault that requires the fans to run at 90% of their maximum speed
Note: There is no way to measure the input voltage for chassis equipped with only DLMs/XLMs.
However, ADLMs, ADLM-80s, AXLMs, AXLM-80s, SLMs, SLM-80s, and MCM-Cs do have the ability
to measure their voltage input. If one of these modules is present in the chassis, the input voltage to
the chassis can be measured and used for the current draw estimation. In the absence of an ADLM,
ADLM-80, AXLM, AXLM-80, SLM, SLM-80, or MCM-C, a voltage level of -39V is assumed.
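The current-draw estimation implied by this note divides the chassis power estimate by the measured input voltage, falling back to the assumed worst-case -39V when no measuring module is present. A minimal illustrative sketch (names are hypothetical):

```python
# Illustrative current-draw estimate: current (A) = power (W) / |voltage (V)|,
# with -39 V assumed when no module in the chassis can measure input voltage.

DEFAULT_INPUT_VOLTAGE = -39.0  # worst-case assumption from the note above

def estimated_current_draw_a(power_w, measured_voltage=None):
    voltage = measured_voltage if measured_voltage is not None else DEFAULT_INPUT_VOLTAGE
    return power_w / abs(voltage)
```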
IQ NOS then takes the minimum preventative action needed to prevent power consumption from
exceeding a pre-defined threshold which would potentially trip the 70A circuit breaker under worst-case
operating conditions. IQ NOS does the following:
■ Raises an alarm indicating that the Power Control feature has taken action on the chassis (see
Power Draw Alarm on page 2-9)
■ Resets the receive-side application-specific integrated circuit (ASIC) components in the TAM that
are in the Loss of Frame (LOF) condition (i.e., not carrying traffic).
■ Governs the fan speed to 85% or less of the maximum rotation speed if more power savings is
needed.
Note: IQ NOS will NOT reset the ASIC component that is successfully carrying user traffic, nor will IQ
NOS reduce the fan speed such that component failure is a high probability.
Once initiated, the Power Control feature can be disabled in two ways:
■ The user can manually disable the Power Control feature via the management interfaces.
■ IQ NOS can disable the Power Control feature based on any of the following:
□ The fan speed has changed to less than 75% of its maximum for five seconds (due to the
ambient temperature dropping).
□ Equipment has been physically removed from the chassis, thereby decreasing the power
demand.
□ Any ADLMs, ADLM-80s, AXLMs, AXLM-80s, SLMs, or SLM-80s present in the chassis
report a voltage change indicating decreased current demand for the chassis.
Once the Power Control feature is disabled due to any of the above events, the fan speed control is
released and the line modules re-enable their receive-side ASICs.
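The automatic disable conditions above amount to a simple predicate, sketched below for illustration (the thresholds come from the text; the function signature is hypothetical):

```python
# Illustrative restatement of the conditions under which IQ NOS may disable
# the Power Control feature automatically.

def should_disable_power_control(fan_speed_pct, seconds_below_75,
                                 equipment_removed, voltage_drop_reported):
    """True if Power Control may be released automatically."""
    # Fan speed below 75% of maximum for five seconds (ambient cooled down).
    fan_cooled_down = fan_speed_pct < 75 and seconds_below_75 >= 5
    return fan_cooled_down or equipment_removed or voltage_drop_reported
```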
When the network element is installed and powered on, the ZTP obtains all the necessary configurations
without any operator intervention to bring up the network element to a state where it can be managed
through management interfaces.
The ZTP prompt is displayed at the time of turn-up of the network element. The ZTP feature can be disabled
at this point. If ZTP is not disabled, any new configurations will override the existing configurations. Refer
to the XT Turn up and Test Guide and FlexILS ROADM Turn up and Test Guide for more information.
The ZTP workflow consists of the following steps:
■ The network element requests a valid DHCP lease. The DHCP server responds with a valid lease
containing the DCN IP address, gateway, and subnet, as well as vendor-specific options containing
the image and configuration file locations (URIs).
■ The network element checks whether the current release differs from the one specified in the
DHCP vendor options and downloads a new software image if it is different.
■ The network element downloads an initial configuration specified in the vendor-specific options,
containing a list of commands/operations for subsequent operation.
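The workflow above can be sketched as a single driver routine. The lease structure and vendor-option keys below are hypothetical placeholders, not the actual DHCP option names used by IQ NOS:

```python
# Illustrative driver for the three ZTP stages: apply the DHCP-provided IP
# configuration, install a new image only if it differs from the running one,
# then fetch and apply the initial configuration file.

def zero_touch_provision(lease, node_state, actions):
    """Drive the ZTP stages; `actions` records the operations performed."""
    # DHCP stage: apply the device-specific IP configuration.
    actions.append(("apply_ip", lease["ip"], lease["gateway"]))
    # Image stage: download/install only if the referenced image differs.
    if lease["vendor"]["image"] != node_state["running_image"]:
        actions.append(("install_image", lease["vendor"]["image"]))
        actions.append(("reboot",))
    # Config stage: fetch and execute the initial configuration file.
    actions.append(("apply_config", lease["vendor"]["config_uri"]))
    return actions
```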
DHCP stage:
■ The configured DHCP server within the management network provides IP configuration (IP
Address, Net mask, Gateway Address, and DNS Address) in a dhcpd.conf file.
■ The DHCP client running on the network element receives device specific configuration and applies
it on the network element.
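As an illustration of the DHCP stage, an ISC dhcpd.conf for a ZTP network might resemble the fragment below. The `infinera` option space, option names, and option codes shown here are invented for illustration only; the actual vendor-specific option codes expected by the network element are defined by the product:

```
# Hypothetical dhcpd.conf fragment; option space and codes are assumptions.
option space infinera;
option infinera.image-uri code 1 = text;     # assumed code for image URI
option infinera.config-uri code 2 = text;    # assumed code for config URI

subnet 192.168.0.0 netmask 255.255.255.0 {
  range 192.168.0.10 192.168.0.100;
  option routers 192.168.0.1;                # gateway address
  option domain-name-servers 192.168.0.2;    # DNS address
  vendor-option-space infinera;
  option infinera.image-uri "ftp://192.168.0.2/images/R20.0.tar";
  option infinera.config-uri "ftp://192.168.0.2/config/ne-initial.cfg";
}
```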
Software image download and install stage:
■ The network element downloads the software image only if the currently running software image on
the network element is lower than the software image referenced by the DHCP server. If the new
software image is different from the current software image, the network element downloads and
installs the fresh image.
■ The network element installs the software image and performs a reboot.
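The image-selection rule above (download only when the running release is lower than the referenced release) can be sketched as a release-string comparison. Release strings of the form "19.3"/"20.0" are assumed here purely for illustration:

```python
# Illustrative image-selection check: install only when the running release
# is lower than the release referenced by the DHCP vendor options.

def needs_image_download(running, referenced):
    as_tuple = lambda rel: tuple(int(part) for part in rel.split("."))
    return as_tuple(running) < as_tuple(referenced)
```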
Initial start-up configuration setup stage:
■ The network element downloads the initial configuration file from ZTP server. This file contains all
configuration commands pertaining to equipment, facilities, service provisioning and others.
■ The initial configuration file is executed within the network element and the configurations specified
in the config file are applied.
User passwords and database passwords can be entered in encrypted form in the initial configuration
file. The encrypted text can be generated using the encryption tool provided to the customer. The
encrypted password is decrypted by the network element. The xform parameter can be added to
the commands to specify whether the password is encrypted; it can be set to true or false.
Below is a sample of the CLI commands used in the configuration file:
aaa authentication users changepasswd user secadmin curpasswd Infinera1
newpasswd Infinera\#2
aaa authentication users username secadmin role MA,SA,NA,NE,PR,TT,RA,EA
InactivityTimeout 0 PasswordAging 0 hostname NEA
!
!do show dhcp
!ztp system ztpmode disable cleandb false
!ztp ZtpMode Enable
!logging host SYSLOG-1 TransportProtocol TLS ServerIpAddress 1.1.1.1
!system MgmtProxyRoutePreference GFD
hostname NEA
ip xfr swdl XfrPrimaryIp46 192.168.0.2 XfrPrimaryUser sttester
XfrPrimaryPasswd sttester XfrFileName none XfrFilePath /Kumuda/ITN
AdministrativeState UnLocked
commission equipment chassis 1 ProvSerialNumber MA6814140892
ProvisionedChassisType MTC_9
controller. After that, the Optical Amplifier database is backed up remotely, downloaded to the new node
controller XCM on the DTN-X, then merged with the DTN-X database. Lastly, the OMM node controller on
the Optical Amplifier is then manually converted to a shelf controller. The Optical Amplifier and XTC are
then physically connected to each other to act as a DTN-X node.
For the detailed procedures, see the DTC/MTC Task Oriented Procedures Guide or contact an Infinera
Technical Assistance Center (TAC).
Service Provisioning
Infinera Intelligent Transport Networks feature service provisioning capabilities that allow users to
engineer user traffic data transport routes. Service provisioning is supported on the Infinera DTN, DTN-X,
and FlexILS network elements, as described in the sections listed below:
DTN Service Provisioning on page 4-2
DTN-X Service Provisioning on page 4-33
Packet Switching Service Provisioning on page 4-62
FlexILS Service Provisioning on page 4-92
IQ NOS Digital Protection Services
Multi-layer Recovery in DTNs on page 4-167
Dual chassis Y-cable protection (DC-YCP) on page 4-170
IQ NOS provides service provisioning capabilities that include establishing data path connectivity
between endpoints for delivery of end-to-end capacity. The services are originated and terminated in a
DTN. IQ NOS defines the following types of cross-connect endpoints:
■ Tributary-side Endpoints—Client payload specific endpoints that can be any of the payload types
described in Client/Tributary Interfaces.
■ Tributary DTF Path Endpoints—Endpoints which are DTF-encapsulated 2.5Gbps or 10Gbps
channels. The tributary-side paths are sourced and terminated in the TAM. (See Digital Transport
Frame (DTF) for more information.)
■ Line DTF Path Endpoints—Endpoints which are DTF-encapsulated 2.5Gbps or 10Gbps channels
(see Digital Transport Frame (DTF) for the description of DTF). The line-side paths are sourced and
terminated in a line module. As described in Digital Line Module (DLM) and Switching Line Module
(XLM), each XLM/DLM supports one OCG, which in turn includes ten 10Gbps optical channels.
The ADLM, AXLM, and SLM can be tuned to one of several OCGs (see Amplified Digital Line
Module (ADLM), Amplified Switching Line Module (AXLM), and Submarine Line Module (SLM)).
The ADLM-80, AXLM-80, and SLM-80 can be tuned to one of several optical channels (see Line
Module 80G (LM-80)).
IQ NOS automatically creates the endpoints when connections are configured on the equipment. IQ NOS
supports the following service provisioning modes to meet diverse users’ needs:
■ Manual Cross-connects (DTN)—End-to-end services are built by manually creating each of the
cross-connects that compose the circuit (see Manual Cross-connects (DTN) on page 4-3).
■ GMPLS Signaled Subnetwork Connections (SNCs)—End-to-end services are created dynamically
by GMPLS; the user specifies only the endpoints (see GMPLS Signaled Subnetwork Connections
(SNCs) on page 4-10).
IQ NOS supports pre-provisioning of circuits, enabling users to set up both manual cross-connects and
SNCs in the absence of line modules, TEMs, and TAMs. Pre-provisioning of data plane connections
keeps the resources in a pending state until the line module, TEM, and/or TAM is inserted. IQ NOS
internally tracks resource utilization to ensure that resources are not overbooked. The pre-provisioning of
circuits requires that the supporting circuit packs first be pre-configured.
IQ NOS has specialized functionality to provide the following service provisioning capabilities:
■ 1GFC and 1GbE Service Provisioning on page 4-14
■ 40Gbps and 40GbE Service Provisioning on page 4-18
■ 100GbE Service Provisioning on page 4-20
■ OTN Adaptation Services on page 4-21
■ ODUk Transport on page 4-23
Note: Electrical TOMs are uni-directional TOMs that either receive a signal or transmit a
signal, and therefore must be configured correctly for either add cross-connects (for TOM-1.485HD-
RX and TOM-1.4835HD-RX) or for drop cross-connects (TOM-1.485HD-TX and TOM-1.4835HD-
TX). The DTN does not block incorrect cross-connect provisioning, such as provisioning an add
cross-connect on a transmit TOM, but traffic will not come up when cross-connects are provisioned
incorrectly on electrical TOMs. When bidirectional (add/drop) cross-connects are provisioned on the
uni-directional electrical TOMs, traffic will come up, but this is not a recommended configuration.
Note: SNCs and cross-connects at the OC-12/STM-4 rate and at the OC-3/STM-1 rate are not
supported between an endpoint on a TAM-8-2.5GM and an endpoint on a TAM-4-2.5G.
The following sections describe the types of manual cross-connects supported by the DTN:
■ Add/Drop Cross-connect on page 4-3
■ Add Cross-connect on page 4-5
■ Drop Cross-connect on page 4-6
■ Express Cross-connect on page 4-7
■ Hairpin Cross-connect on page 4-9
Add/Drop Cross-connect
The add/drop cross-connect is a bidirectional cross-connect that associates the tributary-side endpoint to
the line-side endpoint by establishing connectivity between a TOM tributary port (residing within a line
module or TEM) to a line-side optical channel within a line module. Any tributary port can be connected to
any line-side optical channel, subject to the bandwidth grooming rules.
The add/drop type of cross-connect is used to add/drop traffic at a Digital Add/Drop site (see Digital
Terminal Configuration in the DTN and DTN-X System Description Guide) and to drop traffic at a
site as part of the Multi-point Configuration feature (see Multi-point Configuration on page 4-23).
See Figure 4-1: No-hop Add/Drop Cross-connects on page 4-4 and Figure 4-2: Multi-hop Add/Drop
Cross-connect between ADLMs/DLMs and TEM on page 4-5 for examples of no-hop and multi-hop
add/drop cross-connects.
Add Cross-connect
An add cross-connect is a unidirectional cross-connect that associates the tributary-side endpoint to the
line-side endpoint by establishing connectivity between a TOM tributary port (residing within a line module
or TEM) to a line-side optical channel within a line module. Any tributary port can be connected to any
line-side optical channel, subject to the bandwidth grooming rules.
The add type of cross-connect is used to add traffic at a Digital Add/Drop site (see Digital Terminal
Configuration in the DTN and DTN-X System Description Guide).
Figure 4-3: Multi-hop Add Cross-connect between ADLMs/DLMs and TEM on page 4-6 shows an
example add cross-connect.
Drop Cross-connect
A drop cross-connect is a unidirectional cross-connect that associates the line-side endpoint to tributary-
side endpoint the by establishing connectivity between a line-side optical channel within a line module to
a TOM tributary port (residing within a line module or TEM). Any line-side optical channel can be
connected to any tributary port, subject to the bandwidth grooming rules.
The drop type of cross-connect is used to drop traffic at a Digital Add/Drop site (see Digital Terminal
Configuration in the DTN and DTN-X System Description Guide), and can be used to drop traffic
at a site as part of the Multi-point Configuration feature (see Multi-point Configuration on page 4-23).
Figure 4-4: No-hop Drop Cross-connect on page 4-7 shows an example drop cross-connect.
Express Cross-connect
An express cross-connect is a unidirectional or bidirectional cross-connect that associates one line-side
DTF endpoint to another line-side DTF endpoint by establishing connectivity between the optical channels
of two different OCGs (line modules) within a DTN. An express cross-connect can be established
between line modules using any of the supported grooming configurations.
The express cross-connect type is transparent to the payload type encapsulated in the DTF. A typical
application for this cross-connect is to establish a data path through a Digital Repeater site (see Digital
Repeater Configuration in the DTN and DTN-X System Description Guide).
Figure 4-5: Single-hop Express Cross-connect on page 4-8 and Figure 4-6: Multi-hop Express Cross-
connect on page 4-8 show example express cross-connects.
Alternatively, a multi-hop express cross-connect can be established between three switching-capable line
modules and TEMs residing within a chassis, again using the supported grooming configurations
described in Bandwidth Grooming. See Figure 4-6: Multi-hop Express Cross-connect on page 4-8 for
an example of a multi-hop cross-connect.
Hairpin Cross-connect
A hairpin cross-connect is a unidirectional or bidirectional cross-connect that is used to cross-connect two
tributary ports within a single DTN chassis. Hairpin circuits are supported in the following configurations:
■ Between two tributary ports within a given switching-capable line module (line module or TEM). The
two tributary ports may reside on the same or different TAMs (see Figure 4-7: No-hop Hairpin
Cross-connects on page 4-9).
■ Between a tributary port on one line module or TEM and a tributary port on another line module or
TEM, utilizing the bandwidth grooming capabilities (see Figure 4-8: Single-hop Hairpin Cross-
connect on page 4-10).
Hairpin cross-connects do not use line-side optical channel resources. Hairpin cross-connects are
used in Metro applications to connect two buildings over a short reach without laying new fiber.
■ User configured circuit identifiers for easy correlation of alarms and performance monitoring
information on the end-to-end circuit, aiding in service level monitoring. SNC circuit IDs are editable
after SNC creation.
■ Out-of-band GMPLS for circuit provisioning, for OTS over third party networks in cases where in-
band OSC is unavailable (e.g., submarine applications). See Out-of-band GMPLS on page 8-11
for more information.
■ Out-of-band GMPLS for Layer 1 OPN applications, which provides GMPLS control plane
connectivity between customer edge devices via out-of-band communication by the use of Generic
Routing Encapsulation (GRE) tunnels configured using any one of the DTN interfaces (e.g., DCN,
AUX, CRAFT) (see Layer 1 Optical Private Network (OPN) in the DTN and DTN-X System
Description Guide).
■ Provisioning of 40Gbps services (see 40Gbps and 40GbE Service Provisioning on page 4-18).
■ Adaptation of OTN services to/from OC-48, OC-192, STM-16, STM-64, or 10GbE LAN signals for
transport through the Infinera network (see OTN Adaptation Services on page 4-21).
■ Bridge and Roll functionality that allows a sub-50ms switchover from one SNC to a new SNC (see
Bridge and Roll on page 4-32).
■ Circuit tracking, by storing the hop-by-hop circuit route and the source endpoint of the SNC and
making them available to the management system.
■ Automatic restoration, initiated for an SNC if either endpoint (source or destination)
detects a traffic-affecting fault. For further information about this optional feature, refer to Dynamic
GMPLS Circuit Restoration on page 4-140.
■ Automatic reversion to the working path for restorable SNCs that have been configured for
automatic reversion. For further information about this optional feature, refer to Dynamic GMPLS
Circuit Restoration on page 4-140.
Note: To use any SNC-related feature, ensure that the nodes connected in the network are
upgraded to the same release.
Note: An SNC that has been re-routed through an intermediate node due to a restoration event
modifies the node’s database. Restoring a database snapshot taken prior to the restoration event will
delete any new SNC connections and result in traffic loss.
Note: SNC provisioning does not support uni-directional circuits or multi-point configuration; these
services must be provisioned via manual cross-connects. Unidirectional multi-point cross-connect
legs can be created on the line endpoints of tributary-to-tributary SNCs or of line-side terminating
SNCs, but the multi-point configuration legs are themselves cross-connects and not SNCs. See Multi-
point Configuration on page 4-23.
■ Unidirectional multi-point configuration connections for broadcast services such as video.
Unidirectional multi-point cross-connect legs can be created on the endpoints of an SNC. The
multi-point legs can be created on the line endpoints of tributary-to-tributary SNCs or of line-
side terminating SNCs.
Note: Dynamic GMPLS SNC Restoration is primarily designed to provide traffic restoration utilizing
available alternate route bandwidth in the event of a fiber cut or module failure/removal. Performing a
BMM reseat or cold reset will also trigger the restoration process. Due to the additional BMM boot time
requirements associated with these actions, local node SNC restoration may be delayed until the boot
process is completed.
Note: Bandwidth is reserved for an SNC only once the SNC is created, and not at the time of route
computation for the SNC. This means that if several SNCs are computed and created in rapid
succession, the bandwidth will appear to be available during path computation, but may already be
reserved by the time the system attempts to create some of the connections. This may result in an SNC
set-up failure, after which the system computes another route for the SNC using the remaining
available bandwidth.
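The compute-then-create behavior described in this note can be sketched as a retry loop. The function names below are illustrative only and do not correspond to an actual system API:

```python
def create_snc_with_retry(compute_route, try_create, max_attempts=3):
    """Sketch of the behavior described above: bandwidth is reserved only
    at SNC creation, not at route computation, so a just-computed route
    can fail; the system then computes another route.

    compute_route() returns a candidate route or None if no bandwidth
    remains; try_create(route) returns True if creation (and hence
    bandwidth reservation) succeeded. Both are hypothetical callables.
    """
    for _ in range(max_attempts):
        route = compute_route()
        if route is None:
            return None          # no route with available bandwidth
        if try_create(route):
            return route         # creation reserved the bandwidth
        # another SNC grabbed the bandwidth first; compute a new route
    return None
```

This mirrors the race the note warns about: several SNCs computed in rapid succession may all see the same "available" bandwidth until creation actually reserves it.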
Note: SNCs and cross-connects at the OC-12/STM-4 rate and at the OC-3/STM-1 rate are not
supported between an endpoint on a TAM-8-2.5GM and an endpoint on a TAM-4-2.5G.
Refer to IQ NOS GMPLS Control Plane Overview on page 8-1 for a detailed description of the GMPLS
functions.
Note: Only nodes running Release 6.0 or higher can originate a 2.5Gbps SNC on a TAM-2-10GT
endpoint. A pre-Release 6.0 node cannot originate a 2.5Gbps SNC on a TAM-2-10GT endpoint
(although a pre-Release 6.0 node can terminate such an SNC, either on a TAM-2-10GT endpoint or
an endpoint on another TAM type).
1 Port D-SNCP can be configured for endpoints on the TAM-2-10GT for 10Gbps SNCs. (If the user wants
to protect SNCs across Layer 1 OPN, services should be configured as 10Gbps SNCs, as opposed to
2.5Gbps SNCs.) See 1 Port D-SNCP on page 4-126 for more information on 1 Port D-SNCP protection.
Line-side terminating SNCs enable the user to create a circuit that spans across GMPLS signaling
domains, which is very useful for networks that contain more nodes than are allowed in a single GMPLS
signaling domain. These large networks are often divided into several smaller domains by terminating the
OSC links at the border of the smaller domains. Line-side terminating SNCs allow a tributary-to-tributary
SNC to be realized as a concatenation of multiple disjoint SNCs.
Line-side terminating SNCs are supported on both DTN and DTN-X nodes:
■ On an XTC, the line-side endpoints are line-side ODU endpoints
■ On a DTC/MTC, the line-side endpoints are line-side DTP CTPs
Line-side terminating SNCs cannot be configured as restorable SNCs, but they can participate in Digital
Subnetwork Connection Protection (D-SNCP). For termination points on the DTC, MTC, and XTC:
■ The tributary-side endpoint can be part of 1 Port D-SNCP and 2 Port D-SNCP
■ The line-side endpoint can be part of 1 Port D-SNCP
See 1 Port D-SNCP on page 4-126 and 2 Port D-SNCP on page 4-123 for more information.
Note: Line module OCGs are by default enabled for line-side terminating SNCs. If a line module is
disabled for line-side terminating SNCs and is re-configured to be enabled for line-side terminating
SNCs, any existing TE links with neighboring nodes are maintained.
For tributary-to-line connections originating on a 1GbE client of a TAM-8-1G, the sub-SNC will be created
on the local side only. The remote Channelized SNC will be created on the remote node, but the remote
sub-SNC will not be created on the remote node. So to create a 1GbE circuit across three domains, the
user needs to create a tributary-to-line Channelized SNC in Domain 1, a line-to-line SNC in Domain 2 and
another tributary-to-line Channelized SNC in Domain 3. In addition, the user needs to create tributary-to-
line sub-SNCs in Domain 1 and Domain 3 to realize end-to-end traffic.
See 1GFC and 1GbE Service Provisioning on page 4-14 for more information on Channelized SNCs
and sub-SNCs.
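As a hypothetical summary, the objects required for the three-domain 1GbE circuit described above can be tabulated and checked programmatically. The sketch below is illustrative Python, not TL1/CLI syntax or an actual management interface:

```python
# Checklist of the objects needed for a 1GbE circuit across three GMPLS
# signaling domains, per the description above (names are descriptive).
required_objects = [
    {"domain": 1, "type": "Channelized SNC", "endpoints": "tributary-to-line"},
    {"domain": 1, "type": "sub-SNC",         "endpoints": "tributary-to-line"},
    {"domain": 2, "type": "SNC",             "endpoints": "line-to-line"},
    {"domain": 3, "type": "Channelized SNC", "endpoints": "tributary-to-line"},
    {"domain": 3, "type": "sub-SNC",         "endpoints": "tributary-to-line"},
]

def circuit_complete(objects):
    """End-to-end traffic requires the Channelized SNCs and the line-to-line
    SNC plus the tributary-side sub-SNCs in the end domains (1 and 3)."""
    have = {(o["domain"], o["type"]) for o in objects}
    needed = {(1, "Channelized SNC"), (1, "sub-SNC"),
              (2, "SNC"),
              (3, "Channelized SNC"), (3, "sub-SNC")}
    return needed <= have
```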
Note: For 1G Fibre Channel over Clear Channel services (i.e., 1GFC-CC or 1.0625GCC in TL1), each
1GFC-CC service is mapped to a single 2.5Gbps digital path, therefore there is no Channelized
cross-connect nor SNC required for 1 GFC-CC services.
A Channelized cross-connect is a special type of Add/Drop, Add, Drop, or Hairpin cross-connect that is
used to transport 1Gbps signals to and from the client interfaces on the TAM-8-1G and TAM-8-2.5GM. A
Channelized cross-connect represents connectivity from the Tributary DTPCTP to Line DTPCTP (in the
case of Add/Drop type of traffic) and the Tributary DTPCTP to the Tributary DTPCTP (in the case of
Hairpin traffic type). The tributary-side payload is set to ‘Channelized_2x1Gbe’.
Note: When provisioning a 1GbE circuit or cross-connect, ensure that the following parameters on the
customer equipment (router or switch connected to the TAM-8-1G or TAM-8-2.5GM) are set as
follows:
■ Auto-negotiation set to “on”,
or
■ Auto-negotiation set to “off”, static configuration of the customer equipment Ethernet
port capabilities set to “full-duplex,” and the data rate set to “1GbE.”
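The two accepted customer-equipment configurations in this note can be expressed as a simple validity check. This is a hypothetical helper for illustration, not part of any Infinera interface:

```python
def customer_port_config_ok(auto_negotiation, duplex=None, rate=None):
    """Validate the customer-equipment Ethernet port settings for a 1GbE
    circuit, per the note above: either auto-negotiation is on, or it is
    off with the port statically set to full-duplex at 1GbE."""
    if auto_negotiation:
        return True
    return duplex == "full-duplex" and rate == "1GbE"
```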
The way 1Gbps signal types are mapped from the tributary port to the DTP is dependent on the TAM
type:
■ The TAM-8-1G has four tributary port pairs: ports 1a and 1b, ports 2a and 2b, ports 3a and 3b, and
ports 4a and 4b. The two 1GbE signals in a port pair are mapped together into a single 2.5Gbps
digital path and associated with a prescribed virtual channel on the digital transport path (DTP), as
shown in Figure 4-12: Tributary Port to DTP Mapping on the TAM-8-1G on page 4-16. This is
considered fixed mapping between the tributary port and the DTPCTP.
■ The TAM-8-2.5GM has eight ports, numbered 1-8. As with the TAM-8-1G, two 1Gbps services
must be mapped together into a 2.5Gbps Channelized cross-connect or Channelized SNC.
However, when creating 1GbE and 1GFC services on the TAM-8-2.5GM, the DTN allows for
flexible mapping of tributary port to DTPCTP, as shown in Figure 4-13: Flexible Mapping of
Tributary Port to DTP on the TAM-8-2.5GM on page 4-17. So when creating the SNC or cross-
connect, the user is able to specify the virtual channel in the DTPCTP to which the service should
be mapped. (If no virtual channel is specified, the TAM-8-2.5GM follows the default mapping, which
is the same as the mapping in the TAM-8-1G.)
As shown in Figure 4-13: Flexible Mapping of Tributary Port to DTP on the TAM-8-2.5GM on page 4-17,
each DTPCTP has two virtual channels, and each of these virtual channels can carry a different 1G
service: one virtual channel can carry a 1GFC service while the other virtual channel carries a 1GbE service.
Note the following constraints for flexible mapping on the TAM-8-2.5GM:
■ Tributary ports 1-4 on the TAM-8-2.5GM must be mapped to a virtual channel on DTPCTPs 1-4.
■ Tributary ports 5-8 on the TAM-8-2.5GM must be mapped to a virtual channel on DTPCTPs 5-8.
■ If a DTPCTP has a virtual channel associated with a 1G service, the tributary that faces that
DTPCTP cannot support a 2.5Gbps service (since the DTPCTP is already supporting a 1G service,
only another 1GbE or 1GFC service can be added).
■ 1GFC SNCs can originate and terminate only on the TAM-8-2.5GM on nodes running Release 6.0
or higher.
■ A single 1GbE SNC can be provisioned with one endpoint on a TAM-8-2.5GM and the other
endpoint on a TAM-8-1G. However, a 1GbE SNC originating on a pre-Release 6.0 node can be
terminated only on a TAM-8-1G (it cannot be terminated on an endpoint on a TAM-8-2.5GM on a
node running Release 6.0 or higher).
■ For 1G services on the TAM-8-2.5GM, 1 Port D-SNCP can be configured on the facing DTP of a
port which is already part of 2 Port D-SNCP (this interaction is not allowed on any other TAM type).
In this case, a 1G service cannot go through the facing DTP; it must go through a DTP which is not
configured with 1 Port D-SNCP.
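The flexible-mapping constraints above can be sketched as validation helpers. These are hypothetical illustrations of the stated rules, not system software:

```python
def valid_flexible_mapping(trib_port, dtpctp):
    """TAM-8-2.5GM flexible-mapping rule from the constraints above:
    tributary ports 1-4 may map only to a virtual channel on DTPCTPs 1-4,
    and tributary ports 5-8 only to DTPCTPs 5-8."""
    if 1 <= trib_port <= 4:
        return 1 <= dtpctp <= 4
    return 5 <= dtpctp <= 8

def dtpctp_can_accept(existing_1g_services, new_service):
    """Each DTPCTP has two virtual channels. Once one virtual channel
    carries a 1G service (1GbE or 1GFC), the facing tributary cannot
    support a 2.5Gbps service; only another 1G service may be added,
    and only while the second virtual channel is free."""
    if existing_1g_services == 0:
        return True                      # empty DTPCTP accepts anything
    if new_service == "2.5G":
        return False                     # already carrying a 1G service
    return existing_1g_services < 2      # second virtual channel still free
```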
For DTC/MTC endpoints, the GTP name for GTP-dependent cross-connects and SNCs is required to
be the AID of the first constituent member of the GTP. For example: if a GTP has DTPs 1-A-3-L1-1, 1-
A-3-L1-2, 1-A-3-L1-3, 1-A-3-L1-4, then the GTP AID will be 1-A-3-L1-1. (In Release 6.0, the GTP was
specified by the user upon creation of an OC-768 or STM-256 cross-connect, and there was no
requirement for the GTP name to be the DTP AID.)
Note: If a pre-Release 7.0 system is upgraded to Release 8.1 or higher, the GTP AID value will be
automatically updated to the AID of the first constituent member of the GTP for all pre-existing, GTP-
dependent SNCs, cross-connects, and D-SNCPs (1 Port or 2 Port).
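The GTP AID rule can be illustrated with a small helper that selects the first constituent member from a list of DTP AIDs. The AID parsing shown (last hyphen-separated field as the member index) is an assumption for illustration:

```python
def gtp_aid(member_dtp_aids):
    """Per the rule above, the GTP AID is the AID of the first constituent
    DTP member of the GTP. For AIDs like '1-A-3-L1-2', the final field is
    assumed to be the member index (hypothetical helper, not a system API)."""
    def member_index(aid):
        return int(aid.split("-")[-1])   # e.g. '1-A-3-L1-2' -> 2
    return min(member_dtp_aids, key=member_index)
```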
Figure 4-14: VCGs and GTPs for 40G Services on page 4-19 shows the relationship between sub-
clients, VCG, GTP, and DTPCTPs.
Please note the following guidelines and features of Infinera 40G service provisioning:
■ 40G SNC provisioning is supported for tributary-to-tributary SNCs. The DTN does not support
tributary-to-line and line-to-tributary 40G SNCs.
Note: For 40GbE SNCs, all nodes in the route must be running Release 7.0 software or above.
■ 40G SNCs are supported over Layer 1 Optical Private Network. This configuration requires four
separate 10G tributary ports on the TAM-2-10GTs used to create the L1 OPN, and each of these
tributary ports must be configured as Layer 1 OPN TE endpoints. Also, the provider network must
have 40G of bandwidth available to support the SNC.
■ For 40G SNCs, restoration and reversion are supported, as are route diversity and preferred
restoration routes.
■ 40G services can be transported through the network on intermediate nodes via 40G express
cross-connects, meaning that the intermediate nodes do not require a 40G TAM for signal
regeneration in order to transport 40G services.
different TAM type, such as TAM-2-10GR, but again, both TAMs on that end of the connection must
match, so they must both be TAM-2-10GRs. If the TAMs at each end of the connection are not all of
the same TAM type, a loss of alignment (LOA) alarm may be reported.
■ All of the channels carrying the 40G signal must be 10G channels, or channels that use the same
modulation format, either Binary Phase Shift Keying (BPSK) modulation or Quadrature Phase Shift
Keying (QPSK). If the 40G signal is transported over channels with a mix of 10G, BPSK, and
QPSK, a Loss of Alignment (LOA) alarm is raised and service is affected.
■ Multi-point configuration is supported for 40G services. See Multi-point Configuration on page 4-
23 for more information on this feature.
■ For manual cross-connect provisioning, 40G services can be routed through the network using
diverse OCGs, meaning that each of the 10Gbps or 10.3GCC channels can be routed through the
network using different OCGs. (In previous releases, each of the four sub-clients was required to
be routed in the same OCG through each of the nodes along the route.) This feature is not
supported for SNC provisioning. For SNCs, each of the four sub-clients must still be routed through
the same OCGs through each node along the route.
■ 1 Port D-SNCP and 2 Port D-SNCP protection are supported only for 40G services that are routed
on the same OCGs (and not on 40G services that are provisioned with diverse OCGs). See Digital
Subnetwork Connection Protection (D-SNCP) on page 4-122 for more information on this feature.
■ Fault management and troubleshooting tools that are available for 2.5Gbps and 10Gbps services
are also available for 40G services.
■ For performance monitoring support:
□ 40Gbps services support the PM data supported for 2.5Gbps and 10Gbps services.
□ 40GbE services support the PM data supported for 10GbE, with the exception of MAC layer
PM parameters, which are not supported by the TAM-1-40GE.
Note: When provisioning 100GbE tributary-to-tributary connections through an intermediate node with
back-to-back 10G TOMs, ensure that you provision ten individual 10.3G Clear Channel cross-
connects. In addition, the 10G TAM types at either end of the connection must be the same. In other
words, if one end of the tributary-to-tributary connection uses TAM-2-10GMs, all five of the TAMs on
that end of the connection must be TAM-2-10GMs. The other end of the connection can use a
different TAM type, such as TAM-2-10GR, but again, all five TAMs on that end of the connection must
match, so they must all be TAM-2-10GRs. If all of the TAMs at each end of the connection are not of
the same TAM type, a loss of alignment (LOA) alarm may be reported.
■ All of the channels carrying the 100G signal must be 10G channels, or channels that use the same
modulation format, either Binary Phase Shift Keying (BPSK) modulation or Quadrature Phase Shift
Keying (QPSK). If the 100G signal is transported over channels with a mix of 10G, BPSK, and
QPSK, a Loss of Alignment (LOA) alarm is raised and service is affected.
■ 100GbE cross-connects can be routed through the network using diverse OCGs, meaning that
each of the 10G channels can be routed through the network using different OCGs (provided that
the OCGs have the same number of hops and the skew is limited to 6μs).
■ 1 Port D-SNCP and 2 Port D-SNCP protection are supported for 100GbE services. See Digital
Subnetwork Connection Protection (D-SNCP) on page 4-122 for more information on this feature.
■ Multi-point configuration is supported for 100GbE services. See Multi-point Configuration on page
4-23 for more information on this feature.
■ Fault management and troubleshooting tools that are available for 2.5Gbps and 10Gbps services
are also available for 100GbE services.
■ 100GbE services support the PM data supported for 10GbE, with the exception of MAC layer PM
parameters for 100GbE services, which are supported on the TAM-1-100GR but not on the
TAM-1-100GE. The TAM-1-100GE does support Physical Coding Sublayer (PCS) PMs.
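The TAM-matching rule described earlier for 100GbE tributary-to-tributary connections over back-to-back 10G TOMs (all five TAMs on each end must be the same type, though the two ends may differ) can be expressed as a hypothetical check:

```python
def ends_tam_consistent(end_a_tams, end_b_tams):
    """For a 100GbE tributary-to-tributary connection through back-to-back
    10G TOMs, all five TAMs on a given end of the connection must be the
    same type (e.g. all TAM-2-10GM); the two ends may use different types.
    Illustrative sketch of the rule, not system software."""
    return len(set(end_a_tams)) == 1 and len(set(end_b_tams)) == 1
```

A mixed set of TAM types on either end violates the rule and, per the note, may result in an LOA alarm.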
Adaptation mode is shown in Figure 4-16: Adaptation of OTN Services across the Infinera Network on
page 4-22. The OTU2 client on one end is adapted to the STM-64 client at the other end.
Similarly, the OTU2e client on one end is adapted to the 10GbE client at the other end.
The TAM-2-10GM and DICM-T-2-10GM support OTN adaptation between the following interfaces:
■ OC-192/10GbE WAN PHY to/from OTU2
■ STM-64 to/from OTU2
■ 10GbE LAN to/from OTU2e
■ 10GbE LAN to/from OTU1e
The TAM-8-2.5GM supports OTN adaptation between the following interfaces:
■ OC-48 to/from OTU1
■ STM-16 to/from OTU1
When transporting across the network between two identical OTUk client interfaces, the DTN supports
the fault and performance monitoring of all the OTN layers, including the encapsulated/adapted client.
If adaptation is enabled, the DTN provides only intrusive monitoring of OTUk, ODUk Path, and TCM
overheads at the edges of the network.
In addition, OTN adaptation services support an option to configure the disable action for encapsulated
Ethernet client interfaces upon egress from the DTN network in case of a signal fail condition from the
network side of the client signal (see Encapsulated Client Disable Action on Egress (DTN) on page 3-48).
ODUk Transport
The DTN supports ODUk transport service for OTUk client/tributary interfaces, in which the OTUk service
is terminated and the contained ODUk is maintained intact and transported across the Infinera network.
The TAM-8-2.5GM supports transport for the following ODUk services, which are mapped to a 2.5G DTP:
■ ODU1
The TAM-2-10GM and DICM-T-2-10GM support transport for the following ODUk services, which are
mapped to a 10G DTP:
■ ODU2
■ ODU2e
■ ODU1e
ODUk transport services support an option to configure the disable action for encapsulated Ethernet
client interfaces upon egress from the DTN network if a digital wrapper (DTP) defect is detected (e.g.,
AIS) from the network side of the client signal (see Encapsulated Client Disable Action on Egress (DTN)
on page 3-48).
Multi-point Configuration
Multi-point Configuration is the feature that digitally broadcasts a single service (e.g., 1GbE, 2.5Gbps,
40Gbps, 100GbE, etc.) from a single node in the Intelligent Transport Network and drops the signal at
several distribution points in the network (see Figure 4-17: Multi-point Configuration on page 4-24).
Multi-point Configuration enables bridging of an incoming service from any port (OCG or client) to up to
16 other ports (OCG or client). The service can be bridged/duplicated to a port on the same line module
or TEM as the incoming service, or on any other line module or TEM. Multi-point Configuration also
allows an optional service return path from one of the legs of the digital bridge.
Multi-point Configuration is implemented by adding a unidirectional manual cross-connect from an
existing manual cross-connect or SNC within the Intelligent Transport Network to a new broadcast leg.
The existing cross-connect or SNC can be either unidirectional or bi-directional, and protected by 2 Port
or 1 Port Digital SNCP, or unprotected. Multi-point Configuration within the Intelligent Transport Network
can be protected by 2 Port or 1 Port Digital SNCP (see Digital Subnetwork Connection Protection (D-
SNCP) on page 4-122).
Multi-point legs can be created on the line endpoints of an SNC. The multi-point leg on an SNC must be a
unidirectional, drop cross-connect. A multi-point leg can use as its source the line endpoints of tributary-
to-tributary SNCs or of line-side terminating SNCs (see Line-side Terminating SNCs on page 4-12).
Note: If multi-point legs are added to an SNC and that SNC is deleted, locked, or restored, the cross-
connect on the local/source node will be deleted automatically and an event will be generated. The
remainder of the cross-connects in the multi-point leg of intermediate and remote/destination nodes
will be stale (orphaned) and must be deleted manually.
The primary application for Multi-point Configuration is to broadcast unidirectional traffic to multiple
endpoints in the network. Figure 4-17: Multi-point Configuration on page 4-24 shows a single signal
being distributed and dropped at various points in the network.
For DTN, the duplication of the service is performed on the cross-point switch of the line module or TEM
( Figure 4-18: Implementing Multi-point Configuration in a DTN on page 4-25), allowing the signal to be
sent in multiple directions. For DTN-X, the duplication of the service is performed in the switch fabric
(OXM). Figure 4-19: Implementing Multi-point Configuration in a DTN-X (Hairpin) on page 4-25 and
Figure 4-20: Implementing Multi-point Configuration in a DTN-X (Add/Drop) on page 4-26 show add/drop
and hairpin Multi-point Configurations on a DTN-X (note that hairpin and add/drop configurations can be
implemented for the same signal).
Figure 4-21 Multi-point Configuration Leg Used for Digital Test Access
Optical Express
In addition to the digital add/drop capabilities that are supported via the combination of the line modules
and the BMMs of a node, the DTN and DTN-X can also support direct BMM-to-BMM Optical Express,
wherein a fiber jumper cable is connected from the Optical Carrier Group (OCG) port on one BMM to the
corresponding OCG port of another BMM (the BMMs do not have to reside in the same chassis).
Note: Unless specifically noted otherwise, all references to the BMM will refer interchangeably to the
BMM, BMM2, BMM2P, BMM2C, BMM1H, and/or BMM2H.
Figure 4-22: Optical Express in an Intelligent Transport Network on page 4-28 shows an example of
Optical Express in an Intelligent Transport Network. Note that Node D is configured for only add/drop of
its OCGs: no Optical Express is configured on Node D. For standard Optical Express configuration in a
ring network, there must be at least one node that add/drops all of its OCGs. For information on support
of Optical Express loops in a ring network, see Optical Express Loops on page 4-30.
Before disconnecting an OCG fiber, it is important to lock the BMM OCGs at the Optical Express site.
(And then unlock the BMM OCG once the fiber is reconnected.)
Note: Before disconnecting an OCG fiber between a CMM and a BMM, set the associated CMM
OCG to the locked admin state. (And then unlock the CMM OCG once the fiber is reconnected.)
Optical Express is supported on the following BMMs:
■ BMM1H-4-CX2
■ BMM2-8-CEH3
■ BMM-4-CX1-A
■ BMM2H-4-R3-MS
■ BMM-4-CX2-MS-A
■ BMM2H-4-B3
■ BMM-4-CX3-MS-A
■ BMM2P-8-CH1-MS
■ BMM2-8-CH3-MS
■ BMM2P-8-CEH1
■ BMM2-8-CXH2-MS
■ BMM2C-16-CH
BMM2s, BMM2Ps, and Gen 1 BMMs can optically express any OCG supported by the module. BMM2Cs
support Optical Express only for 500Gbps OCGs from AOLM/AOLM2/AOLX/AOLX2/SOLM/SOLM2/
SOLX/SOLX2. See the DTN and DTN-X System Description Guide for details.
Before configuring Optical Express, take note of the following configuration guidelines:
■ Optical Express is supported on links that are configured with RAMs. Optical Express is also
supported on links that are configured with ORMs and DSEs.
■ Optical Express is supported on all 4 OCG ports of 40-channel BMMs (this also applies to the 4
OCG ports for OCG 5-8 on the BMM2H-4-B3 expansion BMM).
■ Optical Express is supported on all 8 OCG ports of BMM2s (this also applies to the 8 OCG ports for
OCGs 9-16 on the BMM2-8-CEH3 expansion BMM).
■ BMM2s, BMM2Ps, or BMM2Cs can be used at an intermediate node to optically express traffic that
originates/terminates on BMM2s, BMM2Ps, or BMM2Cs. However, note that for 16-channel BMMs,
a connection (either add/drop or Optical Express) must be made on a base OCG (OCG 1-8) before
a connection can be made on one of the expansion OCGs (i.e., OCGs 9-16).
■ Optical Express is supported on all 16 OCG ports of BMM2Cs with the following caveats:
□ Pre-provisioning of OCGs or physical OCG fiber connections for Auto-discovery is required
only for OCGs 1 - 8.
□ Optical Express is not supported on OCGs 1 - 8 when any of the corresponding peer ports
(OCGs 9 - 16) are provisioned for add/drop and vice versa due to implementation of OCG
port pairing on the BMM2C (refer to the Line Systems Hardware Description Guide for further
information). Each pair of OCG ports (OCG 1/OCG 9, OCG 2/OCG 10, OCG 3/OCG 11, OCG
4/OCG 12, OCG 5/OCG 16, OCG 6/OCG 13, OCG 7/OCG 14, and OCG 8/OCG 15) can
either be dropped in a BMM2C or expressed by the BMM2C. For example, if one of the
paired ports is expressed (e.g., OCG 1), the peer port (OCG 9) is automatically expressed.
And if one of the paired ports is dropped (e.g., OCG 2), the peer port (OCG 10) can only be
dropped in the same BMM2C and cannot be expressed.
■ The modules in an Optical Express connection both must be BMM2s, both must be BMMs, both
must be BMM2Ps, or both must be BMM2Cs. Optical Express is not supported between a mix of
these BMM types (e.g., Optical Express is not supported between a BMM2 and a BMM, or a BMM2
and a BMM2P, etc.). Optical Express is supported between full-height and half-height BMMs and
between full-height and half-height BMM2s, as long as the OCG number is equivalent on both
modules in the connections.
■ Optical Express connections are supported between BMMs with different amplifier settings (SLTE,
Native, Third Party Amplifier). For example, a BMM2 configured for SLTE can support Optical
Express with a BMM2 that is set to Native Automated mode. Note, however, that Auto-discovery is
not supported for any Optical Express configuration where one or both BMMs is set to SLTE mode
or Third Party Amplifier mode. See the DTN Turn-up and Test Guide for details.
■ For Optical Express connections between BMMs, the BMM OCG can be locked without affecting
traffic. However, for Optical Express connections between BMM2s, between BMM2Ps, and
between BMM2Cs, if the BMM2/BMM2P/BMM2C OCG is locked, Auto-discovery is re-triggered,
thus impacting traffic. Make sure that BMM2/BMM2P/BMM2C OCGs are unlocked for Auto-
discovery to succeed, thereby restoring traffic. (See Optical Data Plane Auto-discovery on page 3-
20.)
■ Optical Express termination (via O-E-O conversion) is supported by the following module types only
(but there is no requirement that all optically expressed OCGs within a ring must be terminated at a
single node):
□ All AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, and SOLX2 module types
□ DLM-n-C2 (where n=1 to 8)
□ DLM-n-C3 (where n=1 to 8)
□ XLM-n-C3 (where n=1 to 8)
□ ADLM-T4-n-C4 (where n=1, 3, 5, 7)
□ ADLM-T4-n-C5 (where n=1, 3, 5, 7)
□ SLM-T4-n-C4 (where n=1, 3, 5, 7)
□ SLM-T4-n-C5 (where n=1, 3, 5, 7)
□ AXLM-T4-n-C4 (where n=1, 3, 5, 7)
□ AXLM-T4-n-C5 (where n=1, 3, 5, 7)
□ AXLM-80-T1-C5
□ ADLM-80-T1-C5
□ SLM-80-T1-C5
■ An Optical Express connection requires correct OCG levels between the two BMMs. BMM2s
contain a variable optical attenuator (VOA), but 40-channel BMMs do not contain a VOA and
therefore require a 20dB or 22dB pad for correct optical span engineering. The express OCG
power needs to be within a 3dB capture window (1dB above and 2dB below the target power).
Typical target power for the receive OCG on a 40-channel BMM is -14dBm to -13dBm.
■ Optical Express requires the correct placement of DCM units (as determined by the span design).
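The BMM2C OCG port-pairing rule listed above can be modeled directly from the enumerated pairs. The helpers below are an illustrative sketch of the rule, not a system API:

```python
# OCG port pairs on the BMM2C as enumerated in the guidelines above; each
# pair must either both be expressed or both be dropped in the same BMM2C.
OCG_PAIRS = [(1, 9), (2, 10), (3, 11), (4, 12),
             (5, 16), (6, 13), (7, 14), (8, 15)]

def peer_ocg(ocg):
    """Return the paired peer of a BMM2C OCG port."""
    for a, b in OCG_PAIRS:
        if ocg == a:
            return b
        if ocg == b:
            return a
    raise ValueError(f"OCG {ocg} is not a BMM2C OCG port")

def pairing_consistent(config):
    """config maps OCG number -> 'express' or 'drop'. The pairing rule
    means a provisioned OCG forces the same role on its peer; an
    unprovisioned peer is treated as following its partner's role."""
    return all(config.get(peer_ocg(ocg), role) == role
               for ocg, role in config.items())
```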
To configure an Optical Express loop in the network, each BMM in the loop must be enabled for the
Optical Express route loop (OER loop) feature. Unless this feature is enabled on each BMM, an Optical
Express loop is not supported, meaning that at least one node in the ring is required to add/drop all of its
OCGs.
The following BMMs support Optical Express loops:
■ BMM2P-8-CH1-MS
■ BMM2-8-CXH2-MS
■ BMM2-8-CH3-MS
■ BMM2H-4-R3-MS
■ BMM2C-16-CH
Note: The Power Control Loop Mode feature is not supported for OCGs that are provisioned for
manual mode (for example, when Optical Express is configured for BMM2s set to SLTE mode).
Before disconnecting an OCG fiber, it is important to lock the BMM OCGs at the Optical Express site.
(And then unlock the BMM OCG once the fiber is reconnected.)
Note: Before disconnecting an OCG fiber between a CMM and a BMM, set the associated CMM
OCG to the locked admin state. (And then unlock the CMM OCG once the fiber is reconnected.)
As shown in Figure 4-24: Network Migration with Optical Service Bridge and Roll on page 4-32, an
Infinera node is installed at one end of the existing network and traffic is routed through the client ports. At
the other end of the existing network, another Infinera node is installed and connected to the first node,
thus creating a bridge path through the Intelligent Transport Network. Traffic in the existing DWDM
network is then rolled from the existing DWDM network onto the Intelligent Transport Network.
The Bridge and Roll feature can also be applied to routes in the Intelligent Transport Network in order to
perform maintenance functions on a node or link with minimal traffic disruption.
Figure 4-24 Network Migration with Optical Service Bridge and Roll
Note: In lieu of the Bridge and Roll feature, the Digital Network Administrator (DNA) offers a function
where unprotected SNCs can be converted to protected SNCs (either as part of 1 Port D-SNCP or 2
Port D-SNCP). This feature can be used to add a protect leg to an existing SNC. (DNA also allows
the user to convert protected SNCs to unprotected SNCs.)
Note: Unless specifically noted otherwise, all references to “line module” will refer interchangeably to
either the DLM, XLM, ADLM, AXLM, SLM, AXLM-80, ADLM-80 and/or SLM-80 (DTC/MTC only) and
AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, and/or SOLX2 (XTC only). The term “LM-80”
is used to specify the LM-80 sub-set of line modules and refers interchangeably to the AXLM-80,
ADLM-80 and/or SLM-80 (DTC/MTC only). Note that the term “line module” does not refer to TEMs,
as they do not have line-side capabilities and are used for tributary extension.
Note: As with cross-connects on the DTC and MTC, the endpoints of an XTC cross-connect both
must be on the same chassis. Unlike the DTC/MTC, the XTC retains tributary-side endpoints of a
cross-connect after the cross-connect is deleted.
Add/Drop Cross-connect
The add/drop cross-connect is a bidirectional cross-connect that associates the tributary-side endpoint to
the line-side endpoint by establishing connectivity between a TOM tributary port (residing within an OTM)
to a line-side optical channel within a line module.
The add/drop type of cross-connect is used to add/drop traffic at a Digital Add/Drop site (see Digital
Terminal Configuration) and to drop traffic at a site as part of the Multi-point Configuration feature (see
Multi-point Configuration on page 4-23).
Figure 4-25: Add/Drop Cross-connects on a DTN-X on page 4-34 shows an example of an add/drop
cross-connect on a DTN-X.
Add Cross-connect
An add cross-connect is a unidirectional cross-connect that associates the tributary-side endpoint with the
line-side endpoint by establishing connectivity between a TOM tributary port (residing on an OTM) and a
line-side optical channel within a line module. Any tributary port can be connected to any line-side optical
channel.
The add type of cross-connect is used to add traffic at a Digital Add/Drop site (see Digital Terminal
Configuration).
Figure 4-26: Add Cross-connect on a DTN-X on page 4-35 shows an example of an add cross-connect
on a DTN-X.
Drop Cross-connect
A drop cross-connect is a unidirectional cross-connect that associates the line-side endpoint with the
tributary-side endpoint by establishing connectivity between a line-side optical channel within a line
module and a TOM tributary port (residing within an OTM).
The drop type of cross-connect is used to drop traffic at a Digital Add/Drop site (see Digital Terminal
Configuration), and can be used to drop traffic at a site as part of the Multi-point Configuration feature
(see Multi-point Configuration on page 4-23).
Figure 4-27: Drop Cross-connect on an XTC on page 4-36 shows an example drop cross-connect.
Express Cross-connect
An express cross-connect is a unidirectional or bidirectional cross-connect that associates one line-side
endpoint to another line-side endpoint by establishing connectivity between the optical channels of two
different OCGs (line modules) within a DTN-X.
The express cross-connect type is transparent to the payload type encapsulated in the OTN wrapper. A
typical application for this cross-connect is traffic switching and grooming at OTN switching sites.
Figure 4-28: Express Cross-connect on an XTC on page 4-37 shows an example express cross-
connect.
Hairpin Cross-connect
A hairpin cross-connect is a unidirectional or bidirectional cross-connect that is used to cross-connect two
tributary ports within a single XTC chassis. Hairpin circuits are supported in the following configurations
(see Figure 4-29: Hairpin Cross-connects on a DTN-X on page 4-38):
■ Between two tributary ports within a given OTM. The two tributary ports may reside on the same or
different TIMs.
■ Between a tributary port on one OTM and a tributary port on another OTM.
Hairpin cross-connects do not use the line-side optical channel resource. The hairpin cross-connects are
used in Metro applications for connecting two buildings within a short reach without laying new fibers.
Note: For 100GbE services on the TIM-1-100GE/TIM-1B-100GE and for OTU4 transport without FEC
on the TIM-1-100G/TIM-1-100GM/TIM-1-100GX, the DTN-X can transport these services across
multiple channels (OCGs). See Multi-OCG Support for VCAT Services on page 4-42.
Figure 4-30: Virtual Concatenation Mode (100GbE Example) on page 4-39 below shows an example of
virtually concatenated 100GbE transport; Figure 4-31: Non-Virtual Concatenation Mode (100GbE
Example) on page 4-40 shows non-virtually concatenated 100GbE transport.
The VCAT option is selected when creating a service using the network mapping options (see DTN-X
Network Mapping on page 4-52). As with VCAT on the DTN, VCAT on the DTN-X uses the concepts of
virtual concatenation groups (VCGs) and group termination points (GTPs):
■ The VCG is created automatically when a VCAT cross-connect or SNC is created. The VCG is
created on the client side and is the monitoring point for the multiple ODU2i CTPs. Unlike VCGs for
endpoints on the DTC/MTC, VCGs with endpoints on the XTC do not contain sub-client entities.
■ The GTP is a list of user-specified ODUs that are logically grouped together into a GTP that serves
as an endpoint in a cross-connect or an SNC. Unlike for VCGs that are client-side only, for GTPs
there is a client-side GTP and a line-side GTP. The GTP is the termination point used to identify the
service for creating cross-connects, SNCs, and protection groups (1 Port and 2 Port Digital SNCP).
The GTP can be created as part of provisioning a VCAT cross-connect or SNC, or the GTP can be
created independently for subsequent use in provisioning.
Figure 4-32: VCG and GTPs for a 100GbE DTN-X VCAT Service on page 4-41 shows the relationship
between VCGs, GTP, and ODUs.
Figure 4-32 VCG and GTPs for a 100GbE DTN-X VCAT Service
Please note the following guidelines and behaviors for DTN-X VCAT service provisioning:
■ 1 Port D-SNCP protection is supported for VCAT services with endpoints on the XTC. The
protection units must both be VCAT or both must be non-VCAT.
■ 2 Port D-SNCP protection is supported for VCAT services with endpoints on the XTC. 2 Port D-
SNCP does support a mix of VCAT and non-VCAT protection units, so one route may be VCAT
and the other route may be non-VCAT.
Note: See DTN-X Service Capabilities on page A-1 for a full list of the services and modules
that support VCAT and D-SNCP.
■ For both cross-connect and SNC provisioning, VCAT services must be routed through the same
OCG, meaning that all ODU constituents must be on the same line module.
■ For VCAT SNCs with endpoints on the XTC, only tributary-to-tributary SNCs are supported. Line-
terminating SNCs (tributary-to-line or line-to-tributary SNCs) are not supported. For manual cross-
connects with endpoints on the XTC, VCAT is supported for tributary-to-tributary manual cross-
connects and also for line-terminating manual cross-connects.
■ Restoration and revertive restoration are supported for VCAT SNCs with endpoints on the XTC.
■ Route diversity is supported for VCAT SNCs with endpoints on the XTC.
■ Both Binary Phase Shift Keying (BPSK) modulation and Quadrature Phase Shift Keying (QPSK)
modulation are supported for VCAT services.
Note: See DTN-X Service Capabilities on page A-1 for the service provisioning and diagnostic
capabilities supported by the XTC-10, XTC-4, XTC-2, and XTC-2E.
Note: When creating adaptation services, both head-end and tail-end nodes must be running Release
9.0 or higher.
cDTF Transport
In order to support 1GbE and/or 2.5Gbps services from a DTC/MTC, the DTN-X supports the cDTF
service type (Clear Channel Digital Transport Frame), which is an 11.1G Clear Channel service that is an
aggregate of sub-10G services. (Note that this service is called 11G1CC in the TL1 interface.)
The cDTF is mapped to ODUflexi and requires 9 timeslots (see Provisioning ODUflexi Services on page
4-56 for an explanation of ODUflexi).
Connectivity between the XTC and a DTC/MTC is achieved via cDTF using the following module pair
combinations:
■ A DICM on the DTC/MTC and an XICM on the XTC (see DTN Interconnect Module (DICM) and
DTN-X Interconnect Module (XICM)).
Figure 4-33 cDTF Use for Low-speed Services over DTN-X Network (2.5Gbps Example)
For OTU4 transparent transport without FEC, the DTN-X separates the OTU4 client signal into ten ODU2i
virtually concatenated containers (ODU2i-10v network mapping) for transportation across the network.
Note: All ten ODU2i must be routed over the same OCG/SCG, and all ten ODU2i entities must
traverse through the same modulation type (QPSK or BPSK).
Figure 4-34: Virtual Concatenation Mode (OTU4 Example) on page 4-45 below shows an example of
virtually concatenated OTU4 transport.
sorting and listing the line ODU2i entities so that the order sequence of the tributary ODU2i at the source
and destination nodes matches.
■ OTU4 transparent transport without FEC is supported with either BPSK or QPSK modulation
(mixed modulation is not supported).
■ Restoration and D-SNCP (1 Port and 2 Port D-SNCP) are not supported for OTU4 transparent
transport without FEC services.
■ For line-side endpoints, only manual cross connects are supported. Line-terminating SNCs
(tributary-to-line SNCs or line-to-line SNCs) are not supported for OTU4 transparent transport
without FEC.
■ The OTU4 supports the alarms, performance monitoring, and diagnostics supported for OTU4
switching services (see ODU Switching on page 4-46). However, note the following:
□ The ODU4 is not monitored for alarms, performance monitoring, diagnostics, etc.
□ The virtual concatenation group (VCG) is not monitored for performance monitoring.
ODU Switching
The DTN-X supports ODUk switching, in which the client OTUk overhead is terminated at the ingress.
The ODUk overhead is switched at every network hop from one interface to the next interface (meaning
that the ODUk overhead is accessible at every hop).
Note: When creating ODU switching services, both head-end and tail-end nodes must be running
Release 9.0 or higher.
Figure 4-36: Entities Created for ODU Switching (ODU2 Example) on page 4-47 shows the entities that
are created at the DTN-X to perform ODU2 switching.
Figure 4-37: Entities Created for ODU Switching (ODU0 Example) on page 4-48 shows the entities that
are created at the DTN-X to perform ODU0 switching.
ODU Multiplexing
The DTN-X supports ODU multiplexing, in which the client OTUk and ODUk overhead is terminated at the
ingress, and the ODUj is switched across the network. The ODUj overhead is switched at every network
hop from one interface to the next interface (meaning that the ODUj overhead is accessible at every hop).
For each of the supported ODUj granularities, services can be provisioned either via GMPLS circuits
(SNCs) or by manually configured cross-connects. In addition, ODU multiplexed services can be
protected by 1 Port D-SNCP (see 1 Port D-SNCP on page 4-126); 2 Port D-SNCP is not supported for
ODU multiplexing services. ODU multiplexed SNCs also support restoration (see Dynamic GMPLS Circuit
Restoration on page 4-140).
The TIM-5-10GX, TIM-1-100GX, and LIM-1-100GX support single-stage multiplexing:
The TIM-5-10GX supports the following ODU multiplexing options ( Figure 4-38: ODU Multiplexing
(TIM-5-10GX) on page 4-49):
■ ODU1 (low order ODUj) to ODU2 (high order ODUk) to OTU2
■ ODU0 (low order ODUj) to ODU2 (high order ODUk) to OTU2
The TIM-1-100GX and LIM-1-100GX support the following ODU multiplexing options ( Figure 4-39: ODU
Multiplexing (TIM-1-100GX and LIM-1-100GX) on page 4-49):
Note: The TIM-1-100GX and LIM-1-100GX support an Operating Mode configuration for supporting
ODU Multiplexing services (see Operating Mode for TIM-1-100GM, TIM-1-100GX, and LIM-1-100GX
on page 4-50).
For ODU0 switching, the TIM-1-100GX/LIM-1-100GX must be in the ODUk-ODUj operating mode. In
ODUk-ODUj mode, the module can support a mix of ODU0, ODU2, and ODU2e services.
Low order ODUj entities can be multiplexed into high-order ODUk as follows:
■ A high order ODU2 can contain:
□ 8 ODU0
□ 4 ODU1
□ a mix of the above
■ A high order ODU4 can contain:
□ 10 ODU2
□ 10 ODU2e
□ 80 ODU0s
□ a mix of the above
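The containment rules above can be sketched as a simple capacity check, under the assumption that occupancy is tracked in 1.25G tributary slots (ODU0 = 1 slot, ODU1 = 2, ODU2/ODU2e = 8, consistent with the counts listed above; the helper names are illustrative, not a product API):

```python
# Capacity check for multiplexing low-order ODUj into a high-order
# ODUk, using assumed 1.25G tributary slot costs.
SLOT_COST = {"ODU0": 1, "ODU1": 2, "ODU2": 8, "ODU2e": 8}
CAPACITY = {"ODU2": 8, "ODU4": 80}          # total 1.25G slots
# Containment rules from the list above.
ALLOWED = {"ODU2": {"ODU0", "ODU1"}, "ODU4": {"ODU0", "ODU2", "ODU2e"}}

def fits(high_order, low_order_list):
    """True if the mix of low-order ODUj fits in the high-order ODUk."""
    if any(lo not in ALLOWED[high_order] for lo in low_order_list):
        return False
    return sum(SLOT_COST[lo] for lo in low_order_list) <= CAPACITY[high_order]

print(fits("ODU2", ["ODU1"] * 2 + ["ODU0"] * 4))  # True: a mix, 8 slots
print(fits("ODU4", ["ODU2e"] * 10))               # True: 80 slots
print(fits("ODU4", ["ODU0"] * 81))                # False: over capacity
```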
For ODU1 multiplexing on the TIM-5-10GX, the TIM-5-10GX supports two user-configurable time slot
granularities for the high-order ODU2:
■ 1.25G granularity—The ODU2 is split into 8 sections of 1.25G each
■ 2.5G granularity—The ODU2 is split into 4 sections of 2.5G each
Note that when the time slot granularity is set for 1.25G, the 8 sections of the ODU2 are paired like this:
■ Pair #1: time slots 1 and 5
■ Pair #2: time slots 2 and 6
■ Pair #3: time slots 3 and 7
■ Pair #4: time slots 4 and 8
If either time slot in a pair is configured for an ODU1 service, the other time slot in the pair can be used
only for an ODU1 service as well; the same applies to ODU0 services. A mix of ODU1 and ODU0 services
is not supported in a time slot pair on the TIM-5-10GX. For example, if an ODU1 service uses time slots 1
and 2, then time slots 5 and 6 cannot be used by ODU0; they can be used only by an ODU1 rate service.
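The pairing rule can be sketched as follows (a minimal illustration of the constraint, not a product interface):

```python
# Time slot pairing on the TIM-5-10GX at 1.25G granularity: a pair
# may carry only one service rate (ODU0 or ODU1), never a mix.

def partner(ts):
    """Return the paired time slot (1..8): 1<->5, 2<->6, 3<->7, 4<->8."""
    return ts + 4 if ts <= 4 else ts - 4

def pair_allows(assignments, ts, rate):
    """assignments maps time slot -> 'ODU0'/'ODU1' for slots in use."""
    other = assignments.get(partner(ts))
    return other is None or other == rate

slots = {1: "ODU1", 2: "ODU1"}        # an ODU1 occupying slots 1 and 2
print(pair_allows(slots, 5, "ODU0"))  # False: slot 5 is paired with slot 1
print(pair_allows(slots, 5, "ODU1"))  # True: same rate as the pair
```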
Note: The TIM-1-100GX and LIM-1-100GX support all of these options. The TIM-1-100GM supports
only ODU4-ODL and GBE100-ODU4-4i-2ix10V.
mode can inter-operate with TIM-1-100GXs or LIM-1-100GXs with the Operating Mode setting of
ODU4-ODU2-ODU2E or ODUk-ODUj.
Note: Before the Operating Mode can be changed, all existing services on the TIM/LIM must be
deleted and the equipment must be administratively locked.
Note: Do not physically remove or cold reset the TIM/LIM when the TIM/LIM is performing an
Operating Mode update. Wait until the Operating Mode Status is “Active” before removing the TIM/
LIM.
Note: If downgrading from Release 16.2 or higher to pre-Release 16.2, any TIM/LIM set for 100GbE
or one of the ODU multiplexing modes must first be configured to an operating mode supported in the
release to which the node is downgrading:
■ GBE100-ODU4-4i-2ix10V operating mode is supported in Release 16.2 and higher.
■ ODUk-ODUj operating mode is supported in Release 16.1 and higher.
■ ODU4-ODU2-ODU2E operating mode is supported in Release 11.0 and higher.
■ ODU4-ODL operating mode is supported in Release 10.0 and higher.
The TIM-1-100GX/LIM-1-100GX requires specific firmware to support each of these modes. If the
operating mode is changed, the TIM/LIM will download and apply the appropriate firmware to support the
new operating mode. The status of the operating mode and firmware synchronization can be viewed in
Operating Mode Status fields on the equipment. The TIM/LIM will indicate the current status:
■ Not determined—The module is pre-provisioned or has not yet booted up after physical installation.
■ Change in progress—The module’s operating mode has been changed and the TIM is currently
downloading the required firmware and programming the provisioned operating mode. (All user
operations are blocked for the TIM/LIM during this time.)
■ Active—The module’s firmware matches its operating mode; firmware and software operating
modes are in sync.
Table 4-1 Cross-connect Network Mapping for Various Client Interfaces (continued)
PAYLOAD | FROMAID | TOAID | NETMAP (* indicates the default value) | Description
100GbE | ODU4 (Tributary) | ODU4 (Line) | ODU4 | Add/drop cross connect with 100GbE payload mapped to ODU4
100GbE | GTP AID | GTP AID | ODU2i-10v | Add/drop cross connect with 100GbE payload (VCAT mapping)
OTU4 | ODU4 (Tributary) | ODU4 (Line) | ODU4 | Add/drop cross connect for G.709 standard ODU4 service
OTU4 | GTP AID | GTP AID | ODU2i-10v | Add/drop cross connect with OTU4 payload without FEC (transparent transport, VCAT mapping)
OTU3 | ODU3 (Tributary) | ODU3 (Line) | ODU3 | Add/drop cross connect for G.709 standard ODU3 service
OTU3e1 | ODU3e1 (Tributary) | ODU3e1 (Line) | ODU3e1 | Add/drop cross connect for G.Sup43 ODU3e1 service
OTU3e2 | ODU3e2 (Tributary) | ODU3e2 (Line) | ODU3e2 | Add/drop cross connect for G.Sup43 ODU3e2 service
OTU2 | ODU2 (Tributary) | ODU2 (Line) | ODU2 | Add/drop cross connect for G.709 standard ODU2 service
OTU2 | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with OTU2 payload with FEC (transparent transport)
OTU2e | ODU2e (Tributary) | ODU2e (Line) | ODU2e | Add/drop cross connect for G.709 standard ODU2e service
OTU2e | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with OTU2e payload with FEC (transparent transport)
OTU1e | ODU1e (Tributary) | ODU1e (Line) | ODU1e | Add/drop cross connect for G.709 standard ODU1e service
OTU1e | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with OTU1e payload with FEC (transparent transport)
10G CC | ODU2 (Tributary) | ODU2 (Line) | ODU2-BMP*, ODU2-AMP | Add/drop cross connect with 10G Clear Channel payload and default network mapping set to ODU2 BMP
10.3G CC | ODU2e (Tributary) | ODU2e (Line) | ODU2e | Add/drop cross connect with 10.3G Clear Channel payload, mapped to an ODU2e
10.3G CC | ODU1e (Tributary) | ODU1e (Line) | ODU1e | Add/drop cross connect with 10.3G Clear Channel payload, mapped to an ODU1e
10.3G CC | ODU2i (Tributary) | ODU2i (Line) | ODU2i | Add/drop cross connect with 10.3G Clear Channel payload, mapped to an ODU2i
10G Fibre Channel | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with 10G Fibre Channel payload
10G DTF/cDTF (11.1G Clear Channel) | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with 11.1G Clear Channel payload
8G Fibre Channel | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (7 timeslots) | Add/drop cross connect with 8G Fibre Channel payload
GFP | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (variable number of ODU0 timeslots) | Add/drop PXM services
High-order to low-order ODU mapping is created as part of service provisioning (see above table for the
low-order ODUj mapping for the client signals supported on the DTN-X).
The following table shows the number of ODU0 timeslots required for each rate of low-order ODUj.
Table 4-2 Timeslots Required for Low Order ODUj Entities (continued)
ODUj Rate | Required Number of Time Slots
ODU4 | 80
ODUflexi | (number varies, see previous table)
Note: Each of the 5 TOM slots on the 10G TIMs has 8 designated timeslots on the TIM/XICM.
However, for services that require 9 timeslots, the TIM/XICM automatically uses a timeslot from TOM
slot 5 to provide the additional timeslot for these services. This means that if a service is already
provisioned on slot 5, the TIM/XICM will not be able to support a new service that requires 9
timeslots. For this reason, it is recommended to provision all services on 10G TIMs/XICMs starting
with slot 1, keeping slot 5 available until the TIM/XICM is fully utilized.
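The note above can be sketched as a simple admission check. This is an illustration only; it assumes a 9-timeslot service is homed on one of TOM slots 1-4 and borrows its ninth timeslot from slot 5, per the stated rule that a service already on slot 5 blocks new 9-timeslot services:

```python
# Sketch: why 9-timeslot services on a 10G TIM/XICM depend on TOM
# slot 5. Each of the 5 TOM slots owns 8 timeslots; the 9th
# timeslot is borrowed from slot 5.

def can_add_9ts_service(occupied_tom_slots):
    """occupied_tom_slots: set of TOM slots (1..5) already carrying
    a service. A new 9-timeslot service needs a free home slot
    among slots 1-4 plus a spare timeslot borrowed from slot 5."""
    free_home = {1, 2, 3, 4} - occupied_tom_slots
    return bool(free_home) and 5 not in occupied_tom_slots

print(can_add_9ts_service({1, 2}))     # True: slot 5 still free
print(can_add_9ts_service({1, 2, 5}))  # False: slot 5 already used
```

This is why the note recommends filling services starting with slot 1 and keeping slot 5 free as long as possible.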
Note: BPSK is supported only on the C13 versions of SOFx-500 and SOLx2; it is not supported on
C12 versions of SOFx-500 and SOLx2.
3QAM is supported only on the C13 versions of SOFx-500 and on C8 versions of AOFx-500; it is not
supported on C12 versions of SOFx-500, on C3, C5, or C6 versions of AOFx-500, or on line modules
other than OFx-500.
The ODUCni framework supports the same digital provisioning services as ODUn services, such as add/
drop and express cross-connects/SNCs, bi-directional/unidirectional services, multipoint services, virtual
concatenation, etc. ODUCni services are constrained only by the bandwidth of the associated OTN entity
(in the case of 3QAM services, this means that ODUCni services require bandwidth in multiples of
37.5Gbps; BPSK services require bandwidth in multiples of 25Gbps).
The actual ODUCni service is represented in the management interfaces as ODUCni-M, where:
■ n = the number of 100G OTUC frames, with the bit rate rounded up to the nearest hundred. For
example, a bit rate of 37.5G is rounded up to the nearest 100 (n=1).
■ M = the number of 5G timeslots. The value M is used in the ODUCni-M nomenclature only when
the bit rate is not divisible by 100G. For example, if the bit rate is 37.5G, M=7.5; but for a bit rate of
300G there is no M value, so the service type is denoted ODUC3i.
The following ODUCni rates are supported:
■ 37.5G (ODUC1i-7.5)
■ 75G (ODUC1i-15)
■ 100G ( ODUC1i)
■ 112.5G (ODUC2i-22.5)
■ 150G (ODUC2i-30)
■ 187.5G (ODUC2i-37.5)
■ 200G (ODUC2i)
■ 225G (ODUC3i-45)
■ 250G (ODUC3i-50)
■ 262.5G (ODUC3i-52.5)
■ 300G (ODUC3i)
■ 337.5G (ODUC4i-67.5)
■ 375G (ODUC4i-75)
Note: For ODUCni on OFx-1200 or XT-3600, the ODUCni rates can vary from 50G to 1.2T based on
the super channel configuration.
For high order ODUCni entities with 3QAM modulation, the carrier mode model goes beyond single-
carrier/dual-carrier modes. As shown in the table below, each ODUCni rate requires a different number of
carriers. When provisioning ODUCni services, the user can combine any of the available carriers to be
used for the service.
The number of tributary slots required for the ODUCni service is the ODUCni container capacity divided
by 1.25. For example, for the ODUC2i-22.5 service which has a maximum capacity of 112.5Gbps, the
number of tributary slots required is 112.5/1.25, which is 90 tributary slots.
ODUCni Rate | Modulation format | Maximum Capacity of ODU Container (Gbps) | Number of Carriers Required for the Service | Number of Tributary Slots
ODUC1i-15 | 3QAM | 75 | 2 | 60
ODUC1i | BPSK | 100 | 4 | 80
ODUC2i-22.5 | 3QAM | 112.5 | 3 | 90
ODUC2i-30 | 3QAM | 150 | 4 | 120
ODUC2i-30 | BPSK | 150 | 6 | 120
ODUC2i-37.5 | 3QAM | 187.5 | 5 | 150
ODUC2i | BPSK | 200 | 8 | 160
ODUC3i-45 | 3QAM | 225 | 6 | 180
ODUC3i-50 | BPSK | 250 | 10 | 200
ODUC3i-52.5 | 3QAM | 262.5 | 7 | 210
ODUC3i | 3QAM | 300 | 8 | 240
ODUC4i-67.5 | 3QAM | 337.5 | 9 | 270
ODUC4i-75 | 3QAM | 375 | 10 | 300
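The naming and tributary slot arithmetic described above can be sketched as follows (a minimal illustration; the function names are not a product interface):

```python
# Sketch of the ODUCni-M nomenclature and tributary slot arithmetic:
# n = number of 100G OTUC frames (rate rounded up to nearest 100),
# M = rate / 5G timeslots (omitted when the rate divides 100G),
# tributary slots = container capacity / 1.25G.
import math

def oducni_name(rate_gbps):
    n = math.ceil(rate_gbps / 100)      # number of 100G OTUC frames
    if rate_gbps % 100 == 0:
        return f"ODUC{n}i"              # no -M suffix on exact multiples
    m = rate_gbps / 5                   # number of 5G timeslots
    return f"ODUC{n}i-{m:g}"

def tributary_slots(rate_gbps):
    return int(rate_gbps / 1.25)        # 1.25G per tributary slot

print(oducni_name(37.5))       # ODUC1i-7.5
print(oducni_name(300))        # ODUC3i
print(tributary_slots(112.5))  # 90, as in the ODUC2i-22.5 example
```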
Existing Line Side ODU Containers (for reference)
Table 4-3 Tributary Slots and Capacities of Line Side Containers (continued)
ODUCni Rate | Modulation format | Maximum Capacity of ODU Container (Gbps) | Number of Carriers Required for the Service | Number of Tributary Slots
ODU3i+ | SC-PM-QPSK | 50 | 1 | 40
ODU3i+ | DC-PM-BPSK | 50 | 2 | 40
ODU4i | DC-PM-QPSK | 100 | 2 | 80
A port group cannot simultaneously support both GMP and AMP mappings. Therefore, each port group
can support either BMP and GMP mappings (the default) or BMP and AMP mappings. The user can
configure each of the four port groups; all ports in the port group share the same
mapping configuration and its associated restrictions.
Note: The port map mode cannot be changed if a service exists on the port that would conflict with
the new mode. Likewise, a service cannot be provisioned on any of the ports in the port group if the
port group is configured for a mapping mode that doesn’t support the service.
Note: Port group mapping mode settings are not applicable since AMP mapping is not currently
supported. Only BMP and GMP mappings are supported.
Note: 2 Port D-SNCP is not supported for ODU multiplexing services. So although the
DTN-X supports non-bookended services for 1GbE, OC-48, and STM-16 on the
TIM-16-2.5GM, these non-bookended services require ODU multiplexing and so do not
support 2 Port D-SNCP.
■ For 1GbE services on the TIM-16-2.5GM, the encapsulated client disable action is always Send LF and
cannot be disabled (see Encapsulated Client Disable Action on Ingress (DTN-X) on page 3-46).
■ For OC-48 and STM-16 services on the TIM-16-2.5GM, the encapsulated client disable action is
always Generic AIS and cannot be disabled.
Via the PXM, the DTN-X supports statistical multiplexing, in which services are mapped to flows. Without
statistical multiplexing each port is dedicated to a single service, so for ten services there will be ten
circuits of 10Gbps each over a 100Gbps circuit. With statistical multiplexing on the PXM, there can
instead be multiple Ethernet services (e.g., across the 16 ports of the PXM-16-10GE) over multiple
ODUflexi connections, each of which can range between 1.25Gbps and 100Gbps, as the services
demand.
Note: The maximum switching capacity of the PXM is 200Gbps (bidirectional). In order to avoid
oversubscribing the device capacity, the user can control the amount of traffic admitted across ports
using the Max Switching Capacity Factor parameter on the PXM equipment (in TL1, this is the
MAXSWCAPFAC parameter in the ENT/ED-EQPT commands). This parameter supports values from
0.5 to 1 and indicates the percentage of 200Gbps allowed on the PXM: 0.5 means that 100Gbps
switch capacity is used for admission control of the traffic flows, 0.96 means that 192Gbps switching
capacity is used, etc. The default value is 1, meaning that 200Gbps switching capacity is used for
admission control of the traffic flows.
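The admission-control arithmetic in the note can be sketched as follows (an illustration of the MAXSWCAPFAC behavior described above; the flow-tracking helper is hypothetical):

```python
# Sketch: PXM admission control with the Max Switching Capacity
# Factor (MAXSWCAPFAC), a value from 0.5 to 1 applied to the PXM's
# 200Gbps bidirectional switching capacity.
PXM_CAPACITY_GBPS = 200

def admission_limit(max_sw_cap_factor=1.0):
    if not 0.5 <= max_sw_cap_factor <= 1.0:
        raise ValueError("MAXSWCAPFAC must be between 0.5 and 1")
    return PXM_CAPACITY_GBPS * max_sw_cap_factor

def admit(existing_flows_gbps, new_flow_gbps, factor=1.0):
    """Admit the new flow only if total demand stays within the limit."""
    return sum(existing_flows_gbps) + new_flow_gbps <= admission_limit(factor)

print(admission_limit(0.96))             # 192.0, as in the note
print(admit([100, 50], 60, factor=1.0))  # False: 210 exceeds 200
print(admit([100, 50], 40, factor=1.0))  # True
```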
■ Restoration and inclusion/exclusion lists are supported for PXM services provisioned using SNCs.
■ 1 Port D-SNCP is supported for PXM services (see 1 Port D-SNCP on page 4-126). However for
PXM services the reliable TP is the ODUflexi TP, instead of being the tributary PTP as it is for TIM
services. (This means that an empty 1 Port D-SNCP is not supported in the case of PXM services.)
Hairpin cross-connects are not supported for 1 Port D-SNCP.
The following sections describe the features and provisioning of packet switching services using the PXM:
■ Data Flow and Facilities for Packet Services on page 4-63
■ Ethernet Private Line (EPL), Ethernet Virtual Private Line (EVPL), and Ethernet Local Area Network
(E-LAN) Services on page 4-64
■ MPLS and LSP Elements on page 4-67
■ Traffic Management and Quality of Service on page 4-68
■ Layer 2 Control Protocol (L2CP) Handling on page 4-77
■ Treatment of Packets Through the Network on page 4-78
■ Ethernet OAM on page 4-81
■ Scalability for Packet Services on the DTN-X on page 4-88
■ PXM Standard Compliance on page 4-90
PXM services are mapped to the following managed objects, listed here in order from the client service
ingress to the OTN layer of the network (see also Figure 3-3: Managed Objects and Hierarchy (DTN-X
with PXM) on page 3-7 for the managed object hierarchy of a DTN-X equipped with PXMs):
■ Ethernet Interface—The Ethernet Interface models the external interface (the physical port) for
packet services. An Ethernet Interface can support multiple service flows (ACs). The Ethernet
Interface is automatically created when a client TOM is provisioned on the PXM.
■ Attachment Circuit (AC)—The AC is the link between a customer edge (CE) device and a provider
edge (PE) device that connects a user network with the service provider network. The ends of the
AC are the Ethernet Interfaces on either end of the network. Multiple ACs can be configured on an
Ethernet Interface, but an AC can be associated to only one Ethernet Interface.
■ Virtual Service Instance (VSI)—The VSI is the Ethernet bridge function entity of a service instance
on a PE. The VSI forwards Layer 2 frames based on MAC addresses and VLAN tags. The
following VSI types are supported:
□ Virtual Private Wire Service (VPWS)—Point to point connectivity between an AC endpoint
and a PW endpoint (end-to-end service may involve two such VSIs, one on the ingress PXM
and another on another PXM in the network).
□ VLAN cross-connect—Point-to-point service between two AC endpoints on the same PXM,
such as for hairpin connections.
□ Virtual Private LAN service (VPLS)—Multipoint-to-multipoint service between numerous ACs
and PWs.
■ Pseudowire (PW)—The PW is a bidirectional virtual connection between VSIs on two PEs. A PW,
also called an emulated circuit, consists of two unidirectional MPLS virtual circuits (VCs).
■ Multi-Protocol Label Switching (MPLS) Tunnel—The MPLS tunnel defines the endpoints of LSPs
and enables packet switching on PXMs at intermediate nodes.
■ Label Switched Path (LSP)—LSPs are unidirectional paths that are co-routed in pairs between the
network interfaces of nodes across a network.
■ Network Interface—The Network Interface is the Ethernet interface on the network side that maps
the service to the ODU container for transport over the network. The Network Interface is the
demarcation point between the packet layer and the OTN layer, representing the aggregate higher-rate
interface into which multiple Ethernet PWs are multiplexed/de-multiplexed before being handed to/from
the OTN layer interfaces for GFP encapsulation/de-encapsulation and then further ODU
encapsulation/de-encapsulation.
Ethernet Private Line (EPL), Ethernet Virtual Private Line (EVPL), and
Ethernet Local Area Network (E-LAN) Services
Service can be provisioned as an Ethernet private line, an Ethernet virtual private line, or an Ethernet LAN service:
■ Ethernet private line (EPL)—Each Ethernet Interface (port) is mapped to a single, dedicated AC, as
in Figure 4-41: Ethernet Private Line (EPL) Services on page 4-65.
■ Ethernet virtual private line (EVPL)—Virtual private line service in which multiple ACs are mapped
to an Ethernet Interface (port), as in Figure 4-42: Ethernet Virtual Private Line (EVPL) Services on
page 4-65.
■ Port-based Ethernet LAN (EP-LAN)—All to one bundling, where all service frames are associated
to one EVC at the UNI. In a port-based (or private) service, all UNIs are configured for all-to-one
bundling, and all service frames are mapped to the EVC, regardless of CE-VLAN ID. EP-LAN
allows any UNI to forward Ethernet frames to any other UNI. The key advantage of a port-based
service is that the subscriber and the service provider do not have to coordinate VLAN IDs.
■ VLAN-based Ethernet LAN (EVP-LAN)—Service multiplexing, where multiple EVCs are associated
to a UNI. In a VLAN-based (or virtual private) service, CE-VLAN IDs are explicitly mapped to the
EVC at each UNI. The key advantage of a VLAN-based service is that VLAN-based services can
share UNIs and IDs.
The PXM supports E-LAN for packet services. E-LAN service is realized via Virtual Private LAN service
(VPLS) and MPLS: VPLS enables geographically separated LAN segments to be interconnected as a
single bridged domain over an MPLS network. The full functions of the traditional LAN such as MAC
address learning, aging, and switching are emulated across all the remotely connected LAN segments
that are part of a single bridged domain. VPLS delivers an Ethernet service that can span one or more
metro areas and that provides connectivity between multiple sites as if these sites were attached to the
same Ethernet LAN.
Multipoint connections are supported via VSIs associated with multiple ACs and PWs. The VSI connects
to CE devices via ACs and to other VSIs via point-to-point pseudowires (PWs). A set of VSIs (with one
VSI per PE) interconnected via PWs defines a VPLS instance.
The figure above shows an example network of three nodes using E-LAN services. Note that the nodes
PE 1 and PE 3 are not physically connected, but via use of MPLS, a service from PE 1 can be routed
through PE 2 to PE 3. From the end customer perspective, services can still be routed from PE 1 to PE 3.
MAC Learning
MAC learning attributes are supported on AC and PW entities, and for VSIs configured as VLSR. The VSI
maintains a Layer 2 forwarding database (FDB) to forward customer frames to the appropriate
destinations based on destination MAC addresses. The VSI learns MAC source addresses based on
frames received on ACs/PWs and dynamically updates the associated FDB.
When a VSI receives a unicast frame from an AC with a known unicast destination MAC, i.e., an entry
exists in the forwarding database (FDB) for the destination MAC, it forwards the frame over exactly one
point-to-point PW or one AC associated with the destination MAC address. In contrast, when a VSI
receives a broadcast frame, a multicast frame, or a unicast frame with an unknown destination MAC from
an AC, it forwards the frame to all ACs and all PWs except the AC on which the frame was received.
Similarly, when a VSI receives a broadcast frame, a multicast frame, or a unicast frame with an
unknown destination MAC from a PW, it forwards the frame on all ACs only. To prevent loops in a full
mesh VPLS, a VSI does not forward traffic from one PW to another in the same VPLS (due to
enforcement of the split-horizon rule). The set of PWs that are not allowed to forward traffic to each other
is said to form a split-horizon group. Both the AC and PW support a split horizon group ID in order to
prevent loops. (Note that split horizon groups for ACs must be manually configured by the user.)
A split horizon group is a collection of bridge ports. Traffic cannot flow between members of a split
horizon group. The restriction applies to all types of traffic, including broadcast, multicast, unknown
unicast, and known unicast. If a packet is received on a bridge port that is a member of a split horizon
group, that packet will not be sent out on any other port in the same split horizon group.
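The forwarding rules above can be sketched as a small decision function (a simplified illustration only; it models known-unicast lookup, AC/PW flooding, and the PW-to-PW split-horizon rule, and the names are hypothetical):

```python
# Sketch of VPLS forwarding: known unicast goes to exactly one
# port; unknown/broadcast/multicast from an AC floods to all other
# ACs and all PWs; from a PW it floods to ACs only (split horizon:
# never PW -> PW within the same VPLS).

def flood_targets(ingress, acs, pws, fdb, dst_mac):
    """Return the set of egress ports for a frame.
    ingress is the arrival port; fdb maps learned MACs to a port."""
    if dst_mac in fdb:                  # known unicast: exactly one port
        return {fdb[dst_mac]}
    if ingress in acs:                  # from an AC: all ACs/PWs but ingress
        return (acs | pws) - {ingress}
    return set(acs)                     # from a PW: ACs only (split horizon)

acs, pws = {"ac1", "ac2"}, {"pw1", "pw2"}
print(sorted(flood_targets("ac1", acs, pws, {}, "aa:bb")))  # ['ac2', 'pw1', 'pw2']
print(sorted(flood_targets("pw1", acs, pws, {}, "aa:bb")))  # ['ac1', 'ac2']
```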
The PW and AC also support a MAC flap action attribute for situations in which a source MAC address
repeatedly appears on different interfaces of a VSI (known as a MAC flap). PWs and ACs support a Flap
Action Clear parameter in order to clear a MAC flap.
For AC and PW entities, MAC learning is always enabled. For VLSR VSIs, MAC learning can be enabled
or disabled by the user. The VSI also supports the MAC limit value, and the MAC limit action once the
value is reached. The MAC limit action can be configured to one of the following:
■ Do Not Learn Do Not Flood—The system will not learn the new source MAC address and will not
flood unknown unicast frames with unknown destination MAC addresses. If the incoming frame’s
destination MAC is known, then the system will forward the frame following the normal forwarding
behavior.
■ Do Not Learn Flood—The system will not learn the new source MAC address and will flood all
unknown unicast frames with unknown destination MAC addresses. If the incoming frame’s
destination MAC is known, then the system will forward the frame following the normal forwarding
behavior.
■ Do Not Learn Do Not Forward—The system will not learn the new source MAC address.
Furthermore, the system will stop forwarding for the VSI: the system will drop all frames including
the ones with known destination MAC addresses for this particular VSI (likewise for a particular AC
limit).
If the VSI's MAC limit notification is enabled, the VSI will raise an alarm when the MAC limit is reached.
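The three MAC limit actions above can be summarized in a short sketch (illustrative Python; the function and the action names are hypothetical, not an Infinera API):

```python
def handle_frame(fdb, mac_limit, action, src_mac, dst_mac):
    """Return the disposition of a frame given the VSI's MAC limit action.

    fdb is the set of learned MAC addresses; action is one of the
    hypothetical strings 'do-not-learn-flood', 'do-not-learn-do-not-flood',
    'do-not-learn-do-not-forward'."""
    limit_reached = len(fdb) >= mac_limit
    if limit_reached and action == "do-not-learn-do-not-forward":
        return "drop"            # forwarding stops entirely for this VSI
    if src_mac not in fdb and not limit_reached:
        fdb.add(src_mac)         # normal MAC learning below the limit
    if dst_mac in fdb:
        return "forward"         # known destination: normal forwarding
    if limit_reached and action == "do-not-learn-do-not-flood":
        return "drop"            # unknown unicast is not flooded
    return "flood"               # normal unknown-unicast flooding
```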
The following sections describe the Traffic Management/Quality of Service elements supported for packet
services in the Infinera network:
■ Class of Service (CoS) Mapping on page 4-69
■ Metering and Bandwidth Profiles on page 4-70
■ Queuing and Congestion Management on page 4-72
■ Scheduling on page 4-74
■ Shaping on page 4-75
■ Connection Admission Control (CAC) on page 4-76
The CoS feature works by examining the traffic as it ingresses the network. Traffic is classified into
defined service groups to provide special treatment across the network. As traffic leaves the network at
the egress, the user can configure the system to tag the packets with a different CoS identifier if needed.
From a functionality perspective, the CoS components are:
■ Classification—Packet classification is the process of examining an incoming packet. In PXM,
classifiers associate the incoming packet with a traffic class and a drop precedence and, based on
the associated traffic class, assign the packet to output queues. The PXM can determine the traffic
class based on the CoS identifier in the packet header or based on the AC classification identifiers
(i.e., service).
■ Traffic Class—The traffic classes affect the forwarding, scheduling, and marking policies applied to
the packet as it transits the system. The PXM supports five traffic classes: 0, 2, 4, 6, and 7, with 0
being the lowest priority and 7 being the highest network priority. The traffic class can be defined at
the AC level or at the Ethernet Interface level (if the traffic class is defined at both the AC and the
Ethernet Interface levels, the AC-level traffic class is used for the service).
■ Drop Precedence/Color—Drop precedence (or color) controls the priority of dropping a packet. Loss
priority may affect how the packets are scheduled without affecting the packet’s relative ordering in
the traffic stream. The PXM supports two drop precedence levels: high and low, with high packets
being more likely to be dropped.
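As a sketch of the classification step, the following illustrative Python maps a packet's CoS identifier (PCP) to a traffic class and drop precedence. The mapping table is a hypothetical example; the actual mapping is provisioned on the PXM:

```python
# Hypothetical PCP -> (traffic class, drop precedence) mapping.
# TC-6 and TC-7 carry a single drop precedence; TC-0/2/4 carry two.
PCP_MAP = {
    0: (0, "low"), 1: (0, "high"),
    2: (2, "low"), 3: (2, "high"),
    4: (4, "low"), 5: (4, "high"),
    6: (6, "low"), 7: (7, "low"),
}

def classify(pcp, ac_traffic_class=None):
    """Classify a packet into (traffic class, drop precedence).

    If a traffic class is defined at the AC level, it overrides the class
    derived from the Ethernet Interface level, as described above."""
    tc, dp = PCP_MAP[pcp]
    if ac_traffic_class is not None:
        tc = ac_traffic_class    # AC-level traffic class takes precedence
    return tc, dp
```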
Note: For srTCM metering, if the EBS value is zero, packets of all sizes may be recognized as
yellow.
■ Two rate three color meter (trTCM)—For trTCM, the following traffic parameters are configured:
□ Committed Information Rate (CIR)
□ Committed Burst Size (CBS)
□ Excess Burst Size (EBS)
□ Excess Information Rate (EIR)
For packet services on the PXM, the user creates bandwidth profiles to specify the set of parameters
(e.g., CIR, CBS, EIR, EBS, etc.) for the metering algorithm. The bandwidth profile is used to characterize
service frames for the purpose of metering or rate enforcement (i.e., policing):
■ An Ingress Bandwidth Profile is used to regulate the amount of ingress traffic at the ingress PXM
Ethernet Interface.
■ An Egress Bandwidth Profile is used to regulate the amount of egress traffic at the egress PXM
Ethernet Interface.
For MEF-compliant metering, the PXM supports a coupling flag, which can be enabled or disabled.
In addition to the above, meters can operate in color aware or color blind mode:
■ Color Aware—A color aware meter is used when each service frame already has a level of
compliance (i.e., a color) associated with it and that color is taken into account in determining the
level of compliance by the meter. The color on the packet will be used to direct the packets to the
appropriate bucket. Excess green packets will either become yellow or red. Excess yellow packets
will become red.
■ Color Blind—A meter is said to be in color blind mode when the color (if any) already associated
with each service frame (green or yellow, from the DEI field in the VLAN tag) is ignored by the
meter. Metering uses its own mechanism to determine the packet color, and the metering result
can overwrite the color on the packet on egress.
The following metering actions are supported:
■ None—No action is taken with respect to setting of drop precedence even if the packets are Yellow
Compliant.
■ Remark drop precedence—The drop precedence of packets that fall in the Yellow category will
be set. The drop action for red and yellow frames is based on global configuration (i.e., at the
PXM level). Only red packets are dropped; dropping of yellow packets can be achieved by setting EIR=0.
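The trTCM parameters above drive a two-token-bucket algorithm. The following is a minimal color-blind sketch in the style of MEF/RFC 2698 two-rate metering (illustrative Python, not Infinera code):

```python
class TrTCM:
    """Two-rate three-color meter (color-blind sketch).

    A committed bucket (CIR/CBS) marks packets green; an excess bucket
    (EIR/EBS) marks overflow yellow; everything else is red."""
    def __init__(self, cir, cbs, eir, ebs):
        self.cir, self.cbs = cir, cbs    # committed rate (B/s), burst (B)
        self.eir, self.ebs = eir, ebs    # excess rate (B/s), burst (B)
        self.tc, self.te = cbs, ebs      # both token buckets start full
        self.last = 0.0

    def color(self, size, now):
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + self.cir * dt)  # refill committed
        self.te = min(self.ebs, self.te + self.eir * dt)  # refill excess
        if size <= self.tc:
            self.tc -= size
            return "green"
        if size <= self.te:
            self.te -= size
            return "yellow"
        return "red"
```

Note that with EIR=0 and EBS=0 the excess bucket never holds tokens, so no packet can be marked yellow; this matches the remark above that dropping yellow traffic can be achieved by setting EIR=0.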
The following tables provide the meter rate and burst size granularity supported by the PXM.
A queue stores packets and controls the packet departure and ordering of traffic streams. A first-in-first-
out (FIFO) queue enqueues a packet to the tail of the queue and dequeues a packet from the head of the
queue. Packets are dequeued in the order in which they were enqueued. The output of a queue is
connected to an input of the scheduler. A scheduler determines the departure time for each packet that
arrives at one of its inputs, based on a service discipline (see Scheduling on page 4-74).
The PXM supports two methods of queuing (see figures below):
■ Class-based queuing (CBQ) reserves a queue for each traffic class; traffic belonging to a traffic
class is directed to the queue reserved for that traffic class. CBQ provides a simple and scalable
QoS architecture (e.g., the number of queues remains constant as the number of flows increases).
However, CBQ can only provide a coarse QoS and is incapable of separating competing flows (one
misbehaving flow can degrade the QoS of well-behaved flows).
■ Enhanced class-based queuing (ECBQ) reserves a separate queue for each flow, and traffic
belonging to a flow is directed to the queue reserved for that flow. ECBQ enables a granular QoS
and also allows separation of competing flows. However, ECBQ is less scalable than CBQ (the
number of queues grows linearly with the number of flows).
In addition, the PXM supports single and dual drop precedence as follows:
■ TC-7 and TC-6 traffic classes support a single drop precedence.
■ TC-4, TC-2, and TC-0 traffic classes support dual drop precedence.
In addition to the above queuing techniques, queue management is used to anticipate congestion before
it occurs and attempt to avoid congestion. The PXM supports two queue management techniques:
■ Tail Drop (TD), which drops packets when a queue is full until congestion is eliminated. The TD
treats all traffic flows equally and does not differentiate between classes of service.
■ Weighted random early detection (WRED), which maintains an average queue length for each
queue configured for WRED. The PXM supports minimum and maximum thresholds for green and
yellow traffic.
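The WRED drop decision can be sketched as follows (illustrative Python; the per-color minimum/maximum threshold parameters are assumptions based on the description above, with separate thresholds typically configured so yellow packets drop earlier than green):

```python
import random

def wred_drop(avg_qlen, min_th, max_th, max_p):
    """WRED drop decision for one color, driven by the average queue length.

    Below min_th nothing is dropped; above max_th everything is dropped;
    in between, the drop probability rises linearly to max_p."""
    if avg_qlen < min_th:
        return False                 # below minimum threshold: never drop
    if avg_qlen >= max_th:
        return True                  # above maximum threshold: always drop
    p = max_p * (avg_qlen - min_th) / (max_th - min_th)
    return random.random() < p       # probabilistic early drop
```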
Scheduling
The scheduling function determines the departure time of each packet that arrives at one of its inputs. A
scheduler has one or more inputs and exactly one output. Each input is connected to an upstream
element (such as a queue or another scheduler output) and may be configured with one or more
parameters that influence the scheduling behavior of packets received at that input.
For the PXM, the scheduling parameters are configured via the Bandwidth Resource Profile.
Scheduling disciplines may be classified into the following broad categories:
■ Strict priority (SP) scheduling discipline assigns a priority to each scheduler input with respect to all
other inputs feeding into the same scheduler. A SP scheduler serves a higher priority input before a
lower priority input. An undesirable side-effect of SP scheduling is that a misbehaving higher
priority input can starve a lower priority input. To prevent starvation of lower priority inputs, it is
recommended to meter the traffic at network edge entry points. As an additional safeguard,
higher priority scheduler inputs may also be rate limited (e.g., shaped to a rate).
■ Fair queuing (FQ) is a scheduling discipline that allows multiple scheduler inputs to fairly share the
link capacity. A generalization of FQ is called weighted fair queuing (WFQ). Unlike an FQ
scheduler, a WFQ scheduler allows different inputs to have different bandwidth shares.
As described above, the PXM supports 5 traffic classes (TC-0, TC-2, TC-4, TC-6, and TC-7, with
TC-0 being the lowest priority and TC-7 being the highest network priority). The PXM supports the
5P3D traffic class model as specified in IEEE 802.1Q:
■ TC-7 and TC-6 are scheduled using SP and provide low latency/low jitter performance.
■ TC-4 and TC-2 are scheduled using WFQ and provide low drop probability performance.
■ TC-0 provides best-effort performance.
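A highly simplified sketch of one 5P3D scheduling decision is shown below (illustrative Python; picking the backlogged class with the largest weight stands in for true WFQ virtual-time accounting and is not how the hardware scheduler is implemented):

```python
def pick_next(queues, weights):
    """One scheduling decision over the 5P3D traffic class model.

    queues: dict mapping traffic class (0, 2, 4, 6, 7) -> list of packets.
    TC-7 and TC-6 are served strict priority; TC-4/TC-2/TC-0 share the
    remaining capacity by weight (simplified stand-in for WFQ)."""
    for tc in (7, 6):                       # strict priority classes first
        if queues.get(tc):
            return queues[tc].pop(0)
    backlogged = [tc for tc in (4, 2, 0) if queues.get(tc)]
    if not backlogged:
        return None                         # nothing to schedule
    tc = max(backlogged, key=lambda t: weights.get(t, 1))
    return queues[tc].pop(0)
```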
Shaping
Traffic shaping is the process of delaying packets within a traffic stream to achieve conformance to some
predefined temporal profile (shaping the egress traffic to smooth out possible bursts). For example, a
minimum service rate for a scheduler input may be specified and realized with a token bucket
rate shaper configured with CIR>0 and CBS=0. The CIR rate limiter on a scheduler input ensures that
packets on that input do not exceed the configured minimum service rate. Similarly, a maximum service
rate limit for a scheduler input may be specified and realized through a token bucket rate shaper
configured with EIR>0 and EBS=0. The EIR rate limiter ensures that packets on that scheduler input do
not exceed the configured maximum service rate.
For the PXM, the shaping parameters are configured via the Bandwidth Resource Profile.
Note that shaping applies only for services configured for enhanced class-based queuing (ECBQ); class-
based queuing (CBQ) does not use shaping (see Queuing and Congestion Management on page 4-72).
Table 4-6: PXM Flow Shaper Rate Granularity on page 4-76 and Table 4-7: PXM Flow Shaper Burst
Size Granularity on page 4-76 provide the flow shaper rate and burst size granularity supported by the
PXM.
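A token bucket rate shaper of the kind described above can be sketched as follows (illustrative Python; the class and method names are hypothetical). Unlike a policer, the shaper delays non-conforming packets rather than dropping them:

```python
class TokenBucketShaper:
    """Token bucket rate shaper sketch (rate in bytes/s, burst in bytes).

    With burst=0 a packet may depart only once enough tokens have
    accumulated at the configured rate, which smooths out bursts."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens, self.last = 0.0, 0.0

    def earliest_departure(self, size, now):
        """Return the time at which a packet of `size` bytes may depart."""
        # Refill tokens; cap at burst + size so a packet can always
        # eventually depart even with burst=0.
        self.tokens = min(self.burst + size,
                          self.tokens + self.rate * (now - self.last))
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return now                        # conforms: depart immediately
        wait = (size - self.tokens) / self.rate
        self.tokens = 0.0
        self.last = now + wait                # tokens consumed at departure
        return now + wait                     # delayed to conform
```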
□ The sum of the mapped CIR and mapped EIR values must be less than the interface speed.
■ If the above CAC checks are successful, the service is allowed.
Note: For ECBQ, the value specified for CAC is also used for the shapers (this does not apply
to CBQ, because CBQ does not use shaping). See Queuing and Congestion Management on page
4-72 and Shaping on page 4-75.
CAC checks are performed at each Network Interface and Ethernet Interface, as shown in Figure 4-49:
Connection Admission Control (CAC) Checks in the Network on page 4-77.
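The CAC bandwidth check can be sketched as a simple admission function (illustrative Python; the function name and parameters are hypothetical, mirroring the rule that the aggregate mapped CIR and EIR must stay below the interface speed):

```python
def cac_admit(services, new_cir, new_eir, interface_speed):
    """Connection Admission Control sketch.

    services: list of (cir, eir) tuples for services already admitted on
    the interface. The new service is admitted only if the aggregate of
    mapped CIR + mapped EIR stays below the interface speed."""
    total = sum(cir + eir for cir, eir in services) + new_cir + new_eir
    return total < interface_speed
```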
Note: Any of the L2CP profiles can be applied to an Ethernet interface, regardless of whether a
service is EPL or EVPL (e.g., an EVPL L2CP profile can be applied on an Ethernet interface that
carries an EPL service). If no L2CP profile is specified for an Ethernet interface, the EPL L2CP Profile
2 is applied (discard all).
TPID(s) on page 4-79. This behavior is performed on a port-by-port basis (per Ethernet Interface,
depending on the port’s interface type).
Table 4-9 Treatment of Incoming Packets Based on Ethernet Interface Type and TPID(s)

Port Type: 802.1Q with outer TPID set to the default value (0x8100)
  Incoming Packet Tag(s) (outer, inner)           Resulting Packet Treatment (Identified Tag Format)
  Untagged                                        Untag
  Single tagged packet of 0x8100 TPID             C-tag
  Single tagged packet of 0x9200 TPID             Untag
  Double tagged packet (0x9200, 0x8100)           Untag
  Double tagged packet (0x8100, 0x88a8)           C-tag
  Double tagged packet (0x8100, 0x9200)           C-tag
  Double tagged packet (0x8100, 0x8100)           C-tag
  Single priority tagged packet of 0x8100 TPID    Cprio-tag

Port Type: 802.1Q with outer TPID set to a custom value (0x9200 is used in this table as an example custom value)
  Untagged                                        Untag
  Single tagged packet of 0x9200 TPID             C-tag
  Single tagged packet of 0x8100 TPID             Untag
  Double tagged packet (0x8100, 0x9200)           Untag
  Double tagged packet (0x9200, 0x88a8)           C-tag
  Double tagged packet (0x9200, 0x8100)           C-tag
  Double tagged packet (0x9200, 0x9200)           C-tag
  Single priority tagged packet of 0x9200 TPID    Cprio-tag

Port Type: 802.1ad with outer/inner TPIDs set to the default values (0x88a8, 0x8100)
  Untagged                                        Untag
  Single tagged packet of 0x8100 TPID             C-tag
  Single tagged packet of 0x88a8 TPID             S-tag
  Double tagged packet (0x88a8, 0x8100)           S-C tag
  Single tagged packet of 0x9200 TPID             Untag
  Double tagged packet (0x8100, 0x88a8)           C-tag
  Double tagged packet (0x8100, 0x9200)           C-tag
  Double tagged packet (0x8100, 0x8100)           C-tag
  Single priority tagged packet of 0x8100 TPID    Cprio-tag
  Single priority tagged packet of 0x88a8 TPID    Sprio-tag
  Double tagged packet (0x88a8, 0x88a8)           S-tag
  Double tagged packet (0x88a8, 0x9200)           S-tag

Port Type: 802.1ad with outer/inner TPIDs set to custom values (0x9200, 0x9300 are used in this table as example custom values)
  Untagged                                        Untag
  Single tagged packet of 0x9300 TPID             C-tag
  Single tagged packet of 0x9200 TPID             S-tag
  Double tagged packet (0x9200, 0x9300)           S-C tag
  Single tagged packet of 0x88a8 TPID             Untag
  Double tagged packet (0x88a8, 0x9300)           Untag
  Double tagged packet (0x88a8, 0x8100)           Untag
  Double tagged packet (0x9300, 0x9200)           C-tag
  Double tagged packet (0x9300, 0x88a8)           C-tag
  Double tagged packet (0x9300, 0x9300)           C-tag
  Single priority tagged packet of 0x9300 TPID    Cprio-tag
  Single priority tagged packet of 0x9200 TPID    Sprio-tag
  Double tagged packet (0x9200, 0x9200)           S-tag
  Double tagged packet (0x9200, 0x88a8)           S-tag

Port Type: 802.1ad with outer/inner TPIDs set to the same value (0x88a8, 0x88a8 is used in this table as an example)
  Untagged                                        Untag
  Single tagged packet of 0x8100 TPID             Untag
  Single tagged packet of 0x88a8 TPID             S-tag
  Double tagged packet (0x88a8, 0x8100)           S-tag
  Single tagged packet of 0x9200 TPID             Untag
  Double tagged packet (0x8100, 0x88a8)           Untag
  Double tagged packet (0x8100, 0x9200)           Untag
  Double tagged packet (0x8100, 0x8100)           Untag
  Single priority tagged packet of 0x8100 TPID    Untag
  Single priority tagged packet of 0x88a8 TPID    Sprio-tag
  Double tagged packet (0x88a8, 0x88a8)           S-C tag
  Double tagged packet (0x88a8, 0x9200)           S-tag
At packet ingress, the PXM determines the packet’s inner and outer TPIDs and, based on the identified
tag format (Table 4-9: Treatment of Incoming Packets Based on Ethernet Interface Type and TPID(s) on
page 4-79), the Ethernet Interface supports the following ingress actions:
■ For Ethernet interfaces configured with interface type 802.1Q: none, push, pop, swap
■ For Ethernet interfaces configured with interface type 802.1ad: none, pop, swap
Once the Ethernet Interface has identified the packet and performed the configured ingress action, the
packet flow continues to the AC. As shown in Figure 4-50: Ingress VLAN Edit and Egress VLAN Edit on
the PXM on page 4-81, it is at the AC ingress that the PXM performs the ingress VLAN edit (IVE) for
incoming packets.
Figure 4-50 Ingress VLAN Edit and Egress VLAN Edit on the PXM
As shown in Figure 4-50: Ingress VLAN Edit and Egress VLAN Edit on the PXM on page 4-81, the PXM
performs the egress VLAN edit (EVE) at the egress of the AC.
The Ethernet Interface supports the following egress actions (supported for both 802.1Q and 802.1ad
interface types): none, push, pop, swap.
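The none/push/pop/swap actions operate on the packet's outermost tag, as in this sketch (illustrative Python; the tag-stack representation is an assumption for illustration):

```python
def vlan_edit(tags, action, new_tag=None):
    """Apply a VLAN edit action to a packet's tag stack.

    tags is the outermost-first list of VLAN tags. The supported actions
    mirror the set listed above: none, push, pop, swap."""
    if action == "none":
        return tags
    if action == "push":
        return [new_tag] + tags          # add a new outer tag
    if action == "pop":
        return tags[1:]                  # remove the outer tag
    if action == "swap":
        return [new_tag] + tags[1:]      # replace the outer tag
    raise ValueError("unknown VLAN edit action: %s" % action)
```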
The PXM supports TPID editing: at the egress, the PXM supports overwriting of the TPID on outgoing
frames. The PXM uses the same TPID attributes to identify incoming frames and to overwrite the TPID
on outgoing frames.
Ethernet OAM
DTN-X introduces Ethernet Operations, Administration and Maintenance on Layer 2 Ethernet Services
(EVCs). Ethernet OAM supports Ethernet connectivity fault management functionalities such as fault
detection and fault notification as defined in IEEE 802.1Q and ITU Y.1731.
Ethernet OAM is supported at the following levels:
■ Service OAM monitors entire EVC service inside the service provider network
■ Link Monitoring capabilities for connectivity links between customer edge and provider edge.
The following topics describe Ethernet OAM architecture, its components and the hierarchy of Ethernet
OAM managed objects:
■ Ethernet OAM Architecture on page 4-82
■ Ethernet OAM Managed Object Hierarchy on page 4-87
Maintenance Domain
A Maintenance Domain is defined as a sub-network over which an EVC is being monitored and is defined
by operational or contractual boundaries.
Each maintenance domain is assigned a unique maintenance level (in the range of 0 to 7). Maintenance
domain names along with their levels are used to define the hierarchy that exists among domains. Higher
levels (such as 7, 6, 5) provide a broader OAM reach compared to lower levels (such as 2, 1, 0). Typically,
customers have the largest maintenance domains and use a higher level such as 7. Operators have
the smallest domains, with lower levels such as 0 or 1. Service provider domains fall in between
in size.
Maintenance domains may nest within one another, but must not intersect. For nested domains, the
outer domain must have a higher maintenance level than the domain(s) nested within it.
A maximum of three maintenance domain levels are supported per service on a given PXM. Following
are the guidelines to determine the levels:
■ MD Level 6: MEF Subscriber Level equivalence.
■ MD Level 4: MEF EVC Level equivalence.
■ MD Level 2: MEF Operator Level equivalence.
Infinera PXM supports setting Shared and Independent maintenance domain levels. In the case of
shared levels, the maintenance domain roles and their corresponding entities must be agreed upon by
the administrators.
Maintenance Association
A maintenance association represents a part of the end-to-end Ethernet service within a Maintenance
Domain. It is defined by a set of Maintenance End Points (MEPs) at the edge of the domain.
Each maintenance association entity is identified by MA name (that is unique within a maintenance
domain) and corresponds to an Ethernet Service.
As part of Connectivity Check Messaging, an MEP attached to an MA sends periodic CCM messages to
the remote MEP(s). The CCM intervals (3.33ms, 10ms, 100ms, 1 second, 10 seconds; default value
1 second) are configured on a Maintenance Association.
If Continuity Check is enabled on an MEP and the interval for continuity check is defined, the MEP sends
periodic Continuity Check Messages (CCM) to the remote MEP(s) of the maintenance association to which
this MEP is attached. In addition, the MEP also injects maintenance signals such as ETH-AIS and ETH-RDI.
MEPs are directional and are classified as Up MEP or Down MEP.
■ Up MEP - This MEP transmits CCM messages toward the Switch Fabric/Bridge
■ Down MEP - This MEP transmits CCM messages away from the Switch Fabric/Bridge
All MEPs under a maintenance association are required to have a unique MEP ID. All MEPs under a
maintenance association should be either Up MEPs or Down MEPs.
Continuity Check Messaging
Once an MEP entity is created and associated to an interface, CCM PDUs are sent at the configured
CCM interval (as defined in the maintenance association). The MEP also expects CCM messages from
remote MEP(s) at the same interval as defined in the maintenance association.
CCM PDUs that are generated include the Port Status and Interface Status as Type-Length-Value (TLV)
elements so that the recipient remote MEPs can act upon them as needed. The ability to include the Port/Interface
status TLVs is available on both Up and Down MEPs to allow for unidirectional faults to be propagated to
the remote end.
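Loss of continuity toward a remote MEP is conventionally declared when no CCM arrives within 3.5 CCM intervals (per IEEE 802.1Q CFM); a minimal sketch, assuming timestamps in seconds:

```python
def rmep_defect(last_ccm_time, now, interval):
    """Loss-of-continuity check sketch (IEEE 802.1Q CFM convention).

    A remote MEP is declared failed when no CCM has been received
    within 3.5 times the configured CCM interval."""
    return (now - last_ccm_time) > 3.5 * interval
```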
The following CCM processing rules are followed by MEPs in accordance with the OAM architecture:
■ An MEP at a particular maintenance domain level transparently passes Service OAM (SOAM)
traffic at a higher maintenance domain level
■ An MEP at a particular maintenance domain level terminates SOAM traffic at its own maintenance
domain level
■ An MEP at a particular maintenance domain level discards SOAM traffic at a lower maintenance
domain level.
This results in a nesting requirement whereby a maintenance association at a given maintenance domain
level cannot exceed the boundary of a maintenance association at a higher maintenance domain level.
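The three processing rules can be condensed into one comparison (illustrative Python):

```python
def mep_action(mep_level, pdu_level):
    """CCM processing rule sketch for an MEP at maintenance domain level
    mep_level receiving a SOAM PDU tagged with pdu_level (0..7)."""
    if pdu_level > mep_level:
        return "pass"        # higher-level OAM passes through transparently
    if pdu_level == mep_level:
        return "terminate"   # own-level OAM is terminated and processed
    return "discard"         # lower-level OAM is discarded
```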
The following Ethernet Continuity based alarms with respect to CCM handling are supported:
Note: Down MEPs do not support ETH-AIS generation. However, they support ETH-AIS monitoring
and report any alarms when such a condition is detected.
Note: The ETH-AIS PDU is generated from the highest maintenance domain level present. Internally,
the AIS indication is propagated from lower to higher maintenance levels. Once the highest
maintenance level is identified, the ETH-AIS PDU is sent from that level.
Ethernet RDI
When any outstanding Ethernet Continuity based alarms are present on the local MEP and the
corresponding defect’s priority is greater than or equal to the Lowest Fault Defect Priority configured on
the MEP, an Ethernet Remote Defect Indication (RDI) bit is set on the transmitted CCM PDU.
Ethernet Client Signal Failure (CSF) on MEP
The Ethernet CSF signal informs a peer MEP of the detection of a failure or defect in communication with
a client when the client itself does not support a means of notification to its peer, such as ETH-AIS or the
RDI function of ETH-CC.
Ethernet CSF is supported on OAM MEP for Ethernet Private Line (EPL) Ethernet Virtual Connection
(port-based E-Line service).
Note the following for Ethernet CSF:
■ Ethernet Interfaces on the PXM support tributary disable action, which will trigger transmit laser
shut down on the PXM port when ETH-CSF alarm is raised on an MEP whose parent Ethernet
Interface port is enabled for tributary disable action.
■ To enable tributary disable action on CSF, the PXM-1-100GE requires a cold reboot in order to
upgrade to the required firmware version. (This is not required for PXM-16-10GE.)
■ CSF related attributes are configurable only on Up MEPs associated with an EPL service.
■ Ethernet CSF is supported only when both ACs have an ingress match type of "Match Interface."
■ CSF messaging is applicable only on Up MEPs.
■ A port can support either Ethernet CSF transmission or Tributary Disable Action:
□ An Ethernet port cannot transmit Ethernet CSF towards the line side if the port itself is
already under Tributary Disable Action due to receiving a CSF message.
□ Conversely, an Ethernet port cannot perform its Tributary Disable Action upon receiving a
CSF message if the port is already transmitting Ethernet CSF toward the line side.
■ MD level is encoded in two places on a CSF PDU, and the MD levels from these two places must
match:
□ The destination MAC address
□ The PDU OAM header
If the port receives a CSF PDU where the MD levels do not match in these two places, the PDU is
forwarded instead of being dropped.
Remote MEP (RMEP)
RMEPs are remote MEPs for a local MEP. They are configured on other participating Attachment Circuits
(ACs) associated with an EVC. For example, if there are five end points in an EVC, then each MEP in the
five ACs should have four remote MEPs (RMEPs) configured for proper CCM transmission.
The following types of RMEPs are supported:
■ Manual - Users can manually create RMEPs from management interfaces
■ Auto-created - RMEPs are auto-created by the network element when the CCM frame is received.
■ For configurations with FMM-C-5 and FRM, an add/drop optical cross connect is between the
tributary-side super channel CTP endpoint on the FMM-C-5 and the line-side super channel CTP
endpoint on the FRM, see Figure 4-60: Add/Drop Optical Cross-connect (example with FMM-C-5
and FRM-4D) on page 4-95. See Service Provisioning with OFx-100 and FMM-C-5 on page 4-115
for additional information on service provisioning with FMM-C-5.
■ For configurations with an XT(S)-3300/XT(S)-3600, FBM and FRM, an add/drop optical cross-
connect is between the tributary-side super channel CTP endpoint on the FBM and the line-side
super channel CTP endpoint on the FRM. See Figure 4-62: Add/Drop optical cross-connect
between FBM and FRM (XT-3300/XT-3600 configuration) on page 4-96.
■ For configurations with an OFx-1200, FBM and FRM, an add/drop optical cross-connect is between
the tributary-side super channel CTP endpoint on the FBM and the line-side super channel CTP
endpoint on the FRM. See Figure 4-63: Add/Drop optical cross-connect between FBM and FRM
(OFx-1200 configuration) on page 4-96.
Figure 4-60 Add/Drop Optical Cross-connect (example with FMM-C-5 and FRM-4D)
Figure 4-62 Add/Drop optical cross-connect between FBM and FRM (XT-3300/XT-3600 configuration)
Figure 4-63 Add/Drop optical cross-connect between FBM and FRM (OFx-1200 configuration)
Note: For FRMs, SLTE mode is supported on the FRM-9D and FRM-20X only; SLTE is not supported
for FRM-4D.
■ Channel blocking optical cross-connects for cases where a certain portion of the optical spectrum is
not available for service provisioning. See Channel Blocking Optical Cross-connects on page 4-98.
■ Addition of ASE idlers to the line system. See ASE Idler Optical Cross-connects on page 4-99.
■ Dynamic WSS resizing manual optical cross-connects in SLTE configuration. See Dynamic WSS
Resizing on page 4-100.
■ Split spectrum mode for manual optical cross-connects in SLTE configuration. See Split Spectrum
Mode for Manual Optical Cross-Connects on SOFx-500.
Figure 4-65: FlexILS SLTE Manual Optical Cross-connects on page 4-98 shows an example of a
FlexILS wave spectrum that includes ASE idlers and channel blocking.
■ Channel blocking is allowed on any slots in a super channel (in the middle range of the super
channel, or at the ends of the super channel, etc.). The blocked slices can be used by any other
optical cross-connect.
■ When specifying the frequency slot list, the selected slices to be used in the channel blocking
optical cross-connect must be within the range of the selected super channel.
■ Multiple pass bands are supported in a single super channel. However, each passband must
contain a minimum of 3 contiguous slices within the super channel.
■ The blocked bands can be from 1 to 20 contiguous slices.
■ On the associated SOFM/SOFX, the channels which have been blocked via the channel blocking
optical cross-connect must be administratively locked in order to prevent alarming on those
channels.
■ Pre-emphasis is applicable to channel blocking optical cross-connects to compensate for signal
quality deviations over long distances. The FRM-9D can apply fixed attenuation for each frequency
slot (12.5GHz granularity) across the C-band spectrum, in the range of 0 to 18dB.
■ Channel blocking optical cross-connects are supported for both contiguous spectrum (CS) super
channels and for split spectrum (SS) super channels.
Note: See Split Spectrum Mode for Manual Optical Cross-Connects on SOFx-500 for information on
Split Spectrum mode.
■ The range of slices can be selected anywhere in the spectrum, but the ASE idler optical cross-
connects must contain a minimum of 3 and a maximum of 40 contiguous slices.
■ Pre-emphasis is applicable to ASE idler optical cross-connects.
■ For ASE idler optical cross-connects (either for add/drop or for express connections between two
FRM-9Ds), only one passband is supported in each ASE idler optical cross-connect. To create
multiple passbands, the user can create multiple ASE idler optical cross-connects from same FSM
tributary (for add/drop cross-connects) or between the two FRM-9Ds (for SLTE express cross-
connects).
Note: Dynamic WSS resizing is supported only for traffic originating from the following line
modules:
□ AOFx-500, SOFx-500
□ ICE 4 modules - XT(S)-3300, XT(S)-3600, AOFx-1200, SOFx-1200
The dynamic WSS resizing operation is performed by editing an existing optical manual cross-connect via
the Frequency Slot List parameter and the Possible Frequency Slot List parameter:
■ Frequency Slot List (FSL)—Defines the frequency slots to be used by an optical manual cross-
connect. The frequency slots specified in the FSL must be a subset of the frequency slots specified
in the Possible Frequency Slot List.
■ Possible Frequency Slot List (PFSL)—Defines the out-most boundary of the frequency slots that
can be used by the optical manual cross-connect. The default range of the PFSL is the entire
frequency slot list in the associated super channel.
To shrink a passband, the user can edit the optical cross-connect's FSL to specify a narrower range of
slices (alternatively, the user can delete the PFSL and corresponding FSL entries). To widen a passband,
the user can add additional frequency slot entries to the PFSL, and then edit the FSL values to include
the additional frequency slot entries (within the PFSL range only). During WSS resizing, a passband
cannot be shrunk below 3 slices; instead, it can be deleted.
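The core resize constraints can be sketched as a validation function (illustrative Python; the function name and the set-based slice representation are assumptions for illustration):

```python
def validate_resize(pfsl, new_fsl, old_fsl=None, min_slices=3, max_slices=40):
    """Sketch of the dynamic WSS resize checks.

    pfsl, new_fsl, old_fsl are sets of frequency slice numbers. The new
    FSL must be a subset of the PFSL and keep the passband within the
    3..40 slice limits; for ASE idler resizes (old_fsl given), the new FSL
    must share at least one slice with the existing FSL."""
    if not new_fsl <= pfsl:
        return False                     # FSL must stay inside the PFSL
    if not (min_slices <= len(new_fsl) <= max_slices):
        return False                     # passband size limits
    if old_fsl is not None and not (new_fsl & old_fsl):
        return False                     # ASE idler: must overlap previous FSL
    return True
```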
Note the following for dynamic WSS resizing:
■ Both super channel CTP endpoints associated with the manual optical cross-connect must be in
the locked or maintenance administrative state.
■ The passband can be changed with a granularity of 1 slice (12.5GHz):
□ For contiguous spectrum optical cross-connects, the passband must be a minimum of 3
slices and a maximum of 40 slices.
□ For split spectrum optical cross-connects, the passband must be a minimum of 3 slices and
a maximum of 40 slices (with a channel blocking super channel in between).
□ For ASE idler cross-connects, the passband must be a minimum of 3 slices and a maximum
of 40 slices.
■ The values specified in an optical cross-connect’s FSL must be a sub-set of the values in the
PFSL.
■ For data-carrying optical cross-connects, the value range of both the PFSL and the FSL must be
within the frequency slot range supported by the provisioned super channel number.
(Note that this does not apply for ASE idler optical cross-connects, which can span across super
channels.)
■ Starting in Release 20.0, for data-carrying optical cross-connects with ICE-4 line modules, both the
PFSL and the FSL support any frequency range within the 40-slice limit.
■ The FSL cannot contain any slice(s) that are already used in an existing passband. (Note that for
ASE idler cross-connects, the PFSL can contain slices that are also specified in the PFSL of
another optical ASE idler cross-connect.)
■ If the re-provisioning of the passband uses any frequency slot not included in the PFSL, then the
re-provision operation will be rejected.
■ For data-carrying optical cross-connects, any passband narrowing operation (where slices are
removed from the FSL) will impact traffic on the removed slices.
■ Multiple entries are not supported for ASE idler cross-connects: ASE idler cross-connects support
only one entry in the PFSL parameter and one entry in the FSL parameter. Starting in Release 20.0,
this is applicable to super channels with both AOFx-500/SOFx-500 and
XT(S)-3600/AOFx-1200/SOFx-1200. Also, the PFSL parameter cannot be edited; only the FSL
can be edited for ASE idler cross-connects.
■ For contiguous spectrum and split spectrum data-carrying optical cross-connects, the PFSL and
FSL parameters support multiple entries to specify ranges and groups of frequency slots (e.g.,
“-274&10&--254&10”). Note the following for multiple entries in these fields:
□ For contiguous spectrum optical cross-connects, it is not supported to have the ratio of n:m
entries (more than one entry in the PFSL parameter and a different number greater than one
in the FSL parameter). The following ratios are supported for the number of entries in the
PFSL to the number of entries in the FSL:
1:1 (one entry in the PFSL parameter to one entry in FSL parameter)
1:n (one entry in the PFSL parameter to n entries in FSL parameter)
n:n (the same number of entries in both PFSL and FSL parameters).
□ For split spectrum optical cross-connects, the PFSL and FSL parameters must have more
than one entry (to specify a range of slices from both of the super channels in the split
spectrum cross-connect). Therefore, for split spectrum only the ratio of n:n is supported (the
same number of entries in both PFSL and FSL parameters, where n is greater than 1).
Neither the 1:n, 1:1, nor m:n ratios are supported for split spectrum optical cross-connects.
□ The FSL parameter values can be edited to merge multiple entries into one entry, or to split a
single entry into multiple entries. Merging or splitting entries in the FSL parameter will affect
traffic. For the FSL parameter, a single operation can either edit the value of an entry or
add/remove an entry, but not both at the same time. Note that this feature is not applicable
for Gen4 superchannels.
□ The entries in the PFSL parameter cannot be edited; entries in the PFSL can only be added
or deleted.
□ For the PFSL parameter, multiple entries can specify overlapping frequency slots.
□ For the PFSL parameter, an entry can be added only to the end of the list in the parameter
field. For deleting, an item can be deleted from any position in the list.
□ In the case of split spectrum optical cross-connects, an item in the PFSL parameter cannot
span both parts of the split spectrum super channel.
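The supported PFSL-to-FSL entry-count ratios above can be summarized in a small validation sketch (hypothetical Python, not an Infinera API; entry values are illustrative):

```python
def valid_entry_ratio(pfsl_entries, fsl_entries, split_spectrum):
    """Check whether the PFSL/FSL entry counts form a supported ratio.

    Contiguous spectrum: 1:1, 1:n, or n:n are supported.
    Split spectrum: only n:n with n greater than 1 is supported.
    """
    p, f = len(pfsl_entries), len(fsl_entries)
    if p == 0 or f == 0:
        return False
    if split_spectrum:
        return p == f and p > 1
    # Contiguous spectrum: 1:1 and 1:n both have one PFSL entry;
    # otherwise the counts must match (n:n). m:n with m != n is rejected.
    return p == 1 or p == f

# Examples (entry strings are illustrative placeholders)
assert valid_entry_ratio(["-274&10"], ["-274&4", "-266&4"], split_spectrum=False)  # 1:n
assert not valid_entry_ratio(["a", "b"], ["x", "y", "z"], split_spectrum=False)    # m:n rejected
assert not valid_entry_ratio(["a"], ["x"], split_spectrum=True)                    # split needs n > 1
```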
■ When resizing ASE idler cross-connects, the values in the new (resized) FSL must have at least
one slice in common with the values in the existing/previous FSL.
■ After an upgrade, existing optical cross-connects will have the same value for Possible
Frequency Slot List and Frequency Slot List for FRMs associated with SOFx-500.
The example spectrum above illustrates a configuration where two adjacent SCHs are separated by a
12.5 GHz guard band. This 12.5 GHz guard band is required regardless of whether the SCHs come
from cross-connections on the same port or from adjacent ports. When the carriers are multiplexed
together and passed through a single port (as in the case of an FMP-C), the 12.5 GHz guard band
between the SCHs can be removed. A guard band spacing of 4 GHz is required between the last carrier
of the first superchannel and the first carrier of the next superchannel (C4 to C1' in the second spectrum
above) to handle tuning considerations between the two carrier sources, yielding an effective bandwidth
saving of 8.5 GHz compared to individual SCHs. However, the composite SCH still needs a guard
band of 12.5 GHz (6.25 GHz on either side) to be cross-connected through the WSS.
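The bandwidth saving quoted above follows directly from the guard-band figures: replacing a 12.5 GHz inter-SCH guard band with the 4 GHz intra-composite tuning gap saves 8.5 GHz per merged boundary. A minimal arithmetic check:

```python
# Guard-band accounting for two adjacent superchannels (values from the text).
separate_guard_ghz = 12.5       # inter-SCH guard band when cross-connected separately
composite_tuning_gap_ghz = 4.0  # C4-to-C1' gap required inside a composite SCH

# Effective saving per merged inter-SCH boundary.
saving_ghz = separate_guard_ghz - composite_tuning_gap_ghz
assert saving_ghz == 8.5

# The composite SCH still needs 12.5 GHz total (6.25 GHz on either side)
# to be cross-connected through the WSS.
composite_edge_guard_ghz = 12.5 / 2
assert composite_edge_guard_ghz == 6.25
```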
Note: The Alphabin_2 configurations for XTS-3300 are applicable for XTS3312-YN-EZC15 PONs
only.
This Alpha bin provides minimized guard band support, tuning the carriers for the most effective
utilization of the spectrum. It is typically combined with an FMP-C that multiplexes carriers from two
different line modules into a single contiguous passband, thereby minimizing the inter-SCH guard band
between the two line modules.
Figure 4-67 Optical and Digital TE links and SNCs (DTN-X with ROADM sample configuration)
Figure 4-68 Optical, Digital TE Links and Optical SNCs (XT-3300/XTS-3300 sample configuration)
Figure 4-69 Optical TE Links and Optical SNCs (ICE 4 modules and FBM/FRM sample configuration)
Note the following about optical and digital TE links and service provisioning:
■ A digital TE link between the FlexILS line modules (AOFXs in Figure 4-67: Optical and Digital TE
links and SNCs (DTN-X with ROADM sample configuration) on page 4-105) is supported only after
an optical connection is defined between the associated FRMs, such as an optical SNC (O-SNC) or
an optical cross-connect.
Note: For deployments in which GMPLS control channel over GRE is enabled, digital SNCs or
cross-connects can be provisioned without first provisioning an optical cross-connect in the
FlexILS link. Once the GMPLS control channel over GRE is brought up and the FRM is
connected to the IAM on each end of the link, connecting the FlexILS line module (AOFM/
AOFX/SOFM/SOFX) to the FRM at each end of the link will bring up the digital TE link. An
optical cross-connect can then be configured to bring up the data path.
■ Digital services can use optical TE links to traverse the network, meaning that a digital TE
link may be carried within an optical TE link as it crosses the network. An optical TE link can
be used by many different services:
□ Optical SNCs and optical cross-connects
□ Digital TE links (and any digital SNCs or cross-connects associated with the digital TE link)
■ Failure of an optical TE link will affect all optical SNCs or optical cross-connects associated with it,
thereby affecting any associated digital TE links and in turn their associated digital SNCs or digital
cross-connects.
■ Optical TE links are created between the following node types:
Figure 4-70 Optical TE Links, OELs, and Optical SNCs in a DTN-X Network
An OEL is also designed for a given module type in terms of its optical characteristics. For example, an
OEL might be defined for line modules with enhanced reach characteristics, indicated by a line
module PON containing “C6.” An OEL created for C6 PONs would support a path using any of the following C6 line modules:
■ AOLX-500-T4-n-C6
■ AOLX-500B-T4-n-C6
■ AOLM-500-T4-n-C6
■ AOLM-500B-T4-n-C6
An explicit route can be defined for the OEL in case the user wants the OEL to traverse specific optical
TE links.
Figure 4-71 Optical TE Links and FRM end-point based Optical SNCs in an ICE 4 Network (XT-3300
example)
Figure 4-72 Optical TE Links and FBM end-point based Optical SNCs in an ICE 4 Network
Starting R18.2.1, OELs can be used to define the work and restoration path of a restorable optical SNC
on ICE 4 line modules.
■ An explicit route can be defined for an OEL if the user wants the OEL to traverse specific optical
TE links.
■ The work path of an optical SNC can be constrained to an OEL and by setting Frequency Slot
■ The Restored Path of a restorable optical SNC refers to the recovery path set up to restore
user traffic when the working path is faulted.
■ The Restored Path can be constrained to a list of OELs selected during O-SNC creation.
For more information, see Optical Restoration on Optical Subnetwork Connections on page 4-113.
Figure 4-74: Optical SNCs Using FRM and FBM Endpoints on page 4-110 shows an example network
configuration, along with example O-SNC routes using FBM and FRM endpoints.
■ For an SLTE optical span between subsea point of presence (POP) stations. In this configuration,
the OPSM provides O-SNCP between two IAMs configured for one of the SLTE modes (SLTE
Mode 1, SLTE-TLA, or TLA).
■ For links between a cable landing station (CLS) and a point of presence (POP). In this
configuration, the OPSM provides O-SNCP between two IAMs configured for one of the SLTE
modes (SLTE Mode 1, SLTE-TLA, or TLA).
■ For links with a mix of OCGs (over SOLx2 modules via BMM2) and super channels (over SOFx
modules).
In SLTE applications, the OPSM is supported with IAMs; the IAM’s line port is connected to the OPSM to
optically protect the OTS (C-Band + OSC) signal via two line ports of the OPSM. The OPSM optical
switches provide optical protection that is agnostic to data rate, modulation format, and number of optical
channels. Each optical switch in the OPSM modules supports the extended C-Band (4.8THz) optical
window for optical protection.
Release 16.3 introduces OPSM-2 based optical protection for a network with a mix of OCGs (over SOLx2
modules via BMM2) and super channels (over SOFx modules).
For more information on configurations required for this network, see OPSM-2 protection for SOLx2 (via
BMM2) and SOFx.
Tributary-side O-SNCP
For terrestrial applications, the OPSM supports tributary side protection wherein the OPSM is deployed
between an AOFx-500 and an FMM-F250/FRM-9D.
■ The AOFx-500 SCG’s Interface Type must be set to Open Wave (see Open Wave Line Module
Configuration).
■ The FRM SCG’s Interface Type must be set to Infinera Wave, and the FMM SCG's Interface Type
must be set to Manual Mode 1 (see Manual Mode 1 Configuration).
■ The AOFx-500's line in/out port is connected to the OPSM facility (FAC) port, and the Provisioned
Neighbor TP on the OPSM PTP must be configured as the AOFx-500 SCG.
■ The FMM-F250's add/drop port is connected to the OPSM line port.
■ The FRM SCG's Provisioned Neighbor TP must be configured to the FMM Line SCG PTP to bring
up the link.
■ Both FRMs connected to the AOFx-500 will have a cross-connect for the same SCH.
■ Auto-discovery is not supported between the AOFx-500, FMM-F250, FSP-C, and FRM-9D in this
configuration while the AOFx-500 is in the OpenWave mode.
Provisioning Considerations
To provision O-SNC Restoration, users need to perform the following:
■ Create an OEL for the work path with preferred attributes
■ Create multiple OEL paths with the same set of attributes
■ Create an O-SNC service (work path) starting from the trib of the FBM at the near end to the trib
of the FBM at the far end
■ Auto-Restore - Set the value of this attribute to ‘yes’ to enable Dynamic GMPLS SNC Restoration
for an SNC. This attribute may be modified at any time.
■ Auto-Reversion - Enable automatic reversion to revert the restorable SNC back to its original
working path after a restoration event.
■ Use Preferred Restoration Route Info - Check this option to configure the inclusion and exclusion
list that should be used as a first option when restoring the SNC. This attribute may be modified at
any time. Preferred Restoration constraints take effect only if auto restoration is enabled.
■ Priority - Set the priority value from 0 to 7. At the network element level, each priority level is
assigned a hold-off timer value to indicate how long GMPLS should wait before attempting to
restore the SNC (see Restoration Priority on page 4-142). The priority attribute can only be set for
SNCs enabled with auto-restore. The default value for the priority attribute is zero. The priority
attribute may be modified at any time, even after creation of the SNC.
□ BMM2-8-CXH2-MS
□ BMM2H-4-R3-MS
□ BMM2-8-CH3-MS
□ BMM2-8-CEH3
■ Gen 1 Mode:
□ BMM2C-16-CH
□ BMM-4-CX2-MS-A
□ BMM-4-CX3-MS-A
□ BMM-8-CXH2-MS
□ BMM-8-CXH3-M
□ BMM1H-4-CX2
Note: With the exception of the BMM2C-16-CH, all of the BMMs listed for Gen 1 mode
require:
15dB of pad between the FMM-C-5 and the BMM OCG
7dB of pad between FMM-C-5 transmit side (Tx) and BMM OCG receive side (Rx)
For configurations from OFx-100 to FMM-C-5 to FRM, where the FMM-C-5 operating mode is set to
FlexILS mode, an add/drop optical cross-connect is required between the tributary-side super channel
CTP endpoint on the FMM-C-5 and the line-side super channel CTP endpoint on the FRM, as shown by
the orange line in the figure below.
Note that the user can create this optical cross-connect either by provisioning a manual optical cross-
connect between the FMM-C-5 and the FRM, or by provisioning an optical SNC from the FMM-C-5 to
another FMM-C-5 in the network. In the case of OFx-100 to FMM-C-5 to FRM, the OFx-100's super
channel number is provisioned as part of the optical cross-connect/SNC provisioning (the user must
specify the super channel number while creating the optical cross-connect/SNC, and the OFx-100 is
automatically configured accordingly).
Figure 4-80 Add/Drop Optical Cross-connect (example with FMM-C-5 and FRM-4D)
For configurations from OFx-100 to FMM-C-5 to BMM, where the FMM-C-5 operating mode is set to
either Gen 1 or Gen 2 mode, there is no associated optical cross-connect (see the figure below). With this
BMM configuration, the user must configure the super channel CTP on the OFx-100 to match the OCG
number on the BMM port.
The user must specify OFx-100's super channel as the OCG number and the carrier pair within the OCG
in the format OCGn-<carrier pair>, where n = 1-16 and the carrier pair can be one of the following pairs:
1-2, 3-4, 5-6, 7-8, or 9-10. For example, to specify carriers 3 and 4 in OCG 2, the user would configure
the OFx-100's super channel number to OCG2-3-4.
Note: When configuring the OFx-100 for OCGs 5-8 or 13-16, the carrier pair 3-4 is not supported, due to the
optimization of the OFx-100 for the ITU 50GHz channel plan.
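The naming rule and the OCG restriction above can be sketched as a small helper (hypothetical Python; the function name and error handling are assumptions, not part of the product):

```python
VALID_PAIRS = {"1-2", "3-4", "5-6", "7-8", "9-10"}
# Carrier pair 3-4 is not supported for OCGs 5-8 and 13-16
# (ITU 50GHz channel plan optimization of the OFx-100).
PAIR_3_4_EXCLUDED_OCGS = set(range(5, 9)) | set(range(13, 17))

def ofx100_sch_name(ocg, pair):
    """Return the OFx-100 super channel number string, e.g. 'OCG2-3-4'."""
    if not 1 <= ocg <= 16:
        raise ValueError("OCG number must be 1-16")
    if pair not in VALID_PAIRS:
        raise ValueError("carrier pair must be one of 1-2, 3-4, 5-6, 7-8, 9-10")
    if pair == "3-4" and ocg in PAIR_3_4_EXCLUDED_OCGS:
        raise ValueError("carrier pair 3-4 is not supported for OCGs 5-8 or 13-16")
    return f"OCG{ocg}-{pair}"

# Carriers 3 and 4 in OCG 2, as in the example above:
assert ofx100_sch_name(2, "3-4") == "OCG2-3-4"
```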
Note the following for configurations where the FlexILS line module (AOFx-500) is connected to an FRM
via an FMP-C:
■ FMP-C is supported only for FRMs in Native-Automated mode (FMP-C is not supported for FRMs
in SLTE mode).
■ A FlexILS line module (AOFx-500) with an FMP-C connection will ramp up to operational power
only after the user has created an optical cross-connect or an optical SNC on the line module.
■ For configurations where the FlexILS line module (AOFx-500) is connected to an FRM via an FMP-
C, specific associations and provisioning steps are required for FMP-C connections in order to
prevent mis-connections. For FlexILS line module connections via FMP-C, the following three
associations are required (see the procedure below):
□ AOFx-500 to FMP-C
□ FMP-C to FRM-9D
□ FRM-9D SCH CTP to AOFx-500 SCH CTP
In order for optical cross-connects or optical SNCs to come into service, the following provisioning steps
are required to connect an AOFx-500 to an FMP-C:
Note: Once the line module is configured for SCG passive multiplexing or the line module
is associated to an FMP-C, the line module cannot be unlocked until both configurations are
performed.
4. Unlock the line module. (This completes the FMP-C to line module association.)
5. Associate the FMP-C line port to the FRM-9D tributary port. (In TL1, this is performed via the ED-SCG
command on the FRM add/drop SCG port and setting the PROVFPMPO value to the AID of FMP-C
MPO port.)
6. Repeat Step 1 through Step 5 at the far-end node.
7. Create the optical cross-connect/SNC on the AOFx-500 super channel (the AOFx-500 super channel
number must match the optical cross-connect/SNC’s super channel number).
8. Associate the FRM-9D tributary super channel to the AOFx-500 super channel. (Note that the FRM
SCH can be associated to the AOFx-500 SCH only after a cross-connect/SNC has been created in
Step 7.) To associate the FRM-9D super channel to the AOFx-500 super channel, use Associated
Client SCH CTP parameter on the FRM super channel. (In TL1, this is performed via the ED-SCH
command on the FRM-9D, specifying the AOFx-500 super channel in the CLIENTSCHCTP
parameter.)
Note: If the optical cross-connect/SNC created in Step 7 is subsequently locked after the
FRM-9D tributary SCH CTP is associated to the AOFx-500 SCH CTP, the association will be lost, and
Step 8 will need to be repeated after the cross-connect/SNC is unlocked. Likewise, if the cross-
connect/SNC is deleted and re-created, the association will be lost and Step 8 will need to be
repeated.
Note: Automatic super channel tuning is not supported for ICE 4 line modules (i.e. XT(S)-3300,
OFx-1200 and XT(S)-3600).
Note: Digital SNC provisioning and service protection is not supported for XT(S)-3300 in the
current release.
■ D-SNCP Protection Groups and Protection Units (see D-SNCP Protection Groups and Protection
Units on page 4-132)
■ Switching Hierarchy and Criteria (see Switching Hierarchy and Criteria on page 4-134)
■ D-SNCP through Third-Party Networks (see D-SNCP through Third-Party Networks on page 4-137)
■ D-SNCP Automatic Alarm Adjustment (see D-SNCP Automatic Alarm Adjustment on page 4-140)
Note: It is possible to use different types of D-SNCP at each end of a circuit or route. In other words,
one end of a route can be protected using 2 Port D-SNCP, while the other end of the route is
protected with 1 Port D-SNCP.
Note: D-SNCP of both types can be applied to the client/tributary endpoints of Multi-point
Configuration (see Multi-point Configuration on page 4-23).
2 Port D-SNCP
Note: Previously, this protection scheme was referred to as “Dual TAM D-SNCP” in Infinera technical
documentation. But because this type of protection also applies to configurations without a TAM, such
as future support of protection for DTN-Xs that use TIMs instead of TAMs, this type of protection is
now referred to as “2 Port D-SNCP.”
2 Port protection offers the highest level of service protection on all interface points within the optical
network, including the client ports. 2 Port D-SNCP provides end-to-end protection of optical services and
protects against TAM/TIM failures by using two TAMs or TIMs connected to client equipment at each end
of the network path.
In 2 Port D-SNCP, a Y-cable (optical signal splitter/combiner) is connected to the client equipment at
either end of the network. As shown in Figure 4-83: 2 Port D-SNCP (DTN Example) on page 4-125, the Y-
cable at the ingress point directs two identical copies of the client signal to two different TOM interfaces
on the originating node. These two interfaces receive the duplicate client signals and encapsulate each
signal into a DTP wrapper (for MTC/DTC endpoint) or an ODUk (for XTC endpoints) for transmission
through the network. Each signal is transported independently to the destination node, typically along
diverse routes.
Note: In 2 Port D-SNCP, the ingress endpoints must be on the same physical chassis of the
originating node. Likewise, the egress endpoints must be on the same chassis of the terminating
node. In other words, a Y-cable used for 2 Port D-SNCP can't be connected to termination points on
two different chassis.
At the remote end, the destination node monitors both signal paths and, depending on signal quality,
switches the appropriate signal towards the client equipment. Only one of the two digital path-level
signals is enabled at the egress.
In the event of a datapath failure (due to facility or equipment failures), an automatic protection switch
mechanism at the destination node switches the redundant copy of the client signal to transmit on the Y-
cable at the egress, with sub-50ms switching speeds.
Note: Hybrid cascaded 2 Port D-SNCP Protection Circuits may experience double switching under
certain conditions.
User-generated switching requests are also supported. 2 Port D-SNCP can be configured to be auto
revertive, so that traffic is switched back to the working unit within 50ms once the working unit comes
back into service and the wait to restore (WTR) timer expires.
Datapath protection groups provisioned for revertive protection will automatically revert the service back
to its original path after the restored path becomes available and the WTR timer expires. The WTR timer
is configurable between 5 and 120 minutes, with 5 minutes as the default.
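The WTR timer bounds above can be expressed as a simple range check (an illustrative Python sketch; the function and constant names are assumptions):

```python
# Wait-to-restore (WTR) timer limits, in minutes, per the text above.
WTR_MIN, WTR_MAX, WTR_DEFAULT = 5, 120, 5

def wtr_minutes(requested=None):
    """Return a valid WTR value; None selects the 5-minute default."""
    if requested is None:
        return WTR_DEFAULT
    if not WTR_MIN <= requested <= WTR_MAX:
        raise ValueError("WTR must be between 5 and 120 minutes")
    return requested

assert wtr_minutes() == 5
assert wtr_minutes(120) == 120
```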
To provide protection services, the control plane of the line modules and TEMs in which active and
standby protection units reside should be fully operational. Protection service is unavailable with line
module or TEM equipment failure, line module or TEM removal, or circuit pack to circuit pack control bus
failures. The only exception to this requirement is in the case of a protection switch triggered by removal
of a line module or TEM containing the active protection unit. In this case, only the former standby
protection unit’s control plane needs to be fully operational.
Note the following for specific TOMs with 2 Port D-SNCP:
■ For switchovers on a TOM-100G-L10X or TOM-100G-S10X, if the tributary disable action is set to
Laser Off, protection switch times can exceed 50ms. For these 100GbE TOMs, it is recommended to set
the tributary disable action to Insert Idle Signal. (See Tributary Disable Action on page 3-41.)
■ 2 Port D-SNCP is not supported for TOM-100G-SR10 and TOM-40G-SR4.
Note the following for 2 Port D-SNCP for endpoints on the XTC:
■ See DTN-X Service Capabilities on page A-1 for the 2 Port D-SNCP capabilities for the services
on the XTC-10, XTC-4, XTC-2, and XTC-2E.
■ 2 Port D-SNCP is not supported for OTU4 transport without FEC service on the TIM-1-100G/
TIM-1-100GX, nor for OC-768/STM-256 services on the TIM-1-40GM.
■ 2 Port D-SNCP is not supported for ODU multiplexing services (see ODU Multiplexing on page 4-
48.)
■ 2 Port D-SNCP on TIM-1-100GE-Q is supported only when using TOM-100G-Q-LR4 modules.
■ The two client signals in 2 Port D-SNCP can each employ different network mapping values. For
example, for a 100GbE service type, the working route can use VCAT (ODU2i-10v) service
mapping while the protect route uses non-VCAT (ODU4i) service mapping, or vice-versa.
■ For 2 Port D-SNCP with paths that use VCAT network mappings, a protection switch will be
triggered if there is a fault detected on any of the constituent ODUs of the GTP.
■ For 2 Port D-SNCP of endpoints on the XTC, the signal degrade (SD) protection switch trigger is
detected by the software on the line module. Therefore, if a line module is warm rebooting, it will
not respond to the SD condition until its software is present and running. Similarly, suppose an SD
condition is detected and a protection switch occurs successfully; if a line module is then warm
rebooted at the time the SD condition clears, the line module will not acknowledge the condition as
cleared until the reboot is completed, so only one path (the protection path) is seen as available.
This means that a fault on the protect path may affect traffic until the line module completes its
reboot and acknowledges that the SD condition is cleared.
■ For software releases lower than IQ NOS R18.2, restorable SNCs cannot be part of the protection
group.
Note the following for 2 Port D-SNCP for endpoints on the DTC/MTC:
■ 2 Port D-SNCP is not supported for endpoints on the TAM-2-10GT nor DICM-T-2-10GT, nor for
SNCs in which the source endpoint is a receive electrical TOM (TOM-1.485HD-RX or
TOM-1.4835HD-RX) and the destination endpoint is a transmit electrical TOM (TOM-1.485HD-TX
or TOM-1.4835HD-TX).
■ 2 Port D-SNCP is supported for OC-768/STM-256 services, but it is not supported for 4x10Gbps
services.
■ When provisioning 2 Port D-SNCP on the TAM-8-1G, use protection units belonging to different
tributary port pairs. A tributary port pair is (1a, 1b), (2a, 2b), (3a, 3b) or (4a, 4b), and traffic on each
of these port pairs is mapped together into a single 2.5Gbps digital path. If both protection units
belong to the same port pair (1a and 1b, for example), there is no effective protection against
a path failure along the circuit.
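The port-pair constraint above can be sketched as a quick diversity check (hypothetical Python; names are illustrative):

```python
# Tributary port pairs on the TAM-8-1G; each pair maps into one 2.5Gbps digital path.
PORT_PAIRS = [("1a", "1b"), ("2a", "2b"), ("3a", "3b"), ("4a", "4b")]

def pair_of(port):
    """Return the tributary port pair a given port belongs to."""
    return next(p for p in PORT_PAIRS if port in p)

def effective_protection(work_port, protect_port):
    """Protection is effective only when the two PUs sit on different port pairs."""
    return pair_of(work_port) != pair_of(protect_port)

assert not effective_protection("1a", "1b")  # same pair: no path diversity
assert effective_protection("1a", "2b")      # different pairs: protected
```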
Figure 4-83: 2 Port D-SNCP (DTN Example) on page 4-125 and Figure 4-84: 2 Port D-SNCP (DTN-X
Example) on page 4-126 illustrate the configuration of 2 Port D-SNCP on DTN and DTN-X, respectively.
1 Port D-SNCP
Just like 2 Port D-SNCP, 1 Port D-SNCP generates duplicate client signals and encapsulates each signal
into an Infinera wrapper for transmission through the network, using two dedicated and diverse 1+1
“working” and “protect” service paths. It then performs performance monitoring on both the working and
protect services to select the optimal service at the far end's client interface. 1 Port D-SNCP protection is
supported for services originating on the DTC, MTC, and XTC:
■ For services originating on a DTC/MTC, the duplicate client signals are each encapsulated into a
DTP wrapper for transmission through the network, and the optimal service is selected at the far-
end's client interface on the far-end line module or TEM.
■ For services originating on an XTC, the duplicate client signals are each encapsulated into the
ODUki Infinera wrapper for transmission through the network, and the optimal service is selected in
the switch fabric at the far-end XTC.
The figures below show the path of 1 Port D-SNCP for DTN-X and DTN, respectively.
Unlike 2 Port SNCP, which requires dual TAMs or TIMs at the ingress and egress points, 1 Port D-SNCP
reduces network deployment costs by eliminating dual TAMs/TIMs and Y-cables at network ingress and
egress, providing a true MSPP-like UPSR/SNCP protection implementation on the Intelligent Transport
Network.
Note: Line-side 1 Port D-SNCP is supported for ODU2i_10v VCAT manual cross connects on the
TIM-1-100GE of an XTC-4/XTC-10, see 1 Port D-SNCP on Line Side for ODU2i-10V VCAT Services
on page 4-130.
All client tributaries can be optionally configured to generate DTP-AIS (for DTC/MTC endpoints) or ODU-
AIS (for XTC endpoints) downstream if there is any fault on the client. Because of this, 1 Port D-SNCP is
supported for SNCs or cross-connects that traverse third-party networks.
Note: For 1GbE, 10G Clear Channel, and 2.5G Clear Channel, the trigger for switching will be
tributary OLOS and client LOS.
1 Port D-SNCP for 1GbE or 1G Fibre Channel (1GFC) services on the TAM-8-2.5GM can make use of
the flexible mapping of tributary port to DTPCTP. As described in 1GFC and 1GbE Service Provisioning
on page 4-14, when creating 1GbE and 1GFC services on the TAM-8-2.5GM, the DTN allows for flexible
mapping of tributary port to DTPCTP, so that the user is able to specify the virtual channel in the DTPCTP
to which the service should be mapped, as long as no service is already provisioned on the virtual
channel.
1 Port D-SNCP can be configured for endpoints on the TAM-2-10GT for 10Gbps SNCs across a Layer 1
OPN. Note the following constraints for 1 Port D-SNCP over Layer 1 OPN:
■ 1 Port D-SNCP is not supported for 2.5Gbps services on the TAM-2-10GT. To protect SNCs across
Layer 1 OPN, services should be configured as 10Gbps SNCs.
■ 1 Port D-SNCP is not supported on TAM-2-10GT tributaries that are configured as TE endpoints of
a Layer 1 OPN TE link. (And the converse is also true: TAM-2-10GT tributaries that are configured
for 1 Port D-SNCP cannot be configured as TE endpoints of a Layer 1 OPN TE link). This means
that 1 Port D-SNCP is configured on the TAM-2-10GT tributaries at the Provider Edge, since the
TAM-2-10GT tributaries on the Customer Edge are configured as TE endpoints for the TE link.
■ 1 Port D-SNCP is not supported on a 10Gbps DTPCTP of a TAM-2-10GT if a 2.5Gbps service is
originating/terminating on a constituent 2.5Gbps DTPCTP.
Figure 4-89 Example Network Configuration with Line-side 1 Port D-SNCP for ODU2i_10v VCAT
Note the following about configuring the fault isolation layer to iPATH:
■ iPATH fault isolation is supported only for 1 Port D-SNCP.
■ iPATH layer fault isolation cannot be used over a third party network or on a data path that includes
OTUk to OTUk TIM ports.
working and protection paths are faulted), the laser will be shut down. (Note that when the
TIM/TAM transitions between payload signal and idle groups, some corrupted frames will be
transmitted towards the client equipment.)
Protection Switch Laser Control is supported for the GbE interfaces on the following TIMs and TAMs:
■ TIM-1-100GE
■ TIM-1B-100GE
■ TIM-5-10GM
■ TIM-5-10GX
■ TIM-5B-10GM
■ TIM-16-2.5GM
■ TAM-2-10G
■ TAM-2-10GR
■ TAM-2-10GM
■ DICM-T-2-10GM
■ TAM-8-1G
Protection Switch Laser Control is not supported on TIM2-2-100GM, TIM2-2-100GX, TIM2-18-10GM and
TIM2-18-10GX.
Note the following for Protection Switch Laser Control:
■ In the TL1 interface, this feature is called Ethernet Protection Switching Laser Control (EPSLC).
■ The Protection Switch Laser Control feature applies only for Ethernet interfaces configured for 1
Port D-SNCP.
■ Idle cell insertion is not supported for Ethernet interfaces on the TAM-2-10G and TAM-2-10GR. For
these TAMs, the laser will be kept on when Protection Switch Laser Control is enabled, but
because the incoming client signal is unavailable an indeterminate signal will be sent downstream.
■ For DTN-X nodes upgrading from pre-Release 15.3 to Release 15.3 or higher, any TIMs installed
before the upgrade will require a service-affecting cold reset to enable the functionality introduced
in Release 15.3.
■ For TAM-8-1G endpoints only, for local and remote side tributary ports involved in 1 Port D-SNCP,
if the Protection Switch Laser Control is set to Enable Laser, then it is also required to enable AIS
on Client Signal Failure on both the local and remote side tributary ports.
In Digital SNCP, one PU in each DPG is identified as the ‘Working’ PU, and the remaining PU is identified
as the ‘Protect’ PU. This designation, called the PU Configured State, identifies the Working path - the
path used in the absence of network failures - and the Protect path - the path used in the event of a
network failure.
When the system is running, at the origin node both Working and Protect PUs send any datapath
traffic they receive from the client side to the network interfaces of the node, resulting in two transmission
paths through the Intelligent Transport Network. The Working and Protect paths are generally routed
along completely diverse routes through the network.
At the destination node, the receiving node terminates both paths on the far-end Working and Protect
PUs. The receiving node evaluates the quality of both signals received on the DPG, and enables only one
of the two PUs to actively transmit traffic to the far-end client. In the absence of any prior protection switch
activity, the Working PU is the active PU at the destination node. The other PU exists in a standby state
(in 2 Port D-SNCP, the Protect PU will power off its transmission laser in the standby state).
For both 1 Port Digital SNCP and 2 Port Digital SNCP, the path chosen as the working path is a local
decision, meaning that each end of the circuit chooses which signal to use independently. Each end of
the circuit may pick a different path as the working path.
Both 1 Port D-SNCP and 2 Port D-SNCP define the behavior for an outage of both the
Working PU and the Protect PU, to ensure that, when possible, the Working PU is shown as the active
PU irrespective of whether traffic is up. In the case of a local node power cycle, or of a remote line
module reset or power cycle (or in any case where both PUs fail):
■ Traffic is switched to the Working path, even if the failure is present in both the Working path and
the Protect path.
■ If the failure is cleared first in the Working path, traffic will recover on the Working path immediately.
■ If the failure is cleared first in the Protection path, traffic will not recover on the Protection path
immediately.
□ Within a few seconds, if the failure is cleared in the Working path, traffic will recover on the
Working path immediately.
□ After a few seconds, if the failure is not cleared in the Working path, traffic will switch to the
Protection path and traffic will recover on the Protection path.
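The dual-outage recovery rules above can be sketched as follows (hypothetical Python; the grace period length is an assumption standing in for "a few seconds"):

```python
def select_path_after_dual_outage(working_faulted, protect_faulted,
                                  seconds_since_switch_to_work):
    """Sketch of the dual-outage recovery rules described above.

    After both PUs fail, traffic is parked on the Working path. If the Working
    path clears, traffic recovers there immediately. If only the Protect path
    clears, traffic moves to it only after a short grace window.
    """
    GRACE_SECONDS = 5  # "a few seconds" in the text; exact value is an assumption
    if not working_faulted:
        return "working"   # Working path cleared: recover immediately
    if not protect_faulted and seconds_since_switch_to_work > GRACE_SECONDS:
        return "protect"   # only Protect cleared, grace window elapsed
    return "working"       # hold on the Working path during the grace window

assert select_path_after_dual_outage(False, True, 0) == "working"
assert select_path_after_dual_outage(True, False, 10) == "protect"
assert select_path_after_dual_outage(True, False, 1) == "working"
```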
□ Active—The PU is currently providing full service, carrying datapath traffic in both directions.
□ Hot Standby—The PU is not active, but is healthy and able to provide protection service if
called upon.
□ Cold Standby—The PU is not active, and its operational state renders it unable to provide
protection service if called upon.
In either case, if the PU being locked out by the command is currently active, a protection switch to the
other PU shall occur, regardless of the state of the other PU (or of the state of the traffic being carried by
the PU). After the lockout-induced switch, traffic cannot be moved back to the locked-out PU until the
lockout command is cleared.
Note: If a failure occurs on the Protect circuit while a Lockout of Working is in effect, traffic cannot
switch to the Working circuit until the lockout is cleared. Conversely, if a failure occurs on the Working
circuit while a Lockout of Protect is in effect, traffic cannot switch to the Protect circuit until the lockout
is cleared. Both cases can result in loss of traffic.
Note: Manual Lockout of Working switches are non-revertive. If a lockout of working is issued and
then cleared by the user, traffic transmission will continue on the protection route until either an
automatic switch request is triggered (due to a failure), or a user-initiated switch request (manual or
lockout) is issued.
Users can also issue a request to clear a lockout. A user-initiated Clear command removes a lockout
switching request. However, network-, service-, or equipment-generated switching requests are not
cleared by the Clear command. It is also important to note that manual Lockout of Working requests are
non-revertive, meaning that if a user-generated Lockout of Working is assigned to a PG, traffic will
switch to the Standby PU and path, and upon issuance of a clear request, traffic does not automatically
switch back to the Working path.
■ Protection Switch for Client Rx Fault (FACRXPSTRIG in TL1)—Indicates whether a client facility
receive fault should be considered as a trigger for a protection switch. The default is for this feature
to be enabled.
Note: If traffic is running on the protection path of a revertive 2 Port Digital SNCP, changing this
parameter will cause traffic to switch to the active working path. It is recommended that this
parameter be set when creating the 2 Port Digital SNCP, or immediately afterward.
■ Switch to Work PU After Dual Outage (WKGPUAFTDLOUT in TL1)—Indicates whether traffic
should be switched to the working protection unit after a dual outage. The default is for this
feature to be disabled.
Note: For GNM and DNA, these two parameters are not configurable when creating the 2 Port D-
SNCP; they can be edited only after the 2 Port D-SNCP is created. (TL1 does support configuring
these parameters when using the ENT-FFP-TRIB command to create the 2 Port D-SNCP.)
■ Allow the user to start the WTR timer upon manual protection switches (Lockout/Manual).
■ Allow the user to clear standing conditions from auto switch requests.
Note the following about using CSF as a trigger for protection switching:
■ CSF is supported as a switch trigger only for endpoints on an XTC chassis of a DTN-X running
Release 10.0 or higher.
□ See Figure 4-91: Using CSF as a Protection Trigger over Third-Party Networks on page 4-138
below for protection switch handling in networks with DTNs and DTN-Xs via back-to-back
TIM-TAM connections.
■ CSF is supported as a switch trigger for both 1 Port D-SNCP and 2 Port D-SNCP.
■ CSF can be enabled as a switch trigger even after the protection group is created.
■ CSF can be enabled as a switch trigger on a per-PU basis (working PU only or protect PU only), or
for both PUs by setting the following values for the protection group:
□ Work—Use CSF as switch trigger only on work PU.
□ Protect—Use CSF as switch trigger only on protect PU.
□ Enabled—Use CSF as switch trigger on both work and protect PUs.
□ Disabled—Do not use CSF as a switch trigger on any PU.
■ CSF as a trigger for protection switching is not supported for services with ODUki and ODUflex
network mapping types.
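The per-PU trigger values listed above can be summarized as a simple mapping. The following is an illustrative sketch only (not product code); the setting names come from the text, while the function name and PU labels are hypothetical:

```python
# Illustrative mapping of the per-PU CSF switch trigger setting to the
# protection units on which CSF is used as a trigger.

def csf_trigger_pus(setting: str) -> set:
    """Return which protection units use CSF as a switch trigger."""
    mapping = {
        "Work": {"work"},                 # CSF trigger on work PU only
        "Protect": {"protect"},           # CSF trigger on protect PU only
        "Enabled": {"work", "protect"},   # CSF trigger on both PUs
        "Disabled": set(),                # CSF not used as a trigger
    }
    return mapping[setting]
```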
CSF is supported only for endpoints on an XTC chassis of a DTN-X. However, for network configurations
that use both DTNs and DTN-Xs with back-to-back TIM-TAM connections as shown in Figure 4-91: Using
CSF as a Protection Trigger over Third-Party Networks on page 4-138, CSF is used as a protection
switch trigger for endpoints on the DTN-X, and DTP-AIS is used as a protection switch trigger for
endpoints on the DTN.
Note: Network configurations with back-to-back TIM2-TAM connections are not supported.
For example, with the fiber break shown between DTN B and DTN-X D in Figure 4-91: Using CSF as a
Protection Trigger over Third-Party Networks on page 4-138:
■ The interface labeled “D1” on DTN-X D detects the fiber break as a client failure and DTN-X D sets
the CSF indicator to “1”.
■ If the PU on the interface labeled “F1” on DTN-X F is configured to use CSF as a protection switch
trigger, then the incoming CSF indicator will trigger a protection switch.
■ At the other (DTN) side of the network, if the interface labeled “B2” on DTN B is configured for fault
escalation, DTN B will escalate the client failure into a DTP.AIS failure, which triggers a protection
switch at DTN A.
switch at the far end. See Encapsulated Client Disable Action on Egress (DTN) on page 3-48 for
information on all of the supported encapsulated client disable actions.
In the case shown in Figure 4-93: ODUk AIS for ODUk Encapsulated Clients in Mixed DTN/DTN-X
Network on page 4-140, the interface labeled “B2” on DTN B is configured for Encapsulated Client
Disable Action of “ODUk AIS”. So in the case of a failure on the DTN side, DTN B would generate an
ODUk AIS signal on client port B2 toward DTN-X D, thus triggering a protection switch at DTN-X F.
(Note that if a failure occurs on the DTN-X side, ODUk AIS is sent from the DTN-X side to the DTN
side of the network, which prompts the DTN to trigger a protection switch on the DTN side.)
Figure 4-93 ODUk AIS for ODUk Encapsulated Clients in Mixed DTN/DTN-X Network
For information on the TAMs and service types that support ODUk AIS for ODUk Encapsulated Clients,
see Encapsulated Client Disable Action on Egress (DTN) on page 3-48.
feature operates. Under normal operation, the state of each SNC is maintained by a signaling protocol,
and traffic is carried along a ‘working route’. When a datapath failure occurs, all the impacted SNCs
automatically detect the failure at their endpoints. At the source node, circuits configured for restoration
are automatically re-signaled along a different, functional path, called the ‘restoration route’.
If multiple circuits are impacted simultaneously, the circuits are restored in a sequence based on the user-
configured restoration priority level assigned to each SNC (see Restoration Priority on page 4-142).
Note: Dynamic GMPLS circuit restoration only applies to SNCs; it does not impact manual cross-
connects.
Note: See DTN-X Service Capabilities on page A-1 for a full list of the DTN-X services and
modules that support GMPLS restoration.
Note: Dynamic GMPLS circuit restoration is not supported for SNCs in which the source endpoint is a
receive electrical TOM (TOM-1.485HD-RX or TOM-1.4835HD-RX) and the destination endpoint is a
transmit electrical TOM (TOM-1.485HD-TX or TOM-1.4835HD-TX).
SNCs that are configured for restoration can also be configured for reversion to the original working path,
as described in the sections below.
Restoration Priority
SNCs can be configured with a restoration priority value from 0-7 to be used by GMPLS in the case of
SNC restoration. At the network element level, the restoration priorities 0-7 can be associated with a hold-
off timer setting from 0 to 86400 seconds (i.e., 24 hours), so that restoration priority levels can affect the
order in which GMPLS attempts to restore SNCs.
For example, if the restoration hold-off timer for restoration priority level 3 is set to 40 seconds, GMPLS
will wait 40 seconds before attempting to restore all SNCs with priority level 3; if the hold-off timer for
priority level 0 is set to 0 seconds, GMPLS will immediately attempt to restore all priority 0 SNCs. In
other words, the hold-off timer value associated with each restoration priority determines when
restoration is initiated for the SNCs at that priority.
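The relationship between priority, hold-off timer, and restoration order can be sketched as follows. This is a minimal illustration under stated assumptions: only the 0-86400 second range, the priority-3/40-second example, and the priority-0/0-second example come from the text; the other mapping values and all names are hypothetical.

```python
# Hypothetical per-network-element mapping of restoration priority (0-7)
# to a hold-off time in seconds (valid range 0-86400). Only the priority 3
# (40 s) and priority 0 (0 s) entries are taken from the example above.
holdoff_by_priority = {0: 0, 1: 10, 2: 20, 3: 40, 4: 80, 5: 160, 6: 320, 7: 640}

def restoration_delay(priority: int) -> int:
    """Seconds GMPLS waits before attempting to restore an SNC."""
    if not 0 <= priority <= 7:
        raise ValueError("restoration priority must be 0-7")
    return holdoff_by_priority[priority]

# The hold-off timer associated with each priority, not the priority
# number itself, determines when each restoration attempt is initiated.
impacted = [("snc-a", 3), ("snc-b", 0)]
order = sorted(impacted, key=lambda snc: restoration_delay(snc[1]))
```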
Operational Details
When Dynamic GMPLS SNC Restoration is triggered (see Automatic Restoration Triggers on page 4-144
for a complete list of trigger events), the network element at the source endpoint takes the following
actions to restore the impacted circuit:
■ The network element tries to determine whether the fault occurred at the source or destination
network element:
□ If the source node detects a line-side fault, it will try to restore the circuit irrespective of
whether the fault is at a source, intermediate, or destination node.
□ If the detected fault occurred at either a source or destination tributary/client, restoration will
not be attempted.
□ If the detected fault occurred at either a source or destination line module or TEM,
restoration will be attempted if the fault is attributed to a network fault on the SNC.
Note: Locking the source line module of an SNC does not trigger GMPLS restoration, but locking
the destination line module does trigger a GMPLS restoration attempt. Restoration will be attempted
continuously until the fault is cleared on the destination line module that houses the tributary.
■ Based on the restoration priority assigned to the SNC and the hold-off timer value associated
with the priority, GMPLS will wait the duration of the hold-off time before attempting to restore
the SNC (see Restoration Priority on page 4-142).
■ The source network element releases the SNC (if it is not already in a released state) and
begins to compute the restoration route. GMPLS attempts to restore SNCs as follows:
□ If the SNC has been configured to use Preferred Restoration route information, GMPLS
will use the configured inclusion/exclusion lists, along with other regular constraints, to
configure the restoration route for the SNC.
□ If either Preferred Restoration route information is not specified for the SNC, or a route
cannot be computed with the Preferred Restoration route information, GMPLS will
compute a diverse route in the following sequence:
1. GMPLS attempts to find a restoration path that is node and fiber diverse from the
protect path.
2. If a restoration path that is node and fiber diverse from the protect path cannot be
computed, GMPLS attempts to find a restoration path that is fiber diverse from the
protect path.
3. If a restoration path that is node and fiber diverse, or only fiber diverse, from the
protect path cannot be computed, GMPLS attempts to find a restoration path where
at least one fiber link is diverse from the entire protect path.
4. If a restoration path with at least one fiber link diverse from the protect path cannot be
computed, the SNC enters a set up failed state. GMPLS again attempts to find a
restoration path by following the above sequence after some time. The time between
attempts to find a restoration path in this case increases based on the number of
attempts.
■ Event logs are generated when SNC restoration starts and completes. If the SNC fails
restoration, an SNCFAIL alarm is declared, and the SNC's operational state is set to disabled.
The network element then proceeds with normal SNC setup retry procedures. When the SNC
is successfully restored, the SNCFAIL alarm is cleared, and the SNC operational state is set to
enabled.
Note: In the event of a removal or failure of the TAM/TIM/TIM2/TOM at the destination node,
restoration will be continuously attempted until the equipment is replaced.
■ Infinera nodes support a Restore Path Active (RESTPATHACTIVE) condition for auto-
restorable SNCs (both revertive and non-revertive) that indicates when the SNC has been
restored by GMPLS to a route other than the working route. Reporting for this alarm is disabled
by default. When reporting for this alarm is enabled, the alarm is raised on the local end of the
SNC when an unlocked, auto-restorable (either revertive or non-revertive) SNC is on a route
other than the configured working path; the alarm is cleared when the SNC is reverted back to
the original working path (for revertive SNCs), when the SNC is locked, or if the SNC is
converted from restorable to unprotected.
■ If the SNC was configured for automatic reversion, the originally configured working path of the
SNC is maintained and is continuously monitored for its health by checking the fault bits and
equipment state.
□ Once the original working path of the SNC demonstrates ten fault-free seconds, the SNC
goes into the Wait to Restore (WTR) state. GMPLS will continue to monitor the original
working path for the time configured in the Wait to Restore timer.
□ If the original working path of the SNC shows no faults during the Wait to Restore time,
traffic is switched back to the original working path (bidirectionally on both the Local and
Remote ends) and the path used for restoration is deleted.
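The diverse-route fallback sequence described in the steps above can be sketched as a simple loop. This is an illustrative model only: find_route stands in for GMPLS path computation and is hypothetical; only the order of the three diversity constraints comes from the text.

```python
# Minimal sketch of the diverse-route fallback sequence used when
# restoring an SNC without usable Preferred Restoration route information.

FALLBACK_ORDER = (
    "node_and_fiber_diverse",   # 1. node and fiber diverse from protect path
    "fiber_diverse",            # 2. fiber diverse from protect path
    "one_diverse_fiber_link",   # 3. at least one fiber link diverse
)

def compute_restoration_route(find_route):
    """Try each diversity constraint in order. A (None, None) result means
    the SNC enters a set-up failed state; GMPLS retries the same sequence
    later, with increasing time between attempts."""
    for constraint in FALLBACK_ORDER:
        route = find_route(constraint)
        if route is not None:
            return constraint, route
    return None, None
```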
Note: For SNCs using LM-80 OCH TE link, all nodes traversed by the SNC must be running software
that is Release 7.0 or higher.
Note: If there is a Pre-FEC Signal Degrade condition on a super channel, then the TE link bandwidth
is reduced.
Note: To manually revert an SNC back to its original working route, see Manual Operations on
Restorable SNCs on page 4-145.
Note: An SNC is also automatically restored if one of the cross-connects in its route is manually
released.
Note: Dynamic GMPLS SNC Restoration is primarily designed to provide traffic restoration utilizing
available alternate route bandwidth in the event of a fiber cut or module failure/removal. Performing a
BMM reseat or cold reset will trigger the restoration process. Due to the additional BMM boot time
requirements associated with these actions, local node SNC restoration may be delayed until the boot
process is completed.
Provisioning Considerations
To provision Dynamic GMPLS SNC Restoration, users set the following attributes during SNC
provisioning:
■ Auto-Restore - Set the value of this attribute to ‘yes’ to enable Dynamic GMPLS SNC Restoration
for an SNC. This attribute may be modified at any time.
■ Auto-Reversion - Enable automatic reversion to revert the restorable SNC back to its original
working path after a restoration event.
■ Use Preferred Restoration Route Info - Check this option to configure the inclusion and exclusion
list that should be used as a first option when restoring the SNC. This attribute may be modified at
any time. Preferred Restoration constraints take effect only if auto restoration is enabled.
■ Priority - Set the priority value from 0-7. At the network element level, each priority level is
assigned a hold-off timer value to indicate how long GMPLS should wait before attempting to
restore the SNC (see Restoration Priority on page 4-142). The priority attribute can only be set for
SNCs enabled with auto-restore. The default value for the priority attribute is zero. The priority
attribute may be modified at any time, even after creation of the SNC.
Note: For a back-to-back link between a Gen 3 10G TIM and a Gen 4 10G TIM
(TIM-5-10GM/GX or TIM2-18-10GM/GX), both the instance ID and the timeslots must be
selected when specifying the inclusion list for that link.
Note: While creating a multi-layer recovery service on DTN-X, it is recommended to do the following:
■ Create a Work SNC, ensure it is the working route and set the work SNC to maintenance state
■ Create the 1-port PG
Note: Prior to deleting or re-configuring a multi-layer recovery service to a revertive restorable SNC,
ensure that the work SNC of the multi-layer recovery service is the working route and then delete the
1-port PG.
The following sections illustrate multi-layer recovery:
■ Multi-layer Recovery for Revertive PG with Non-Revertive Restorable SNCs on page 4-147
■ Multi-layer Recovery for Revertive PG with Revertive Restorable SNCs on page 4-148
■ Network Resiliency against Multiple Failures on page 4-152
Figure 4-95 1 Port DSNCP with non-revertive restorable SNC: Failure on Work path
A failure takes place on the work path (W-PU) in the above sample network configuration.
Figure 4-96 1 Port DSNCP with non-revertive restorable SNC: Switch to protect path on failure of work
path
On work PU (W-PU) failure, the SNC switches to the Protect path (P-PU). GMPLS will set up a restoration
path (W'-PU) for the Working PU.
Figure 4-97 1 Port DSNCP with non-revertive restorable SNC: Work path is deleted
The originally configured working path (W-PU) of the SNC is deleted as the SNC is non-revertive. The
protect path (P-PU) is the active path and the restoration working path (W'-PU) is on standby.
Figure 4-98 1 Port DSNCP with revertive restorable SNC: Failure on Work path
A failure takes place on the work path (W-PU) in the above sample network configuration.
Figure 4-99 1 Port DSNCP with revertive restorable SNC: Switch to Protect PU on failure of Working path
On work PU (W-PU) failure, the SNC switches to the Protect path (P-PU). GMPLS will set up a restoration
path for the Working PU. The originally configured working path (W-PU) of the SNC is maintained and is
continuously monitored for its health by checking the fault bits and equipment state.
Figure 4-100 1 Port DSNCP with revertive restorable SNC: Switch to Work Restoration path on failure of
Protect path
If a subsequent fault occurs on the Protect PU while the original Working PU is still in a fault state, the
traffic switches back to the restoration route of the Working PU (W'-PU).
Figure 4-101 1 Port DSNCP with revertive restorable SNC: Reversion to healed Work Path
Once the fault on the work path (W-PU) is cleared and a fault-free state is maintained, a Wait to
Restore (WTR) timer is started for the restorable SNC for auto-reversion, and traffic is switched back
to the original working PU (i.e., from W'-PU to W-PU) after the WTR expires.
Figure 4-102 1 Port DSNCP with revertive restorable SNC: Delete work restoration path
The work restoration path (W'-PU) is deleted. The work PU (W-PU) is active and the Protect PU (P-PU)
has a failure.
Figure 4-103 1 Port DSNCP with revertive restorable SNC: Protect path failure
Since the Protect PU (P-PU) is failed, GMPLS computes a restoration path (P'-PU) for the Protect PU.
Figure 4-104 1 Port DSNCP with revertive restorable SNC: Work and Protect Path failure
On a subsequent work path (W-PU) failure (when the protect path is also failed), traffic switches from the
work path (W-PU) to the protect restoration path (P'-PU). A work restoration path (W'-PU) is created and
is on standby.
Figure 4-105 Multi-Layer Recovery in DTN-X illustrated with four fiber cuts in a sample network
Note: See DTN-X Service Capabilities on page A-1 for the specific services that support FastSMP.
Note: For performance reasons, an overbooking ratio of 10 is recommended: for any network
resource (e.g., link bandwidth), the total protection bandwidth configured on the resource should be a
maximum of 10 times the actual available bandwidth. For example, if a link has 500Gbps of capacity,
that link should be provisioned for no more than 5Tbps (10x500Gbps) of total protection bandwidth.
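The recommended overbooking check above can be expressed as a small calculation. The ratio of 10 and the 500Gbps/5Tbps example come from the text; the function names are illustrative.

```python
# Sketch of the recommended FastSMP overbooking check: total protection
# bandwidth on a resource should not exceed 10x its actual capacity.

OVERBOOKING_RATIO = 10

def max_protection_bandwidth_gbps(capacity_gbps: float) -> float:
    """Upper bound on provisioned protection bandwidth for a resource."""
    return OVERBOOKING_RATIO * capacity_gbps

def within_overbooking_limit(provisioned_gbps: float,
                             capacity_gbps: float) -> bool:
    return provisioned_gbps <= max_protection_bandwidth_gbps(capacity_gbps)
```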
Figure 4-106: FastSMP Working Paths Sharing Protection Resources on page 4-155 shows an example
network with two FastSMP protection groups using a shared protection resource (e.g., timeslots).
Each logical protection path is configured to register network resources when it is established, but no
actual protection bandwidth is consumed until the protection path is activated. Link states (availability of
protection bandwidth and paths) are maintained on the network elements. If a working path incurs a fault,
traffic is switched to the protection path and only then is the bandwidth on the protection path activated,
as shown in Figure 4-107: FastSMP Activated Protection Path on page 4-155, in which a fault occurs on
Working Path #1 and traffic is moved to Activated Protection Path #1.
As shown in Figure 4-107: FastSMP Activated Protection Path on page 4-155, if a network resource is
used as a protection resource for multiple FastSMP protection groups, the resource can be used by any
of the protection groups that might need it. This optimizes the bandwidth used by FastSMP protection
services.
When a shared protection path resource is activated as in Figure 4-107: FastSMP Activated Protection
Path on page 4-155, the node that detects the failure sends an SMP activation protocol message to the
head-end indicating failure on the path. On receiving the failure message, the head-end node selects
the least-cost protect path in the protection group and activates the protect path end-to-end by
sending an SMP activation protocol message. SMP protection switching is bidirectional (i.e., the
head-end won't select the activated protect path until it gets confirmation through the SMP activation
protocol from the tail-end that the tail-end has switched).
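The activation handshake described above can be modeled as follows. All names here are hypothetical; only the behavior (least-cost path selection, end-to-end activation, and waiting for tail-end confirmation) is taken from the text.

```python
# Illustrative model of the head-end's reaction to an SMP failure message
# from the node that detects the fault: pick the least-cost healthy
# protect path, send the activation message, and select the path only
# after the tail-end confirms it has switched (bidirectional switching).

def on_failure_message(protect_paths, send_activation, tail_end_confirms):
    """Head-end behavior on receiving an SMP failure message."""
    candidates = [p for p in protect_paths if p["healthy"]]
    if not candidates:
        return None
    path = min(candidates, key=lambda p: p["cost"])  # least-cost protect path
    send_activation(path)        # SMP activation protocol message, end-to-end
    return path if tail_end_confirms(path) else None
```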
Provisioning FastSMP
FastSMP is configured using GMPLS circuits (SNCs).
Note: See DTN-X Service Capabilities on page A-1 for the specific services that support FastSMP.
Note: All SNCs in the FastSMP protection group must originate from the head-end node of the
FastSMP (each working/protect SNC must use the head-end node as the source endpoint of
the SNC). See below for information on designating a node as head-end of a FastSMP
protection group.
■ The user then creates the FastSMP protection group using the reliable tributary termination point
AID. The FastSMP protection group must be created at both the head end and the tail end of the
service. When creating the FastSMP protection group at each end of the service, the user must
configure in the FastSMP protection group whether the supporting node is the head end or the tail
end:
□ Head End—The head end of the service, from which all protection parameters are
configured and from which user operations must be performed for the FastSMP protection
group. This includes creation of working and protect paths, protection switches, and
reversions. In addition, the head end triggers the activation of the protect path in case of
protection switches.
□ Tail End—The tail end of the FastSMP service.
■ Once the working path and FastSMP protection group are created, the user creates one or more
protection paths, which the user can configure for diversity from the working path as described
below.
FastSMP supports the following provisioning features:
■ SNC diversity: For GMPLS created SNC protection paths, the following diversity options can be
specified by the user:
□ Any—(default) The diversity is automatically configured by GMPLS. GMPLS attempts to find
a diverse path, using first end-to-end node diversity, then end-to-end fiber diversity, then
segment fiber diversity (see below for descriptions of these diversity types). GMPLS makes
three attempts to find a route with each diversity type before moving on to attempt to find a
route using the next diversity type.
□ End-to-end node diverse—The working path and protection path do not include any of the
same nodes, except at the head end and tail ends of the service.
Note: The protect path will be physically diverse from the nodes of the working path, but
guaranteed protection is available only for fiber failures.
□ End-to-end fiber diverse—The working path and protection path do not include any of the
same fibers.
□ Segment node diverse—The path is protected against a subset of working path nodes, for
cases where the network topology doesn’t have a single end-to-end node diverse protection
path. (Two or more protect paths are required to cover all work path node risks.) For
segment node diverse protection, the working segment to be protected is configured when
creating the protection SNC.
Note: The protect path will be physically diverse from the nodes of the specified working
path segment, but guaranteed protection is available only for fiber failures.
□ Segment fiber diverse—The path is protected against a subset of working path fibers, for
cases where the network topology doesn’t have a single end-to-end fiber diverse protection
path. (Two or more protect paths are required to cover all work path fiber risks.) For segment
fiber diverse protection, the working segment to be protected is configured when creating the
protection SNC.
□ Custom—For FastSMP paths that are GMPLS-created SNCs, when creating the protection
SNC the user must specify in the Inclusion List field the network resources (nodes, fibers,
channels, TE interfaces, instance IDs, or time slots) that are to be included in the protect
path. Inclusion list is supported for strict route only (the end-to-end path must be provided in
the Inclusion List; path segments are not supported in Inclusion List for FastSMP protect
path SNCs).
■ SNC inclusion list down to timeslot granularity: When the custom diversity option is specified, the
user can specify the desired route down to the time slot granularity. (The user can create a list of
nodes, fibers, channels, TE interfaces, instance IDs, and time slots to be included in the protection
route.)
■ Pre-calculated protection path for faster protection: All protection paths are pre-calculated or
provisioned before a failure occurs. New protection paths can be added based on user requests or
network conditions (such as when all existing protection paths are impacted or in use due to
concurrent network failures). For performance reasons, an overbooking ratio of 10 is recommended: a network
resource (e.g., link bandwidth) can be provisioned as a protection resource for up to a maximum of
10 times the actual available bandwidth. For example, if a link has 500Gbps of capacity, that link
should be provisioned for no more than 5Tbps (10x500Gbps) of total protection bandwidth for
FastSMP.
■ Support for revertive protection: FastSMP supports auto-revertive protection, wherein traffic is
switched back to the working route once the working route comes back into service and the wait to
restore (WTR) timer expires.
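The "Any" diversity option described in the list above can be sketched as a nested retry loop. This is an illustration only: find_route is a hypothetical path-computation callback; the diversity order and the three-attempts-per-type behavior come from the text.

```python
# Sketch of the "Any" SNC diversity option: GMPLS tries end-to-end node
# diversity first, then end-to-end fiber diversity, then segment fiber
# diversity, making three attempts per type before moving on.

ANY_DIVERSITY_ORDER = ("end_to_end_node", "end_to_end_fiber", "segment_fiber")

def find_any_diverse_route(find_route, attempts_per_type=3):
    for diversity in ANY_DIVERSITY_ORDER:
        for _ in range(attempts_per_type):
            route = find_route(diversity)
            if route is not None:
                return diversity, route
    return None, None
```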
FastSMP Operations
FastSMP switch operations result in switchover at both the head end and tail end to the same path. For
revertive FastSMP protection groups, clearing pending switch commands results in immediate
reversion to the default work path when it is healthy.
The following automatic operations are supported for FastSMP:
■ Network and service state—A network or service-generated fault (e.g., fiber cut, equipment failures,
OLOS, etc.) is an automatic switching request based on the quality or state of the service, and on
the state of the path terminating the service. FastSMP protection applies to both unidirectional and
bidirectional failures.
■ Wait to restore (WTR) request—A wait to restore request is a system-generated request issued
when a work path failure clears and the FastSMP protection group is provisioned as revertive. The
WTR request uses a provisionable timer that begins counting when the work path heals; traffic is
reverted back to the work path upon expiration of the timer. FastSMP protection groups configured
for revertive protection will automatically revert the service back to its original path after the wait to
restore (WTR) timer expires. The WTR request and associated timer are initialized and begin
counting when all higher priority network or system requests are cleared. The WTR timer is
provisionable between 5 and 120 minutes (in 1-minute increments, with a default of 5 minutes).
Clearing of user-initiated requests (manual and lockout) does not initiate a WTR request.
The following user-initiated operations are supported for FastSMP and are described in the following
sections:
■ Lockouts on page 4-161
■ Manual Switch on page 4-161
■ Forced Switch on page 4-162
Note: User-initiated operations are performed at the head-end of the FastSMP service.
Note: The priorities for FastSMP operations are described in Switch Request Priorities on page 4-162.
Lockouts
The following user-initiated lockout operations are supported for FastSMP:
■ Lockout of protect—Applied to the FastSMP protection group, this command prevents the protect
path from becoming active under all circumstances (the user must specify which protection unit
within the protection group is to be locked out). Multiple protection paths can be locked out
simultaneously.
■ Lockout of working—Applied to the FastSMP protection group, this command prevents traffic from
being switched from the working path under all circumstances. If the working path incurs any faults,
traffic will not be switched to a protection path.
■ Clear lockout—Applied to the FastSMP protection group, this command clears any existing lockout
operations on the protection group (either lockout of protect or lockout of working). A user-
initiated Clear command removes lockout switching requests; however, network-, service-, or
equipment-generated switching requests are not cleared by the Clear command.
Note the following for lockout requests:
■ Lockout requests are the highest priority user command, so a lockout request is always honored
and will overwrite any previous command in effect for the protection unit.
■ A lockout request raises an alarm on the FastSMP protection unit.
■ If the protect path being locked out by the command is currently active, a protection switch to the
other path shall occur, regardless of the state of the other path (or of the state of the traffic being
carried by the path). After the lockout-induced switch, traffic cannot be moved back to the locked-
out path until the lockout command is cleared.
■ If a failure occurs on the protect path while a lockout of working is in effect, traffic cannot switch to
any of the available configured protect paths until the lockout is cleared. Conversely, if a failure
occurs on the working circuit while a lockout of protect is in effect, traffic cannot switch to the
protect circuit until the lockout is cleared. Both cases can result in loss of traffic.
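The lockout rules above amount to a simple eligibility filter on switch targets. The sketch below is illustrative only (all names are hypothetical): a lockout of working pins traffic to the working path even if it fails, and locked-out protect paths are never eligible targets.

```python
# Hedged sketch of FastSMP lockout semantics: which protect paths traffic
# may switch to, given the lockouts currently in effect.

def eligible_switch_targets(protect_paths, lockout_of_working,
                            locked_out_paths):
    """Protect paths traffic may switch to under the lockouts in effect."""
    if lockout_of_working:
        return []   # traffic may not be switched away from the working path
    return [p for p in protect_paths if p not in locked_out_paths]
```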
Manual Switch
A manual switch is a user-initiated command to switch from the active working path to the specified
protect path. A manual switch results in a protection switch if there are no higher priority requests in effect
on the alternative path.
Note the following for manual switch requests:
■ A manual switch request raises an alarm on the FastSMP protection group.
■ A manual switch is not allowed for switching away from a protect path.
■ A manual switch request is forgotten in the following circumstances:
□ If the manual switch cannot occur at the time of the request, the manual switch request will
be denied and disregarded.
□ If the manual switch succeeds and then a subsequent fault occurs on the specified protection
path, traffic will be automatically switched away from the protection path and the manual
switch request will be disregarded (meaning that traffic will not be switched back to the
protection path once the fault clears).
■ A manual switch request is rejected in the following circumstances:
□ If the specified protect path is in the lockout or forced switch state.
□ If a lockout of work operation is in effect.
□ If the specified protection path is not healthy.
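The rejection rules above can be condensed into one acceptance check. This is a sketch under stated assumptions: the field names and function signature are illustrative; only the three rejection conditions come from the text.

```python
# Sketch of the manual switch acceptance rules: a manual switch is
# rejected if a lockout of work is in effect, if the target protect path
# is in the lockout or forced switch state, or if the target is unhealthy.

def accept_manual_switch(target_path, lockout_of_work_active):
    if lockout_of_work_active:
        return False
    if target_path["state"] in ("lockout", "forced-switch"):
        return False
    return target_path["healthy"]
```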
Forced Switch
A forced switch is a user-initiated command to switch from an active path to a specified protect path.
Note the following for forced switch requests:
■ A forced switch request raises an alarm on the FastSMP protection group (any existing switch
alarm on the protection group will be cleared).
■ If the forced switch cannot be performed at the time of the request, the request is remembered by
the system until the switch can occur, or until a user clears the forced switch request.
■ A forced switch request is rejected in the following circumstances:
□ Back-to-back forced switches are not allowed.
□ The specified protection path is currently in lockout state.
□ A lockout of work request is currently in effect.
■ In the following cases, a forced switch request won't complete until the protection path becomes
available:
□ The specified protection path is not healthy.
□ There is a failure on the protection path or if the XGCC0 control channel is not available.
□ The path is currently in use by a high priority circuit.
□ The protection path contains a link with an OTUki TTI mismatch.
□ One of the nodes along the protection path is performing an upgrade of the Fast Control
Plane (FCP).
3. Forced switch
4. Signal Fail on working (SF-W)
5. Manual switch
6. Wait to restore request
Table 4-12 Alarms and Events for FastSMP Switching Operations (continued)
Condition | Switch Trigger | Description | Alarm/Event | Default Severity
MANWKSWPR | Manual Switch | Raised against FastSMP protection group for user-initiated manual switch when traffic switches to protection path, if protection path is healthy. | Event | Not Alarmed (NA)
MANWKSWBK | Manual Switch | Raised against FastSMP protection group for user-initiated manual switch when traffic switches to working path, if working path is healthy. | Event | Not Alarmed (NA)
In addition to FastSMP over point to point SLTE links, FastSMP is also supported for Optical Express
over SLTE links.
Figure 4-111 FastSMP over FlexILS SLTE Links (with Optical Express)
In this configuration, GMPLS is disabled, so the user must manually configure the fiber AIDs in the SLTE
Optical Express as a shared risk resource group (SRRG) for FastSMP; see Manually Configured Shared
Risk Resource Group (SRRG) on page 4-165.
By default, O-SNCP protection groups are non-revertive, meaning that if a fault on the working path
causes a protection switch to the protect path, traffic will not automatically revert back to the working path
once the fault on the working path clears. However, the user can configure the O-SNCP protection group
for revertive switching. For revertive protection groups, traffic will be automatically switched back to the
working path once a fault on the working path has cleared and the wait to restore (WTR) period has
elapsed. The WTR period is a soaking time that can be configured from 5 minutes to 2 days (with a
default of 120 minutes). If the OLOS condition clears and OLOS is not detected on the working path for
the WTR period, an O-SNCP protection group configured for auto-reversion will revert back to the working
path. If OLOS is detected before the WTR period is complete, the WTR timer is reset and the soak period
will not begin again until the next time the working path’s OLOS condition clears. Once the WTR is
configured, the OPSM carries out the configured WTR behavior irrespective of the controller card
availability. If the user changes the WTR value while the WTR timer is already running, the new WTR
value takes effect immediately. If the new WTR value is less than the WTR time already elapsed, the WTR
times out immediately and the traffic reverts to the working path.
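The WTR soak behavior described above can be sketched as a small state model. The class and method names below are hypothetical illustrations, not IQ NOS code; time is advanced explicitly in minutes for clarity.

```python
class WtrTimer:
    """Sketch of O-SNCP wait-to-restore (WTR) soaking behavior."""

    def __init__(self, wtr_minutes=120):  # default 120 min; range 5 min to 2 days
        self.wtr_minutes = wtr_minutes
        self.elapsed = 0      # minutes soaked so far
        self.running = False

    def olos_cleared(self):
        """Working-path OLOS cleared: start (or restart) the soak period."""
        self.running = True
        self.elapsed = 0

    def olos_detected(self):
        """OLOS re-detected before expiry: reset; soaking restarts only at
        the next time the working path's OLOS condition clears."""
        self.running = False
        self.elapsed = 0

    def tick(self, minutes):
        """Advance time; return True when traffic should revert to working."""
        if not self.running:
            return False
        self.elapsed += minutes
        return self.elapsed >= self.wtr_minutes

    def set_wtr(self, new_minutes):
        """A new WTR value takes effect immediately; if the elapsed soak
        already exceeds it, the timer expires at the next check."""
        self.wtr_minutes = new_minutes
```

For example, a timer configured for 10 minutes does not signal reversion after 5 minutes of soak, but does once the full period elapses; lowering the WTR below the already-elapsed soak time causes immediate expiry.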
The OPSM optical switches support both automatic and manual protection switching, and lockout of
working and protection:
■ Automatic protection switching—If the active line port detects OLOS failure and if the standby port
is clear of both OLOS failure and of lockout from the management interfaces, the OPSM will
automatically switch to the standby port.
■ Manual switching—Each optical switch on the OPSM modules supports manual switch operations
initiated by the user. A manual switch causes the OPSM to switch from the active port to the
standby port. If the standby port has been locked out or has an OLOS fault, the manual switch
operation is rejected.
■ Lockout of working—Prevents traffic from switching to the working port. If the working line is
currently the active route, traffic will be switched to the protect port. The traffic does not switch back
to the locked-out path even if the other leg has OLOS. On clearing of the lockout, the traffic will
auto-switch back to the other path if the active path has OLOS. Lockout has a higher priority than
automatic switching or manual switching.
■ Lockout of protection—Prevents traffic from switching to the protect port. If the protect line is
currently the active route, traffic will be switched to the working port. The traffic does not switch
back to the locked out path even if the other leg has OLOS. Lockout has a higher priority than
automatic switching or manual switching.
In addition, the OPSM supports latching: Even if the module loses electrical power, each optical switch
will remain latched in its current (active) position and continue to allow optical power to pass through as a
passive device. In this case, the OPSM rejects any switch requests until the module is back online.
Figure 4-112 Multi-layer Recovery for Revertive PG with Revertive Restorable SNCs
1. If a fault triggers a protection switch on the SNC1 working path, an automatic protection switch routes
traffic to SNC2.
2. The faulted SNC1 is restored to the RestoredRoute. At this point, the PG WTR shall not be started as
the working SNC1 is still on the RestoredRoute. If there is a fault on SNC2, traffic is switched back to
SNC1 on the RestoredRoute. Otherwise, if there is no fault on SNC2, the DTN waits for the fault to
clear on SNC1’s WorkingRoute.
3. The fault is cleared on the WorkingRoute of SNC1. WTR is started for the restorable SNC1 for auto-
reversion. At the expiry of WTR, SNC1 is successfully reverted back to the WorkingRoute.
4. As soon as SNC1 is restored back to its WorkingRoute, the WTR on the protection group is started.
Upon the expiry of the protection group WTR timer, traffic is reverted back to its original working path
of SNC1.
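The ordering of events in the four steps above can be modeled as a simple event-to-action trace. The function and event names below are hypothetical, chosen only to mirror the step descriptions; this is not IQ NOS code.

```python
def revertive_pg_with_revertive_snc(events):
    """Sketch of the multi-layer recovery ordering for a revertive PG with a
    revertive restorable SNC (hypothetical helper). Returns the recovery
    actions in the order they occur."""
    actions = []
    on_protect = False
    for ev in events:
        if ev == "working_fault":
            # Step 1: fault on SNC1 working path triggers a protection switch.
            on_protect = True
            actions.append("switch traffic to SNC2 (protect)")
        elif ev == "snc1_restored":
            # Step 2: SNC1 restored, but still on its RestoredRoute, so the
            # protection group WTR is NOT started yet.
            actions.append("SNC1 on RestoredRoute; PG WTR not started")
        elif ev == "working_route_fault_cleared":
            # Step 3: the WorkingRoute fault clears; the SNC WTR starts.
            actions.append("start SNC1 WTR (auto-reversion)")
        elif ev == "snc_wtr_expired":
            # Step 3/4 boundary: SNC reverts, then the PG WTR starts.
            actions.append("revert SNC1 to WorkingRoute; start PG WTR")
        elif ev == "pg_wtr_expired" and on_protect:
            # Step 4: PG WTR expiry reverts traffic to the working path.
            on_protect = False
            actions.append("revert traffic to SNC1 (working)")
    return actions
```

The key point the sketch captures is that the two WTR timers run sequentially: the protection group WTR starts only after the SNC has reverted to its WorkingRoute.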
Figure 4-113: Multi-layer Recovery with Revertive PG with Non-revertive Restorable SNC on page 4-168
shows how multi-layer recovery works to protect traffic in the case of a revertive 2 Port D-SNCP PG
deployed with a restorable SNC as the Working PU with no automatic reversion.
Figure 4-113 Multi-layer Recovery with Revertive PG with Non-revertive Restorable SNC
1. If a fault triggers a protection switch on the SNC1 working path, an automatic protection switch routes
traffic to SNC2.
2. The faulted SNC1 is restored to the RestoredRoute. At this point, the PG WTR is started.
3. Upon the expiry of the protection group WTR timer, traffic is reverted back to SNC1 using SNC1’s
RestoredRoute.
Figure 4-114: 1 Port D-SNCP with Restorable SNCs on page 4-168 shows how multi-layer recovery
works to protect traffic in the case of a 1 Port D-SNCP deployed with restorable SNCs.
1. If a fault triggers a protection switch on the working path of the SNC Working route, an automatic
protection switch routes traffic to the SNC Protect route.
2. The faulted SNC Work is restored to the restored path of the SNC Work, and the original working path
of the SNC Work is deleted as part of GMPLS restoration. At this point, the PG WTR is started.
3. Upon the expiry of the protection group WTR timer, traffic is reverted back to the newly-routed SNC
Work restored path.
The following guidelines apply to multi-layer recovery:
■ GMPLS restoration is not supported on the SNC corresponding to the Protect PU of the protection
group.
■ GMPLS restorable SNCs with auto reversion are supported only for 2 Port D-SNCP protection.
■ For 2 Port D-SNCP, GMPLS restoration/reversion is supported only on the SNC corresponding to
the Working PU of the D-SNCP protection group.
■ For 1 Port D-SNCP, GMPLS restoration is supported only on the SNC corresponding to the
Working PU of the D-SNCP protection group. Reversion is not supported for 1 Port D-SNCP; if
there is a revertive restorable SNC already provisioned, it cannot be included in the 1 Port D-SNCP
for multi-layer recovery.
■ Independent WTR timers can be configured on the 2 Port D-SNCP protection group and the
revertive SNC.
■ Multi-layer recovery is configurable on all the client interfaces. However, it is not supported for
Layer 1 OPN applications.
■ Manual switch operation is supported for both Working and Protect PUs for 1 Port D-SNCP.
■ The switch time is not affected by the multi-layer recovery schemes; switching is completed
within 50ms.
■ For D-SNCP using the TOM-40G-SR4, TOM-100G-L10X, TOM-100G-S10X, or TOM-100G-SR10,
if the tributary disable action is set to Laser Off, protection switch times can exceed 50ms. For
these 100GbE or 40GbE TOMs, it is recommended to set the tributary disable action to Insert Idle
Signal. (See Tributary Disable Action on page 3-41.)
Note: In case of XT-3600 with power saving mode enabled, DC-YCP cannot be configured
between the turned off ports.
In the following figure, chassis A and B, when paired, can have DC-YCP between them; similarly for
chassis C and D. One chassis can be paired with only one other chassis, with any combination of
node controller or shelf controller supported. For example, if one node has an XTC as node controller
with an XT-3600 shelf controller, and another node has an XT-3600 as node controller and shelf
controller, the XT-3600 chassis across the two nodes can be paired and DC-YCP can be
configured between them.
For DC-YCP1, Chassis A:P1 is configured as working and Chassis B:P1 as protect. Similarly, for
DC-YCP2, Chassis B:P3 is configured as working and Chassis A:P2 as protect.
Figure 4-115 Configuration showing DC-YCP between any two ports of the paired chassis
■ Supports any chassis pairing on a multi-chassis node. This means that the system provides
flexibility to pair any XT-3600 chassis with another XT-3600 chassis within the same multi-chassis
node.
■ Supports provisioning DC-YCP across Hybrid multi-chassis. This means that the system supports
the DC-YCP feature across different types of chassis in a hybrid multi-chassis node. The DC-YCP
can be configured between any of the CX-10E and CX-100E chassis belonging to the same multi-
chassis node. However, the client types/payload should be the same for the two PUs belonging to
different chassis.
Note: The direction of protection switching (PSDIRN in TL1) is not indicated for XT-3600.
The following sequence of events/actions is triggered when a client failure is detected and protection
switching is initiated.
1. The transmit lasers on chassis A towards CPE1 and on chassis X towards CPE2 are ON. A client
failure such as LOSYNC is detected on chassis A.
2. Upon detection of LOSYNC, the system initiates a protection switch on the DC-YCP between chassis A
and chassis B. The transmit laser on chassis B is turned ON towards CPE1 and the transmit laser on
chassis A is shut down.
3. Protection switching takes place when the FACRXPSTRIG (in TL1) or the Protection Switch for Client
Rx fault (in GNM/DNA) parameter is enabled, which is the default. This parameter can be disabled by
the user, in which case protection switching for client-side faults does not take place.
The following sequence of events/actions is triggered when a bidirectional network failure is detected
and protection switching is initiated.
1. The transmit lasers on chassis A towards CPE1 and on chassis X towards CPE2 are ON. A
bidirectional failure (fiber cut) is detected on the datapath between chassis A and chassis X.
2. An OLOS condition is detected on chassis A and chassis X.
3. Upon detection of the OLOS condition, the system initiates a protection switch on the DC-YCP at each
end. The transmit laser on chassis B is turned ON towards CPE1 and the transmit laser on chassis A
is shut down.
4. Similarly, the system performs a protection switch on DC-YCP between chassis X and chassis Y. The
transmit laser on chassis Y towards CPE2 is turned ON and the transmit laser on chassis X is
shutdown.
The following sequence of events/actions is triggered when a unidirectional network failure is detected
and protection switching is initiated.
1. The transmit lasers on chassis A towards CPE1 and on chassis X towards CPE2 are ON. A
unidirectional failure (fiber cut) is detected on the datapath between chassis A and chassis X.
2. An OLOS condition is detected on chassis X.
3. Upon detection of OLOS, the system performs a protection switch on the DC-YCP between chassis X
and chassis Y. The transmit laser on chassis Y towards CPE2 is turned ON and the transmit laser on
chassis X is shut down.
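The three DC-YCP scenarios above (client fault, bidirectional fiber cut, unidirectional fiber cut) all follow the same per-end rule: whichever end detects a fault switches its laser from the working chassis to the protect chassis. A minimal sketch of this rule, with hypothetical names and a simplified two-end model, is:

```python
def dc_ycp_switch(ends, faulted_ends):
    """Sketch of DC-YCP transmit-laser control on fault detection.

    `ends` maps an end name (e.g. "near" for the CPE1 side, "far" for the
    CPE2 side) to a (working, protect) chassis pair; the working chassis
    starts with its transmit laser ON.
    `faulted_ends` is the set of ends at which a fault is detected (a
    client fault such as LOSYNC, or a network OLOS condition).
    Returns, per end, the chassis whose transmit laser is ON afterwards.
    """
    active = {}
    for end, (working, protect) in ends.items():
        if end in faulted_ends:
            # Protection switch: turn ON the protect chassis laser and
            # shut down the working chassis laser at this end.
            active[end] = protect
        else:
            active[end] = working
    return active
```

A client fault at chassis A switches only the near end; a bidirectional fiber cut (OLOS at both A and X) switches both ends; a unidirectional failure (OLOS only at X) switches only the far end.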
IQ NOS provides extensive performance monitoring (PM) to provide early detection of service
degradation before a service outage occurs. The performance monitoring capabilities allow users to
proactively detect problems and correct them before end-user complaints are registered. Performance
monitoring is also needed to ensure contractual Service Level Agreements between the customer and the
end user.
IQ NOS provides performance monitoring functions in compliance with GR-820. The following features
are supported:
Note: Please see the Infinera GNM Performance Management Guide for detailed information on PM
data supported on Infinera nodes.
■ Extensive performance data collection at every node, including optical performance monitoring
data, FEC PM data, native client signal PM data at the tributary ports, Ethernet PM collection for
Ethernet services, and Optical Supervisory Channel (OSC) performance monitoring data.
■ Retrieval of the current and historical 15-minute bins, the current 24-hour bin, and real-time bins
for Regenerator Section - Unavailable Seconds (RS-UAS) in both the receive and transmit
directions. The monitoring of RS-UAS is disabled by default and can be enabled or disabled by the
user for each SDH facility. TCA/TCE are supported.
■ Comprehensive PM data collection functions, including,
□ Real-time PM data collection for real-time troubleshooting (see Real-time PM Data Collection
on page 5-3)
□ Historical PM data collection for service quality trend analysis (see Historical PM Data
Collection on page 5-3)
□ Threshold crossing notifications for early detection of degradation in service quality (see PM
Thresholding on page 5-4)
□ Invalid data flag indicator per managed object per period (see Suspect Interval Marking on
page 5-5)
□ Performance monitoring event logging for troubleshooting (see PM Logging on page 5-6)
■ Flexible PM data reporting and customizing options to meet diverse customers’ needs, including,
□ Automatic and periodic transfer of PM data in CSV format, enabling customers to integrate
with their management applications (see PM Data Export on page 5-5)
□ Customization of PM data collection (see PM Data Configuration on page 5-6)
■ Via the DNA, display of network-wide PM data for any selected circuit (see the DNA documentation
set for more information)
■ Network Latency Measurement for the ODUk path (see DTN-X Network Latency Measurement on
page 5-7)
PM Data Collection
IQ NOS collects digital PM data and optical PM data.
■ For the optical PM data, IQ NOS utilizes gauges to collect the PM data. The gauge attribute type,
as defined in ITU X.721 specification, indicates the current value of the PM parameter and is of
type float. The gauge value may increase or decrease by an arbitrary amount and it does not wrap
around. It is a read-only attribute.
■ For the digital PM data, IQ NOS uses counters to collect the PM data. The counter value is a non-
negative integer that is set to zero at the beginning of every collection interval. The counter size is
selected in such a way that the counter does not roll over within the collection period.
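The gauge/counter distinction above can be illustrated with two minimal classes. The names are hypothetical, chosen only to mirror the text; real PM collection sits in the embedded software, not in a script like this.

```python
class Gauge:
    """Optical PM gauge in the ITU X.721 sense: holds the current float
    value, may rise or fall by an arbitrary amount, never wraps around,
    and is read-only from the management side."""

    def __init__(self):
        self._value = 0.0

    def record(self, value):
        # Internal measurement update (hypothetical method).
        self._value = float(value)

    @property
    def value(self):
        return self._value


class IntervalCounter:
    """Digital PM counter: a non-negative integer reset to zero at the
    start of every collection interval, sized so it cannot roll over
    within the interval."""

    def __init__(self):
        self.count = 0

    def increment(self, errors=1):
        self.count += errors

    def start_new_interval(self):
        """Close the interval: return its final count and reset to zero."""
        completed = self.count
        self.count = 0
        return completed
```

The gauge simply tracks the latest measured value (for example, received optical power), while the counter accumulates events (for example, errored seconds) and is binned per interval.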
PM Thresholding
PM thresholding provides an early detection of faults before significant effects are felt by the end users.
Degradation of service can be detected by monitoring error rates. Threshold mechanisms on counters
and gauges allow the monitoring of such trends to provide a warning to users when the monitored value
exceeds, or is outside the range of, the configured thresholds.
IQ NOS supports thresholding for both optical PM gauges and digital PM counters. During the PM period,
if the current value of a performance monitoring parameter reaches or exceeds the corresponding configured
threshold value, threshold crossing notifications are sent to the management applications.
■ Optical PM Thresholding
IQ NOS performs thresholding on some optical PM parameters by utilizing high and low threshold
values. Note that the thresholds are configurable for some PM parameters; for others, the system
utilizes pre-defined threshold values. An alarm is reported when the measured value of an optical
PM parameter is outside the range of its configured threshold values. The alarms are automatically
cleared by IQ NOS when the recorded value of the optical PM parameter is within the acceptable
range.
■ Digital PM Thresholding
IQ NOS performs thresholding on some digital PM data utilizing high threshold values which are
user configurable. The Threshold Crossing Alert (TCA) is reported when a PM counter, within a
collection period, exceeds the corresponding threshold value. When a threshold is crossed, IQ
NOS continues to count the errors during that accumulation period. TCAs are transient in nature
and are reported as events which are logged in the event log as described in Event Log on page 2-26.
The TCAs do not have corresponding clearing events since the PM counter is reset at the
beginning of each period.
Note that PM thresholding is supported for some of the PM parameters, but not for all.
When a PM threshold value is modified, the new threshold will be used for generating associated TCAs in
the next complete PM interval. The current PM interval will not use the new threshold. This means that:
■ If TCA reporting is enabled after a PM threshold is modified to a value lower than the current PM
count, TCAs are not raised in the current PM interval. The new threshold will be used only in the
next complete PM interval.
■ If TCA reporting is enabled before a PM threshold is modified to a value lower than the current PM
count, TCAs are raised in the current PM interval.
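The threshold-modification semantics above can be sketched as a small monitor that keeps an active and a pending threshold. The class and method names are hypothetical, not IQ NOS code; the point is only that a modified threshold is applied from the next complete interval.

```python
class TcaMonitor:
    """Sketch of TCA threshold-change semantics: a modified threshold
    takes effect only from the next complete PM interval; the interval
    in progress keeps the threshold it started with."""

    def __init__(self, threshold):
        self.active_threshold = threshold    # used in the current interval
        self.pending_threshold = threshold   # used from the next interval
        self.count = 0

    def set_threshold(self, value):
        # The current interval is unaffected by the change.
        self.pending_threshold = value

    def add_errors(self, n):
        """Accumulate errors; return True if a TCA is raised now.
        Counting continues even after the threshold is crossed."""
        self.count += n
        return self.count >= self.active_threshold

    def new_interval(self):
        """The counter resets each interval, so TCAs need no clearing
        event; the pending threshold becomes active."""
        self.active_threshold = self.pending_threshold
        self.count = 0
```

For example, lowering the threshold from 100 to 10 while the interval count is already 50 does not raise a TCA in that interval; the new threshold applies only once the next interval begins.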
alarm threshold value. The default OLOS alarm threshold for the ORM-CXH1 is -13dBm. For all other
ORMs, the threshold is -12dBm.
Note: Warm reset, cold reset, or switchover of a controller module does not mark the PM as invalid
since the other modules continue to collect the PM, and since the controller module collects the PM
from the other modules once the controller module reset is complete.
■ The period of PM data accumulation changes by +/-10 seconds (e.g., the user changes the date
and/or time during the period).
■ Loss of PM data due to system restart or hardware failure.
PM Data Export
Users can export PM data, manually or periodically, in CSV-format flat files to a user-specified external
FTP server. Users can use these flat files to integrate PM data analysis into their management
applications or simply view the PM data through spreadsheet applications. For the PM data flat file
format, see the DNA documentation set.
Users can schedule the TOD (time of day) at which the network element automatically transfers the PM
data to the user-specified server. Users can configure primary and secondary server addresses. If the
data transfer to the primary server fails, the PM data is transferred to the secondary FTP server.
Alternatively, Infinera nodes can be configured to transfer PM data files simultaneously to both the
primary and secondary FTP servers. (Simultaneous transfer requires that both servers are configured
correctly.)
When a compiled file transfer is initiated by the user, the node will first verify the FTP server configuration
before compiling the file. See Verifying FTP Connectivity for Debug, PM, and DB Backup on page 7-19
for more information.
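The primary/secondary transfer policy above can be sketched as follows. The function signature is hypothetical (`upload(server, data)` stands in for the actual FTP transfer and returns True on success); this is an illustration of the failover logic, not the node's implementation.

```python
def export_pm_data(csv_bytes, primary, secondary, upload, simultaneous=False):
    """Sketch of PM flat-file export with primary/secondary FTP servers.

    In failover mode, the secondary server is tried only if the transfer
    to the primary fails. In simultaneous mode, the file is sent to both
    servers (both must be configured correctly).
    Returns the list of servers that received the file.
    """
    if simultaneous:
        # Transfer to both the primary and secondary FTP servers.
        return [s for s in (primary, secondary) if upload(s, csv_bytes)]
    if upload(primary, csv_bytes):
        return [primary]
    # Primary transfer failed: fall back to the secondary FTP server.
    if upload(secondary, csv_bytes):
        return [secondary]
    return []
```

In practice the node also verifies the FTP server configuration before compiling the file, as noted above; that check is omitted from this sketch.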
PM Data Configuration
IQ NOS allows users to customize PM data collection on the network element. Users can configure PM
data collection through management applications. IQ NOS supports the following configuration options:
■ Reset the current 15-minute and 24-hour counters at any time per managed object.
■ Change the default threshold values according to the customer’s error monitoring needs.
■ Enable or disable the PM threshold crossing alarm and TCA reporting per attribute per managed
object.
■ Set the severity level of TCA notifications.
■ Configure the frequency of PM flat file uploading to the FTP servers as configured.
■ Enable or disable PM data collection per managed object entity.
PM Logging
As described in Event Log on page 2-26, IQ NOS maintains a wrap-around historical event log that tracks
all changes that occur within the system. Following are some PM related events that are logged in the
event buffer:
■ User changes PM thresholds
■ User resets PM counters
■ Threshold crossing alert (TCA) is generated
■ User configures periodic uploading of PM data to the client machine
■ The latency measurement involves filtering (i.e., accepting) the received DM bit for persistency. If
random bit errors corrupt the DM bit, the acceptance time will be longer and will be accounted for
in the overall latency results.
■ Latency values are measured in units of ODU frames (of the appropriate rate). More
specifically, for a specific phase difference between the transmitted and received ODU frames, an
error of as much as two ODU frames is possible.
■ The delay measurement can be inaccurate during periods of errors in the network; large
measurement values are possible.
gRPC PM Telemetry
General Remote Procedure Calls (gRPC) is the management interface used to collect the Telemetry PM
data from a network element. The gRPC telemetry streaming feature provides the network monitoring
functions in which data, such as Performance Monitoring (PM), Alarms or Events, is streamed
continuously from the device at a prescribed interval.
The gRPC transport method uses HTTP/2 bidirectional streaming between the gRPC client (the
collector) and the gRPC server (the device). The device in this case is a network element. A gRPC
session is a single connection from the gRPC client to the gRPC server.
gRPC PM Telemetry is supported on XT-3300 and MTC-6/MTC-9 chassis. The PM reporting time interval
is configurable and PM data is reported and streamed to the subscribed gRPC client at the configured
interval.
By default, gRPC is disabled; it can be enabled in the same manner as the NETCONF, RESTCONF, TL1, and CLI interfaces.
The IQ NOS security and access management features comply with Telcordia GR-815-CORE standard.
The supported features include:
■ User identification to indicate the logged in user or process (see User Identification on page 6-3).
■ User authentication to verify and validate the authenticity of the logged in user (see Authentication
on page 6-4).
■ User access control to prevent intrusion (see Access Control on page 6-5).
■ Resource access control by defining multiple access privileges (see Authorization on page 6-6).
■ Security audit logs to monitor unauthorized activities (see Security Audit Log on page 6-7).
■ Security functions and parameters to implement site-specific security policies (see Security
Administration on page 6-8).
■ Secure Shell (SSH v2) protection of management traffic (see Secure Shell (SSHv2) and Secure
FTP (SFTP) on page 6-9).
■ Secure Copy Protocol for upload/download of PM data, debug files, configuration database, and
software images (see Secure Copy Protocol (SCP) on page 6-11)
■ RADIUS enabled storage of user name and password information in a centralized location (see
Remote Authentication Dial-In User Service (RADIUS) on page 6-12).
■ The Terminal Access Controller Access Control System Plus (TACACS+) is a security protocol
similar to RADIUS which allows remote authentication (see Terminal Access Controller Access-
Control System Plus (TACACS+) on page 6-14).
■ IP Security via the Encapsulating Security Payload (ESP) protocol in order to protect Optical
Supervisory Channel (OSC) control links in an Infinera network (see IP Security over OSC on page
6-15).
■ Media Access Control Security (MACSec) to provide point-to-point security on Ethernet links
between the nodes (see Media Access Control Security (MACSec) on page 6-17).
■ Serial port disabling via management interfaces in order to prevent unauthorized access from the
node site (see Serial Port Disabling on page 6-27).
■ DCN port block for XT(S)-3300 network elements (see DCN Port Block for Layer 3 Traffic on page
6-29).
■ ACLI session disabling to prevent unauthorized access (see ACLI Session Disabling on page 6-
30).
■ Verified software image to prevent systems from booting up with malicious software (see Verified
software image on page 6-31)
■ Signed Images provides integrity and authenticity of Infinera software (see Signed Images on page
6-32)
User Identification
Each network element user is assigned a unique user ID. The user ID is case-sensitive and contains 4 to
10 alphanumeric characters. The user specifies this ID (referred to as user login ID) to log into the
network element.
By default, IQ NOS creates three user accounts with the following user login IDs:
■ secadmin
An account with the security administrator privilege enabled. The default password is Infinera1 and
the user is required to change the password at first login. This user login ID is used for initial login
to the network element.
■ netadmin
An account with the network administrator privilege enabled. The default password is Infinera1 and
the user is required to change the password at first login. Additionally, this account is disabled by
default. It must be enabled by the user with security administrator privilege through the TL1
Interface or GNM. This account is used to turn up the network element.
■ emsadmin
An account with all privileges enabled. The default password is Infinera1. This account is disabled
by default. It must be enabled by the user with security administrator privilege through the TL1
Interface or GNM. The DNA server communicates with the network element using this account,
referred to as the DNA account, when it is started without requiring additional configuration. Users
can create additional DNA accounts which the DNA server can use to connect to the network
element. These accounts must have the DNA access capability enabled during creation.
A single user can open multiple sessions. IQ NOS maintains a list of all current active sessions.
Note: IQ NOS supports a maximum of 30 active user sessions at any given time. All login attempts
beyond 30 sessions will be denied and a warning message is displayed.
Authentication
IQ NOS supports standards-based authentication features. These features ensure that only authorized
users log into the network element through management interfaces. IQ NOS also supports remote and
centralized RADIUS for user authentication (see Remote Authentication Dial-In User Service (RADIUS)
on page 6-12 for more information).
Each time the user logs in, the user must enter a user ID and password. For the initial login, the user
specifies the default password set by the security administrator. The user must then create a new
password based on the following requirements:
■ The password must contain:
□ 8 to 32 alphanumeric characters
□ At least one capital letter
□ At least one numeric character
□ At least one of the following special characters (no other special characters are allowed):
!@#$%^()_+|~{}[]?-
■ The password must not contain:
□ The associated user ID
□ Blank spaces
■ The passwords are case-sensitive and must be entered exactly as specified.
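The rules above can be expressed as a short validation sketch. The function name is hypothetical and the check is illustrative only; the node enforces these rules internally.

```python
import re

# The special characters permitted by the password policy above.
SPECIALS = "!@#$%^()_+|~{}[]?-"
ALLOWED = set(SPECIALS) | set(
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789")

def password_ok(password, user_id):
    """Sketch of the password requirements listed above (illustrative)."""
    if not (8 <= len(password) <= 32):
        return False                      # 8 to 32 characters
    if not re.search(r"[A-Z]", password):
        return False                      # at least one capital letter
    if not re.search(r"[0-9]", password):
        return False                      # at least one numeric character
    if not any(c in SPECIALS for c in password):
        return False                      # at least one permitted special
    if any(c not in ALLOWED for c in password):
        return False                      # no other specials, no blank spaces
    if user_id and user_id in password:
        return False                      # must not contain the user ID
    return True
```

Note that the containment check is written case-sensitively here, since passwords are case-sensitive; whether the user-ID check is case-insensitive on the node is not stated above.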
The password is stored in the network element database in a one-way encrypted form.
Password rotation is implemented to prevent users from re-using the same password. Users are
forced to choose passwords different from previously used passwords. The number of history passwords
stored is configurable.
Infinera nodes support a configurable network element password digest type. Infinera nodes support the
following password digest schemes: MD5, SHA-256, SHA-384, and SHA-512. When the password digest
type is changed, the following will be the behavior observed on the system:
■ All history passwords stored in the configuration database are reset.
■ The default password for all users is reset to the default user password specified by the
admin user at the time the password digest type is configured.
■ All users are prompted upon next login to change the password.
The node will notify all currently logged in users (GNM, DNA, TL1) about the change in the password
digest type via notification; existing sessions are not terminated.
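The configurable digest schemes above map directly onto standard hash algorithms. The sketch below shows one-way hashing under each scheme using Python's `hashlib`; the function name is hypothetical, and a real implementation would also salt the password before hashing.

```python
import hashlib

# Digest schemes supported for the network element password digest type.
DIGESTS = {"MD5": "md5", "SHA-256": "sha256",
           "SHA-384": "sha384", "SHA-512": "sha512"}

def store_password(password, digest_type="SHA-512"):
    """Sketch of one-way password storage under a configurable digest
    type (illustrative; omits salting and other hardening)."""
    h = hashlib.new(DIGESTS[digest_type])
    h.update(password.encode("utf-8"))
    return h.hexdigest()
```

Because the stored value is a one-way digest, changing the digest type invalidates existing hashes, which is why all history passwords are reset and users are prompted to change their passwords at next login.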
Access Control
In addition to user login ID validation and password authentication, IQ NOS supports access control
features to ensure that the session requester is trusted, such as:
■ Detection of unsuccessful user logins. If the number of unsuccessful logins exceeds the configured
number of attempts, the session is terminated and a security event is logged in the security audit
log.
■ User session is automatically terminated when the cable connecting the user computer and the
network element is physically removed. The user must follow the regular login procedure after the
cable is reconnected.
■ The activity of each user session is monitored. If, for a configurable period of time, no data is
exchanged between the user and the network element, the user session is timed-out and the
session is automatically terminated.
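The failed-login and inactivity checks above can be sketched as a small guard object. The class, method names, and default limits below are hypothetical illustrations, not values taken from IQ NOS.

```python
class SessionGuard:
    """Sketch of the access-control checks described above:
    failed-login counting and inactivity timeout."""

    def __init__(self, max_attempts=3, idle_timeout=900):
        self.max_attempts = max_attempts  # configured login-attempt limit
        self.idle_timeout = idle_timeout  # seconds without data exchange
        self.failures = 0
        self.audit_log = []

    def login_failed(self):
        """Record a failed login; return True if the session must be
        terminated because the configured limit was exceeded."""
        self.failures += 1
        if self.failures >= self.max_attempts:
            # A security event is logged in the security audit log.
            self.audit_log.append("login attempt limit exceeded")
            return True
        return False

    def idle_expired(self, idle_seconds):
        """The session is timed out once no data has been exchanged for
        the configured inactivity period."""
        return idle_seconds >= self.idle_timeout
```

Both limits are configurable on the node; the physical-disconnect case above needs no timer, since the session is terminated as soon as the cable is removed.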
Authorization
Multiple access privileges are defined to restrict user access to resources. Each access privilege allows a
specific set of actions to be performed. One or more access privileges are assigned to each user account.
For the description of the managed objects, see Managed Objects on page 3-3.
The levels of access privileges are:
■ Monitoring Access (MA)—Allows the user to monitor the network element; cannot modify anything
on the network element (read-only privilege). The Monitoring Access is provided to all users by
default.
■ Security Administrator (SA)—Allows the user to perform network element security management
and administration related tasks.
■ Network Administrator (NA)—Allows the user to monitor the network element, manage equipment,
turn up the network element, provision services, and administer various network-related functions,
such as Auto-discovery and topology.
■ Network Engineer (NE)—Allows the user to monitor the network element and manage equipment.
■ Provisioning (PR)—Allows the user to monitor the network element, configure facility endpoints,
and provision services.
■ Turn-up and Test (TT)—Allows the user to monitor, turn-up, and troubleshoot the network element
and fix network problems.
■ Restricted Access (RA)—Allows the user to disable Automatic Laser Shutdown (ALS) operation. A
user may not disable the ALS feature unless the user’s account is configured with “Restricted
Access” privileges.
For the specific actions allowed for each access privilege group, refer to the GNM Security Management
Guide or the DTN and DTN-X TL1 User Guide .
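A privilege check against this model reduces to set membership on the account's assigned privileges. The helper names below are hypothetical; the authoritative action-to-privilege mapping is in the guides cited above.

```python
# Access privilege codes defined above.
PRIVILEGES = {"MA", "SA", "NA", "NE", "PR", "TT", "RA"}

def may_disable_als(account_privileges):
    """Only an account holding Restricted Access (RA) may disable the
    Automatic Laser Shutdown (ALS) operation, per the list above."""
    return "RA" in set(account_privileges) & PRIVILEGES

def may_provision(account_privileges):
    """Sketch: both NA and PR allow service provisioning per the
    descriptions above (assumed simplification of the full mapping)."""
    return bool({"NA", "PR"} & set(account_privileges))
```

Because an account can hold several privileges, checks are written as intersections rather than single-role comparisons.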
Security Administration
IQ NOS defines a set of security administration functions and parameters that are used to implement site-
specific policies. Security administration can be performed only by users with security administrator
privilege. The supported features include:
■ View all users currently logged on
■ Disable and enable a user account (this operation is allowed only when the user is not logged on)
■ Modify user account parameters, including access privilege and password expiry time
■ Delete a user account and its attributes, including password
■ Reset any user password to system default password
■ Set the password change policy to allow users to change their own passwords, or to require all
password changes be performed by a security administrator
■ Specify whether or not new users need to change the account password upon first login
■ Monitor security audit logs to detect unauthorized access
■ Monitor the security alarms and events raised by the network element and take appropriate actions
■ Configure system-wide security administration parameters:
□ Default password
□ Inactivity time-out period
□ Maximum number of invalid login attempts allowed
□ Number of history passwords
□ Advisory warning message displayed to the user after successful login to the network
element
■ Perform network-wide user administration, including:
□ View user accounts across the managed network
□ Add new user accounts to multiple nodes, in a single operation
□ Update user account information on multiple nodes, in a single operation
□ View and modify attributes common to multiple user accounts, in a single operation
□ Clone multiple user accounts (with the same privileges, associations and permissions as that
of an existing user account) to one or more network elements, or to all network elements
within an administrative domain
□ Export multiple user account information
□ Import multiple user account information that was previously exported
□ Delete multiple user accounts from one or more network elements, in a single operation
Note: For maximum management traffic protection, configure the network element and DCN ports
behind a firewall.
The IQ NOS implementation of SSHv2 is based on the IETF SSHv2 OpenSSH Toolkit solution. It
provides the following types of communication protection:
■ Data Encryption—Symmetric data encryption is based on the Advanced Encryption Standard
(AES) defined by NIST. A 256 bit key length is supported.
Note: A user with Security Administrator privileges can issue a command via GNM, DNA, and/or TL1
to regenerate SSHv2 keys. When this command is invoked, the node will create new SSH keys. Note
that this command applies to public/private SSH key pairs, and that this command will terminate all
existing SSH sessions, including SFTP sessions and transfers.
■ Data Integrity—The network element supports the Message Authentication Code (MAC) feature of
SSHv2 to ensure data integrity between the management client and the network element. The 256
bit key hmac-sha1 algorithm is supported.
Note: The following SSHv2 Clients are supported by the Infinera node:
■ PuTTY
■ OpenSSH Client
■ F-Secure SSH Client
■ Tera Term Pro
Users with the secadmin privilege can selectively enable SSHv2-based security on a per-node basis for
each of the management interface ports (that is, to protect communications via TL1, Telnet, file transfer,
or XML). By default, enhanced security is not enabled.
Note: The SSH enhanced security feature may be enabled at any time. However, if the enhanced
security flag is updated during run-time, existing sessions continue to function in their earlier mode,
while newly established sessions operate according to the new security setting. You must perform a
warm or cold reboot of the active controller module in order to apply the security changes to existing
sessions.
Network elements functioning as Gateway Network Elements (GNEs) or Subtending Network Elements
(SNEs) also support the SSHv2 enhanced security feature. Traffic passed by a GNE to clients (for
example, traffic coming from SNEs) observes the security settings on the GNE. If necessary, the GNE
performs encryption/decryption on behalf of an SNE.
Note: Infinera network elements and DNA inter-operate with FreeRadius version 1.1.0 and may not
be compatible with RADIUS servers that use vendor-specific attributes.
Note: The user account names and passwords on the RADIUS server(s) must comply with the same
rules and constraints for user names and passwords on the DTN (see User Identification on page 6-3
and Authentication on page 6-4 for the requirements for valid user names and passwords on the
DTN). In addition, all user accounts must have a privilege level of “MA” or higher in order to be
compatible with Infinera nodes (see Authorization on page 6-6 for information on privilege levels).
Note: Prior to Release 19.0, the default value for the IP address of the RADIUS servers was 0.0.0.0.
Starting with Release 19.0, the default IP address is 0.0.0.1 for RADIUS Server1, 0.0.0.2 for RADIUS
Server2, and 0.0.0.3 for RADIUS Server3. If IPv6 is selected, the defaults are 0100::1 for RADIUS
Server1, 0100::2 for RADIUS Server2, and 0100::3 for RADIUS Server3. During an upgrade to
Release 19.0, the previous default of 0.0.0.0 is automatically migrated to the new default values.
An Infinera network element can be configured to authenticate users according to the local settings or via
the configured RADIUS servers. In addition, the network element can be configured to authenticate users
first according to the RADIUS settings, and then according to the local settings on the network element if
no RADIUS server can be contacted. Figure 6-2: Infinera Network with RADIUS on page 6-13 shows an
example Infinera network with redundant RADIUS servers.
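The authentication order described above (RADIUS first, falling back to the local settings only when no RADIUS server can be contacted) can be sketched as follows. This is an illustrative model, not an IQ NOS interface; servers are represented as simple lookup tables, with None standing in for an unreachable server.

```python
def authenticate(user, password, radius_servers, local_accounts):
    """RADIUS-first authentication with local fallback (illustrative).

    radius_servers: list of dicts mapping user -> password, or None for
    an unreachable server. local_accounts: dict of local credentials.
    """
    for server in radius_servers:
        if server is None:                 # server could not be contacted
            continue
        # The first reachable RADIUS server decides; a rejection here
        # does NOT fall back to the local account database.
        return server.get(user) == password
    # No RADIUS server was reachable: authenticate against local settings.
    return local_accounts.get(user) == password
```

Note the key design point from the text: local authentication is consulted only when every configured RADIUS server is unreachable, not when a reachable server rejects the credentials.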
Note: Prior to Release 19.0, the default value for the IP address of the TACACS+ servers was 0.0.0.0.
Starting with Release 19.0, the default IP address is 0.0.0.1 for TACACS+ Server1, 0.0.0.2 for
TACACS+ Server2, and 0.0.0.3 for TACACS+ Server3. If IPv6 is selected, the defaults are 0100::1 for
TACACS+ Server1, 0100::2 for TACACS+ Server2, and 0100::3 for TACACS+ Server3. During an
upgrade to Release 19.0, the previous default of 0.0.0.0 is automatically migrated to the new default
values.
Note: Network management traffic including GNE-SNE traffic, GNM sessions, and TL1 sessions
(including via craft port) can already be protected end-to-end via Secure Shell (SSH; see Secure
Shell (SSHv2) and Secure FTP (SFTP) on page 6-9).
SPI value used for outbound traffic at node A must be the same key and SPI value used for
inbound traffic at node B).
Note: In addition to the ASCII (alphanumeric) values for authentication and encryption keys
supported in previous releases, Infinera nodes also support hexadecimal values for these keys. A
key must be all ASCII characters or all hexadecimal characters (there cannot be a mix of ASCII and
hexadecimal characters in one key), although one key may be of one character type (e.g., ASCII)
and another key of the other character type (e.g., hexadecimal). Hexadecimal keys must be 64
hexadecimal characters. For the TL1 interface, hexadecimal entries must begin with "0x" followed by
the 64 hexadecimal characters.
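The character-type rules above can be checked mechanically. The validator below is illustrative only (it is not Infinera code), and it makes one simplifying assumption: ASCII keys are treated as alphanumeric strings.

```python
import string

HEX_DIGITS = set(string.hexdigits)

def classify_key(key, tl1=False):
    """Return 'hex' or 'ascii' for a valid key, or raise ValueError.

    Hexadecimal keys must be exactly 64 hexadecimal characters; on the
    TL1 interface they must additionally carry a '0x' prefix.
    """
    if tl1 and key.startswith("0x"):
        key = key[2:]
        if len(key) == 64 and all(c in HEX_DIGITS for c in key):
            return "hex"
        raise ValueError("TL1 '0x' keys need exactly 64 hex characters")
    if not tl1 and len(key) == 64 and all(c in HEX_DIGITS for c in key):
        return "hex"
    if key and all(c in string.ascii_letters + string.digits for c in key):
        return "ascii"   # treated as an ASCII (alphanumeric) key
    raise ValueError("key mixes character types or is malformed")
```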
■ IP security is disabled if there are no selectors or SAs created on the link, or if all SAs and
selectors are in the out-of-service state.
To enable IP Security over OSC:
■ Create a selector on each node in the connection. Create a different selector for each type of traffic
to be protected (RSVP, OSPF, and/or ADAPT).
■ On each node, create a security association (SA) for each type of protected traffic (RSVP, OSPF,
and/or ADAPT):
□ For OSPF, create an inbound and outbound SA to/from each adjacent node (one inbound/
outbound SA for every fiber direction). (OSPF SAs are unidirectional.)
□ For RSVP, create an inbound and outbound SA to/from every other node in the signaling
domain. (RSVP SAs are unidirectional.)
□ For ADAPT, create a single SA to each adjacent node. (ADAPT SAs are bidirectional.)
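The directionality rules above (unidirectional OSPF and RSVP SAs, bidirectional ADAPT SAs) determine how many SAs each node needs. The sketch below simply enumerates them for one node; the data structures are illustrative and are not actual provisioning commands.

```python
def required_sas(node, ospf_neighbors, rsvp_domain_nodes):
    """List the security associations one node needs for IPsec over OSC.

    OSPF: one inbound and one outbound SA per adjacent node (per fiber
    direction). RSVP: inbound + outbound SA to every other node in the
    signaling domain. ADAPT: a single bidirectional SA per neighbor.
    """
    sas = []
    for nbr in ospf_neighbors:                       # OSPF SAs are unidirectional
        sas.append(("OSPF", node, nbr, "outbound"))
        sas.append(("OSPF", nbr, node, "inbound"))
    for peer in rsvp_domain_nodes:                   # RSVP SAs are unidirectional
        if peer == node:
            continue
        sas.append(("RSVP", node, peer, "outbound"))
        sas.append(("RSVP", peer, node, "inbound"))
    for nbr in ospf_neighbors:                       # ADAPT SAs are bidirectional
        sas.append(("ADAPT", node, nbr, "bidirectional"))
    return sas
```

For example, a node with one OSPF/ADAPT neighbor in a three-node signaling domain needs 2 OSPF SAs, 4 RSVP SAs, and 1 ADAPT SA.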
■ The Security Tag (SecTAG) encodes various information in the frame including data plane
indicators (such as Association Number) which allows the remote end to use appropriate keys to
decrypt the incoming traffic.
■ The VLAN tags are part of the encrypted data (MSDU).
Figure 6-3 MAC Service Data Unit (MSDU) and MAC Protocol Data Units (MPDU)
MACSec Deployment
MACSec is deployed in point-to-point configurations on the XT-3300. Encryption is performed by the
mapper on both 10GbE and 100GbE clients and is configured on the TribGige object on a per-port basis.
The XT client 10GbE and 100GbE ports can be individually encrypted, subject to user configuration.
Security Associations (SAs) are created between the A-End and Z-End peers by exchanging keys as per
IKE.
The figure below shows a point-to-point XT deployment. Any switch/router connected to the XT client
ports transmits/receives Ethernet frames. Every port capable of performing encryption implements a
Security Entity (SecY). Unidirectional Secure Channels (SCs) provide point-to-point secure
communication and can be persistent/long-lived for as long as the SecY exists. The secure
communication on a Secure Channel is realized through a chain of Secure Associations (SAs). An SA
associates a particular cryptographic key with a Secure Channel. SAs can be statically administered or
dynamically generated/exchanged through protocols such as IKEv2 or IEEE 802.1X. Secure Channels
also persist across any SA changes.
As seen in the figure below, Port X (on Node A) and Port Y (on Node B) each implement/model a
Security Entity. Two unidirectional Secure Channels exist: NodeA-Port X -> NodeB-Port Y and
NodeB-Port Y -> NodeA-Port X. SAs (SAn and SAm) are generated and exchanged between the SecY
instances.
The following figure provides a data plane centric depiction of MACSec. There are two scenarios.
1. The unencrypted Ethernet traffic from the Client router/switch is received by the XT. MACSec
confidentiality and integrity functions are performed between the two XTs (see Sec and ICV added to
the MAC frame).
2. The MACSec encrypted Ethernet traffic is received by the XT. A second MACSec encryption is
performed by the XT. In this case, the incoming client MSDU becomes the MPDU for the XT MACSec.
Figure 6-6 Example scenario for MACSec Encryption and Double SecTAG-ing
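The two scenarios above can be illustrated structurally: an MSDU is wrapped into an MPDU (SecTAG + payload + ICV), and an already-MACSec-encrypted client frame is simply treated as the MSDU for a second wrap. This is a framing sketch only; the "ICV" below is an HMAC placeholder and the payload is not actually encrypted, unlike real 802.1AE AES-GCM processing.

```python
import hashlib
import hmac

def macsec_wrap(msdu: bytes, key: bytes, assoc_num: int) -> bytes:
    """Structural sketch only: SecTAG + payload + ICV.

    A real SecY encrypts the MSDU with AES-GCM; here the payload is
    left in the clear and the ICV is a truncated HMAC, just to show
    how the frame is nested.
    """
    sectag = bytes([0x88, 0xE5, assoc_num])          # EtherType 0x88E5 + AN
    icv = hmac.new(key, sectag + msdu, hashlib.sha256).digest()[:16]
    return sectag + msdu + icv

client_frame = b"\xAA" * 32                          # plain client MSDU
once = macsec_wrap(client_frame, b"k1" * 8, 1)       # scenario 1: XT adds SecTAG + ICV
twice = macsec_wrap(once, b"k2" * 8, 2)              # scenario 2: double SecTAG-ing --
                                                     # the client MPDU becomes the XT's MSDU
```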
Note: It is recommended to configure the NTP server before creating, installing, or performing
operations on the certificate.
Data Encryption
Data encryption on the XT uses the AES-256 cipher suite as per the IEEE 802.1AEbw-2013
specification. AES-GCM is an authenticated encryption with associated data (AEAD) cipher providing
both confidentiality and data origin authentication. AES-GCM is efficient and secure. It allows hardware
implementations that can achieve high speeds with low cost and low latency, as the mode can be
pipelined. Applications that require high data throughput can benefit from these high-speed
implementations. AES-GCM has been specified as a mode that can be used with IPsec ESP and
802.1AE Media Access Control (MAC) Security [IEEE8021AE].
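The AEAD behavior described above (one operation providing both confidentiality and origin authentication) can be exercised with the third-party Python `cryptography` package. This is purely illustrative; the XT performs 802.1AE AES-GCM in hardware, and the use of this particular package is an assumption, not part of the product.

```python
# AES-256-GCM demo using the third-party 'cryptography' package.
# Illustrative only -- the XT implements 802.1AE AES-GCM in hardware.
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, as on the XT
aesgcm = AESGCM(key)
nonce = os.urandom(12)                      # 96-bit nonce
frame = b"client ethernet payload"
aad = b"sectag-fields"                      # authenticated but not encrypted,
                                            # like the SecTAG in a MACSec frame

ciphertext = aesgcm.encrypt(nonce, frame, aad)   # ciphertext || 16-byte tag
recovered = aesgcm.decrypt(nonce, ciphertext, aad)
```

Tampering with either the ciphertext or the associated data makes `decrypt` raise an exception, which is the data-origin-authentication half of the AEAD guarantee.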
Note: If MACSec encryption has to be enabled on all the 10G or 100G ports, it is recommended to
wait for a few seconds before enabling it on the next consecutive port.
Certificate Management
In public key infrastructure (PKI), users of a public key need confidence that the associated private key is
owned by the correct remote entity with which encryption or a digital signature will be established or
used. This is achieved through public key certificates.
A certificate is a data structure that binds public keys to subjects. The binding is asserted by having a
trusted Certificate Authority (CA) digitally sign each certificate. Certificates are typically housed in
repositories, which are systems (or collections of distributed systems) that store certificates and
certificate revocation lists (CRLs) and serve as a means of distributing them to end entities.
X.509 is one commonly used certificate type, standardized by the ITU-T (ISO/IEC) [16]. The standard
has gone through three revisions (v1 in 1988, v2 in 1993, and v3 in 1996), developed by the ITU-T
(ISO/IEC) along with ANSI. The X.509 format also allows for extension attributes in the certificate, which
convey such data as additional subject identification information, key attribute information, policy
information, and certification path constraints.
IKEv2 is used for key exchange and X.509 certificates for authentication, where the CERTs are
exchanged through IKE. There are different classes/categories of X.509 certificates that are stored in the
XT.
■ Personal certificates: The X.509 Certificate that represents a particular XT (local). Every XT that
acts as an IKE peer - Chassis, OCG/SCH, will have one or more X.509 Certificates that are its own,
which it would distribute to other peers participating in the PKI system (or the system of nodes
within which the XT needs to prove its identity and authenticity).
■ Peer certificates: A collection of certificates installed on an XT that represents the identities of all
the peers it expects to communicate with. During authentication, the XT receives certificates from
peers and compares each received certificate with the list of peer certificates that are installed.
This ensures that the peer is one of the peers the XT is supposed to communicate with.
■ CA certificates: A list of X.509 certificates of well known CAs (for example: DigiCert, VeriSign, and
so on) stored on the XT. This is used for the purposes of signature validation if signatures are
present in the CERT that are sent by the peer.
■ Organization/customer CA certificates: These are CA certificates which are owned by the
customers deploying the XT. This could also be Infinera default CA certificate. The primary reason
is to sign certificates that are generated locally on the XT.
X.509 certificates are created outside the XT by the user and then imported and installed on the XT
through DNA, GNM, or CLI. The associated private key is also installed on the XT. The certificate and
private key are transferred to the XT through management interfaces over SSH (or TLS). Once the
certificate is installed on the XT, the system performs local validation (both syntactic and for
correctness). The user can optionally choose to sign the certificate with one of their Root CA
certificates.
The system supports the ability to install and process X.509 certificates. The following are some of the
features supported related to configuration and management of X.509 certificates:
■ Supports X.509 v3 certificates
■ Supports the following X.509 certificate types
□ PKCS#7
□ PKCS#12
Note: If the serial ports are disabled, any session using the ports will be lost.
Note: Disabling the serial port does not block access to commissioning command line interface
(CCLI) during boot-up; disabling the serial port blocks only access to the administrative command line
interface (ACLI) after boot-up (i.e., the port is in the disabled state after boot-up).
Note: If the DCN port is disabled, any session using the port will be lost.
Note: In the case of XTC-2, XTC-2E, MTC-6, MTC-9, XTC-4, XTC-10, OTC, or DTC chassis with
Layer 3 switching capability, Access Control Filters are unable to process (i.e., allow or block) any
packets from the SNE to the GNE's router ID or from the GNE to the SNE's router ID.
Note: If ACLI sessions are disabled, any open ACLI sessions remain active. Any subsequent ACLI
login requests are blocked.
Signed Images
The Signed Images feature provides integrity and authenticity of Infinera software during software
downloads and system boot-up using Infinera Digital Signatures. Images are digitally signed by Infinera
before release, so users can verify both that the software originates from Infinera and that no one has
tampered with it.
The signature verification process starts with the network element computing a hash of the software
or component it wants to verify. The network element also has a copy of the public key (ISK) that
corresponds to the private key with which the signature was generated. The network element decrypts
the signature and checks whether the recovered hash is identical to the hash it computed. A match
indicates that signature verification is successful.
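The hash-then-verify flow above can be sketched with the third-party Python `cryptography` package. The ISK naming follows the text, but everything else here (RSA with PKCS#1 v1.5 padding, the key size, the helper name) is an illustrative assumption, not Infinera's actual signing scheme.

```python
# Sketch of signed-image verification using the third-party
# 'cryptography' package -- illustrative, not Infinera code.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Stand-in for the vendor's signing key pair; the node would hold only
# the public half (the ISK described in the text above).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
isk = private_key.public_key()

image = b"software image bytes"
signature = private_key.sign(image, padding.PKCS1v15(), hashes.SHA256())

def verify_image(image_bytes, sig, public_key):
    """Hash the image and check the signature against it."""
    try:
        public_key.verify(sig, image_bytes, padding.PKCS1v15(), hashes.SHA256())
        return True          # recovered hash matches the computed hash
    except InvalidSignature:
        return False         # image or signature has been tampered with
```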
The Signed image feature is supported on XT(S)-3300 and XT(S)-3600.
IQ NOS provides the following capabilities to manage software and database images on the Infinera
nodes:
■ Downloading Software on page 7-2
■ Maintaining Software on page 7-3
■ Software Image Directory Structure on page 7-7
■ Maintaining the Database on page 7-10
■ Uploading Debug Information on page 7-17
■ Verifying FTP Connectivity for Debug, PM, and DB Backup on page 7-19
Downloading Software
IQ NOS, which operates DTN-Xs, DTNs, Optical Amplifiers, and FlexILS nodes, is packaged into a
single software image. The software image includes the software components required for all the circuit
packs in the Infinera network elements.
Users can remotely download the software image from a user-specified FTP server to the controller(s)
(IMM, XCM, etc.) of one or more network elements within an administrative domain. Once users
download the software image to the controller module and then separately initiate the software upgrade
procedure, the software is automatically distributed to the remaining circuit packs within the chassis.
A network element can store up to two versions of the software image (including the current version) at
the same time.
Note: Earlier versions of IQ NOS supported up to three versions of software on a network element.
When upgrading a network element storing three versions of software, the system will prompt you to
reduce the number of software images residing on the network element.
Software downloads to systems with multiple chassis and/or redundant controllers occur in the following
manner:
■ Redundant controllers only—The software download is restricted to the active controller, after
which the software image is automatically replicated to the standby controller.
■ Multi-chassis only—The software download is restricted to the active controller on the Main
Chassis. Upon initiation of the software upgrade procedure, the software image is distributed to the
remaining controllers in the system.
■ Multi-chassis with redundant controllers—The software download is restricted to the active
controller on the Main Chassis. Once the new software is successfully activated on the Main
Chassis active controller, its image is automatically distributed to the remaining controllers in the
system, including the redundant controller on the Main Chassis.
Users may download software images on a node-by-node basis, or perform bulk download of software
images to multiple network elements within the Infinera Intelligent Transport Network. The bulk download
feature allows for fast and easy distribution of a software image to all the network elements in
administrative domains connected via an OSC.
Maintaining Software
The network elements support in-service software upgrade and reversion. The software upgrade/revert
operation lets users activate a different software version from the one currently active. The following
software operations are supported:
■ Install New Software—This operation lets users activate the new software image version with an
empty database. The software image may be older or newer than the active version.
Note: Do not attempt to reboot the system while it is coming up with an empty database. This may
corrupt the database and cause the controller module to re-boot repeatedly.
■ Upgrade Software—This operation lets users activate the new software image version with the
previously active database. The previously active database version must be compatible or
migratable with the new software image version.
Note: For detailed traffic, FPGA upgrade, and operational effects associated with upgrading to a
specific software image version, refer to the applicable Software Release Notes.
Note: For information on preparing for a software upgrade, see Nodal Software Pre-Upgrade
Verification on page 7-5.
■ Activate Software and Database—This operation lets users activate a different software image and
database version. The image version may be older or newer than the active software image version.
The database version and the software version must be the same to activate the software and
database. Before upgrading the software, the new database image must be downloaded to the
network element.
Note: Before performing software Revert from Release 19.0 to pre-Release 19.0 or fresh installing
pre-Release 19.0 on a Release 19.0 system, remove and/or delete all new Release 19.0 specific
features (equipment and services).
■ Restart Software with Empty Database—This operation lets users activate the current software
image with an empty database.
Note: Do not attempt to reboot the system while it is coming up with an empty database. This may
corrupt the database and cause the controller module to reboot repeatedly.
■ Uncompress Software—This operation lets users uncompress the software image to enable faster
software upgrade.
In general, upgrading the software does not affect existing service. However, if the new software image
version includes a different Firmware/Field Programmable Gate Array (FPGA) version than the one
currently active, it could impact existing services. If this occurs, a warning message is displayed.
Users must upgrade software on a node-by-node basis. Therefore, at any given time, the network
elements within a network may be running at least two software image versions. These different images
must be compatible. In the presence of multiple software versions, the network provides functions that are
common to all the network elements.
The software upgrade procedure executes in the following steps:
1. Verifies that the software and database versions are compatible. If they are not compatible, the
upgrade procedure is not allowed.
2. Validates the uncompressed software image. If the software image is invalid, the upgrade procedure
is not allowed.
3. Decompresses the software image. If there is not enough memory on the network element to store
the decompressed image, software decompression will not occur.
4. Reboots the network element so that the new software image becomes active. If the reboot fails, the
upgrade procedure is aborted and the software image reverts to the previously active software image
version.
5. When the new software image is activated, updates the format of the Event Log and Alarm table
alarms, if necessary.
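The step sequence above, with its abort-and-revert behavior, amounts to a chain of gates in which any failure stops the upgrade. The sketch below models that chain; the step names are taken from the text, but the function itself is illustrative, not IQ NOS code.

```python
def upgrade(new_image, checks):
    """Sketch of the upgrade gate sequence.

    Each gate must pass before the next one runs; any failure aborts
    the procedure and the node stays on (or reverts to) the previously
    active image. 'checks' maps step name -> bool.
    """
    steps = [
        "versions_compatible",           # software/database compatibility
        "image_valid",                   # uncompressed image validation
        "enough_memory_to_decompress",   # room for the decompressed image
        "reboot_ok",                     # activation reboot succeeded
    ]
    for step in steps:
        if not checks[step]:
            return ("aborted", step)     # revert to the previous image
    return ("active", new_image)
```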
Note: When the software is upgraded, the PM historical data is not converted to the new format (if
there is a change in the format) and it is not persisted. Therefore, before you upgrade the software,
you must upload and save the PM data in your local servers.
In general, if the upgrade procedure is aborted, the software reverts to the previously active version. The
procedure reports events and alarms indicating the cause of the failure.
The following list outlines software upgrade behavior on systems with multiple chassis and/or redundant
controllers:
■ Redundant controllers only—The software upgrade is restricted to the active controller. Once the
new software is successfully activated on the Main Chassis active controller its image is
automatically replicated to the standby controller.
■ Multi-chassis only—The software upgrade is restricted to the active controller on the Main Chassis.
Once the new software is successfully activated on the Main Chassis active controller, its image is
automatically distributed to the remaining controllers in the system.
■ Multi-chassis with redundant controllers—The software upgrade is restricted to the active controller
on the Main Chassis. Once the new software is successfully activated on the Main Chassis active
controller, its image is automatically distributed to the remaining controllers in the system, including
the redundant controller on the Main Chassis.
During the upgrade process, communication with the clients and other network elements within the
network is interrupted.
target upgrade release, hence reducing the total nodal upgrade time and avoid unnecessary updates or
issues.
Note: For details on which modules contain FPGAs and the firmware update information for each
release and module, as well as which updates require cold reboots of the module, please refer to the
Release Notes for the specific release.
Critical information about FPGA image upgrades is provided in the Software Release Notes. Specifically,
the Software Release Notes identify:
■ If the release contains any FPGA upgrades, and if so, for what modules
■ The functional changes made by each FPGA upgrade
■ Whether the FPGA upgrade is service impacting
■ If the FPGA upgrade is recommended, required, or optional
When a user performs a software upgrade, all non-service affecting FPGA upgrades are automatically
activated. Service-affecting FPGA upgrades are not activated until the user targets each individual
module with a cold-reboot, or removes/reinserts the module into the chassis. After performing a software
upgrade, users may check for pending FPGA upgrades using one of the following methods before
activating FPGA upgrades on a per-module basis:
■ Equipment Manager tool (in DNA or GNM)
■ RTRV-EQPT TL1 command with the SAFWUPG parameter
This allows users to perform hardware upgrade operations within a planned maintenance and service
disruption window.
Note: If there is an incompatibility between the firmware version on a given module compared to what
the current version of the software can support, no new services may be added to the node. If all
firmware versions are compatible with the current software image version (even if the software image
contains firmware upgrades) then users may use, add, and subtract services indefinitely.
Software Images
Starting with Release 16.2, the software image files required to install IQ NOS software are split based
on the chassis type. That is, every chassis type has its own software image file. The software image file
is downloaded from the FTP server as described below:
■ The software image file is first downloaded for the main chassis.
■ The main chassis then downloads the software image from the FTP server based on the expansion
chassis type. For all subsequent expansion chassis of the same chassis type, the already
downloaded software image is reused.
In Release 18.2, verified software images were implemented for the security of IQ NOS network
elements, to ensure the integrity of the Infinera software that runs on the various platforms. This
prevents systems from booting up with malicious software inserted into the images on an Infinera device.
The verification process can be listed in the following sequence:
■ The software image includes a hash value (a sha256 hash).
■ Management interfaces display the hash of the downloaded image.
■ The user can manually compare the hash value displayed in the management interfaces with the
hash present in the MetaR_<Release_Number>.<Build.Number>.txt.sha256 file in the software
image download directory. If the hashes match, the user can continue with the installation and
upgrade. If the hashes do not match, the user should delete the downloaded image and retry.
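The manual hash comparison can be scripted. The MetaR filename pattern comes from the text above, but the file paths and helper names here are illustrative.

```python
import hashlib

def sha256_of(path):
    """Compute the SHA-256 hash of a downloaded image file in chunks,
    so that large images do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def hashes_match(image_path, metar_hash):
    """Compare the computed hash with the value taken from the
    MetaR_<Release_Number>.<Build.Number>.txt.sha256 file. On a
    mismatch, the downloaded image should be deleted and re-fetched."""
    return sha256_of(image_path) == metar_hash.strip().lower()
```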
In order for the software image to be downloaded, the FTP server must be reachable at all times while
any software maintenance operations are in progress. It is also required that the software image files
are stored in a defined directory on the FTP server, as described below. Starting with R19.0, the chassis
software image includes a tar ball which contains individual Field Replaceable Unit (FRU) based tar
balls for some FRUs and a tar ball for the controller card and the remaining FRUs supported on that
chassis.
In Release 19.0.2, signed software image is implemented in XT(S)-3300/XT(S)-3600 network elements to
ensure integrity and authenticity of Infinera software during software downloads and system boot up
using Infinera Digital Signatures.
The verification process for a signed image can be listed in the following sequence:
■ The signed software includes a hash value (a sha256 hash).
■ Management interfaces display the hash of the MetaR file.
■ The MetaR file contains the hashes of all the image types, and the network element software
internally verifies the hash of the downloaded image against the hash in the MetaR file.
■ The user can manually compare the hash value displayed in the management interfaces with the
hash present in the MetaR_<Release_Number>.<Build.Number>.txt.sha256 file in the software
image download directory. If the hashes match, the user can continue with the installation and
upgrade. If the hashes do not match, the user has to delete the downloaded software image and
retry.
Note: Software downgrade from a release supporting FRU-based images (R19.0 and later) to a
release that does not support FRU-based images (releases prior to R19.0) results in an unstable
system. Ensure that the FTP server contains the software for both the "From" IQ NOS version and
the "To" IQ NOS version.
Where <Rel No> is the IQ NOS software release number (for example, 19.0), and yyyy is the build
number (for example, 0611). The MetaR file contains the hash information for all software images. The
value of the hash in the MetaR_<Rel No>.yyyy.txt.sha256 file is to be compared with the hash of the
downloaded software image displayed in the management interfaces.
In both modes, the current active database is backed up, not any previously saved database files.
In the case of a multi-chassis system, a database backup operation is restricted to the active
controller module on the Main Chassis. For a system with redundant controllers, a database
backup operation is restricted to the active controller module.
The database file that has been backed up contains:
■ Database file, which includes configuration information stored in the persistent memory on the
network element.
■ Alarm table stored in the persistent memory of the network element.
■ Event Log stored in the persistent memory of the network element.
Infinera nodes can be configured to transfer database backup files simultaneously to both the
primary and secondary FTP servers. (Simultaneous transfer requires that both servers are
configured correctly.)
When a compiled file transfer is initiated by the user, the node will first verify the FTP server
configuration before compiling the file. See Verifying FTP Connectivity for Debug, PM, and DB
Backup on page 7-19 for more information.
If the restore operation fails, the software rolls back to the previously active database image and an alarm
is raised indicating the failure of the restore operation. When the database is successfully restored, the
alarm is cleared. Users can manually restore the database.
Note: For FlexILS nodes, database restoration is supported only when the active IMM controller
module is in the primary IMM slot (slot 9 of the MTC-9 chassis or slot 6 of the MTC-6). If the active
IMM is in the redundant IMM slot, the user must first do a switchover, thereby making the IMM in the
primary IMM slot the active controller module and the IMM in the redundant IMM slot the standby
controller module. Once the IMM in the primary IMM slot is made the active controller module, the
database can be restored.
Depending on the differences between the two databases, the database restore operation could affect
service. The database restoration procedure:
■ Restores the configuration data as per the restored database. The configuration data in the
restored database may differ from the current hardware configuration. In such scenarios, in
general, the configuration data takes precedence over the hardware.
Note: When restoring a database on a node that currently has a 2 Port D-SNCP service (with
Y-cable fibers connected to the work and protect tributaries), if the database restoration will
change the 2 Port D-SNCP service to non-protected on the node, it is recommended that the
protect leg fiber be removed before the database is restored on the node.
■ Restores the alarms in the Alarm table by verifying the current alarm condition status. For example,
if there is an alarm entry in the restored Alarm table but the condition is cleared, that alarm is
cleared from the current Alarm table. On the other hand, if the alarm condition still exists, the
corresponding alarm entry is stored in the current Alarm table with the original time stamp.
The database image can be restored at system reboot time or at any time during normal operation.
The following list outlines database restoration behavior on systems with multiple chassis and/or
redundant controllers:
■ Redundant controllers only—The database is first restored on the active controller, and from there,
automatically synchronized to the standby controller.
■ Multi-chassis only—The database restore operation is restricted to the active controller on the Main
Chassis.
■ Multi-chassis with redundant controllers—The database restore operation is restricted to the active
controller on the Main Chassis. From there, it is automatically synchronized to the standby
controller on the Main Chassis only.
The following describes some scenarios in which the configuration data in the restored database
differs from the current hardware configuration, and how each scenario is handled:
■ Scenario 1: The restored database contains a managed equipment object, but there is no
corresponding hardware present in the chassis. In this scenario, the corresponding equipment is
considered to be pre-configured (refer to Equipment Pre-configuration on page 3-35).
For example, consider the following sequence of operations:
□ Backup database
□ Remove a circuit pack from the chassis
□ Restore the previously backed up database.
After the database restoration, the removed circuit pack is pre-configured.
■ Scenario 2: If the restored database does not contain a managed equipment object, but the
hardware is present in the network element, the managed equipment object is created in the
database as in equipment auto-configuration (refer to Equipment Auto-configuration on page 3-35).
For example, consider the following sequence of operations:
□ Backup database
□ Install a new circuit pack
□ Restore the previously backed up database.
In this case, after database restoration, the newly inserted circuit pack is auto-configured.
■ Scenario 3: If the managed equipment object exists in the database and the corresponding
hardware equipment is present in the network element, but there is a configuration mismatch, an
equipment mismatch alarm is reported and the operational state of the equipment is changed to
out-of-service (see Operational State on page 3-39).
■ Scenario 4: If the restored database contains manual cross-connect configuration information but
there is no cross-connect configured in the hardware, then IQ NOS provisions the corresponding
manual cross-connect (provided the required data path resources exist) according to the
configuration information in the restored database.
For example, consider the following sequence of operations:
□ Backup the database
□ Delete a manual cross-connect
□ Restore the database
In this case, the manual cross-connect that was deleted after the database backup is recreated.
■ Scenario 5: If the restored database does not contain a manual cross-connect configuration, but a
manual cross-connect is provisioned in the hardware, then the manual cross-connect is torn down
(deleted) as per the configuration information in the restored database.
For example, consider the following sequence of operations:
□ Backup the database
□ Create a manual cross-connect
□ Restore the database
In this scenario, the manual cross-connect that was created after the database backup is deleted.
■ Scenario 6: If the restored database does not contain SNC configuration information, but an SNC is
provisioned in the hardware, then the SNC is torn down (released) by releasing the signaled cross-
connects (see GMPLS Signaled Subnetwork Connections (SNCs) on page 4-10) along the SNC
path. However, it takes approximately 45 minutes to release the signaled cross-connects. Note that
the SNC configuration information is stored on the source node only. The intermediate nodes
contain only the signaled cross-connects.
For example, consider an SNC that spans three nodes: Node A, Node B and Node C and Node A
is the source node. Consider the following sequence of operations:
□ Backup the database on Node A
□ Create an SNC from Node A to Node C passing through Node B which results in
corresponding signaled cross-connects being created on Node B and Node C
□ Restore the database on Node A
In this case, the restored database on Node A does not contain the SNC configuration information.
However, Node B and Node C have signaled cross-connects, which are released after approximately 45
minutes to match the restored database on Node A.
Consider the following sequence of operations for the same network configuration as in the previous
example:
□ Backup the database on Node B
□ Create an SNC from Node A to Node C passing through Node B which results in
corresponding signaled cross-connects being created on Node B and Node C
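The restore-time equipment reconciliation described in Scenarios 1 through 3 can be sketched as follows. This is a conceptual illustration only; the function and state names are assumptions, not IQ NOS code.

```python
def reconcile_equipment(restored_db, installed_hw):
    """Decide per-slot handling after a database restore.

    restored_db / installed_hw: dicts mapping slot -> equipment type.
    Returns a dict mapping slot -> resulting state (illustrative labels).
    """
    decisions = {}
    for slot in restored_db.keys() | installed_hw.keys():
        if slot in restored_db and slot not in installed_hw:
            decisions[slot] = "pre-configured"       # Scenario 1: DB object, no hardware
        elif slot not in restored_db and slot in installed_hw:
            decisions[slot] = "auto-configured"      # Scenario 2: hardware, no DB object
        elif restored_db[slot] != installed_hw[slot]:
            decisions[slot] = "mismatch-alarm"       # Scenario 3: configuration mismatch
        else:
            decisions[slot] = "in-service"           # configurations match
    return decisions
```

In Scenario 3 the equipment would also be placed out-of-service, as described above; the sketch records only the decision label.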
Note: For XTCs, an XCM cannot be rebranded from an XTC-4 to an XTC-10 and vice versa. Instead,
the user must delete the XCM’s database and bring up the XCM with an empty database or perform a
database restore on the XCM.
■ If there is no database present on the controller module, the user may perform one of the following
actions:
□ Bring up an empty database
□ Perform a local or remote database restore
Note: Do not attempt to reboot the system while it is coming up with an empty database. This may
corrupt the database and cause the controller module to reboot repeatedly.
If the database brand does not match upon inserting a redundant controller module, the redundant
controller module will not boot, and a branding mismatch alarm (BRAND-MSMT) will be raised. To
rebrand a redundant controller module, the user must intervene with the “Make Standby” command. This
command forces the redundant controller module to format its flash and re-install its software from the
active controller. The redundant controller module then reboots and synchronizes the rest of its state (i.e.,
its database) from the active controller module, before entering the standby state.
Rebranding is useful for providing pre-configured controller modules with a user-specific “template”
database. It also enables emergency chassis replacement without requiring re-configuration.
Note: Rebranding will overwrite the configuration of a system and should be used only by
experienced operational personnel.
For further details on the procedure to “rebrand” or recommission a controller module, refer to the DTN
Turn-up and Test Guide.
Note: If the network element is configured to support cross-connects, re-seating the line modules can
affect traffic.
■ Cold boot the line module, either manually or by power cycling the chassis.
Note: If the network element is configured to support cross-connects, cold-booting the line modules
can affect traffic.
Although new services may be provisioned even in the event of a line module brand mismatch, it is
highly recommended that line module brand mismatch alarms be addressed immediately, without
performing any new service provisioning. Once a line module brand mismatch alarm occurs, the following
critical functions are disabled, which can lead to quickly growing inconsistencies between the controller
module database and the physical network element state:
■ Performance monitoring is disabled on the affected line modules.
■ Alarm reporting is disabled on the affected line modules.
■ New services provisioned after the mismatch alarm occurs are not written to the controller module
database until a Force Sync operation is carried out.
Note: The DNA’s Digital Link Viewer application can be used to transfer the debug logs for all of the
controller modules and/or the BMMs, OAMs, ORMs, and Raman amplifiers on all of the nodes along
a span. In addition, the Digital Link Viewer can collect the logs for the line modules on all of the nodes
along the digital segment. See the DNA Administrator Guide for more information.
There are additional controls for the debug information that is transferred from a node controller and from
line modules on an XTC. The default setting streamlines the debug information transferred from these
modules in order to minimize the amount of time required for the FTP transfer. Alternatively, the user can
specify that full debug information is to be sent from the node controller or from XTC line modules:
■ Default—For XTC line modules (OFx, OLx, OLx2, etc.), only the most recent 1000 records are
retrieved from the DSP Field Data Recorder (FDR); for a node controller (XCM, MCM, OMM, etc.),
only limited GMPLS data is retrieved. This default mode minimizes the amount of time required for
debug file transfer from the node.
■ Complete LM DSP FDR—For retrieving debug information from the XTC line modules, the user can
specify that all DSP FDR records are to be retrieved (not just the latest 1000 records). (For GMPLS
data, the default/limited data is retrieved, as with the default setting.)
■ Complete GMPLS Data—For retrieving debug information from the node controller, the user can
specify that all GMPLS data is to be retrieved (i.e., topology nodes, TE links, control links,
backplane connectivity, and tributary/line payload capacity). (For XTC line module data, the default
1000 DSP FDR records are retrieved, as with the default setting.)
Note: This pre-file compilation check is performed only for transfers that are manually initiated by a
user; it is not performed for automatic, scheduled transfers.
IQ NOS provides an intelligent GMPLS control plane architecture that enables automated end-to-end
management of transport capacity across the Infinera Intelligent Transport Network, resulting in rapid,
error-free service turn-up and operational simplicity. With a simple “point-and-click” approach to
provisioning, users need only identify the A and Z service endpoints, and the intelligent control plane
automatically configures the intermediate network elements to route the transport capacity, without
manual intervention.
The GMPLS control plane provides several benefits, including:
■ Rapid, real-time end-to-end service provisioning
■ Traffic engineering/bandwidth management at the digital layer
■ Multi-service support
■ Simplified service provisioning independent of network topology
■ Automatic protection capabilities
The GMPLS control plane implementation is based on two key industry standard protocols: Open
Shortest Path First - Traffic Engineering (OSPF-TE), an IP routing protocol, and Resource Reservation
Protocol - Traffic Engineering (RSVP-TE), a GMPLS signaling protocol. The OSPF-TE performs network
topology discovery and route computation. The RSVP-TE signaling protocol establishes a circuit along
the route computed by the OSPF-TE. An end-to-end circuit set up by the GMPLS control plane within a
routing domain is referred to as a Subnetwork Connection (SNC).
The GMPLS control plane does the following:
IQ NOS also features, at the user’s option, dynamic restoration of GMPLS-provisioned SNCs for DTN
services. See Dynamic GMPLS Circuit Restoration on page 4-140 for complete details on this feature.
The system control plane is certified for GMPLS signaling domains consisting of up to 1000 network
elements (up to 333 of which can be DTN-Xs/DTNs), configured in a number of topologies, including
those utilizing multi-fiber junction sites with up to eight degrees of connectivity. Contact Infinera before
attempting to build networks that exceed this number of network elements.
Network Topology
IQ NOS utilizes the OSPF-TE protocol to discover the Intelligent Transport Network topology. It models
the Intelligent Transport Network topology by defining the following elements:
■ A routing node, which corresponds to a network element within the Intelligent Transport Network.
■ A control link, which corresponds to OSC control between adjacent routing nodes or network
elements. There is one bidirectional control link per fiber (from a TL1 perspective, there are
two uni-directional control link entries per fiber).
■ A GMPLS link, which corresponds to transport capacity between adjacent DTN-X/DTN
nodes. There is one bidirectional GMPLS link per fiber (from a TL1 perspective, there are two
uni-directional GMPLS link entries per fiber). Each GMPLS link supports up to 8Tbps (8000Gbps)
transport capacity between DTN-Xs, which maps to 16 OCGs or 16 Traffic Engineering (TE) links.
Systems with LM-80s and CMMs support TE links on each of the ten LM-80 OCH ports (ten OCH
ports per OCG), for up to 160 TE links with up to 40G capacity on each channel (with QPSK
polarization multiplexing), totaling 6.4Tbps transport capacity.
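The LM-80 capacity figures quoted above follow from simple arithmetic, which can be checked with a short illustrative sketch:

```python
# Illustrative check of the TE link arithmetic quoted above.
ocgs_per_gmpls_link = 16
och_ports_per_ocg = 10                       # ten OCH ports per OCG on an LM-80
te_links = ocgs_per_gmpls_link * och_ports_per_ocg
assert te_links == 160                       # up to 160 TE links
capacity_gbps = te_links * 40                # 40G per channel (QPSK polarization multiplexing)
assert capacity_gbps == 6400                 # 6.4Tbps total transport capacity
```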
IQ NOS defines two topology maps:
■ Physical Network Topology (see Physical Network Topology on page 8-3)
■ Service Provisioning Topology (see Service Provisioning Topology on page 8-4)
However, independent of the physical fiber connectivity, users can create topology partitions, where each
partition represents a contiguous routing and signaling domain. The topology partitions are created by
disabling the OSPF interface. In Figure 8-2: Network with GMPLS Topology Partition on page 8-4,
Domain 1 and Domain 2 are two topology partitions created by disabling GMPLS between network
element C and network element D.
Note: SNCs spanning two topology partitions are not supported, because the partitions operate as two separate
networks. However, the user can make use of the line-side terminating SNC capability to make
separate SNCs in the two partitioned domains to realize a single end-to-end customer circuit (see
Line-side Terminating SNCs on page 4-12).
Users can view the physical network topology, referred to as physical view, and service provisioning
topology, referred to as provisioning view, through the management applications.
In summary, physical topology represents the actual physical OTS fiber connectivity between the network
elements and the topology of the control plane traffic (e.g., OSPF-TE messages) and management plane
traffic (messages exchanged between the network element and the management application, such as
DNA), whereas the service provisioning topology represents the Traffic Engineering (TE) capacity
available to provision data plane (client) traffic through the OCGs.
Traffic Engineering
IQ NOS supports several traffic engineering parameters both at the link level and node level. This rich set
of traffic engineering parameters enables users to create networks that are utilized most efficiently.
The node and equipment level traffic engineering parameters include:
■ Inclusion List—Specifies an ordered list of nodes through which an SNC must pass. The inclusion
list is ordered and must flow from source to destination. This capability is used to constrain an SNC
to traverse certain network elements in a particular order. For example, in the network shown in
Figure 8-4: Example Network for SNC Routing on page 8-5, an SNC from node A and node C
can use either node B or node D as an intermediate node. The inclusion list can specify node B in
order to mandate a route with source as A, one of the intermediate nodes as B, and destination as
node C. This allows the traffic to be dropped at site B in the future. Optical carrier groups (OCGs)
or fiber links can also be included in the inclusion list, but the channel number of the OCG must be
specified. The inclusion list is configurable through the management applications.
■ Exclusion List—Specifies a list of nodes through which an SNC must not pass. For example, the
exclusion list can be used to avoid congested nodes. The exclusion list is not ordered and it is
configurable through the management applications. OCGs cannot be specified as part of the
exclusion list.
■ Use Installed Equipment Only—IQ NOS enables the equipment pre-provisioning where equipment
is pre-provisioned but not installed. This constraint enables an SNC to pass through installed
equipment only. Users can specify this through the management applications. Note that this option
applies only to line modules and TEMs. BMMs must be installed on all nodes.
■ Disable Traffic Engineering Link—As described in Optical Transport Layers (ILS or ILS2) of DTN
and DTN-X System Description Guide, the DTN employs two-stage optical multiplexing where
transport capacity is added to the GMPLS link by adding OCGs (line modules/LM-80s). Using this
constraint users can disable the use of an OCG to set up dynamically signaled SNC circuits.
However, the OCG can be used to set up manual cross-connects. For example, users may want to
set aside some bandwidth for manual cross-connect provisioning. This constraint is configurable
through the management applications.
■ Switching Capacity—This parameter considers the switching/grooming capacity of the DTN. See
Bandwidth Grooming in DTN and DTN-X System Description Guide for a complete description of
the supported switching and grooming capabilities.
■ Allow Multi-hop SNC—Specifies whether the SNC may utilize multi-hop bandwidth grooming.
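The inclusion and exclusion constraints described above can be illustrated with a small sketch. This is a generic illustration under assumed data structures (an ordered list of node names per route), not the IQ NOS route computation.

```python
def route_satisfies(route, inclusion=None, exclusion=None):
    """Check a candidate route against TE constraints.

    route: ordered list of node names from source to destination.
    inclusion: ordered list of nodes the route must traverse, in order.
    exclusion: unordered collection of nodes the route must avoid.
    """
    if exclusion and any(node in route for node in exclusion):
        return False
    if inclusion:
        positions = []
        for node in inclusion:
            if node not in route:
                return False
            positions.append(route.index(node))
        # The inclusion list is ordered: nodes must appear in the given order.
        if positions != sorted(positions):
            return False
    return True
```

For example, with the network of Figure 8-4, a route A-B-C satisfies an inclusion list of [B], while a route A-D-C does not.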
The GMPLS link level traffic engineering parameters include:
■ Link Cost—The cost of the GMPLS link can be provisioned through the management applications.
A route with least cost is selected. Users can use this to control how the traffic is routed.
■ Link Inclusion List—Specifies an ordered list of control links an SNC must pass through. This is
similar to the node inclusion list described earlier. For a higher degree of granularity, users may
specify specific 10Gbps channels or 2.5Gbps sub-channels for inclusion. If a channel or sub-
channel is specified, the specified link should be a GMPLS link (OCG). Otherwise, it must be a
fiber/OCG.
■ Local DLM Routing—If this option is selected, the SNC ensures that add/drop cross-connects on
the source and destination nodes utilize the same line module for tributary to line cross-connects. If
this option is not selected, no such constraints are applied.
Note: If the chassis is configured in Mesh mode (DTC-B and MTC-A only), there is no option to select
an intermediate LM when creating a cross-connect, and no option to allow an intermediate LM when
creating an SNC.
■ Link Exclusion List—Specifies a list of fibers/OCGs the SNC must not pass through. This is similar
to the node exclusion list described earlier.
■ Link Capacity—The link capacity is another parameter that is considered during route computation.
IQ NOS maintains the following information based on the hardware state and user configuration
information, which is retrievable through the management applications:
□ Maximum capacity of the link based on the installed hardware
□ Usable capacity of the link based on the hardware and software state
□ Available capacity of the link for the new service requests
Additionally, users can provision the admin weight or cost for the control link. The control link cost
denotes the desirability of the link to route control traffic and management traffic. The lower (numerically)
the cost, the more desirable the link is.
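The least-cost route selection described above can be illustrated with a generic shortest-path sketch. This is a plain Dijkstra-style search over assumed link data, not the OSPF-TE implementation.

```python
import heapq

def least_cost_path(links, src, dst):
    """links: dict node -> list of (neighbor, cost); returns (cost, path) or None."""
    heap = [(0, src, [src])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, path          # lowest-cost route found first
        if node in visited:
            continue
        visited.add(node)
        for nbr, c in links.get(node, []):
            if nbr not in visited:
                heapq.heappush(heap, (cost + c, nbr, path + [nbr]))
    return None                        # destination unreachable
```

Lowering a link's provisioned cost makes routes through it more likely to win this comparison, which is how the link cost parameter steers traffic.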
All the traffic engineering parameters described above are exchanged between the network elements as
part of the topology database updates.
Note: Because of the shelf controller behaviors (see Shelf Controller Behavior on page 3-2), SNC
creation, restoration, and deletion require that the chassis on which the SNC originates and the
chassis on which the SNC terminates are reachable in the network. However, existing traffic is not
impacted if the chassis becomes unreachable.
Out-of-band GMPLS
Out-of-band GMPLS for OTS enables circuit provisioning in cases where in-band OSC is unavailable
(e.g., submarine applications). Out-of-band GMPLS separates the control plane traffic from data plane
traffic, thus enabling management connectivity to remote network elements so that circuit provisioning
capabilities are available even with Submarine Line Terminal Equipment (SLTE) applications.
Note: Out-of-band GMPLS for OTS is supported only by nodes running Release 6.0 or higher.
Figure 8-5: Out-of-band GMPLS Used in a Submarine Application on page 8-11 shows an example
application of Out-of-band GMPLS, in which in-band OSC is unavailable due to an SLTE configuration.
Out-of-band GMPLS is supported via the DCN, AUX, or CRAFT interface on the DTN-X or DTN, and is
configured through the management interfaces by first creating a GRE tunnel and then editing the OSC
properties to associate the OSC to the GRE tunnel. (Once the OSC is associated with the GRE tunnel,
the OSC cannot be associated with an IP address.) When Out-of-band GMPLS is enabled on the OSC,
all GMPLS messages will be sent out of band.
Note: The craft port on XTC chassis does not support GRE tunnels for Out-of-band GMPLS.
■ Multiple GRE tunnels are supported over the same physical interface (i.e., DCN, AUX, or CRAFT),
but only one GRE tunnel can be associated per BMM direction.
■ For Optical Amplifiers, a GRE tunnel can be created only via the TL1 interface. For DTN-Xs and
DTNs, GRE tunnels can be created via GNM, DNA, or TL1.
IQ NOS provides a highly available, reliable, and redundant management plane communications path
which connects the network operations centers (NOCs) to the physical transport network and meets the
diverse customers’ needs. The management plane includes:
■ Direct DCN (Data Communications Network) access where the NOC is connected to the network
element through a DCN network which is typically an IP-based network. The DCN is designed in
such a way that there is no single point of failure within the DCN network. (See DCN
Communication Path on page 9-2.)
■ In-band access through a Gateway Network Element (GNE) where a network element is accessed
through another network element that acts as a gateway and transports the management traffic
over the OSC control link between the network elements. (See Gateway Network Element on page
9-8.)
■ Static routing to access external networks that are not within the DCN network. (See Static Routing
on page 9-12.)
■ Telemetry access utilizing a dial-up modem which provides users remote access through the serial
port on the network element.
IQ NOS management plane supports Network Time Protocol (NTP) to provide accurate time stamping of
alarms, events and reports from the network element. (See Time-of-Day Synchronization on page 9-14.)
Note: DCN ports can also be configured for 100Mbps full duplex (with auto-negotiation disabled). This
configuration is performed as a part of node commissioning (see DCN Port Configuration on page 9-
5 for more information).
In redundant configurations:
■ For XTC-10, the DCN-A port is controlled by the XCM in shelf A slot 6B; DCN-B is controlled by the
XCM in shelf B slot 6B.
■ For XTC-4, the DCN-A port is controlled by the XCM in slot 5A; DCN-B is controlled by the XCM in
slot 5B.
■ For DTC/MTC, the DCN-A port is controlled by the MCM in slot 7A; DCN-B is controlled by the
MCM in slot 7B.
■ For OTC, the DCN-A port is controlled by the OMM in slot 1A; DCN-B is controlled by the OMM in
slot 1B.
■ For MTC-9/MTC-6, the DCN port is on the IMM; each IMM has a single DCN port. DCN
redundancy is achieved by installing a redundant IMM.
■ For XTC-2/XTC-2E, the DCN port is on the XCM-H; each XCM-H has a single DCN port. DCN
redundancy is achieved by installing a redundant XCM-H.
As shown in Figure 9-1: Redundant DCN Connectivity (DTN Example) on page 9-3, Ethernet cables
from each of the DCN ports must be connected to a single Ethernet switch or hub (no other physical
connectivity from the DCN port is supported).
In an environment that has redundantly equipped and serviceable controllers, it is the active controller
module that processes the DCN management traffic and that determines which DCN port is active. As
described in the following sections, port selection depends upon not only which controller module is active
but also the state of the connected DCN links. Only one DCN IP address is specified, and it is mapped by
the active controller to whichever link has been selected for operation. The DCN IP address is
configurable through the CCLI application during network element turn-up.
Then the active controller module sends a gratuitous ARP request (i.e., an ARP request for the network
element’s DCN IP address) through the Standby controller module in order to refresh the ARP entry in the
switch so that the DCN IP address maps to the MAC address of the Standby controller module. At this
point the active controller module is receiving the management traffic through the DCN-B port.
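As a sketch of what such a gratuitous ARP request looks like on the wire: the sender and target protocol addresses are both the DCN IP, so the switch refreshes its tables with the new MAC. This is illustrative only; the addresses are placeholders, and the actual frame is built by the controller module's network stack.

```python
import struct

def gratuitous_arp(mac: bytes, ip: bytes) -> bytes:
    """Build a broadcast gratuitous ARP request frame (Ethernet II)."""
    eth = b"\xff" * 6 + mac + struct.pack("!H", 0x0806)  # broadcast dst, ARP ethertype
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)      # Ethernet/IPv4, opcode=request
    arp += mac + ip                                      # sender MAC / sender IP (DCN IP)
    arp += b"\x00" * 6 + ip                              # target MAC unknown, same DCN IP
    return eth + arp
```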
Note: Link failures between the switch/hub and the DCN routers are not detected by the network
element, nor will any redundant path be provided by the network element. It is assumed that the user
will deploy routers which provide the necessary redundancy to take care of such failures.
configured via CCLI interface at the time of node commissioning to the fixed 100Mbps rate with auto-
negotiation disabled. This setting applies to the DCN port on the active node controller module and to the
DCN port on the standby node controller module, if the node has a standby controller module. This
setting is supported for DTN-X, FlexILS, DTN, XT and Optical Amplifier nodes. The DCN configuration
persists through software upgrades, software reverts, database backup/restore, node power cycles, and
control module reboots/switchovers.
Starting with IQ NOS Release 17.1, Infinera management interfaces and CCLI support configuration of a
default route for the DCN subnet on IPv4/IPv6 DTN, DTN-X, ROADM, OLA, or XT network elements running IQ
NOS R17.1 or later. As part of the DCN route configuration, the following are specified:
■ DCN Destination (IPv4 and IPv6): The host IP of the subnet to which the DCN packets are routed.
For default routing, the destination on IPv4 network elements should be 0.0.0.0 and the
destination for IPv6 network elements should be ::
■ DCN Subnet Mask (IPv4): For default routing, the subnet mask on IPv4 Nodes should be 0.0.0.0.
■ DCN Prefix Length (IPv6): For default routing, the prefix length for the destination
network should be 0.
■ Route Cost (IPv4 and IPv6): The cost of the route, which can be specified for IPv4 and IPv6
network elements.
■ Route Type (IPv4 and IPv6): The route type can be defined as Local (i.e. a route for which the next
hop is the final destination) or Distributed (i.e. a route for which the next hop is not the final
destination). For IPv4 or IPv6 nodes, the Route Type can be configured as either ‘Local’ or
‘Distributed’; the default value is ‘Local’.
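The default-route values listed above (destination 0.0.0.0 with subnet mask 0.0.0.0 for IPv4, or :: with prefix length 0 for IPv6) can be validated with a small sketch; this is an illustration, not an IQ NOS API.

```python
import ipaddress

def is_default_route(destination: str, mask_or_prefix) -> bool:
    """Check whether the given DCN route parameters describe a default route."""
    addr = ipaddress.ip_address(destination)
    if addr.version == 4:
        # IPv4 default route: destination 0.0.0.0 with subnet mask 0.0.0.0
        return destination == "0.0.0.0" and str(mask_or_prefix) == "0.0.0.0"
    # IPv6 default route: destination :: with prefix length 0
    return addr == ipaddress.ip_address("::") and int(mask_or_prefix) == 0
```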
Note: If the user reconfigures any of the configuration parameters in the CCLI interface, the DCN port
configuration setting will also need to be reconfigured for 100Mbps full duplex (auto-negotiation
disabled). Otherwise, if any parameters are configured in the CCLI interface and the DCN port
configuration for auto-negotiation is not set, the default setting of auto-negotiation “enabled” will be
applied.
Note: If a controller module (or both controller modules) fails in a node and is replaced, the user
needs to ensure the correct database is used to bring up the new controller module, and the CCLI
configuration will need to be performed again to provision the DCN ports with 100Mbps full duplex
(auto-negotiation disabled).
Note: When any IPv4 or IPv6 DCN related parameters (such as IP Address/Netmask/Gateway/
Destination/Prefix) are changed from the management interfaces, the standby management module
(if plugged in) undergoes a warm reboot. If the DCN is connected to the standby management
module, then the network element may be unreachable for a short duration. If any of the DCN
configuration attributes are changed to an IP address that doesn't exist in the network, the node will
accept the IP address as long as the IP address is in the correct range and is valid. The node will not
be reachable until:
■ The node is accessed via the new DCN IP address.
■ The network is changed in accordance with the node configuration. In this case, the only way
to gain access to the node is to physically connect to the node via local network or by directly
connecting a cable to the node.
Note: It is recommended that every signaling domain have at least two GNEs with DCN capability to
enable management traffic to find/use a redundant path if the primary DCN path fails.
Additionally, IQ NOS has enhanced the GNE capability in order to support a variety of management
protocols. The enhanced GNE capability provided by IQ NOS is called Management Application Proxy,
often referred to as MAP. The MAP provides the ability to manage those network elements that are
not directly DCN addressable through the network elements that are directly DCN addressable.
The MAP supports the following functions (also see Figure 9-4: Management Application Proxy Function
on page 9-9):
■ GNE—The GNE is a network element that is directly IP addressable from the DCN. The GNE
provides management proxy services to any network element within the same routing domain as
the GNE. The GNE provides management proxy service to any management traffic received via its
DCN, OSC or craft interfaces. The GNE can be accessed from the DCN through an IPv4 or IPv6
address.
■ Subtending Network Element (SNE)—This is a network element that does not have physical
connectivity to the DCN and is not directly IP addressable from the DCN. The SNE is capable of
providing management proxy support to any management traffic received through its craft and OSC
interfaces. The proxy functionality is optional, and can be enabled/disabled by the user. The proxy
session between the GNE and SNE is supported over IPv4 only.
The MAP provides proxy services to the following protocols and enables various accessibility options as
described below:
■ HTTP Protocol—The MAP service on the GNE and SNE network elements relays HTTP protocol
messages by listening to a dedicated HTTP Proxy port 10080. This capability enables the DNA and
GNM applications to access all network elements within the purview of the GNE through the DCN
ports. Also, it enables the GNM to access all network elements within the purview of a network
element through the craft Ethernet and craft serial interfaces.
■ XML/TCP Protocol—The MAP service on the GNE and SNE network elements relays XML/TCP
protocol messages by listening to a dedicated XML/TCP Proxy port 15073. This capability enables
the DNA and GNM applications to securely access all network elements within the purview of the
GNE through the DCN ports. Also, it enables the GNM to access all network elements within the
purview of a network element through the craft Ethernet and craft serial interfaces.
■ Telnet Protocol—The MAP service on the GNE and SNE relays Telnet protocol messages by
listening to a dedicated Telnet Proxy port 10023. This capability enables the Telnet sessions to be
launched from the DNA and GNM applications to access all network elements within the purview of
the GNE through the DCN ports. Similarly, it enables the Telnet session to be launched from the
GNM to access all network elements within the purview of a network element through the craft
Ethernet and craft serial interfaces.
■ FTP Protocol—The MAP service on GNE and SNE relays FTP protocol messages by listening to a
dedicated FTP Proxy port 10021. This capability enables the communication between the FTP
client on the SNE and the DNA or external FTP Server through the GNE. The FTP client is used
for uploading performance monitoring data, downloading software, and so on.
■ TL1 Protocol—The MAP service on GNE and SNE relays TL1 protocol messages by listening to a
dedicated TL1 Proxy port 9090. This capability enables TL1 terminal users to access all network
elements within the purview of the network element through a single connection to the GNE.
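For reference, the dedicated proxy ports listed above can be collected into a simple lookup; the port numbers are taken directly from the text.

```python
# MAP proxy ports per protocol, as listed above.
MAP_PROXY_PORTS = {
    "HTTP": 10080,      # DNA/GNM access via HTTP proxy
    "XML/TCP": 15073,   # secure DNA/GNM access via XML/TCP proxy
    "Telnet": 10023,    # Telnet sessions launched from DNA/GNM
    "FTP": 10021,       # PM upload, software download, etc.
    "TL1": 9090,        # TL1 terminal access through a single GNE connection
}
```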
Note: There is no specific limitation on the number of SNEs that a GNE can support; instead, the
GNE is limited only by the number of proxy sessions it can support. Each GNM session to an SNE
requires one proxy XML session at the relevant GNE, and each DNA server managing an SNE
requires one proxy XML session at the relevant GNE.
The number of proxy sessions supported by the GNE depends on the type of controller module in the
Main Chassis of the GNE:
■ An XCM, IMM, or MCM-C node controller can support a maximum of 150 proxy XML sessions, 150
proxy TL1 sessions, and 10 proxy FTP sessions.
■ An MCM-B node controller can support a maximum of 60 proxy XML sessions, 60 proxy TL1
sessions, and 10 proxy FTP sessions.
■ An XTMM node controller can support a maximum of 50 proxy XML sessions, 50 proxy TL1
sessions, and 10 proxy FTP sessions.
Configuration Settings
IQ NOS provides several configuration options so that the users can design their DCN and management
communication access to meet their needs. Following are the various configuration options provided:
■ MAP Enabled—Users must set this option to enable MAP services on a network element.
■ Primary GNE IP Address—The Primary GNE IP Address should be configured on all network
elements. The Primary GNE IP Address is the router ID (also known as the GMPLS node ID) of the
GNE in the same domain as the network element being configured. If more than one GNE exists in
the same domain, it is recommended that the closest GNE to this node (in terms of hops) be
selected as the primary GNE. The main function of the primary GNE is to provide FTP services
routing for SNEs that do not have direct DCN connectivity and for GNEs experiencing DCN
connectivity failure. FTP services include uploading historical performance monitoring data,
uploading database backups, and downloading software.
■ Secondary GNE IP Address—As with Primary GNE IP Address parameter, the Secondary GNE IP
Address is configured on all network elements. The Secondary GNE IP Address is the router ID
(also known as the GMPLS node ID) of the GNE within the same domain as the GNE or SNE. The
Secondary GNE is used if the Primary GNE is unavailable. For Secondary GNE, it is recommended
to choose the GNE which:
□ Is the next closest network element in terms of number of hops from the network element
being configured.
□ Provides a completely separate path to the management station from the network element.
In other words, the inability to reach the Primary GNE should never imply that the Secondary
GNE is also unreachable, and vice versa.
Note: Provisioning a primary and secondary GNE IP address is required for a Subtending Network
Element (SNE) to enable FTP services. Additionally, it is recommended that both the primary and
secondary GNE IP address be provisioned on each GNE to ensure FTP services continue to function
in the event of an interruption of DCN service to the GNE. For each GNE, the closest alternate GNE
GMPLS Router ID should be used for the primary GNE and the next closest alternate GNE GMPLS
Router ID should be used for the secondary GNE.
Note: For DNA connectivity purposes, the DNA uses all of the GNEs in the signaling domain
(including those that are configured to be primary and secondary GNE addresses) in a round robin
manner. Because of this, the DNA may achieve connectivity with an SNE via the OSC by way of a
GNE other than the primary or secondary GNE that is configured on the SNE.
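The round-robin behavior described in this note can be sketched as follows. This is a simplified illustration, not the actual DNA code; `try_connect` is a hypothetical callback standing in for a real connectivity attempt:

```python
from itertools import cycle

def connect_via_gnes(gnes, try_connect):
    """Cycle through all GNEs in the signaling domain (round robin) until one
    yields connectivity to the SNE. try_connect is a hypothetical callback
    returning True on success. Gives up after two full passes."""
    for _, gne in zip(range(2 * len(gnes)), cycle(gnes)):
        if try_connect(gne):
            # Note: this GNE may differ from the SNE's configured
            # primary or secondary GNE, as the note above describes.
            return gne
    return None

# Example: only the third GNE currently reaches the SNE over the OSC
reachable = {"gne-3"}
print(connect_via_gnes(["gne-1", "gne-2", "gne-3"], lambda g: g in reachable))
```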
Static Routing
IQ NOS provides static routing capability. One application of static routes is to enable the network
elements to reach external networks that are not part of the DCN network. As shown in Figure 9-5: Using
Static Routing to Reach External Networks (IPv4 Examples) on page 9-12, the NTP Server may be
located in external networks, outside of the DCN network. In this scenario, users can configure the static
routes to external networks.
The destination address of a static route can be configured as either an IPv4 or an IPv6 address.
Figure 9-5 Using Static Routing to Reach External Networks (IPv4 Examples)
Another application of static routing is to enable the routing of the management traffic between two
topology partitions (see Network Topology on page 8-3). There might be a need to create topology
partitions within a single physical network. In such situations, users can still have the management
communication path between two topology partitions (created by disabling the GMPLS link) by
configuring static routes to reach network elements in other topology partitions.
Configured static routes can also be assigned a cost so that the network can be designed to select the
optimal path. Additionally, users can configure whether static routes are advertised within the routing and
signaling domain.
Note: Users can configure whether IPv4 static routes are advertised within the GMPLS domain via the
OSPF protocol. This functionality is not available for IPv6 static routes; only local IPv6 static routes are
supported.
Note: Starting with IQ NOS R17.1, static routes on IPv4/IPv6 network elements can be configured to
forward traffic to the default DCN subnet. This behavior is controlled by the Black Hole Route
attribute during static route creation and is applicable only when a default DCN subnet is
configured. When Black Hole Route is enabled, matching traffic is discarded rather than sent to the
default subnet. When Black Hole Route is disabled, traffic is forwarded to the default subnet.
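The route selection described above can be illustrated with a minimal longest-prefix lookup. This is a hedged sketch, not the IQ NOS implementation; the route table, addresses, and function names are hypothetical. Among matching static routes the most-specific (then lowest-cost) route wins, and a route flagged as a black-hole route discards traffic instead of forwarding it:

```python
import ipaddress

# Hypothetical static route table: (destination prefix, next hop, cost, black_hole)
ROUTES = [
    ("10.20.0.0/16", "192.168.1.1", 10, False),   # e.g. route toward an external NTP server
    ("10.20.30.0/24", "192.168.1.2", 5, False),   # more specific, preferred
    ("172.16.0.0/12", None, 1, True),             # black-hole route: discard traffic
]

def lookup(dst: str):
    """Longest-prefix match, then lowest cost; returns next hop or 'discard'."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(p), nh, cost, bh)
               for p, nh, cost, bh in ROUTES
               if addr in ipaddress.ip_network(p)]
    if not matches:
        return None  # falls through to the default DCN subnet, if configured
    # Prefer the longest prefix; break ties by lowest cost.
    _, nh, _, bh = max(matches, key=lambda m: (m[0].prefixlen, -m[2]))
    return "discard" if bh else nh

print(lookup("10.20.30.5"))   # 192.168.1.2 (most specific match wins)
print(lookup("172.16.9.9"))   # discard (black-hole route)
```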
Time-of-Day Synchronization
IQ NOS provides accurate and synchronized timestamps on events and alarms, ensuring proper ordering
of alarms and events at both the network element and network levels. The synchronized time stamp
eases the network-level debugging and eliminates the inaccuracies caused by the manual configuration
of system time on each network element. Additionally, the time stamp complies with the Coordinated
Universal Time (UTC) format, as defined in ISO 8601, and includes granularity down to seconds.
IQ NOS supports Time-of-Day Synchronization by implementing an NTP (Network Time Protocol) client,
which ensures that the IQ NOS system time is synchronized with the specified NTP Server operating in
the customer network, and therefore with UTC. IQ NOS also implements an NTP Server, so that one
network element may act as an NTP Server for other network elements that do not have access to the
external NTP Server. As shown in Figure 9-6: NTP Server Configuration on page 9-14, typically a GNE
is configured to synchronize to an external NTP Server in the customer network and the SNEs are
configured to synchronize to the GNE.
In order to support NTP server redundancy, a node can be configured with up to three NTP servers (with
IPv6 or IPv4 addresses). When multiple NTP instances are configured on the node, the node determines
which instance to use as the active timing source based on the NTP selection and clustering
algorithms. If the active NTP instance experiences a fault, the node fails over to another of the
configured NTP servers as its timing source. It is recommended that Subtending Network
Elements (SNEs) use the Gateway Network Element (GNE) as the primary NTP server.
Note: If the DCN ports are inaccessible, another route is selected in order: first via a gateway NE
(primary, then secondary), then via the in-band OSC, and lastly via a static route (which can take
approximately 20 minutes to take effect).
The Infinera nodes also provide a local clock with an accuracy of 23 ppm, or about a minute per month. If
the GNE (with NTP enabled) fails to access the external NTP Server, IQ NOS NTP (Client and Server)
uses the local clock as a time reference. When the connectivity to the external NTP Server is restored, IQ
NOS NTP Client and Server on the GNE re-synchronizes with the external NTP Server, and the new
synchronized time is propagated to all the network elements within the routing domain.
Following are some recommendations for configuring the NTP Server within an Intelligent Transport
Network:
■ Configure an external NTP Server with Stratum Level 4 or higher for each routing domain of an
Intelligent Transport Network.
■ Configure the GNE network element to point to the external NTP Server.
■ Configure the SNEs to point to the GNE as the NTP Server.
The active controller module on the Main Chassis synchronizes to the external NTP server, and itself acts
as a time server for the rest of the modules on the Main Chassis and Expansion Chassis. All of the
modules on the Main Chassis and Expansion Chassis (including the standby controller modules in
redundant controller configurations) synchronize their time settings with the time of day on the active
controller module on the Main Chassis. For multi-chassis systems, if the inter-chassis communication
links fail between the Main Chassis and Expansion Chassis, the modules on the Expansion Chassis
derive the time from the local clocks on the modules themselves.
Date and time change commands apply only to the active controller module on the Main Chassis. Once
the request is successfully applied, the remaining circuit packs on the Expansion Chassis synchronize
with the changed time automatically.
The standby controller synchronizes its internal time-of-day clock to the active controller’s clock using
NTP. If a controller switch occurs, the standby controller automatically becomes the NTP Server, as part
of the transition from standby to active, without the need for a reboot.
The active controller module on the Main Chassis synchronizes to the external NTP server using a
“back-off” algorithm for consecutive requests: if the controller module compares its time to the NTP
Server’s time and finds the two in sync, it waits longer before synchronizing to the NTP Server the next
time. As a result, the time between consecutive requests may be as long as 512 seconds (~9 minutes).
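A minimal sketch of this kind of back-off (illustrative only; the starting interval is an assumption, and this is not the actual controller algorithm): the poll interval doubles while the local time stays in sync with the server, capped at the documented 512 seconds, and resets when drift is detected:

```python
MIN_POLL = 16    # assumed starting interval in seconds (hypothetical)
MAX_POLL = 512   # documented upper bound between requests (~9 minutes)

def next_poll_interval(current: int, in_sync: bool) -> int:
    """Double the interval while in sync (up to MAX_POLL); reset on drift."""
    if not in_sync:
        return MIN_POLL
    return min(current * 2, MAX_POLL)

interval = MIN_POLL
for _ in range(6):          # six consecutive in-sync comparisons
    interval = next_poll_interval(interval, in_sync=True)
print(interval)             # 512: the interval has reached the cap
print(next_poll_interval(interval, in_sync=False))  # 16: reset after drift
```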
Note: When changing the time on the active controller module, it may take up to a minute for the
modules on the node to sync up to the new time. Standby controller modules have an additional soak
period before changing their time to match the new time. If the system switches to the standby
controller module during this soak period, all of the modules will re-sync their time to match the now-
active controller module that is still using the previous time setting.
NTP Authentication
Starting with Release 19.0, IQ NOS supports authentication of NTP servers to prevent tampering with
the timestamps logged by Infinera devices. The NTP server and client (the Infinera network element) are
configured with a common trusted key. The key is installed on the client using a key identifier with a
value ranging from 1 to 65534, which also specifies the type of algorithm (MD5 or SHA1) and a password
for the key. During the request and response, the server calculates hash values over the NTP packet
content using the algorithm specified by the key, and places the hash values in the packet's
authentication field. The client then verifies, based on this authentication information, whether the
packets were sent by a trusted NTP source or were modified. Authentication succeeds if the key
identifier, algorithm type, and password match the server configuration. IQ NOS supports NTP
authentication for up to three configured server IP addresses.
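The symmetric-key scheme described above can be sketched with Python's hashlib. This is a simplified illustration of NTP symmetric-key authentication, not Infinera's implementation; the key table and function names are hypothetical. The MAC appended to an NTP packet is the key identifier followed by a digest computed over the shared key and the packet contents:

```python
import hashlib
import struct

# Hypothetical trusted-key table: key ID -> (algorithm, password)
KEYS = {42: ("MD5", b"shared-secret")}

def append_mac(packet: bytes, key_id: int) -> bytes:
    """Append key ID + digest(key || packet), as in NTP symmetric-key auth."""
    algo, key = KEYS[key_id]
    digest = hashlib.new(algo.lower(), key + packet).digest()
    return packet + struct.pack("!I", key_id) + digest

def verify(message: bytes, digest_len: int = 16) -> bool:
    """Recompute the digest over the received packet and compare with its MAC."""
    mac_len = 4 + digest_len                     # 4-byte key ID + digest
    packet, mac = message[:-mac_len], message[-mac_len:]
    key_id = struct.unpack("!I", mac[:4])[0]
    if key_id not in KEYS:
        return False                             # untrusted key identifier
    algo, key = KEYS[key_id]
    expected = hashlib.new(algo.lower(), key + packet).digest()
    return mac[4:] == expected

msg = append_mac(b"\x1b" + bytes(47), 42)        # 48-byte NTP packet + MAC
print(verify(msg))                               # True: key ID, algorithm, password match
print(verify(b"X" + msg[1:]))                    # False: a tampered packet fails
```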
Note: Prior to Release 19.0, the default IP address for the NTP Servers was 0.0.0.0. Starting with
Release 19.0, the default IP address is 0.0.0.1 for NTP Server1, 0.0.0.2 for NTP Server2, and 0.0.0.3
for NTP Server3. If IPv6 is selected, the defaults are 0100::1 for NTP Server1, 0100::2 for NTP
Server2, and 0100::3 for NTP Server3. During upgrade to Release 19.0, the previous default of
0.0.0.0 is automatically migrated to the new default values.
This appendix lists the service provisioning and diagnostic capabilities for each service type supported by
the DTN-X:
■ 100GbE TIM/TIM2/MXP/LIM Services on page A-2
■ 100G OTN TIM/TIM2s/MXP/LIM Services on page A-6
■ 40G TIM Services on page A-10
■ 10G TIM/TIM2/MXP, SONET, SDH, and Ethernet Services on page A-13
■ 10G TIM Services (10GCC, 10.3GCC, and cDTF) on page A-16
■ 10G TIM/TIM2/MXP OTN Services on page A-20
■ Sub-10G TIM Services on page A-24
■ Packet Services on page A-27
In addition, this appendix provides the adaptation capabilities supported by the DTN-X:
■ DTN-X Adaptation Services on page A-29
Note: Support is the same for all XTC chassis types, except where noted.
Table A-1 Provisioning, Protection, and Diagnostic Support for 100GbE Services on the DTN-X
Service Type
100GbE 100GbE 100GbE
(ODU4i) (ODU4) (ODU2i-10v)
Supporting Chassis XTC-10 XTC-10 XTC-10
Types XTC-4 XTC-4 XTC-4
XTC-2 XTC-2
XTC-2E XTC-2E
Supporting TIMs/LIMs/MXP:
■ 100GbE (ODU4i): TIM-1-100GE, TIM-1B-100GE, TIM-1-100GE-Q, TIM-1-100GX, TIM-1-100GM,
LIM-1-100GE, TIM2-2-100GM, TIM2-2-100GX
■ 100GbE (ODU4): TIM-1-100GX, TIM-1-100GM, TIM2-2-100GM, TIM2-2-100GX
■ 100GbE (ODU2i-10v): TIM-1-100GE, TIM-1B-100GE, TIM-1-100GE-Q, LIM-1-100GE, MXP-400
Note: TIM-1-100GM is only supported on XTC-10 or XTC-4. TIM2-2-100GM and TIM2-2-100GX are
only supported on XTC-10 or XTC-4. MXP-400 is supported on XTC-2/XTC-2E only.
Tributary slots used 1, 2, 3, 4:
■ 100GbE (ODU4i): 80 (line-side ODU3i+ not applicable for 100GbE)
■ 100GbE (ODU4): 80 (line-side ODU3i+ not applicable for ODU4)
■ 100GbE (ODU2i-10v): Ten groups of 8. Note: For GMPLS SNCs, all TS must be on the same line
module. For manual cross-connects, the groups can be on different line modules.
1. HO ODU4i uses 80 tributary slots; HO ODU3i+ uses 40 tributary slots; HO ODUCni uses the
following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120,
ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240,
ODUC4i-67.5: 270, ODUC4i-75: 300
2. OTU3i+ is not supported for XTC-2/XTC-2E.
3
4
3QAM is not supported on XTC-2/XTC-2E.
1 Port D-SNCP (either with SNCs or cross-connects):
■ 100GbE (ODU4i): Yes (supported on TIM-1-100GE, TIM-1-100GE-Q, TIM-1-100GM, TIM-1-100GX,
TIM-1B-100GE, TIM2-2-100GM, TIM2-2-100GX, and LIM-1-100GE only)
■ 100GbE (ODU4): Yes (supported on TIM-1-100GM, TIM-1-100GX, TIM2-2-100GM, and TIM2-2-100GX)
■ 100GbE (ODU2i-10v): Yes. Note: Both working and protection paths need to be either VCAT or
non-VCAT. Note: 1 Port D-SNCP on 100GbE (ODU2i-10v) services over a TIM-1-100GM/TIM-1-100GX
is not supported on XTC-2 and XTC-2E.
FastSMP Protection Yes No No
(XTC-10 and XTC-4 only)
Note: Not
supported on Note: Not supported on
XTC-2/XTC-2E. TIM2-2-100GM and
TIM2-2-100GX
Latency Measurement No No No
(Yes - Applicable for
TIM2-2-100GM/GX)
CTP PRBS IEEE 802.3 82.2.17 Only IEEE 802.3 82.2.17 Only IEEE 802.3 82.2.17 Only
Generation and generation, no monitoring generation, no monitoring generation, no monitoring
Monitoring Towards
the Client Interface
CTP PRBS IEEE 802.3 82.2.17 Only IEEE 802.3 82.2.17 Only IEEE 802.3 82.2.17 Only
Generation and generation, no monitoring generation, no monitoring generation, no monitoring
Monitoring Towards
the Network
ODUk Wrapper PRBS Not available PRBS-31 Not available
Generation and
Monitoring Towards
the Network
Loopbacks Supported Facility and Terminal Facility and Terminal Facility and Terminal
by Client CTP Object
Loopbacks Supported Facility Facility Facility
by ODUk Object
(Provided at OXM for
TIMs and at TIM2 for
TIM2-2-100GM and
TIM2-2-100GX)
Note: Support is the same for all XTC chassis types, except where noted.
Table A-2 Provisioning, Protection, and Diagnostic Support for 100G OTN Services on the DTN-X
Service Type
ODU4 Switching Transparent ODU2 inside a ODU2e inside ODU0 inside a ODU1 inside a
Service OTU4 w/o channelized a channelized channelized channelized
FEC OTU4 OTU4 OTU4 OTU4
(ODU (ODU (ODU (ODU
Multiplexing) Multiplexing) Multiplexing) Multiplexing)
Supporting XTC-10 XTC-10 XTC-10 XTC-10 XTC-10 XTC-10
Chassis Types XTC-4 XTC-4 XTC-4 XTC-4 XTC-4 XTC-4
XTC-2 XTC-2 XTC-2 XTC-2 XTC-2
XTC-2E XTC-2E XTC-2E XTC-2E XTC-2E
Supporting TIMs/LIMs/MXP:
■ ODU4 Switching Service: TIM-1-100G, TIM-1-100GM, TIM-1-100GX, LIM-1-100GX, LIM-1-100GM,
TIM2-2-100GM, TIM2-2-100GX, MXP-400
■ Transparent OTU4 w/o FEC: TIM-1-100G, TIM-1-100GM, TIM-1-100GX, LIM-1-100GX, LIM-1-100GM
■ ODU2/ODU2e/ODU0/ODU1 inside a channelized OTU4: TIM-1-100GX, LIM-1-100GX, TIM2-2-100GX
Note: TIM2-2-100GM and TIM2-2-100GX are supported on XTC-10 and XTC-4. TIM-1-100G,
TIM-1-100GM, and LIM-1-100GM are supported on XTC-10 or XTC-4 only. MXP-400 is supported on
XTC-2/XTC-2E only.
ODUk: ODU4 / ODU2i-10v / ODU2 / ODU2e / ODU0 / ODU1 (per service type, in column order)
Tributary slots used 5, 6, 7, 8:
■ ODU4 Switching Service: 80 (line-side ODU3i+ not applicable for ODU4)
■ Transparent OTU4 w/o FEC: Ten groups of 8. Note: For GMPLS SNCs, all TS must be on the same
line module. For manual cross-connects, the groups can be on different line modules.
■ ODU2: 8; ODU2e: 8; ODU0: 1; ODU1: 2
5. HO ODU4i uses 80 tributary slots; HO ODU3i+ uses 40 tributary slots; HO ODUCni uses the
following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120,
ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240,
ODUC4i-67.5: 270, ODUC4i-75: 300
6. OTU3i+ is not supported for XTC-2/XTC-2E.
7
8
3QAM is not supported on XTC-2/XTC-2E.
1 Port D-SNCP Yes No Yes Yes Yes Yes
(either with SNCs (Not supported (Not supported (Not supported (Not supported
or cross-connects) for for for for
TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX)
Note: Not
supported on
TIM-1-100GE-
Q.
Note: Not
supported on
TIM-1-100GE-
Q.
Note: Not
supported on
XTC-2/
XTC-2E.
Latency No No No No No No
Measurement (Applicable for (Applicable for (Applicable for (Applicable for (Applicable for
TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX)
CTP PRBS PRBS-31 Not available ODUk: ODUk: ODUk: ODUk:
Generation and Both generation and PRBS-31 PRBS-31 PRBS-31 PRBS-31
Monitoring monitoring supported ODUj: ODUj: ODUj: ODUj:
Towards the Client for MXP-400 PRBS-31 PRBS-31 PRBS-31 PRBS-31
Interface (inverted) (inverted) (inverted) (inverted)
CTP PRBS PRBS-31 Not available ODUj: ODUj: ODUj: ODUj:
Generation and Both generation and PRBS-31 PRBS-31 PRBS-31 PRBS-31
Monitoring monitoring supported (inverted) (inverted) (inverted) (inverted)
Towards the for MXP-400
Network
ODUk Wrapper Not applicable Not available Not applicable Not applicable Not applicable Not applicable
PRBS Generation
and Monitoring
Towards the
Network
Loopbacks Facility and Terminal Facility and Facility Facility Facility Facility
Supported by Terminal
Client CTP Object
Loopbacks Facility Facility Facility Facility Facility Facility
Supported by loopback at loopback at loopback at loopback at
ODUk Object both ODUj and both ODUj and both ODUj and both ODUj and
(Provided at OXM ODUk ODUk ODUk ODUk
for TIMs and at
TIM2s for
TIM2-2-100GM
and
TIM2-2-100GX)
Table A-3 Provisioning, Protection, and Diagnostic Support for 40G Services on the DTN-X
Service Type
40GbE 40GbE (ODU2i-4v) ODU3 ODU3e1 ODU3e2 OC-768/
(ODU3i) switching switching switching STM-256
service service service
Supporting XTC-10 XTC-10 XTC-10 XTC-10 XTC-10 XTC-10
Chassis Types XTC-4 XTC-4 XTC-4 XTC-4 XTC-4 XTC-4
Supporting TIMs TIM-1-40GE TIM-1-40GE TIM-1-40G TIM-1-40G TIM-1-40G TIM-1-40GM
Mapping GMPi GMPi Standard G.709 adaptation G.Sup43 G.Sup43 AMP, BMP
ODUk ODU3i ODU2i-4v ODU3 ODU3e1 ODU3e2 ODU3
Tributary slots 32 Four groups of 8 31 32 32 31
used 9, 10, 11, 12
Note: All TS must be on the same line module.
9. HO ODU4i uses 80 tributary slots; HO ODU3i+ uses 40 tributary slots; HO ODUCni uses the
following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120,
ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240,
ODUC4i-67.5: 270, ODUC4i-75: 300
10. OTU3i+ is not supported for XTC-2/XTC-2E.
11
12
3QAM is not supported on XTC-2/XTC-2E.
GMPLS Yes Yes Yes Yes Yes Yes
Restoration
Note: VCAT
will restore as
VCAT only
and non-VCAT
will restore as
non-VCAT
only.
Line side No No No No No No
Protection group
(either with SNCs
or cross-connects)
FastSMP Yes No No No No Yes
Protection
Latency No No No No No No
Measurement
CTP PRBS IEEE 802.3 IEEE 802.3 82.2.17 PRBS-31 PRBS-31 PRBS-31 Framed
Generation and 82.2.17 Only Only generation, PRBS-31 in
Monitoring generation, no no monitoring the SDH
Towards the monitoring Payload
Client Interface
CTP PRBS IEEE 802.3 IEEE 802.3 82.2.17 PRBS-31 PRBS-31 PRBS-31 Framed
Generation and 82.2.17 Only Only generation, PRBS-31 in
Monitoring generation, no no monitoring the SDH
Towards the monitoring Payload
Network
ODUk Wrapper Not available Not available Not Not Not PRBS-31
PRBS Generation applicable applicable applicable (inverted)
and Monitoring
Towards the
Network
Loopbacks Facility and Facility and Facility and Facility and Facility and Facility and
Supported by Terminal Terminal Terminal Terminal Terminal Terminal
Client CTP Object
Loopbacks Facility Facility Facility Facility Facility Facility
Supported by
ODUk Object
(Provided at OXM)
Note: Support is the same for all XTC chassis types, except where noted.
Table A-4 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X
(SONET/SDH and 10GbE LAN/WAN)
Service Type
OC-192/STM-64 10GbE WAN 10GbE LAN 10GbE LAN
(ODU2e) (ODU1e)
Supporting TIMs/MXP TIM-5-10GM TIM-5-10GM TIM-5-10GM TIM-5-10GM
TIM-5-10GX TIM-5-10GX TIM-5-10GX TIM-5-10GX
TIM2-18-10GM TIM2-18-10GM TIM2-18-10GM
TIM2-18-10GX TIM2-18-10GX TIM2-18-10GX
MXP-400 MXP-400 (TIM2-18-10GM and
(TIM2-18-10GM and (TIM2-18-10GM and TIM2-18-10GX are
TIM2-18-10GX are TIM2-18-10GX are supported on XTC-10
supported on supported on and XTC-4)
XTC-10 and XTC-4) XTC-10 and XTC-4) MXP-400 is supported
MXP-400 is MXP-400 is on XTC-2 /XTC-2E only
supported on supported on
XTC-2 / XTC-2E XTC-2 / XTC-2E
only only
Supporting Chassis Types XTC-10 XTC-10 XTC-10 XTC-10
XTC-4 XTC-4 XTC-4 XTC-4
XTC-2 XTC-2 XTC-2 XTC-2
XTC-2E XTC-2E XTC-2E XTC-2E
Mapping AMP, BMP AMP, BMP 16FS+BMP BMP
ODUk ODU2 ODU2 ODU2e ODU1e
Tributary slots used 13, 14, 8 8 8 8
15, 16
13. HO ODU4i uses 80 tributary slots; HO ODU3i+ uses 40 tributary slots; HO ODUCni uses the
following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120,
ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240,
ODUC4i-67.5: 270, ODUC4i-75: 300
14. OTU3i+ is not supported for XTC-2/XTC-2E.
15
16
3QAM is not supported on XTC-2/XTC-2E.
GMPLS Restoration Yes Yes Yes Yes
Support Line-side Yes Yes Yes Yes
Terminating SNC
1 Port D-SNCP (either with Yes Yes Yes Yes
SNCs or cross-connects)
2 Port D-SNCP (either with Yes Yes Yes Yes
SNCs or cross-connects) (Supported on (Supported on MXP-400)
MXP-400)
Line side Protection group Yes Yes Yes Yes
(either with SNCs or cross- (XTC-10 and XTC-4 (XTC-10 and XTC-4 (XTC-10 and XTC-4 (XTC-10 and
connects) only) only) only) XTC-4 only)
CTP PRBS Generation and Not available Not available IEEE Test Pattern IEEE Test
Monitoring Towards the Pattern
Client Interface Note: Not available
for TIM2-18-10GM
and TIM2-18-10GX
CTP PRBS Generation and Not available Not available IEEE Test Pattern IEEE Test
Monitoring Towards the Both generation and Both generation and Pattern
Network monitoring monitoring Note: Not supported
supported for supported for on TIM2-18-10GM
MXP-400 MXP-400 and TIM2-18-10GX
Note: Support is the same for all XTC chassis types, except where noted.
Table A-5 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (10GCC, and
cDTF)
Service Type
10G Clear 10.3G Clear 10.3G Clear 10.3G Clear cDTF
Channel Channel Channel Channel
(ODU2) (ODU2e) (ODU1e) (ODU2i)
Supporting TIMs TIM-5-10GM TIM-5-10GM TIM-5-10GM TIM-5-10GM TIM-5-10GM
TIM-5-10GX TIM-5-10GX TIM-5-10GX TIM-5-10GX TIM-5-10GX
Supporting Chassis XTC-10 XTC-10 XTC-10 XTC-10 XTC-10
Types XTC-4 XTC-4 XTC-4 XTC-4 XTC-4
XTC-2 XTC-2 XTC-2 XTC-2 XTC-2
XTC-2E XTC-2E XTC-2E XTC-2E XTC-2E
Mapping AMP, BMP 16FS+BMP BMP GMPi GMPi
ODUk ODU2 ODU2e ODU1e ODU2i ODUFlexi
Tributary slots used 17, 8 8 8 9 9
18, 19, 20
17. HO ODU4i uses 80 tributary slots; HO ODU3i+ uses 40 tributary slots; HO ODUCni uses the
following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120,
ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240,
ODUC4i-67.5: 270, ODUC4i-75: 300
18. OTU3i+ is not supported for XTC-2/XTC-2E.
19
20
3QAM is not supported on XTC-2/XTC-2E.
Line side Protection Yes Yes Yes Yes No
group (either with (XTC-10 and (XTC-10 and (XTC-10 and (XTC-10 and
SNCs or cross- XTC-4 only) XTC-4 only) XTC-4 only) XTC-4 only)
connects)
Note: Not
supported on
XTC-2/XTC-2E.
CTP PRBS Generation Not available Not available Not available Not available Not available
and Monitoring
Towards the Client
Interface
CTP PRBS Generation Not available Not available Not available Not available Not available
and Monitoring
Towards the Network
ODUk Wrapper PRBS PRBS-31 PRBS-31 PRBS-31 Not available Not available
Generation and (inverted) (inverted) (inverted)
Monitoring Towards
the Network
Loopbacks Supported Facility and Facility and Facility and Facility and Facility and
by Client CTP Object Terminal Terminal Terminal Terminal Terminal
Loopbacks Supported Facility Facility Facility Facility Facility
by ODUk Object
(Provided at OXM)
Note: Support is the same for all XTC chassis types, except where noted.
Table A-6 Provisioning, Protection, and Diagnostic Support for Fibre Channel Services on the DTN-X
(8GFC and 10GFC)
Service Type
8G Fibre Channel 10G Fibre Channel
Supporting TIMs TIM-5-10GM TIM-5-10GM
TIM-5-10GX TIM-5-10GX
Supporting Chassis Types XTC-10 XTC-10
XTC-4 XTC-4
Mapping GMPi GMPi
ODUk ODUFlexi ODUFlexi
Tributary slots used 21, 22, 23, 24 7 9
GMPLS Restoration No No
Support Line-side Terminating SNC Yes Yes
1 Port D-SNCP (either with SNCs or cross- No No
connects)
2 Port D-SNCP (either with SNCs or cross- No No
connects)
Line-side Protection group (either with SNCs or No No
cross-connects)
FastSMP Protection No No
Latency Measurement No No
21. HO ODU4i uses 80 tributary slots; HO ODU3i+ uses 40 tributary slots; HO ODUCni uses the
following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120,
ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240,
ODUC4i-67.5: 270, ODUC4i-75: 300
22. OTU3i+ is not supported for XTC-2/XTC-2E.
23
24
3QAM is not supported on XTC-2/XTC-2E.
CTP PRBS Generation and Monitoring Towards the Client Interface:
■ 8G Fibre Channel: Scrambled jitter pattern (JSPAT), defined in INCITS Fibre Channel Physical
Interface-4 (FC-PI-4)
■ 10G Fibre Channel: IEEE Test Pattern (IEEE 802.3 Clause 49.2.8)
CTP PRBS Generation and Monitoring Towards the Network:
■ 8G Fibre Channel: Scrambled jitter pattern (JSPAT), defined in INCITS Fibre Channel Physical
Interface-4 (FC-PI-4)
■ 10G Fibre Channel: IEEE Test Pattern (IEEE 802.3 Clause 49.2.8)
ODUk Wrapper PRBS Generation and Not available Not available
Monitoring Towards the Network
Loopbacks Supported by Client CTP Object Facility and Terminal Facility and Terminal
Loopbacks Supported by ODUk Object Facility Facility
(Provided at OXM)
Note: Support is the same for all XTC chassis types, except where noted.
Table A-7 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (Transparent
OTUk with FEC, ODUk Switching Services, and ODUk Inside Channelized OTUk)
Service Type
Transparent ODU1e ODU2 ODU2e ODU0 inside ODU1 inside
OTUk with Switching Switching Switching Channelized Channelized
FEC (where Service Service Service OTU2 OTU2
k = 1e, 2, or (ODU (ODU
2e) Multiplexing) Multiplexing)
Supporting TIMs, MXP:
■ Transparent OTUk with FEC (k = 1e, 2, or 2e): TIM-5-10GM, TIM-5-10GX
■ ODU1e Switching Service: TIM-5-10GM, TIM-5-10GX
■ ODU2 Switching Service: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX, MXP-400
■ ODU2e Switching Service: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX, MXP-400
■ ODU0 inside Channelized OTU2: TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX
■ ODU1 inside Channelized OTU2: TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX
Note: TIM2-18-10GM and TIM2-18-10GX are supported on XTC-10 and XTC-4. MXP-400 is supported
on XTC-2 and XTC-2E only.
Supporting Chassis Types: XTC-10, XTC-4, XTC-2, and XTC-2E (all service types)
Mapping: GMPi (Transparent OTUk with FEC); standard G.709 adaptation (all other service types)
ODUk: ODUFlexi / ODU1e / ODU2 / ODU2e / ODU0 / ODU1 (per service type, in column order)
Tributary slots used 25, 26, 27, 28: 9 / 8 / 8 / 8 / 1 / 2 (per service type, in column order)
25. HO ODU4i uses 80 tributary slots; HO ODU3i+ uses 40 tributary slots
GMPLS Yes Yes Yes Yes Yes Yes
Restoration (not supported (not supported
on on
TIM2-18-10GM TIM2-18-10GM
and and
TIM2-18-10GX) TIM2-18-10GX)
Support Line-side Yes Yes Yes Yes Yes Yes
Terminating SNC
1 Port D-SNCP Yes Yes Yes Yes Yes Yes
(either with SNCs
or cross-connects)
2 Port D-SNCP Yes Yes Yes Yes No No
(either with SNCs (Supported on (Supported on
or cross-connects) MXP-400) MXP-400)
Line side No No Yes Yes No No
Protection group (XTC-10 and (XTC-10 and
(either with SNCs XTC-4 only) XTC-4 only)
or cross-connects)
Note: Not
supported on
XTC-2/
XTC-2E.
HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90,
ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200,
ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
26. OTU3i+ is not supported for XTC-2/XTC-2E.
27
28
3QAM is not supported on XTC-2/XTC-2E.
FastSMP Yes Yes Yes Yes No No
Protection (XTC-10 (XTC-10 (XTC-10 and (XTC-10 and
and XTC-4 and XTC-4 XTC-4 only) XTC-4 only)
Note: Not only) only)
supported on
XTC-2/
XTC-2E.
Note: Not
supported on
TIM2-18-10GM
and
TIM2-18-10GX
ODUk Wrapper Not Not Not applicable Not applicable Not applicable Not applicable
PRBS Generation available applicable
and Monitoring
Towards the
Network
Loopbacks Facility and Facility and Facility and Facility and Facility and Facility and
Supported by Terminal Terminal Terminal Terminal Terminal Terminal
Client CTP Object
Loopbacks Facility Facility Facility Facility Facility Facility
Supported by loopback at loopback at
ODUk Object ODUj, No ODUj, No
(Provided at OXM loopback at loopback at
for TIMs and at ODUk ODUk
TIM2 for
TIM2-18-10GM and
TIM2-18-10GX)
Note: Support is the same for all XTC chassis types, except where noted.
Note: Starting with IQ NOS Release 17.1, non-bookended OC-3 and OC-12 services are supported between a TIM-16-2.5GM (at one end) and a TIM-1-100GX or TIM-5-10GX (at the other end).
Table A-8 Provisioning, Protection, and Diagnostic Support for sub-10G Services on the DTN-X

| Service Type | 1GbE | OC-48/STM-16 | OC-3/STM-1 | OC-12/STM-4 | 2GFC | 4GFC |
|---|---|---|---|---|---|---|
| Supporting Chassis Types | XTC-10, XTC-4, XTC-2, XTC-2E | XTC-10, XTC-4, XTC-2, XTC-2E | XTC-10, XTC-4, XTC-2, XTC-2E | XTC-10, XTC-4, XTC-2, XTC-2E | XTC-10, XTC-4, XTC-2, XTC-2E | XTC-10, XTC-4, XTC-2, XTC-2E |
| Supporting TIM | TIM-16-2.5GM | TIM-16-2.5GM | TIM-16-2.5GM | TIM-16-2.5GM | TIM-16-2.5GM | TIM-16-2.5GM |
| Mapping | TTT+GMP | BMP | TTT+GMP | TTT+GMP | TTT+GMP | TTT+GMP |
| ODUk | ODU0 | ODU1 | ODU0 | ODU0 | ODU1 | ODUflexi |
| Tributary slots used (29, 30, 31, 32) | 1 | 2 | 1 | 1 | 2 | 4 |
29. HO ODU4i uses 80 tributary slots. HO ODU3i+ uses 40 tributary slots. HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
30. OTU3i+ is not supported for XTC-2/XTC-2E.
31.
32. 3QAM is not supported on XTC-2/XTC-2E.
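The slot counts in footnote 29 lend themselves to a simple lookup when budgeting high-order container capacity. The sketch below is illustrative only; the `HO_ODU_TRIB_SLOTS` table and the `slots_free` helper are hypothetical names for this guide's data, not part of IQ NOS:

```python
# Tributary-slot totals per high-order ODU container, copied from footnote 29
# (HO ODU4i, HO ODU3i+, and the HO ODUCni variants).
HO_ODU_TRIB_SLOTS = {
    "ODU4i": 80,
    "ODU3i+": 40,
    "ODUC1i-15": 60, "ODUC1i": 80,
    "ODUC2i-22.5": 90, "ODUC2i-30": 120, "ODUC2i-37.5": 150, "ODUC2i": 160,
    "ODUC3i-45": 180, "ODUC3i-50": 200, "ODUC3i-52.5": 210, "ODUC3i": 240,
    "ODUC4i-67.5": 270, "ODUC4i-75": 300,
}

def slots_free(container: str, slots_in_use: int) -> int:
    """Return how many tributary slots remain in the given HO container."""
    total = HO_ODU_TRIB_SLOTS[container]
    if not 0 <= slots_in_use <= total:
        raise ValueError(f"{container} has only {total} tributary slots")
    return total - slots_in_use
```

For example, an ODUC2i-30 carrying one 8-slot ODU2 service still has 112 of its 120 slots free.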
Table A-8 Provisioning, Protection, and Diagnostic Support for sub-10G Services on the DTN-X (continued)

| Service Type | 1GbE | OC-48/STM-16 | OC-3/STM-1 | OC-12/STM-4 | 2GFC | 4GFC |
|---|---|---|---|---|---|---|
| 1 Port D-SNCP (either with SNCs or cross-connects) | Yes | Yes | No | No | No | No |
| 2 Port D-SNCP (either with SNCs or cross-connects) | Yes | Yes | No | No | No | No |
| Line side Protection group (either with SNCs or cross-connects) | No | No | No | No | No | No |
| FastSMP Protection | Yes | Yes | No | No | No | No |
| Latency Measurement | No | No | No | No | No | No |
| CTP PRBS Generation and Monitoring Towards the Client Interface | Unframed PRBS-31 | Unframed PRBS-31 | Unframed PRBS-31 | Unframed PRBS-31 | Unframed PRBS-31 | Unframed PRBS-31 |
| CTP PRBS Generation and Monitoring Towards the Network | Not available | Not available | Not available | Not available | Not available | Not available |
| ODUk Wrapper PRBS Generation and Monitoring Towards the Network | PRBS-31 (inverted) | PRBS-31 (inverted) | PRBS-31 (inverted) | PRBS-31 (inverted) | PRBS-31 (inverted) | PRBS-31 (inverted) |
Table A-8 Provisioning, Protection, and Diagnostic Support for sub-10G Services on the DTN-X (continued)

| Service Type | 1GbE | OC-48/STM-16 | OC-3/STM-1 | OC-12/STM-4 | 2GFC | 4GFC |
|---|---|---|---|---|---|---|
| Loopbacks Supported by Client CTP Object | Facility and Terminal | Facility and Terminal | Facility and Terminal | Facility and Terminal | Facility and Terminal | Facility and Terminal |
| Loopbacks Supported by ODUk Object (Provided at OXM) | Facility | Facility | Facility | Facility | Facility | Facility |
Packet Services
The following table shows the service provisioning and diagnostic capabilities for packet services on the
DTN-X.
Note: Support is the same for all XTC chassis types, except where noted.
Table A-9 Provisioning, Protection, and Diagnostic Support for Packet Services on the DTN-X

| Service | 1G Switched Packet Services | 10G Switched Packet Services | 100G Switched Packet Services |
|---|---|---|---|
| Supporting Chassis Types | XTC-10, XTC-4, XTC-2, XTC-2E | XTC-10, XTC-4, XTC-2, XTC-2E | XTC-10, XTC-4, XTC-2, XTC-2E |
| Supporting PXM | PXM-16-10GE | PXM-16-10GE | PXM-1-100GE |
| Mapping | GFP-F+GMPi | GFP-F+GMPi | GFP-F+GMPi |
| ODUk | ODUflexi-n, n = 1 to 80 | ODUflexi-n, n = 1 to 80 | ODUflexi-n, n = 1 to 80 |
| Tributary slots used (33, 34, 35, 36) | n, n = 1 to 80; up to 10 different OTN paths | n, n = 1 to 80; up to 10 different OTN paths | n, n = 1 to 80; up to 10 different OTN paths |
| GMPLS Restoration | Yes | Yes | Yes |
| Support Line-side Terminating SNC | Yes | Yes (XTC-10 and XTC-4 only) | Yes (XTC-10 and XTC-4 only) |
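Because an ODUflexi-n packet service occupies n tributary slots (n = 1 to 80), sizing a service reduces to dividing the requested rate by the per-slot capacity. The sketch below assumes a nominal 1.25 Gb/s per tributary slot, which is a common OTN convention but is not a value stated in this guide; the function name is illustrative:

```python
import math

# Assumed nominal tributary-slot capacity (common OTN practice, not stated
# in this guide).
GBPS_PER_TRIB_SLOT = 1.25

def min_trib_slots(service_rate_gbps: float) -> int:
    """Smallest n (1..80) whose aggregate slot capacity covers the rate."""
    n = max(1, math.ceil(service_rate_gbps / GBPS_PER_TRIB_SLOT))
    if n > 80:
        raise ValueError("rate exceeds the 80-slot ODUflexi-n maximum")
    return n
```

Under that assumption a 10 Gb/s packet service would occupy n = 8 tributary slots, matching the 8-slot usage the 10G tables in this appendix show for ODU2-based services.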
33. HO ODU4i uses 80 tributary slots. HO ODU3i+ uses 40 tributary slots. HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
34. OTU3i+ is not supported for XTC-2/XTC-2E.
35.
36. 3QAM is not supported on XTC-2/XTC-2E.
Table A-9 Provisioning, Protection, and Diagnostic Support for Packet Services on the DTN-X (continued)

| Service | 1G Switched Packet Services | 10G Switched Packet Services | 100G Switched Packet Services |
|---|---|---|---|
| 1 Port D-SNCP (either with SNCs or cross-connects) (Note: Not supported on XTC-2/XTC-2E) | Yes (XTC-10 and XTC-4 only) | Yes (XTC-10 and XTC-4 only) | Yes (XTC-10 and XTC-4 only) |
| Latency Measurement | No | No | No |
| CTP PRBS Generation and Monitoring Towards the Client Interface | Not available | Not available | Not available |
| CTP PRBS Generation and Monitoring Towards the Network | Not available | Not available | Not available |
| ODUk Wrapper PRBS Generation and Monitoring Towards the Network | Not available | Not available | Not available |
| Loopbacks Supported by Client CTP Object | Facility and Terminal | Facility and Terminal | Terminal |
| Loopbacks Supported by ODUk Object (Provided at OXM) | Facility | Facility | Facility |
Note: Support is the same for all XTC chassis types, except where noted. 100GE to OTU4 adaptation services are supported on the XT(S)-3600 only.
Table A-10 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (SONET/SDH, 10GbE LAN/WAN)

| Service A | Service Z | TIM A | TIM Z | Mapping | ODUk | OTU4i Trib Slots (37) | OTU3i+ Trib Slots (38) | GMPLS Restore | FastSMP | Latency Measure |
|---|---|---|---|---|---|---|---|---|---|---|
| 1G | | | | | | | | | | |
| 1GbE | ODU0 inside Channelized OTU2 | TIM-16-2.5GM | TIM-5-10GX, TIM2-18-10GX | TTT+GMP | ODU0 | 1 | 1 | Yes | No | No |
| 1GbE | ODU0 inside Channelized OTU4 | TIM-16-2.5GM | TIM-1-100GX or LIM-1-100GX, TIM2-2-100GX | TTT+GMP | ODU0 | 1 | 1 | Yes | No | No |

37. The number of tributary slots used with OTU4i line-side (DC-PM-QPSK). OTU4i has 80 tributary slots.
38. The number of tributary slots used with OTU3i+ line-side (SC-PM-QPSK, DC-PM-BPSK). OTU3i+ has 40 tributary slots.
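Footnotes 37 and 38, combined with the 8-slot usage Table A-10 shows for ODU2, ODU2e, and ODU1e services, imply that an OTU4i line side carries up to ten 10G services (80 / 8) and an OTU3i+ line side up to five (40 / 8). A minimal sketch of that arithmetic (the names are illustrative, not IQ NOS APIs):

```python
# Line-side tributary-slot totals from footnotes 37 and 38.
LINE_SIDE_SLOTS = {"OTU4i": 80, "OTU3i+": 40}
# ODU2/ODU2e/ODU1e services each use 8 tributary slots per Table A-10.
SLOTS_PER_10G_SERVICE = 8

def services_per_line_side() -> dict:
    """Whole 8-slot 10G services that fit on each line-side container."""
    return {name: total // SLOTS_PER_10G_SERVICE
            for name, total in LINE_SIDE_SLOTS.items()}
```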
Table A-10 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (SONET/SDH, 10GbE LAN/WAN) (continued)

| Service A | Service Z | TIM A | TIM Z | Mapping | ODUk | OTU4i Trib Slots (37) | OTU3i+ Trib Slots (38) | GMPLS Restore | FastSMP | Latency Measure |
|---|---|---|---|---|---|---|---|---|---|---|
| 2.5G | | | | | | | | | | |
| OC-48/STM-16 | ODU1 inside Channelized OTU2 | TIM-16-2.5GM | TIM-5-10GX, TIM2-18-10GX | BMP | ODU1 | 2 | 2 | Yes | No | No |
| OC-48/STM-16 | ODU1 inside Channelized OTU4 | TIM-16-2.5GM | TIM-1-100GX, TIM2-2-100GX | BMP | ODU1 | 2 | 2 | Yes | No | No |
| 10G | | | | | | | | | | |
| OC-192/STM-64 | ODU2 switching service | TIM-5-10GM or TIM-5-10GX or TIM2-18-10GM or TIM2-18-10GX | TIM-5-10GM or TIM-5-10GX or TIM2-18-10GM or TIM2-18-10GX | AMP, BMP | ODU2 | 8 | 8 | Yes | Yes (XTC-10 or XTC-4 only) | Yes (XTC-10 or XTC-4 only) |
| 10GbE WAN | ODU2 switching service | TIM-5-10GM or TIM-5-10GX | TIM-5-10GM or TIM-5-10GX | AMP, BMP | ODU2 | 8 | 8 | Yes | Yes (XTC-10 or XTC-4 only) | Yes (XTC-10 or XTC-4 only) |
| 10GbE LAN (MXP-400 is supported on XTC-2/XTC-2E only) | ODU2e switching service | TIM-5-10GM or TIM-5-10GX or TIM2-18-10GM or TIM2-18-10GX; MXP-400 (with TIM2s as TIM Z) | TIM-5-10GM or TIM-5-10GX or TIM2-18-10GM or TIM2-18-10GX; MXP-400 (with TIM2s as TIM Z) | 16FS+BMP | ODU2e | 8 | 8 | Yes | Yes (XTC-10 or XTC-4 only) | Yes (XTC-10 or XTC-4 only) |
| 10GbE LAN | ODU1e switching service | TIM-5-10GM or TIM-5-10GX | TIM-5-10GM or TIM-5-10GX | BMP | ODU1e | 8 | 8 | Yes | Yes (XTC-10 or XTC-4 only) | Yes (XTC-10 or XTC-4 only) |
XT Service Capabilities
The following table shows the service provisioning and diagnostic capabilities for 100GbE and 10GbE
services supported by XT(S)-3300 and XT(S)-3600.
Table B-1 Provisioning, Protection, and Diagnostic Support for GbE Services on XT

| Service Type | 100GbE (ODU4) | 100GbE | 10GbE (ODU2e) | 10GbE | OTU4 | ODU2 inside a channelized OTU4 (ODU Multiplexing) | ODU2e inside a channelized OTU4 (ODU Multiplexing) |
|---|---|---|---|---|---|---|---|
| Supporting Node Types | XT(S)-3600 | XT(S)-3300 | XT(S)-3600 | XT(S)-3300 | XT(S)-3600 | XTC-10, XTC-4, XTC-2, XTC-2E | XTC-10, XTC-4, XTC-2, XTC-2E |
| Mapping | GMPi | NA | Standard G.709 adaptation | Standard G.709 adaptation | ODU4 | Standard G.709 adaptation | Standard G.709 adaptation |
| ODUk | ODU4, ODU4i | NA | ODU2i | ODU2i | ODU4 | ODU2 | ODU2e |
| Tributary slots used | 80 | NA | 80 | 80 | 80 | 8 | 8 |
| GMPLS Restoration | Yes | No | Yes | Yes | Yes | No | Yes |
Table B-1 Provisioning, Protection, and Diagnostic Support for GbE Services on XT (continued)

| Service Type | 100GbE (ODU4) | 100GbE | 10GbE (ODU2e) | 10GbE | OTU4 | ODU2 inside a channelized OTU4 (ODU Multiplexing) | ODU2e inside a channelized OTU4 (ODU Multiplexing) |
|---|---|---|---|---|---|---|---|
| Support Line-side Terminating SNC | Yes | No | Yes | Yes | No | Yes | Yes |
| 1 Port D-SNCP (either with SNCs or cross-connects) | No | No | No | No | No | Yes | Yes |
| 2 Port D-SNCP (either with SNCs or cross-connects) | Yes (Supported only through Dual-chassis Y-cable protection on XT(S)-3600) | No | No | No | Yes | No | No |
| Line Side Protection group (either with SNCs or cross-connects) | Yes | No | Yes | Yes | No | Yes | Yes |
| FastSMP Protection | No | No | No | No | No | Yes | Yes |
| Latency Measurement | No | No | No | No | No | No | No |
| CTP PRBS Generation and Monitoring Towards the Client Interface | Yes | No | No | No | Yes | ODUk: PRBS-31; ODUj: PRBS-31 (inverted) | ODUk: PRBS-31; ODUj: PRBS-31 (inverted) |
| CTP PRBS Generation and Monitoring Towards the Network | Yes | No | No | No | Yes | ODUj: PRBS-31 (inverted) | ODUj: PRBS-31 (inverted) |
Table B-1 Provisioning, Protection, and Diagnostic Support for GbE Services on XT (continued)

| Service Type | 100GbE (ODU4) | 100GbE | 10GbE (ODU2e) | 10GbE | OTU4 | ODU2 inside a channelized OTU4 (ODU Multiplexing) | ODU2e inside a channelized OTU4 (ODU Multiplexing) |
|---|---|---|---|---|---|---|---|
| ODUk Wrapper PRBS Generation and Monitoring Towards the Network | Not available | NA | PRBS-31 | PRBS-31 | Not available | Not applicable | Not applicable |
| TTI | Yes | NA | Yes | Yes | Yes | Yes | Yes |
| Service Type | 100GbE (ODU4) | 100GbE | 10GbE (ODU2e) | 10GbE | OTU4 | ODU2 inside a channelized OTU4 (ODU Multiplexing) | ODU2e inside a channelized OTU4 (ODU Multiplexing) |
|---|---|---|---|---|---|---|---|
| Loopbacks Supported by Client CTP Object | Facility and Terminal | Facility | Facility | Facility | Facility and Terminal | Facility and Terminal | Facility and Terminal |
| Loopbacks Supported by Line CTP Object | Facility | Facility | Facility | Facility | Facility | Facility | Facility |
| Loopbacks Supported by OCG PTP or SCG PTP Object (OCG PTP is applicable for XT-500S, and SCG PTP for XT-500F, XT(S)-3300 and XT(S)-3600) | Terminal | Terminal | Terminal | Terminal | Terminal | Terminal | Terminal |
| Loopbacks Supported by Tributary ODUk CTP | Facility | NA | Facility | Facility | Facility | Facility | Facility |