Infinera IQ Network Operating System
Overview Guide

Release 20.0
Version 001

Document ID 1900-001600

Infinera Corporation
140 Caspian Court
Sunnyvale, California 94089
www.infinera.com

- Please refer to the Infinera Customer Web Portal for the most recent version of this document. -
Copyright
Copyright © 2019 Infinera Corporation. All rights reserved.
This Manual is the property of Infinera Corporation and is confidential. No part of this Manual may be reproduced for any purposes
or transmitted in any form to any third party without the express written consent of Infinera.
Infinera makes no warranties or representations, expressed or implied, of any kind relative to the information or any portion thereof
contained in this Manual or its adaptation or use, and assumes no responsibility or liability of any kind, including, but not limited to,
indirect, special, consequential or incidental damages, (1) for any errors or inaccuracies contained in the information or (2) arising
from the adaptation or use of the information or any portion thereof including any application of software referenced or utilized in the
Manual. The information in this Manual is subject to change without notice.

Trademarks
Infinera, Infinera Intelligent Transport Networks, IQ NOS, FlexILS, DTN-X, DTN, ATN, FastSMP, FlexCoherent, What the Network
Will Be, iWDM, Enlighten and logos that contain Infinera are trademarks or registered trademarks of Infinera Corporation in the
United States and other countries.
All other trademarks in this Manual are the property of their respective owners.
Infinera DTN-X, DTN, FlexILS, Cloud Xpress, XT, and ATN Regulatory Compliance
FCC Class A
This device complies with Part 15 of the FCC rules. Operation is subject to the following two conditions: (1) this device may not
cause harmful interference, and (2) this device must accept any interference received, including interference that may cause
undesired operation. Modifying the equipment without Infinera's written authorization may result in the equipment no longer
complying with FCC requirements for Class A digital devices. In that event, your right to use the equipment may be limited by FCC
regulations, and you may be required to correct any interference to radio or television communications at your own expense.

DOC Class A
This digital apparatus does not exceed the Class A limits for radio noise emissions from digital apparatus as set out in the
interference-causing equipment standard titled "Digital Apparatus," ICES-003 of the Department of Communications.
Cet appareil numérique respecte les limites de bruits radioélectriques applicables aux appareils numériques de Classe A prescrites
dans la norme sur le matériel brouilleur: "Appareils Numériques," NMB-003 édictée par le Ministère des Communications.

Class A
This is a Class A product based on the standard of the VCCI Council. If this equipment is used in a domestic environment, radio
interference may occur, in which case, the user may be required to take corrective actions.

Warning
This is a class A product. In a domestic environment this product may cause radio interference in which case the user may be
required to take adequate measures.

FDA
This product complies with the DHHS Rules 21CFR 1040.10 and 1040.11, except for deviations pursuant to Laser Notice No. 50,
dated June 24, 2007.

Contents
About this Document...................................................................................................................................17
Objective................................................................................................................................................................ 18
Audience................................................................................................................................................................ 19
Document Organization..........................................................................................................................................20
Documents for Release 20.0..................................................................................................................................21
Conventions........................................................................................................................................................... 25
Technical Assistance..............................................................................................................................................26
Documentation Feedback...................................................................................................................................... 27

Chapter 1: Introduction.............................................................................................................................. 1-1

Chapter 2: Fault Management...................................................................................................................2-1


Alarm Surveillance................................................................................................................................................ 2-2
Automatic Laser Shutdown (ALS)....................................................................................................................... 2-11
Optical Layer Defect Propagation (OLDP).......................................................................................................... 2-16
Optical Loss of Signal (OLOS) Soak Timers.......................................................................................................2-18
Software Controlled Power Reduction................................................................................................................ 2-23
Optical Ground Wire (OPGW).............................................................................................................................2-24
Electronic Equalizer Gain Control Loop.............................................................................................................. 2-25
Event Log............................................................................................................................................................ 2-26
Maintenance and Troubleshooting Tools............................................................................................................ 2-27
Syslog................................................................................................................................................................. 2-70

Chapter 3: Configuration and Management...............................................................................................3-1


Equipment Management and Configuration..........................................................................................................3-2
Migrating a DTN or Optical Amplifier to a DTN-X................................................................................................3-58
Migrating BMM based line systems to FRM based line systems........................................................................ 3-60

Chapter 4: Service Provisioning................................................................................................................ 4-1


DTN Service Provisioning..................................................................................................................................... 4-2
DTN-X Service Provisioning................................................................................................................................4-33
Packet Switching Service Provisioning .............................................................................................................. 4-62
FlexILS Service Provisioning.............................................................................................................................. 4-92
IQ NOS Digital Protection Services...................................................................................................................4-122
Multi-layer Recovery in DTNs........................................................................................................................... 4-167
Dual chassis Y-cable protection (DC-YCP).......................................................................................................4-170

Chapter 5: Performance Monitoring and Management..............................................................................5-1


PM Data Collection............................................................................................................................................... 5-3
DTN-X Network Latency Measurement................................................................................................................ 5-7
gRPC PM Telemetry............................................................................................................................................. 5-9

Chapter 6: Security and Access Management.......................................................................................... 6-1


User Identification................................................................................................................................................. 6-3
Authentication....................................................................................................................................................... 6-4
Access Control...................................................................................................................................................... 6-5
Authorization......................................................................................................................................................... 6-6
Security Audit Log................................................................................................................................................. 6-7
Security Administration......................................................................................................................................... 6-8
Secure Shell (SSHv2) and Secure FTP (SFTP)................................................................................................... 6-9
Secure Copy Protocol (SCP).............................................................................................................................. 6-11
Remote Authentication Dial-In User Service (RADIUS)......................................................................................6-12
Terminal Access Controller Access-Control System Plus (TACACS+)...............................................................6-14
IP Security over OSC ......................................................................................................................................... 6-15
Media Access Control Security (MACSec)..........................................................................................................6-17
Secure Web Connection..................................................................................................................................... 6-25
Serial Port Disabling............................................................................................................................................6-27
DCN Port Disabling............................................................................................................................................. 6-28
DCN Port Block for Layer 3 Traffic......................................................................................................................6-29
ACLI Session Disabling.......................................................................................................................................6-30
Verified software image.......................................................................................................................................6-31
Signed Images.................................................................................................................................................... 6-32

Chapter 7: Software Configuration Management...................................................................................... 7-1


Downloading Software.......................................................................................................................................... 7-2
Maintaining Software.............................................................................................................................................7-3
Software Image Directory Structure...................................................................................................................... 7-7
Maintaining the Database................................................................................................................................... 7-10
Uploading Debug Information............................................................................................................................. 7-17
Verifying FTP Connectivity for Debug, PM, and DB Backup...............................................................................7-19

Chapter 8: IQ NOS GMPLS Control Plane Overview................................................................................ 8-1


OSPF-TE Routing Protocol................................................................................................................................... 8-3
GMPLS Signaling (RSVP-TE)...............................................................................................................................8-8
Handling Fault Conditions..................................................................................................................................... 8-9
Topology Configuration Guidelines..................................................................................................................... 8-10
Out-of-band GMPLS........................................................................................................................................... 8-11

Chapter 9: IQ NOS Management Plane Overview.................................................................................... 9-1


DCN Communication Path.................................................................................................................................... 9-2
Gateway Network Element....................................................................................................................................9-8
Static Routing......................................................................................................................................................9-12
Time-of-Day Synchronization..............................................................................................................................9-14

Appendix A: DTN-X Service Capabilities.................................................................................................. A-1


100GbE TIM/TIM2/MXP/LIM Services..................................................................................................................A-2
100G OTN TIM/TIM2s/MXP/LIM Services............................................................................................................A-6
40G TIM Services............................................................................................................................................... A-10
10G TIM/TIM2/MXP, SONET, SDH, and Ethernet Services ............................................................................. A-13
10G TIM Services (10GCC, 10.3GCC, and cDTF).............................................................................................A-16
10G TIM Fibre Channel Services....................................................................................................................... A-18
10G TIM/TIM2/MXP OTN Services.................................................................................................................... A-20
Sub-10G TIM Services....................................................................................................................................... A-24
Packet Services.................................................................................................................................................. A-27
DTN-X Adaptation Services................................................................................................................................A-29

Appendix B: XT Service Capabilities........................................................................................................B-1

List of Figures
Figure 2-1 ARC Behavior (Leave Outstanding Alarms vs. Clear Outstanding Alarms).................................... 2-8
Figure 2-2 Pilot Lasers in RAMs..................................................................................................................... 2-14
Figure 2-3 Optical Layers Between FlexILS Nodes........................................................................................ 2-16
Figure 2-4 Client Tributary Facility Loopback (XTC-4/XTC-10 TIMs and OLx Example)................................2-29
Figure 2-5 Client Tributary Facility Loopback (XTC-10 TIM2 and OFx-1200 Example)................................. 2-29
Figure 2-6 Client Tributary Facility Loopback (XTC-2/XTC-2E Example).......................................................2-30
Figure 2-7 Client Tributary Facility Loopback (MXP in XTC-2/XTC-2E Example).......................................... 2-30
Figure 2-8 Client Tributary Terminal Loopback (XTC-10 TIM2 and OFx-1200 Example)...............................2-31
Figure 2-9 Client Tributary Terminal Loopback (XTC-4/XTC-10 TIM and OLx Example)...............................2-32
Figure 2-10 Client Tributary Terminal Loopback (XTC-2/XTC-2E Example).................................................... 2-32
Figure 2-11 Client Tributary Facility Loopback (MXP in XTC-2/XTC-2E Example).......................................... 2-33
Figure 2-12 ODUk Facility Loopback (from the OTM) (XTC-4/XTC-10 Example)............................................ 2-34
Figure 2-13 ODUk Facility Loopback (from the OTM) (XTC-2/XTC-2E Example)............................................2-34
Figure 2-14 ODUk Facility Loopback (MXP in XTC-2/XTC-2E Example).........................................................2-35
Figure 2-15 ODUk Facility Loopback from the OTM-1200 XTC-10 Example................................................... 2-35
Figure 2-16 ODUk Facility Loopback (from Line Side) (XTC-4/XTC-10 Example)........................................... 2-36
Figure 2-17 ODUk Facility Loopback (from Line Side) (XTC-2/XTC-2E Example)...........................................2-36
Figure 2-18 ODUk Facility Loopback (from line side): XTC-10 with OFx-1200 ............................................... 2-37
Figure 2-19 ODUk Facility Loopback (from Line Side) (MXP in XTC-2/XTC-2E Example).............................. 2-38
Figure 2-20 SCG Terminal Loopback: XTC with OFx-1200..............................................................................2-39
Figure 2-21 Ethernet Interface Loopbacks (PXM only).....................................................................................2-39
Figure 2-22 Client Tributary Facility Loopback................................................................................................. 2-40
Figure 2-23 Tributary Digital Transport Frame (DTF) Path Terminal Loopback............................................... 2-40
Figure 2-24 Client Tributary Terminal Loopback...............................................................................................2-41
Figure 2-25 Line DTF Path Facility Loopback...................................................................................................2-41
Figure 2-26 Line DTF Path Terminal Loopback (Express Scenario)................................................................ 2-42
Figure 2-27 Line DTF Path Terminal Loopback (Add/Drop Scenario).............................................................. 2-43
Figure 2-28 Loopbacks Support by the TAM-2-10GT and DICM-T-2-10GT.....................................................2-44
Figure 2-29 Client Loopbacks on XT-500......................................................................................................... 2-44
Figure 2-30 Client loopbacks on XT(S)-3300....................................................................................................2-45
Figure 2-31 Client loopbacks on XT(S)-3600....................................................................................................2-46
Figure 2-32 Tributary ODUk Loopback on XT(S)-3600.................................................................................... 2-47
Figure 2-33 Line Loopback on XT-500............................................................................................................. 2-48
Figure 2-34 Line Loopback on XT(S)-3300.......................................................................................................2-48
Figure 2-35 Line Loopback on XT(S)-3600.......................................................................................................2-49
Figure 2-36 OCG Loopback on XT-500S and SCG Loopback on XT-500F..................................................... 2-50
Figure 2-37 SCG Loopback on XT(S)-3300......................................................................................................2-50
Figure 2-38 SCG loopback on XT(S)-3600.......................................................................................................2-51
Figure 2-39 PRBS Tests Supported by the XTC.............................................................................................. 2-53
Figure 2-40 Tributary and Line PRBS tests on the XTC (TIM-1-40GM/TIM-16.2.5GM)................................... 2-54
Figure 2-41 PRBS Tests Supported by the DTC/MTC..................................................................................... 2-56
Figure 2-42 DCh Line PRBS Test Supported by the LM-80............................................................................. 2-57
Figure 2-43 PRBS Tests Supported by TAM-2-10GT and DICM-T-2-10GT.....................................................2-58
Figure 2-44 Trace Messaging........................................................................................................................... 2-62
Figure 2-45 DCh Trace Messaging Supported by the LM-80........................................................................... 2-63
Figure 2-46 Trace Messaging Supported by the TAM-2-10GT and DICM-T-2-10GT.......................................2-63
Figure 2-47 Path Loss Check for FRM-9D to FSP-C/FMP-C Connectivity.......................................................2-66
Figure 2-48 Path Loss Check for FSM/FSE to FSP-S to FRM-9D Connectivity...............................................2-67
Figure 2-49 Path Loss Check for FRM-9D/FRM-20X to FSP-E to FRM-9D/FRM-20X Connectivity................ 2-67
Figure 2-50 Path Loss Check for FSM to FSE Connectivity............................................................................. 2-68
Figure 2-51 Example scenario of Syslog Deployment...................................................................................... 2-70
Figure 3-1 Managed Objects and Hierarchy (DTN-X).......................................................................................3-5
Figure 3-2 Managed Objects and Hierarchy (DTN-X with ODU Multiplexing).................................................. 3-6
Figure 3-3 Managed Objects and Hierarchy (DTN-X with PXM)...................................................................... 3-7
Figure 3-4 Managed Objects and Hierarchy (DTN-X with OFx)....................................................................... 3-8
Figure 3-5 Managed Objects and Hierarchy (DTN-X with 100G VCAT)...........................................................3-9
Figure 3-6 Managed Objects and Hierarchy (MTC-9/MTC-6).........................................................................3-10
Figure 3-7 Managed Objects and Hierarchy (MTC-9/MTC-6 with FRM-4D)...................................................3-11
Figure 3-8 Managed Objects and Hierarchy (MTC-9/MTC-6 with OPSM)......................................................3-12
Figure 3-9 Managed Objects and Hierarchy (DTC/MTC with Line Modules)..................................................3-13
Figure 3-10 Managed Objects and Hierarchy (DTC/MTC with LM-80s)........................................................... 3-14
Figure 3-11 Managed Objects and Hierarchy (Base/Expansion BMM2 on DTC/MTC).................................... 3-15
Figure 3-12 Managed Objects and Hierarchy (OTC)........................................................................................ 3-15
Figure 3-13 Managed Objects and Hierarchy (FBM)........................................................................................ 3-16
Figure 3-14 Managed Objects and Hierarchy (XT-500S/XT-500F).................................................................. 3-17
Figure 3-15 Managed Objects and Hierarchy (XT(S)-3300)............................................................................. 3-18
Figure 3-16 Auto-discovery for OFx-500, FSM/FSE, and FRM-9D.................................................................. 3-26
Figure 3-17 Auto-discovery for OFx-500 to FRM-9D (via FSP-C).................................................................... 3-26
Figure 3-18 Auto-discovery for OFx-500 and FRM-4D..................................................................................... 3-27
Figure 3-19 Auto-discovery for FRM-9D to FRM-9D (via FSP-E): Sample express between two FRM-9Ds... 3-27
Figure 3-20 Auto-discovery for FRM-4D to FRM-4D........................................................................................ 3-28
Figure 3-21 Auto-discovery for OFx-500, FMM-F250, and FRM-4D................................................................ 3-28
Figure 3-22 Auto-discovery for OFx-500, FMM-F250, FSP-C, and FRM-9D....................................................3-29
Figure 3-23 Auto-discovery for OFx-100, FMM-C-5, and FRM-4D...................................................................3-29
Figure 3-24 Auto-discovery for OFx-100, FMM-C-5, BPP, and FRM-4D..........................................................3-30
Figure 3-25 Auto-discovery for OFx-100, FMM-C-5, FSP-C, and FRM-9D/FRM-20X (Example with FRM-9D)...........................................3-30
Figure 3-26 Auto-discovery for FMM-C-12 and FRM-9D..................................................................................3-31
Figure 3-27 Auto-discovery for OFx-100, FMM-C-5, and BMM ....................................................................... 3-32
Figure 3-28 Example Scenario for Forward Defect Triggering of Tributary Disable Action.............................. 3-46
Figure 3-29 LLDP frame and data unit formats.................................................................................................3-50
Figure 3-30 LLDP Receive Only Mode of Operation ....................................................................................... 3-51
Figure 3-31 Example Scenario of ZTP Deployment......................................................................................... 3-57
Figure 4-1 No-hop Add/Drop Cross-connects...................................................................................................4-4
Figure 4-2 Multi-hop Add/Drop Cross-connect between ADLMs/DLMs and TEM............................................ 4-5
Figure 4-3 Multi-hop Add Cross-connect between ADLMs/DLMs and TEM.....................................................4-6
Figure 4-4 No-hop Drop Cross-connect............................................................................................................4-7
Figure 4-5 Single-hop Express Cross-connect................................................................................................. 4-8
Figure 4-6 Multi-hop Express Cross-connect....................................................................................................4-8
Figure 4-7 No-hop Hairpin Cross-connects...................................................................................................... 4-9
Figure 4-8 Single-hop Hairpin Cross-connect.................................................................................................4-10
Figure 4-9 Line-side Terminating SNCs Connected Across Domain Boundaries...........................................4-13
Figure 4-10 1 Port D-SNCP with Line-side Terminating SNCs.........................................................................4-14
Figure 4-11 2 Port D-SNCP with Line-side Terminating SNCs.........................................................................4-14
Figure 4-12 Tributary Port to DTP Mapping on the TAM-8-1G......................................................................... 4-16
Figure 4-13 Flexible Mapping of Tributary Port to DTP on the TAM-8-2.5GM..................................................4-17
Figure 4-14 VCGs and GTPs for 40G Services................................................................................................ 4-19
Figure 4-15 Standard Transport of OTN Services............................................................................................ 4-22
Figure 4-16 Adaptation of OTN Services across the Infinera Network............................................................. 4-22
Figure 4-17 Multi-point Configuration................................................................................................................4-24
Figure 4-18 Implementing Multi-point Configuration in a DTN..........................................................................4-25
Figure 4-19 Implementing Multi-point Configuration in a DTN-X (Hairpin)....................................................... 4-25
Figure 4-20 Implementing Multi-point Configuration in a DTN-X (Add/Drop)....................................................4-26
Figure 4-21 Multi-point Configuration Leg Used for Digital Test Access.......................................................... 4-27
Figure 4-22 Optical Express in an Intelligent Transport Network......................................................................4-28
Figure 4-23 Example Configuration of an Optical Express Loop in the Network.............................................. 4-31
Figure 4-24 Network Migration with Optical Service Bridge and Roll............................................................... 4-32
Figure 4-25 Add/Drop Cross-connects on a DTN-X......................................................................................... 4-34
Figure 4-26 Add Cross-connect on a DTN-X.................................................................................................... 4-35
Figure 4-27 Drop Cross-connect on an XTC.................................................................................................... 4-36
Figure 4-28 Express Cross-connect on an XTC............................................................................................... 4-37
Figure 4-29 Hairpin Cross-connects on a DTN-X............................................................................................. 4-38
Figure 4-30 Virtual Concatenation Mode (100GbE Example)...........................................................................4-39
Figure 4-31 Non-Virtual Concatenation Mode (100GbE Example)...................................................................4-40
Figure 4-32 VCG and GTPs for a 100GbE DTN-X VCAT Service....................................................................4-41
Figure 4-33 cDTF Use for Low-speed Services over DTN-X Network (2.5Gbps Example)..............................4-44
Figure 4-34 Virtual Concatenation Mode (OTU4 Example).............................................................................. 4-45
Figure 4-35 ODU Switching (ODU2 Example)..................................................................................................4-47
Figure 4-36 Entities Created for ODU Switching (ODU2 Example).................................................................. 4-47
Figure 4-37 Entities Created for ODU Switching (ODU0 Example).................................................................. 4-48
Figure 4-38 ODU Multiplexing (TIM-5-10GX)................................................................................................... 4-49
Figure 4-39 ODU Multiplexing (TIM-1-100GX and LIM-1-100GX)....................................................................4-49
Figure 4-40 Data Flow through PXM................................................................................................................ 4-63
Figure 4-41 Ethernet Private Line (EPL) Services............................................................................................ 4-65
Figure 4-42 Ethernet Virtual Private Line (EVPL) Services.............................................................................. 4-65
Figure 4-43 Logical Elements of E-LAN Implementation in a Network............................................................. 4-66
Figure 4-44 MPLS and LSP Elements in the Network...................................................................................... 4-68
Figure 4-45 Traffic Management in the Network...............................................................................................4-69
Figure 4-46 Queuing Elements for Packet Services......................................................................................... 4-73
Figure 4-47 Class-based Queuing (CBQ).........................................................................................................4-74
Figure 4-48 Enhanced Class-based Queuing (ECBQ) .................................................................................... 4-74
Figure 4-49 Connection Admission Control (CAC) Checks in the Network...................................................... 4-77
Figure 4-50 Ingress VLAN Edit and Egress VLAN Edit on the PXM.................................................................4-81
Figure 4-51 Service and Link OAM...................................................................................................................4-82
Figure 4-52 Maintenance Domains...................................................................................................................4-83
Figure 4-53 Maintenance Association...............................................................................................................4-84
Figure 4-54 Maintenance End Point................................................................................................................. 4-84
Figure 4-55 Up and Down MEPs...................................................................................................................... 4-85
Figure 4-56 Ethernet OAM Managed Object Hierarchy.................................................................................... 4-88
Figure 4-57 Add/Drop Optical Cross-connect (with FSM and FRM).................................................................4-93
Figure 4-58 Add/Drop Optical Cross-connect (FRM only)................................................................................ 4-94
Figure 4-59 Add/Drop Optical Cross-connect (with FMM-F250 and FRM).......................................................4-94
Figure 4-60 Add/Drop Optical Cross-connect (example with FMM-C-5 and FRM-4D).....................................4-95
Figure 4-61 Add/Drop optical cross-connect on FRM (Sample XT-3300/XTS-3300 configuration)..................4-95
Figure 4-62 Add/Drop optical cross-connect between FBM and FRM (XT-3300/XT-3600 configuration)........ 4-96
Figure 4-63 Add/Drop optical cross-connect between FBM and FRM (OFx-1200 configuration).....................4-96
Figure 4-64 Express Optical Cross-connect..................................................................................................... 4-97
Figure 4-65 FlexILS SLTE Manual Optical Cross-connects............................................................................. 4-98
Figure 4-66 Example - Minimized guard band................................................................................................ 4-103
Figure 4-67 Optical and Digital TE links and SNCs (DTN-X with ROADM sample configuration)..................4-105
Figure 4-68 Optical, Digital TE Links and Optical SNCs (XT-3300/XTS-3300 sample configuration)............ 4-105
Figure 4-69 Optical TE Links and Optical SNCs (ICE 4 modules and FBM/FRM sample configuration)....... 4-106
Figure 4-70 Optical TE Links, OELs, and Optical SNCs in a DTN-X Network................................................4-107
Figure 4-71 Optical TE Links and FRM end-point based Optical SNCs in an ICE 4 Network (XT-3300 example)...........................................4-108
Figure 4-72 Optical TE Links and FBM end-point based Optical SNCs in an ICE 4 Network........................ 4-108
Figure 4-73 Optical SNCs Using FRM and FSM Endpoints........................................................................... 4-110
Figure 4-74 Optical SNCs Using FRM and FBM Endpoints........................................................................... 4-110
Figure 4-75 O-SNCP for an SLTE Optical Span.............................................................................................4-111
Figure 4-76 O-SNCP between a Subsea CLS and a POP............................................................................. 4-111
Figure 4-77 OSNCP for SOLx2 (through BMM2) and SOFx.......................................................................... 4-112
Figure 4-78 Example of Tributary-side O-SNCP with OPSM and AOFx-500 ................................................ 4-112
Figure 4-79 Optical Restoration on O-SNCs...................................................................................................4-114
Figure 4-80 Add/Drop Optical Cross-connect (example with FMM-C-5 and FRM-4D)...................................4-117
Figure 4-81 Example Configuration with OFx-100, FMM-C-5, and BMM2C...................................................4-118
Figure 4-82 Example Configuration of a DTN-X with FlexILS Using FMP-C.................................................. 4-119
Figure 4-83 2 Port D-SNCP (DTN Example).................................................................................................. 4-125
Figure 4-84 2 Port D-SNCP (DTN-X Example)...............................................................................................4-126
Figure 4-85 1 Port D-SNCP on DTN...............................................................................................................4-127
Figure 4-86 1 Port D-SNCP on DTN-X (XTC-4/XTC-10)................................................................................ 4-127
Figure 4-87 1 Port D-SNCP in a DTN Network...............................................................................................4-128
Figure 4-88 1 Port D-SNCP across a Third-party Network............................................................................. 4-129
Figure 4-89 Example Network Configuration with Line-side 1 Port D-SNCP for ODU2i_10v VCAT.............. 4-130
Figure 4-90 Fault Isolation Layers Configured in Two Example Networks..................................................... 4-131
Figure 4-91 Using CSF as a Protection Trigger over Third-Party Networks................................................... 4-138
Figure 4-92 Protection Switching for Mixed DTN/DTN-X Network..................................................................4-139
Figure 4-93 ODUk AIS for ODUk Encapsulated Clients in Mixed DTN/DTN-X Network................................ 4-140
Figure 4-94 Dynamic GMPLS Circuit Restoration.......................................................................................... 4-141
Figure 4-95 1 Port DSNCP with non-revertive restorable SNC: Failure on Work path................................... 4-147
Figure 4-96 1 Port DSNCP with non-revertive restorable SNC: Switch to protect path on failure of work path...........................................4-148
Figure 4-97 1 Port DSNCP with non-revertive restorable SNC: Work path is deleted....................................4-148
Figure 4-98 1 Port DSNCP with revertive restorable SNC: Failure on Work path.......................................... 4-149
Figure 4-99 1 Port DSNCP with revertive restorable SNC: Switch to Protect PU on failure of Working path.4-149
Figure 4-100 1 Port DSNCP with revertive restorable SNC: Switch to Work Restoration path on failure of Protect path...........................................4-150
Figure 4-101 1 Port DSNCP with revertive restorable SNC: Reversion to healed Work Path..........................4-150
Figure 4-102 1 Port DSNCP with revertive restorable SNC: Delete work restoration path...............................4-151
Figure 4-103 1 Port DSNCP with revertive restorable SNC: Protect path failure............................................. 4-151
Figure 4-104 1 Port DSNCP with revertive restorable SNC: Work and Protect Path failure.............................4-152
Figure 4-105 Multi-Layer Recovery in DTN-X illustrated with four fiber cuts in a sample network................... 4-153
Figure 4-106 FastSMP Working Paths Sharing Protection Resources.............................................................4-155
Figure 4-107 FastSMP Activated Protection Path............................................................................................ 4-155
Figure 4-108 FastSMP Preempting Lower Priority Protection Group............................................................... 4-157
Figure 4-109 FastSMP Protection Group with Multiple Protection Paths......................................................... 4-158
Figure 4-110 FastSMP over FlexILS SLTE Link (Point to Point)...................................................... 4-164
Figure 4-111 FastSMP over FlexILS SLTE Links (with Optical Express)......................................................... 4-164
Figure 4-112 Multi-layer Recovery for Revertive PG with Revertive Restorable SNCs....................................4-167
Figure 4-113 Multi-layer Recovery with Revertive PG with Non-revertive Restorable SNC............................. 4-168
Figure 4-114 1 Port D-SNCP with Restorable SNCs........................................................................................ 4-168
Figure 4-115 Configuration showing DC-YCP between any two ports of the paired chassis........................... 4-171
Figure 4-116 DC-YCP switching upon detecting a client failure....................................................................... 4-173
Figure 4-117 DC-YCP switching upon detecting a Bidirectional fibercut.......................................................... 4-174
Figure 4-118 DC-YCP switching upon detecting a unidirectional fibercut........................................................ 4-175
Figure 5-1 gRPC Client/Server ........................................................................................................................ 5-9
Figure 6-1 SSHv2-secured Management......................................................................................................... 6-9
Figure 6-2 Infinera Network with RADIUS..................................................................................................... 6-13
Figure 6-3 MAC Service Data Unit (MSDU) and MAC Protocol Data Units (MPDU)......................................6-18
Figure 6-4 MACSec Frame - Breakdown of Individual Frame Elements........................................................ 6-19
Figure 6-5 Example scenario for MACSec Deployment in XT ....................................................................... 6-20
Figure 6-6 Example scenario for MACSec Encryption and Double SecTAG-ing............................................6-21
Figure 6-7 Example configuration of Access to GNE/SNE............................................................................. 6-26
Figure 8-1 Physical Network Topology............................................................................................................. 8-3
Figure 8-2 Network with GMPLS Topology Partition........................................................................................ 8-4
Figure 8-3 Service Provisioning Topology........................................................................................................ 8-4
Figure 8-4 Example Network for SNC Routing................................................................................................. 8-5
Figure 8-5 Out-of-band GMPLS Used in a Submarine Application.................................................................8-11
Figure 9-1 Redundant DCN Connectivity (DTN Example)................................................................................9-3
Figure 9-2 DCN Link Failure Recovery............................................................................................................. 9-4
Figure 9-3 Controller Module Failure Recovery................................................................................................ 9-5
Figure 9-4 Management Application Proxy Function........................................................................................ 9-9
Figure 9-5 Using Static Routing to Reach External Networks (IPv4 Examples)............................................. 9-12
Figure 9-6 NTP Server Configuration............................................................................................................. 9-14

List of Tables
Table 2-1 Fault Bits Supported by Each Layer.............................................................................................. 2-17
Table 2-2 Connections Supporting Path Loss Check.................................................................................... 2-65
Table 3-1 Effective Channels as a Result of OCG Target Power Offset....................................................... 3-34
Table 3-2 Effective Channels as a Result of LM-80 OCH PTP Target Power Offset.................................... 3-34
Table 3-3 Tributary Disable Actions...............................................................................................................3-42
Table 3-4 TIM Support of Encapsulated Client Disable Action......................................................................3-47
Table 4-1 Cross-connect Network Mapping for Various Client Interfaces..................................................... 4-53
Table 4-2 Timeslots Required for Low Order ODUj Entities.......................................................................... 4-55
Table 4-3 Tributary Slots and Capacities of Line Side Containers .............................................................. 4-58
Table 4-4 PXM Meter Rate Granularity......................................................................................................... 4-71
Table 4-5 PXM Meter Burst Size Granularity.................................................................................................4-72
Table 4-6 PXM Flow Shaper Rate Granularity.............................................................................................. 4-76
Table 4-7 PXM Flow Shaper Burst Size Granularity......................................................................................4-76
Table 4-8 Layer 2 Control Protocol (L2CP) Profiles...................................................................................... 4-78
Table 4-9 Treatment of Incoming Packets Based on Ethernet Interface Type and TPID(s)..........................4-79
Table 4-10 PXM Scalability..............................................................................................................................4-88
Table 4-11 PXM Standard Compliance........................................................................................................... 4-91
Table 4-12 Alarms and Events for FastSMP Switching Operations.............................................................. 4-163
Table 7-1 Software Image directory structure on FTP server..........................................................................7-8
Table A-1 Provisioning, Protection, and Diagnostic Support for 100GbE Services on the DTN-X..................A-2
Table A-2 Provisioning, Protection, and Diagnostic Support for 100G OTN Services on the DTN-X..............A-6
Table A-3 Provisioning, Protection, and Diagnostic Support for 40G Services on the DTN-X...................... A-10
Table A-4 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (SONET/SDH and 10GbE LAN/WAN)..................................... A-13
Table A-5 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (10GCC, and cDTF)..................................... A-16
Table A-6 Provisioning, Protection, and Diagnostic Support for Fibre Channel Services on the DTN-X (8GFC and 10GFC)..................................... A-18
Table A-7 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (Transparent OTUk with FEC, ODUk Switching Services, and ODUk Inside Channelized OTUk)..................................... A-20
Table A-8 Provisioning, Protection, and Diagnostic Support for sub-10G Services on the DTN-X............... A-24
Table A-9 Provisioning, Protection, and Diagnostic Support for Packet Services on the DTN-X.................. A-27
Table A-10 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (SONET/SDH, 10GbE LAN/WAN)..................................... A-29
Table B-1 Provisioning, Protection, and Diagnostic Support for GbE Services on XT.................................... B-1

About this Document
This document provides an overview of the Infinera IQ Network Operating System.

Objective
This guide provides an introduction and reference to the Infinera IQ Network Operating System that runs
on the DTN-X, DTN, Optical Amplifier, XT and FlexILS nodes and enables network-wide intelligent control
and operations.

Audience
The primary audience for this guide includes network architects, network planners, network operations
personnel, and system administrators who are responsible for deploying and administering the Intelligent
Transport Network. This guide assumes that the reader is familiar with the following topics and products:
■ Basic inter-networking terminology and concepts
■ Dense Wavelength Division Multiplexing (DWDM) technology and concepts

Document Organization
The following describes each chapter in this guide.

Introduction: Provides an introduction to the Infinera IQ Network Operating System. This chapter also includes a list of hardware and software features.

Configuration and Management on page 3-1: Provides an overview of the extensive equipment inventory, management, and configuration capabilities supported by IQ NOS.

Service Provisioning on page 4-1: Provides an overview of the service provisioning capabilities of IQ NOS network elements that allow users to engineer user traffic data transport routes.

Performance Monitoring and Management on page 5-1: Provides an overview of the performance monitoring capabilities of IQ NOS network elements.

Security and Access Management on page 6-1: Provides an overview of user management and security features of IQ NOS network elements.

Software Configuration Management on page 7-1: Provides an overview of IQ NOS software and database image management.

IQ NOS GMPLS Control Plane Overview on page 8-1: Provides an overview of the GMPLS control plane architecture that enables automated end-to-end management of transport capacity across the Infinera Intelligent Transport Network.

IQ NOS Management Plane Overview on page 9-1: Provides an overview of the management plane communications path for IQ NOS network elements.

DTN-X Service Capabilities: Lists the service provisioning and diagnostic capabilities for each service type supported by the DTN-X.

XT Service Capabilities on page B-1: Lists the service provisioning and diagnostic capabilities for each service type supported by the XT(S)-3300 and XT(S)-3600.

Documents for Release 20.0


The following documents are available for the Intelligent Transport Network™ systems:

DTN and DTN-X Site Preparation and Hardware Installation Guide Portfolio
■ XTC Site Preparation and Hardware Installation Guide (1900-001578): Describes the procedures for initial installation of the XTC at any given site. Includes procedures for site preparation and site testing, system cabling, safety procedures, and hand-over to provisioning activities.
■ XT Site Preparation and Hardware Installation Guide (1900-001579): Describes the procedures for initial installation of the XT chassis at any given site. Includes procedures for site preparation and site testing, system cabling, safety procedures, and hand-over to provisioning activities.
■ Line Systems Site Preparation and Hardware Installation Guide (1900-001580): Describes the procedures for initial installation of the Line Systems at any given site. Includes procedures for site preparation and site testing, system cabling, safety procedures, and hand-over to provisioning activities.
■ DTC/MTC Site Preparation and Hardware Installation Guide (1900-001581): Describes the procedures for initial installation of the DTC/MTC at any given site. Includes procedures for site preparation and site testing, system cabling, safety procedures, and hand-over to provisioning activities.
■ Passive Equipment Site Preparation and Hardware Installation Guide (1900-001582): Describes the procedures for initial installation of the Passive Equipment at any given site. Includes procedures for site preparation and site testing, system cabling, safety procedures, and hand-over to provisioning activities.

DTN and DTN-X Hardware Description Guide Portfolio
■ XT Hardware Description Guide (1900-001583): Provides the hardware description of the XT chassis, which includes the description of the chassis, common modules, and circuit packs. It provides hardware block diagrams, functional descriptions, and mechanical and electrical specifications for each module.
■ XTC Hardware Description Guide (1900-001584): Provides the hardware description of the XTC, which includes the description of the chassis, common modules, and circuit packs. It provides hardware block diagrams, functional descriptions, and mechanical and electrical specifications for each module.
■ Line Systems Hardware Description Guide (1900-001585): Provides the hardware description of the Line Systems, which includes the description of the chassis, common modules, and circuit packs. It provides hardware block diagrams, functional descriptions, and mechanical and electrical specifications for each module.
■ DTC/MTC Hardware Description Guide (1900-001586): Provides the hardware description of the DTC/MTC, which includes the description of the chassis, common modules, and circuit packs. It provides hardware block diagrams, functional descriptions, and mechanical and electrical specifications for each module.
■ Passive Equipment Hardware Description Guide (1900-001587): Provides the hardware description of the Passive Equipment, which includes the description of the chassis, common modules, and circuit packs. It provides hardware block diagrams, functional descriptions, and mechanical and electrical specifications for each module.

DTN and DTN-X Task Oriented Procedures Guide Portfolio
■ XTC Task Oriented Procedures Guide (1900-001588): Provides the routine task oriented procedures used in support of the XTC.
■ XT Task Oriented Procedures Guide (1900-001589): Provides the routine task oriented procedures used in support of the XT chassis.
■ Line Systems Task Oriented Procedures Guide (1900-001590): Provides the routine task oriented procedures used in support of the Line Systems.
■ DTC/MTC Task Oriented Procedures Guide (1900-001591): Provides the routine task oriented procedures used in support of the DTC/MTC.

DTN and DTN-X Turn-up and Test Guide Portfolio
■ DTN-X Turn-up and Test Guide (1900-001592): Describes procedures for turning up, commissioning, and testing the installed DTN-X network element. Includes the description of circuit activation and end-to-end system testing procedures.
■ DTN Turn-up and Test Guide (1900-001593): Describes procedures for turning up, commissioning, and testing the installed DTN network element. Includes the description of circuit activation and end-to-end system testing procedures.
■ FlexILS ROADM Turn-up and Test Guide (1900-001594): Describes procedures for turning up, commissioning, and testing the installed FlexILS and FlexROADM network elements. Includes the description of circuit activation and end-to-end system testing procedures.
■ Optical Amplifier Turn-up and Test Guide (1900-001595): Describes procedures for turning up, commissioning, and testing the installed OTC-based Optical Amplifier network elements. Includes the description of circuit activation and end-to-end system testing procedures.
■ XT Turn-up and Test Guide (1900-001596): Describes procedures for turning up, commissioning, and testing the installed XT network element. Includes the description of circuit activation and end-to-end system testing procedures.
■ Optical Line Amplifier Turn-up and Test Guide (1900-001597): Describes procedures for turning up, commissioning, and testing the installed MTC-9/MTC-6 based Optical Line Amplifier network elements. Includes the description of circuit activation and end-to-end system testing procedures.

DTN and DTN-X Reference Guides Portfolio
■ SNMP Agent Reference Guide (1900-001598): Describes the Simple Network Management Protocol (SNMP) Agent for network elements. It provides detailed instructions to configure and operate the SNMP Agent from the network element.
■ DTN and DTN-X System Description Guide (1900-001599): Provides an overview of the Intelligent Transport Network and its principal elements, including the network elements.
■ IQ Network Operating System Overview Guide (1900-001600): Provides an overview of the Infinera IQ Network Operating System.
■ Infinera Management Suite Overview Guide (1900-001601): Provides an overview of the management interfaces for products.
■ NETCONF Agent Reference Guide (1900-001602): Describes the Network Configuration (NETCONF) Agent.
■ DTN and DTN-X Alarm and Trouble Clearing Guide (1900-001603): Describes the alarms raised by the network elements and the corrective procedures to perform to clear the alarms. It also describes the Event Log.
■ gRPC Reference Guide (1900-001651): Provides an overview of the General Remote Procedure Calls (gRPC) interface.
■ RESTCONF Agent Reference Guide (1900-001604): Describes the Representational State Transfer Configuration Protocol (RESTCONF) Agent.

DTN and DTN-X User Guides Portfolio
■ GNM Overview Guide (1900-001605): Describes the Graphical Node Manager user interface. It also describes the new features and the hardware and software requirements needed to launch the GNM. It also provides procedures to install and upgrade the software and database on the network elements.
■ GNM Fault Management and Diagnostics Guide (1900-001606): Describes the Fault Management inventories and the Alarm Manager. It also provides the procedures to perform diagnostic tests on network elements.
■ GNM Configuration Management Guide (1900-001607): Describes the procedures to use the GNM to configure the network elements and the network topology. It also provides a description of the Equipment Manager and Facility Manager.
■ GNM Performance Management Guide (1900-001608): Describes the procedures to use the GNM to view performance monitoring (PM) data and modify PM thresholds for network elements. It also provides details of the PM parameters reported by the network elements.
■ GNM Security Management Guide (1900-001609): Describes the procedures to perform security and access management tasks such as creating, deleting, and managing user accounts on the network elements.
■ GNM Service Provisioning Guide (1900-001610): Describes the procedures to provision cross-connects, subnetwork connections (SNCs), and protected services on network elements. It includes a description of the various inventory managers displayed in the GNM.
■ CLI User Guide (1900-001611): Describes the Command Line Interface (CLI) for the MTC-6/MTC-9 based FlexROADM and Optical Line Amplifier network elements. It includes the description of the supported CLI commands and the procedures for the commonly performed OAM&P functions.
■ DTN and DTN-X TL1 User Guide (1900-001612): Describes the TL1 interface supported by the DTN-X, ILS, DTN, XT, and Optical Line Amplifier network elements. It includes the description of the supported TL1 commands and the procedures for the commonly performed OAM&P functions.

Acronyms
■ Acronyms (1900-001614): Lists the acronyms used in documentation.


Conventions
The following conventions are used in this guide:

■ bold default font: Used for menu command paths (Select Fault Management -> Alarm Manager), button names (Click Apply), user interface labels (Click Summary panel), and window/dialog box titles (In the Dial-Up Networking window).
■ courier font: Used for user-entered text (In the Label enter EastBMM), command output (Database restore from local or remote machine?), and directory paths (/DNA/EMS).
■ default font, italic: Used for document titles (Refer to the Infinera DTN and DTN-X System Description Guide).
■ default font: Used for icon names (Click Node icon), window names not in the user interface (In the DNA Main View), and helpful suggestions presented as notes (Note: The window is refreshed only after making all the changes).


Technical Assistance
Customer Support for Infinera products is available 24 hours a day, 7 days a week (24x7). For
information or assistance with Infinera products, please contact the Infinera Technical Assistance Center
(TAC) using any of the methods listed below:
■ Email: [email protected]
■ Telephone:
□ Direct within United States: 1-408-572-5288
□ Outside North America: +1-408-572-5288
□ Toll-free within United States: +1-877-INF-5288 (+1-877-463-5288)
□ Toll-free within Germany/France/Benelux/United Kingdom: 00-800-4634-6372
□ Toll-free within Japan: 010-800-4634-6372
■ Infinera corporate website: http://www.infinera.com
■ Infinera Customer Web Portal: https://support.infinera.com
Please see the Infinera Customer Web Portal to view technical support policies and procedures, to
download software updates and product documentation, or to create/update incident reports and
RMA requests.


Documentation Feedback
Infinera strives to constantly improve the quality of its products and documentation. Please submit
comments or suggestions regarding Infinera Technical Product Documentation using any of the following
methods:
■ Submit a service request using the Infinera Customer Web Portal
■ Send email to: [email protected]
■ Send mail to the following address:
Attention: Infinera Technical Documentation and Technical Training
Infinera Corporation
140 Caspian Court
Sunnyvale, CA 94089
When submitting comments, please include the following information:
■ Document name and document ID written on the document cover page
■ Document release number and version written on the document cover page
■ Page number(s) in the document on which there are comments



CHAPTER 1

Introduction

The Infinera Intelligent Transport Network architecture includes intelligent embedded control software called the IQ Network Operating System (IQ NOS), which operates on the DTN-X, DTN, Optical Amplifier, XT, and FlexILS nodes. The IQ NOS software provides reliable and intelligent interfaces for the Operation, Administration, Maintenance and Provisioning (OAM&P) tasks performed by operating personnel and management systems. IQ NOS also includes an intelligent Generalized Multiprotocol Label Switching (GMPLS) control plane architecture, which provides automated end-to-end service provisioning, and a management plane architecture, which provides reliable and redundant communication paths for management traffic between the management systems and the network elements.
IQ NOS supports the following features:
■ Operates on DTN-X, DTN, Optical Amplifier, XT, and FlexILS nodes
■ Standards based operations and information model (ITU-T, TMF 814, Telcordia).
■ Extensive fault management capabilities including current alarm reporting, alarm reporting
inhibition, hierarchical alarm correlation, configurable alarm severity assignment profile, event
logging, environmental alarms, and export of alarm and event data.
■ Network diagnostics capabilities including digital path and digital section level loopbacks, client side loopbacks, circuit-level pseudo random binary sequence (PRBS-31) generation and detection, and trail trace identifier (TTI) and synchronous optical network (SONET)/synchronous digital hierarchy (SDH) J0 monitoring and insertion at the tributaries.
■ Automatic equipment provisioning and equipment pre-provisioning.
■ Fully automated network topology discovery including physical topology and service topology
views.


■ Robust end-to-end automated circuit routing and provisioning utilizing GMPLS routing and signaling
protocols. Highlights of this feature include the ability to pre-configure circuits, optional selection of
SNC path utilizing constraint based routing, and optional specification of the channel/sub-channel
number within an optical carrier group (OCG) for a subnetwork connection (SNC).
■ Flexible software and configuration database management including remote software upgrade/
rollback, configuration database backup and restore, and bulk File Transfer Protocol (FTP)
transfers.
■ Analog performance monitoring at every node, digital performance monitoring at DTNs and DTN-
Xs, and native client signal performance monitoring at tributaries.
■ Supports Network Time Protocol (NTP) to synchronize the timestamps on all alarms, events and
performance monitoring (PM) data across the network.
■ GR-815-CORE based security administration and support for Remote Authentication Dial-In User
Service (RADIUS).
■ Hitless software upgrades.
■ Multi-chassis configurations utilizing the nodal control ports (NC ports or NCT ports, depending on
the chassis type).
■ Redundant control plane communication paths utilizing two control modules.
■ Redundant management plane communication paths utilizing Gateway Network Element (GNE)
and Management Proxy services.
■ Telcordia compliant TL1 for operations support system (OSS) integration.
■ Open integration interfaces including the TL1 interface and CSV formatted flat files that can be
exported using secure FTP.



CHAPTER 2

Fault Management

IQ NOS provides extensive fault monitoring and management capabilities that are modeled after
Telcordia and ITU standards. All these capabilities are independent of the client signal payload type and
provide the ability to identify, correlate and correct faults based on actual digital and optical performance
indicators, leading to quicker problem resolution. Additionally, IQ NOS communicates all state and status
information of the network element automatically and asynchronously to the other network elements
within the Intelligent Transport Network and to all the registered management applications, thus
maintaining synchrony within the network.
IQ NOS provides the following fault management capabilities to help users in managing and maintaining
the network element:
■ Alarm Surveillance on page 2-2
■ Automatic Laser Shutdown (ALS) on page 2-11
■ Optical Layer Defect Propagation (OLDP) on page 2-16
■ Optical Loss of Signal (OLOS) Soak Timers on page 2-18
■ Software Controlled Power Reduction on page 2-23
■ Optical Ground Wire (OPGW) on page 2-24
■ Electronic Equalizer Gain Control Loop on page 2-25
■ Event Log on page 2-26
■ Maintenance and Troubleshooting Tools on page 2-27
■ Syslog on page 2-70


Alarm Surveillance
Alarm Surveillance functions include:
■ Detection of defects in the Infinera network elements and the incoming signals (see Defect
Detection on page 2-2).
■ Declaration of defects as failures (see Failure Declaration on page 2-2).
■ Reporting failures as alarms to the management applications (see Alarm Reporting on page 2-3).
■ Masking low level or lower order alarms in the presence of high level or higher order alarms (see
Alarm Masking on page 2-5).
■ Reporting alarms through local alarm indicators (see Local Alarm Summary Indicators on page 2-
6).
■ Configuring alarm reporting (see Alarm Configuration on page 2-6).
■ Isolating network faults utilizing Automatic Laser Shutdown feature (see Automatic Laser Shutdown
(ALS) on page 2-11).
■ Ability to configure the behavior of client tributaries in case the tributary is locked or faulted (see
Tributary Disable Action on page 3-41)
■ Ability to configure the encapsulated client disable action for certain TIMs and TAMs (see
Encapsulated Client Disable Action on page 3-46)

Defect Detection
IQ NOS detects and clears all hardware and software defects within the system. A defect is defined as a
limited interruption in the ability of an item to perform a required function. The detected defects are
analyzed and localized to the specific network site, network element, facility (or incoming signal) and
circuit pack. On detecting certain defects, such as defects in the incoming signal, IQ NOS transmits
maintenance signals to the upstream and downstream network elements indicating successful
localization of the defect. On termination of defects, IQ NOS stops transmitting maintenance signals. See
Automatic Laser Shutdown (ALS) on page 2-11 for more details.
The detection of facility defects, such as OLOS, AIS, BDI, etc., and transmission of maintenance signals
to the upstream and downstream network elements is in compliance with Telcordia and ITU
specifications.

Failure Declaration
Defects associated with facilities/incoming signals are soaked for a pre-defined period before they are declared as failures. This measure prevents spurious failures from being reported. When a defect is detected on a facility, it is soaked for 2.5 seconds (+/- 1 second) before the corresponding failure is declared. Similarly, when a facility defect clears, it is soaked for 12.5 seconds (+/- 2 seconds) before the corresponding failure is cleared. This prevents premature clearing of the failure.


Defects associated with hardware equipment, environmental alarms, and temperature-related alarms are
not soaked. The failure condition is declared as soon as the defect is detected. Similarly, the failure
condition is cleared as soon as the defect is cleared.
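
To make the soak behavior concrete, the following minimal Python sketch (illustrative only, not Infinera code; the timer values are simply those stated above) shows how a facility defect might be soaked before the corresponding failure is declared or cleared.

```python
import time

# Soak intervals taken from the description above (nominal values, in seconds).
DECLARE_SOAK_SECONDS = 2.5   # a facility defect must persist this long before a failure is declared
CLEAR_SOAK_SECONDS = 12.5    # the defect must stay clear this long before the failure is cleared

class FacilityDefectSoaker:
    """Illustrative soak-timer logic for a single facility defect (not Infinera code)."""

    def __init__(self):
        self.failure_declared = False
        self._raw_defect = False
        self._transition_time = None   # when the raw defect last changed state

    def update(self, raw_defect_present, now=None):
        """Feed the current raw defect state; return True while a failure is declared."""
        now = time.monotonic() if now is None else now
        if raw_defect_present != self._raw_defect:
            self._raw_defect = raw_defect_present
            self._transition_time = now            # any change restarts the soak timer
        if self._transition_time is None:
            self._transition_time = now
        elapsed = now - self._transition_time
        if self._raw_defect and not self.failure_declared and elapsed >= DECLARE_SOAK_SECONDS:
            self.failure_declared = True           # defect persisted long enough: declare failure
        elif not self._raw_defect and self.failure_declared and elapsed >= CLEAR_SOAK_SECONDS:
            self.failure_declared = False          # defect stayed clear long enough: clear failure
        return self.failure_declared

# A short glitch never becomes a failure; hardware and environmental defects skip soaking entirely.
soaker = FacilityDefectSoaker()
soaker.update(True, now=0.0)
print(soaker.update(True, now=1.0))    # False: defect has not yet soaked for 2.5 seconds
print(soaker.update(False, now=1.5))   # False: defect cleared before a failure was declared
```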

Alarm Reporting
IQ NOS reports the hardware and software failures as alarms. Detection of a failure condition results in
an alarm being raised which is asynchronously reported to all the registered management applications.
The clearing of a failure results in clearing the corresponding alarm, which is again reported
asynchronously to all the registered management applications. IQ NOS stores the outstanding alarm
conditions locally and they are retrievable by the management applications. Thus, at any given time users
see only the current standing alarm conditions.
Alarm reporting also depends on the administrative state (see Administrative State on page 3-36) of the managed object, the presence of other failure conditions, and the user configuration, as described below:
■ Administrative State—Alarms are reported when the administrative state of a managed object and
its ancestor objects are unlocked. When the administrative state of an object or any of its ancestor
objects are locked or in maintenance, alarms are not reported (except for the Loopback related
alarms). IQ NOS also supports alarms that indicate when a managed object is put in the locked or
maintenance administrative state. The severity of these alarms can be customized via the ASPS
feature (see Alarm Severity Profile Setting (ASPS) on page 2-9).
■ Alarm Hierarchy—An alarm is reported only if no higher priority alarms exist for the managed
object. Thus, only alarms corresponding to the root cause of the fault condition are reported. This
capability prevents too many alarms being reported for a single fault condition (see Alarm Masking
on page 2-5).
■ User Configuration—IQ NOS provides users the ability to selectively inhibit alarm reporting (see
Alarm Reporting Control (ARC) on page 2-7).
IQ NOS reports each alarm with sufficient information, as described below, so that the user can take appropriate corrective action to clear the alarm (see the illustrative sketch after this list). For a detailed description of all the parameters of alarms reported to the management applications, refer to the GNM Fault Management and Diagnostics Guide.
■ Alarm Category—This information isolates the alarm to a functional area of the system (see Alarm
Category on page 2-4 for the list of supported alarm types).
■ Alarm Severity—This information indicates the level of degradation that the alarm causes to service
(see Alarm Severity on page 2-5 for the list of supported severities). This information is reported
within the NTFCNCDE parameter in TL1 notifications.
■ Probable Cause—This information describes the probable cause of the alarm. This is a short
description of the detected problem. A detailed description is provided as Probable Cause
Description.
■ TL1 Condition Type—This field is analogous to the probable cause, except that the condition type
string is in accordance with the GR-833-CORE standard. This information is reported within the
CONDTYPE parameter in TL1 notifications.


■ Probable Cause Description—This information is an elaboration of the Probable Cause, providing a detailed description of the alarm and isolating the alarm to a specific area. This information is reported within the CONDDESCR parameter in TL1 notifications.
■ Service Affecting—This information indicates whether the given alarm condition interrupts data
plane services through the system or network. The two possible values are: ‘SA’ for service
affecting and ‘NSA’ for non-service affecting. An alarm is reported as service-affecting if the alarm
condition affects a hardware or software entity in the data plane, and the affected hardware or
software entity is administratively enabled. This information is reported within the SRVEFF
parameter in TL1 notifications.
■ Source Object—This information identifies the managed object on which the failure is detected.
This information is reported within the AID parameter in TL1 notifications.
■ Location—This information identifies the location of the managed object as near end or far end,
when applicable. This information is reported within the LOCN parameter in TL1 notifications.
■ Direction—This information indicates whether the alarm has occurred in the receive direction or in
the transmit direction, when applicable. This information is reported within the DIRN parameter in
TL1 notifications.
■ Time & Date of occurrence—This information provides the time at which the alarm was detected. It
is derived from the system time. IQ NOS provides users the ability to manually configure the
system time or enable Network Time Protocol (see Time-of-Day Synchronization on page 9-14) so
that an accurate and synchronized time is reported for all alarms. The time and date information
allows a root cause analysis of failures across network elements and networks. This information is
reported within the OCRDAT and OCRTM parameters in TL1 notifications.
■ Type—As described in PM Thresholding on page 5-4, IQ NOS supports performance monitoring
and thresholds, enabling early detection of degradation in system and network performance. The
threshold crossing conditions are handled utilizing the same mechanism as alarms. The type field
indicates whether the reported condition is an alarm or a threshold crossing condition.
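
The following Python sketch shows how the alarm attributes listed above might be collected into a single alarm record such as those stored in the alarm table. It is illustrative only; the field values (for example, the "1-A-3-L1" source object AID and the "OLOS" condition type) are hypothetical examples, not an authoritative list.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AlarmRecord:
    """Illustrative alarm record built from the attributes described above."""
    category: str              # e.g. "Facility", "Equipment", "Communications"
    severity: str              # reported in the TL1 NTFCNCDE parameter
    probable_cause: str        # short description of the detected problem
    tl1_condition_type: str    # reported in the TL1 CONDTYPE parameter
    probable_cause_descr: str  # reported in the TL1 CONDDESCR parameter
    service_affecting: str     # "SA" or "NSA", reported in the TL1 SRVEFF parameter
    source_object: str         # managed object (AID) on which the failure is detected
    location: str              # near end or far end, when applicable (TL1 LOCN)
    direction: str             # receive or transmit, when applicable (TL1 DIRN)
    occurred_at: datetime      # reported in the TL1 OCRDAT/OCRTM parameters
    is_tca: bool = False       # True for a threshold crossing condition, False for an alarm

# Hypothetical example record (values chosen for illustration only).
example = AlarmRecord(
    category="Facility",
    severity="Critical",
    probable_cause="Optical Loss of Signal",
    tl1_condition_type="OLOS",
    probable_cause_descr="Loss of signal detected on the line-side receive port",
    service_affecting="SA",
    source_object="1-A-3-L1",
    location="Near End",
    direction="Receive",
    occurred_at=datetime.now(timezone.utc),
)
print(example.severity, example.tl1_condition_type, example.source_object)
```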
IQ NOS records all the current alarms with alarm details, as described above, in an alarm table. The
alarms are persisted in the controller module across reboots. After a system reboot or a controller module
reboot, the alarms in persistent storage are compared to the current system status in order to remove any
cleared alarms and maintain only the current outstanding alarms.
Upon reboot or switchover of the active controller module, all alarms that were asserted before the reboot or switchover are reasserted after the controller module recovers. Eight minutes after the system becomes active, any alarms not re-detected by the system are removed. This ensures that fault conditions which cleared during the controller module reset are cleared if the conditions that originally caused the alarms are no longer present.
Refer to the DTN and DTN-X Alarm and Trouble Clearing Guide for the detailed description of all the
alarms reported by IQ NOS and the corresponding clearing procedures.

Alarm Category
IQ NOS categorizes the alarms into the following types:


■ Facility Alarm—Alarms associated with the line and tributary incoming signals. For example: OLOS,
LOF, and AIS.
■ Equipment Alarm—Alarms associated with hardware failures. For example: Equipment Failure, and
Equipment Unreachable.
■ Communications Alarm—Alarms associated with communication failures within the network
element and between network elements. For example: No Communication with OSC Neighbor
(LOC OSC).
■ Software Processing Alarm—Alarms associated with software processing errors. For example,
Software Upgrade Has Failed, and Persistence Space Less Than 2%-Critical.
■ Environmental Alarm—Alarms caused by the change in the state of the environmental alarm input
contact.

Alarm Severity
Each alarm, TCA, and TCC generated by IQ NOS has one of the following default severity levels:
■ Critical—Indicates that a service affecting condition has occurred and an immediate corrective
action is required. This severity is reported, for example, when a managed object is rendered out-
of-service by a failure and must be restored to operation in order to recover lost system
functionality.
■ Major—Indicates that a service affecting condition has developed and an urgent corrective action is
required. This severity is reported, for example, when there is a severe degradation in the capability
of the managed object and full capability must be restored in order to recover lost system
functionality.
■ Minor—Indicates the existence of a non-service affecting fault condition and that corrective action
should be taken in order to prevent a more serious (for example, service affecting) fault. Such a
severity is reported, for example, when the detected alarm condition is not currently degrading the
capacity of the managed object.
■ Warning—Indicates the detection of a potential or impending service affecting fault, before any
significant effects have been felt. Action should be taken to further diagnose (if necessary) and
correct the problem in order to prevent it from becoming a more serious service affecting fault.
Note: This severity level maps to the non-alarmed standing condition in TL1.
With the exception of Warning, the alarm severity levels are referred to as the notification code in
GR-833-CORE, and are reported as such in TL1 notifications.
Users can customize the severity associated with an alarm, TCA, or TCC through the management
applications (see Alarm Severity Profile Setting (ASPS) on page 2-9.)

Alarm Masking
IQ NOS provides an alarm masking feature that complies with, and extends, GR-253 Section 6.2.1.8.2
and GR-474 Section 2.2.2.1. The network element masks (suppresses) higher layer alarms associated
with the same root cause as a lower level alarm. This prevents logs and management applications from
being flooded with redundant information. Suppression is based on a logical hierarchy. For instance,


when a network element experiences an Optical Transport Section (OTS) - Optical Loss of Signal (OLOS) failure, the network element reports the OLOS-OTS alarm, but the associated Band - OLOS, Channel - Loss of Frame (LOF), and Band - Optical Power Received (OPR) Out of Range - Low (OORL) alarms, and all other associated alarms with the same root cause, are suppressed. These conditions are still retrievable by request.
The masked condition is neither reported to the management applications nor recorded in the alarm table.
For individual alarm descriptions and the alarm masking hierarchy, refer to the DTN and DTN-X Alarm
and Trouble Clearing Guide or the GNM Fault Management and Diagnostics Guide.
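
As a rough illustration of the masking behavior described above, the following Python sketch suppresses alarms whose root cause is already covered by a reported root-cause alarm. The hierarchy shown is a simplified, hypothetical subset with shortened alarm identifiers; the actual masking hierarchy is documented in the guides referenced above.

```python
# Simplified, hypothetical masking hierarchy: each alarm maps to the alarms it masks.
# The real hierarchy is documented in the DTN and DTN-X Alarm and Trouble Clearing Guide.
MASKING_HIERARCHY = {
    "OLOS-OTS": {"OLOS-BAND", "LOF-CHANNEL", "OPR-OORL-BAND"},
    "OLOS-BAND": {"LOF-CHANNEL"},
}

def alarms_to_report(detected_alarms):
    """Return only the alarms that are not masked by another detected alarm."""
    masked = set()
    for alarm in detected_alarms:
        masked |= MASKING_HIERARCHY.get(alarm, set())
    # Masked conditions are suppressed from reporting but remain retrievable on request.
    return [a for a in detected_alarms if a not in masked]

# An OTS-level loss of signal masks the associated Band and Channel alarms.
print(alarms_to_report(["OLOS-OTS", "OLOS-BAND", "LOF-CHANNEL", "OPR-OORL-BAND"]))
# -> ['OLOS-OTS']
```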

Local Alarm Summary Indicators


Infinera network elements provide local visual and audio indicators to report the summary of the current alarm conditions of a network element and each of its chassis to the local personnel. For a detailed description of the indicators and their functions, refer to the Infinera Hardware Description Guide Portfolio and the GNM Fault Management and Diagnostics Guide.
Following is a brief summary of the local indicators provided by the network elements:
■ Bay Level Visual Alarm Indicators—These indicators provide the summary of the outstanding alarm
conditions of all chassis within a bay. A bay level visual alarm indicator (LED) is lit if there is at least
one corresponding outstanding alarm condition in any of the chassis within the bay.

Note: Bay-level LEDs are supported on DTCs, MTCs, and XTC-4s only.

■ Chassis Level Visual Alarm Indicators—These indicators provide the summary of the outstanding
alarm conditions of the chassis. A chassis level visual alarm indicator is lit if there is at least one
corresponding outstanding alarm condition within the chassis.
■ Chassis Level Office Alarm Indicators—As described in Office Alarms, the network elements
provide alarm output contacts to support chassis level visual and audio indication of critical, major
and minor alarms. As described in Alarm Cutoff, ACO buttons and ACO LEDs are also supported.
■ Card Level Visual Indicators—All circuit packs include LEDs to indicate the card status.
■ Port Level Indicators—These indicators are provided for each tributary port and line port.

Alarm Configuration
The following features are used to customize the alarm reporting to the management applications and
interfaces:
■ Alarm Reporting Control (ARC) on page 2-7 (see below)
■ Alarm Severity Profile Setting (ASPS) on page 2-9
■ Customizable Timer-Based Alarms on page 2-9
■ Power Draw Alarm on page 2-9


Alarm Reporting Control (ARC)


The Alarm Reporting Control (ARC) feature allows users to silence alarms on a managed object that is
administratively unlocked, but is being serviced or is awaiting valid signal flow. This feature is useful for
performing maintenance on a piece of equipment in an alarm-free state. For all managed objects, the
Alarm Reporting option is enabled by default, meaning that alarms are reported for a managed object
unless the user specifically turns off Alarm Reporting for the managed object.

Note: The TL1 commands used to control the Alarm Reporting option are OPR-ARC (operate ARC)
and RLS-ARC (release ARC). The OPR-ARC command is used to disable alarm reporting, and the
RLS-ARC command is used to re-enable alarm reporting. See the DTN and DTN-X TL1 User Guide
for more information on configuring ARC via the TL1 interface.

Note: Although it is possible to use ARC to suppress OLOS alarms on newly installed tributary
interfaces whose services have not yet been turned up, it may be more convenient to use the
Automatic In-Service (AINS) feature. The AINS feature automatically suppresses alarms on a
tributary until the entity is fault-free for a configured time period, at which time the tributary is declared
to be “In-Service”. Unlike the ARC feature, the AINS feature automatically puts tributary interfaces
into service once all faults are cleared. For more information on AINS, see Automatic In-Service
(AINS) on page 3-40.

When Alarm Reporting is turned off for a managed object, the reporting of alarms, events, and TCAs/TCCs for the specified entity is stopped for all the management interfaces. Although the managed object may be detecting alarms such as OLOS, the alarms are neither transmitted to any client nor reported to
the management applications. Turning off Alarm Reporting also suppresses status indicators, such as
LEDs and audio/visual indicators. When Alarm Reporting is disabled for a managed object, alarms are
also inhibited for all the contained and supported managed objects. For example, when alarm reporting is
inhibited for the chassis object, alarm reporting is inhibited for all the circuit pack objects within that
chassis. See Managed Objects on page 3-3 for the description of the managed objects and relationship
between them.
The inhibited alarms are logged in the event log and are retrievable through the TL1 Interface. Note that
the DNA and GNM will not retrieve this information.
When Alarm Reporting is disabled for a managed object, the default ARC behavior is to maintain all pre-
existing alarms for the managed object; these alarms are cleared as usual when the alarm condition no
longer exists. However, this behavior can be re-configured on the network element to cause pre-existing
alarms on an object to be cleared when Alarm Reporting is disabled on that object. In this case, once
Alarm Reporting is re-enabled, existing alarms (including pre-existing alarms that are still outstanding) will
be reported. This switch is configured on a per-node basis, and the behavior of the two settings (the
default Leave Outstanding Alarms and the override Clear Outstanding Alarms) is shown in Figure 2-1:
ARC Behavior (Leave Outstanding Alarms vs. Clear Outstanding Alarms) on page 2-8 below.


Figure 2-1 ARC Behavior (Leave Outstanding Alarms vs. Clear Outstanding Alarms)

Note that the ARC behavior is the same for alarm events that are raised during the ARC period (Scenario
#1 and Scenario #2), regardless of whether ARC is set to Leave Outstanding Alarms or Clear
Outstanding Alarms.
■ When alarm conditions are raised and cleared during the ARC period (Scenario #1), the alarms are
not reported to the management interfaces.
■ When alarm conditions are raised during the ARC period but are not cleared during the ARC period
(Scenario #2), the alarms are reported to the management interfaces only at the end of the ARC
period, and the clearing event is reported to the management interfaces when the alarm is cleared.
However, the ARC behavior is different when alarm events are raised before the beginning of the ARC
period (Scenario #3 and Scenario #4), depending on whether ARC is set to Leave Outstanding Alarms or
Clear Outstanding Alarms:
■ When ARC is configured to Leave Outstanding Alarms, any pre-existing alarms will remain
outstanding and a clearing event will be reported to the management interfaces when the alarm
condition is cleared. In Scenario #3 the clearing event happens during the ARC period, and in
Scenario #4 the clearing event happens after the ARC period.
■ When ARC is configured to Clear Outstanding Alarms, any pre-existing alarms are cleared when
Alarm Reporting is disabled and a clearing event is sent to the management interfaces at the start
of the ARC period. If the alarm is cleared during the ARC period, the management interfaces will
not receive another clearing event. If the alarm is still outstanding at the end of the ARC period, the
management interfaces will receive a new alarm event for the alarm, and then will receive a
clearing event when the alarm is cleared.
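
The following Python sketch (illustrative only, not Infinera code) captures the difference between the two per-node ARC settings for a pre-existing alarm, as described in the scenarios above.

```python
def arc_events_for_preexisting_alarm(setting, clears_during_arc):
    """
    Return the events a management interface would see for an alarm that was
    already outstanding when Alarm Reporting was disabled (Scenarios #3 and #4).
    'setting' is "leave" (Leave Outstanding Alarms) or "clear" (Clear Outstanding Alarms).
    """
    events = []
    if setting == "clear":
        # Pre-existing alarms are cleared toward the management interfaces
        # at the start of the ARC period.
        events.append("clear event at start of ARC period")
        if not clears_during_arc:
            # Still faulty at the end of ARC: the alarm is re-raised, then
            # cleared when the condition finally goes away.
            events.append("new alarm event at end of ARC period")
            events.append("clear event when condition clears")
        # If the condition clears during the ARC period, no further events are sent.
    else:  # "leave"
        # Pre-existing alarms remain outstanding; only a clearing event is sent
        # when the condition goes away (during or after the ARC period).
        events.append("clear event when condition clears")
    return events

print(arc_events_for_preexisting_alarm("leave", clears_during_arc=True))
print(arc_events_for_preexisting_alarm("clear", clears_during_arc=False))
```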


Alarm Severity Profile Setting (ASPS)


The Alarm Severity Profile Setting (ASPS) feature allows users to modify the default severity of an alarm
type, threshold crossing alert (TCA), or threshold crossing condition (TCC) on a per managed object-type
basis. IQ NOS also supports alarms that indicate when an entity is put in the locked or maintenance
administrative state. The severity of these alarms can also be customized via the ASPS feature.
ASPS is configured via the management applications so that users can modify the default severities of
alarms according to their fault-handling strategies. Note that user modifications of severity level take
effect for the newly-generated alarms, TCAs, or TCCs; if an alarm, TCA, or TCC is currently active, its
severity is not changed by the user modification.

Note: The severity is modified per object type, and not on a per managed object basis. For example,
when the severity of OLOS of an OCG termination point is modified, the new severity is applied to
OLOS alarms reported by all OCG termination points.

Note: The severity of an environmental alarm is assigned by the user when the alarm is provisioned.
The ASPS feature cannot be used to modify the provisioned severity of environmental alarms.
However, the severity of an environmental alarm can be changed from the Alarm Input Contact
window in the management applications.

ASPS allows the user to configure protection switching actions as alarms (see Protection Switch Alarm
Reporting on page 4-136).

Customizable Timer-Based Alarms


Infinera nodes support up to three user-created timer-based alarms that raise and clear a standing alarm
condition based on configurable timers. The timer-based alarms can be set as reminders for timed
maintenance events, such as air filter replacement. The timer-based alarms are supported by all
management and northbound interfaces and can be configured to activate chassis-level LEDs. Each
timer-based alarm can be customized with user-specified probable cause descriptions and alarm
messages. The severity level can be set to Critical, Major, Minor, Warning, or Not Reported, and the
service affecting value can be set to Non-Service Affecting or Service Affecting.

Power Draw Alarm


For XTC-10, XTC-4, XTC-2, XTC-2E, MTC-6, and MTC-9, the chassis raises an alarm when a module (line modules or TIMs for XTC; IAM, IRM, FRM, or FSM for MTC-n) is installed in the chassis and the currently available power is not sufficient to support the new module. The chassis does not allow the module to fully power up; the module remains in a reset state consuming minimal power, and the chassis raises a Power Control (PWRCTRL-INIT) alarm indicating that the system requires more power than is available (see XTC Chassis Power Control on page 3-53 and MTC-9/MTC-6 Chassis Power Control on page 3-54).
Once available power increases sufficiently, the Power Control alarm clears and the controller module
automatically powers up the modules in the reset state.

Note: This applies only to newly-installed or re-seated modules; if these modules are cold reset the
XCM/IMM does not interfere with the reboot.


For DTC, MTC, and OTC, the user can configure the ideal maximum electrical power draw (in Watts) for
the chassis (see MTC/DTC Chassis Power Control on page 3-55). This power draw limit is compared
against the total maximum (worst-case) power draw for all of the equipment provisioned (or pre-
provisioned) in the chassis, and the chassis raises an alarm if the sum of the power values for the
provisioned/pre-provisioned equipment in the chassis exceeds the user-configured maximum power limit.
This feature is especially useful when a chassis is deployed in a co-location environment where “rented
power” limits may be enforced/limited by the service provider providing the co-location environment.

Note: This feature does not limit power draw, but instead provides a configurable alarm if the system
equipment is calculated to exceed the user-configured maximum.

Note: The chassis has no means for reporting its actual current draw, so instead, the user-configured
maximum power draw limit is compared against the sum of the maximum power draw values for the
equipment currently provisioned (or pre-provisioned) in the chassis.

When provisioning a new piece of equipment in a chassis, the equipment’s estimated power draw is
added to the estimate of the total power draw for the chassis. If the newly computed power consumption
exceeds the user-configured maximum power draw value, the chassis raises a “Power Draw”
(PWRDRAW) alarm.
The Power Draw (PWRDRAW) alarm is cleared when:
■ The user increases the configured maximum power draw value for the chassis to a value that is
equal to or greater than the total estimated power draw value.
■ Pre-provisioned or provisioned equipment is deleted (or removed and then deleted, in the case of
provisioned equipment) from the network element's database. The network element will then re-
evaluate the estimated power draw. If the estimated power draw value is equal to or less than the
configured maximum power draw value, the Power Draw alarm is cleared.
See Power Draw of Equipment on page 3-52 for more information about configuring the power draw
settings for a chassis.
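
As a simple illustration of the calculation described above (a sketch only; the equipment names and wattages below are hypothetical), the estimated chassis power draw is the sum of the worst-case power values of the provisioned and pre-provisioned equipment, and the Power Draw (PWRDRAW) alarm is raised whenever that sum exceeds the user-configured maximum.

```python
# Hypothetical worst-case power draw values (Watts) for provisioned/pre-provisioned equipment.
provisioned_equipment_watts = {
    "controller-module": 60,
    "tributary-module-1": 120,
    "line-module-1": 250,
}

configured_max_power_draw_watts = 400  # user-configured limit for the chassis

def power_draw_alarm_raised(equipment_watts, configured_max):
    """Return whether the (illustrative) Power Draw alarm is raised, plus the estimate."""
    estimated_total = sum(equipment_watts.values())
    return estimated_total > configured_max, estimated_total

raised, total = power_draw_alarm_raised(provisioned_equipment_watts,
                                        configured_max_power_draw_watts)
print(f"Estimated draw: {total} W, limit: {configured_max_power_draw_watts} W, "
      f"PWRDRAW alarm raised: {raised}")
# The alarm clears when the limit is raised to at least the estimate,
# or when enough provisioned/pre-provisioned equipment is deleted from the database.
```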


Automatic Laser Shutdown (ALS)


Infinera nodes implement an Automatic Laser Shutdown (ALS) feature to isolate and contain a fiber cut
on a digital link. When a BMM, OAM, ORM, or IAM detects an OTS OLOS condition on the receive link,
the module shuts down its upstream band laser and transmits a BDI-OTS signal across the OSC to the
upstream node. The BDI-OTS signal and the absence of a C-band signal prompt the upstream node to turn off its C-band transmit laser. The OSC signal is not shut down at either end. This link is now in the
ALS state. The general functions of ALS are to:
■ Shut down lasers directed towards the cut to comply with eye safety requirements. In the case of
ORMs, the module shuts down the Raman pumps as well.
■ Shut down lasers directed away from the cut (upstream and downstream) to protect the equipment
from sudden power surges when the repair is effected.
■ Communicate failure detection to the upstream and downstream digital link tail-end nodes so that
AIS can be injected on affected customer circuits (this is communicated by virtue of propagating
laser shutdown to upstream and downstream nodes).

Note: BDI-OTS and FDI-OTS conditions are not exposed in the user management interfaces; they
are detected and used internally by the system for ALS.

When the fiber is recovered, the OTS OLOS condition clears (recall that the OSC signal does not shut
down in the ALS state, so once the fiber is recovered, both ends will receive the OSC from the far end
and the OTS OLOS condition is cleared). Once the OTS OLOS condition clears, the C-band laser will
automatically turn back on, thus clearing the BDI-OTS signal sent towards upstream node. The upstream
node receives the C-band signal with no BDI-OTS signal, and therefore the upstream node turns on its C-
band laser, which clears the C-band OLOS at the near end. This link is now in the normal state.
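
The following Python sketch is a simplified model (not Infinera code) of the ALS shutdown and recovery handshake described above, from the point of view of one direction of a link: the node that detects the cut shuts its C-band laser and signals BDI-OTS over the OSC, and the upstream node reacts to BDI-OTS plus the missing C-band signal.

```python
def als_response(ots_olos_detected, bdi_ots_received, c_band_received):
    """
    Simplified decision logic for one end of a link, per the description above.
    Returns whether the local C-band transmit laser should be on and whether a
    BDI-OTS indication is sent upstream over the OSC (the OSC itself stays on).
    """
    if ots_olos_detected:
        # Fiber cut seen on the receive side: shut the upstream-facing C-band laser
        # and signal BDI-OTS to the upstream node over the OSC.
        return {"c_band_tx_on": False, "send_bdi_ots": True}
    if bdi_ots_received and not c_band_received:
        # Downstream node reports a cut on our transmit fiber: shut our C-band laser too.
        return {"c_band_tx_on": False, "send_bdi_ots": False}
    # Normal state, or recovery: OLOS cleared, so the C-band laser turns back on.
    return {"c_band_tx_on": True, "send_bdi_ots": False}

# Node B detects the cut; node A (upstream) reacts to BDI-OTS plus the missing C-band signal.
print(als_response(ots_olos_detected=True, bdi_ots_received=False, c_band_received=False))
print(als_response(ots_olos_detected=False, bdi_ots_received=True, c_band_received=False))
# After the fiber is repaired, both ends see the OSC again and return to the normal state.
print(als_response(ots_olos_detected=False, bdi_ots_received=False, c_band_received=True))
```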

Note: For SLTE links, which operate without the OSC (see Network Applications in the DTN and DTN-X System Description Guide), once ALS is triggered there is no automatic way for the
link to recover. ALS on the link must be manually disabled and then re-enabled. Alternatively, ALS
can be permanently disabled for SLTE links in order to support faster recovery from link failures. To
enable this feature, contact an Infinera Technical Assistance Center (TAC).

Note there is specialized ALS behavior for the following types of modules/configurations:
■ Raman Amplifier Modules (RAM-1, RAM-2-OR, and REM-2), see ALS with Raman Modules
(RAM-1, RAM-2-OR, and REM-2) on page 2-13.
■ Booster Amplifier/Preamplifier configurations, see ALS for Booster Amplifier/Preamplifier
Configurations on page 2-12.
■ IAMs and IRMs, see ALS with IAMs and IRMs on page 2-15.

ALS Disabling and ALS Administration Policy


The ALS feature can be disabled for a specified time interval via the management interfaces. In terrestrial
applications, disabling ALS is not recommended in a live network. This option can be used during lab or


field trials testing to measure the power levels in one direction when a single (i.e., uni-directional) fiber cut
is present. In order to disable ALS, the user must have a user account specifically configured with
“Restricted Access” privileges.
To prevent users from disabling ALS on modules with Raman amplifiers, a user with Network
Administrator privileges can set the network element’s ALS Administration Policy to “block.” When the
network element’s ALS Administration Policy is set to “block,” the network element does not allow users
to disable ALS on modules with Raman functionality: RAMs, REMs, ORMs, and IRMs. This setting does
not change the behavior for ALS on BMMs, IAMs, nor OAMs. The default setting is “do not block,” which
means that users are allowed to disable ALS on modules with Raman amplification.
For SLTE configurations, BMMs configured to SLTE mode and IAMs configured for SLTE or SLTE_TLA
mode support ALS disabling in order to allow the system to continue operating after a break in fiber
connectivity. ALS can be disabled in one of two modes:
■ Timer based—ALS may be disabled for a finite period of time. In this mode, a timer is set and ALS
is disabled until the expiration of the timer.
■ Permanent—ALS is permanently disabled, meaning the laser is on and continues to transmit even in the presence of ALS triggers that would otherwise shut down the laser. ALS functionality is not supported and is never triggered. ALS-related configuration settings are ignored for the IAM.

Note: Contact an Infinera Technical Assistance Center (TAC) for assistance in permanently disabling
ALS.

ALS for Booster Amplifier/Preamplifier Configurations


For configurations with BMM2Ps and booster amplifiers or preamplifiers (see DTN/DTN-X with ORM/OAM
Preamplifier Configuration for BMM2P, BMM2P with RAM-2-OR and OAM-CXH1-MS, and Optical
Amplifier with Booster Amplifier), there are some additional ALS behaviors that should be noted:
■ ALS cannot be disabled for any module configured as a preamplifier (OAM-CXH1-MS, OAM-CXH2-
MS, OAM-CXH3-MS, ORM-CXH1-MS, or ORM-CXH1), nor for any module configured as a booster
amplifier (OAM-CXH1-MS).
■ ALS is triggered by OTS OLOS as with other configurations. But in addition, for an ORM-CXH1 with
a booster amplifier and for a BMM2P with a preamplifier, ALS will also trigger when the patch cable
is broken.
■ When OTS OLOS is detected on a preamplifier (OAM-CXH1-MS, OAM-CXH2-MS, OAM-CXH3-
MS, ORM-CXH1-MS, or ORM-CXH1), or on a booster amplifier (OAM-CXH1-MS), the module
shuts down its OSC (downstream) transmitter. This behavior is different from the standard behavior
of other modules, which continue transmitting the OSC during ALS.
For DTN-X configurations with BMM2Cs and preamplifiers (see DTN-X or Optical Amplifier with
OAM/ORM Preamplifier for BMM2C), there are some additional ALS behaviors that should be noted:
■ ALS cannot be disabled for an OAM/ORM configured as a preamplifier for a BMM2C.
■ ALS is triggered (the BMM2C transmit EDFA shuts down) if OTS OLOS is detected by the
OAM/ORM preamplifier. In addition, the preamplifier EDFA is muted via software power reduction.


■ ALS will also trigger when the OTS patch cable between the BMM2C and the OAM/ORM
preamplifier is broken (the preamp EDFA will be muted to an eye safe level of 10dBm or less).
■ There is no ALS trigger if the OSC patch cable between the BMM2C and the OAM/ORM
preamplifier is removed.

ALS with Raman Modules (RAM-1, RAM-2-OR, and REM-2)


Raman Amplifier Modules (RAMs; see Raman Amplifier Module (RAM)) increase the single span reach
between two network elements. Since Raman amplification is implemented between the BMMs/OAMs/
ORMs/IAM-2s and the transmission fiber, ALS behavior is slightly modified when Raman amplification is
present.

Note: This section describes the behavior for RAMs (RAM-1, RAM-2-OR, and REM-2). ORM modules
behave similarly to BMMs and OAMs, as discussed in the previous section. Also note that IRMs have
a different behavior than the RAMs. IRMs are discussed in the next section (see ALS with IAMs and
IRMs on page 2-15).

Note: The RAM-1, RAM-2-OR, and REM-2 are not supported for configurations with IAM-1.

Because of their high power levels, RAMs generate a significant amount of Amplified Spontaneous Emission (ASE) noise, so the system cannot rely on detecting out-of-range C-band and OSC signal powers for ALS, as is done in non-Raman systems. For nodes that use RAMs, ALS is instead implemented via a dedicated 1610nm pilot laser on the counter-pump Raman modules (RAM-1s and RAM-2-ORs only; the REM-2 module can detect but not generate a pilot tone). The pilot laser output is launched co-propagating with the payload signal, and is modulated to produce one of two tone signals that facilitate the link shutdown and restoration processes:
■ Remote Receive Fault (RRF)—Used to notify the RAM in the far end of the link of a fiber break in
the opposite fiber span as detected by the near-end receiver. This prompts the far-end RAM to turn
off its pumps.
■ Normal (NRM)—Used to notify the RAM in the far end of the link to turn on its pumps (if and when it
detects the tone).
These tones are generated by the RAM-1 or RAM-2-OR module at the near end of the link and detected
by the corresponding RAM-1, or RAM-2-OR at the far end of the link (see Figure 2-2: Pilot Lasers in
RAMs on page 2-14).


Figure 2-2 Pilot Lasers in RAMs

Note: The pilot tone resides at 1610nm on the same fiber as the OSC and the OCGs. No additional
fiber is required to carry the pilot tone.
Based on the detection of the pilot tones, three ALS states are defined:
■ NoSignal: No ALS tone detected. An ALS event will be triggered.
■ RemoteRxFault: RRF tone detected, indicating an ALS event is detected by the upstream amplifier. An ALS event will be triggered.
■ Normal: NRM tone detected. No ALS event present.
The pilot lasers will detect all fiber breaks occurring in the main fiber spans between the two RAM
modules. However, they are incapable of detecting fiber breaks in the local fiber spans between each
BMM/OAM/ORM/IAM-2 and RAM pair. For this purpose the RAM-1 and RAM-2-OR modules will rely on
C-band and OSC optical power detection from the BMM/OAM/ORM/IAM-2. ASE interference is not an
issue here since the pump lasers are located at the far end of the link.
Based on the detection of the BMM/OAM/ORM/IAM-2 C-band and OSC signals an additional ALS state is
defined:
■ LocalRxFault: No C-band or OSC signal detected, indicating a fiber break in the local span. An ALS event will be triggered.
The LocalRxFault state has precedence over the other three states. While in this state the RAM module
will ignore any detected pilot tones.
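
The following Python sketch (illustrative only, not Infinera code) shows how the four ALS states described above could be derived from the local C-band/OSC detection and the received pilot tone, with LocalRxFault taking precedence over the tone-based states.

```python
def raman_als_state(local_c_band_or_osc_present, pilot_tone):
    """
    Derive the ALS state of a RAM-1/RAM-2-OR from the inputs described above.
    'pilot_tone' is "NRM", "RRF", or None (no tone detected).
    """
    if not local_c_band_or_osc_present:
        # Fiber break in the local span between the BMM/OAM/ORM/IAM-2 and the RAM.
        # This state takes precedence; any detected pilot tone is ignored.
        return "LocalRxFault"   # ALS event triggered
    if pilot_tone is None:
        return "NoSignal"       # ALS event triggered
    if pilot_tone == "RRF":
        return "RemoteRxFault"  # upstream amplifier detected an ALS event; ALS triggered
    return "Normal"             # NRM tone detected; no ALS event present

print(raman_als_state(True, "NRM"))    # Normal
print(raman_als_state(True, None))     # NoSignal
print(raman_als_state(False, "NRM"))   # LocalRxFault (pilot tone ignored)
```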

Note: The four ALS states apply only to the RAM-1 and RAM-2-OR modules. For links which
incorporate a REM-2 module, there is a control line sent via the backplane to allow the RAM-2-OR
module to turn on or off the REM-2 pump lasers. This dictates that a span that requires both a
RAM-2-OR and REM-2 module must have these modules in the same chassis.


ALS with IAMs and IRMs


The IAM supports ALS similarly to BMMs and OAMs, as described previously. IRMs, however, rely on the
pilot tone OLOS to detect a disconnected fiber and trigger ALS.

Note: The module type at each end of a link must match: Both modules must be IAMs or both
modules must be IRMs. It is not supported to have a link with an IRM at one end and an IAM at the
other end.

Note: For information on supported interconnectivity of IAM-1, IAM-2, and IRM, see FlexILS Optical Line Amplifier - Network Applications in the DTN and DTN-X System Description Guide.

IAMs and IRMs utilize the chassis backplane for ALS functionality. Therefore, note the following
requirements for configurations with IAMs/IRMs:
■ For ROADM configurations (see FlexILS Reconfigurable Optical Add/Drop Multiplexer (ROADM) and DTN-X with ROADM - Node Configurations in the DTN and DTN-X System Description Guide), which use both an FRM and an IAM or IRM for each direction, the IAM/IRM must be in the same MTC-9/MTC-6 chassis as its associated FRM, and the band PTP of the IAM/IRM must be associated with the band PTP of the FRM.
■ For FlexILS Optical Line Amplifier configurations (see Network Applications in the DTN and DTN-X System Description Guide), which use an IAM or IRM for each direction, both amplifier modules (which can be IAMs, IRMs, or one of each) must be in the same MTC-9/MTC-6 chassis, and the band PTPs of the two modules must be associated with each other.


Optical Layer Defect Propagation (OLDP)


FlexILS nodes (including nodes configured with XT-500F) support the Optical Layer Defect Propagation
(OLDP) feature, a standards-based (G.709 and G.798) propagation of defects in the FlexILS network to
isolate faults and ensure that an alarm is raised only for the root cause of a fault instead of multiple
alarms throughout the network.
OLDP includes status exchange between FlexILS nodes for the following:
■ Optical transport section (OTS) layer defects
■ Optical multiplex section (OMS) layer defects
■ OCh layer defects (which includes faults on super channels and OCGs)
■ OSC failures
The figure below shows the optical layers between FlexILS nodes and on which managed entity and
module each layer originates/terminates. (Note that for FRM-4D/FRM-20X in Standalone with OSC mode,
the OTS and Band CTP are housed on the FRM-4D/FRM-20X. The figure below applies to configurations
with FRM-9D or with FRM-4D/FRM-20X in Paired without OSC mode.)

Figure 2-3 Optical Layers Between FlexILS Nodes

Link-level optical layer defects are communicated using the overhead bits on the OSC. The IAM/IRM/
FRM-4D/FRM-20X receives information on upstream faults on the overhead bits of the incoming OSC.
The outbound IAM/IRM/FRM-4D/FRM-20X injects the required fault bits on the OSC overhead before
transmitting the OSC. Local faults are suppressed based on the fault bits received from the upstream
node. Optical layer alarms and status are thus transmitted from head-end node to tail-end node.
The table below lists the OLDP faults and the layer(s) that support each fault (an “X” indicates support):


Table 2-1 Fault Bits Supported by Each Layer


Fault Bits Supporting Layer
OTS OMS OCh
Backward Defect Indication Payload (BDIP) X X X
Payload Missing Indication (PMI) X X
Open Connection Indicator (OCI) X
Forward Defect Indication Payload (FDIP) X X
Backward Defect Indication Overhead (BDIO) X X
Forward Defect Indication Overhead (FDIO) X X
Client Signal Failure (CSF) X

Please note the following for OLDP:


■ OLDP is supported for nodes using native (terrestrial) configuration; OLDP is not supported for
SLTE configurations.
■ For propagation of OLDP faults on a node, the IRM/IAM/FRM-4D/FRM-20X OTS must be enabled
for OAM control, and the peer IAM/IRM/FRM-4D/FRM-20X OTS must also be enabled for OAM
control. (The default is for OAM control to be enabled.)
■ OLDP faults are reported on the OLDP SCH entity on the IAM, IRM, and FRM-4D/FRM-20X.
■ The OTS PTP supports an OLDP Version Mismatch alarm that is reported if the connected node is
pre-Release 16.2.


Optical Loss of Signal (OLOS) Soak Timers


The OLOS soak timers are used to delay the response to an OLOS condition in the case of short line-side
fiber glitches or fiber mishandling. The DTN-X and DTN support the following types of OLOS soak timers,
which are described in the following sections:
■ C-band OLOS Soak Timer on page 2-18
■ BMM OCG OLOS Soak Timer on page 2-20
■ SCG OLOS Soak Timer on page 2-21

C-band OLOS Soak Timer


The C-band OLOS soak timer is supported for optical connections on the following modules:
■ BMM-4-CX1-A*
■ BMM-4-CX2-MS-A*
■ BMM-4-CX3-MS-A*
■ BMM-8-CXH2-MS*
■ BMM-8-CXH3-MS*
■ BMM2-8-CEH3
■ BMM2-8-CH3-MS
■ BMM2-8-CXH2-MS
■ BMM2C-16-CH
■ BMM2P-8-CEH1
■ BMM2P-8-CH1-MS
■ ORM-CXH1
■ OAM-CX1-A*
■ ORM-CXH1-MS
■ OAM-CX2-MS-A*
■ OAM-CX3-MS-A
■ OAM-CXH1*
■ OAM-CXH1-MS
■ OAM-CXH1-MS-B
■ OAM-CXH2-MS
■ OAM-CXH3-MS


■ IAM-B-ECXH2
■ IRM-B-ECXH1
■ FRM-9D-R-8-EC
■ FRM-4D-B-3-EC (when configured in Standalone with OSC Slot Operating Mode)
■ FRM-20X-R-EC (when configured in Standalone with OSC Slot Operating Mode)

Note: * For the indicated modules, support of the C-band OLOS soak timer depends on the specific
circuitry of the module. To verify whether the module supports the C-band OLOS soak timer:
■ For TL1, run an RTRV-EQPT command on the module and note the value of the
CBANDSOAKCAPABLEFW response parameter: TRUE indicates that the soak timer is supported;
FALSE indicates that it is not.
■ For GNM/DNA, open the Span properties of the module. For modules that support the soak
timer, the Span/C-Band tab will have the OLOS Soak Time drop-down menu.

By default, the C-band OLOS soak timer is disabled.


The purpose of the C-band OLOS soak timer is to delay the system's response to OLOS in the case of
short line-side (C Band/OTS) fiber glitches or OTS fiber mishandling. The following system responses are
delayed by the C-band OLOS soak timer:
■ C-band OLOS alarm reporting
■ Shutdown of the receive and transmit EDFAs
■ Initiation of the Automatic Laser Shutdown (ALS) in the link.
The C-band OLOS soak timer can be set to “Fast” (disabled) or “Long” (enabled):
■ When the C-band OLOS soak timer is set to the default value “Fast” (disabled), there is no
additional soaking for C-band OLOS and ALS triggers.
■ When the C-band OLOS soak timer is set to the value “Long,” there is a delay of the EDFA
shutdown and the Automatic Laser Shutdown. The length of the delay depends on the
configuration:
□ For all IAMs, IRMs, and FRMs, the delay is 2.8 seconds.
□ For all OAMs and BMMs besides the BMM2P-8-CH1-MS, the delay is 2.8 seconds. (This
applies to any OAM-CXH1-MS, OAM-CXH2-MS, or OAM-CXH3-MS that isn’t configured as
a preamplifier for a BMM2P-8-CH1-MS.)
□ For all ORMs, the delay is 0.8 seconds. (This applies whether or not the ORM is configured
as a preamplifier for a BMM2P-8-CH1-MS.)
□ For a BMM2P-8-CH1-MS that is associated with an OAM preamplifier (OAM-CXH1-MS,
OAM-CXH2-MS, or OAM-CXH3-MS), the delay is 2.2 seconds for the BMM2P-8-CH1-MS
and also for the associated OAM preamplifier. (The 2.2-second delay applies whether or not
the base BMM2P-8-CH1-MS is associated with an expansion BMM2P-8-CEH1.) For a
BMM2P with an OAM preamplifier, the user should configure the C-band OLOS soak timer
only on the BMM2P and not on the OAM preamplifier.


□ For a BMM2P-8-CH1-MS that is associated with an ORM preamplifier (ORM-CXH1 or
ORM-CXH1-MS), the delay is 2.2 seconds for the BMM2P-8-CH1-MS. (The 2.2-second delay
applies whether or not the base BMM2P-8-CH1-MS is associated with an expansion
BMM2P-8-CEH1.) For a BMM2P with an ORM preamplifier, the user should configure the
C-band OLOS soak timer on both the BMM2P and the ORM. This is required because the
ORM’s Raman component also requires the C-band OLOS soak timer.
□ The BMM2C does not contain a receive EDFA; setting the C-band OLOS soak timer on the
BMM2C would configure only the transmit EDFA and would have no impact on the receive
direction. For configurations with a BMM2C, it is therefore acceptable to set the C-band
OLOS soak timer only on the BMM2C’s preamplifier and not on the BMM2C itself.

Note: The C-band OLOS soak timer values are set as listed above in order to meet the Class 1M
laser hazard level rating.

Note: For BMMs with mid-stage amplification, the C-band OLOS soak timer applies to both stages of
the receive EDFA. However, if a glitch in the mid-stage fiber results in an OLOS condition, the
DCF OLOS alarm may not be suppressed.
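Functionally, the soak timer is a debounce applied to the OLOS condition before the responses listed
above are taken. The following Python sketch is a conceptual illustration only; the function and
dictionary names are assumptions, not an Infinera interface:

import time

SOAK_DELAY_SECONDS = {"fast": 0.0, "long": 2.8}   # 0.8 s or 2.2 s apply to some
                                                  # configurations, per the list above

def respond_to_olos(olos_present, soak_setting="fast", poll_interval=0.1):
    """Return True if the OLOS responses (alarm reporting, EDFA shutdown, ALS)
    should proceed, or False if the condition cleared within the soak window."""
    deadline = time.monotonic() + SOAK_DELAY_SECONDS[soak_setting]
    while time.monotonic() < deadline:
        if not olos_present():          # short glitch cleared: take no action
            return False
        time.sleep(poll_interval)
    return olos_present()               # still down after soaking: respond

With the timer set to Fast the check proceeds immediately; with Long, a glitch shorter than the soak
delay never triggers the EDFA shutdown, the ALS, or the alarm.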

BMM OCG OLOS Soak Timer


The BMM OCG OLOS soak timer is used to delay an Auto-discovery restart in the case of short OCG
fiber glitches or fiber mishandling which results in BMM OCG OLOS.
Auto-discovery is normally triggered immediately upon OLOS, but the soak timer configures the
system to pause for the specified number of seconds before initiating Auto-discovery. Configuring the
soak timer prevents a small fiber glitch from re-triggering Auto-discovery and causing the longer data
outage that an immediate Auto-discovery restart would create. During the soak time, the node defers
BMM OCG OLOS alarm reporting and Automated Gain Control continues to perform null sequencing;
the node does not make any gain commitments in the link.
The BMM OCG OLOS soak timer is supported for add/drop connections between the BMM and a GAM-1,
Optical Express connections between BMMs, and add/drop connections between the BMM and an
ADLM, AXLM, AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, or SOLX2. By default, the BMM
OCG OLOS soak timer is set to 0 seconds (disabled).
The BMM OCG OLOS soak timer is supported by the following BMMs:

■ BMM2C-16-CH
■ BMM2-8-CEH3
■ BMM2-8-CH3-MS
■ BMM2-8-CXH2-MS
■ BMM2P-8-CEH1
■ BMM2P-8-CH1-MS
■ BMM-4-CX1
■ BMM-4-CX2-MS
■ BMM-4-CX3-MS
■ BMM-8-CXH2-MS
■ BMM-8-CXH3-MS

Note the following for the OCG OLOS soak timer functionality:
■ The BMM OCG OLOS soak timer is implemented on BMM OCGs only, not on line module OCGs
nor GAM OCGs. Therefore, if there is a fiber glitch between a GAM-1 and a line module (a DLM,
XLM, ADLM, or AXLM in Gen1 mode), Auto-discovery will be retriggered between the line module
and the GAM-1, thus impacting traffic until the Auto-discovery is completed. In addition, the soak
timer is not supported on mid-stage (DCF port) fibers, nor on the optical channel between an LM-80
and a CMM.
■ The BMM OCG OLOS soak timer can be set from 0 to 60 seconds, and it is recommended to set
a uniform value for all BMM OCGs on a system in order to most easily manage the soak timer
values. The following values are recommended:
□ For add/drop OCGs: 10 seconds
□ For Optical Express OCGs: 20 seconds
■ If the BMM OCG OLOS soak timer is configured when an OLOS condition is already present, the
changes will take effect only during a subsequent occurrence of OLOS.
■ The BMM OCG OLOS soak timer is not honored when a fiber glitch occurs during a warm reset of
the BMM.

SCG OLOS Soak Timer


Similar to the BMM OCG OLOS soak timer, FlexILS modules support the SCG OLOS soak timer to delay
an Auto-discovery restart in the case of short SCG fiber glitches or fiber mishandling which results in
OLOS on the SCG. The SCG OLOS soak timer is supported for the SCG ports on the following modules:
■ FRM-9D
■ FRM-20X
■ FRM-4D
■ FSM
■ FSE
■ FMM-F250
■ FMM-C-5
■ FMM-C-12
Auto-discovery is normally triggered immediately upon OLOS, but the soak timer configures the
system to pause for the specified number of seconds before initiating Auto-discovery. Configuring the
soak timer prevents a small fiber glitch from re-triggering Auto-discovery and causing the longer data
outage that an immediate Auto-discovery restart would create. During the soak time, the node defers
SCG OLOS alarm reporting and Automated Gain Control continues to perform null sequencing; the
node does not make any gain commitments in the link. By default, the SCG OLOS soak timer is set to
0 seconds (disabled).
Note the following for the SCG OLOS soak timer functionality:
■ The SCG OLOS soak timer can be set from 0 to 60 seconds, and it is recommended to set a
uniform value for all SCGs on a system in order to most easily manage the soak timer values. The
following values are recommended:
□ For add/drop SCGs: 10 seconds
□ For FRM to FRM SCGs: 20 seconds
■ If the SCG OLOS soak timer is configured when an OLOS condition is already present, the
changes will take effect only during a subsequent occurrence of OLOS.
■ The SCG OLOS soak timer is not honored when a fiber glitch occurs during a warm reset of the
FRM.


Software Controlled Power Reduction


To provide eye safety for configurations with the BMM2P-8-CH1-MS, the DTN-X and DTN support
Software Controlled Power Reduction that mutes the relevant EDFAs to an eye safe level (10dBm or
less). Software Controlled Power Reduction is supported in the following scenarios:
■ Receive direction: For fiber cuts or fiber removal from the mid-stage of the base BMM2P-8-CH1-
MS, a DCF OLOS condition will be reported and the Receive EDFAs are muted on the following
modules: the preamplifier (OAM-CXH1-MS, OAM-CXH2-MS, OAM-CXH3-MS, ORM-CXH1, or
ORM-CXH1-MS), the base BMM2P-8-CH1-MS, and the expansion BMM2P-8-CEH1.
■ Transmit direction: For fiber cut or fiber removal from the expansion BMM2P-8-CEH1 (transmit) to
the base BMM2P-8-CH1-MS (receive), the base BMM2P-8-CH1-MS reports a C-band OLOS
condition and the Transmit EDFA is muted in the expansion BMM2P-8-CEH1.

Note: Because Software Controlled Power Reduction relies on the software of the associated
modules in order to function, the EDFAs are not muted if the control plane is not accessible at the
time of the fiber cut, such as in the following scenarios:
■ The controller module is removed or cold rebooted
■ The base BMM2P-8-CH1-MS is warm rebooted
■ The preamplifier is warm rebooted (receive direction only)
■ The expansion BMM2P-8-CEH1 is warm rebooted (transmit direction only)
Note: In any of the above conditions, do not disconnect the DCF fiber nor the patch cable fiber
between the base and expansion BMM2P.
Software Controlled Power Reduction does take effect in the case of a controller module warm reboot
or a controller module switchover.


Optical Ground Wire (OPGW)


The AOLM, AOLX, AOLM2, AOLX2, AOFx-500, AOFx-100, and XT-500 support the Optical Ground Wire
(OPGW) feature, which helps prevent traffic disruption in case of a fast SOP (State of Polarization)
transient event.
The OPGW feature is a combination of the following two new user-configurable attributes on the optical
channel CTP (for AOLM, AOLX, AOLM2, AOLX2, and XT-500) or the carrier CTP (for AOFX-500 and
AOFM-500):
■ Aggressive Polarization Tracking—When enabled, the polarization tracking rates are doubled,
allowing the system to withstand a majority of SOP transient events that might otherwise affect
traffic in the default mode. This option is supported for all modulation formats and encoding modes.
■ Rapid Recovery—When enabled, the system supports 50ms recovery from SOP transient-induced
loss of frame (LOF). (In the default mode, the system requires 5-10 seconds for re-acquisition after
LOF.) This option is available for line modules configured for QPSK modulation and bit differential
encoding mode.

Note: The XT-500 platform does not support <50ms recovery.

By default both of these parameters are disabled. To change the configuration for either of these
parameters, the associated optical channel CTP or carrier CTP must be in the maintenance or locked
administrative state. Changing either of these parameters is service-affecting.

Note: Do not configure Aggressive Tracking nor Rapid Recovery during warm reboot of the line
module.

Note: Do not configure Aggressive Tracking nor Rapid Recovery unless consulted to do so by an
Infinera Technical Assistance Center (TAC) resource.


Electronic Equalizer Gain Control Loop


The OFx-100, AOLM, SOLM, AOLX, SOLX, AOLM2, SOLM2, AOLX2, and SOLX2 (all versions of these
modules, including Instant Bandwidth versions) support Electronic Equalizer Gain Control Loop to
optimize transmission performance under some circumstances involving very long routes or multiple
optical express links.

Note: This feature is disabled by default. Enabling this feature can cause a minor decrease in
performance for PM-QPSK modulation format.

Note: Do not configure Electronic Equalizer Gain Control Loop unless consulted to do so by an
Infinera Technical Assistance Center (TAC) resource.

This feature is enabled using the following two parameters:

Note: For AOLM, SOLM, AOLX, SOLX, AOLM2, SOLM2, AOLX2, and SOLX2, these parameters are
on the OCH CTP of the line module. For OFx, these parameters are on the Carrier CTP of the
modules.

■ Steady State Control—Optimizes the steady state component of Automated Gain Control (AGC) for
the optical channel/carrier. The steady state optimization is non-service-affecting. When enabled,
the line module immediately optimizes the steady state control for the optical channel/carrier.
■ Coarse Tuning Control—Optimizes coarse tuning on the associated optical channel to put AGC in
the desired range. The coarse tuning optimization is service-affecting for the associated optical
channel/carrier, and requires that steady state control is also enabled. When coarse tuning is
enabled, the optical channel coarse tuning is adjusted upon subsequent re-acquisition (note that
the optimization does not take place immediately; re-acquisition must be triggered for the changes
to take effect).
Note the following for the Electronic Equalizer Gain Control Loop feature:
■ Do not configure Steady State Control nor Coarse Tuning Control during warm reboot of the line
module.
■ Before configuring the Steady State Control or Coarse Tuning Control, the associated optical
channel/carrier CTP must be administratively locked.
■ Steady State Control must be enabled before enabling Coarse Tuning Control.
■ After enabling Coarse Tuning Control, run a Reset Rx operation on the optical carrier/channel CTP
(in TL1, this is performed via the OPR-RESETRX command).

Note: This will restart the receive acquisition and will be service affecting for the associated
OCHCTP/carrier. It is recommended to perform this operation in a planned maintenance
window.
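The ordering rules above can be summarized as a simple state model. The Python sketch below is a
conceptual illustration of those rules only; the class and attribute names are assumptions and do not
represent an Infinera API:

class OchCtp:
    """Toy model of the configuration rules (not an Infinera data model)."""
    def __init__(self):
        self.admin_locked = False
        self.steady_state_control = False
        self.coarse_tuning_control = False

    def set_steady_state_control(self, enabled):
        if not self.admin_locked:
            raise RuntimeError("lock the optical channel/carrier CTP first")
        self.steady_state_control = enabled

    def set_coarse_tuning_control(self, enabled):
        if not self.admin_locked:
            raise RuntimeError("lock the optical channel/carrier CTP first")
        if enabled and not self.steady_state_control:
            raise RuntimeError("enable Steady State Control before Coarse Tuning Control")
        self.coarse_tuning_control = enabled

ctp = OchCtp()
ctp.admin_locked = True              # administratively lock the CTP
ctp.set_steady_state_control(True)   # non-service-affecting, takes effect immediately
ctp.set_coarse_tuning_control(True)  # takes effect only upon re-acquisition
# A Reset Rx (TL1: OPR-RESETRX) must then be run to trigger re-acquisition;
# it is service-affecting, so schedule it in a planned maintenance window.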


Event Log
IQ NOS provides an historical event log that tracks all significant events in the system (including alarms)
and stores the events in a wrap-around buffer. Management interface sessions can retrieve the full
history and track ongoing events in real time. Synchronization is maintained between the connected
management interfaces and the network element. If a session communication failure occurs, the
reconnected management interface can query the events that occurred during session failure.
IQ NOS records the following types of events in the event log:
■ Alarm related events, which include alarm raise and clear events.
■ PM data thresholding related events, which include threshold crossing condition raise and clear
events.
■ Threshold crossing alerts as described in PM Thresholding on page 5-4.
■ Managed object creation and deletion events triggered by user actions.
■ Security administration related events triggered by user actions.
■ Network administration events triggered by user actions to upgrade software, downgrade software,
restore database, etc.
■ Attribute value change events triggered by the user actions to add or delete managed objects, or
change attribute values of managed objects.
■ State change events indicating the state changes of a managed object triggered by user action
and/or changes in the operation capability of the managed object.
Event logs are stored in persistent storage on the network element so that events are persisted across
controller module reboots or switchovers. Users can export the event log information in TSV format using
the management applications.

Note: Attribute value change events are also stored in the event log; however, attribute value change
events are not persisted across controller module reboots or switchovers.

The following is some of the important information stored for each event log record:
■ The managed object that generated the event.
■ The time at which IQ NOS generated the event.
■ The event type indicating the event category, including:
□ Update Event, which includes managed object create and delete events.
□ Report Event, which includes security administration related event, network administration
related event, audit events, and threshold crossing alerts (TCA).
□ Condition, which includes alarm raise and clear event, non-alarmed conditions, and
Threshold crossing condition events.
Refer to the DTN and DTN-X Alarm and Trouble Clearing Guide for a list of events recorded in the event
logs for Infinera nodes.
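As a simple example of post-processing an exported log, the Python sketch below filters the alarm-related
(Condition) records out of a TSV export. The file name and the column headings used here are
assumptions made for illustration; the actual headings depend on the management application that
produced the export:

import csv

def alarm_events(path="event_log_export.tsv"):
    """Yield (time, managed object) for alarm-related records in a TSV export."""
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle, delimiter="\t"):
            # "Condition" covers alarm raise/clear and threshold crossing events.
            if row.get("Event Type") == "Condition":
                yield row.get("Time"), row.get("Managed Object")

if __name__ == "__main__":
    for timestamp, managed_object in alarm_events():
        print(timestamp, managed_object)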


Maintenance and Troubleshooting Tools


IQ NOS provides extensive maintenance and troubleshooting tools used for pre-service operations and
for problem source isolation. The troubleshooting tools help to sectionalize problems and to accurately
identify the troubled spot by running tests progressively at the network element, span, digital link and path
levels.
IQ NOS provides both out-of-service troubleshooting tools, which require the corresponding facilities
(managed objects) to be in the administrative maintenance state, and in-service troubleshooting tools,
which can be run while the corresponding facilities are in the administrative unlocked state.
Out-of-service troubleshooting tools include:
■ Loopbacks to test circuit paths through the network or logically isolate faults (see Loopbacks on
page 2-28).
■ Pseudo random binary sequence (PRBS) generation and detection (see Pseudo Random Binary
Sequence (PRBS) Tests on page 2-52).
■ Line-side and tributary-side test signal for Fibre Channel facilities (see Test Signal for Fibre
Channel Clients on page 2-58).
■ GbE CTP test signal generation and detection for GbE clients (see GbE Client Termination Point
Tests on page 2-59).
■ Path Loss Check for MPO connections (see Path Loss Check for MPO Connections on page 2-
64).

Note: When PRBS generation, PRBS monitoring, and loopbacks are performed on a port, the
administrative status of the port is set to Maintenance. In such cases, the operational status of the
associated cross-connects will be reflected as out-of-service although no cross-connect related
alarms are reported.

In-service troubleshooting tools include:


■ Trace messaging, including J0 byte insertion and monitoring (see Trace Messaging on page 2-
61).
■ User-created ODUk tandem connection monitoring (TCM) facilities (DTN only; see Tandem
Connection Monitoring (TCM) on page 2-64).
■ Optical Time Domain Reflectometer (OTDR) test (see Optical Time Domain Reflectometer (OTDR)
Testing on page 2-68)
■ Digital Test Access application of the Multi-point Configuration feature (see Digital Test Access on
page 4-26)
The troubleshooting tools are accessible through the management applications by users with the Turn-up
and Test (TT) access privilege.


Loopbacks
Note: Unless specifically noted otherwise, all references to “line module” will refer interchangeably to
either the DLM, XLM, ADLM, AXLM, SLM, AXLM-80, ADLM-80 and/or SLM-80 (DTC/MTC only) and
AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, and/or SOLX2 (XTC only). The term “LM-80”
is used to specify the LM-80 sub-set of line modules and refers interchangeably to the AXLM-80,
ADLM-80 and/or SLM-80 (DTC/MTC only). Note that the term “line module” does not refer to TEMs,
as they do not have line-side capabilities and are used for tributary extension.

Loopbacks are used to test newly created circuits before running live traffic or to logically locate the
source of a network failure on existing circuits. Loopbacks provide a mechanism where the signal under
test (either the user signal or the test pattern signal such as PRBS) is looped back at some location on
the network element in order to test the integrity and validity of the signal being looped back. Since
loopbacks affect normal data traffic flow, they must be invoked only when the associated facility is in
administrative maintenance state.
IQ NOS provides access to the loopback capabilities in the Infinera nodes, independent of the client
signal payload type. The loopbacks can be enabled or disabled remotely through the management
applications. The following sections describe the loopbacks supported to test each section of the network,
as well as the various hardware components along the data path:
■ Loopbacks Supported on the XTC on page 2-28
■ Loopbacks Supported on the DTC/MTC on page 2-39
■ Loopbacks Supported on the XT on page 2-44

Loopbacks Supported on the XTC


This section describes the loopbacks supported on the XTC.

Note: Loopbacks are not supported on an OTUk/ODUk when PRBS generation is enabled in either
direction (facility or terminal) on the ODUk.

Note: When a Client Tributary Facility Loopback is operated on an OTUk, the ODUk facility does not
support alarms nor PMs on its incoming signal.

Client Tributary Facility Loopback—A loopback is performed on the TIM/TIM2/MXP wherein the tributary
port Rx signal is looped back to the Tx on the TIM/TIM2/MXP. (The test signal will continue on its
provisioned path in addition to being looped back toward the originating point of the signal.) This loopback
test verifies the operation of the tributary side optics in the TOM and TIM/TIM2/MXP.

Note: For TIM-1-100GM and TIM-1-100GX in 100GbE-ODU4-4i-2ix10V mode, a downstream LF
signal is not propagated when client tributary facility loopback is enabled.


Figure 2-4 Client Tributary Facility Loopback (XTC-4/XTC-10 TIMs and OLx Example)

Figure 2-5 Client Tributary Facility Loopback (XTC-10 TIM2 and OFx-1200 Example)


Figure 2-6 Client Tributary Facility Loopback (XTC-2/XTC-2E Example)

Figure 2-7 Client Tributary Facility Loopback (MXP in XTC-2/XTC-2E Example)


Client Tributary Terminal Loopback—A loopback performed on the TIM/TIM2 wherein the signal is
received from the far-end node into the local node, and is transmitted through the local node switch fabric
and into the TIM/TIM2 where the signal is looped back and sent back out through the switch fabric and to
the far-end node.

Note: The system-wide behavior for client tributary terminal loopbacks can be configured so that the
laser on the client interface will be shut off to prevent the test signal from continuing to the client
equipment and the signal will only be looped back toward the originating point of the signal.
Otherwise, the default behavior is that the test signal will continue out the client interface in addition to
being looped back toward the originating point of the signal.

Figure 2-8 Client Tributary Terminal Loopback (XTC-10 TIM2 and OFx-1200 Example)


Figure 2-9 Client Tributary Terminal Loopback (XTC-4/XTC-10 TIM and OLx Example)

Figure 2-10 Client Tributary Terminal Loopback (XTC-2/XTC-2E Example)

A loopback performed on the MXP wherein the signal is received from the far-end node into the local
node, and is transmitted through the 200G mapper on the MXP and looped back through the OTN multi-
service processor and sent out to the far-end node.


Figure 2-11 Client Tributary Terminal Loopback (MXP in XTC-2/XTC-2E Example)

ODUk Facility Loopback—The incoming signal from the client interface (either from the OTM/OTM-1200
on the local node, or from the line side/far-end node, as in Figure 2-21: Ethernet Interface Loopbacks
(PXM only) on page 2-39) is looped back in the switch fabric (in the OXM), then sent back out to the
client interface.


Figure 2-12 ODUk Facility Loopback (from the OTM) (XTC-4/XTC-10 Example)

Figure 2-13 ODUk Facility Loopback (from the OTM) (XTC-2/XTC-2E Example)

The incoming signal from the client interface (corresponding to an MXP-400) from a local node is looped
back through the 200G mapper and then sent back out to the client interface.


Figure 2-14 ODUk Facility Loopback (MXP in XTC-2/XTC-2E Example)

Figure 2-15 ODUk Facility Loopback from the OTM-1200 XTC-10 Example


Figure 2-16 ODUk Facility Loopback (from Line Side) (XTC-4/XTC-10 Example)

Figure 2-17 ODUk Facility Loopback (from Line Side) (XTC-2/XTC-2E Example)


Figure 2-18 ODUk Facility Loopback (from line side): XTC-10 with OFx-1200

The incoming signal from the client interface (corresponding to an MXP-400) from the line side/far-end
node is looped back through the 200G mapper and then sent back out to the client interface.


Figure 2-19 ODUk Facility Loopback (from Line Side) (MXP in XTC-2/XTC-2E Example)

SCG Terminal Loopback


A loopback is applied at the super channel group (SCG) level so that all packets received on all client
interfaces are sent back towards the connected customer equipment. This loopback is supported only on
XTC chassis with OFx-1200 line modules.


Figure 2-20 SCG Terminal Loopback: XTC with OFx-1200

Loopbacks on the PXM:


■ Ethernet Interface Facility Loopback (PXM-16-10GE only, Figure 2-21: Ethernet Interface
Loopbacks (PXM only) on page 2-39)—All of the Ethernet Interface ingress frames are looped
back to the egress towards the connected client equipment.
■ Ethernet Interface Terminal Loopback (PXM only, Figure 2-21: Ethernet Interface Loopbacks (PXM
only) on page 2-39)—All of the frames received from all the attachment circuits (ACs) supported
by the Ethernet Interface are looped back to the respective ACs.

Figure 2-21 Ethernet Interface Loopbacks (PXM only)

Loopbacks Supported on the DTC/MTC


This section describes the loopbacks supported on the DTC/MTC.
Client Tributary Facility Loopback—A loopback is performed on the TAM wherein the tributary port Rx is
looped back to the Tx on the TAM. (The test signal will continue on its provisioned path in addition to
being looped back toward the originating point of the signal.) This loopback test verifies the operation of
the tributary side optics in the TOM and TAM. The TAM may reside within a line module or TEM.

Note: Client Tributary Facility Loopbacks are not supported for 10G Fibre Channel services when
the corresponding DTP is configured for PRBS generation or monitoring.


Figure 2-22 Client Tributary Facility Loopback

Tributary Digital Transport Frame (DTF) Path Terminal Loopback—A loopback performed on the line
module or TEM circuit, wherein the cross-point switch on the line module or TEM loops back the received
client signal towards the TAM. (The test signal will continue on its provisioned path in addition to being
looped back toward the originating point of the signal.) This loopback verifies the operation of the tributary
side optics as well as the adaptation of the Tributary DTF into electrical signals performed in the TOM and
TAM and the cross-point switch on the line module or TEM.

Figure 2-23 Tributary Digital Transport Frame (DTF) Path Terminal Loopback

Client Tributary Terminal Loopback—A loopback performed on the TAM wherein the electrical signal
received from the OCG line is looped back to the OCG line transmit side in the TAM. This loopback
verifies the OCG line side optics on the line module, the DTF and FEC Mapper/Demapper in the line
module as well as the cross-point switch.

Note: The system-wide behavior for client tributary terminal loopbacks can be configured so that the
laser on the client interface will be shut off to prevent the test signal from continuing to the client
equipment and the signal will only be looped back toward the originating point of the signal.
Otherwise, the default behavior is that the test signal will continue out the client interface in addition to
being looped back toward the originating point of the signal.


Figure 2-24 Client Tributary Terminal Loopback

Line DTF Path Facility Loopback—A loopback performed on the line module wherein the cross-point
switch on the line module loops back the received line DTF signal towards the OCG line. This loopback
verifies the line DTF connectivity and the DTF encapsulation performed in the line module.

Figure 2-25 Line DTF Path Facility Loopback

Line DTF Path Terminal Loopback:


■ Line DTF Path Terminal Loopback (express cross-connect scenario, Figure 2-26: Line DTF Path
Terminal Loopback (Express Scenario) on page 2-42)—A loopback performed on an express
cross-connect where the DTF of an OCG is cross-connected back on itself at the “far-end” line
module of the express connection. (The test signal will continue on its provisioned path in addition
to being looped back toward the originating point of the signal.) This allows verification of valid DTF
traffic across the backplane in an express cross-connect configuration.
■ Line DTF Path Terminal Loopback (add/drop cross-connect scenario, Figure 2-27: Line DTF Path
Terminal Loopback (Add/Drop Scenario) on page 2-43)—A loopback performed on an add/drop
cross-connect where traffic from a TAM is routed to an adjacent line module and looped back on
the line side. (The test signal will continue on its provisioned path in addition to being looped back
toward the originating point of the signal.) This allows verification of valid DTF traffic across the
backplane in an add/drop cross-connect configuration.


Figure 2-26 Line DTF Path Terminal Loopback (Express Scenario)


Figure 2-27 Line DTF Path Terminal Loopback (Add/Drop Scenario)

In addition to the above loopbacks, the DTN and DTN-X also support loopbacks on the Digital Channel
(DCh) and the Tributary DTF Path on the TAM-2-10GT and DICM-T-2-10GT:
■ DCh Client Facility Loopback (TAM-2-10GT and DICM-T-2-10GT only)—A loopback is performed
on the TAM/DICM wherein the tributary port Rx is looped back to the Tx on the TAM-2-10GT. This
loopback test verifies the operation of the tributary side optics in the TOM and TAM-2-10GT. The
TAM-2-10GT may reside within a line module or TEM.
■ DCh Client Terminal Loopback (TAM-2-10GT and DICM-T-2-10GT only)—A loopback performed
on the TAM/DICM wherein the electrical signal received from the OCG line is looped back to the
OCG line transmit side in the TAM/DICM. This loopback verifies the OCG line side optics on the
line module, the DTF and FEC Mapper/Demapper in the line module as well as the cross-point
switch.
■ Tributary DTF Path Facility Loopback (TAM-2-10GT and DICM-T-2-10GT only)—A loopback
performed on the line module or TEM circuit, wherein the cross-point switch on the line module or
TEM loops back the client signal received on the TAM/DICM toward the DTN network. This
loopback verifies the operation of the tributary side optics as well as the adaptation of the Tributary
DTF into electrical signals performed in the TOM and TAM and the cross-point switch on the line
module or TEM.

Note: Terminal loopback is not supported for TAM-2-10GT/DICM-T-2-10GT tributary DTPs.

The figure below shows the DCh loopbacks supported by the TAM-2-10GT and DICM-T-2-10GT.


Figure 2-28 Loopbacks Supported by the TAM-2-10GT and DICM-T-2-10GT

Loopbacks Supported on the XT


This section describes the loopbacks supported on the XT.
Client Loopback—A loopback is performed on the client so that packets received on the client Ethernet
interface are sent back towards the connected customer equipment.

Figure 2-29 Client Loopbacks on XT-500


Figure 2-30 Client loopbacks on XT(S)-3300


Figure 2-31 Client loopbacks on XT(S)-3600

Tributary ODUk Loopback—A loopback is performed on the client so that packets received on the client
tributary ODUk are sent back towards the connected customer equipment. This loopback is only
supported on XT(S)-3600.


Figure 2-32 Tributary ODUk Loopback on XT(S)-3600

Line Loopback—A loopback is applied on the client so that packets received from the line side are sent
back towards the line.


Figure 2-33 Line Loopback on XT-500

Figure 2-34 Line Loopback on XT(S)-3300


Figure 2-35 Line Loopback on XT(S)-3600

OCG/SCG Loopback—A loopback is applied at the OCG level (for XT-500S) or the SCG level (for
XT-500F) so that all packets received on all client Ethernet interfaces are sent back towards the
connected customer equipment.


Figure 2-36 OCG Loopback on XT-500S and SCG Loopback on XT-500F

SCG Loopback—A loopback is applied at the SCG level so that all packets received on all client Ethernet
interfaces are sent back towards the connected customer equipment.

Figure 2-37 SCG Loopback on XT(S)-3300


Figure 2-38 SCG loopback on XT(S)-3600


Pseudo Random Binary Sequence (PRBS) Tests


The Pseudo Random Binary Sequence (PRBS) is a test pattern used to diagnose and isolate trouble
spots in the network without requiring a valid data signal or customer traffic. This type of test signal is
used during system turn-up or in the absence of a valid data signal from the customer equipment. The
test is primarily aimed at detecting and sectionalizing bit errors in the data path. Since the PRBS test
affects normal data traffic flow, it must be invoked only when the associated facility is in the
administrative maintenance state.
IQ NOS provides access to the PRBS generation and monitoring capabilities supported by the Infinera
nodes. The PRBS test can be enabled or disabled remotely through the management applications.
The PRBS tests can be coupled with loopback tests so that the pre-testing of the quality of the digital link
or end-to-end digital path can be performed without the need for an external PRBS test set. While this is
not meant as a replacement for customer-premise to customer-premise circuit quality testing, it does
provide an early indicator of whether or not the transport portion of the full circuit is providing a clean
signal.
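For background, a PRBS pattern is produced by a linear-feedback shift register defined by a generator
polynomial. The Python sketch below generates a standard PRBS7 sequence (polynomial x^7 + x^6 + 1)
purely to illustrate the kind of pattern involved; it is not the generator used by the hardware, whose test
patterns are typically much longer (for example, PRBS31 is common at 10G and higher rates):

from itertools import islice

def prbs7(seed=0x7F):
    """Generate the PRBS7 bit sequence (polynomial x^7 + x^6 + 1)."""
    state = seed & 0x7F
    while True:
        feedback = ((state >> 6) ^ (state >> 5)) & 1   # taps: bits 7 and 6
        state = ((state << 1) | feedback) & 0x7F
        yield feedback

period = list(islice(prbs7(), 127))   # the pattern repeats every 2^7 - 1 bits
print(sum(period))                    # 64 ones and 63 zeros per period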
The following sections describe the PRBS tests supported by the DTN-X and DTN:
■ PRBS Tests Supported by the XTC on page 2-52
■ PRBS Tests Supported by the DTC/MTC on page 2-55

PRBS Tests Supported by the XTC


The DTN-X supports PRBS generation and monitoring for the facility and terminal directions:
■ ODUk Client Facility PRBS test—A PRBS signal is generated (transmitted) by the Infinera tributary
towards the client network side and is monitored (received) by the tributary in the customer
equipment or the test set connected to the Infinera tributary.
■ ODUk Client Terminal PRBS test—A PRBS signal is generated (transmitted) by the Infinera
tributary towards the Infinera network side and is monitored (received) by the tributary at the far-
end node.

Note: PRBS generation and monitoring are intrusive diagnostics.

The DTN-X supports PRBS tests on the following services originating on the XTC:
■ For ODUk switching services on the XTC, the DTN-X supports PRBS generation and monitoring for
both the facility and terminal directions (see ODU Switching on page 4-46).
■ For OTUk client services on the XTC, the DTN-X supports PRBS generation and monitoring for
both the facility and terminal directions (see Transparent Transport for OTN Services on page 4-
44 for information on OTUk with FEC transport).
■ For ODU multiplexing services on the XTC, the DTN-X supports PRBS generation and monitoring
for both the facility and terminal directions (see ODU Multiplexing on page 4-48).
■ For non-OTN services that are encapsulated in an ODUk wrapper, the DTN-X supports PRBS
generation and monitoring in the terminal direction only (see Transparent Transport for Non-OTN


Services on page 4-43 for information on which non-OTN services are supported). Note the
following for PRBS support for non-OTN services that are encapsulated in an ODUk wrapper:
□ PRBS tests are not supported in the facility direction.
□ PRBS is supported only for services encapsulated in an ODUk wrapper (e.g., ODU2,
ODU2e, ODU4, etc.); PRBS is not supported for services encapsulated in an ODUki wrapper
(e.g., ODU2i, ODUflexi, etc.). See DTN-X Network Mapping on page 4-52 for a list of the
payloads for which ODUk network mapping is supported.
□ PRBS tests are supported only for the segments in which the non-OTN service is
encapsulated in the ODUk wrapper.
■ For OC-768 and STM-256 clients on the TIM-1-40GM, the XTC supports the following PRBS tests:
□ Tributary PRBS—A PRBS signal is generated (transmitted) by the Infinera OC768/STM-256
tributary towards the client network side and is monitored (received) by the OC768/STM-256
tributary in the customer equipment or the test set connected to the Infinera tributary.
□ Line PRBS—A PRBS signal is generated (transmitted) by the Infinera OC768/STM-256
tributary towards the Infinera network side and is monitored (received) by the tributary at the
far-end TIM-1-40GM. When Line (terminal side) PRBS monitoring is enabled on an endpoint,
the LINE-PRBS-OOS alarm is raised if the PRBS signal is out of sync, or if there is not a
cross-connect/SNC present on the endpoint.
■ For 1GbE, OC-48, and STM-16 clients on the TIM-16-2.5GM, the XTC supports the following
PRBS tests:
□ Tributary PRBS—A PRBS signal is generated (transmitted) by the Infinera 1GbE/OC-48/
STM-16 tributary towards the client network side and is monitored (received) by the 1GbE/
OC-48/STM-16 tributary in the customer equipment or the test set connected to the Infinera
tributary.
Figure 2-39: PRBS Tests Supported by the XTC on page 2-53 shows the PRBS support for services on
the XTC.

Figure 2-39 PRBS Tests Supported by the XTC

Figure 2-40: Tributary and Line PRBS tests on the XTC (TIM-1-40GM/TIM-16-2.5GM) on page 2-54
shows tributary and line PRBS tests on the XTC.


Figure 2-40 Tributary and Line PRBS tests on the XTC (TIM-1-40GM/TIM-16-2.5GM)

Note the following for PRBS support on the XTC:


■ For the following diagnostics, only one is supported at a time:
□ OTUk Client Tributary Facility Loopback
□ OTUk Client Tributary Terminal Loopback
□ ODUk Facility Loopback
□ ODUk Client Facility PRBS generation
□ ODUk Client Facility PRBS monitoring
□ ODUk Client Terminal PRBS generation
□ ODUk Client Terminal PRBS monitoring

Note: PRBS generation and PRBS monitoring can be enabled on an ODUk simultaneously, as
long as both the generation and monitoring are enabled in the same direction (facility or
terminal). PRBS monitoring can be enabled in two directions on the same ODUk as long as
PRBS generation is not enabled in either direction for the ODUk.

■ For ODU Multiplexing services on the XTC, only one of the following diagnostics is supported at a
time:
□ ODUk Facility Loopback
□ OTUk Client Tributary Facility Loopback
□ ODUk Client Facility PRBS generation
□ ODUj Client Facility PRBS generation
□ ODUj Client Terminal PRBS generation

Note: PRBS generation can be supported simultaneously in both the facility and terminal
directions on the ODUj facility if no other diagnostics are enabled (i.e., loopback on the OTUk/


ODUk/ODUj or PRBS generation on the ODUk). Simultaneous PRBS generation in the facility
and terminal directions is supported for ODUj PRBS tests only.

■ Also for ODU Multiplexing services on the XTC, note the following:
□ Client Facility PRBS generation is supported on only one ODUk/ODUj on the port.
□ Client Terminal PRBS generation is supported on only one ODUj on the port.
□ Client Facility PRBS monitoring is supported on only one ODUk/ODUj on the port.
□ Client Terminal PRBS monitoring is supported on only one ODUj on the port.
■ When PRBS generation is enabled in the terminal direction for ODUk services on the TIM-1-40G or
TIM-1-100G, the OTUk/ODUk facility does not support alarms nor PMs on its incoming signal
(receive direction PMs/alarms). The tributary physical termination point (PTP) will continue to
correctly report PMs, but the tributary PTP will not generate an alarm in case of optical loss of
signal (OLOS).
■ For OTUk transport services, PRBS tests are not supported in the terminal direction. PRBS tests
are supported in the facility direction for OTUk transport services.
■ For ODUk Client Terminal PRBS monitoring, the TERM-PRBS-OOS alarm is suppressed if there is
no cross-connect or SNC on the ODUk. Note that this behavior is different from the behavior for
endpoints on the DTC/MTC: When Line (terminal side) PRBS monitoring is enabled for endpoints
on the DTC/MTC, the LINE-PRBS-OOS alarm is raised in the absence of a cross-connect/SNC on
the endpoint.
Note the following for PRBS support for OC-768 and STM-256 clients on the TIM-1-40GM:
■ PRBS is not supported on the OC-768 and STM-256 client if a loopback is enabled on the client or
on the associated ODU3.
■ Tributary PRBS generation and Line PRBS generation can be enabled simultaneously on the
OC-768/STM-256, provided that ODUk Client Terminal PRBS generation is not enabled on the
associated ODU3.
■ Tributary PRBS generation on the OC-768/STM256 and ODUk Client Terminal PRBS generation
on the associated ODU3 can be enabled simultaneously, provided that Line PRBS generation is
not enabled on the OC-768/STM-256.

PRBS Tests Supported by the DTC/MTC


The DTC/MTC supports PRBS generation and monitoring for testing circuit quality at the client, the DTF
Section layer, or the DTF Path layer:

Note: PRBS generation and monitoring are intrusive diagnostics.


Figure 2-41 PRBS Tests Supported by the DTC/MTC

There are several types of PRBS tests (see Figure 2-41: PRBS Tests Supported by the DTC/MTC on
page 2-56 through Figure 2-43: PRBS Tests Supported by TAM-2-10GT and DICM-T-2-10GT on page 2-
58):
■ Client PRBS test (supported only for OC-768 and STM-256 interfaces)—A PRBS signal is
generated (transmitted) by the Infinera OC768/STM-256 tributary towards the client network side
and is monitored (received) by the OC768/STM-256 tributary in the customer equipment or the test
set connected to the Infinera tributary.
■ Tributary (facility side) PRBS test (supported only for OTUk, SONET, and SDH interfaces on the
TAM-8-2.5GM, TAM-2-10GM, and DICM-T-2-10GM)—A PRBS signal is generated (transmitted) by
the Infinera tributary towards the client network side and is monitored (received) by the tributary in
the customer equipment or the test set connected to the Infinera tributary.
■ Line (terminal side) PRBS test (supported only for OTUk, SONET, and SDH interfaces on the
TAM-8-2.5GM, TAM-2-10GM, and DICM-T-2-10GM)—A PRBS signal is generated (transmitted) by
the Infinera tributary towards the Infinera network side and is monitored (received) by the tributary
at the far-end TAM-8-2.5GM, TAM-2-10GM, or DICM-T-2-10GM. When Line (terminal side) PRBS
monitoring is enabled on an endpoint, the LINE-PRBS-OOS alarm is raised if the PRBS signal is
out of sync, or if there is not a cross-connect/SNC present on the endpoint.

Note: Line PRBS test is not supported on OTUk clients that are configured for service type adaptation
(see OTN Adaptation Services on page 4-21).

Note: For OTUk clients on the TAM-2-10GM and DICM-T-2-10GM, Line PRBS generation must be
disabled and re-enabled upon either failure or recovery of the client signal.
■ DTF Section-level PRBS test—A PRBS signal is generated by the near-end line module and it
is monitored by the adjacent nodes. This test verifies the quality of the digital link between two
adjacent nodes.
■ DTF Path-level PRBS test—A PRBS signal is generated by the near-end TAM and it is
monitored at the far-end TAM where the digital path is terminated. This test verifies the quality


of the end-to-end digital path. Historical performance monitoring data is collected for PRBS
sync errors and PRBS errors on the Tributary DTF Path.

Note: DTF Path-level PRBS test is not supported on the TAM-8-1G. The TAM-8-1G does support the
GbE Client Termination Point tests described in GbE Client Termination Point Tests on page 2-59.

Note: When configuring a DTF Path-level PRBS test between TAM-8-2.5GMs, the TOMs must be
physically present in the TAM-8-2.5GMs for PRBS to be generated. If the generating TOM is pre-
provisioned but not physically present, the PRBS signal will not be sent, and so the DTP on the
monitoring TOM will report a PRBS-OOS alarm and the PRBS Error and PRBS Sync Err PM
counters will increment.

In addition to the above PRBS tests, the system also supports specialized PRBS tests on the line-side
Digital Channel (DCh) of the LM-80:
■ Digital Channel (DCh) PRBS test (LM-80 only)—A PRBS signal is generated by the near-end
LM-80 and it is monitored by the far-end LM-80. This test verifies the functioning of the optical
channel between two LM-80s (see Figure 2-42: DCh Line PRBS Test Supported by the LM-80 on
page 2-57).

Note: Digital Channel PRBS tests are not supported for 20Gbps wavelengths (PM-BPSK modulation
format). If a DCh PRBS test is enabled on a PM-BPSK wavelength, traffic will be impacted in the
adjacent DCh in the LM-80 optical channel. For LM-80 wavelengths that use PM-BPSK modulation,
use the DTF Path-level PRBS test on the TAM.

Figure 2-42 DCh Line PRBS Test Supported by the LM-80

Lastly, the system also supports specialized PRBS tests on the TAM-2-10GT and DICM-T-2-10GT:
■ Digital Channel (DCh) Section-level PRBS test (TAM-2-10GT and DICM-T-2-10GT only)—A PRBS
signal is generated by the near-end TAM/DICM and it is monitored by the far-end TAM/DICM. This
test verifies the functioning of the Digital Channel between two TAM-2-10GTs or DICM-T-2-10GTs
installed in the customer network and provider network (see Figure 2-43: PRBS Tests Supported
by TAM-2-10GT and DICM-T-2-10GT on page 2-58).
■ Tributary DTF Path-level PRBS test (TAM-2-10GT and DICM-T-2-10GT only)—A PRBS signal is
generated by the near-end TAM/DICM and it is monitored by the far-end TAM/DICM. This test


verifies the digital path, such as in a Layer 1 OPN in a provider network (see Figure 2-43: PRBS
Tests Supported by TAM-2-10GT and DICM-T-2-10GT on page 2-58).
See Rules for Performing PRBS Tests on a Tributary DTF Path (TAM-2-10GT and DICM-T-2-10GT) on
page 2-58 for information on performing PRBS tests on the TAM-2-10GT and DICM-T-2-10GT.

Figure 2-43 PRBS Tests Supported by TAM-2-10GT and DICM-T-2-10GT

Rules for Performing PRBS Tests on a Tributary DTF Path (TAM-2-10GT and DICM-T-2-10GT)
The following rules apply to generating and monitoring PRBS test signals on a Tributary DTF Path for
TAM-2-10GT and DICM-T-2-10GT:
■ These rules are applicable only for 2.5G DTPs.
■ PRBS generation and monitoring on a 2.5G DTP cannot be enabled unless all the DTPCTPs on
that facility are in maintenance (a conceptual check is sketched after this list). That is, to enable
PRBS on 1-A-3-T1-1-1, each of 1-A-3-T1-1-1, 1-A-3-T1-1-2, 1-A-3-T1-1-3, and 1-A-3-T1-1-4 (if they
are present) must be in maintenance.
■ The Administrative state on any 2.5G DTP can be set to unlocked only if PRBS generation and
monitoring are disabled on all the DTPs on this facility.
■ If the PRBS generation/monitoring (and maintenance state) is set from the template and then the
DTPCTP object is created, PRBS will be enabled only if the above mentioned rules are satisfied.
Otherwise, the created DTP object will be put into maintenance.
■ If the PRBS (generation/monitoring) is enabled on the first DTP and a second facility is created
later, it will be forced to maintenance state irrespective of template configuration.
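The first rule amounts to a precondition check across every DTPCTP on the facility, as the conceptual
Python sketch below shows; the data model used here is an assumption for illustration only:

def can_enable_prbs(dtpctp_admin_states):
    """dtpctp_admin_states: admin state of every DTPCTP on the facility,
    e.g. {"1-A-3-T1-1-1": "maintenance", "1-A-3-T1-1-2": "unlocked"}."""
    return all(state == "maintenance" for state in dtpctp_admin_states.values())

print(can_enable_prbs({"1-A-3-T1-1-1": "maintenance",
                       "1-A-3-T1-1-2": "maintenance"}))   # True
print(can_enable_prbs({"1-A-3-T1-1-1": "maintenance",
                       "1-A-3-T1-1-2": "unlocked"}))      # False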

Test Signal for Fibre Channel Clients


Fibre Channel services support tributary-side and line-side test signal generation:
■ Tributary-side test signal—A test pattern that is generated towards the client from the network.
■ Line-side test signal—A test pattern that is generated towards the network.
On the DTN, 1GFC, 2GFC, 8GFC, and 10GFC services on the TAM-8-2.5GM, TAM-2-10GM, and DICM-
T-2-10GM support test signal generation only (monitoring is not supported and a third-party test set is
required). On the DTN, the test signal generation mode can be set to Compliant Random Pattern
(CRPAT) or it can be disabled.


On the DTN-X, 8GFC and 10GFC services on the TIM-5-10GX and TIM-5-10GM support both test signal
generation and monitoring.

Note: Since the Fibre Channel test signal affects normal data traffic flow, it must be used only when
the associated facility is in administrative maintenance state.

GbE Client Termination Point Tests


A GbE Client Termination Point (CTP) test signal is used to diagnose and isolate trouble spots for
the GbE clients on Infinera nodes without requiring a valid data signal or customer traffic. The test signal
is based on IEEE 802.3 82.2.17 error checking.

Note: Since the GbE test signal affects normal data traffic flow, it must be used only when the
associated facility is in administrative maintenance state.

For DTC/MTC clients, the GbE CTP test signal is available only for the 1GbE ports on the TAM-8-1G and
TAM-8-2.5GM; for 10GbE ports on the TAM-2-10GM and DICM-T-2-10GM; for 40GbE ports on the
TAM-1-40GE and TAM-1-40GR; and for 100GbE ports on the TAM-1-100GE and TAM-1-100GR.
For XTC clients, GbE CTP test signals are supported by all Ethernet ports.

Note: GbE CTP test signal monitoring (both tributary and line) is disabled for TIM-1-100GM and
TIM-1-100GX when in 100GbE-ODU4-4i-2ix10V mode.

There are two types of GbE CTP tests:


■ Tributary-Side Client Test—A test signal is generated by the GbE CTP toward the client equipment.
This test signal can be monitored by an external test set or by creating a loopback and enabling
tributary-side monitoring on the same GbE CTP.
■ Line-Side Client Test—A test signal is generated by the GbE CTP toward the Infinera network
where it is monitored by one of the following:
□ An external test set.
□ A corresponding GbE CTP on a far-end TAM, if the corresponding GbE CTP has been
enabled for line-side test signal monitoring.
□ The same GbE CTP that is generating the test signal, if the GbE CTP has been enabled for
line-side monitoring and if a loopback has been created at the far end.
Note the following limitations for GbE CTP tests:
■ The TIM-1-40GE and TIM-1-100GE/TIM-1B-100GE do not support monitoring of GbE client
termination point tests. These TIMs can generate GbE client termination point test signals, but an


external test set is required to monitor the signal. Alternatively, PCS faults and PCS PMs can be
used to verify end to end data path.
■ The TAM-1-100GE, TAM-1-100GR, TAM-1-40GE, and TAM-1-40GR do not support monitoring of
line-side GbE client termination point tests. These TAMs can generate tributary-side GbE client
termination point test signals, but an external test set is required to monitor the signal.
■ The TAM-8-2.5GM, TAM-1-100GE, TAM-1-100GR, TAM-1-40GE, and TAM-1-40GR do not support
monitoring of tributary-side GbE client termination point tests. These TAMs can generate tributary-
side GbE client termination point test signals, but an external test set is required to monitor the
signal.
■ For monitoring of GbE client termination point tests for TIM-5-10GM, XICM-T-5-10GM,
TIM-5B-10GM, and TIM-5-10GX, note that PM counts for both Test Signal Sync Errors and Test
Signal Out of Sync Errors will increment at the same time.
■ For information on generating and monitoring GbE CTP test signals on the TAM-8-1G, see the
following section, Rules for Performing 1GbE Client Termination Point Tests on the TAM-8-1G on
page 2-60.

Rules for Performing 1GbE Client Termination Point Tests on the TAM-8-1G
The ports on the TAM-8-1G are divided into two physical port sets: {1a, 1b, 2a, 2b} and {3a, 3b, 4a, 4b}.
The following rules apply to generating and monitoring the test signals on the TAM-8-1G port sets:
Monitoring GbE Test Signals:
■ Only one port in each port set can be configured to monitor the line-side test signal at any one time.
■ Only one port in each port set can be configured to monitor the tributary-side test signal at any one
time.
■ Although it is not possible to configure two ports in a port set to monitor the same side (tributary or line) at the same time, it is possible to configure one port in a port set to monitor one side while another port (or even the same port) monitors the other side. That is, one port can monitor the tributary side while another port in the port set monitors the line side.
Generating GbE Test Signals:
■ Any number of ports in a port set can simultaneously generate a line-side test signal, even while
one or more ports in the same port set are generating tributary-side test signals.
■ Any number of ports in a port set can simultaneously generate a tributary-side test signal, even
while one or more ports in the same port set are generating line-side test signals.
Monitoring and Generating GbE Test Signals at the Same Time:
■ It is possible for a port to monitor the test signal from one direction at the same time that one or more ports in the port set are generating test signals in the same direction. That is:
□ It is possible for a port to monitor the tributary-side test signal when one or more ports in the
port set are generating tributary-side test signals.
□ It is possible for a port to monitor the line-side test signal when one or more ports in the port
set are generating line-side test signals.


■ It is not possible for a port to monitor the test signal from one direction when one or more ports in the port set are generating test signals in the other direction. That is:
□ It is not possible for a port to monitor the tributary-side test signal when one or more ports in the port set are generating line-side test signals.
□ It is not possible for a port to monitor the line-side test signal when one or more ports in the port set are generating tributary-side test signals. (These rules are summarized in the sketch below.)
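The port-set rules above can be summarized in a short validation sketch. The following Python fragment is illustrative only (the function, data structure, and names are not part of IQ NOS); it assumes each port's current test-signal state is tracked as sets of sides being monitored and generated.

# Illustrative sketch of the TAM-8-1G port-set rules described above.
# The data structures and function are hypothetical, not an IQ NOS interface.

PORT_SETS = ({"1a", "1b", "2a", "2b"}, {"3a", "3b", "4a", "4b"})

def can_enable_monitor(port, side, state):
    """Return True if 'port' may monitor the given side ('trib' or 'line').

    'state' maps port name -> {"monitor": set of sides, "generate": set of sides}.
    """
    port_set = next(s for s in PORT_SETS if port in s)
    other_side = "line" if side == "trib" else "trib"
    for peer in port_set:
        peer_state = state.get(peer, {"monitor": set(), "generate": set()})
        # Only one port per port set may monitor a given side at a time.
        if peer != port and side in peer_state["monitor"]:
            return False
        # Monitoring one direction is not allowed while any port in the
        # set is generating a test signal in the other direction.
        if other_side in peer_state["generate"]:
            return False
    return True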

Trace Messaging
Trace messaging is a non-intrusive diagnostic tool that provides Trail Trace Identifier (TTI) functionality to allow detection and validation of peer nodes/devices connected over the fiber. If a mismatch is detected between the expected TTI and the received TTI, an alarm is raised to indicate the mismatch.
The XT supports TTI with the following extensions: Transmit TTI, Expected TTI, and Received TTI on:
■ OCG layer (for XT-500S)
■ SCG layer (for XT-500F)
■ Digital Wrapper layer (for XT-3300 and XTS-3300)
The DTN-X and DTN support the trace messaging functions as described below (see Figure 2-44: Trace
Messaging on page 2-62):
■ Trace messaging at the SONET/SDH J0 on the tributary ports.
The DTN-X and DTN provide the capability to monitor and transmit J0 messages received from the client equipment. This capability enables the detection of misconnections between the client equipment and the DTN/DTN-X. The DTN/DTN-X can monitor 1-, 16-, and 64-byte J0 trace messages. The DTN/DTN-X can either transparently pass on the J0 message, or it can receive and then overwrite the incoming J0 message before transmitting the message toward the client interface. The J0 message can be configured to comply with either the ITU or the GR-253 standard.

Note: For OC-768 and STM-256 services on the TIM-1-40GM, transparent J0 trace messaging
is supported; J0 overwrite is not supported.

Note: J0 trace messaging is not supported for SONET/SDH services on the TIM-16-2.5GM.

■ Trail trace identifier (TTI) trace messaging at the DTF Section and DTF Path.
The DTN supports DTF Section trace messaging to detect any misconnections between the DTNs
within a digital link, and DTF Path trace messaging is utilized to detect any misconnections in the
DTN circuit path along the Intelligent Transport Network. The DTF trace messaging is independent
of the client signal payload type.
■ TTI trace messaging on ODUk, ODUki, OTUk, and OTUki.


The DTN-X and DTN support trace messaging to detect any misconnections between the DTN-Xs
within a digital link. Both DTN and DTN-X support TTI for ODUk and OTUk; the DTN-X supports
TTI for ODUki and OTUki for services on the XTC.
■ TTI trace messaging on the optical channel for DLM, XLM, ADLM, AXLM, and SLM.
The optical channels on the line modules support TTI monitoring and insertion between line modules over a fiber. This test verifies the functioning of the optical channel between two line modules installed in the network.
■ J1 Path trace messaging over the OSC.
BMMs, OAMs, and ORMs on the DTN-X and DTN support J1 Path trace messaging in order to discover and continuously monitor the link connectivity between adjacent neighbor network elements. J1 Path trace messages are continuously transmitted over the OSC, using the format “/<NodeID>/<OTS TP AID>” (64 characters in length, padding unused bytes with ASCII null characters, and terminated with “<CR><LF>”). BMMs, OAMs, and ORMs support J1 Path trace even when the OSC IP address is not configured, when GMPLS is disabled, or when the BMM/OAM/ORM is in the maintenance or locked state. J1 Path trace information is not available when there is a fiber cut (OTS OLOS condition), when there is an OSC Loss of Communication condition, or when the BMM/OAM/ORM is pre-provisioned. J1 Path trace is not supported on RAMs, as RAMs do not terminate the OSC.
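As a worked illustration of the J1 format described above, the sketch below builds the 64-byte trace string from a node ID and OTS TP AID. It is not an IQ NOS API, and it assumes the <CR><LF> terminator occupies the last two of the 64 bytes.

# Illustrative construction of a J1 Path trace message in the format noted
# above: "/<NodeID>/<OTS TP AID>", padded with ASCII NUL characters and
# terminated with <CR><LF>. The placement of the terminator within the
# 64 bytes is an assumption of this sketch.

J1_LENGTH = 64

def build_j1_trace(node_id: str, ots_tp_aid: str) -> bytes:
    body = "/{}/{}".format(node_id, ots_tp_aid).encode("ascii")
    payload_len = J1_LENGTH - 2              # reserve two bytes for CR LF
    if len(body) > payload_len:
        raise ValueError("NodeID and OTS TP AID are too long for 64 bytes")
    return body.ljust(payload_len, b"\x00") + b"\r\n"

# Example with hypothetical identifiers:
# build_j1_trace("SIM070191001", "OTS=1-A-1")  -> a 64-byte message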

Figure 2-44 Trace Messaging

In addition to the above messaging capabilities, the system also supports trace messaging on the line-
side Digital Channel (DCh) of the LM-80:
■ Digital Channel (DCh) TTI messaging (LM-80 only)—The digital channel on the LM-80 supports TTI
monitoring and insertion between LM-80s over a fiber. This test verifies the functioning of the digital
channel between two LM-80s installed in the network (see Figure 2-45: DCh Trace Messaging
Supported by the LM-80 on page 2-63).


Figure 2-45 DCh Trace Messaging Supported by the LM-80

Lastly, the system also supports specialized TTI messaging on the TAM-2-10GT and DICM-T-2-10GT:
■ TTI messaging at the Digital Channel (DCh) Section level (TAM-2-10GT and DICM-T-2-10GT only;
see Figure 2-46: Trace Messaging Supported by the TAM-2-10GT and DICM-T-2-10GT on page 2-
63).
The TTI on the Digital Channel level supports TTI monitoring and insertion between TAMs/DICMs
over a fiber.
■ TTI messaging at the Tributary DTF Path level (TAM-2-10GT and DICM-T-2-10GT only; see Figure
2-46: Trace Messaging Supported by the TAM-2-10GT and DICM-T-2-10GT on page 2-63).
The DTF Path-level TTI supports monitoring and insertion between TAMs/DICMs, such as over a
Layer 1 OPN.

Note: TTI insertion toward the client side is not supported, and the tributary-side TTI cannot be
monitored when the line-side TTI transmission is enabled.

Figure 2-46 Trace Messaging Supported by the TAM-2-10GT and DICM-T-2-10GT

Source and Destination Access Point Identifiers (SAPI and DAPI)


The DTN supports configuration of the TTI mismatch detection mode for ODUk and OTUk clients and for
ODUkT Tandem Connections on the TAM-2-10GM, DICM-T-2-10GM, and TAM-8-2.5GM, so that the
Source Access Point Identifier (SAPI) or the Destination Access Point Identifier (DAPI) or both can be
used to compare the expected TTI message with the received TTI message. Therefore, the expected SAPI and DAPI messages are configurable independently of one another.


The DTN-X supports the following SAPI and DAPI directions for the following entities:
■ Tributary ODUk terminal SAPI/DAPI (towards the network).
■ Line ODUk terminal and facility SAPI/DAPI (towards the tributary and towards the network), when
the ODUk is in non-intrusion mode.
■ Line ODUki facility SAPI/DAPI (towards the network).
■ Line OTUki facility SAPI/DAPI (towards the network).
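The mismatch-detection modes described above can be pictured with a small sketch. The code below is hypothetical (the field extraction and mode names are assumptions, not IQ NOS attribute values); it shows how the expected and received TTI would be compared using the SAPI, the DAPI, or both.

# Illustrative TTI mismatch check: compare the SAPI and/or DAPI fields of
# the expected and received TTI according to the configured detection mode.
# Mode names and the dict-based representation are assumptions.

def tti_mismatch(expected, received, mode="SAPI_AND_DAPI"):
    """expected/received are dicts with 'sapi' and 'dapi' strings."""
    fields = {
        "SAPI": ("sapi",),
        "DAPI": ("dapi",),
        "SAPI_AND_DAPI": ("sapi", "dapi"),
    }[mode]
    return any(expected[f] != received[f] for f in fields)

# Example: with mode "DAPI", only a differing DAPI raises a mismatch.
# tti_mismatch({"sapi": "A", "dapi": "B"}, {"sapi": "X", "dapi": "B"}, "DAPI") -> False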

Tandem Connection Monitoring (TCM)


The DTN supports user-created ODUk tandem connection monitoring (TCM) facilities, which are logical facilities created by the user to retrieve performance monitoring data in order to diagnose faults on segments of an ODU path.
The monitoring mode of the TCM can be configured by the user:
■ Non-intrusive monitoring, in which PMs are monitored but nothing else is changed; the signal is passed on as-is.
■ Limited-intrusive monitoring, in which any monitoring attribute can be set by the user, including the transmit TTI, for which the user setting or the default blank string is inserted by the node.
■ Intrusive monitoring, in which the whole overhead is removed and replaced, along with any user-entered TTI overwrite.

Note: The DTN allows a maximum of 3 TCM CTPs per side (facility side and terminal side). As a result, the node allows up to 6 total ODUkT CTPs for a given ODUk client CTP. For example, the user can activate TCM IDs 1, 4, and 6 on the FAC side and TCM IDs 2, 3, and 5 on the TERM side (but no more than three TCM CTPs per side).
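A minimal sketch of the limit described in the note, with hypothetical names, is shown below; it simply rejects activation of a fourth ODUkT CTP on either side of a given ODUk client CTP.

# Illustrative check of the TCM limit noted above: at most three ODUkT CTPs
# per side (FAC or TERM), i.e. at most six in total per ODUk client CTP.
# The data structure and function names are hypothetical.

MAX_TCM_PER_SIDE = 3

def can_activate_tcm(active_tcms, side, tcm_id):
    """active_tcms: e.g. {"FAC": {1, 4, 6}, "TERM": {2, 3}}; side: "FAC" or "TERM"."""
    if tcm_id not in range(1, 7):            # TCM IDs 1 through 6
        return False
    return len(active_tcms.get(side, set())) < MAX_TCM_PER_SIDE

# Example: with TCM IDs 1, 4, 6 already active on the FAC side, a fourth
# FAC-side activation is rejected while TERM-side activations are allowed.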

Path Loss Check for MPO Connections


To detect any faults or excessive loss in the dataplane connectivity over MPO connectors used between
SCG ports, the FRM-9D, FRM-20X, FSM, and FSE support unidirectional path loss check operations.
Once initiated by the user, the path loss check measures the difference between the transmitted and
received values (even if both Tx and Rx are on the same port in the case of a loopback). The results of
the path loss check are displayed in the SCG Properties window (Diagnostics tab) in GNM/DNA, or in the
response parameters of the RTRV-SCG command via TL1.
A path loss check can be operated using an MPO loopback connector in order to detect cases where the
MPO cable itself needs to be replaced, or for cases where there is no peer SCG port connected, as
shown in the figures below.

Note: Path loss checks are not supported for SCG ports that are associated with an optical cross-
connect or an optical SNC.
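Conceptually, the result of a path loss check is the difference between the transmitted and received power values. The sketch below illustrates interpreting that result; the threshold is an arbitrary example and not an Infinera specification.

# Illustrative interpretation of a path loss check result. The acceptable
# loss threshold here is an arbitrary example value.

def evaluate_path_loss(tx_power_dbm, rx_power_dbm, max_acceptable_loss_db=3.0):
    loss_db = tx_power_dbm - rx_power_dbm
    return loss_db, loss_db <= max_acceptable_loss_db

# Example: a port transmitting at -3.0 dBm that receives -5.2 dBm back on an
# MPO loopback measures about 2.2 dB of path loss, within the example limit.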


In addition to running a path loss check on an individual port, path loss checks can be initiated on a per-
module level or on a per-node level:
■ Per module—A path loss check is initiated on an FRM or FSM, so that a path loss check is run for
each SCG on the module that supports the path loss check operation (listed in the table below).
Per-module initiation is supported from management interfaces. Note the following for per-module
path loss checks:
□ The per-module path loss check is not supported if any SCG on the module is already
running a path loss check.
□ Once a per-module path loss check is initiated, it cannot be aborted.
□ For FSMs, if the FSM is equipped with an FSE, path loss checks will also be run on any
applicable SCGs on the FSE. (Note that the per-module path loss checks cannot be initiated
directly on an FSE, but must be initiated on the FSM that contains the FSE.)
■ Per node—A path loss check is initiated on a nodal level, so that a path loss check is run on each
SCG that supports the path loss check operation for all the FRMs, FSMs, and FSEs on the node.
Per node initiation is supported on GNM and DNA. Per node path loss check initiation is supported
by all node types that support the MTC-9/MTC-6 chassis.
The table below lists the SCG connections for which path loss check is supported. The figures below the
table illustrate the different path loss checks supported for the different configurations.

Table 2-2 Connections Supporting Path Loss Check

■ From the FRM-20X System SCG port (FRM SCG PTP) to the FRM-20X System SCG port on another FRM (FRM SCG PTP): Verify FRM-20X to FRM-20X express connectivity via the FSP-E.
■ From the FRM-9D System SCG port (FRM SCG PTP) to the FRM-9D System SCG port on another FRM (FRM SCG PTP): Verify FRM-9D to FRM-9D express connectivity via the FSP-E.
■ From the FRM-9D System SCG port (FRM SCG PTP) to the FSM Line SCG port (FSM SCG PTP): Verify connectivity between the FRM-9D and the FSM via the FSP-S, or between the FRM-9D and the FSE via the FSP-S.
■ From the FRM-9D System SCG port (FRM SCG PTP) to the same FRM-9D System SCG port (itself): Verify connectivity between the FRM-9D and the FSP-E via MPO loopback at the FSP-E; between the FRM-9D and the FSP-C or FMP-C via LC loopback at the FSP-C/FMP-C; or between the FRM-9D and the FSP-S via MPO loopback at the FSP-S.
■ From the FBM Line SCG port (FBM SCG PTP) to the FRM-9D or FRM-20X System SCG port (FRM SCG PTP): Verify connectivity between the FBM and the FRM-9D/FRM-20X via FSP-CE-2MPO-1LC/FSP-CE-2MPO-2LC or MPO-MPO connectors.
■ From the FSM Line SCG port (FSM SCG PTP) to the FRM-9D System SCG port (FRM SCG PTP): Verify connectivity between the FSM and the FRM-9D via the FSP-S.
■ From the FSM Line SCG port (FSM SCG PTP) to the same FSM Line SCG port (itself): Verify connectivity between the FSM and the FSP-S via MPO loopback at the FSP-S.
■ From the FSM Base port (FSM BASE SCG PTP) to the FSE Expansion port (FSE EXPN SCG PTP): Verify connectivity between the FSM and the FSE.
■ From the FSE Line SCG port (FSE SCG PTP) to the FRM-9D System SCG port (FRM SCG PTP): Verify connectivity between the FSE and the FRM-9D via the FSP-S.
■ From the FSE Line SCG port (FSE SCG PTP) to the same FSE Line SCG port (itself): Verify connectivity between the FSE and the FSP-S via MPO loopback at the FSP-S.
■ From the FSE Expansion port (FSE EXPN SCG PTP) to the FSM Base port (FSM BASE SCG PTP): Verify connectivity between the FSE and the FSM.

Figure 2-47: Path Loss Check for FRM-9D to FSP-C/FMP-C Connectivity on page 2-66 shows an
example of FRM-9D to FSP-C/FMP-C connectivity.

Figure 2-47 Path Loss Check for FRM-9D to FSP-C/FMP-C Connectivity

The following path loss check is supported for this type of configuration:
■ FRM-9D System SCG port to FSP-C or FMP-C (via loopback on the FSP-C/FMP-C)
Figure 2-48: Path Loss Check for FSM/FSE to FSP-S to FRM-9D Connectivity on page 2-67 shows an
example of FSM/FSE to FSP-S to FRM-9D connectivity.


Figure 2-48 Path Loss Check for FSM/FSE to FSP-S to FRM-9D Connectivity

The following path loss checks are supported for this type of configuration:
■ FSM line SCG port to FRM-9D system SCG port
■ FRM-9D system SCG port to FSM line SCG port
■ FSE line SCG port to FRM-9D system SCG port
■ FRM-9D system SCG port to FSE line SCG port
■ FSM line SCG port (loopback at the FSP-S)
■ FSE line SCG port (loopback at the FSP-S)
■ FRM-9D system SCG port (loopback at the FSP-S)
Figure 2-49: Path Loss Check for FRM-9D/FRM-20X to FSP-E to FRM-9D/FRM-20X Connectivity on
page 2-67 shows an example of FRM-9D/FRM-20X to FSP-E to FRM-9D/FRM-20X connectivity.

Figure 2-49 Path Loss Check for FRM-9D/FRM-20X to FSP-E to FRM-9D/FRM-20X Connectivity

The following path loss checks are supported for this type of configuration:
■ FRM-9D/FRM-20X (labeled “A”) system SCG port to FRM-9D/FRM-20X (labeled “B”) system SCG
port
■ FRM-9D/FRM-20X (labeled “B”) system SCG port to FRM-9D/FRM-20X (labeled “A”) system SCG
port


■ FRM-9D/FRM-20X (labeled “A”) system SCG port (loopback at the FSP-E)


■ FRM-9D/FRM-20X (labeled “B”) system SCG port (loopback at the FSP-E)
Figure 2-50: Path Loss Check for FSM to FSE Connectivity on page 2-68 shows an example of FSM to
FSE connectivity.

Figure 2-50 Path Loss Check for FSM to FSE Connectivity

The following path loss checks are supported for this type of configuration:
■ FSE expansion SCG port to FSM base SCG port
■ FSM base SCG port to FSE expansion SCG port

Note: For a path loss check from the FSM base SCG port to the FSE expansion SCG port, an external power source is required; therefore, the FSM tributary (add/drop) SCG port must be associated with an AOFM/AOFX/SOFM/SOFX, and there should be no LOS condition on the FSM tributary port. Furthermore, the FSM tributary (add/drop) SCG port must also be set to the “Path Loss Check Source” mode (TRAFFICMOD=PATHLOSSCHECKSOURCE in TL1). At any given time, only one of an FSM’s add/drop channels can be configured as a path loss check source.

Optical Time Domain Reflectometer (OTDR) Testing


The Optical Time Domain Reflectometer Module (OTDM) performs optical time domain reflectometer
(OTDR) testing for line fibers on the FlexILS line system via the OTDR ports on the IAM-2/IRM (OTDR
ports are not supported on IAM-1). The OTDM OTDR PTP port can be associated to the Amplifier OTDR
PTP on an IAM-2/IRM, or the OTDM can operate in standalone mode.
The OTDR test is manually started and stopped by a user via the OTDM OTDR PTP object. The OTDR
test results are stored in the OTDRTESTRESULT and OTDRKEYEVENTLIST managed objects on the
node, and also in a “.sor” file which can be uploaded to an external server.
The parameters for OTDR test, such as acquisition mode, pulse width, and test detection range, can be
modified via the OTDM OTDR PTP object.
Note the following for OTDR tests:
■ There is no restriction on the IAM-2/IRM OTDR ports to which the OTDM’s four ports are
connected: The four OTDM ports can be connected to four different IAM-2/IRM modules, or two of

Overview Guide Release 20.0 V001 Infinera Corporation

Infinera Proprietary and Confidential


Fault Management 2-69

the OTDM ports can be connected to the same IAM-2/IRM. The connected IAM-2/IRM modules
can be on the same chassis as the OTDM, or on a different chassis.
■ OTDR tests can be started on any one of the four OTDM ports, provided that the OTDM is not in
the Locked administrative state.
■ If one port of an OTDM is running a test, the other ports cannot start a new test.
■ For tests where the OTDM OTDR PTP port is associated to an Amplifier OTDR PTP on an IAM-2/IRM, the Internal Spool Length of the IAM-2/IRM is compensated in the test.
■ OTDR tests are aborted in case of control module switchover, physical removal of the OTDM, cold/
warm reboot of the OTDM, or chassis disconnectivity from the node controller.


Syslog
The Syslog feature provides standards-based autonomous notification services. It can be used for troubleshooting and by analytics applications. Syslog is supported on the following IQ NOS chassis types:
■ XTC-10, XTC-4, XTC-2, and XTC-2E
■ MTC-9 and MTC-6
■ XT-3300 and XT-3600
■ DTC
The following figure shows an example Syslog deployment scenario. In a GNE-SNE setup, a Syslog server (Syslog Host) sends requests for Syslog messages to the network element, and the network element responds by sending autonomous Syslog notifications based on the request. Up to three Syslog servers can be configured. In the current release, Splunk v6.5.1 has been certified as the Syslog server.

Figure 2-51 Example scenario of Syslog Deployment

Syslog has the following features:
■ Supports standards-based Syslog functions per IETF RFC 5424
■ Syslog deployment can be enabled or disabled through the management interfaces
■ Syslog servers can be configured through all management interfaces
■ Syslog notifications are transported with UDP over IPv4/IPv6 (a minimal receiver sketch follows)
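Because notifications are carried as UDP datagrams, a simple collector can be sketched as a UDP socket listener, as below. This is illustrative only; the bind address and port are assumptions (UDP port 514 is the conventional syslog port), and a production deployment would normally use a full syslog server such as the certified one noted above.

# Minimal sketch of a UDP listener for Syslog notifications. Each datagram
# carries one RFC 5424 formatted message. Bind address and port are
# assumptions for illustration.

import socket

def listen_for_syslog(bind_addr="0.0.0.0", port=514):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_addr, port))
    while True:
        datagram, (src_ip, _src_port) = sock.recvfrom(8192)
        print(src_ip, datagram.decode("utf-8", errors="replace"))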
IQ NOS R19.0 introduces support for Syslog over Transport Layer Security (TLS) v1.2. The TLS Syslog protocol enables the transport of Syslog messages over a secure, encrypted channel. The Rsyslogd server (RFC 3164 format) is configured to read the Syslog messages (RFC 5424 format). The local and peer TLS certificates must be imported or installed on the network element and the Rsyslogd server.


The Rsyslogd server is available by default with the Linux OS. Starting with Release 19.0, TLS is supported as a Syslog transport protocol along with UDP. TLS enables log sources to receive encrypted Syslog events from network elements.
For Rsyslog Server, below are details of the certificates:
■ Local Certificates - server_certificate.p12
■ Peer Certificates - CaDer.p7b
For Syslog Server, below are details of the certificates:
■ CAFile: ca.crt
■ CertFile: Server.crt
■ KeyFile: Server.key

Syslog message
A typical Syslog message includes information to identify the origination, destination, timestamp, and the reason why the log is sent. Each log also has a severity level field to indicate the importance of the message. For more information, refer to the DTN and DTN-X Alarm and Trouble Clearing Guide.
An example Syslog notification message is shown below.
1 2017-02-07T010:40:52.0Z 10.220.70.195 - - ALARM [notification@21296
SourceName="/SIM070191001/CXOCGPTP=1-L1" EventSubType="ALARM" EventType=
"Condition" LogId="3030" LogType="Event" UserInformation="" PerceivedSeverity=
"Minor" AssertedSeverity="Minor" ProbableCause="TIM-OCG" Category="Facility"
ServiceAffecting="NSA" AdditionalText="" ProbableCauseDescription="OCG TTI
Mismatch" CircuitIdInfo="" Location="NearEnd" CurrentThreshold="0" Direction=
"Receive" SentryId="0" MoId="/SIM070191001/ALARM=CXOCGPTP%1-L1%TIM-OCG"
UniqueID="FAC0471" arcEnabledAlarm="false" FaultConditions="8192" SnmpIndex=
"4194305" AlarmCorrelationId="3855" NotificationId="3855"] BOM TIM-OCG-OCG
TTI Mismatch
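As an illustration of how a troubleshooting or analytics application might consume such a notification, the hypothetical sketch below extracts selected structured-data parameters (for example UniqueID and PerceivedSeverity) from an RFC 5424 message like the one above, assuming the message is received on a single line as transmitted.

# Illustrative extraction of structured-data parameters from an RFC 5424
# Syslog notification such as the example above. This is a simplified
# parse, not a complete RFC 5424 implementation.

import re

def parse_sd_params(message: str) -> dict:
    # Collect NAME="VALUE" pairs from the structured-data element.
    return dict(re.findall(r'(\w+)="([^"]*)"', message))

# Example usage against the notification shown above:
# params = parse_sd_params(raw_message)
# params["UniqueID"]          -> "FAC0471"
# params["PerceivedSeverity"] -> "Minor"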

Note: Every alarm raised by the network element can be identified by the value of the Unique ID field in the Syslog message. For more details on the alarm descriptions and associated troubleshooting procedures, refer to the DTN and DTN-X Alarm and Trouble Clearing Guide, which describes all events and alarms indexed by Unique ID.



CHAPTER 3

Configuration and Management

IQ NOS provides the following configurations to manage the Infinera network elements. For information on Node Configurations and Applications, see the DTN and DTN-X System Description Guide.
■ Equipment Management and Configuration on page 3-2
■ Migrating BMM-based line systems to FRM-based line systems on page 3-60
■ Migrating a DTN or Optical Amplifier to a DTN-X on page 3-58


Equipment Management and Configuration


IQ NOS provides extensive equipment inventory, management and configuration capabilities, modeled
after telecommunications standards such as Telcordia GR-1093, TMF 814, and ITU-T M.3100. IQ NOS
provides the following features to manage Infinera network elements:
■ Shelf controller behavior for managing the equipment and logical termination points on multi-
chassis nodes (see Shelf Controller Behavior on page 3-2 in the following section)
■ Ability to manage the hardware equipment, physical port and logical termination points by software
abstraction as managed objects (see Managed Objects on page 3-3)
■ Automatic equipment discovery and inventory (see System Discovery and Inventory on page 3-
18) including:
□ Circuit pack Auto-discovery (see Circuit Pack Discovery on page 3-19)
□ Optical data plane Auto-discovery (see Optical Data Plane Auto-discovery on page 3-20)
□ Target power offset (see Required Number of Effective Channels on page 3-32).
■ Circuit pack configuration (see Equipment Configuration on page 3-35) including:
□ Circuit pack pre-configuration
□ Circuit pack auto-configuration
■ GR-1093 and TMF-814 compliant state management (see State Modeling on page 3-36)
■ Ability to configure the behavior of client tributaries in case the tributary is locked or faulted (see
Tributary Disable Action on page 3-41)
■ Configuration of Reed-Solomon Forward Error Correction (RS-FEC) capability on XT (see Forward
Error Correction Configuration for XT on page 3-49)
■ Link Layer Discovery Protocol (LLDP) on XT (see Link Layer Discovery Protocol (LLDP) for XT on
page 3-50)
■ Ability to configure the encapsulated client disable action for certain TIMs and TAMs (see
Encapsulated Client Disable Action on page 3-46)
■ Power draw PM measurement (see Power Draw Reporting (MTC-9 and MTC-6) on page 3-51)
■ Configuration of maximum power draw limit and reporting of the current estimated power draw for
the system (see Power Draw of Equipment on page 3-52)
■ Disabling of the OSC and, for links with Raman amplification, disabling of the Raman pilot laser
(see OSC and Raman Pilot Laser Disabling on page 3-52)

Shelf Controller Behavior


The controller modules of an Expansion Chassis are responsible for managing the objects on that chassis (as opposed to the controller modules in the Main Chassis being responsible for managing all of the objects in the entire node).


Note the following behaviors and considerations:


■ Because managed object data resides on each respective chassis, the chassis must be physically
present and reachable by the management interfaces in order to retrieve equipment inventory from
the chassis, or to pre-provision equipment, services, and associations (such as between base/
expansion BMMs, or between RAMs and BMMs on separate chassis).
■ Equipment installed on an unreachable chassis will not be retrieved by the GNM, TL1, and SNMP
management interfaces. DNA will display the last-cached equipment inventory, if any.
■ An Expansion Chassis can be pre-provisioned, but equipment and services cannot be pre-
provisioned on the chassis unless the chassis is physically present.
■ An Expansion Chassis cannot be deleted if the chassis is unreachable by the management interfaces. If the chassis is not deleted and remains unreachable, the associations, cross-connects, SNCs, etc. on the chassis can be deleted only with a fresh install of the node software, which will impact service. To avoid affecting service, do one of the following:
□ Ensure the Expansion Chassis is deleted while it is still reachable, or
□ Restore connectivity to the Expansion Chassis and then delete the chassis
■ The following must be performed before a software upgrade from pre-Release 7.0 software to
software of Release 8.1 or higher:
□ If there are pre-provisioned chassis, delete all the equipment in the pre-provisioned chassis,
then delete the pre-provisioned chassis.
□ Verify that all the Expansion Chassis are reachable.

Note: It is highly recommended to delete all pre-provisioned chassis and verify the reachability of all
chassis before initiating a software upgrade.

The PM data for the modules in a chassis are stored by the controller module on that same chassis.
■ If an Expansion Chassis has redundant controller modules, PM data is replicated on both the active
and standby controller modules.
■ If an Expansion Chassis does not have redundant controller modules, PM data is replicated to the
active controller module on the Main Chassis, and also to the standby controller module on the
Main Chassis (if the Main Chassis has redundant controller modules).
□ If the non-redundant controller module on the Expansion Chassis is replaced, the new
controller module will download the PM data from the active controller module on the Main
Chassis.
□ If a redundant controller module is subsequently installed on the Expansion Chassis, the PM
data will be deleted from the controller modules on the Main Chassis.

Managed Objects
IQ NOS defines software abstraction of all the hardware equipment, physical ports and logical termination
points, referred to as managed objects which are administered through the management applications.


Managed objects are modeled after the ITU-T and TMF general information modeling standards, which
provide an intuitive and convenient means to reference the managed objects.
The figures below illustrate the most commonly used managed objects in each type of Infinera chassis
and the hierarchical relationship between them.
As shown, there are three major categories: hardware equipment, physical ports, and logical termination
points which represent the termination of signals. Users can create and delete the equipment managed
objects while the physical port and logical termination points are automatically created with default
attributes when the parent equipment managed object is created. Users can modify the attributes of the
auto-created managed objects through the management applications. Note that multi-chassis network
elements are managed as single objects.
User operations, such as modifying the administrative state (see Administrative State on page 3-36) and
modifying the alarm reporting state (see Alarm Reporting Control (ARC) on page 2-7), of a given
managed object impact the behavior of the corresponding contained and supported/supporting managed
objects. For example, when a user modifies the administrative state of a BMM to locked, the service state
of the contained and supported managed objects, DCF, C-band, OCG, OSC, GMPLS link, etc., is
changed to out-of-service. Similarly, when ARC is enabled on a BMM, alarm reporting is inhibited for all
the corresponding contained and supported managed objects. The following figures show the hierarchy of
managed objects for the different chassis and configurations:
■ Figure 3-1: Managed Objects and Hierarchy (DTN-X) on page 3-5 shows the hierarchy of
managed objects on a DTN-X.
■ Figure 3-2: Managed Objects and Hierarchy (DTN-X with ODU Multiplexing) on page 3-6 shows
the hierarchy of managed objects on a DTN-X when using ODUk switching.
■ Figure 3-3: Managed Objects and Hierarchy (DTN-X with PXM) on page 3-7 shows the hierarchy
of managed objects on a DTN-X with PXM.
■ Figure 3-4: Managed Objects and Hierarchy (DTN-X with OFx) on page 3-8 shows the hierarchy
of managed objects on a DTN-X with AOFX/AOFM/SOFM/SOFX.
■ Figure 3-5: Managed Objects and Hierarchy (DTN-X with 100G VCAT) on page 3-9 shows the
hierarchy of managed objects on a DTN-X when using 100G virtual concatenation (VCAT).
■ Figure 3-6: Managed Objects and Hierarchy (MTC-9/MTC-6) on page 3-10 shows the hierarchy of
managed objects on an MTC-9/MTC-6 chassis with FSM, FMM-F250, FRM-9D, IAM, IRM.
■ Figure 3-7: Managed Objects and Hierarchy (MTC-9/MTC-6 with FRM-4D) on page 3-11 shows
the hierarchy of managed objects on an MTC--9/MTC-6 chassis with FRM-4D.
■ Figure 3-8: Managed Objects and Hierarchy (MTC-9/MTC-6 with OPSM) on page 3-12 shows the
hierarchy of managed objects on an MTC--9/MTC-6 chassis with OPSM.
■ Figure 3-9: Managed Objects and Hierarchy (DTC/MTC with Line Modules) on page 3-13 shows
the hierarchy of managed objects on a DTC/MTC with line modules.
■ Figure 3-10: Managed Objects and Hierarchy (DTC/MTC with LM-80s) on page 3-14 shows the
hierarchy of managed objects on a DTC/MTC with LM-80s. (Note that line modules and LM-80s
can be combined in the same chassis, but for simplicity Figure 3-9: Managed Objects and
Hierarchy (DTC/MTC with Line Modules) on page 3-13 shows only line modules and Figure 3-10:
Managed Objects and Hierarchy (DTC/MTC with LM-80s) on page 3-14 shows only LM-80s.)


■ Figure 3-11: Managed Objects and Hierarchy (Base/Expansion BMM2 on DTC/MTC) on page 3-
15 shows the managed objects on the BMM2 expansion/base modules.
■ Figure 3-12: Managed Objects and Hierarchy (OTC) on page 3-15 shows the hierarchy of
managed objects on an OTC.
■ Figure 3-13: Managed Objects and Hierarchy (FBM) on page 3-16 shows the hierarchy of managed objects on an FBM.
■ Figure 3-14: Managed Objects and Hierarchy (XT-500S/XT-500F) on page 3-17 shows the
hierarchy of managed objects on an XT-500S/XT-500F.
■ Figure 3-15: Managed Objects and Hierarchy (XT(S)-3300) on page 3-18 shows the hierarchy of
managed objects on an XT(S)-3300.

Figure 3-1 Managed Objects and Hierarchy (DTN-X)


Figure 3-2 Managed Objects and Hierarchy (DTN-X with ODU Multiplexing)


Figure 3-3 Managed Objects and Hierarchy (DTN-X with PXM)


Figure 3-4 Managed Objects and Hierarchy (DTN-X with OFx)


Figure 3-5 Managed Objects and Hierarchy (DTN-X with 100G VCAT)


Figure 3-6 Managed Objects and Hierarchy (MTC-9/MTC-6)


Figure 3-7 Managed Objects and Hierarchy (MTC-9/MTC-6 with FRM-4D)


Figure 3-8 Managed Objects and Hierarchy (MTC-9/MTC-6 with OPSM)


Figure 3-9 Managed Objects and Hierarchy (DTC/MTC with Line Modules)


Figure 3-10 Managed Objects and Hierarchy (DTC/MTC with LM-80s)


Figure 3-11 Managed Objects and Hierarchy (Base/Expansion BMM2 on DTC/MTC)

Figure 3-12 Managed Objects and Hierarchy (OTC)


Figure 3-13 Managed Objects and Hierarchy (FBM)


Figure 3-14 Managed Objects and Hierarchy (XT-500S/XT-500F)


Figure 3-15 Managed Objects and Hierarchy (XT(S)-3300)

System Discovery and Inventory


IQ NOS automatically discovers the system resources and maintains an inventory that is retrievable by
the management applications. IQ NOS discovers the following automatically:
■ Multi-chassis configuration:
□ Main Chassis
□ Expansion Chassis
■ All circuit packs in Infinera network elements (see Circuit Pack Discovery on page 3-19)
■ All termination points, including physical ports and logical termination points in a network element


■ The Intelligent Transport Network topology, including Physical Topology and Service Provisioning
topology (see Network Topology on page 8-3)
■ The optical data plane connectivity (see Optical Data Plane Auto-discovery on page 3-20)
IQ NOS maintains the inventory of all the automatically discovered resources, as described above, and
also the user provisioned services which includes:
■ Cross-connects provisioned using Manual Cross-connect Provisioning mode (including
Channelized cross-connects and Associations)
■ Circuits provisioned using Dynamically Signaled SNC Provisioning mode (including Channelized
SNCs and sub-SNCs)
■ Cross-connects that are automatically created while creating circuits utilizing Dynamically Signaled
SNC Provisioning mode (including Channelized cross-connects and Associations)
■ Protection groups that have been provisioned
Refer to DTN Service Provisioning on page 4-2 and DTN-X Service Provisioning on page 4-33 for
more details.

Multi-chassis Discovery
IQ NOS provides the ability to automatically detect multiple chassis in Infinera nodes, along with detailed
information for each chassis, including:
■ Label name
■ CLEI code
■ Product ordering name (PON)
■ Manufacturing part number
■ Serial number
■ Hardware version
■ Manufacturing date
■ Internal temperature
■ Rack name
■ Provisioned serial number
■ Location in rack
■ Alarm Cutoff (ACO) state (enabled or disabled)
■ Chassis-level alarm reporting (enabled or disabled)

Circuit Pack Discovery


IQ NOS provides the ability to automatically detect circuit packs in the network element. IQ NOS also
discovers the detailed manufacturing information including:


■ Hardware version number


■ Circuit pack type
■ Serial ID
■ CLEI code
■ Product ordering name (PON)
■ Manufacturing part number
■ Manufacturing date
■ Software version
■ Last reboot date, time, and reason
The manufacturing information and firmware version are maintained in the inventory, and are retrievable
by the management applications.
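For illustration, the discovered attributes listed above can be pictured as a simple per-circuit-pack inventory record, as in the sketch below (field names are chosen for readability and do not reflect the exact IQ NOS managed object attribute names).

# Illustrative inventory record for the auto-discovered circuit pack
# attributes listed above. Field names are for illustration only.

from dataclasses import dataclass

@dataclass
class CircuitPackInventory:
    hardware_version: str
    circuit_pack_type: str
    serial_id: str
    clei_code: str
    product_ordering_name: str
    manufacturing_part_number: str
    manufacturing_date: str
    software_version: str
    last_reboot_info: str    # date, time, and reason for the last reboot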

Optical Data Plane Auto-discovery


Infinera network elements support Auto-discovery of optical connections for various types of connections:
■ For optical connections for OCG connections (BMMs, OCG-based line modules, CMMs, GAMs,
etc.), see OCG-based Auto-discovery on page 3-21
■ For optical connections for SCG connections (FlexChannel line modules, FSM, FRM, etc.), see
FlexILS Auto-discovery on page 3-24
■ For optical connections between OFx-100, FMM-C-5, and BMM, see FMM-C-5/BMM (OCG) Auto-
discovery on page 3-31

Note: For configurations with an FRM-9D/FRM-20X or FBM with an Infinite Capacity Engine 4 module
(i.e. XT(S)-3300/OFx-1200/XT(S)-3600/MXP-400) in Open Wave configuration, auto-discovery is not
supported between the Open Wave ICE 4 line module or network element and the FRM or FBM.
However, auto-discovery and power control loops are supported between FRMs and FBMs.

Note: For configurations with an FBM or FRM-9D/FRM-20X with an Infinite Capacity Engine 4 module (i.e. XT(S)-3300/OFx-1200/XT(S)-3600/MXP-400) in SCG Line System Mode, Release 18.2 supports native auto-discovery bypass between the ICE 4 module and the FRM/FBM, but supports a power control loop between them when the following configurations are made:
■ The Infinite Capacity Engine 4 module (i.e. XT(S)-3300/OFx-1200/XT(S)-3600/MXP-400) is in SCG Line System Mode
■ FBM is in Active Line Operating Mode
■ FRM-9D/FRM-20X/FBM SCG Interface type is set as Infinera Wave
For more information, see Bypass Native Auto-discovery on page 3-31.


OCG-based Auto-discovery

Note: Unless specifically noted otherwise, all references to “line module” will refer interchangeably to
either the DLM, XLM, ADLM, AXLM, SLM, AXLM-80, ADLM-80 and/or SLM-80 (DTC/MTC only) and
AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, and/or SOLX2 (XTC only). The term “LM-80”
is used to specify the LM-80 sub-set of line modules and refers interchangeably to the AXLM-80,
ADLM-80 and/or SLM-80 (DTC/MTC only). Note that the term “line module” does not refer to TEMs,
as they do not have line-side capabilities and are used for tributary extension.

Note: Unless specifically noted otherwise, all references to the BMM will refer to either the BMM,
BMM2C, BMM2, BMM2P, BMM1H, and/or BMM2H interchangeably.

Note: When a module is in the Auto-discovery process, there will be a delay in the retrieval of the
module’s performance monitoring data. The delay may be up to 15 seconds.

Note: Auto-discovery is not supported for line modules in Open Wave configuration, neither between Open Wave line modules, nor between an Open Wave line module and a BMM (see Open Wave Line Module Configuration - Network Applications in the DTN and DTN-X System Description Guide).

Note: Auto-discovery is not supported for XT(S)-3300 line modules in Open Wave configuration, nor between an FBM and FRM-20X (see Open Wave Line Module Configuration - Network Applications in the DTN and DTN-X System Description Guide).

The DTN and DTN-X support Auto-discovery for the following types of optical connections:
■ BMM-to-BMM connection (Optical Express): A front-accessible optical patch cord is connected
from the Optical Carrier Group (OCG) port on one BMM to the OCG port on the other BMM. Optical
Express allows optical pass-through of one or more OCGs through a node. Auto-discovery is
supported for connections between BMM2s, between BMM2Ps, and between BMM2Cs, but it is not
supported for Optical Express connections between Gen 1 BMMs (manual configuration is required
for Optical Express between Gen 1 BMMs). Auto-discovery is also not supported for Optical
Express connection in which one or both BMMs is set to SLTE mode. For more information on
Optical Express connections, see Optical Express on page 4-27.

Note: For Optical Express connections between BMMs, the BMM OCG can be locked without
affecting traffic. However, for Optical Express connections between BMM2s, between BMM2Ps, and
between BMM2Cs, if the BMM2/BMM2P/BMM2C OCG is locked, Auto-discovery is re-triggered, thus
impacting traffic. Make sure that BMM2/BMM2P/BMM2C OCGs are unlocked for Auto-discovery to
succeed, thereby restoring traffic.

■ Line module-to-BMM connection: A front-accessible optical patch cord is connected from the OCG
port on a BMM to the OCG port on a line module to carry the 100Gbps OCG (or 500Gbps OCG for
DTN-X) signal between the BMM and line module.


□ For connections between BMM2C and AOLM/AOLM2/AOLX/AOLX2/SOLM/SOLM2/SOLX/SOLX2, Auto-discovery is supported only when the AOLM/AOLX/SOLM/SOLX is configured for Gen1 operational mode.
□ BMM2s that are set to Native mode support Auto-discovery for connections with AOLM,
AOLM2, AOLX, AOLX2, DLM, XLM, ADLM, or AXLM. For BMM2-8-CH3-MS, BMM2-8-
CEH3, BMM2-8-CXH2-MS, and BMM2C-16-CH in Native mode, Auto-discovery is also
supported for connections with SOLM, SOLM2, SOLX, and SOLX2.
□ BMM2s that are set to SLTE mode support Auto-discovery only with SLM, SOLM, SOLM2, SOLX, and SOLX2 (Auto-discovery will not complete if a BMM2 is set to SLTE mode and is connected to a DLM, XLM, ADLM, AXLM, AOLM, AOLM2, AOLX, or AOLX2).
□ Auto-discovery is not supported on BMM OCGs set for Open Automated Add/Drop for Layer
0 OPN. The user must manually provision the AID of the associated line module OCG. See
Layer 0 Optical Private Network (OPN) for more information.
■ LM-80-to-CMM connection: A front-accessible optical patch cord is connected from each optical
channel (OCH) port on a LM-80 to one of the OCH ports on a CMM to carry the 40Gbps optical
channel signal between the CMM and LM-80. For Auto-discovery to complete between LM OCH
and CMM OCH, the user has to configure the correct OCG number and the channel number on the
LM-80 based on what OCG number and channel number it is connected to on the CMM OCH.
■ CMM-to-BMM connection: A front-accessible optical patch cord is connected from the OCG port on
a CMM to the OCG port on a BMM to carry the (up to) 400Gbps OCG signal between the CMM and
BMM. Auto-discovery between a CMM and BMM requires that the CMM and the BMM have the
same operational mode setting (either Gen 1 or Gen 2). When configured for Gen 1 mode, a 15dB
pad is required from the CMM OCG to the BMM OCG.

Note: Auto-discovery for CMMs has two stages:


■ The first stage between the CMM OCH port and its associated LM-80 OCH port
■ The second stage between the CMM OCG and its associated BMM OCG.
Note: The second stage starts only if the first stage is completed. If all of the LM-80 optical channels that are connected to the CMM are set to the locked state during CMM-to-BMM Auto-discovery, the CMM-to-BMM Auto-discovery will be unable to complete and an Auto-discovery timeout alarm will be reported on the CMM OCG.

■ GAM connections: For configurations that use interconnections between DLMs/XLMs/ADLMs/


AXLMs and BMM2/BMM2P, an intermediary GAM is required between the BMM2/BMM2P and the
line module to amplify the signal from the line module to the BMM2/BMM2P (in the direction from
the BMM2/BMM2P back to the line module, the GAM provides no amplification, and is instead a
fiber pass-through). The DTN and DTN-X support optical connections between the GAM and the
line module, and between the GAM and the BMM2/BMM2P. (See Gain Adapter Module (GAM) for
more information on GAM types and inter-operation with line modules and BMM2s/BMM2Ps.)
□ For connections between a line module, GAM, and BMM2/BMM2P, the Auto-discovery is a
two-step process: First, the line module and GAM perform Auto-discovery, and then the line
module and the BMM2/BMM2P perform Auto-discovery (via the GAM).


□ For GAM connections, Auto-discovery is supported only in the multiplex direction (from the
line module to the GAM to the BMM2/BMM2P).
■ Connections between a TOM on a DTN and a TOM on an ATN. The DTN supports Auto-discovery
for optical connections between a TOM on a TAM-2-10GT on a DTN and a TOM on a SIM-GT or
SIM-GMT on an ATN.
Infinera network elements support the Auto-discovery of line module-to-BMM connections, for both single
and multi-chassis configurations. Auto-discovery eliminates misconnections between the modules,
including:
■ Connecting a line module to a wrong OCG port on the BMM. For example, connecting a line
module with an OCG3 output to an OCG5 input port on a BMM.
■ Connecting a line module to a BMM in conflict with the pre-provisioned association of the BMM and
line module. For example, pre-provisioning an OCG3 port on a BMM to be associated with the line
module in slot 4, but then incorrectly connecting the fiber to the line module in slot 3 (though it may
support OCG3).
On detecting a misconnection, alarms are reported so that the user can correct the connectivity. Also, the
line module is prohibited from transmitting optical signals towards the BMM to prevent the misconnection
from interfering with the other operational line modules. In addition, the operational state of the line
module OCG is changed to disabled.
The optical data plane Auto-discovery involves control message exchanges between the active controller
module in the Main Chassis and the BMMs and line modules, in addition to the control message
exchange between the line module and BMM over the optical data path. The optical data plane Auto-
discovery requires the control plane to be available. Following are some limitations imposed by the
protocol, which prevent proper detection of a line module-to-BMM misconnection:
■ When Auto-discovery is in progress, there is a 5-second window during which a BMM will not discover any re-cabling performed by the user. Therefore, do not perform re-cabling while Auto-discovery is in progress. Below is a list of events during which the BMM and line module automatically initiate the optical data plane Auto-discovery.
■ If users inadvertently connect an incorrect high power signal to the OCG port on a BMM (for
example connecting the line port output to the OCG input port on a BMM), it could impact traffic on
the other operational OCG ports on the BMM.
■ The Auto-discovery procedure requires the connectivity between the BMM and line module be bi-
directional. In other words, the transmit and receive pair of a given OCG port on a BMM must be
connected to the transmit and receive pair of the same line port of the line module. If this is not
done properly, it will impact the active traffic.
■ The BMM may not detect a misconnection if the fiber is re-cabled under the following conditions,
during which the control messages pertaining to the Auto-discovery could be lost. Refrain from re-
cabling during these conditions:
□ BMM is rebooting
□ BMM is shut down
□ BMM is unplugged


□ Line module is shut down


□ No active controller module in the Main Chassis
□ No active controller module in the Expansion Chassis if the BMM and line module are not in
the same chassis
□ Nodal control cable is unplugged (inter-chassis connectivity) if BMM and line module are not
in the same chassis
In general, perform re-cabling only when the controller module, BMM, and line module are completely
operational. This will ensure that the optical data plane Auto-discovery can positively identify all
misconnections.
The operational state of the line module is enabled if Auto-discovery is successful; otherwise, it is disabled.
Auto-discovery Soak Timer
BMMs support an Auto-discovery Soak Timer setting to delay an Auto-discovery restart in the case of
small line side (OTS) fiber glitches or OTS fiber mishandling. Auto-discovery is normally triggered
immediately in the case of OLOS, but the soak timer configures the system to pause for the specified
number of seconds before initiating Auto-discovery.
Especially for Optical Express connections, which take comparatively longer than add/drop connections to perform Auto-discovery, configuring the soak timer prevents a small fiber glitch from re-triggering Auto-discovery and causing the longer data outage that an immediate restart would create. During the soak time, the node defers BMM OCG OLOS alarm reporting and Automated Gain Control continues to perform null sequencing. The Auto-discovery Soak Timer can be set from 0 to 60 seconds; the default setting is 0 seconds (disabled).
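Behaviorally, the soak timer acts as a deferral window after OLOS is detected, as in the simplified sketch below (function names and the polling interval are illustrative, not IQ NOS internals).

# Simplified illustration of the Auto-discovery Soak Timer: on OLOS, wait up
# to soak_seconds (0-60, default 0) and restart Auto-discovery only if the
# loss of signal persists for the whole soak window.

import time

def on_olos_detected(signal_present, restart_auto_discovery, soak_seconds=0):
    """signal_present: callable returning True once light has returned."""
    deadline = time.monotonic() + soak_seconds
    while time.monotonic() < deadline:
        if signal_present():
            return                      # glitch cleared within the soak window
        time.sleep(0.1)
    restart_auto_discovery()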
The following connections support the Auto-discovery Soak Timer:
■ Add/drop connections for all BMM types (BMM2s, BMM2Ps, BMM2Cs, and Gen 1 BMMs)
■ Optical Express connections for BMM2s, BMM2Ps, and BMM2Cs

FlexILS Auto-discovery

Note: The following terminology is used in this document for FlexChannel line modules:
■ "OFx" is used to refer to all FlexChannel line modules collectively (AOFM-500, SOFX-500B,
AOFM-100, etc.)
■ "OFx-500" is used to refer to FlexChannel 500G modules (AOFM-500, AOFX-500B,
SOFX-500, etc.)
■ "OFx-100" is used to refer to FlexChannel 100G modules (AOFM-100 and AOFX-100)
■ "OFx-1200" is used to refer to FlexChannel 1200G modules (AOFM-1200, AOFX-1200,
SOFM-1200, SOFX-1200, etc)
■ "SOFx" is used to refer only to Submarine FlexChannel modules (SOFX-500, SOFX-500B,
SOFX-1200, etc)
■ "AOFx" is used to refer only to non-Submarine FlexChannel modules (AOFM-500, AOFX-100,
etc.)


Where there are further differences in behavior/support, the exact module type(s) will be mentioned
specifically.

This section describes the optical connections for which Auto-discovery is supported for FlexILS nodes.
Note the following for Auto-discovery for connections on FlexILS nodes:
■ Unlike Auto-discovery for connections between OCG-based modules, FlexILS modules initiate
Auto-discovery only after an optical service (optical SNC or optical cross-connect) is provisioned
over the connection.

Note: To enable faster traffic recovery, starting from Release 10.1 Auto-discovery does not trigger if
an optical cross-connect or optical SNC is deleted and subsequently re-created, as long as the
service is re-created within 60 minutes of when it was deleted. (In the previous release, Auto-
discovery was triggered immediately upon re-creation.)

■ For configurations with FRM-9D and FMP-C, Auto-discovery is not supported for connections
between the FRM-9D and FlexChannel line modules (OFx-500) when connecting via an FMP-C. In
this case, the FlexChannel line module’s SCG must be configured for passive multiplexing mode.
■ For configurations with an FRM-9D/FRM-20X or FBM with an Infinite Capacity Engine 4 module
(i.e. XT(S)-3300/OFx-1200/XT(S)-3600) in Open Wave configuration, auto-discovery is not
supported between the Open Wave ICE 4 line module or network element and the FRM or FBM.
However, auto-discovery and power control loops are supported between FRMs and FBMs.
■ For configurations with an FBM or FRM-9D/FRM-20X with an Infinite Capacity Engine 4 module
(i.e. XT(S)-3300/OFx-1200/XT(S)-3600) in SCG Line System Mode, Release 18.2 supports native
auto-discovery bypass between the ICE 4 module and FRM/FBM but supports power control loop
between them when the following configurations are made:
□ The Infinite Capacity Engine 4 module (i.e. XT(S)-3300/OFx-1200/XT(S)-3600) is in SCG
Line System Mode
□ FBM is in Active Line Operating Mode
□ FRM-9D/FRM-20X/FBM SCG Interface type is set as Infinera Wave
For more information, see Bypass Native Auto-discovery on page 3-31.
■ For configurations with FMM-C-12 to FlexROADM Broadcast Module (FBM) to FRM-20X, i.e., the FMM-C-12 Line Operating Mode is set to Passive Modeling Mode, auto-discovery is not supported between the FMM-C-12 and the FRM-20X.
■ For configurations with FMM-C-12 and FRM-20X through an FSP-C, i.e. FMM-C-12 Line Operating
Mode is set to Active, auto-discovery between FMM-C-12 (line OUT port) and FRM-20X (System
IN port) is supported.
■ Auto-discovery with FlexChannel line modules is unidirectional: from the OFx-500 to the FSM/FRM,
for example. Auto-discovery is not supported in the opposite direction (from the FSM/FRM towards
the OFx-500).
The figure below shows the Auto-discovery supported between OFx-500, FSM/FSE, and FRM-9D:


■ From the OFx-500 to the FSM (tributary IN) port


■ From the FSM (line OUT) port to the FRM-9D (system IN) port
■ From the FRM-9D (system OUT) port to the FSM (line IN) port
■ From the FRM-9D (system OUT) port to the FSE (line IN) port
■ From the FSE (line OUT) port to the FRM-9D (system IN) port

Figure 3-16 Auto-discovery for OFx-500, FSM/FSE, and FRM-9D

The figure below shows the Auto-discovery supported between OFx-500 and FRM-9D:
■ From the OFx-500 to the FRM-9D (system IN) port

Figure 3-17 Auto-discovery for OFx-500 to FRM-9D (via FSP-C)

The figure below shows the Auto-discovery supported between OFx-500 and FRM-4D:
■ From the OFx-500 to the FRM-4D (system IN) port


Figure 3-18 Auto-discovery for OFx-500 and FRM-4D

Auto-discovery is supported between a pair of FRM-9Ds/FRM-20Xs in express connections. Auto-discovery is also supported between an FRM-9D and an FRM-20X in an express configuration. Auto-discovery is supported as follows:
■ From the FRM-9D (system OUT) port to the FRM-9D (system IN) port (in an FRM-9D to FRM-9D
express)
■ From the FRM-20X (system OUT) port to the FRM-20X (system IN) port (in an FRM-20X to
FRM-20X express)
■ From the FRM-9D/FRM-20X (system OUT) port to the FRM-20X/FRM-9D (system IN) port (in an
FRM-9D to FRM-20X or FRM-20X to FRM-9D express)

Figure 3-19 Auto-discovery for FRM-9D to FRM-9D (via FSP-E): Sample express between two FRM-9Ds

The figure below shows the Auto-discovery supported between FRM-4Ds in express connections:
■ From the system OUT port on the FRM-4D labeled “A” to the system IN port on the FRM-4D labeled “B”.
■ From the system OUT port on the FRM-4D labeled “B” to the system IN port on the FRM-4D labeled “A”.


Note: Express connections between FRM-4Ds require two fibers, connecting each system IN port to the system OUT port on the other FRM-4D. Auto-discovery cannot complete for express connections between FRM-4Ds unless both fibers are connected.

Note: For any pair of FRM-4Ds, only one express connection is supported between the two
FRM-4Ds. For example, if a pair of FRM-4Ds has an express connection between their System 6
ports, the same pair of FRM-4Ds cannot support an express connection between their System 5
ports.

Figure 3-20 Auto-discovery for FRM-4D to FRM-4D

The figure below shows the Auto-discovery supported between OFx-500, FMM-F250, and FRM-4D:
■ From the OFx-500 to the FMM-F250 (add/drop IN) port
■ From the FMM-F250 (line OUT) port to the FRM-4D (system IN) port

Figure 3-21 Auto-discovery for OFx-500, FMM-F250, and FRM-4D

The figure below shows the Auto-discovery supported between OFx-500, FMM-F250, FSP-C, and
FRM-9D:
■ From the OFx-500 to the FMM-F250 (add/drop IN) port
■ From the FMM-F250 (line OUT) port to the FRM-9D (system IN) port


Figure 3-22 Auto-discovery for OFx-500, FMM-F250, FSP-C, and FRM-9D

The figure below shows the Auto-discovery supported between OFx-100, FMM-C-5, and FRM-4D:
■ From the OFx-100 to the FMM-C-5 (add/drop IN) port
■ From the FMM-C-5 (line OUT) port to the FRM-4D (system IN) port

Figure 3-23 Auto-discovery for OFx-100, FMM-C-5, and FRM-4D

The figure below shows the Auto-discovery supported between OFx-100, FMM-C-5, BPP, and FRM-4D:
■ From the OFx-100 to the FMM-C-5 (add/drop IN) port
■ From the FMM-C-5 (line OUT) port to the BPP (system IN) port
■ From the BPP (line OUT) port to the FRM-4D (system IN) port

Note: Because the BPP is a passive device, the BPP connections between the FMM-C-5 and the
FRM-4D must be manually provisioned:
■ If the BPP connections are not manually provisioned, Auto-discovery will complete between the
FMM-C-5 and the FRM-4D but the passive BPP will not be automatically discovered.
■ If the BPP connections are manually configured but the actual fiber is connected directly
between the FMM-C-5 and the FRM-4D, Auto-discovery has no way to detect that the fiber isn’t
connected through the BPP and Auto-discovery between the FMM-C-5 and the FRM-4D will
complete with no errors.
■ If the connection between the FMM-C-5 and the BPP System port is provisioned (the user has
configured the Provisioned Neighbor TP on the BPP to the AID of the FMM-C-5 Line AID), but
the connection between the BPP Line port and the FRM-4D is not provisioned, a
misconnection alarm will be reported on the FRM-4D System port and Auto-discovery will not
complete.
■ If the connection between the BPP Line port and the FRM-4D is provisioned (the user has
configured the Passive Provisioned Neighbor TP on the FRM-4D to the AID of the BPP Line
AID), but the connection between the FMM-C-5 and BPP System Port is not provisioned, a
misconnection alarm will be reported on the FRM-4D System port and Auto-discovery will not
complete.
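
The provisioning rules in this note can be summarized programmatically. The following Python sketch is illustrative only (the function name and returned strings are hypothetical, not an IQ NOS interface); it simply restates the outcomes described above for the two BPP neighbor provisioning attributes.

    # Illustrative sketch only; not an IQ NOS API. It restates the BPP
    # provisioning outcomes described in the note above.
    def bpp_discovery_outcome(fmm_to_bpp_provisioned, bpp_to_frm_provisioned):
        """Outcome of Auto-discovery for an FMM-C-5 -> BPP -> FRM-4D chain."""
        if fmm_to_bpp_provisioned and bpp_to_frm_provisioned:
            return "Auto-discovery completes through the BPP"
        if fmm_to_bpp_provisioned or bpp_to_frm_provisioned:
            # Only one side of the passive BPP is provisioned.
            return ("misconnection alarm on the FRM-4D System port; "
                    "Auto-discovery does not complete")
        # Neither side provisioned: discovery completes FMM-C-5 to FRM-4D,
        # but the passive BPP itself is not discovered.
        return "Auto-discovery completes, BPP not discovered"

    print(bpp_discovery_outcome(True, False))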

Figure 3-24 Auto-discovery for OFx-100, FMM-C-5, BPP, and FRM-4D

The figure below shows the Auto-discovery supported between OFx-100, FMM-C-5, FSP-C, and
FRM-9D/FRM-20X:
■ From the OFx-100 to the FMM-C-5 (add/drop IN) port
■ From the FMM-C-5 (line OUT) port to the FRM-9D/FRM-20X (system IN) port

Note: In configurations with OFx-100 connected to an FRM-20X through an FMM-C-5 and


FSP-C, the FRM-20X can be housed in an XTC-2E chassis.

Figure 3-25 Auto-discovery for OFx-100, FMM-C-5, FSP-C, and FRM-9D/FRM-20X (Example with
FRM-9D)

The figure below shows the Auto-discovery supported between the FMM-C-12 and the FRM-9D:
■ From the FMM-C-12 (line OUT) port to the FRM-9D (system IN) port


Figure 3-26 Auto-discovery for FMM-C-12 and FRM-9D

Bypass Native Auto-discovery


Starting with Release 18.2, IQ NOS supports enabling power control loops between ICE 4 modules
and FRM-9D/FRM-20X/FBM when auto-discovery between the modules is not available and the
following configurations are present:
■ The Infinite Capacity Engine 4 module (i.e. XT(S)-3300/OFx-1200/XT(S)-3600) is in SCG Line
System Mode
■ FBM is in Active Line Operating Mode
■ FRM-9D/FRM-20X/FBM SCG Interface type is set as Infinera Wave
Manual provisioning of neighbor connectivity information between FRM-9D, FRM-20X or FBM system
ports and ICE 4 line modules enables power control loops to work in the absence of auto-discovery.
■ The Provision Neighbor Termination Point attribute on the FRM-9D/FRM-20X or FBM SCGPTP is
updated with the Line SCGPTP of ICE 4 line modules.
■ Auto-discovery is enabled or disabled on the FRM-9D/FRM-20X or FBM SCGPTP using the Auto
Discover Neighbor attribute. Auto Discover Neighbor is enabled by default; to bypass native
auto-discovery, set Auto Discover Neighbor to the disabled state.

Note: Auto Discover Neighbor is only applicable for FRM-9D/FRM-20X and FBMs with SCG
Interface type as Infinera Wave and is not supported for other SCG Interface types such as
Manual Mode 2, SLTE Manual etc.

■ Power Control Loop is enabled automatically when Auto Discover Neighbor is set to disabled state.
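
As an illustration of the preconditions listed above, the following Python sketch (a hypothetical helper, not an IQ NOS API; the parameter names are invented for this example) checks whether native auto-discovery can be bypassed so that the power control loop runs on provisioned neighbor information.

    # Hypothetical helper, not an IQ NOS API: summarizes the bypass conditions above.
    def bypass_autodiscovery_ok(ice4_in_scg_line_system_mode,
                                fbm_in_active_mode,
                                scg_interface_type,
                                neighbor_tp_provisioned,
                                auto_discover_neighbor_enabled):
        """True if the power control loop can run with native auto-discovery bypassed."""
        return (ice4_in_scg_line_system_mode
                and fbm_in_active_mode                 # applies to FBM configurations
                and scg_interface_type == "Infinera Wave"
                and neighbor_tp_provisioned            # Provision Neighbor TP set to the ICE 4 line SCGPTP
                and not auto_discover_neighbor_enabled)

    print(bypass_autodiscovery_ok(True, True, "Infinera Wave", True, False))  # True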

FMM-C-5/BMM (OCG) Auto-discovery

Note: "OFx-100" is used to refer to FlexChannel 100G modules (AOFM-100 and AOFX-100).

This section describes the optical connections for which Auto-discovery is supported for configurations
with OFx-100, FMM-C-5, and BMM.

Note: Auto-discovery with FlexChannel line modules is unidirectional: From the OFx-100 to the FMM-
C-5 and from the FMM-C-5 to the BMM. Auto-discovery is not supported in the opposite direction
(there is no Auto-discovery from the FMM-C-5 towards the OFx-100, nor from the BMM towards the
FMM-C-5).


Figure 3-27: Auto-discovery for OFx-100, FMM-C-5, and BMM on page 3-32 shows the Auto-discovery
supported between OFx-100, FMM-C-5, and BMM:
■ From the OFx-100 to the FMM-C-5 (add/drop IN) port
■ From the FMM-C-5 (line OUT) port to the BMM (OCG IN) port

Figure 3-27 Auto-discovery for OFx-100, FMM-C-5, and BMM

Required Number of Effective Channels

Note: Contact Technical Assistance Center (TAC) before configuring target power offset.

The DTN-X and DTN use various modulation formats depending on the line modules on the system:
■ For configurations with DLMs, XLMs, ADLMs, AXLMs, and/or SLMs, a BMM uses OOK (On-Off
Keying) modulation.
■ For configurations with AOLMs, AOLM2s, AOLXs, AOLX2s, SOLMs, SOLM2s, SOLXs, SOLX2s,
and LM-80s that support different modulation formats (see System Data Plane Functions), a BMM
may be transmitting across the line a combination of differently modulated channels (OOK,
PM-QPSK, PM-BPSK, etc.).
If all channels use the same modulation format, the BMM can use the same launch power for all of the
channels when transmitting the channels across the optical transport section (OTS). However, if a BMM is
transmitting channels that use different modulation formats, the co-existence of different modulation
schemes in the same OCG can present a problem: The reach of a phase-modulated signal such as PM-
QPSK can be impaired by neighboring channels that use OOK modulation if all channels have equal
channel power. This problem can be corrected by modifying the launch power for various channels or
OCGs based on link design. For this reason, the user can configure the target power offset on a per-OCG
basis on the BMM and on a per-optical channel (OCH) basis on the LM-80:
■ On the OCG level, the power of the ingress signal to the BMM on that OCG is reduced by the
power as defined by the target power offset value configured on the BMM OCG. Different offset
values (within the supported range) can be provisioned for every OCG within an OTS.

Note: When target power offset on the BMM OCG is changed from a more negative to a less
negative or zero offset (e.g., from -4dB to 0dB), the “OPR-OOR-L” and “Power Adjustment
Incomplete” alarms may be reported for the duration of the time it takes to adjust the power to
incorporate the new offset value.

■ On the OCH level the power of the ingress signal to the CMM from the LM-80 OCH is reduced by
the power as defined by the target power offset value configured on the CMM OCH PTP. Thus, the
power of individual channels within an OCG can also be offset. Different channel offset values
(within the supported range) can be provisioned for every LM-80 OCH within an OCG.
Infinera’s Automated Gain Control (AGC) relies on a minimum power received (and thus a minimum
number of channels) in order to detect a signal and complete Auto-discovery of optical connections as
well as to accurately perform calculations for amplifier gain settings. Therefore, AGC requires:
■ Per OCG: At least 2 channels
■ Per OTS:
□ For ILS1 and ILS2: At least 8 channels
□ For FlexILS: At least 2 channels (4 slices = 50GHz band equivalent power)
■ From an expansion BMM to its associated base BMM: At least 8 channels

Note: Release 18.1 supports Infinera's automated gain control for C-Band traffic only. Automated
gain control for L-Band signals is not supported in this release and will lead to errors in the gain
control calculations.

Note: For LM-80s, if an optical channel (OCH PTP) is in the locked state, that channel does not count
towards the minimum channel count.

Even if the above requirements are satisfied, it is possible that the launch power over the line is reduced
due to target power offset, thus dropping the launch power below what would be expected for two
channels, and this may also prevent Auto-discovery from completing and/or cause erroneous gain
calculations.
With the introduction of target power offset, it becomes important to consider the number of “effective
channels” carried by the OCG and also by the OTS.
For example, an OCG might contain two channels, but if a target power offset has been applied to the
OCG and/or to any of the channels in the OCG, the number of effective channels may be less than the
minimum requirement of two channels. Furthermore, because an LM-80’s channel power can be offset by
both the OCH target power offset and by the containing OCG target power offset, it is important to note
that each channel can support a maximum of -4dB total target power offset. For example, if the BMM
OCG target power offset is set to -3dB, the channels in that OCG can support a maximum CMM OCH
target power offset of -1dB, for a total target power offset of -4dB.
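
A simple validation sketch follows (a hypothetical helper, not an IQ NOS API) showing the -4dB limit on the combined BMM OCG and CMM OCH target power offsets described above.

    # Hypothetical check, not an IQ NOS API: the total per-channel offset must not
    # exceed -4dB (i.e., OCG offset + OCH offset >= -4).
    MAX_TOTAL_OFFSET_DB = -4.0

    def och_offset_allowed(ocg_offset_db, och_offset_db):
        return (ocg_offset_db + och_offset_db) >= MAX_TOTAL_OFFSET_DB

    print(och_offset_allowed(-3.0, -1.0))  # True: total is exactly -4dB
    print(och_offset_allowed(-3.0, -2.0))  # False: -5dB exceeds the supported total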
The table below shows how the total target power offset applied to OCG reduces the number of effective
channels in the OTS. For example, if an OCG has three channels and is configured with a target power
offset of -2dB, the effective channel count will be 1.9, which does not meet the minimum channel
requirement per OCG. The table indicates with an asterisk (*) the target power offset values that will not
meet the minimum channel requirement per OCG.


Table 3-1 Effective Channels as a Result of OCG Target Power Offset


Total Target Power     Number of Actual Channels
Offset Value (dB)      1     2     3     4     5     6     7     8     9     10

 0                    1.0*  2.0   3.0   4.0   5.0   6.0   7.0   8.0   9.0   10.0
-1                    0.8*  1.6*  2.4   3.2   4.0   4.8   5.6   6.4   7.1   7.9
-2                    0.6*  1.3*  1.9*  2.5   3.2   3.8   4.4   5.0   5.7   6.3
-3                    0.5*  1.0*  1.5*  2.0   2.5   3.0   3.5   4.0   4.5   5.0
-4                    0.4*  0.8*  1.2*  1.6*  2.0   2.4   2.8   3.2   3.6   4.0

Similarly, the table below shows how the target offset applied to channels at the LM-80 OCH PTP level
affects the effective channel count in the OCG. For example, if an OCG has three channels and each
channel is configured with a target power offset of -2dB, the effective channel count will be 1.8, which
does not meet the minimum channel requirement per OCG. The table indicates with an asterisk (*) the
target power offset values that will not meet the minimum channel requirement per OCG.

Table 3-2 Effective Channels as a Result of LM-80 OCH PTP Target Power Offset
Target Power           Number of Actual Channels
Offset Value (dB)      1     2     3     4     5     6     7     8     9     10

 0                    1.0*  2.0   3.0   4.0   5.0   6.0   7.0   8.0   9.0   10.0
-1                    0.8*  1.6*  2.4   3.2   4.0   4.8   5.6   6.4   7.1   7.9
-2                    0.6*  1.3*  1.9*  2.5   3.2   3.8   4.4   5.0   5.7   6.3
-3                    0.5*  1.0*  1.5*  2.0   2.5   3.0   3.5   4.0   4.5   5.0

Because of this, it is important to ensure that the total number of active channels included in each OCG,
OTS, and between base/expansion BMMs meets the minimum channel requirement, taking into account
the total target power offset for the OCG and for all LM-80 channels in the OCG, and also taking into
account any optical channels on the LM-80 that are in the locked state (because locked channels do not
count towards the number of effective channels in the OCG):
■ The effective channel count in an OTS is the sum of the channel counts from each OCG.
■ The effective channel count in a CMM OCG is the sum of the channel counts from each
LM-80/CMM OCH.
For example, if there are five LM-80 OCH channels in a CMM OCG:
■ LM-80 OCH Channel 1 (configured in the corresponding CMM OCH) is assigned a target offset of
-3dB. The effective channel count for this channel is 0.5.
■ LM-80 OCH Channel 2 (configured in the corresponding CMM OCH) is assigned a target offset of
0dB. The effective channel count for this channel is 1.


■ LM-80 OCH Channel 3 (configured in the corresponding CMM OCH) is assigned a target offset of
-3dB. The effective channel count for this channel is 0.5.
■ LM-80 OCH Channel 4 (configured in the corresponding CMM OCH) is assigned a target offset of
0dB. The effective channel count for this channel is 1.
■ LM-80 OCH Channel 5 (configured in the corresponding CMM OCH) is assigned a target offset of
0dB, but the LM-80 OCH is in the locked state. The effective channel count for this channel is 0.
■ Total effective channel count in the CMM/BMM OCG will be 0.5 + 1 + 0.5 + 1 + 0 = 3.
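
The effective channel arithmetic above can be reproduced with a small Python sketch. It assumes, consistent with Table 3-1 and Table 3-2, that a channel offset by X dB contributes 10^(X/10) effective channels and that locked channels contribute nothing; the helper itself is hypothetical, not an IQ NOS function.

    # Hypothetical helper reproducing the effective-channel arithmetic above.
    # Assumes each unlocked channel contributes 10**(offset_dB/10) effective
    # channels (e.g., -3dB -> 0.5, -1dB -> 0.8), matching the tables.
    def effective_channel_count(channel_offsets_db, locked=()):
        return sum(10 ** (offset / 10.0)
                   for i, offset in enumerate(channel_offsets_db)
                   if i not in locked)

    # The five-channel example above: offsets -3, 0, -3, 0, 0 dB; channel 5 locked.
    count = effective_channel_count([-3, 0, -3, 0, 0], locked={4})
    print(round(count, 1))   # 3.0 effective channels
    print(count >= 2)        # True: meets the per-OCG minimum of 2 channels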

Equipment Configuration
IQ NOS supports two modes of equipment configuration as described in the following sections:
■ Equipment Auto-configuration on page 3-35
■ Equipment Pre-configuration on page 3-35
In both cases, the termination points are automatically created after the circuit pack is configured.

Equipment Auto-configuration
As described in System Discovery and Inventory on page 3-18, IQ NOS automatically discovers the
equipment installed in the network element, enabling users to bring up circuit packs without manual
configuration. Auto-configuration is performed when a circuit pack is installed in a slot that is not
already configured, whether pre-configured (see below) or auto-configured. IQ NOS discovers the
installed circuit pack and also creates and configures the corresponding circuit pack managed object
using default configuration parameters.
unlocked so the circuit pack can start operation without manual configuration. However, users can modify
this default state through management applications.

Note: In case of an XT-500 chassis in breakout mode, the TOM managed object is created with a
default locked administrative state. The port should then be manually unlocked to enable breakout
mode.

Once a slot is populated and the circuit pack auto-configuration completes, the slot is configured and any
attempt to replace the circuit pack with a different circuit pack type will raise an alarm. To enable auto-
configuration of a different circuit pack in the same slot, the circuit pack configuration for the slot must first
be deleted through management applications.
In the case of a multi-chassis system, the Main chassis must be auto-configured; you may not manually
create it through a management interface. In contrast, Expansion Chassis may not be auto-configured;
you must manually create them through the management interface.

Equipment Pre-configuration
IQ NOS supports circuit pack pre-configuration where users can configure the slots to house a specific
circuit pack before physically installing it in the chassis. Such slots are displayed as pre-configured but
unpopulated through the management applications. For multi-chassis systems, only the Expansion
Chassis may be pre-configured.


When the circuit pack is installed in a pre-configured slot, the circuit pack becomes operational using pre-
configured data.
Once a slot is pre-configured for a circuit pack type, insertion of a different circuit pack type causes the
network element to generate an equipment mismatch alarm.
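
The slot behavior described in the two sections above can be summarized as follows. This Python sketch is illustrative only (the function and the returned strings are hypothetical, not an IQ NOS interface).

    from typing import Optional

    # Hypothetical sketch of the slot behavior described above.
    def on_circuit_pack_inserted(configured_type: Optional[str], inserted_type: str) -> str:
        if configured_type is None:
            # Unconfigured slot: auto-configure with defaults, admin state unlocked.
            return f"auto-configured {inserted_type} (unlocked)"
        if configured_type == inserted_type:
            # Pre-configured (or previously configured) slot with a matching pack type.
            return f"{inserted_type} becomes operational using configured data"
        # Configured/pre-configured slot with a different pack type.
        return "equipment mismatch alarm"

    print(on_circuit_pack_inserted(None, "AOFX-100"))
    print(on_circuit_pack_inserted("FMM-C-5", "FMM-C-12"))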

State Modeling
IQ NOS implements state modeling that meets the various needs of all the supported management
applications and interfaces, and also communicates comprehensive state of the equipment and
termination points. IQ NOS state modeling complies with TMF814 and GR-1093 to meet the TL1
management interface standards.

Note: The TL1 agent software provides the appropriate translation of the node state model to reflect
the GR-1093 based TL1 state model.

IQ NOS defines a standard state model for all the managed objects, which includes equipment as well as
termination points as described in Managed Objects on page 3-3. IQ NOS defines the following states:
■ Administrative State—Represents the user’s operation on an equipment or termination point
(referred to as a managed object). See Administrative State on page 3-36.
■ Operational State—Represents the ability of the managed object to provide service. See
Operational State on page 3-39.
■ Service State—Represents the current state of the managed object, which is derived from the
administrative state and operational state. See Service State on page 3-40.

Administrative State
The administrative state lets the user permit or prohibit the managed object from providing service.
The administrative state of the managed object can be modified only by the user through the
management applications. Also, a change in the administrative state of a managed object results in an
operational state change of the contained and supported managed objects. However, the administrative
states of the contained and supported managed objects are not changed.

Note: IQ NOS supports alarms that indicate when an entity is put in the locked or maintenance
administrative state. The severity of these alarms can also be customized via the ASPS feature (see
Alarm Severity Profile Setting (ASPS) on page 2-9).

IQ NOS defines three administrative states as described in the following sections:


■ Unlocked State on page 3-37
■ Maintenance State on page 3-37
■ Locked State on page 3-37


Unlocked State
The managed object in unlocked state is allowed to provide services. Using management applications,
users can change the state of a managed object to unlocked state from either locked state or
maintenance state. This action results in the following behavior:
■ If there are any outstanding alarms on the managed object they are reported.
■ PM values for the managed object will be collected and reported as valid.
■ The managed object is available to provide services (provided its operational state is enabled).
However, if there is a corresponding redundant managed object that is active, the unlocked
managed object will be placed into standby mode (e.g., MCM).

Maintenance State
The managed object in maintenance state is available for management operations, such as trace messaging,
loopbacks, PRBS testing, GbE CTP test signals, etc. Users can change the state of a managed object to the
maintenance state from either the locked state or the unlocked state. This action results in the following behavior:
■ All outstanding alarms are cleared on the managed object and all its dependent equipment and
facility objects. All new alarm reporting and alarm logging for the managed object and all the
dependent equipment and facility objects are suppressed until the managed object is
administratively unlocked again. (For example, if a line module is in maintenance, all outstanding
alarms are cleared on the line module, TIM, TAM, TOM, etc. and also on the facility objects like line
module OCG, Optical Channel, DTF Path, Client CTP, Tributary PTP, OTUki, ODUki, etc.)
■ PM values will be marked invalid for managed objects in the maintenance state and all of the
dependent facility objects.
■ Users can perform service-impacting maintenance operations, such as loopback tests, PRBS tests,
etc., without having any alarms reported.
■ The operational and service state of all contained and supported managed objects are modified;
the operational state is changed to disabled and the service state is changed to the OOS-MT (out-
of-service-maintenance) state.

Locked State
The managed object is available for service affecting provisioning, such as modifying attributes or
deleting objects. Users can change the administrative state of a managed object to the locked state from
either unlocked state or maintenance state through all management applications except for the TL1
Interface. In the TL1 Interface, users can change the administrative state of a managed object to the
locked state only from the unlocked state. Changing the administrative state to the locked state results in
the following behavior:
■ Depending on the type of managed object, the locked state may or may not provide services to
users:


□ For all traffic-carrying modules (BMM, DLM, AOLX, AXLM-80, TIM, TAM, TOM, OAM, ORM,
RAM, etc.), the managed object does not provide services to users, meaning that any traffic
on the module will be affected when the module is locked.
□ For the expansion BMM2P-8-CEH1, transmit traffic is affected when the module is locked.
For the other expansion BMMs (BMM2-8-CEH3 and BMM2H-4-B3), traffic is not affected
when the module is locked.
□ For CMMs and LM-80s:
Locking the CMM will affect service on both of the CMM’s OCGs.
Locking a single OCG on a CMM will affect the service on the locked OCG, but not on
the CMM’s other OCG.
Locking a single optical channel physical termination point (OCH PTP) on an LM-80
will affect service on the locked OCH PTP, but not on the LM-80’s other OCH PTP.
Locking an OCH PTP on the CMM does not affect service.
□ Locking the SCM card shuts down the Idler lasers. The Idler lasers will turn on only when the
associated SCM is in the unlocked state.
□ For facilities that can be locked, such as optical channel, OCG, OTUki, Tributary DTF, Line
DTF Path, etc., the facility can be put in the locked state without preventing traffic with the
following exceptions:
For BMM2/BMM2P OCGs involved in Optical Express connections, traffic will be
impacted when the facility is put in the locked state.
For OCGs on line modules (AXLMs, SLMs, AOLXs, SOLMs, etc.), the OCG can be put
in the locked state without impacting traffic on the OCG. However, new SNCs cannot
be provisioned using OCGs in the locked state since the associated TE link will be
down. Manual cross-connects can still be provisioned on line module OCGs in the
locked state.
For SNCs, the associated signaled cross-connects will be deleted and traffic will be
impacted when the SNC is put in the locked state.
□ For Tributary PTPs:
The payload/service type can be changed when the Tributary PTP is in the locked
state.
For all payload types, traffic on the Tributary PTP will be impacted when the facility is
put in the locked state.
Furthermore, the behavior of the Tributary PTP in the locked state is determined by
the Tributary Disable Action setting of the Tributary PTP:
If the Tributary Disable Action is set to Disable Laser, the tributary laser will be
shut down (applicable for all payloads).


If the Tributary Disable Action is set to Send AIS, AIS will be sent (applicable for
SONET/SDH only; this setting does not apply to GbE, Clear Channel, Fibre
Channel, etc.).
■ All outstanding alarms are cleared on the managed object and all its dependent equipment and
facility objects. No new alarms are reported on this object, nor for its dependent equipment and
facility objects.
■ PM values will be marked invalid for managed objects in the locked state and all of the dependent
facility objects.
■ The service state of all the contained and supported managed objects are modified; the service
state is changed to OOS (out-of-service) state.
■ The operational state of this managed object is not changed, since the operational state is
determined by the object’s ability to provide service.

Note: Once the OSC is in the in-service, normal (IS-NR) state, it is not affected by the state of the
BMM/OAM/ORM. The OSC will remain IS-NR even if the administrative state of the BMM/OAM/ORM
is set to maintenance or locked.

Operational State
The operational state indicates the operational capability of a managed object to provide its services. It is
determined by the state of the hardware and software, and by the state of the supporting/containing
object; it is not configurable by the user. Two operational states are defined:
■ Enabled—The managed object is able to provide service. This typically indicates that the
corresponding hardware is installed and functional.
■ Disabled—The managed object can not provide all or some services. This typically indicates that
the corresponding hardware has detected some faults or is not installed. For example, when a
provisioned circuit pack is removed, the operational state of the corresponding managed object
becomes disabled.
Each operational state may be further characterized by the following operational state qualifiers that
indicate an operational state due to the operational state of related objects, such as the supporting/
containing (ancestor) object:
■ Ancestor Unavailable, Supporting Unavailable, Related Object Unavailable—The managed object
is Unavailable.
■ Ancestor Locked, Supporting Locked, Related Object Locked—The managed object is Locked.
■ Ancestor Maintenance, Supporting Maintenance, Related Object Maintenance—The managed
object is in Maintenance state.
■ Ancestor Faulted, Supporting Faulted, Related Object Faulted—The managed object is faulted.
■ Ancestor Inhibited, Supporting Inhibited, Related Object Inhibited—The managed object is
Inhibited.


Service State
The service state represents the current functional state of the managed object which is dependent on the
operational state and the administrative state of the object and its ancestors. The following states are
defined:
■ In-service (IS)—Indicates that the managed object is functional and providing services. Its
operational state is enabled and its administrative state is unlocked.
■ Out-of-service (OOS)—Indicates that the managed object is not providing normal end-user services
because its operational state is disabled, the administrative state of its ancestor object is locked, or
the operational state of its ancestor object is disabled.
■ Out-of-service Maintenance (OOS-MT)—Indicates that the managed object is not providing normal
end-user services, but it can be used for maintenance test purposes. Its operational state is
enabled and its administrative state is maintenance.
■ Out-of-service Maintenance, Locked (OOS-MT, Locked)—Indicates that the managed object is not
providing normal end-user services, but it can be used for maintenance test purposes. Its
operational state is enabled and its administrative state is locked.
■ Automatic In-Service (AINS)—Indicates that the managed object will go automatically in-service
when associated tributary PTP faults are cleared (see Automatic In-Service (AINS) on page 3-40,
below).
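
For illustration, the derivation of the service state from the administrative and operational states can be sketched as below (a hypothetical helper, not the IQ NOS implementation; AINS is handled separately, as described in the next section).

    # Hypothetical sketch of the service-state derivation described above.
    def service_state(admin, oper, ancestor_out_of_service=False):
        if ancestor_out_of_service or oper == "disabled":
            return "OOS"
        if admin == "unlocked":
            return "IS"
        if admin == "maintenance":
            return "OOS-MT"
        if admin == "locked":
            return "OOS-MT, Locked"
        raise ValueError(f"unknown administrative state: {admin}")

    print(service_state("unlocked", "enabled"))     # IS
    print(service_state("maintenance", "enabled"))  # OOS-MT
    print(service_state("unlocked", "disabled"))    # OOS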

Automatic In-Service (AINS)


The Automatic In-Service (AINS) service state is used to pre-provision services or to troubleshoot faulty
termination points in an alarm-free state. The AINS state is assigned to a tributary PTP in order to
suppress alarm reporting to the management interfaces for the tributary PTP and its associated client
CTP and/or SNC when the tributary PTP and/or the client CTP are deemed to have an invalid signal. The
following failure alarm conditions on the tributary PTP and the client CTP are used for declaring the signal
as an invalid signal:
■ Tributary PTP: OLOS
■ SONET client CTP: LOF, LOS, AIS-L, DE-ENCAP-LOF, DE-ENCAP-LOS, DE-ENCAP-AIS-L
■ SDH client CTP: LOF, LOS, AIS-MS, DE-ENCAP-LOF, DE-ENCAP-LOS, DE-ENCAP-AIS-MS
■ Fibre Channel client CTP: LOS, LOSYNC, DE-ENCAP-LOS, DE-ENCAP-LOSYNC
■ OTUk client CTP: LOF, LOS, AIS-L, DE-ENCAP-LOF, DE-ENCAP-LOS, DE-ENCAP-AIS-L
■ ODUk client CTP: OCI, LCK, AIS, DE-ENCAP-OCI, DE-ENCAP-LCK, DE-ENCAP-AIS, CSF
■ ODUkT client CTP: AIS, OCI, LCK,LTC, DE-ENCAP-AIS, DE-ENCAP-OCI, DE-ENCAP-LCK, DE-
ENCAP-LTC
■ GIGE client CTP: LOS, LOSYNC, DE-ENCAP-LOS, DE-ENCAP-LOSYNC
■ Clear Channel client CTP: LOS, DE-ENCAP-LOS
■ Equipment: EQPTFAIL and IMPROPRMVL of line module, LM-80, TEM, TAM, or TOM equipment
Alarms are suppressed until the invalid signal alarms on the tributary PTP have been cleared. The
suppression time is specified by the user-configurable AINS valid signal timer, during which the tributary
PTP and any associated client CTP and/or SNC is declared in service. If the fault condition re-occurs
while the timer is active, the equipment/termination point transitions to the “out-of-service:AINS” state
and the valid signal timer is reset to its configured value.
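
The timer behavior can be illustrated with the following sketch (a hypothetical class and timer value, not IQ NOS code): the entity stays in the out-of-service:AINS state while the valid signal timer runs, and any recurring fault resets the timer to its configured value.

    # Hypothetical illustration of the AINS valid signal timer behavior above.
    class AinsTimer:
        def __init__(self, valid_signal_seconds):
            self.configured = valid_signal_seconds
            self.remaining = valid_signal_seconds

        def on_fault(self):
            # Fault re-occurred while the timer is active: reset the timer.
            self.remaining = self.configured
            return "out-of-service:AINS"

        def on_tick(self, seconds):
            self.remaining = max(0, self.remaining - seconds)
            return "in-service" if self.remaining == 0 else "out-of-service:AINS"

    t = AinsTimer(valid_signal_seconds=300)   # 300 is an arbitrary example value
    print(t.on_tick(120))   # out-of-service:AINS
    print(t.on_fault())     # fault recurs: timer resets
    print(t.on_tick(300))   # timer expires fault-free: in-service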
Listed below are the possible ways to enable AINS:
■ If the tributary port is in the locked or maintenance state and the associated client CTP is not in the
maintenance state.
■ If the tributary port is in unlocked state and the tributary port or the associated client CTP has a
fault on it.
■ If the tributary port is already-provisioned with AINS disabled, there must be a fault condition
present on the tributary port or the corresponding Client CTP. On an attempt to enable AINS when
there is no such fault condition, an error message is displayed.

Note: Once AINS is enabled for a port, the associated client CTP cannot be changed to the
maintenance state.

The AINS state is supported for both local and remote SNCs and sub-SNCs. AINS state is not supported
for channelized SNCs.
Starting with Release 20.0, SNC Fail and CSF (Client Signal Failure) handling under AINS behaves as
follows, for both an unprotected SNC configuration with a CSF on the T-ODUk and a protected SNC
configuration with a CSF on the L-ODUk:
■ The SNCFAIL alarm is not reported on detecting a CSF if AINS is enabled on the corresponding
tributary port of the SNC endpoint.
■ The SNCFAIL alarm is reported on detecting a CSF if AINS is disabled on the corresponding
tributary port of the SNC endpoint.
This feature is supported under all of the conditions listed below:
■ Supported for ODUs created through GMPLS based SNCs
■ Supported for both Local and Remote SNCs and all types of Network Mappings, Adaptations and
SNC endpoints.
■ The Masking of SNCFAIL alarm is applicable even if the CSF is used as Protection Switch trigger.

Tributary Disable Action


For DTN-X, DTN, and XT nodes, the user can configure the action taken by the tributary once the
tributary is disabled:
■ For DTN and DTN-X interfaces, a tributary is considered disabled when the tributary is locked, or
when LOF or AIS is detected for the tributary.
■ For XT-500 interfaces, a tributary is considered disabled when the tributary is locked, or when the
tributary detects one of the following conditions:


□ Optical Loss of Signal (OLOS) received at ingress TOM


□ Local fault (LF) received at ingress
□ Loss of Sync (LOSYNC) received at ingress
□ Data path disruption (e.g., due to hardware failure)
□ Line fiber faults
■ For XT(S)-3300 interfaces, a tributary is considered disabled when the tributary is locked, or when
the tributary detects one of the following conditions:
□ A remote (Far-End) client ingress fault leads to tributary disable action applied on the local
(Near-End)
□ Post-FEC-BER-SF received at ingress
□ Data path disruption (e.g., due to hardware failure)
□ Line fiber faults

Note: Tributary Disable Action is not supported on the PXM.

Note: In addition, the DTN-X supports tributary disable action upon detection of a forwarded error,
see Forward Defect Triggering of Tributary Disable Action on page 3-45.

The following table describes the supported tributary disabling actions.

Table 3-3 Tributary Disable Actions


Tributary Disable Action: Description and Supported Interface Types

Disable Laser: (All interface types) When disabled, the tributary turns off its laser.
    Note: For electrical transmit TOMs (TOM-1.485HD-TX and TOM-1.4835HD-TX), this setting is
    called Disable Transmitter, see below.

Disable Transmitter: (Applicable to electrical transmit TOMs) When disabled, the electrical transmit TOM
turns off its laser (this is the equivalent of Laser Off, but applicable only to electrical transmit TOMs).
    Note: Because the transmitter on electrical transmit TOMs cannot be disabled, when an
    electrical TOM is configured for the Disable Transmitter setting, the TOM will transmit all zeros.

Generate AIS-L: (Applicable to all SONET and SDH interfaces on TAMs/TIMs, and OTUk interfaces on
TAMs only) When disabled, the tributary sends an AIS signal.
    Note: AIS-L generation is not supported for OC48/STM16 on TIM-16-2.5GM. For these
    services, the Generate Generic AIS option can be used instead.
    Note: AIS-L generation is not supported for OC-3 and STM-1 payloads on the TAM-8-2.5GM.
    It is recommended to use the Disable Laser setting for these payloads on the TAM-8-2.5GM.

Generate Generic AIS: (Applicable to SONET and SDH interfaces on TIM-16-2.5GM only) When the
tributary disable action is in effect, the tributary sends a generic AIS signal.

Send All Zeros: (Applicable to Fibre Channel interfaces on TAMs only) When the tributary disable action
is in effect, the tributary sends all zeros in the entire frame, which results in a PCS Loss of Sync alarm on
the downstream equipment or test set.

Do Nothing: (Applicable to 10G DTF clients on the TAM-2-10GT and DICM-T-2-10GT, and
1GFC-CC/1.0625G and 2GFC-CC/2.125G clients on the TAM-8-2.5GM) The laser continues to transmit
and DTF framing is kept intact even when transmit DTPs are faulted.

Send LF: (Applicable to 100GbE clients on TAM-1-100GR/TAM-1-100GE, 40GbE clients on
TAM-1-40GR/TAM-1-40GE, and 10GbE and 1GbE signals on TAM-2-10GM/TAM-8-2.5GM/DICM-T-2-10GM
on a DTC/MTC; and for all Ethernet clients on the XTC and XT) When the tributary disable action is in
effect, the tributary sends a local fault (LF) signal towards the connected client equipment upon receiving
network-side faults or de-encapsulated faults due to faults received from far-end client equipment.
See Send Local Fault Signal on page 3-44 for more details.

Insert Idle Signal: (Applicable to Fibre Channel interfaces; to Ethernet interfaces on TAM-8-2.5GM,
TAM-2-10GM, DICM-T-2-10GM, TAM-1-40GE, TAM-1-40GR, TAM-1-100GE, and TAM-1-100GR; and to all
Ethernet clients on the XTC) When disabled, the tributary sends an idle signal.

Send NOS: (Applicable to Fibre Channel interfaces on TAMs, 8G Fibre Channel on TIM-5-10GM/TIM-5-10GX,
and 2G/4G Fibre Channel on the TIM-16-2.5GM) When disabled, the tributary sends an NOS (Not
Operational Primitive Sequence) signal.

The DTN-X and DTN support a configurable setting on the tributary physical termination point that
triggers the configured disable action on the tributary when a post-FEC Bit Error Rate - Signal Failure
(BER-SF) condition is present on the Tributary DTF Path or ODUk Path. By default, this setting is
disabled (meaning that the tributary disable action does not take place when the Tributary DTF Path
BER-SF condition is present). The user can enable this feature on a per-tributary basis.
A standing condition Laser Shutdown Active (LS-ACTIVE) is reported by the network element and will be
cleared when the laser turns back on. The LS-ACTIVE condition is masked by the conditions Equipment
Fail (EQPTFAIL), Improper Removal (IMPROPRMVL), and ALS Disabled (ALS-DISABLED).


TIM-1-100GE and TIM-1-100GE-Q modules with the Tributary Disable Action set to Laser Off/Disable
Laser support setting a Recovery Tributary Disable Action to be used if there are toggling Rx faults. In
case of such faults on these TIMs, it is recommended to set the Recovery Tributary Disable Action to
Send IDLE or Send LF.

Note: In the case of DTN-X network elements, the LOS alarm on the downstream line side clears when
the Tributary Disable Action of the upstream peer DTN-X port is set to IDLE. LOS is raised if it is set to
Turn Off Laser or Send LF.

The following are the supported Recovery Tributary Disable Action values:
■ Send IDLE: The tributary interface sends an idle signal.
■ Send Local Fault: The tributary sends a local fault (LF) signal towards the connected client
equipment upon receiving network-side faults or de-encapsulated faults due to faults received
from far-end client equipment.
■ None: The Recovery Tributary Disable Action is disabled.

Send Local Fault Signal


The Send LF (send local fault) option is included in the tributary disable actions for all Ethernet clients on
the XTC and XT, and for Ethernet clients on the TAM-1-100GR, TAM-1-100GE, TAM-1-40GR,
TAM-1-40GE, TAM-2-10GM, DICM-T-2-10GM, and TAM-8-2.5GM on the DTC/MTC. The Send LF option
applies in the following scenarios:
■ 100GbE LAN, 40GbE LAN, and 10GbE LAN (standard client handling, without OTN Adaptation)—
During a signal fail condition on the network side of the Ethernet client signal (e.g., loss of signal,
loss of sync, DTP-AIS, etc.), or de-encapsulated faults due to faults received from far-end
client equipment (de-encapsulated loss of sync, de-encapsulated loss of alignment), the failed
Ethernet signal toward the client connected at the near end is replaced by a stream of 66B blocks,
with each block carrying two local fault sequence ordered sets (as specified in [IEEE 802.3ae]). The
sending of the local fault signal is accomplished by setting the tributary’s disable action to Send LF.

Note: For TAM-2-10GM and DICM-T-2-10GM, this scenario is applicable only when handling
the native Ethernet client signal in ingress and egress directions of the network.

■ 10GbE LAN incoming signal adapted to OTN via OTN Adaptation (see OTN Adaptation Services
on page 4-21)—During a signal fail condition of the network side of the 10GbE LAN client signal
(e.g., loss of signal, loss of sync, DTP-AIS, etc.), the failed 10GbE LAN signal is replaced by a
stream of 66B blocks, with each block carrying two local fault sequence ordered sets (as specified
in [IEEE 802.3]). This replacement signal is then mapped into the OPU2e/OPU1e, as specified in
the G.709/G.Sup43. The 10GbE LAN to OTN Adaptation case is only applicable to TAM-2-10GM
and DICM-T-2-10GM. The sending of local fault signal is accomplished by editing the 10GbE
properties so that the encapsulated signal disable action is set to Send LF (see Encapsulated
Client Disable Action on page 3-46).
■ 1GbE LAN (standard client handling, without OTN Adaptation)—During a signal fail condition of the
network side of the 1GbE client signal (e.g., loss of signal, loss of sync, DTP-AIS, etc.), the failed
1GbE signal is replaced by a stream of 10B blocks, with /v/ error propagation signal (as specified in
[IEEE 802.3]). The sending of local fault signal is accomplished by editing the tributary’s disable
action to Send LF.


Note: For TAM-8-2.5GM, this scenario is applicable only when handling the native Ethernet
client signal in ingress and egress directions of the network.

Forward Defect Triggering of Tributary Disable Action


The DTN-X and XT support an option for triggering the tributary disable action based on forward defects
such as AIS, LF, NOS, etc. This feature is configured on a per-port basis. When disabled, forward defect
indications such as de-encapsulated AIS, LF, and NOS do not trigger the tributary disable action on the
tributary. When enabled (which is the default for SONET, SDH, and Ethernet services), the behavior of
the tributary is as follows:
■ For SONET/SDH (e.g., native SONET/SDH client transport service), forwarded SONET/SDH-AIS
(de-encapsulated AIS in the network to client direction) will trigger tributary disable action as
follows:
□ For tributaries with Disable Laser tributary disable action, turn off laser (LOL)
□ For tributaries with Generate AIS-L tributary disable action, AIS-L will be generated
■ For Ethernet (e.g., native GbE client transport service), a forwarded GbE Local Fault (de-
encapsulated LF in the network to client direction) will trigger tributary disable action as follows:
□ For tributaries with Disable Laser tributary disable action, turn off laser (LOL)
□ For tributaries with Insert Idle Signal tributary disable action, send idle signal
□ For tributaries with Send LF tributary disable action, LF signal will be generated

Note: XT(S)-3300 supports Send LF tributary disable action. A forwarded GbE Local
Fault triggers an LF signal on these tributaries.

■ For Fibre Channel (e.g., native Fibre Channel client transport service), forwarded FC Not
Operational Primitive Sequence (de-encapsulated NOS in the network to client direction) will trigger
tributary disable action as follows:
□ For tributaries with Disable Laser tributary disable action, turn off laser (LOL)
□ For tributaries with Insert Idle Signal tributary disable action, send idle signal
□ For tributaries with Send NOS tributary disable action, send "Not Operational" signal.

Note: For 8G Fibre Channel services on TIM-5-10GM/TIM-5-10GX, the Not Operational Primitive
Sequence (NOS) signal in the transmit direction cannot trigger the tributary disable action.
Therefore, for 8GFC services configured for Send NOS, Forward Defect Triggering must be
disabled.

Note: For 10G Fibre Channel services on TIM-5-10GM/TIM-5-10GX, Forward Defect Triggering
must be enabled.
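
The mapping above can be summarized as a lookup (illustrative only; the dictionary below is a hypothetical restatement of the bullets, not an IQ NOS data structure).

    # Hypothetical restatement of the forward-defect handling described above.
    FORWARD_DEFECT_ACTION = {
        ("SONET/SDH", "Disable Laser"):          "turn off laser",
        ("SONET/SDH", "Generate AIS-L"):         "generate AIS-L",
        ("Ethernet", "Disable Laser"):           "turn off laser",
        ("Ethernet", "Insert Idle Signal"):      "send idle signal",
        ("Ethernet", "Send LF"):                 "generate LF",
        ("Fibre Channel", "Disable Laser"):      "turn off laser",
        ("Fibre Channel", "Insert Idle Signal"): "send idle signal",
        ("Fibre Channel", "Send NOS"):           "send Not Operational signal",
    }

    def on_forward_defect(triggering_enabled, service, disable_action):
        if not triggering_enabled:
            return "no action (forward defect triggering disabled)"
        return FORWARD_DEFECT_ACTION.get((service, disable_action), "no action")

    print(on_forward_defect(True, "Ethernet", "Send LF"))   # generate LF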

Figure 3-28: Example Scenario for Forward Defect Triggering of Tributary Disable Action on page 3-46
shows an example scenario for a tributary disable action triggered by a forward defect indication.


Figure 3-28 Example Scenario for Forward Defect Triggering of Tributary Disable Action

Encapsulated Client Disable Action


In addition to specifying tributary disable actions, certain TIMs and TAMs support an option to configure
an encapsulated client disable action, as described in the following sections:
■ Encapsulated Client Disable Action on Ingress (DTN-X) on page 3-46
■ Encapsulated Client Disable Action on Egress (DTN) on page 3-48

Encapsulated Client Disable Action on Ingress (DTN-X)

The Encapsulated Client Disable Action on the DTN-X is used to define the content of the OPUk sent by
a DTN-X toward the Infinera network in case of an ingress client interface failure of a SONET, SDH, or
Ethernet signal (including locking of the client interface).

Note: In the TL1 interface, this is the ENCLIENTDISABLEACT parameter.

Table 3-4: TIM Support of Encapsulated Client Disable Action on page 3-47 lists the TIMs that support
Encapsulated Client Disable Action, and shows the supported values and behavior based on the service
type and TIM.

Note: Before Release 15.3, the default values for Encapsulated Client Disable Action for all services
types was “No Replace.” In Release 15.3 and above, the default values are updated to those shown
in the table below. The setting for Encapsulated Client Disable Action for any existing services will not
be impacted by an upgrade to Release 15.3. However, for any new services created in Release 15.3
and above, if no value is specified for Encapsulated Client Disable Action, the node will apply the
default setting as shown in the table below.


Table 3-4 TIM Support of Encapsulated Client Disable Action

TIM Type (Service Type): Supported Encapsulated Client Disable Actions and Behavior

TIM-1-100GX, TIM-1-100GM (100GbE):
    Send LF: The OPUk carries a local fault (LF) signal.
    No Replace: Even though the received signal is deemed failed, the system attempts to encapsulate a
    signal as close to the received signal as possible into the OPUk by copying the data stream as it is
    detected by the receiver into the OPUk.

TIM-1-100GE, TIM-1B-100GE (100GbE) and TIM-1-40GE (40GbE LAN):
    No Replace: Even though the received signal is deemed failed, the system attempts to encapsulate a
    signal as close to the received signal as possible into the OPUk by copying the data stream as it is
    detected by the receiver into the OPUk.

TIM-1-40GM (OC-768):
    No Replace: The content of the OPUk will be an invalid signal, which will not be interpreted as a valid
    client signal.
    AISL (default): The OPUk carries AIS-L.

TIM-1-40GM (STM-256):
    No Replace: The content of the OPUk will be an invalid signal, which will not be interpreted as a valid
    client signal.
    AISL (default): The OPUk carries MS-AIS.

TIM-5-10GM, TIM-5B-10GM, TIM-5-10GX, XICM-T-5-10GM (10GbE LAN):
    Send LF (default): The OPUk carries a local fault (LF) signal.
    No Replace: Even though the received signal is deemed failed, the system attempts to encapsulate a
    signal as close to the received signal as possible into the OPUk by copying the data stream as it is
    detected by the receiver into the OPUk.

TIM-5-10GM, TIM-5B-10GM, TIM-5-10GX, XICM-T-5-10GM (SONET OC-192):
    No Replace: Even though the received signal is deemed failed, the system attempts to encapsulate a
    signal as close to the received signal as possible into the OPUk by copying the data stream as it is
    detected by the receiver into the OPUk.
    AISL (default): The OPUk carries AIS-L.

TIM-5-10GM, TIM-5B-10GM, TIM-5-10GX, XICM-T-5-10GM (SDH STM-64):
    No Replace: Even though the received signal is deemed failed, the system attempts to encapsulate a
    signal as close to the received signal as possible into the OPUk by copying the data stream as it is
    detected by the receiver into the OPUk.
    AISL (default): The OPUk carries MS-AIS.

TIM-5-10GM, TIM-5-10GX (8G Fibre Channel):
    Send NOS (default): When the near end has OLOS/LOSYNC, the Not Operational Primitive Sequence
    (NOS) signal will be sent towards the downstream side.
    No Replace: When the near end has OLOS/LOSYNC, the downstream node will detect
    DE-ENCAP-LOSYNC.

TIM-5-10GM, TIM-5-10GX (10G Fibre Channel):
    Send LF (default): When the near end has OLOS/LOSYNC, LF will be sent towards the downstream side.
    No Replace: When the near end has OLOS/LOSYNC, the downstream side should be able to detect
    DE-ENCAP-LOSYNC.

TIM-16-2.5GM (1 GbE):
    Send LF: For 1GbE clients configured for SENDLF, the TIM sends an error propagation signal.

TIM-16-2.5GM (2 GFC):
    Send NOS: When the near end has OLOS/LOSYNC, the Not Operational Primitive Sequence (NOS)
    signal will be sent towards the downstream side.

TIM-16-2.5GM (4 GFC):
    No Replace: When the near end has OLOS/LOSYNC, the downstream side will detect
    DE-ENCAP-LOSYNC.

TIM-16-2.5GM (OC-48, STM-16, OC-12, STM-4, OC-3, STM-1):
    Generic AIS: When the tributary disable action is in effect, the tributary sends a generic AIS signal.

Encapsulated Client Disable Action on XT(S)-3300 Ingress


The Encapsulated Client Disable Action on the XT(S)-3300 is used to define the content of the signal sent
by an XT toward the Infinera network in case of an ingress client interface failure of an Ethernet signal
(including locking of the client interface).
The XT(S)-3300 carries a 100GbE service. The supported Encapsulated Client Disable Action is Send LF,
i.e., the tributary carries a local fault (LF) signal.

Encapsulated Client Disable Action on Egress (DTN)

For DTN, the Encapsulated Client Disable Action specifies the replacement signal type for the
encapsulated client interface upon egress from the DTN network in case of a signal fail condition from the
network side of the client signal.

Note: In the TL1 interface, this is the ENCDISABLEACT parameter.

Encapsulated Client Disable Action is supported only for SONET/SDH adaptation services, Ethernet
adaptation services, and ODUk transport services (see OTN Adaptation Services on page 4-21 and
ODUk Transport on page 4-23). The following TAMs and service types support Encapsulated Client
Disable Action:


■ For TAM-2-10GM and DICM-T-2-10GM:


□ ODU1e (ODUk transport service)
□ ODU2 (ODUk transport service)
□ ODU2e (ODUk transport service)
□ OC-192/10GbE WAN PHY (with OTN adaptation to/from OTU2)
□ STM-64 (with OTN adaptation to/from OTU2)
□ 10GbE LAN (with OTN adaptation to/from OTU2e)
□ 10GbE LAN with OTN adaptation to/from OTU1e
■ For TAM-8-2.5GM:
□ ODU1 (ODUk transport service)
□ OC-48 (with OTN adaptation to/from OTU1)
□ STM-16 (with OTN adaptation to/from OTU1)
The following Encapsulated Client Disable Actions are supported by the DTN:
■ Send All Zeros—(supported by Ethernet adaptation services only) During a signal fail condition
from the network side of the 10GbE LAN client signal (e.g., loss of signal, loss of sync, DTP-AIS,
etc.), the failed 10GbE LAN signal is replaced by all zeros.
■ Send LF—(supported by Ethernet adaptation services only) During a signal fail condition from the
network side of the 10GbE LAN client signal, the failed 10GbE LAN signal is replaced by a stream
of 66B blocks, with each block carrying two local fault sequence ordered sets (as specified in [IEEE
802.3]). This replacement signal is then mapped into the OPU2e/OPU1e, as specified in the G.709.
■ Send AISL—(supported by SONET/SDH adaptation services only) During a signal fail condition
from the network side of the client signal, the failed SONET/SDH signal is replaced by AIS-L (for
SONET interfaces) or MS-AIS (for SDH interfaces).
■ Send ODU-AIS—(supported by SONET/SDH adaptation, Ethernet adaptation, and ODUk transport
services) During a signal fail condition from the network side of the client signal, the failed signal is
replaced with ODUk-AIS. (For more information on this option, see ODUk AIS for ODUk
Encapsulated Clients on page 4-139.)
■ Laser Off—(supported by SONET/SDH adaptation, Ethernet adaptation, and ODUk transport
services) During a signal fail condition from the network side of the client signal, the egress
tributary will turn off its laser.
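
For reference, the applicability rules above can be restated as a simple lookup (illustrative only; the names below are a hypothetical restatement of this list, not an IQ NOS data structure).

    # Hypothetical restatement of which egress Encapsulated Client Disable
    # Actions apply to which DTN service families, per the list above.
    ALLOWED_EGRESS_ACTIONS = {
        "Ethernet adaptation":  {"Send All Zeros", "Send LF", "Send ODU-AIS", "Laser Off"},
        "SONET/SDH adaptation": {"Send AISL", "Send ODU-AIS", "Laser Off"},
        "ODUk transport":       {"Send ODU-AIS", "Laser Off"},
    }

    def egress_action_supported(service_family, action):
        return action in ALLOWED_EGRESS_ACTIONS.get(service_family, set())

    print(egress_action_supported("Ethernet adaptation", "Send LF"))  # True
    print(egress_action_supported("ODUk transport", "Send AISL"))     # False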

Forward Error Correction Configuration for XT


The Infinera XT supports Reed-Solomon Forward Error Correction (RS-FEC) for the detection and
correction of errors in data transmission; RS-FEC can be enabled on 100GbE interfaces on the XT. The
RS-FEC capability implements IEEE 802.3bj functionality.


RS-FEC is supported on any of the QSFP28 type TOMs supported on the XT (i.e., TOM-100G-Q-SR4
and TOM-100G-Q-LR4) and is enabled by default on the TOMs plugged into the XT-500S-100.

Link Layer Discovery Protocol (LLDP) for XT


The Infinera XT supports Link Layer Discovery Protocol (LLDP). This protocol is defined by IEEE 802.1AB
standard and allows an Ethernet device to advertise its capabilities and addresses to another Ethernet
device to support topology discovery of connected Ethernet devices.
The LLDP frame format handling is in accordance with the standard defined in IEEE 802.1AB to ensure
interoperability between two devices that are connected together. Information is exchanged between
LLDP agents using type-length-value structures (TLVs) on Ethernet frames.

Figure 3-29 LLDP frame and data unit formats

Each LLDP Data Unit includes four mandatory TLVs:
■ Chassis ID—The string value used to identify the chassis component associated with the remote
system.
■ Port ID—The string value used to identify the port associated with the remote system.
■ Time To Live (TTL)—The number of seconds for which the information in the LLDP Data Unit
remains valid.
■ End of LLDPDU—Marks the end of the TLVs in the LLDP Data Unit.
In addition, the following optional TLVs are supported:
■ Port Description—The string value used to identify the description of the given port associated with
the remote system.
■ System Name—The string value used to identify the name of the remote system
■ System Description—The string value used to identify the system description of the remote system
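
The type-length-value structure defined by IEEE 802.1AB uses a 7-bit TLV type and a 9-bit length, followed by the value. The following Python sketch is a generic illustration of that encoding (the chassis and port identifier strings are made-up examples; this is not Infinera's LLDP agent).

    import struct

    # Generic IEEE 802.1AB TLV encoding: 7-bit type, 9-bit length, then the value.
    def encode_tlv(tlv_type, value):
        header = (tlv_type << 9) | (len(value) & 0x1FF)
        return struct.pack("!H", header) + value

    CHASSIS_ID_TLV, PORT_ID_TLV, TTL_TLV, END_OF_LLDPDU_TLV = 1, 2, 3, 0

    lldpdu = (
        encode_tlv(CHASSIS_ID_TLV, b"\x07" + b"node-1")    # subtype 7 = locally assigned
        + encode_tlv(PORT_ID_TLV, b"\x07" + b"port-1")     # subtype 7 = locally assigned
        + encode_tlv(TTL_TLV, struct.pack("!H", 120))      # information valid for 120 s
        + encode_tlv(END_OF_LLDPDU_TLV, b"")
    )
    print(lldpdu.hex())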
Using management interfaces, the operation mode of LLDP agent can be configured on each Ethernet
client interface to any of the following:
■ Disabled—This is the default mode on an Ethernet client interface and indicates that LLDP is
disabled.


■ RxOnly—In this operation mode, the LLDP agent only receives LLDP frames on the port and does
not transmit LLDP frames to the client. The remote system information is gathered and stored in an
SNMP MIB (LLDP-MIB-v2).

Figure 3-30 LLDP Receive Only Mode of Operation

Note the following for LLDP support on XT:


■ LLDP configuration does not impact data plane traffic and is not a service-affecting operation.
■ The LLDP mode of operation persists over power cycle, cold-reset or warm reset of the
management card or the chassis/network element.
■ If the LLDP packet size is greater than 256 bytes, it cannot be learned.
■ It is not necessary for all of the mandatory TLVs to be present in the received LLDP packet; only
the Chassis ID TLV is required.
■ The user must enable LLDP snooping on both the trib port and the line port on both XTs to be able
to learn LLDP packets.
□ The AMCC E120 can only capture packets on the line-side port of the network.
□ LLDP packets sent from the local trib port will be learned on the far-side line port. This LLDP
packet is then sent back via IGCC to the local node to be learned on the trib port.
■ The LLDP database is not deleted when an LLDP packet is received with a TTL of 0.
■ If loopback is enabled on the OCG port, trib port LLDP packets cannot be learned.
■ LLDP packets cannot be learned if the IGCC link fails or an OCG OLOS alarm is raised.

Power Draw Reporting (MTC-9 and MTC-6)


The MTC-n chassis support power draw reporting for all the modules installed on the chassis (with 10%
accuracy). The measurement is enabled by default, and can be enabled/disabled via the chassis
equipment properties for the MTC-n. The power draw measurement can be seen via the “Actual Chassis
Power Draw” PM measurement on the chassis (when PM collection is enabled on the chassis). The
power draw PM is supported for real time and historical (15 minute/24 hour) PM, and data is collected for
minimum, maximum, and average values.

Power Draw of Equipment


The user can configure IQ NOS software to calculate per-chassis, worst-case power draw based on shelf
equipage, and escalate a standing condition (PWRDRAW) for the chassis. This power draw limit is
compared against the total estimated power draw for all of the equipment provisioned (or pre-provisioned)
in the chassis, and the chassis raises an alarm if the sum of the power values for the provisioned/pre-
provisioned equipment in the chassis exceeds the user-configured maximum power draw value.
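
A minimal sketch of this comparison follows (a hypothetical helper and example wattages, not an IQ NOS API; the real per-module estimates come from the equipment model).

    # Hypothetical sketch of the PWRDRAW check described above.
    def power_draw_alarm(provisioned_module_watts, max_draw_watts):
        """True if the PWRDRAW standing condition should be raised."""
        return sum(provisioned_module_watts) > max_draw_watts

    # Illustrative numbers only.
    print(power_draw_alarm([180.0, 220.0, 95.0], max_draw_watts=450.0))  # True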

Note: If a chassis exceeds its configured maximum power draw value, it raises the Power Draw
alarm, but does not power down or take any further action. See Power Draw Alarm on page 2-9 for
information about the behavior of the power draw alarm.

Note: The maximum power draw value is not retrieved for fan modules, as the power draw of two fans
is integrated into the power draw estimate for the chassis (along with two PEMs, the IO Panel, etc.).
Likewise, the maximum power draw value is not retrieved for TOMs, as the power draw value of
TOMs is integrated into the power draw estimate for each TAM (assuming maximum number of
TOMs).

OSC and Raman Pilot Laser Disabling


The default behavior for BMM, OAM, ORM, RAM-2-OR, IAM, and IRM modules is to maintain the
transmission of the OSC, regardless of fiber cut or fiber removal. Unless there is an OSC fault condition,
the OSC is transmitted to the fiber. Likewise for the Raman pilot tone for RAM-2-OR and RAM-1 modules,
the default behavior is for the Raman module to transmit the pilot tone even during a fiber cut or fiber
removal. (Note that REM-2 is used in conjunction with RAM-2-OR/RAM-1 and the REM-2 itself doesn’t
have a pilot laser.)
However, in cases where all light to the fiber needs to be turned off, it is possible to disable the OSC and,
for links with Raman amplification, to disable the Raman pilot laser:
■ For BMMs, OAMs, ORMs, IAMs, and IRMs, the OSC port can be administratively locked, which
shuts down the transmit OSC from the module:
□ When locked, the historical PM is marked as invalid.
□ Any associated faults (OSC faults, GMPLSCC faults, etc.) are masked. For Optical
Amplifiers, locking the OSC also suppresses inter-OAM misconnection alarms on the peer
card.
■ For RAM-2-OR modules, the OSCT O2 port can be administratively locked, which shuts down the
transmit OSC from the module:
□ When locked, the historical PM is marked as invalid.


□ Any associated faults (OSCT faults) are masked.


■ For RAM-2-OR and RAM-1, there is a Disable Pilot Laser field in the equipment properties
interface:
□ The Raman module must be administratively locked before disabling the pilot laser.
□ When disabled, the equipment state of the module indicates that the pilot laser is disabled.
□ Any associated faults (Raman pilot laser faults) are masked while the pilot laser is disabled.
Note that it is the source of the OSC that must be locked, and the OSC source varies depending on the module type and the system configuration (a simple lookup is sketched after this list):
■ For BMM, BMM2, BMM2P, IAMs, and IRMs, the OSC source is in the module itself.
■ For BMM2C, the OSC originates from the associated OAM/ORM that acts as the BMM2C’s
preamplifier (see Figure 1).
■ For OAM or ORM in preamplifier/booster mode, the OSC originates from the associated module
(see Figure 1).
■ For OAM or ORM in standard configuration (dual slot with OSC), the OSC source is in the module
itself.
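The list above can be captured in a small lookup, sketched below; the module names follow the list, while the helper function itself is hypothetical and not part of IQ NOS:

# Illustrative lookup of the OSC source per module type, per the list above.
# Hypothetical helper; not IQ NOS code.

def osc_source(module_type, oam_orm_mode=None):
    """Return a short description of where the OSC originates."""
    if module_type in ("BMM", "BMM2", "BMM2P", "IAM", "IRM"):
        return "OSC source is in the module itself"
    if module_type == "BMM2C":
        return "OSC originates from the associated OAM/ORM acting as preamplifier"
    if module_type in ("OAM", "ORM"):
        if oam_orm_mode == "preamp_booster":
            return "OSC originates from the associated module"
        return "OSC source is in the module itself (standard dual slot with OSC)"
    raise ValueError("unknown module type: " + module_type)

print(osc_source("BMM2C"))
print(osc_source("OAM", oam_orm_mode="preamp_booster"))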

Chassis Power Control


DTN-X, DTN, and FlexILS nodes provide software for monitoring and controlling chassis power
consumption, as described in the following sections:
■ XTC Chassis Power Control on page 3-53
■ MTC-9/MTC-6 Chassis Power Control on page 3-54
■ MTC/DTC Chassis Power Control on page 3-55

XTC Chassis Power Control


When a line module or TIM is installed in an XTC, the active XCM recognizes the module and calculates
the module’s power requirements, then verifies whether the power available to the system is sufficient to
support the new module. Once the XCM determines that the available power is sufficient, the XCM allows
the new module to power up. If the current available power is not sufficient, the XCM will not allow the
module to fully power up; the module remains in a reset state and consumes a minimal amount of power
and the XTC raises an alarm indicating that the system requires more power than available (PWRCTRL-
INIT, see Power Draw Alarm on page 2-9). Once available power increases sufficiently, the XCM will
automatically power up modules in the reset state.
This applies only to newly-installed or re-seated line modules or TIMs; if these modules are cold reset the
XCM does not interfere with the reboot.
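The admission decision described above can be sketched as follows; the power figures and the function name are hypothetical, and the sketch omits most of the real XCM behavior:

# Illustrative sketch of the XCM power-up decision for a newly installed
# line module or TIM. Hypothetical names and values; not IQ NOS code.

def admit_new_module(available_power_watts, powered_modules_watts, new_module_watts):
    """Return (allow_power_up, raise_pwrctrl_init_alarm)."""
    consumed = sum(powered_modules_watts)
    if consumed + new_module_watts <= available_power_watts:
        return True, False        # the module is allowed to fully power up
    # Otherwise the module is held in reset (consuming minimal power) and the
    # PWRCTRL-INIT alarm is raised until more power becomes available.
    return False, True

print(admit_new_module(2000.0, [650.0, 700.0], 500.0))   # (True, False)
print(admit_new_module(2000.0, [950.0, 900.0], 500.0))   # (False, True)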


Third Party Power Supply (XTC-10)


For cases where a site does not have the space to house the power facilities for an XTC-10 chassis, the
XTC-10 supports the Power Supply Type setting. The Power Supply Type parameter can have one of the
following values:
■ Native (default)—The node monitors PEMs and the Power Control Feature is enabled based on
PEM inputs.
■ Unmanaged 3rd Party—The Power Control Feature is disabled for the XTC-10, the system does
not monitor the XTC-10 for power redundancy, nor does the system monitor for PEM faults. The
system does not collect PEM power information for input/output voltage. Once the node’s XCM
boots up and detects no PEMs, a user with network administrator (NA) or network engineer (NE)
privileges can set the Power Supply Type parameter to this mode.

Note: The XTC-10 must be set to Unmanaged 3rd Party power supply mode when the chassis is
configured to use a third-party AC-based power supply.

Note: The XTC-10 can be configured for an Unmanaged 3rd Party power supply mode only when the
XTC-10 chassis is administratively locked or in maintenance mode. The node will not allow
Unmanaged 3rd Party power supply mode if a PEM is detected by the node via the SCSI cables.
(There are no restrictions when setting the power supply mode from Unmanaged 3rd Party to Native.)
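The restriction in the note above amounts to a simple precondition check, sketched below with hypothetical names; it is not the actual IQ NOS validation logic:

# Illustrative precondition check for switching an XTC-10 from Native to
# Unmanaged 3rd Party power supply mode. Hypothetical names only.

def can_set_unmanaged_third_party(chassis_admin_state, pem_detected):
    """Return True if the change to Unmanaged 3rd Party mode is allowed."""
    if chassis_admin_state not in ("locked", "maintenance"):
        return False     # chassis must be administratively locked or in maintenance mode
    if pem_detected:
        return False     # a PEM detected by the node blocks Unmanaged 3rd Party mode
    return True

# Changing back from Unmanaged 3rd Party to Native has no such restrictions.
print(can_set_unmanaged_third_party("locked", pem_detected=False))     # True
print(can_set_unmanaged_third_party("unlocked", pem_detected=False))   # False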

Note the following for an XTC-10 chassis configured for Unmanaged 3rd Party power supply mode:
■ All input voltage monitoring and power redundancy must be verified by the user since the system
will no longer monitor power input. This means that the user must ensure the following:
□ The third-party power supply provides the correct input (operating voltage, wattage, etc.).
□ The third-party power supply provides redundancy. The node will not report the PWR-PROT-FAIL (chassis power redundancy lost) alarm.
■ The XTC-10 chassis does not support the Chassis Power Control feature; this feature is set to disabled. This means that when a new module is inserted into the chassis, the system allows the module to initialize without checking for available power, and without reporting the PWRCTRL-INIT (Power Control) alarm. As a result, the chassis may draw more power than is available from the third-party power supply.
■ The maximum available power information will not be available for the chassis.
■ The Power Draw (PWRDRW) alarm behavior is the same for both power supply modes (the
chassis will assert the PWRDRW alarm if the chassis’ estimated power requirements exceed the
user-configured maximum power draw threshold for the chassis). See Power Draw Alarm on page
2-9) for more information on maximum power draw settings.

MTC-9/MTC-6 Chassis Power Control


When a module (i.e. IAM, IRM, FRM, and/or FSM) is installed in an MTC-9/MTC-6, the active IMM
recognizes the module and calculates the module's power requirements, then verifies whether the power
available to the system is sufficient to support the new module. Once the IMM determines that the
available power is sufficient, the IMM allows the new module to power up. If the current available power is
not sufficient, the IMM will not allow the module to fully power up; the module remains in a reset state and
consumes a minimal amount of power. Once available power increases sufficiently, the IMM will
automatically power up modules in the reset state.
This applies only to newly-installed or re-seated modules; if these modules are cold reset the IMM does
not interfere with the reboot.
Note that the PEMs on the MTC-9 use an external circuit breaker with possible ratings of 15A or 20A. For
site configurations where a 20A circuit breaker is installed, the user can configure the circuit breaker
rating on the MTC-9 with possible values of 15A (default) or 20A. The maximum power supported by an
MTC-9 will vary depending on the circuit breaker used by the chassis (and indicated in the circuit breaker
rating value configured on the chassis). Note the following about changing the circuit breaker rating (summarized in the sketch after this list):
■ Increasing the circuit breaker rating from 15A to 20A is allowed without any restrictions. Increasing
the circuit breaker rating will increase the available power on the MTC-9 chassis from 600 watts to
800 watts, so modules held in the reset state will be powered up.
■ Decreasing the circuit breaker rating from 20A to 15A is allowed only if the new available power is
greater than the power draw from the currently installed equipment, and if the new available power
is also greater than the configured maximum power draw. This is done to prevent power draw from
exceeding the maximum available power from the PEMs.
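These rules can be read as a small validation function, sketched below using the 600 W and 800 W figures quoted above; the function name and inputs are hypothetical:

# Illustrative validation of an MTC-9 circuit breaker rating change.
# Hypothetical helper; available power values are taken from the text.

AVAILABLE_POWER_WATTS = {15: 600.0, 20: 800.0}

def breaker_change_allowed(new_rating_amps, installed_draw_watts, configured_max_draw_watts):
    """Return True if the breaker rating may be changed to new_rating_amps."""
    if new_rating_amps == 20:
        return True                          # increasing 15A -> 20A is always allowed
    new_available = AVAILABLE_POWER_WATTS[new_rating_amps]
    # Decreasing 20A -> 15A requires the new available power to exceed both the
    # current equipment draw and the configured maximum power draw.
    return (new_available > installed_draw_watts and
            new_available > configured_max_draw_watts)

print(breaker_change_allowed(20, 550.0, 700.0))   # True
print(breaker_change_allowed(15, 550.0, 700.0))   # False (configured max exceeds 600 W)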

MTC-9 Air Baffles


Optional air baffle kits are available for the MTC-9. Air baffles can be installed on each side of the MTC-9
to redirect intake air to the chassis air inlet (from the front or rear) and to divert exhaust air away from the
chassis (toward the rear or front). In addition, a second air baffle option is supported that provides
improved air flow to the MTC-9 (note that this option extends the MTC-9 chassis height to 8 RU). Once
the air baffle kit has been physically installed on the chassis, the user must select the appropriate Baffle
Type attribute (listed below) for the chassis from the management interfaces:
■ None (default)
■ MTC-9-AIRBAFFLE
■ MTC-9-AIRBAFFLE2
The system uses the Baffle Type attribute to determine power consumption by the fan tray and to adjust
the parameters for the algorithm used for controlling the fan speed in the fan tray.

MTC/DTC Chassis Power Control


IQ NOS software includes the Chassis Power Control feature for DTCs and MTCs, which provides
software monitoring and control of chassis power consumption.
Because the actual power consumption values are unavailable to the software, IQ NOS uses an estimate
for the worst-case power consumption of all the equipment that is physically present in the chassis (this
does not include pre-provisioned, unequipped equipment). The conditions which may cause the worst-
case power consumption estimate to exceed the pre-defined threshold are:
■ Chassis is heavily loaded (see the DTC/MTC Power Consumption and Configuration Rules in the
DTC/MTC Hardware Description Guide to determine which configurations might exceed the
maximum current draw of 70 Amps).


■ Chassis has a temperature or fan fault that requires the fans to run at 90% of their maximum speed.

Note: There is no way to measure the input voltage for chassis equipped with only DLMs/XLMs.
However, ADLMs, ADLM-80s, AXLMs, AXLM-80s, SLMs, SLM-80s, and MCM-Cs do have the ability
to measure their voltage input. If one of these modules is present in the chassis, the input voltage to
the chassis can be measured and used for the current draw estimation. In the absence of an ADLM,
ADLM-80, AXLM, AXLM-80, SLM, SLM-80, or MCM-C, a voltage level of -39V is assumed.

IQ NOS then takes the minimum preventative action needed to prevent power consumption from
exceeding a pre-defined threshold which would potentially trip the 70A circuit breaker under worst-case
operating conditions. IQ NOS does the following:
■ Raises an alarm indicating that the Power Control feature has taken action on the chassis (see
Power Draw Alarm on page 2-9)
■ Resets the receive-side application-specific integrated circuit (ASIC) components in the TAM that
are in the Loss of Frame (LOF) condition (i.e., not carrying traffic).
■ Governs the fan speed to 85% or less of the maximum rotation speed if more power savings is
needed.

Note: IQ NOS will NOT reset the ASIC component that is successfully carrying user traffic, nor will IQ
NOS reduce the fan speed such that component failure is a high probability.

Once initiated, the Power Control feature can be disabled in two ways:
■ The user can manually disable the Power Control feature via the management interfaces.
■ IQ NOS can disable the Power Control feature based on any of the following:
□ The fan speed has changed to less than 75% of its maximum for five seconds (due to the
ambient temperature dropping).
□ Equipment has been physically removed from the chassis, thereby decreasing the power
demand.
□ Any ADLMs, ADLM-80s, AXLMs, AXLM-80s, SLMs, or SLM-80s present in the chassis
report a voltage change indicating decreased current demand for the chassis.
Once the Power Control feature is disabled due to any of the above events, the fan speed control is
released and the line modules re-enable their receive-side ASICs.
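The behavior above can be summarized in a small sketch, shown below with hypothetical names; the thresholds (85%, 75%, five seconds, -39V) are taken from this section, while everything else is illustrative:

# Illustrative sketch of the DTC/MTC Power Control actions and release
# conditions described above. Hypothetical names; not IQ NOS code.

ASSUMED_VOLTAGE = -39.0   # volts assumed when no module can measure the input voltage

def power_control_actions(worst_case_power_watts, threshold_watts):
    """Return the actions taken when the worst-case estimate exceeds the threshold."""
    if worst_case_power_watts <= threshold_watts:
        return []
    return [
        "raise the Power Control alarm on the chassis",
        "reset receive-side ASICs that are in LOF (not carrying traffic)",
        "govern the fan speed to 85% or less of maximum if more savings are needed",
    ]

def power_control_released(fan_speed_pct, seconds_below_75pct, equipment_removed, voltage_change_reported):
    """Return True if any of the release conditions listed above is met."""
    return ((fan_speed_pct < 75 and seconds_below_75pct >= 5)
            or equipment_removed
            or voltage_change_reported)

print(power_control_actions(2900.0, 2800.0))           # actions are taken
print(power_control_released(70, 6, False, False))     # True, control is released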

Zero Touch Provisioning


Zero Touch Provisioning (ZTP) is an end-to-end solution that allows network elements to be provisioned automatically, with minimal operator intervention, to deploy and maintain services. Starting with Release 20.0, ZTP is supported and enabled by default for XT-3300 and MTC-6/MTC-9 based IQ NOS network elements, and is supported on the DNA, GNM, TL1, and CLI interfaces.


When the network element is installed and powered on, the ZTP obtains all the necessary configurations
without any operator intervention to bring up the network element to a state where it can be managed
through management interfaces.
The ZTP prompt is displayed at the time of network element turn-up. The ZTP feature can be disabled at this point. If ZTP is not disabled, any new configurations will override the existing configurations. Refer to the XT Turn up and Test Guide and the FlexILS ROADM Turn up and Test Guide for more information.

Figure 3-31 Example Scenario of ZTP Deployment

The ZTP workflow follows a series of steps, which include the following (a simplified sketch follows the stage descriptions below):
■ The network element requests a valid DHCP lease. The DHCP server responds with a valid lease containing the DCN IP address, gateway, and subnet, as well as vendor-specific options containing the image and configuration file locations (URIs).
■ The network element checks whether the currently running release differs from the one specified in the DHCP vendor options and, if it is different, downloads a new software image.
■ The network element downloads the initial configuration specified in the vendor-specific options, which contains a list of commands/operations for subsequent operation.
DHCP stage:
■ The configured DHCP server within the management network provides IP configuration (IP
Address, Net mask, Gateway Address, and DNS Address) in a dhcpd.conf file.
■ The DHCP client running on the network element receives the device-specific configuration and applies it to the network element.
Software image download and install stage:
■ The network element downloads the software image only if the currently running software image on the network element is lower than the software image referenced by the DHCP server. If the new software image is different from the current software image, the network element downloads and installs the new image.
■ The network element installs the software image and performs a reboot.
Initial start-up configuration setup stage:
■ The network element downloads the initial configuration file from ZTP server. This file contains all
configuration commands pertaining to equipment, facilities, service provisioning and others.


■ The initial configuration file is executed within the network element and the configurations specified
in the config file are applied.
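A simplified, hypothetical sketch of these stages is shown below. It illustrates only the decision flow described above; the option names, helper callables, and values are placeholders and are not the actual DHCP options or IQ NOS interfaces:

# Simplified illustration of the ZTP stages described above.
# All names (options, helpers) are placeholders, not real IQ NOS APIs.

def run_ztp(dhcp_lease, running_version, download, install_and_reboot, apply_config):
    # DHCP stage: take the IP configuration from the lease.
    ip_config = {k: dhcp_lease[k] for k in ("ip_address", "netmask", "gateway", "dns")}

    # Software image stage: download and install only if the referenced image differs.
    vendor = dhcp_lease["vendor_options"]
    if vendor["image_version"] != running_version:
        install_and_reboot(download(vendor["image_uri"]))

    # Initial configuration stage: fetch the configuration file and execute it.
    apply_config(download(vendor["config_uri"]))
    return ip_config

lease = {"ip_address": "10.0.0.5", "netmask": "255.255.255.0",
         "gateway": "10.0.0.1", "dns": "10.0.0.2",
         "vendor_options": {"image_version": "20.0",
                            "image_uri": "ftp://server/image",
                            "config_uri": "ftp://server/ne.cfg"}}
print(run_ztp(lease, "19.2",
              download=lambda uri: "contents of " + uri,
              install_and_reboot=lambda image: None,
              apply_config=lambda cfg: None))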
User passwords and database passwords can be entered in encrypted form in the initial configuration file. The encrypted text can be generated with the encryption tool provided to the customer, and the encrypted password is decrypted on the network element. The xform parameter can be added to the commands to specify whether the password is encrypted; it can be set to true or false. Below is a sample of the CLI commands used in the config file:
aaa authentication users changepasswd user secadmin curpasswd Infinera1
newpasswd Infinera\#2
aaa authentication users username secadmin role MA,SA,NA,NE,PR,TT,RA,EA
InactivityTimeout 0 PasswordAging 0 hostname NEA
!
!do show dhcp
!ztp system ztpmode disable cleandb false
!ztp ZtpMode Enable
!logging host SYSLOG-1 TransportProtocol TLS ServerIpAddress 1.1.1.1
!system MgmtProxyRoutePreference GFD
hostname NEA
ip xfr swdl XfrPrimaryIp46 192.168.0.2 XfrPrimaryUser sttester
XfrPrimaryPasswd sttester XfrFileName none XfrFilePath /Kumuda/ITN
AdministrativeState UnLocked
commission equipment chassis 1 ProvSerialNumber MA6814140892
ProvisionedChassisType MTC_9

Migrating a DTN or Optical Amplifier to a DTN-X


In order to introduce DTN-X nodes into existing DTN networks, a DTN or Optical Amplifier node can be
converted to a DTN-X node with an XTC as the Main Chassis. This procedure is non-service impacting
for the traffic carried by the existing network element. Once the network element conversion is complete,
new services can be configured for the DTN-X.
To migrate a DTN node to a DTN-X, the DTN node and DTN-X node must be running IQ NOS Software
Release 8.0.1 or higher. To perform the migration, an XTC is installed at the DTN site and commissioned
as node controller of the DTN-X node. The shelf controller MCMs on the existing DTN are then issued a
command to accept the XCM on the XTC as the new node controller. After that, the DTN database is
backed up remotely, downloaded to the new node controller XCM on the DTN-X, then merged with the
DTN-X database. Lastly, the MCM node controller on the DTN is then manually converted to a shelf
controller. The DTN and XTC are then physically connected to each other to act as a DTN-X node.
To migrate an Optical Amplifier to a DTN-X, the Optical Amplifier node and DTN-X node must be running
IQ NOS Software Release 9.0.2 or higher. To perform the migration, an XTC is installed at the Optical
Amplifier site and commissioned as node controller of the DTN-X node. The shelf controller OMMs on the
existing Optical Amplifier are then issued a command to accept the XCM on the XTC as the new node
controller. After that, the Optical Amplifier database is backed up remotely, downloaded to the new node
controller XCM on the DTN-X, then merged with the DTN-X database. Lastly, the OMM node controller on
the Optical Amplifier is then manually converted to a shelf controller. The Optical Amplifier and XTC are
then physically connected to each other to act as a DTN-X node.
For the detailed procedures, see the DTC/MTC Task Oriented Procedures Guide or contact an Infinera
Technical Assistance Center (TAC).


Migrating BMM based line systems to FRM based line systems


IQ NOS R17.1 supports migration of BMM based line systems to FRM based line systems for networks
with services originating from AOLM-500, AOLX-500, AOLM2-500, AOLx2-500 or XT-500S line modules.
On migration to FRM based line systems, the fiber capacity can be leveraged by utilizing super-channel
based services along with OCG based services over the extended C-Band spectrum. FlexILS line
systems also provide ROADM functionality.
Refer to the Line Systems Task Oriented Procedures Guide for more information on the procedure to migrate from BMM based line systems to FRM based line systems.



CHAPTER 4

Service Provisioning

Infinera Intelligent Transport Networks feature service provisioning capabilities that allow users to
engineer user traffic data transport routes. Service provisioning is supported on the Infinera DTN, DTN-X, and FlexILS network elements listed below:
DTN Service Provisioning on page 4-2
DTN-X Service Provisioning on page 4-33
Packet Switching Service Provisioning on page 4-62
FlexILS Service Provisioning on page 4-92
IQ NOS Digital Protection Services
Multi-layer Recovery in DTNs on page 4-167
Dual chassis Y-cable protection (DC-YCP) on page 4-170


DTN Service Provisioning


Note: Unless specifically noted otherwise, all references to “line module” for the DTN will refer
interchangeably to either the DLM, XLM, ADLM, AXLM, and/or SLM. All references to the “LM-80” will
refer interchangeably to the AXLM-80, ADLM-80 and/or SLM-80 (DTC/MTC only).

IQ NOS provides service provisioning capabilities that include establishing data path connectivity
between endpoints for delivery of end-to-end capacity. The services are originated and terminated in a
DTN. IQ NOS defines the following types of cross-connect endpoints:
■ Tributary-side Endpoints—Client payload specific endpoints that can be any of the payload types
described in Client/Tributary Interfaces.
■ Tributary DTF Path Endpoints—Endpoints which are DTF encapsulated 2.5Gbps or 10Gbps
channels. The tributary-side paths are sourced and terminated in the TAM. (See Digital Transport
Frame (DTF) for more information.)
■ Line DTF Path Endpoints—Endpoints which are DTF encapsulated 2.5Gbps or 10Gbps channels
(see Digital Transport (DTN) for the description of DTF). The line-side paths are sourced and
terminated in a line module. As described in Digital Line Module (DLM) and Switching Line Module
(XLM), each XLM/DLM supports one OCG, which in turn includes ten 10Gbps optical channels.
The ADLM, AXLM, and SLM can be tuned to one of several OCGs (see Amplified Digital Line
Module (ADLM), Amplified Switching Line Module (AXLM), and Submarine Line Module (SLM)).
The ADLM-80, AXLM-80, and SLM-80 can be tuned to one of several optical channels (see Line
Module 80G (LM-80)).
IQ NOS automatically creates the endpoints when connections are configured on the equipment. IQ NOS
supports the following service provisioning modes to meet diverse users’ needs:
■ Manual Cross-connects (DTN)—End-to-end services are built by manually creating each of the
cross-connects that compose the circuit (see Manual Cross-connects (DTN) on page 4-3).
■ GMPLS Signaled Subnetwork Connections (SNCs)—End-to-end services are created dynamically
by GMPLS; the user specifies only the endpoints (see GMPLS Signaled Subnetwork Connections
(SNCs) on page 4-10).
IQ NOS supports pre-provisioning of circuits, enabling users to set up both manual cross-connects and
SNCs in the absence of line modules, TEMs, and TAMs. Pre-provisioning of data plane connections
keeps the resources in a pending state until the line module, TEM, and/or TAM is inserted. IQ NOS
internally tracks resource utilization to ensure that resources are not overbooked. The pre-provisioning of
circuits requires that the supporting circuit packs first be pre-configured.
IQ NOS has specialized functionality to provide the following service provisioning capabilities:
■ 1GFC and 1GbE Service Provisioning on page 4-14
■ 40Gbps and 40GbE Service Provisioning on page 4-18
■ 100GbE Service Provisioning on page 4-20
■ OTN Adaptation Services on page 4-21
■ ODUk Transport on page 4-23


■ Multi-point Configuration on page 4-23


■ Optical Express on page 4-27
■ Bridge and Roll on page 4-32

Manual Cross-connects (DTN)


IQ NOS supports a manual cross-connect provisioning mode where the cross-connects are manually
configured in each DTN along the circuit’s route. This mode provides users full control over all circuit
resources, including network elements, cards, and channel and sub-channel endpoints. Manual cross-
connects can be assigned a circuit ID to correlate multiple cross-connects in multiple DTNs forming an
end-to-end circuit.
Manual cross-connects utilize the bandwidth grooming capabilities described in Bandwidth Grooming.
During manual cross-connect provisioning, users specify the source and destination points of the circuit,
as well as the specific bandwidth grooming configuration within the circuit. Users also have the option to
specify if more than two switching line modules are to be used in the cross-connect, and to designate a
specific intermediate line module or TEM to use in a multi-hop cross-connect.

Note: Note that electrical TOMs are uni-directional TOMs that either receive a signal or transmit a
signal, and therefore must be configured correctly for either add cross-connects (for TOM-1.485HD-
RX and TOM-1.4835HD-RX), or for drop cross-connects (TOM-1.485HD-TX and TOM-1.4835HD-
TX). The DTN does not block incorrect cross-connect provisioning, such as incorrectly provisioning
an add cross-connect on a transmit TOM, but traffic will not come up on incorrect cross-connect
provisioning on electrical TOMs. When bidirectional (add/drop) cross-connects are provisioned on the
uni-directional electrical TOMs, traffic will come up, but this is not a recommended configuration.

Note: SNCs and cross-connects at the OC-12/STM-4 rate and at the OC-3/STM-1 rate are not
supported between an endpoint on a TAM-8-2.5GM and an endpoint on a TAM-4-2.5G.

The following sections describe the types of manual cross-connects supported by the DTN:
■ Add/Drop Cross-connect on page 4-3
■ Add Cross-connect on page 4-5
■ Drop Cross-connect on page 4-6
■ Express Cross-connect on page 4-7
■ Hairpin Cross-connect on page 4-9

Add/Drop Cross-connect
The add/drop cross-connect is a bidirectional cross-connect that associates the tributary-side endpoint to
the line-side endpoint by establishing connectivity between a TOM tributary port (residing within a line
module or TEM) to a line-side optical channel within a line module. Any tributary port can be connected to
any line-side optical channel, subject to the bandwidth grooming rules.


The add/drop type of cross-connect is used to add/drop traffic at a Digital Add/Drop site (see Digital
Terminal Configuration in #unique_60/unique_60_Connect_42_dtn_and_dtnx_sdg) and to drop traffic at a
site as part of the Multi-point Configuration feature (see Multi-point Configuration on page 4-23).
See Figure 4-1: No-hop Add/Drop Cross-connects on page 4-4 and Figure 4-2: Multi-hop Add/Drop
Cross-connect between ADLMs/DLMs and TEM on page 4-5 for examples of no-hop and multi-hop
add/drop cross-connects.

Figure 4-1 No-hop Add/Drop Cross-connects


Figure 4-2 Multi-hop Add/Drop Cross-connect between ADLMs/DLMs and TEM

Add Cross-connect
An add cross-connect is a unidirectional cross-connect that associates the tributary-side endpoint to the
line-side endpoint by establishing connectivity between a TOM tributary port (residing within a line module
or TEM) to a line-side optical channel within a line module. Any tributary port can be connected to any
line-side optical channel, subject to the bandwidth grooming rules.
The add type of cross-connect is used to add traffic at a Digital Add/Drop site (see Digital Terminal
Configuration in #unique_60/unique_60_Connect_42_dtn_and_dtnx_sdg).
Figure 4-3: Multi-hop Add Cross-connect between ADLMs/DLMs and TEM on page 4-6 shows an
example add cross-connect.


Figure 4-3 Multi-hop Add Cross-connect between ADLMs/DLMs and TEM

Drop Cross-connect
A drop cross-connect is a unidirectional cross-connect that associates the line-side endpoint to the tributary-side endpoint by establishing connectivity between a line-side optical channel within a line module and
a TOM tributary port (residing within a line module or TEM). Any line-side optical channel can be
connected to any tributary port, subject to the bandwidth grooming rules.
The drop type of cross-connect is used to drop traffic at a Digital Add/Drop site (see Digital Terminal
Configuration in #unique_60/unique_60_Connect_42_dtn_and_dtnx_sdg), and can be used to drop traffic
at a site as part of the Multi-point Configuration feature (see Multi-point Configuration on page 4-23).
Figure 4-4: No-hop Drop Cross-connect on page 4-7 shows an example drop cross-connect.


Figure 4-4 No-hop Drop Cross-connect

Express Cross-connect
An express cross-connect is a unidirectional or bidirectional cross-connect that associates one line-side
DTF endpoint to another line-side DTF endpoint by establishing connectivity between the optical channels
of two different OCGs (line modules) within a DTN. An express cross-connect can be established
between line modules using any of the supported grooming configurations.
The express cross-connect type is transparent to the payload type encapsulated in the DTF. A typical
application for this cross-connect is to establish a data path through a Digital Repeater site (see Digital
Repeater Configuration in #unique_60/unique_60_Connect_42_dtn_and_dtnx_sdg).
Figure 4-5: Single-hop Express Cross-connect on page 4-8 and Figure 4-6: Multi-hop Express Cross-
connect on page 4-8 show example express cross-connects.


Figure 4-5 Single-hop Express Cross-connect

Alternatively, a multi-hop express cross-connect can be established between three switching-capable line
modules and TEMs residing within a chassis, again using the supported grooming configurations
described in Bandwidth Grooming. See Figure 4-6: Multi-hop Express Cross-connect on page 4-8 for
an example of a multi-hop cross-connect.

Figure 4-6 Multi-hop Express Cross-connect


Hairpin Cross-connect
A hairpin cross-connect is a unidirectional or bidirectional cross-connect that is used to cross-connect two
tributary ports within a single DTN chassis. Hairpin circuits are supported in the following configurations:
■ Between two tributary ports within a given switching-capable line module (line module or TEM). The
two tributary ports may reside on the same or different TAMs (see Figure 4-7: No-hop Hairpin
Cross-connects on page 4-9).
■ Between a tributary port on one line module or TEM and a tributary port on another line module or
TEM, utilizing the bandwidth grooming capabilities (see Figure 4-8: Single-hop Hairpin Cross-
connect on page 4-10).
Hairpin cross-connects do not use the line-side optical channel resource. The hairpin cross-connects are
used in Metro applications for connecting two buildings within a short reach without laying new fibers.

Figure 4-7 No-hop Hairpin Cross-connects


Figure 4-8 Single-hop Hairpin Cross-connect

GMPLS Signaled Subnetwork Connections (SNCs)


IQ NOS supports GMPLS signaled Subnetwork Connection (SNC) provisioning, where an end-to-end
transport service is automatically provisioned utilizing IQ NOS GMPLS control protocol as described in IQ
NOS GMPLS Control Plane Overview on page 8-1. In this mode, users identify the source and
destination endpoints and IQ NOS GMPLS control protocol computes the circuit route through the
Intelligent Transport Network and establishes the circuit, referred to as an SNC, by automatically
configuring the cross-connects in each node along the path. The cross-connects automatically configured
by the GMPLS protocol are called signaled cross-connects. An inventory of signaled cross-connects are
retrievable through the management applications.
The IQ NOS GMPLS control protocol features:
■ Error-free, automatic end-to-end SNC provisioning resulting in automatic service turn-up.
■ Automatic creation of a Channelized SNC (a special 2.5Gbps SNC that can hold up to a maximum
of two 1GbE sub-SNCs) on creation of a sub-SNC.
■ An automatic retry mechanism that allows SNC setup to be tried periodically without manual
intervention.
■ SNC monitoring and alarm reporting if a circuit experiences problems in the Intelligent Transport
Network.
■ Automatic re-establishment of an SNC after network problems are corrected (note that SNCs are
not automatically released on detecting network problems; the SNC must be released by the user
at the source node where the SNC was originated).


■ User configured circuit identifiers for easy correlation of alarms and performance monitoring
information on the end-to-end circuit, aiding in service level monitoring. SNC circuit IDs are editable
after SNC creation.
■ Out-of-band GMPLS for circuit provisioning, for OTS over third party networks in cases where in-
band OSC is unavailable (e.g., submarine applications). See Out-of-band GMPLS on page 8-11
for more information.
■ Out-of-band GMPLS for Layer 1 OPN applications, which provides GMPLS control plane
connectivity between customer edge devices via out-of-band communication by the use of Generic
Routing Encapsulation (GRE) tunnels configured using any one of the DTN interfaces (e.g., DCN,
AUX, CRAFT) (see Layer 1 Optical Private Network (OPN) in #unique_60/
unique_60_Connect_42_dtn_and_dtnx_sdg).
■ Provisioning of 40Gbps services (see 40Gbps and 40GbE Service Provisioning on page 4-18).
■ Adaptation of OTN services to/from OC-48, OC-192, STM-16, STM-64, or 10GbE LAN signals for
transport through the Infinera network (see OTN Adaptation Services on page 4-21).
■ Bridge and Roll functionality that allows a sub-50ms switchover from one SNC to a new SNC (see
Bridge and Roll on page 4-32).
■ Circuit tracking, by storing and making the hop-by-hop circuit route and the source endpoint of the
SNC available to the management.
■ Automatic restoration will be initiated for an SNC if any one of the endpoints (source or destination)
detects a traffic-affecting fault. For further information about this optional feature, refer to Dynamic
GMPLS Circuit Restoration on page 4-140.
■ Automatic reversion to the working path for restorable SNCs that have been configured for
automatic reversion. For further information about this optional feature, refer to Dynamic GMPLS
Circuit Restoration on page 4-140.

Note: To use any feature related to SNC, ensure that the nodes connected in a network
element are upgraded to the same release.

Note: An SNC that has been re-routed through an intermediate node due to a restoration event
modifies the node’s database. Restoring a database snapshot taken prior to the restoration event will
delete any new SNC connections and result in traffic loss.

Note: SNC provisioning does not support uni-directional circuits or multi-point configuration; these
services must be provisioned via manual cross-connects. Unidirectional multi-point cross-connect
legs can be created on the line endpoints of tributary-to-tributary SNCs or of line-side terminating
SNCs, but the multi-point configuration legs are themselves cross-connects and not SNCs. See Multi-
point Configuration on page 4-23.
■ Unidirectional multi-point configuration connections for broadcast services such as video.
Unidirectional multi-point cross-connect legs can be created on the endpoints of an SNC. The
multi-point legs can be created on the line endpoints of tributary-to-tributary SNCs or of line-
side terminating SNCs.


Note: Dynamic GMPLS SNC Restoration is primarily designed to provide traffic restoration utilizing
available alternate route bandwidth in the event of a fiber cut or module failure/removal. Performing a
BMM reseat or cold reset will trigger the restoration process. Due to the additional BMM boot time
requirements associated with these actions, local node SNC restoration may be delayed until the boot
process is completed.

Note: Bandwidth is reserved for an SNC only once the SNC is created, and not at the time of route
computation for the SNC. This means that if several SNCs are computed and created in rapid
succession, the bandwidth will appear to be available during path computation, but may be reserved
by the time the system attempts to create some of the connections. This may result in an SNC set-up
failure until the system computes another route for the SNCs using available bandwidth.

Note: SNCs and cross-connects at the OC-12/STM-4 rate and at the OC-3/STM-1 rate are not
supported between an endpoint on a TAM-8-2.5GM and an endpoint on a TAM-4-2.5G.

Refer to IQ NOS GMPLS Control Plane Overview on page 8-1 for a detailed description of the GMPLS
functions.

SNCs over Layer 1 OPN


Provisioning of SNCs over Layer 1 OPN is supported via the out-of-band GMPLS described in Layer 1
Optical Private Network (OPN). The TAM-2-10GT supports the following rates of SNCs over Layer 1
OPN:
■ 2.5Gbps SNCs
■ 10Gbps SNCs
■ 40Gbps SNCs (this requires four separate tributary ports on the TAM-2-10GTs used to create the
L1 OPN.)

Note: Only nodes running Release 6.0 or higher can originate a 2.5Gbps SNC on a TAM-2-10GT
endpoint. A pre-Release 6.0 node cannot originate a 2.5Gbps SNC on a TAM-2-10GT endpoint
(although a pre-Release 6.0 nodes can terminate such an SNC, either on a TAM-2-10GT endpoint or
an endpoint on another TAM type).

1 Port D-SNCP can be configured for endpoints on the TAM-2-10GT for 10Gbps SNCs. (If the user wants
to protect SNCs across Layer 1 OPN, services should be configured as 10Gbps SNCs, as opposed to
2.5Gbps SNCs.) See 1 Port D-SNCP on page 4-126 for more information on 1 Port D-SNCP protection.

Line-side Terminating SNCs


SNCs can be configured with line-side endpoints, as opposed to using only tributary-side endpoints. The
line-side terminating SNC capability allows SNCs to be built and maintained across GMPLS domain
boundaries.


Figure 4-9 Line-side Terminating SNCs Connected Across Domain Boundaries

Line-side terminating SNCs enable the user to create a circuit that spans across GMPLS signaling
domains, which is very useful for networks that contain more nodes than are allowed in a single GMPLS
signaling domain. These large networks are often divided into several smaller domains by terminating the
OSC links at the border of the smaller domains. Line-side terminating SNCs allow a tributary-to-tributary
SNC to be realized as a concatenation of multiple disjoint SNCs.
Line-side terminating SNCs are supported on both DTN and DTN-X nodes:
■ On an XTC, the line-side endpoints are line-side ODU endpoints
■ On a DTC/MTC, the line-side endpoints are line-side DTP CTPs
Line-side terminating SNCs cannot be configured as restorable SNCs, but they can participate in Digital
Subnetwork Connection Protection (D-SNCP). For termination points on the DTC, MTC, and XTC:
■ The tributary-side endpoint can be part of 1 Port D-SNCP and 2 Port D-SNCP
■ The line-side endpoint can be part of 1 Port D-SNCP
See 1 Port D-SNCP on page 4-126 and 2 Port D-SNCP on page 4-123 for more information.

Note: Line module OCGs are by default enabled for line-side terminating SNCs. If a line module is
disabled for line-side terminating SNCs and is re-configured to be enabled for line-side terminating
SNCs, any existing TE links with neighboring nodes are maintained.

For tributary-to-line connections originating on a 1GbE client of a TAM-8-1G, the sub-SNC will be created
on the local side only. The remote Channelized SNC will be created on the remote node, but the remote
sub-SNC will not be created on the remote node. So to create a 1GbE circuit across three domains, the
user needs to create a tributary-to-line Channelized SNC in Domain 1, a line-to-line SNC in Domain 2 and
another tributary-to-line Channelized SNC in Domain 3. In addition, the user needs to create tributary-to-
line sub-SNC in Domain 1 and Domain 3 to realize end-to-end traffic.
See 1GFC and 1GbE Service Provisioning on page 4-14 for more information on Channelized SNCs
and sub-SNCs.


Figure 4-10 1 Port D-SNCP with Line-side Terminating SNCs

Figure 4-11 2 Port D-SNCP with Line-side Terminating SNCs

1GFC and 1GbE Service Provisioning


The DTN supports two types of 1Gbps signals: 1Gigabit Ethernet (1GbE; supported by both the
TAM-8-1G and the TAM-8-2.5GM) and 1Gbps Fibre Channel (1GFC; supported only by the
TAM-8-2.5GM). Because the Intelligent Transport Network transports signals in 2.5Gbps or 10Gbps
granularity, these 1Gbps signals must be mapped into a 2.5Gbps digital path that is routed through the
network via a Channelized cross-connect or via a Channelized SNC.

Note: For 1G Fibre Channel over Clear Channel services (i.e., 1GFC-CC or 1.0625GCC in TL1), each
1GFC-CC service is mapped to a single 2.5Gbps digital path, therefore there is no Channelized
cross-connect nor SNC required for 1 GFC-CC services.

A Channelized cross-connect is a special type of Add/Drop, Add, Drop, or Hairpin cross-connect that is
used to transport 1Gbps signals to and from the client interfaces on the TAM-8-1G and TAM-8-2.5GM. A
Channelized cross-connect represents connectivity from the Tributary DTPCTP to the Line DTPCTP (in the case of the Add/Drop type of traffic) and from the Tributary DTPCTP to the Tributary DTPCTP (in the case of the Hairpin traffic type). The tributary-side payload is set to ‘Channelized_2x1Gbe’.

Note: When provisioning a 1GbE circuit or cross-connect, ensure that the following parameters on the
customer equipment (router or switch connected to the TAM-8-1G or TAM-8-2.5GM) are set as
follows:
■ Auto-negotiation set to “on”,
or
■ Auto-negotiation set to “off”, static configuration of the customer equipment Ethernet
port capabilities set to “full-duplex,” and the data rate set to “1GbE.”

The way 1Gbps signal types are mapped from the tributary port to the DTP is dependent on the TAM
type:
■ The TAM-8-1G has four tributary port pairs: ports 1a and 1b, ports 2a and 2b, ports 3a and 3b, and
ports 4a and 4b. The two 1GbE signals in a port pair are mapped together into a single 2.5Gbps
digital path and associated with a prescribed virtual channel on the digital transport path (DTP), as
shown in Figure 4-12: Tributary Port to DTP Mapping on the TAM-8-1G on page 4-16. This is
considered fixed mapping between the tributary port and the DTPCTP.
■ The TAM-8-2.5GM has eight ports, numbered 1-8. As with the TAM-8-1G, two 1Gbps services
must be mapped together into a 2.5Gbps Channelized cross-connect or Channelized SNC.
However, when creating 1GbE and 1GFC services on the TAM-8-2.5GM, the DTN allows for
flexible mapping of tributary port to DTPCTP, as shown in Figure 4-13: Flexible Mapping of
Tributary Port to DTP on the TAM-8-2.5GM on page 4-17. So when creating the SNC or cross-
connect, the user is able to specify the virtual channel in the DTPCTP to which the service should
be mapped. (If no virtual channel is specified, the TAM-8-2.5GM follows the default mapping, which
is the same as the mapping in the TAM-8-1G.)


Figure 4-12 Tributary Port to DTP Mapping on the TAM-8-1G


Figure 4-13 Flexible Mapping of Tributary Port to DTP on the TAM-8-2.5GM

As shown in Figure 4-13: Flexible Mapping of Tributary Port to DTP on the TAM-8-2.5GM on page 4-17,
each DTPCTP has two virtual channels, and each of these virtual channels can carry a different 1G
service: one virtual channel can carry a 1GFC service while the other virtual channel carries a 1GbE service. Note the following constraints for flexible mapping on the TAM-8-2.5GM (a validation sketch follows this list):
■ Tributary ports 1-4 on the TAM-8-2.5GM must be mapped to a virtual channel on DTPCTPs 1-4.
■ Tributary ports 5-8 on the TAM-8-2.5GM must be mapped to a virtual channel on DTPCTPs 5-8.
■ If a DTPCTP has a virtual channel associated with a 1G service, the tributary that faces that
DTPCTP cannot support a 2.5Gbps service (since the DTPCTP is already supporting a 1G service,
only another 1GbE or 1GFC service can be added).
■ 1GFC SNCs can originate and terminate only on the TAM-8-2.5GM on nodes running Release 6.0
or higher.


■ A single 1GbE SNC can be provisioned with one endpoint on a TAM-8-2.5GM and the other
endpoint on a TAM-8-1G. However, an 1GbE SNC originating on a pre-Release 6.0 node can be
terminated only on a TAM-8-1G (it cannot be terminated on an endpoint on a TAM-8-2.5GM on a
node running Release 6.0 or higher).
■ For 1G services on the TAM-8-2.5GM, 1 Port D-SNCP can be configured on the facing DTP of a
port which is already part of 2 Port D-SNCP (this interaction is not allowed on any other TAM type).
In this case, a 1G service cannot go through the facing DTP; it must go through a DTP which is not
configured with 1 Port D-SNCP.
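The port-to-DTPCTP constraints above can be checked with a sketch like the following; the function is hypothetical and models only the first two constraints in the list:

# Illustrative check of the TAM-8-2.5GM flexible mapping constraints
# (tributary ports 1-4 map to DTPCTPs 1-4, ports 5-8 to DTPCTPs 5-8).
# Hypothetical helper; not IQ NOS code.

def mapping_allowed(trib_port, dtpctp):
    """Return True if a 1G service on trib_port may map to a virtual channel on dtpctp."""
    if trib_port in (1, 2, 3, 4):
        return dtpctp in (1, 2, 3, 4)
    if trib_port in (5, 6, 7, 8):
        return dtpctp in (5, 6, 7, 8)
    return False

print(mapping_allowed(2, 3))   # True
print(mapping_allowed(2, 6))   # False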

40Gbps and 40GbE Service Provisioning


In order to transport 40G services through the 10Gbps Infinera network, the TAM creates a VCG and
uses inverse multiplexing to divide incoming 40G signals into four separate 10G or 10.3G sub-client
signals that are transported through the network:
■ For TAM-1-40G, the VCG is a 40Gbps (OC-768/STM-256) trail termination point that is composed
of four separate 10Gbps (OC-192/STM-64) sub-client signals.
■ For TAM-1-40GE and TAM-1-40GR, the VCG is a 40GbE trail termination point that is composed of
four separate 10.3G Clear Channel sub-client signals.
The virtual concatenation group (VCG) and the sub-clients are created automatically when a 40G TAM is
installed or pre-configured.
The group termination point (GTP) is created as part of provisioning a 40G service (the user must specify
DTPCTPs that are logically grouped together into a GTP that serves as an endpoint in a cross-connect or
an SNC). The GTP is the termination point used to identify the 40Gbps/40GbE services for creating
cross-connects, SNCs, and protection groups (1 Port and 2 Port Digital SNCP).

For DTC/MTC endpoints, the GTP name for GTP-dependent cross-connects and SNCs is required to be the AID of the first constituent member of the GTP. For example, if a GTP has DTPs 1-A-3-L1-1, 1-A-3-L1-2, 1-A-3-L1-3, and 1-A-3-L1-4, then the GTP AID will be 1-A-3-L1-1. (In Release 6.0, the GTP was specified by the user upon creation of an OC-768 or STM-256 cross-connect, and there was no requirement for the GTP name to be the DTP AID.)
Note: If a pre-Release 7.0 system is upgraded to Release 8.1 or higher, the GTP AID value will be automatically updated to the AID of the first constituent member of the GTP for all pre-existing, GTP-dependent SNCs, cross-connects, and D-SNCPs (1 Port or 2 Port).
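As a tiny illustration of this naming rule, using the example AIDs from the text (the helper function itself is hypothetical):

# Illustrative derivation of a GTP AID from its constituent DTP AIDs.
# Hypothetical helper; the rule is that the GTP AID is the first member's AID.

def gtp_aid(member_dtp_aids):
    return member_dtp_aids[0]

members = ["1-A-3-L1-1", "1-A-3-L1-2", "1-A-3-L1-3", "1-A-3-L1-4"]
print(gtp_aid(members))   # 1-A-3-L1-1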

Figure 4-14: VCGs and GTPs for 40G Services on page 4-19 shows the relationship between sub-
clients, VCG, GTP, and DTPCTPs.


Figure 4-14 VCGs and GTPs for 40G Services

Please note the following guidelines and features of Infinera 40G service provisioning:
■ 40G SNC provisioning is supported for tributary-to-tributary SNCs. The DTN does not support
tributary-to-line and line-to-tributary 40G SNCs.

Note: For 40GbE SNCs, all nodes in the route must be running Release 7.0 software or above.

■ 40G SNCs are supported over Layer 1 Optical Private Network. This configuration requires four
separate 10G tributary ports on the TAM-2-10GTs used to create the L1 OPN, and each of these
tributary ports must be configured as Layer 1 OPN TE endpoints. Also, the provider network must
have 40G of bandwidth available to support the SNC.
■ For 40G SNCs, restoration and reversion are supported, as are route diversity and preferred
restoration routes.
■ 40G services can be transported through the network on intermediate nodes via 40G express
cross-connects, meaning that the intermediate nodes do not require a 40G TAM for signal
regeneration in order to transport 40G services.

When provisioning OC-768 or STM-256 tributary-to-tributary connections through an intermediate node (with back-to-back 10G TOMs installed), ensure that you provision four individual 10G Clear Channel cross-connects, as there is no option to create a single 4x10G Clear Channel cross-connect.
Also note that for this type of configuration, the 4xOC192 payload type is not valid.
Note: When provisioning 40GbE tributary-to-tributary connections through an intermediate node with
back-to-back 10G TOMs, ensure that you provision four individual 10.3G Clear Channel cross-
connects. In addition, the 10G TAM types at either end of the connection must be the same. In other
words, if one end of the tributary-to-tributary connection uses TAM-2-10GMs, both of the TAMs on
that end of the connection must be TAM-2-10GMs. The other end of the connection can use a
different TAM type, such as TAM-2-10GR, but again, both TAMs on that end of the connection must
match, so they must both be TAM-2-10GRs. If the TAMs at each end of the connection are not all of
the same TAM type, an LOA alarm may be reported.

■ All of the channels carrying the 40G signal must be 10G channels, or channels that use the same
modulation format, either Binary Phase Shift Keying (BPSK) modulation or Quadrature Phase Shift
Keying (QPSK). If the 40G signal is transported over channels with a mix of 10G, BPSK, and
QPSK, a Loss of Alignment (LOA) alarm is raised and service is affected.
■ Multi-point configuration is supported for 40G services. See Multi-point Configuration on page 4-
23 for more information on this feature.
■ For manual cross-connect provisioning, 40G services can be routed through the network using
diverse OCGs, meaning that each of the 10Gbps or 10.3GCC channels can be routed through the
network using different OCGs. (In previous releases, each of the four sub-clients were required to
be routed in the same OCG through each of the nodes along the route.) This feature is not
supported for SNC provisioning. For SNCs, each of the four sub-clients must still be routed through
the same OCGs through each node along the route.
■ 1 Port D-SNCP and 2 Port D-SNCP protection are supported only for 40G services that are routed
on the same OCGs (and not on 40G services that are provisioned with diverse OCGs). See Digital
Subnetwork Connection Protection (D-SNCP) on page 4-122 for more information on this feature.
■ Fault management and troubleshooting tools that are available for 2.5Gbps and 10Gbps services
are also available for 40G services.
■ For performance monitoring support:
□ 40Gbps services support the PM data supported for 2.5Gbps and 10Gbps services.
□ 40GbE services support the PM data supported for 10GbE, with the exception of MAC layer
PM parameters, which are not supported by the TAM-1-40GE.

100GbE Service Provisioning


The TAM-1-100GE and TAM-1-100GR support 100GbE manual cross-connects. 100GbE service
provisioning parallels 40G service provisioning, except that 100GbE services require ten channels at the
10.3GCC rate.
Please note the following guidelines and features of 100GbE service provisioning on the DTN:
■ The DTN supports 100GbE manual cross-connects. 100GbE SNCs are not supported for endpoints
on the DTN.
■ 100GbE services can be transported through the network on intermediate nodes via 100G GTP-
based express cross-connects (or 10x10G non-GTP cross-connects), meaning that the
intermediate nodes do not require a 100G TAM for signal regeneration in order to transport 100G
services.

Note: When provisioning 100GbE tributary-to-tributary connections through an intermediate node with
back-to-back 10G TOMs, ensure that you provision ten individual 10.3G Clear Channel cross-
connects. In addition, the 10G TAM types at either end of the connection must be the same. In other
words, if one end of the tributary-to-tributary connection uses TAM-2-10GMs, all five of the TAMs on
that end of the connection must be TAM-2-10GMs. The other end of the connection can use a
different TAM type, such as TAM-2-10GR, but again, all five TAMs on that end of the connection must
match, so they must all be TAM-2-10GRs. If all of the TAMs at each end of the connection are not of
the same TAM type, a loss of alignment (LOA) alarm may be reported.

■ All of the channels carrying the 100G signal must be 10G channels, or channels that use the same
modulation format, either Binary Phase Shift Keying (BPSK) modulation or Quadrature Phase Shift
Keying (QPSK). If the 100G signal is transported over channels with a mix of 10G, BPSK, and
QPSK, a Loss of Alignment (LOA) alarm is raised and service is affected.
■ 100GbE cross-connects can be routed through the network using diverse OCGs, meaning that
each of the 10G channels can be routed through the network using different OCGs (provided that
the OCGs have the same number of hops and the skew is limited to 6μs).
■ 1 Port D-SNCP and 2 Port D-SNCP protection are supported for 100GbE services. See Digital
Subnetwork Connection Protection (D-SNCP) on page 4-122 for more information on this feature.
■ Multi-point configuration is supported for 100GbE services. See Multi-point Configuration on page
4-23 for more information on this feature.
■ Fault management and troubleshooting tools that are available for 2.5Gbps and 10Gbps services
are also available for 100GbE services.
■ 100GbE services support the PM data supported for 10GbE, with the exception of MAC layer PM
parameters for 100GbE services, which are supported on the TAM-1-100GR but not on the
TAM-1-100GE. The TAM-1-100GE does support Physical Coding Sublayer (PCS) PMs.

OTN Adaptation Services


In addition to supporting the transport of standard based OTN (OTU1, OTU1e, OTU2, OTU2e) signals
over the DTN network between two identical OTUk client interfaces, the TAM-2-10GM, DICM-T-2-10GM,
and TAM-8-2.5GM also support the adaptation of SONET, SDH, or Ethernet signals to OTN services for
transport through the Infinera network. These services can be preserved as OC-n/STM-n/Ethernet
signals, or they can be mapped back to OTN services. The user specifies the adaptation details as part of
the SNC or cross-connect provisioning.
Figure 4-15: Standard Transport of OTN Services on page 4-22 shows standard transport of OTN
services across the network.


Figure 4-15 Standard Transport of OTN Services

Adaptation mode is shown in Figure 4-16: Adaptation of OTN Services across the Infinera Network on
page 4-22. The OTU2 client on one end is being adapted with the STM-64 client at the other end.
Similarly, the OTU2e client on one end is being adapted with the 10GbE client at the other end.

Figure 4-16 Adaptation of OTN Services across the Infinera Network

The TAM-2-10GM and DICM-T-2-10GM support OTN adaptation between the following interfaces:
■ OC-192/10GbE WAN PHY to/from OTU2
■ STM-64 to/from OTU2
■ 10GbE LAN to/from OTU2e
■ 10GbE LAN to/from OTU1e
The TAM-8-2.5GM supports OTN adaptation between the following interfaces:
■ OC-48 to/from OTU1
■ STM-16 to/from OTU1
When transporting across the network between two identical OTUk client interfaces, the DTN supports
the fault and performance monitoring of all the OTN layers, including the encapsulated/adapted client.


If adaptation is enabled, the DTN provides only intrusive monitoring of OTUk, ODUk Path, and TCM
overheads at the edges of the network.
In addition, OTN adaptation services support an option to configure the disable action for encapsulated
Ethernet client interfaces upon egress from the DTN network in case of a signal fail condition from the
network side of the client signal (see Encapsulated Client Disable Action on Egress (DTN) on page 3-48).

ODUk Transport
The DTN supports ODUk transport service for OTUk client/tributary interfaces, in which the OTUk service
is terminated and the contained ODUk is maintained intact and transported across the Infinera network.
The TAM-8-2.5GM supports transport for the following ODUk services, which are mapped to a 2.5G DTP:
■ ODU1
The TAM-2-10GM and DICM-T-2-10GM support transport for the following ODUk services, which are
mapped to a 10G DTP:
■ ODU2
■ ODU2e
■ ODU1e
ODUk transport services support an option to configure the disable action for encapsulated Ethernet
client interfaces upon egress from the DTN network if a digital wrapper (DTP) defect is detected (e.g.,
AIS) from the network side of the client signal (see Encapsulated Client Disable Action on Egress (DTN)
on page 3-48).

Multi-point Configuration
Multi-point Configuration is the feature that digitally broadcasts a single service (e.g., 1GbE, 2.5Gbps,
40Gbps, 100GbE, etc.) from a single node in the Intelligent Transport Network and drops the signal at
several distribution points in the network (see Figure 4-17: Multi-point Configuration on page 4-24).
Multi-point Configuration enables bridging of an incoming service from any port (OCG or client) to up to
16 other ports (OCG or client). The service can be bridged/duplicated to a port on the same line module
or TEM as the incoming service, or on any other line module or TEM. Multi-point Configuration also
allows an optional service return path from one of the legs of the digital bridge.
Multi-point Configuration is implemented by adding a unidirectional manual cross-connect from an
existing manual cross-connect or SNC within the Intelligent Transport Network to a new broadcast leg.
The existing cross-connect or SNC can be either unidirectional or bi-directional, and protected by 2 Port
or 1 Port Digital SNCP, or unprotected. Multi-point Configuration within the Intelligent Transport Network
can be protected by 2 Port or 1 Port Digital SNCP (see Digital Subnetwork Connection Protection (D-
SNCP) on page 4-122).
Multi-point legs can be created on the line endpoints of an SNC. The multi-point leg on an SNC must be a
unidirectional, drop cross-connect. A multi-point leg can use as its source the line endpoints of tributary-
to-tributary SNCs or of line-side terminating SNCs (see Line-side Terminating SNCs on page 4-12).
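As a rough illustration of the provisioning model described above (up to 16 unidirectional broadcast legs plus an optional return path built from an existing cross-connect or SNC), the following Python sketch models a digital bridge. The class and attribute names are illustrative assumptions, not IQ NOS objects.

# Minimal sketch of the Multi-point Configuration model described above.
# Class and attribute names are illustrative assumptions, not IQ NOS objects.
class MultipointBridge:
    MAX_LEGS = 16  # an incoming service can be bridged to up to 16 other ports

    def __init__(self, source_xcon_or_snc: str):
        # The bridge is built from an existing manual cross-connect or SNC.
        self.source = source_xcon_or_snc
        self.legs = []            # unidirectional broadcast legs (drop ports)
        self.return_leg = None    # at most one optional service return path

    def add_leg(self, drop_port: str) -> None:
        if len(self.legs) >= self.MAX_LEGS:
            raise ValueError("Multi-point Configuration supports at most 16 legs")
        # Each leg is a unidirectional (drop) cross-connect added to the
        # existing cross-connect or SNC.
        self.legs.append(drop_port)

    def set_return_path(self, leg_port: str) -> None:
        if leg_port not in self.legs:
            raise ValueError("Return path must come from one of the bridge legs")
        self.return_leg = leg_port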


Note: If multi-point legs are added to an SNC and that SNC is deleted, locked, or restored, the cross-
connect on the local/source node will be deleted automatically and an event will be generated. The
remaining cross-connects of the multi-point leg, on intermediate and remote/destination nodes, will be
stale (orphaned) and must be deleted manually.

The primary application for Multi-point Configuration is to broadcast unidirectional traffic to multiple
endpoints in the network. Figure 4-17: Multi-point Configuration on page 4-24 shows a single signal
being distributed and dropped at various points in the network.

Figure 4-17 Multi-point Configuration

For DTN, the duplication of the service is performed on the cross-point switch of the line module or TEM
( Figure 4-18: Implementing Multi-point Configuration in a DTN on page 4-25), allowing the signal to be
sent in multiple directions. For DTN-X, the duplication of the service is performed in the switch fabric
(OXM). Figure 4-19: Implementing Multi-point Configuration in a DTN-X (Hairpin) on page 4-25 and
Figure 4-20: Implementing Multi-point Configuration in a DTN-X (Add/Drop) on page 4-26 show add/drop
and hairpin Multi-point Configurations on a DTN-X (note that hairpin and add/drop configurations can be
implemented for the same signal).


Figure 4-18 Implementing Multi-point Configuration in a DTN

Figure 4-19 Implementing Multi-point Configuration in a DTN-X (Hairpin)


Figure 4-20 Implementing Multi-point Configuration in a DTN-X (Add/Drop)

Digital Test Access


One of the possible applications of Multi-point Configuration is Digital Test Access, wherein a signal (a
bidirectional or unidirectional manual cross-connect) is monitored by creating a unidirectional multi-point
leg drop point and connecting the multi-point leg to a test set or an optical monitor, as shown in Figure
4-21: Multi-point Configuration Leg Used for Digital Test Access on page 4-27. The Digital Test Access
function causes no impact to the service path.


Figure 4-21 Multi-point Configuration Leg Used for Digital Test Access

Optical Express
In addition to the digital add/drop capabilities that are supported via the combination of the line modules
and the BMMs of a node, the DTN and DTN-X can also support direct BMM-to-BMM Optical Express,
wherein a fiber jumper cable is connected from the Optical Carrier Group (OCG) port on one BMM to the
corresponding OCG port of another BMM (the BMMs do not have to reside in the same chassis).

Note: Unless specifically noted otherwise, all references to the BMM will refer to either the BMM,
BMM2, BMM2P, BMM2C, BMM1H, and/or BMM2H interchangeably.

Figure 4-22: Optical Express in an Intelligent Transport Network on page 4-28 shows an example of
Optical Express in an Intelligent Transport Network. Note that Node D is configured for only add/drop of
its OCGs: no Optical Express is configured on Node D. For standard Optical Express configuration in a
ring network, there must be at least one node that add/drops all of its OCGs. For information on support
of Optical Express loops in a ring network, see Optical Express Loops on page 4-30.


Figure 4-22 Optical Express in an Intelligent Transport Network

Before disconnecting an OCG fiber, it is important to lock the BMM OCGs at the Optical Express site.
(And then unlock the BMM OCG once the fiber is reconnected.)
Note: Before disconnecting an OCG fiber between a CMM and a BMM, set the associated CMM OCG
to the locked admin state. (And then unlock the CMM OCG once the fiber is reconnected.)
Optical Express is supported on the following BMMs:
■ BMM1H-4-CX2
■ BMM2-8-CEH3
■ BMM-4-CX1-A
■ BMM2H-4-R3-MS
■ BMM-4-CX2-MS-A
■ BMM2H-4-B3
■ BMM-4-CX3-MS-A
■ BMM2P-8-CH1-MS
■ BMM2-8-CH3-MS
■ BMM2P-8-CEH1
■ BMM2-8-CXH2-MS
■ BMM2C-16-CH


BMM2s, BMM2Ps, and Gen 1 BMMs can optically express any OCG supported by the module. BMM2Cs
support Optical Express only for 500Gbps OCGs from AOLM/AOLM2/AOLX/AOLX2/SOLM/SOLM2/
SOLX/SOLX2. See the DTN and DTN-X System Description Guide for details.
Before configuring Optical Express, take note of the following configuration guidelines:
■ Optical Express is supported on links that are configured with RAMs. Optical Express is also
supported on links that are configured with ORMs and DSEs.
■ Optical Express is supported on all 4 OCG ports of 40-channel BMMs (this also applies to the 4
OCG ports for OCG 5-8 on the BMM2H-4-B3 expansion BMM).
■ Optical Express is supported on all 8 OCG ports of BMM2s (this also applies to the 8 OCG ports for
OCGs 9-16 on the BMM2-8-CEH3 expansion BMM).
■ BMM2s, BMM2Ps, or BMM2Cs can be used at an intermediate node to optically express traffic that
originates/terminates on BMM2s, BMM2Ps, or BMM2Cs. However, note that for 16 channel BMMs
a connection (either add/drop or Optical Express) must be made on a base OCG (OCG 1-8) before
a connection can be made on one of the expansion OCGs (i.e., OCGs 9-16).
■ Optical Express is supported on all 16 OCG ports of BMM2Cs with the following caveats:
□ Pre-provisioning of OCGs or physical OCG fiber connections for Auto-discovery is required
only for OCGs 1 - 8.
□ Optical Express is not supported on OCGs 1 - 8 when any of the corresponding peer ports
(OCGs 9 - 16) are provisioned for add/drop and vice versa due to implementation of OCG
port pairing on the BMM2C (refer to the Line Systems Hardware Description Guide for further
information). Each pair of OCG ports (OCG 1/OCG 9, OCG 2/OCG 10, OCG 3/OCG 11, OCG
4/OCG 12, OCG 5/OCG 16, OCG 6/OCG 13, OCG 7/OCG 14, and OCG 8/OCG 15) can
either be dropped in a BMM2C or expressed by the BMM2C. For example, if one of the
paired ports is expressed (i.e., OCG 1), the peer port (OCG 9) is automatically expressed.
And if one of the paired ports is dropped (i.e., OCG 2), the peer port (OCG 10) can only be
dropped in the same BMM2C and cannot be expressed (this pairing rule is sketched after
this list).
■ The modules in an Optical Express connection both must be BMM2s, both must be BMMs, both
must be BMM2Ps, or both must be BMM2Cs. Optical Express is not supported between a mix of
these BMM types (i.e., Optical Express is not supported between a BMM2 and a BMM, or a BMM2
and a BMM2P, etc.). Optical Express is supported between full-height and half-height BMMs and
between full-height and half-height BMM2s, as long as the OCG number is equivalent on both
modules in the connections.
■ Optical Express connections are supported between BMMs with different amplifier settings (SLTE,
Native, Third Party Amplifier). For example, a BMM2 configured for SLTE can support Optical
Express with a BMM2 that is set to Native Automated mode. Note, however, that Auto-discovery is
not supported for any Optical Express configuration where one or both BMMs is set to SLTE mode
or Third Party Amplifier mode. See the DTN Turn-up and Test Guide for details.
■ For Optical Express connections between BMMs, the BMM OCG can be locked without affecting
traffic. However, for Optical Express connections between BMM2s, between BMM2Ps, and
between BMM2Cs, if the BMM2/BMM2P/BMM2C OCG is locked, Auto-discovery is re-triggered,
thus impacting traffic. Make sure that BMM2/BMM2P/BMM2C OCGs are unlocked for Auto-discovery to
succeed, thereby restoring traffic. (See Optical Data Plane Auto-discovery on page 3-20.)
■ Optical Express termination (via O-E-O conversion) is supported by the following module types only
(but there is no requirement that all optically expressed OCGs within a ring must be terminated at a
single node):
□ All AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, and SOLX2 module types
□ DLM-n-C2 (where n=1 to 8)
□ DLM-n-C3 (where n=1 to 8)
□ XLM-n-C3 (where n=1 to 8)
□ ADLM-T4-n-C4 (where n=1, 3, 5, 7)
□ ADLM-T4-n-C5 (where n=1, 3, 5, 7)
□ SLM-T4-n-C4 (where n=1, 3, 5, 7)
□ SLM-T4-n-C5 (where n=1, 3, 5, 7)
□ AXLM-T4-n-C4 (where n=1, 3, 5, 7)
□ AXLM-T4-n-C5 (where n=1, 3, 5, 7)
□ AXLM-80-T1-C5
□ ADLM-80-T1-C5
□ SLM-80-T1-C5
■ An Optical Express connection requires correct OCG levels between the two BMMs. BMM2s
contain a variable optical attenuator (VOA), but 40-channel BMMs do not contain a VOA and
therefore require a 20dB or 22dB pad for correct optical span engineering. The express OCG
power needs to be within a 3dB capture window (1dB above and 2dB below the target power).
Typical target power for the receive OCG on a 40-channel BMM is -14dBm to -13dBm.
■ Optical Express requires the correct placement of DCM units (as determined by the span design).
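Two of the guidelines above lend themselves to quick programmatic checks: the BMM2C OCG port pairing rule and the express OCG power capture window. The Python sketch below is illustrative only; the pairing map is taken from the list above, and the default target power is an assumed example within the typical -14dBm to -13dBm range.

# Illustrative checks for two Optical Express guidelines described above.
# OCG port pairing on the BMM2C: both ports of a pair must be expressed
# together or dropped together.
BMM2C_OCG_PAIRS = {1: 9, 2: 10, 3: 11, 4: 12, 5: 16, 6: 13, 7: 14, 8: 15}

def peer_ocg(ocg: int) -> int:
    """Return the paired OCG port on a BMM2C."""
    pairs = {**BMM2C_OCG_PAIRS, **{v: k for k, v in BMM2C_OCG_PAIRS.items()}}
    return pairs[ocg]

def pairing_ok(usage: dict) -> bool:
    """usage maps OCG number -> 'express' or 'drop'; each peer must match."""
    return all(usage.get(peer_ocg(ocg), mode) == mode for ocg, mode in usage.items())

# Express OCG power capture window on a 40-channel BMM: the received express
# OCG power must fall within 1dB above and 2dB below the target power.
def power_in_capture_window(received_dbm: float, target_dbm: float = -14.0) -> bool:
    # target_dbm is an assumed example; the document cites a typical target of
    # -14dBm to -13dBm for the receive OCG on a 40-channel BMM.
    return (target_dbm - 2.0) <= received_dbm <= (target_dbm + 1.0)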

Optical Express Loops


The default support for Optical Express in a network with a ring configuration is to require at least one
node in the ring to add/drop all of its OCGs, as in Figure 4-22: Optical Express in an Intelligent Transport
Network on page 4-28. However, Infinera nodes can be configured to support an Optical Express loop, in
which each node in the ring is configured with Optical Express. Figure 4-23: Example Configuration of an
Optical Express Loop in the Network on page 4-31 shows an example ring network that is configured with
an Optical Express loop. In Figure 4-23: Example Configuration of an Optical Express Loop in the
Network on page 4-31, each node is configured for Optical Express of at least one OCG.


Figure 4-23 Example Configuration of an Optical Express Loop in the Network

To configure an Optical Express loop in the network, each BMM in the loop must be enabled for the
Optical Express route loop (OER loop) feature. Unless this feature is enabled on each BMM, an Optical
Express loop is not supported, meaning that at least one node in the ring is required to add/drop all of its
OCGs.
The following BMMs support Optical Express loops:
■ BMM2P-8-CH1-MS
■ BMM2-8-CXH2-MS
■ BMM2-8-CH3-MS
■ BMM2H-4-R3-MS
■ BMM2C-16-CH

Power Control Loop Mode for Optical Express OCGs


By default, the Power Control Loop mode for Optical Express OCGs is set to automatic. In rare cases
where the OCGs are carrying LM-80 channels, there may be a delay in Auto-discovery completion for
Optical Express connections between BMM2s where the effective channel count is less than five (see
Required Number of Effective Channels on page 3-32 for an explanation of effective channels).
In such cases, Auto-discovery may not complete, in which case the Power Control Loop Mode should be
changed to “open.” (This is called FORCEDOPENLOOP in the ED-OCG command of TL1.)
For OCGs in Open Loop mode, the BMM will not report OLOS or OPR-OOR alarms, nor can the BMM
detect OLOS in the case where the fiber between the CMM and the BMM is unplugged.

Note: The Power Control Loop Mode feature is not supported for OCGs that are provisioned for
manual mode (for example, when Optical Express is configured for BMM2s set to SLTE mode).


Before disconnecting an OCG fiber, it is important to lock the BMM OCGs at the Optical Express site.
(And then unlock the BMM OCG once the fiber is reconnected.)
Note: Before disconnecting an OCG fiber between a CMM and a BMM, set the associated CMM OCG
to the locked admin state. (And then unlock the CMM OCG once the fiber is reconnected.)

Bridge and Roll


IQ NOS supports full networking bridge-and-roll functionality to migrate services from a DWDM network to
the Intelligent Transport Network, one node and/or service path at a time. By establishing the circuit
through an Infinera node and then establishing the new network path, the provider can manage a
sub-50ms switchover from the old network path to the new network path. This transition can be done
using SNCs or manual cross-connects.

Note: For switchovers on a TOM-40G-SR4, TOM-100G-L10X, TOM-100G-S10X, or TOM-100G-SR10,
if the tributary disable action is set to Laser Off, protection switch times can exceed 50ms. For
these 100GbE or 40GbE TOMs, it is recommended to set the tributary disable action to Insert Idle
Signal. (See Tributary Disable Action on page 3-41.)

As shown in Figure 4-24: Network Migration with Optical Service Bridge and Roll on page 4-32, an
Infinera node is installed at one end of the existing network and traffic is routed through the client ports. At
the other end of the existing network, another Infinera node is installed and connected to the first node,
thus creating a bridge path through the Intelligent Transport Network. Traffic is then rolled from the
existing DWDM network onto the Intelligent Transport Network.
The Bridge and Roll feature can also be applied to routes in the Intelligent Transport Network in order to
perform maintenance functions on a node or link with minimal traffic disruption.

Figure 4-24 Network Migration with Optical Service Bridge and Roll

Note: In lieu of the Bridge and Roll feature, the Digital Network Administrator (DNA) offers a function
where unprotected SNCs can be converted to protected SNCs (either as part of 1 Port D-SNCP or 2
Port D-SNCP). This feature can be used to add a protect leg to an existing SNC. (DNA also allows
the user to convert protected SNCs to unprotected SNCs.)


DTN-X Service Provisioning


By way of DTC/MTC Expansion chassis, a DTN-X node supports many of the service provisioning
capabilities supported by the DTN (see DTN Service Provisioning on page 4-2). In addition to the
DTC/MTC service provisioning features, the DTN-X supports the following service provisioning
capabilities on the XTC:
■ Manual Cross-connects (DTN-X) on page 4-33
■ GMPLS Signaled Subnetwork Connections (SNCs) on DTN-X on page 4-38
■ DTN-X Virtual Concatenation (VCAT) on page 4-39
■ DTN-X Provisioning Options on page 4-42
■ DTN-X Network Mapping on page 4-52
■ Provisioning ODUflexi Services on page 4-56
■ Provisioning ODUCni Services on page 4-56
■ Provisioning for TIM-16-2.5GM on page 4-59
■ Packet Switching Service Provisioning on page 4-62

Note: Unless specifically noted otherwise, all references to “line module” will refer interchangeably to
either the DLM, XLM, ADLM, AXLM, SLM, AXLM-80, ADLM-80 and/or SLM-80 (DTC/MTC only) and
AOLM, AOLM2, AOLX, AOLX2, SOLM, SOLM2, SOLX, and/or SOLX2 (XTC only). The term “LM-80”
is used to specify the LM-80 sub-set of line modules and refers interchangeably to the AXLM-80,
ADLM-80 and/or SLM-80 (DTC/MTC only). Note that the term “line module” does not refer to TEMs,
as they do not have line-side capabilities and are used for tributary extension.

Manual Cross-connects (DTN-X)


The following sections describe the types of manual cross-connects supported by the DTN-X:
■ Add/Drop Cross-connect on page 4-34
■ Add Cross-connect on page 4-34
■ Drop Cross-connect on page 4-35
■ Express Cross-connect on page 4-36
■ Hairpin Cross-connect on page 4-37

Note: As with cross-connects on the DTC and MTC, the endpoints of an XTC cross-connect both
must be on the same chassis. Unlike the DTC/MTC, the XTC retains tributary-side endpoints of a
cross-connect after the cross-connect is deleted.


Add/Drop Cross-connect
The add/drop cross-connect is a bidirectional cross-connect that associates the tributary-side endpoint to
the line-side endpoint by establishing connectivity between a TOM tributary port (residing within an OTM)
and a line-side optical channel within a line module.
The add/drop type of cross-connect is used to add/drop traffic at a Digital Add/Drop site (see Digital
Terminal Configuration) and to drop traffic at a site as part of the Multi-point Configuration feature (see
Multi-point Configuration on page 4-23).
Figure 4-25: Add/Drop Cross-connects on a DTN-X on page 4-34 shows an example of an add/drop
cross-connect on a DTN-X.

Figure 4-25 Add/Drop Cross-connects on a DTN-X

Add Cross-connect
An add cross-connect is a unidirectional cross-connect that associates the tributary-side endpoint to the
line-side endpoint by establishing connectivity between a TOM tributary port (residing on an OTM) and a
line-side optical channel within a line module. Any tributary port can be connected to any line-side optical
channel.
The add type of cross-connect is used to add traffic at a Digital Add/Drop site (see Digital Terminal
Configuration in the DTN and DTN-X System Description Guide).
Figure 4-26: Add Cross-connect on a DTN-X on page 4-35 shows an example of an add cross-connect
on a DTN-X.


Figure 4-26 Add Cross-connect on a DTN-X

Drop Cross-connect
A drop cross-connect is a unidirectional cross-connect that associates the line-side endpoint to the
tributary-side endpoint by establishing connectivity between a line-side optical channel within a line
module and a TOM tributary port (residing within an OTM).
The drop type of cross-connect is used to drop traffic at a Digital Add/Drop site (see Digital Terminal
Configuration in the DTN and DTN-X System Description Guide), and can be used to drop traffic
at a site as part of the Multi-point Configuration feature (see Multi-point Configuration on page 4-23).
Figure 4-27: Drop Cross-connect on an XTC on page 4-36 shows an example drop cross-connect.


Figure 4-27 Drop Cross-connect on an XTC

Express Cross-connect
An express cross-connect is a unidirectional or bidirectional cross-connect that associates one line-side
endpoint to another line-side endpoint by establishing connectivity between the optical channels of two
different OCGs (line modules) within a DTN-X.
The express cross-connect type is transparent to the payload type encapsulated in the OTN wrapper. A
typical application for this cross-connect is traffic switching and grooming at OTN switching sites.
Figure 4-28: Express Cross-connect on an XTC on page 4-37 shows an example express cross-
connect.


Figure 4-28 Express Cross-connect on an XTC

Hairpin Cross-connect
A hairpin cross-connect is a unidirectional or bidirectional cross-connect that is used to cross-connect two
tributary ports within a single XTC chassis. Hairpin circuits are supported in the following configurations
(see Figure 4-29: Hairpin Cross-connects on a DTN-X on page 4-38):
■ Between two tributary ports within a given OTM. The two tributary ports may reside on the same or
different TIMs.
■ Between a tributary port on one OTM and a tributary port on another OTM.
Hairpin cross-connects do not use the line-side optical channel resource. The hairpin cross-connects are
used in Metro applications for connecting two buildings within a short reach without laying new fibers.


Figure 4-29 Hairpin Cross-connects on a DTN-X

GMPLS Signaled Subnetwork Connections (SNCs) on DTN-X


The DTN-X supports SNCs in a similar fashion as SNCs on the DTN (see GMPLS Signaled Subnetwork
Connections (SNCs) on page 4-10): Users identify the source and destination endpoints and IQ NOS
GMPLS control protocol computes the circuit route through the Intelligent Transport Network and
establishes the circuit, referred to as an SNC, by automatically configuring the cross-connects in each
node along the path. The cross-connects automatically configured by the GMPLS protocol are called
signaled cross-connects. An inventory of signaled cross-connects is retrievable through the
management applications.
For SNCs on the DTN-X:
■ SNCs are supported between XTCs (an SNC that originates on an XTC must also terminate on an
XTC).
■ SNCs are supported between DTCs and/or MTCs that are configured as Expansion Chassis of a
DTN-X.
■ SNCs are supported between a DTC/MTC that is configured as an Expansion Chassis of a DTN-X
and a DTC/MTC that is configured as an Expansion Chassis of a DTN.


DTN-X Virtual Concatenation (VCAT)


Similar to the DTN’s handling of 40Gbps/40GbE and 100GbE services (see 40Gbps and 40GbE Service
Provisioning on page 4-18 and 100GbE Service Provisioning on page 4-20), the DTN-X supports virtual
concatenation of signals, an inverse multiplexing technique in which large signals are distributed over
multiple smaller capacity signals which may be transported or routed independently. This is valuable in
cases where the bandwidth is not available within a single OTU4i/OTU3i+.
The DTN-X supports VCAT for the following services:
■ 40GbE LAN services on the TIM-1-40GE
■ 100GbE LAN services on the TIM-1-100GE/TIM-1B-100GE, TIM-1-100GE-Q, and LIM-1-100GE
■ OTU4 transport without FEC on the TIM-1-100G and TIM-1-100GM, and for TIM-1-100GX and
LIM-1-100GX configured for ODU switching operating mode, see Transparent Transport without
FEC for OTU4 on page 4-44.
For non-VCAT services, the DTN-X transports the entire client signal in a single network wrapper (ODU3i
for 40GbE; ODU4i for 100GbE). In the VCAT case, the DTN-X separates the client signal into smaller
ODU2i virtually concatenated containers (for 40GbE, the signal is split into four ODU2i containers, called
“ODU2i-4v” network mapping; for 100GbE, the signal is split into ten ODU2i containers, called
“ODU2i-10v” network mapping) for transportation across the network.
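The container counts described above can be summarized in a small lookup; this Python sketch is illustrative only and does not represent an IQ NOS interface.

# Illustrative summary of DTN-X VCAT network mapping described above.
# Non-VCAT: the whole client rides in a single ODU3i (40GbE) or ODU4i (100GbE).
# VCAT: the client is split into ODU2i virtually concatenated containers.
VCAT_MAPPING = {
    "40GbE":  {"non_vcat": "ODU3i", "vcat": "ODU2i-4v",  "odu2i_members": 4},
    "100GbE": {"non_vcat": "ODU4i", "vcat": "ODU2i-10v", "odu2i_members": 10},
}

def vcat_containers(service: str) -> int:
    """Number of ODU2i containers used when the service is provisioned as VCAT."""
    return VCAT_MAPPING[service]["odu2i_members"]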

Note: For 100GbE services on the TIM-1-100GE/TIM-1B-100GE and for OTU4 transport without FEC
on the TIM-1-100G/TIM-1-100GM/TIM-1-100GX, the DTN-X can transport these services across
multiple channels (OCGs). See Multi-OCG Support for VCAT Services on page 4-42.

Figure 4-30: Virtual Concatenation Mode (100GbE Example) on page 4-39 below shows an example of
virtually concatenated 100GbE transport; Figure 4-31: Non-Virtual Concatenation Mode (100GbE
Example) on page 4-40 shows non-virtually concatenated 100GbE transport.

Figure 4-30 Virtual Concatenation Mode (100GbE Example)


Figure 4-31 Non-Virtual Concatenation Mode (100GbE Example)

The VCAT option is selected when creating a service using the network mapping options (see DTN-X
Network Mapping on page 4-52). As with VCAT on the DTN, VCAT on the DTN-X uses the concepts of
virtual concatenation groups (VCGs) and group termination points (GTPs):
■ The VCG is created automatically when a VCAT cross-connect or SNC is created. The VCG is
created on the client side and is the monitoring point for the multiple ODU2i CTPs. Unlike VCGs for
endpoints on the DTC/MTC, VCGs with endpoints on the XTC do not contain sub-client entities.
■ The GTP is a user-specified list of ODUs that are logically grouped together to serve as an endpoint
in a cross-connect or an SNC. Unlike VCGs, which are client-side only, GTPs exist on both the
client side and the line side. The GTP is the termination point used to identify the service when
creating cross-connects, SNCs, and protection groups (1 Port and 2 Port Digital SNCP).
The GTP can be created as part of provisioning a VCAT cross-connect or SNC, or it can be
created independently for subsequent use in provisioning.
Figure 4-32: VCG and GTPs for a 100GbE DTN-X VCAT Service on page 4-41 shows the relationship
between VCGs, GTP, and ODUs.
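The relationship between the VCG, the client-side and line-side GTPs, and the ODU constituents can be sketched as a minimal data model. The class and field names below are illustrative assumptions, not IQ NOS managed objects.

# Minimal data-model sketch of the VCG/GTP relationship described above.
# Names are illustrative assumptions, not IQ NOS managed-object names.
from dataclasses import dataclass, field

@dataclass
class GTP:
    """A group termination point: a user-specified list of ODUs that serves as
    an endpoint for a cross-connect, SNC, or protection group."""
    side: str                                  # "client" or "line" (GTPs exist on both sides)
    odus: list = field(default_factory=list)   # e.g., ten ODU2i AIDs for a 100GbE VCAT service

@dataclass
class VCG:
    """Created automatically with a VCAT cross-connect or SNC; the client-side
    monitoring point for the multiple ODU2i CTPs."""
    client_gtp: GTP
    line_gtp: GTP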


Figure 4-32 VCG and GTPs for a 100GbE DTN-X VCAT Service

Please note the following guidelines and behaviors for DTN-X VCAT service provisioning:
■ 1 Port D-SNCP protection is supported for VCAT services with endpoints on the XTC. The
protection units must both be VCAT or both must be non-VCAT.
■ 2 Port D-SNCP protection is supported for VCAT services with endpoints on the XTC. 2 Port D-
SNCP does support a mix of VCAT and non-VCAT protection units, so one route may be VCAT
and the other route may be non-VCAT.

Note: See DTN-X Service Capabilities on page A-1 for a full list of the services and modules
that support VCAT and D-SNCP.

■ For both cross-connect and SNC provisioning, VCAT services must be routed through the same
OCG, meaning that all ODU constituents must be on the same line module.
■ For VCAT SNCs with endpoints on the XTC, only tributary-to-tributary SNCs are supported. Line-
terminating SNCs (tributary-to-line or line-to-tributary SNCs) are not supported. For manual cross-
connects with endpoints on the XTC, VCAT is supported for tributary-to-tributary manual cross-
connects and also for line-terminating manual cross-connects.
■ Restoration and revertive restoration are supported for VCAT SNCs with endpoints on the XTC.
■ Route diversity is supported for VCAT SNCs with endpoints on the XTC.
■ Both Binary Phase Shift Keying (BPSK) modulation and Quadrature Phase Shift Keying (QPSK)
modulation are supported for VCAT services.


Multi-OCG Support for VCAT Services


The DTN-X can transport VCAT services across multiple channels (OCGs). Multi-OCG VCAT is
supported on the following services:
■ 100GbE LAN services on the TIM-1-100GE/TIM-1B-100GE
■ OTU4 transport without FEC on the TIM-1-100G and TIM-1-100GM, and for TIM-1-100GX
configured for ODU switching operating mode, see Transparent Transport without FEC for OTU4
on page 4-44.
During manual cross-connect provisioning, the user specifies ODU2i-10v network mapping and then
specifies the ODU2i AIDs, which can be transported over different line modules.
Note the following for Multi-OCG VCAT support:
■ Multi-OCG VCAT is supported for manual cross-connect services only.
■ All OCGs in the service must be transported through the network over a single fiber.
■ All of the OCGs in a multi-OCG VCAT service must be housed on the same generation of line
module. For example, all of the OCGs must originate on AOLM modules, or all of the OCGs must
originate on AOLM2 modules; mixing AOLM and AOLM2 OCGs within the same VCAT service is
not supported.
■ Multi-OCG VCAT is supported only for unprotected services.

DTN-X Provisioning Options


For client services, the DTN-X allows the user to select the network payload treatment (how the signal will
be handled for transport across the network). The DTN-X supports the following types of payload
treatment, which are described in the following sections:

Note: See DTN-X Service Capabilities on page A-1 for the service provisioning and diagnostic
capabilities supported by the XTC-10, XTC-4, XTC-2, and XTC-2E.

■ Transparent Transport for Non-OTN Services on page 4-43


■ Transparent Transport for OTN Services on page 4-44
■ Transparent Transport without FEC for OTU4 on page 4-44
■ ODU Switching on page 4-46
■ ODU Multiplexing on page 4-48
All DTN-X client interfaces are strictly compliant with the relevant OTN, SONET, SDH, and Ethernet
standards. Additionally, the DTN-X supports the DTN OTU2V (cDTF) client interface. For transport
through the Infinera network, the DTN-X uses both standard ODUk encapsulations and enhanced ODUki
encapsulations (see Digital Transport (DTN-X)).


Transparent Transport for Non-OTN Services


For transparent transport services of native (non-OTN) clients, the entire client signal is kept intact and
embedded in the ODUk/ODUki wrapper for transport across the Infinera network. The signal is monitored
for alarms and performance monitoring data at the ingress and egress points, but the client signal is not
observable at intermediate nodes inside the Infinera network. The DTN-X supports transparent transport
for the following non-OTN client signal types:
■ SONET OC-768
■ SDH STM-256
■ SONET OC-192
■ SDH STM-64
■ 10GbE LAN
■ 10GbE WAN
■ 100GbE LAN
■ 10G DTF/cDTF (see cDTF Transport on page 4-43)
■ 10G Clear Channel
■ 10.3G Clear Channel
■ 8G Fibre Channel
■ 10G Fibre Channel
■ 40GbE
In order to transport the above non-OTN client signals across Infinera’s OTN-based DTN-X network, the
non-OTN services are adapted into ODUk signals as indicated in Table 4-1: Cross-connect Network
Mapping for Various Client Interfaces on page 4-53. (See Provisioning ODUflexi Services on page 4-56
for services that require ODUflexi mapping.)

Note: When creating adaptation services, both head-end and tail-end nodes must be running Release
9.0 or higher.

cDTF Transport
In order to support 1GbE and/or 2.5Gbps services from a DTC/MTC, the DTN-X supports the cDTF
service type (Clear Channel Digital Transport Frame), which is an 11.1G Clear Channel service that is an
aggregate of sub-10G services. (Note that this service is called 11G1CC in the TL1 interface.)
The cDTF is mapped to ODUflexi and requires 9 timeslots (see Provisioning ODUflexi Services on page
4-56 for an explanation of ODUflexi).
Connectivity between the XTC and a DTC/MTC is achieved via cDTF using the following module pair
combinations:
■ A DICM on the DTC/MTC and an XICM on the XTC (see DTN Interconnect Module (DICM) and
DTN-X Interconnect Module (XICM)).


■ A TAM-2-10GT on the DTC/MTC and a TIM-5-10GM, TIM-5B-10GM, or TIM-5-10GX on the XTC
(Figure 4-33: cDTF Use for Low-speed Services over DTN-X Network (2.5Gbps Example) on page
4-44 shows an example with the TIM-5-10GM).

Note: Each of the 5 TOM slots on the TIM-5-10GM/TIM-5-10GX/TIM-5B-10GM/XICM-T-5-10GM has
8 designated timeslots on the TIM/XICM. However, because OTU2, OTU1e, and OTU2e with FEC
and cDTF services each require 9 timeslots, the TIM/XICM automatically uses a timeslot from TOM
slot 5 to provide the additional timeslot for these services. This means that if a service is already
provisioned on slot 5, the TIM/XICM will not be able to support a new service that requires 9
timeslots. For this reason, it is recommended to provision all services on 10G TIMs/XICMs starting
with slot 1, keeping slot 5 available until the TIM/XICM is fully utilized.
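The slot-5 borrowing rule in the note above amounts to a small bookkeeping check. The Python sketch below is illustrative only; the function and parameter names are assumptions.

# Illustrative accounting for the 10G TIM/XICM timeslot rule described above:
# each of the 5 TOM slots has 8 designated timeslots, and a service that needs
# 9 timeslots borrows its extra timeslot from TOM slot 5.
SLOTS_PER_TOM = 8

def can_provision(service_timeslots: int, slot5_free_timeslots: int) -> bool:
    """Rough feasibility check for adding a service on a 10G TIM/XICM.

    service_timeslots: 8 for most 10G services; 9 for OTU2, OTU1e, and OTU2e
    with FEC, and for cDTF.
    slot5_free_timeslots: timeslots still unused on TOM slot 5.
    """
    if service_timeslots <= SLOTS_PER_TOM:
        return True
    # A 9-timeslot service needs one spare timeslot from slot 5; this is why
    # the document recommends filling slots starting with slot 1 and keeping
    # slot 5 available until the TIM/XICM is fully utilized.
    extra_needed = service_timeslots - SLOTS_PER_TOM
    return slot5_free_timeslots >= extra_needed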

Figure 4-33 cDTF Use for Low-speed Services over DTN-X Network (2.5Gbps Example)

Transparent Transport for OTN Services


For transparent transport services of OTN clients, the entire client signal (along with OTN overhead and
FEC) is kept intact and embedded in the Infinera ODUki/ODUji wrapper for transport across the Infinera
network. The existing FEC is kept intact and OTN overhead is maintained but is not observable at
intermediate nodes inside the Infinera network.
The DTN-X supports transparent transport for the following OTN client signal types (see Table 4-1: Cross-
connect Network Mapping for Various Client Interfaces on page 4-53 for supported network mappings):
■ OTU1e
■ OTU2
■ OTU2e
The signal is monitored for alarms and performance monitoring data at the ingress and egress points, but
the client signal is not observable at intermediate nodes inside the Infinera network.

Transparent Transport without FEC for OTU4


In addition to VCAT support for 40GbE and 100GbE services (see DTN-X Virtual Concatenation (VCAT)
on page 4-39), the DTN-X supports virtual concatenation for OTU4 services.


For OTU4 transparent transport without FEC, the DTN-X separates the OTU4 client signal into ten ODU2i
virtually concatenated containers (ODU2i-10v network mapping) for transportation across the network.

Note: All ten ODU2i must be routed over the same OCG/SCG, and all ten ODU2i entities must
traverse through the same modulation type (QPSK or BPSK).

Figure 4-34: Virtual Concatenation Mode (OTU4 Example) on page 4-45 below shows an example of
virtually concatenated OTU4 transport.

Figure 4-34 Virtual Concatenation Mode (OTU4 Example)

Note the following for OTU4 transparent transport without FEC:


■ OTU4 transparent transport without FEC is supported on the AOLM, AOLX, AOLM2, AOLX2,
AOFM, AOFX, SOLM, SOLX, SOLM2, SOLX2, SOFM, and SOFX on the following TIMs/LIMs:
□ TIM-1-100G
□ LIM-1-100GM
□ TIM-1-100GM
□ TIM-1-100GX configured for ODU4-ODL (ODU switching) operating mode (see Operating
Mode for TIM-1-100GM, TIM-1-100GX, and LIM-1-100GX on page 4-50)
□ LIM-1-100GX configured for ODU4-ODL (ODU switching) operating mode
■ When provisioning the OTU4 transparent transport without FEC, the order of the tributary ODU2i
entities in the cross-connects must be the same at both the source and destination nodes. For
example, if the cross-connect at the source node lists the FROM ODUs as “T1, T2, T3, T4...”, then
the cross-connect at the destination node must also list the ODUs in the same order. The cross-
connect at the destination could not, for example, list the TO ODUs as “T1, T6, T3, T4.” Note that
by default, GNM/DNA will sort the ODU2i list on the line endpoints. The user must override this sorting
and list the line ODU2i entities so that the order sequence of the tributary ODU2i at the source and
destination nodes matches.
■ OTU4 transparent transport without FEC is supported with either BPSK or QPSK modulation
(mixed modulation is not supported).
■ Restoration and D-SNCP (1 Port and 2 Port D-SNCP) are not supported for OTU4 transparent
transport without FEC services.
■ For line-side endpoints, only manual cross connects are supported. Line-terminating SNCs
(tributary-to-line SNCs or line-to-line SNCs) are not supported for OTU4 transparent transport
without FEC.
■ The OTU4 supports the alarms, performance monitoring, and diagnostics supported for OTU4
switching services (see ODU Switching on page 4-46). However, note the following:
□ The ODU4 is not monitored for alarms, performance monitoring, diagnostics, etc.
□ The virtual concatenation group (VCG) is not monitored for performance monitoring.

ODU Switching
The DTN-X supports ODUk switching, in which the client OTUk overhead is terminated at the ingress.
The ODUk overhead is switched at every network hop from one interface to the next interface (meaning
that the ODUk overhead is accessible at every hop).

Note: When creating ODU switching services, both head-end and tail-end nodes must be running
Release 9.0 or higher.

The following ODU switching options are supported on the DTN-X:


■ OTU4 (client) to ODU4 (ODUk)
■ OTU3 (client) to ODU3 (ODUk)
■ OTU3e1 (client) to ODU3e1 (ODUk)
■ OTU3e2 (client) to ODU3e2 (ODUk)
■ OTU2 (client) to ODU2 (ODUk)
■ OTU2e (client) to ODU2e (ODUk)
■ OTU1e (client) to ODU1e (ODUk)
Figure 4-35: ODU Switching (ODU2 Example) on page 4-47 shows how an ODU2 signal is transported
through the DTN-X network.


Figure 4-35 ODU Switching (ODU2 Example)

Figure 4-36: Entities Created for ODU Switching (ODU2 Example) on page 4-47 shows the entities that
are created at the DTN-X to perform ODU2 switching.

Figure 4-36 Entities Created for ODU Switching (ODU2 Example)

Figure 4-37: Entities Created for ODU Switching (ODU0 Example) on page 4-48 shows the entities that
are created at the DTN-X to perform ODU0 switching.


Figure 4-37 Entities Created for ODU Switching (ODU0 Example)

ODU Multiplexing
The DTN-X supports ODU multiplexing, in which the client OTUk and ODUk overhead is terminated at the
ingress, and the ODUj is switched across the network. The ODUj overhead is switched at every network
hop from one interface to the next interface (meaning that the ODUj overhead is accessible at every hop).
For each of the supported ODUj granularities, services can be provisioned either via GMPLS circuits
(SNCs) or by manually configured cross-connects. In addition, ODU multiplexed services can be
protected by 1 Port D-SNCP (see 1 Port D-SNCP on page 4-126); 2 Port D-SNCP is not supported for
ODU multiplexing services. ODU multiplexed SNCs also support restoration (see Dynamic GMPLS Circuit
Restoration on page 4-140).
The TIM-5-10GX, TIM-1-100GX, and LIM-1-100GX support single-stage multiplexing:
The TIM-5-10GX supports the following ODU multiplexing options ( Figure 4-38: ODU Multiplexing
(TIM-5-10GX) on page 4-49):
■ ODU1 (low order ODUj) to ODU2 (high order ODUk) to OTU2
■ ODU0 (low order ODUj) to ODU2 (high order ODUk) to OTU2
The TIM-1-100GX and LIM-1-100GX support the following ODU multiplexing options ( Figure 4-39: ODU
Multiplexing (TIM-1-100GX and LIM-1-100GX) on page 4-49):


■ ODU2 (low order ODUj) to ODU4 (high order ODUk) to OTU4


■ ODU2e (low order ODUj) to ODU4 (high order ODUk) to OTU4
■ ODU0 (low order ODUj) to ODU4 (high order ODUk) to OTU4
■ ODU1 (low order ODUj) to ODU4 (high order ODUk) to OTU4

Note: The TIM-1-100GX and LIM-1-100GX support an Operating Mode configuration for supporting
ODU Multiplexing services (see Operating Mode for TIM-1-100GM, TIM-1-100GX, and LIM-1-100GX
on page 4-50).

For ODU0 switching, the TIM-1-100GX/LIM-1-100GX must be in the ODUk-ODUj operating mode. In
ODUk-ODUj mode, the module can support a mix of ODU0, ODU2, and ODU2e services.

Figure 4-38 ODU Multiplexing (TIM-5-10GX)

Figure 4-39 ODU Multiplexing (TIM-1-100GX and LIM-1-100GX)


Low order ODUj entities can be multiplexed into high-order ODUk as follows:
■ A high order ODU2 can contain:
□ 8 ODU0
□ 4 ODU1
□ a mix of the above
■ A high order ODU4 can contain:
□ 10 ODU2
□ 10 ODU2e
□ 80 ODU0s
□ a mix of the above
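The capacities above are consistent with the per-ODU timeslot counts in Table 4-2 (ODU0 = 1 timeslot, ODU1 = 2, ODU2/ODU2e = 8, against 8 timeslots in a high-order ODU2 and 80 in a high-order ODU4). The Python sketch below is an illustrative bookkeeping check only, not an IQ NOS interface.

# Illustrative timeslot bookkeeping for the high-order ODU capacities above
# (consistent with Table 4-2: ODU0 = 1 timeslot, ODU1 = 2, ODU2/ODU2e = 8).
LOW_ORDER_TIMESLOTS = {"ODU0": 1, "ODU1": 2, "ODU2": 8, "ODU2e": 8}
HIGH_ORDER_CAPACITY = {"ODU2": 8, "ODU4": 80}

def fits(high_order: str, low_order_mix: dict) -> bool:
    """Check whether a mix of low-order ODUs fits a high-order container.

    Examples: fits("ODU2", {"ODU0": 4, "ODU1": 2}) -> True  (4*1 + 2*2 = 8)
              fits("ODU4", {"ODU2": 10}) -> True            (10*8 = 80)
    """
    used = sum(LOW_ORDER_TIMESLOTS[odu] * count for odu, count in low_order_mix.items())
    return used <= HIGH_ORDER_CAPACITY[high_order]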
For ODU1 multiplexing on the TIM-5-10GX, the TIM-5-10GX supports two user-configurable time slot
granularities for the high-order ODU2:
■ 1.25G granularity—The ODU2 is split into 8 sections of 1.25G each
■ 2.5G granularity—The ODU2 is split into 4 sections of 2.5G each
Note that when the time slot granularity is set for 1.25G, the 8 sections of the ODU2 are paired like this:
■ Pair #1: time slots 1 and 5
■ Pair #2: time slots 2 and 6
■ Pair #3: time slots 3 and 7
■ Pair #4: time slots 4 and 8
If either time slot in a pair is configured for an ODU1 service, the other time slot in the pair can be used
only for an ODU1 service as well, and likewise for ODU0 services: a mix of ODU1 and ODU0 services is
not supported within a time slot pair on the TIM-5-10GX. For example, if an ODU1 uses time slots 1 and 2,
then time slots 5 and 6 cannot be used by ODU0; they can be used only by ODU1-rate services.
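The 1.25G time slot pairing rule above can be checked programmatically; the following Python sketch is illustrative only (the representation of the slot assignment is an assumption).

# Illustrative check for the TIM-5-10GX 1.25G time slot pairing rule above:
# slots are paired (1,5), (2,6), (3,7), (4,8), and a pair may carry ODU1
# services or ODU0 services, but not a mix of the two.
PAIR_OF = {1: 5, 2: 6, 3: 7, 4: 8, 5: 1, 6: 2, 7: 3, 8: 4}

def pair_assignment_ok(assignment: dict) -> bool:
    """assignment maps time slot number (1-8) -> 'ODU0' or 'ODU1'."""
    for slot, rate in assignment.items():
        peer_rate = assignment.get(PAIR_OF[slot])
        if peer_rate is not None and peer_rate != rate:
            return False   # mixing ODU0 and ODU1 within a pair is not supported
    return True

# Example from the text: if an ODU1 uses time slots 1 and 2, then slots 5 and 6
# may carry ODU1-rate services only (not ODU0).
assert pair_assignment_ok({1: "ODU1", 2: "ODU1", 5: "ODU1", 6: "ODU1"})
assert not pair_assignment_ok({1: "ODU1", 5: "ODU0"})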

Operating Mode for TIM-1-100GM, TIM-1-100GX, and LIM-1-100GX


The TIM-1-100GM, TIM-1-100GX, and LIM-1-100GX support a configurable Operating Mode parameter
with the following options:

Note: The TIM-1-100GX and LIM-1-100GX support all of these options. The TIM-1-100GM supports
only ODU4-ODL and GBE100-ODU4-4i-2ix10V.

■ ODU4-ODL—(Supported by TIM-1-100GM, TIM-1-100GX, and LIM-1-100GX) In this mode the
TIM/LIM does not support ODU multiplexing. The module supports ODU4 transport and ODL
format ODU switching. A TIM/LIM in this mode can inter-operate with TIM-1-100GMs, TIM-1-100GXs,
and LIM-1-100GXs that have the same Operating Mode setting.
■ ODU4-ODU2-ODU2E—(Supported by TIM-1-100GX and LIM-1-100GX) In this mode the TIM/LIM
supports ODU4 multiplexing/de-multiplexing and low order ODU2/2e switching. A TIM/LIM in this
mode can inter-operate with TIM-1-100GXs or LIM-1-100GXs with the Operating Mode setting of
ODU4-ODU2-ODU2E or ODUk-ODUj.

Note: The ODU4-ODU2-ODU2E operating mode is not supported on TIM-1-100GX housed in
XTC-2 or XTC-2E chassis.

■ ODUk-ODUj—(Supported by TIM-1-100GX and LIM-1-100GX) In this mode the TIM/LIM supports
ODU4 multiplexing/de-multiplexing and low order ODU0, ODU2, and ODU2e switching. A TIM/LIM
in this mode can inter-operate with TIM-1-100GXs or LIM-1-100GXs with the Operating Mode
setting of ODU4-ODU2-ODU2E or ODUk-ODUj.
■ ODUk—(Supported by TIM-1-100GM, TIM-1-100GX, and LIM-1-100GX) In this mode the TIM/LIM
supports ODU4 switching. A TIM/LIM in this mode can inter-operate with TIM-1-100GMs,
TIM-1-100GXs or LIM-1-100GXs that have the same Operating Mode setting. In addition, a
TIM/LIM in this mode can support ODU adaptation, in which case the TIM/LIM can inter-operate
with TIM-1-100GMs or TIM-1-100GXs in GBE100-ODU4-4i-2ix10V mode.
■ GBE100-ODU4-4i-2ix10V—(Supported by TIM-1-100GM and TIM-1-100GX) In this mode the
TIM/LIM supports 100GbE native client with ODU transport service switching. A TIM/LIM in this
mode can inter-operate with:
□ TIM-1-100GX or LIM-1-100GX that has the same Operating Mode setting for ODU4 and
ODU4i services.
□ TIM-1-100GE for ODU4i mapped services.
□ TIM-1-100GX or LIM-1-100GX in ODUk mode.
□ Third party equipment for 100GbE services.
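The inter-operation rules above can be collected into a simple compatibility lookup. The Python sketch below is illustrative only; it captures just the mode-to-mode pairings stated in this section and ignores which specific TIM/LIM models support each mode.

# Illustrative summary of the Operating Mode inter-operation rules above.
OPERATING_MODE_INTEROP = {
    "ODU4-ODL":              {"ODU4-ODL"},
    "ODU4-ODU2-ODU2E":       {"ODU4-ODU2-ODU2E", "ODUk-ODUj"},
    "ODUk-ODUj":             {"ODU4-ODU2-ODU2E", "ODUk-ODUj"},
    # ODUk mode also supports ODU adaptation toward GBE100-ODU4-4i-2ix10V.
    "ODUk":                  {"ODUk", "GBE100-ODU4-4i-2ix10V"},
    # GBE100 mode additionally inter-operates with the TIM-1-100GE (ODU4i mapped
    # services) and third-party 100GbE equipment, which are not operating modes
    # and are therefore not listed here.
    "GBE100-ODU4-4i-2ix10V": {"GBE100-ODU4-4i-2ix10V", "ODUk"},
}

def modes_interoperate(mode_a: str, mode_b: str) -> bool:
    return mode_b in OPERATING_MODE_INTEROP.get(mode_a, set())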

Note: Before the Operating Mode can be changed, all existing services on the TIM/LIM must be
deleted and the equipment must be administratively locked.

Note: Do not physically remove or cold reset the TIM/LIM when the TIM/LIM is performing an
Operating Mode update. Wait until the Operating Mode Status is “Active” before removing the TIM/
LIM.

Note: If downgrading from Release 16.2 or higher to pre-Release 16.2, any TIM/LIM set for 100GbE
or one of the ODU multiplexing modes must first be configured to an operating mode supported in the
release to which the node is downgrading:
■ GBE100-ODU4-4i-2ix10V operating mode is supported in Release 16.2 and higher.
■ ODUk-ODUj operating mode is supported in Release 16.1 and higher.
■ ODU4-ODU2-ODU2E operating mode is supported in Release 11.0 and higher.
■ ODU4-ODL operating mode is supported in Release 10.0 and higher.


The TIM-1-100GX/LIM-1-100GX requires specific firmware to support each of these modes. If the
operating mode is changed, the TIM/LIM will download and apply the appropriate firmware to support the
new operating mode. The status of the operating mode and firmware synchronization can be viewed in
Operating Mode Status fields on the equipment. The TIM/LIM will indicate the current status:
■ Not determined—The module is pre-provisioned or has not yet booted up after physical installation.
■ Change in progress—The module’s operating mode has been changed and the TIM is currently
downloading the required firmware and programming the provisioned operating mode. (All user
operations are blocked for the TIM/LIM during this time.)
■ Active—The module’s firmware matches its operating mode; firmware and software operating
modes are in sync.

DTN-X Network Mapping


For the transport of native and OTN clients on DTN-X, the user can select the way the signal will be
mapped for transport across the network. The table below shows the possible network mapping options
for the supported payload types.
In addition to network mappings shown in the table, the network mapping value “ANY” can be selected for
SNCs:
■ For SNC service types that support only non-VCAT network mapping, GMPLS will use the default
network mapping for the service type when the value “ANY” is selected by the user. For example, if
the ANY network mapping is selected when creating an OC-192 SNC, GMPLS will use the ODU2
BMP (default) network mapping when setting up the SNC.
■ For SNCs with service types that support VCAT network mapping options, GMPLS will first search
for a route using the non-VCAT network mapping. If a route is not found, GMPLS will then attempt
to set up a route using the VCAT network mapping. For example, if the ANY network mapping is
selected when creating a 100GbE SNC, GMPLS will first search for a route using the ODU4i (non-
VCAT) network mapping when setting up the SNC. If no route is available using ODU4i mapping,
GMPLS will then search for a route using the ODU2i-10v (VCAT) network mapping.
In the case of restorable SNCs with service types that support VCAT and with network mapping set to
ANY, once an SNC is set up, the network mapping of the restore path is the same as that of the work path
(both paths will be VCAT or both will be non-VCAT). For example, if the work path is created with
non-VCAT network mapping (e.g., ODU4i), the restore path will be created with ODU4i network mapping
as well. If no route exists with ODU4i mapping, restoration will fail. Restoration is not attempted with a
different network mapping (e.g., ODU2i-10v).
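The selection order for the “ANY” network mapping value can be summarized as follows. This Python sketch is illustrative pseudologic only; route_exists stands in for GMPLS path computation and is an assumed callable, not an IQ NOS function.

# Illustrative selection logic for the "ANY" network mapping value described above.
def resolve_any_mapping(service_supports_vcat: bool,
                        default_mapping: str,
                        vcat_mapping: str,
                        route_exists) -> str:
    """route_exists(mapping) is an assumed callable representing a GMPLS
    path-computation query; it is used here only to express the order of
    preference described in the text."""
    if not service_supports_vcat:
        return default_mapping            # e.g., ODU2 BMP for an OC-192 SNC
    if route_exists(default_mapping):     # try the non-VCAT mapping first
        return default_mapping            # e.g., ODU4i for a 100GbE SNC
    return vcat_mapping                   # fall back to VCAT, e.g., ODU2i-10v
    # For restorable SNCs, the restore path always uses the same mapping as the
    # work path; restoration is not attempted with a different network mapping.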


Table 4-1 Cross-connect Network Mapping for Various Client Interfaces

PAYLOAD | FROMAID | TOAID | NETMAP (* indicates the default value) | Description
OC-3 | ODU0 (Tributary) | ODU0 (Line) | ODU0-GMP | Add/drop cross connect with OC-3 payload
STM-1 | ODU0 (Tributary) | ODU0 (Line) | ODU0-GMP | Add/drop cross connect with STM-1 payload
OC-12 | ODU0 (Tributary) | ODU0 (Line) | ODU0-GMP | Add/drop cross connect with OC-12 payload
STM-4 | ODU0 (Tributary) | ODU0 (Line) | ODU0-GMP | Add/drop cross connect with STM-4 payload
2G Fibre Channel | ODU1 (Tributary) | ODU1 (Line) | ODU1-GMP | Add/drop cross connect with 2G Fibre Channel payload
4G Fibre Channel | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (4 timeslots) | Add/drop cross connect with 4G Fibre Channel payload
OC-48 | ODU1 (Tributary) | ODU2 (Line) | ODU1-BMP | Add/drop cross connect with OC48 payload and default network mapping set to ODU1 BMP
STM-16 | ODU1 (Tributary) | ODU2 (Line) | ODU1-BMP | Add/drop cross connect with STM16 payload and default network mapping set to ODU1 BMP
OC-192 | ODU2 (Tributary) | ODU2 (Line) | ODU2-BMP, ODU2-AMP | Add/drop cross connect with OC192 payload and default network mapping set to ODU2 BMP
STM-64 | ODU2 (Tributary) | ODU2 (Line) | ODU2-BMP, ODU2-AMP | Add/drop cross connect with STM64 payload and default network mapping set to ODU2 BMP
OC-768 | ODU3 (Tributary) | ODU3 (Line) | ODU3-BMP*, ODU3-AMP | Add/drop cross connect with OC768 payload and default network mapping set to ODU3 BMP
STM-256 | ODU3 (Tributary) | ODU3 (Line) | ODU3-BMP*, ODU3-AMP | Add/drop cross connect with STM256 payload and default network mapping set to ODU3 BMP
1GBE | ODU0 | ODU0 (Line) | ODU0-TTT-GMP | Add/drop cross connect with 1GBE payload and default network mapping set to ODU0 GMP
10GbE LAN | ODU2e (Tributary) | ODU2e (Line) | ODU2e | Add/drop cross connect with 10GbE LAN payload, mapped to an ODU2e
10GbE LAN | ODU1e (Tributary) | ODU1e (Line) | ODU1e | Add/drop cross connect with 10GbE LAN payload, mapped to an ODU1e
40GbE LAN | ODU3i | ODU3i | ODU3i | Add/drop cross connect with 40GbE payload (direct mapping)
40GbE LAN | GTP AID | GTP AID | ODU2i-4v | Add/drop cross connect with 40GbE payload (VCAT mapping)
100GbE LAN | ODU4i | ODU4i | ODU4i | Add/drop cross connect with 100GbE payload (direct mapping)
100GbE LAN | ODU4 | ODU4 | ODU4 | Add/drop cross connect with 100GbE payload mapped to ODU4
100GbE LAN | GTP AID | GTP AID | ODU2i-10v | Add/drop cross connect with 100GbE payload (VCAT mapping)
OTU4 | ODU4 (Tributary) | ODU4 (Line) | ODU4 | Add/drop cross connect for G.709 standard ODU4 service
OTU4 | GTP AID | GTP AID | ODU2i-10v | Add/drop cross connect with OTU4 payload without FEC (transparent transport, VCAT mapping)
OTU3 | ODU3 (Tributary) | ODU3 (Line) | ODU3 | Add/drop cross connect for G.709 standard ODU3 service
OTU3e1 | ODU3e1 (Tributary) | ODU3e1 (Line) | ODU3e1 | Add/drop cross connect for G.Sup43 ODU3e1 service
OTU3e2 | ODU3e2 (Tributary) | ODU3e2 (Line) | ODU3e2 | Add/drop cross connect for G.Sup43 ODU3e2 service
OTU2 | ODU2 (Tributary) | ODU2 (Line) | ODU2 | Add/drop cross connect for G.709 standard ODU2 service
OTU2 | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with OTU2 payload with FEC (transparent transport)
OTU2e | ODU2e (Tributary) | ODU2e (Line) | ODU2e | Add/drop cross connect for G.709 standard ODU2e service
OTU2e | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with OTU2e payload with FEC (transparent transport)
OTU1e | ODU1e (Tributary) | ODU1e (Line) | ODU1e | Add/drop cross connect for G.709 standard ODU1e service
OTU1e | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with OTU1e payload with FEC (transparent transport)
10G CC | ODU2 (Tributary) | ODU2 (Line) | ODU2-BMP*, ODU2-AMP | Add/drop cross connect with 10G Clear Channel payload and default network mapping set to ODU2 BMP
10.3G CC | ODU2e (Tributary) | ODU2e (Line) | ODU2e | Add/drop cross connect with 10.3G Clear Channel payload, mapped to an ODU2e
10.3G CC | ODU1e (Tributary) | ODU1e (Line) | ODU1e | Add/drop cross connect with 10.3G Clear Channel payload, mapped to an ODU1e
10.3G CC | ODU2i (Tributary) | ODU2i (Line) | ODU2i | Add/drop cross connect with 10.3G Clear Channel payload, mapped to an ODU2i
10G Fibre Channel | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with 10G Fibre Channel payload
10G DTF/cDTF (11.1G Clear Channel) | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (9 timeslots) | Add/drop cross connect with 11.1G Clear Channel payload
8G Fibre Channel | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (7 timeslots) | Add/drop cross connect with 8G Fibre Channel payload
GFP | ODUflexi (Tributary) | ODUflexi (Line) | ODUflexi (variable number of ODU0 timeslots) | Add/drop PXM services

High-order to low-order ODU mapping is created as part of service provisioning (see the above table for
the low-order ODUj mapping for the client signals supported on the DTN-X).
The following table shows the number of ODU0 (time slots) required for each rate of low-order ODUj.

Table 4-2 Timeslots Required for Low Order ODUj Entities

ODUj Rate | Required Number of Time Slots
ODU0, ODU0i | 1
ODU1, ODU1i | 2
ODU1e, ODU2, ODU2i, ODU2e | 8
ODU3 | 31
ODU3i, ODU3e1, ODU3e2 | 32
ODU4 | 80
ODUflexi | number varies (see Table 4-1)

Provisioning ODUflexi Services


The following client services do not fit the standard ODUk containers, and so the DTN-X uses ODUflexi
mapping for their transport:
■ OTU2 (as Clear Channel)
■ cDTF (as 11.1G Clear Channel)
■ OTU2 (with FEC)
■ OTU1e (with FEC)
■ OTU2e (with FEC)
■ GFP (for PXM services)
■ 8G Fibre Channel
■ 10G Fibre Channel
The ODUflexi is a scalable option, meaning that the DTN-X can vary the number of timeslots used for
ODUflexi depending on the type of service being transported. For example, to transport a cDTF service,
the DTN-X dedicates 9 timeslots to the ODUflexi, but only 7 timeslots are required to transport an 8G
Fibre Channel service. (See Table 4-1: Cross-connect Network Mapping for Various Client Interfaces on
page 4-53 for the number of timeslots required by each of the services that are mapped to ODUflexi.)
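The timeslot counts above (taken from Table 4-1) can be collected into a simple lookup. The Python sketch below is illustrative only and is not an IQ NOS interface.

# Illustrative lookup of ODUflexi timeslot counts from Table 4-1.
ODUFLEXI_TIMESLOTS = {
    "OTU2 (with FEC)": 9,
    "OTU2e (with FEC)": 9,
    "OTU1e (with FEC)": 9,
    "cDTF (11.1G Clear Channel)": 9,
    "10G Fibre Channel": 9,
    "8G Fibre Channel": 7,
    "4G Fibre Channel": 4,
    # GFP (PXM services) uses a variable number of ODU0 timeslots.
}

def oduflexi_timeslots(service: str) -> int:
    return ODUFLEXI_TIMESLOTS[service]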

Note: Each of the 5 TOM slots on the 10G TIMs has 8 designated timeslots on the TIM/XICM.
However, for services that require 9 timeslots, the TIM/XICM automatically uses a timeslot from TOM
slot 5 to provide the additional timeslot for these services. This means that if a service is already
provisioned on slot 5, the TIM/XICM will not be able to support a new service that requires 9
timeslots. For this reason, it is recommended to provision all services on 10G TIMs/XICMs starting
with slot 1, keeping slot 5 available until the TIM/XICM is fully utilized.

Provisioning ODUCni Services


ODUCni provisioning allows for efficient handling of service payloads onto optical carriers whose rates
are not in increments of 10Gbps, and to fill bandwidth gaps in the OTU fabric that are also not in
increments of 10Gbps.
In particular, ODUCni enables the 37.5Gbps optical carriers that result from 3QAM modulation supported
by the OFx-500. The ODUCni supports up to 10 carriers.
ODUCni provisioning is supported for the following modules/modulation types:


■ AOFX-500, AOFM-500, SOFX-500, and SOFM-500 with 3QAM modulation


■ SOFX-500 and SOFM-500 with BPSK modulation
■ SOLM2 and SOLX2 with BPSK modulation
■ OFx-1200 and XT(S)-3600 for all supported modulation formats and baud rates

Note: BPSK is supported only on the C13 versions of SOFx-500 and SOLx2; it is not supported on
C12 versions of SOFx-500 and SOLx2.
3QAM is supported only on the C13 versions of SOFx-500 and on C8 versions of AOFx-500; it is not
supported on C12 versions of SOFx-500, on C3, C5, or C6 versions of AOFx-500, or on line modules
other than the OFx-500.

The ODUCni framework supports the same digital provisioning services as ODUn services, such as add/
drop and express cross-connects/SNCs, bi-directional/unidirectional services, multipoint services, virtual
concatenation, etc. ODUCni services are constrained only by the bandwidth of the associated OTN entity
(in the case of 3QAM services, this means that ODUCni services require bandwidth in multiples of
37.5Gbps, while BPSK services require bandwidth in multiples of 25Gbps).
The actual ODUCni service is represented in the management interfaces as ODUCni-M, where:
■ n = the number of 100G OTUC frames, that is, the bit rate rounded up to the nearest multiple of
100G. For example, a bit rate of 37.5G is rounded up to 100G, so n=1.
■ M = the number of 5G timeslots. The value M is used in the ODUCni-M nomenclature only when the
bit rate is not divisible by 100G. For example, if the bit rate is 37.5G, M=7.5; for a bit rate of 300G
there is no M value, so the service type is denoted simply as ODUC3i. (A short naming sketch follows this list.)
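
As an illustration of the naming rules above, the following Python sketch (a hypothetical helper, not an Infinera API) derives the ODUCni-M designation from a bit rate:

import math

def oducni_name(rate_gbps: float) -> str:
    n = math.ceil(rate_gbps / 100)          # number of 100G OTUC frames
    if rate_gbps % 100 == 0:
        return f"ODUC{n}i"                  # no -M suffix when divisible by 100G
    m = rate_gbps / 5                       # M = number of 5G timeslots
    return f"ODUC{n}i-{m:g}"

print(oducni_name(37.5))   # ODUC1i-7.5
print(oducni_name(300))    # ODUC3i
print(oducni_name(112.5))  # ODUC2i-22.5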
The following ODUCni rates are supported:
■ 37.5G (ODUC1i-7.5)
■ 75G (ODUC1i-15)
■ 100G (ODUC1i)
■ 112.5G (ODUC2i-22.5)
■ 150G (ODUC2i-30)
■ 187.5G (ODUC2i-37.5)
■ 200G (ODUC2i)
■ 225G (ODUC3i-45)
■ 250G (ODUC3i-50)
■ 262.5G (ODUC3i-52.5)
■ 300G (ODUC3i)
■ 337.5G (ODUC4i-67.5)
■ 375G (ODUC4i-75)


Note: For ODUCni on the OFx-1200 or XT-3600, the ODUCni rates can vary from 50G to 1.2T based on
the super channel configuration.

For high order ODUCni entities with 3QAM modulation, the carrier mode model goes beyond single-
carrier/dual-carrier modes. As shown in the table below, each ODUCni rate requires a different number of
carriers. When provisioning ODUCni services, the user can combine any of the available carriers to be
used for the service.
The number of tributary slots required for the ODUCni service is the ODUCni container capacity divided
by 1.25. For example, for the ODUC2i-22.5 service which has a maximum capacity of 112.5Gbps, the
number of tributary slots required is 112.5/1.25, which is 90 tributary slots.
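
The same arithmetic can be checked with a trivial Python sketch (plain arithmetic only, not an Infinera API):

def tributary_slots(capacity_gbps: float) -> int:
    # Tributary slots = container capacity divided by 1.25Gbps, as stated above.
    return int(capacity_gbps / 1.25)

print(tributary_slots(112.5))  # 90, matching the ODUC2i-22.5 row in Table 4-3
print(tributary_slots(37.5))   # 30, matching the ODUC1i-7.5 row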

Table 4-3 Tributary Slots and Capacities of Line Side Containers

ODUCni Rate | Modulation Format | Maximum Capacity of ODU Container (Gbps) | Number of Carriers Required for the Service | Number of Tributary Slots
ODUC1i-7.5 | 3QAM | 37.5 | 1 | 30
ODUC1i-10 | BPSK | 50 | 2 | 40
(Note: In Release 16.2, ODUC1i-10 is not supported, as the minimum rate for BPSK is 100G and the minimum number of carriers is 4.)
ODUC1i-15 | 3QAM | 75 | 2 | 60
ODUC1i | BPSK | 100 | 4 | 80
ODUC2i-22.5 | 3QAM | 112.5 | 3 | 90
ODUC2i-30 | 3QAM | 150 | 4 | 120
ODUC2i-30 | BPSK | 150 | 6 | 120
ODUC2i-37.5 | 3QAM | 187.5 | 5 | 150
ODUC2i | BPSK | 200 | 8 | 160
ODUC3i-45 | 3QAM | 225 | 6 | 180
ODUC3i-50 | BPSK | 250 | 10 | 200
ODUC3i-52.5 | 3QAM | 262.5 | 7 | 210
ODUC3i | 3QAM | 300 | 8 | 240
ODUC4i-67.5 | 3QAM | 337.5 | 9 | 270
ODUC4i-75 | 3QAM | 375 | 10 | 300
Existing Line Side ODU Containers (for reference):
ODU3i+ | SC-PM-QPSK | 50 | 1 | 40
ODU3i+ | DC-PM-BPSK | 50 | 2 | 40
ODU4i | DC-PM-QPSK | 100 | 2 | 80

Note the following for ODUCni services:


■ For OFx-500 modules in ODUCni mode, the default is for all 10 carriers on the super channel to be
included, and only one instance of carrier group is present. The carrier group can be configured to
group together any number of the 10 carriers. Only one carrier group instance is supported in a
super channel in ODUCni mode. For 3QAM, the rate of the optical channel will be 37.5Gbps
multiplied by the number of carriers in the carrier group. For BPSK, the rate of the optical channel
will be 25Gbps multiplied by the number of carriers in the carrier group.

Provisioning for TIM-16-2.5GM


The DTN-X supports sub-10Gbps endpoints on the XTC via the TIM-16-2.5GM. (The DTN-X also
supports sub-10Gbps services via TAMs on the DTC/MTC or via aggregation shelves, such as the ATC
or 3rd party ADM equipment.)
The TIM-16-2.5GM provides a standard OTN based digital wrapper function to transparently transport the
following client service types (see Table 3-4: TIM Support of Encapsulated Client Disable Action on page
3-47 for hardware information and a list of the supported TOMs):
■ 1 GbE
■ OC-48
■ STM-16
■ OC-3/STM-1
■ OC-12/STM-4
■ 2GFC (FC-200)
■ 4GFC (FC-400)
The TIM-16-2.5GM contains four port groups:
■ Port group 1—TIM ports 1-4
■ Port group 2—TIM ports 5-8
■ Port group 3—TIM ports 9-12
■ Port group 4—TIM ports 13-16


A port group cannot simultaneously support both GMP and AMP mappings. Therefore, each port group
supports either BMP and GMP mappings (the default) or BMP and AMP mappings. The user can
configure each of the four port groups; all ports in a port group share the same mapping configuration
and its associated restrictions.

Note: The port map mode cannot be changed if a service exists on the port that would conflict with
the new mode. Likewise, a service cannot be provisioned on any of the ports in the port group if the
port group is configured for a mapping mode that doesn’t support the service.

Note: Port group mapping mode settings are not applicable since AMP mapping is not currently
supported. Only BMP and GMP mappings are supported.

Note the following for service provisioning on the TIM-16-2.5GM:


■ SNCs with TIM-16-2.5GM endpoints support automatic restoration (see Dynamic GMPLS Circuit
Restoration on page 4-140).
■ For OC-48, STM-16, and 1GbE services:
□ Non-bookended services are supported. Meaning, for example, that an OC-48 service
originating on a TIM-16-2.5GM can terminate on a TIM-5-10GX via ODU multiplexing (see
ODU Multiplexing on page 4-48).
□ 1 Port D-SNCP is supported (see 1 Port D-SNCP on page 4-126). Bookended services
(services that originate and terminate on a TIM-16-2.5GM) can be protected by 2 Port D-
SNCP (see 2 Port D-SNCP on page 4-123).

Note: 2 Port D-SNCP is not supported for ODU multiplexing services. So although the
DTN-X supports non-bookended services for 1GbE, OC-48, and STM-16 on the
TIM-16-2.5GM, these non-bookended services require ODU multiplexing and so do not
support 2 Port D-SNCP.

■ For OC-3, STM-1, OC-12, and STM-4 services:


□ Non-bookended services are supported. Meaning, for example, that an OC-3 or OC-12
service originating on a TIM-16-2.5GM can terminate on a TIM-5-10GX or TIM-1-100GX via
ODU multiplexing (see ODU Multiplexing on page 4-48).
□ Add/drop and Hairpin Cross Connects (with either source or destination side adaption) are
supported
□ Neither 1 Port D-SNCP nor 2 Port D-SNCP is supported for these services.
■ For 2GFC and 4GFC services:
□ Only bookended services are supported. Meaning, for example, that a 2GFC service
originating on a TIM-16-2.5GM must also terminate on a TIM-16-2.5GM.
□ Neither 1 Port D-SNCP nor 2 Port D-SNCP is supported for these services.


■ For 1GbE services on TIM-16-2.5GM, the encapsulated client disable action is always Send LF and
cannot be disabled (see Encapsulated Client Disable Action on Ingress (DTN-X) on page 3-46).
■ For OC-48 and STM-16 services on the TIM-16-2.5GM, the encapsulated client disable action is
always Generic AIS and cannot be disabled.


Packet Switching Service Provisioning


Prior to the introduction of the Packet Switching Module (PXM), the DTN-X supported bit-transparent transport of traffic.
With the introduction of the PXM, the DTN-X supports packet switching functionality, including packet
aggregation, port consolidation, and statistical multiplexing for Ethernet services.

Note: See Packet Switching in the DTN and DTN-X System Description Guide for an overview of
packet switching in the Infinera network. See Packet Switching Module (PXM) in the same guide for
information on the hardware features of the PXM.

Via the PXM, the DTN-X supports statistical multiplexing, in which services are mapped to flows. Without
statistical multiplexing each port is dedicated to a single service, so for ten services there will be ten
circuits of 10Gbps each over a 100Gbps circuit. With statistical multiplexing on the PXM, there can
instead be multiple Ethernet services (e.g., across the 16 ports of the PXM-16-10GE) over multiple
ODUflexi connections, each of which can range between 1.25Gbps and 100Gbps, as the services
demand.

Note: The maximum switching capacity of the PXM is 200Gbps (bidirectional). In order to avoid
oversubscribing the device capacity, the user can control the amount of traffic admitted across ports
using the Max Switching Capacity Factor parameter on the PXM equipment (in TL1, this is the
MAXSWCAPFAC parameter in the ENT/ED-EQPT commands). This parameter supports values from
0.5 to 1 and indicates the percentage of 200Gbps allowed on the PXM: 0.5 means that 100Gbps
switch capacity is used for admission control of the traffic flows, 0.96 means that 192Gbps switching
capacity is used, etc. The default value is 1, meaning that 200Gbps switching capacity is used for
admission control of the traffic flows.
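
A minimal Python sketch of this admission bound follows; it illustrates the arithmetic only and is not the node's actual admission-control algorithm.

PXM_SWITCHING_CAPACITY_GBPS = 200.0   # maximum bidirectional switching capacity

def admit_flow(existing_flows_gbps, new_flow_gbps, max_sw_cap_factor=1.0):
    # MAXSWCAPFAC scales the 200Gbps capacity used for admission control.
    assert 0.5 <= max_sw_cap_factor <= 1.0
    limit = max_sw_cap_factor * PXM_SWITCHING_CAPACITY_GBPS
    return sum(existing_flows_gbps) + new_flow_gbps <= limit

# With MAXSWCAPFAC = 0.96 the admission limit is 192Gbps:
print(admit_flow([100, 80], 10, max_sw_cap_factor=0.96))  # True  (190 <= 192)
print(admit_flow([100, 80], 20, max_sw_cap_factor=0.96))  # False (200 > 192)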

Note the following for PXM services:


■ FastSMP protection is supported for packet services on the PXM-1-100GE and PXM-16-10GE on
an XTC-10 or XTC-4 (via ODUflexi mapping).
■ For 1GbE ports on the PXM-16-10GE, the Ethernet Interface supports auto-negotiation, which the
user can enable or disable. Auto-negotiation is disabled by default.
■ PXM services support multipoint packet services via E-LAN (see Packet Services in the DTN and
DTN-X System Description Guide).
■ PXM services support Multi-Protocol Label Switching Transport Protocol (MPLS-TP), see MPLS
and LSP Elements on page 4-67.
■ The PXM supports Ethernet Operations, Administration, and Maintenance, see Ethernet OAM on
page 4-81.
■ PXM services are transported through the Infinera network using ODUflexi encapsulation, with the
Generic Framing Procedure (GFP) payload type.
■ PXM services can be provisioned using manual cross-connects or GMPLS signaled SNCs
(including line-side terminating SNCs).


■ Restoration and inclusion/exclusion lists are supported for PXM services provisioned using SNCs.
■ 1 Port D-SNCP is supported for PXM services (see 1 Port D-SNCP on page 4-126). However for
PXM services the reliable TP is the ODUflexi TP, instead of being the tributary PTP as it is for TIM
services. (This means that an empty 1 Port D-SNCP is not supported in the case of PXM services.)
Hairpin cross-connects are not supported for 1 Port D-SNCP.
The following sections describe the features and provisioning of packet switching services using the PXM:
■ Data Flow and Facilities for Packet Services on page 4-63
■ Ethernet Private Line (EPL), Ethernet Virtual Private Line (EVPL), and Ethernet Local Area Network
(E-LAN) Services on page 4-64
■ MPLS and LSP Elements on page 4-67
■ Traffic Management and Quality of Service on page 4-68
■ Layer 2 Control Protocol (L2CP) Handling on page 4-77
■ Treatment of Packets Through the Network on page 4-78
■ Ethernet OAM on page 4-81
■ Scalability for Packet Services on the DTN-X on page 4-88
■ PXM Standard Compliance on page 4-90

Data Flow and Facilities for Packet Services


The figure below shows the flow for data through the PXM.

Figure 4-40 Data Flow through PXM

PXM services are mapped to the following managed objects, listed here in order from the client service
ingress to the OTN layer of the network (see also Figure 3-3: Managed Objects and Hierarchy (DTN-X
with PXM) on page 3-7 for the managed object hierarchy of a DTN-X equipped with PXMs):
■ Ethernet Interface—The Ethernet Interface models the external interface (the physical port) for
packet services. An Ethernet Interface can support multiple service flows (ACs). The Ethernet
Interface is automatically created when a client TOM is provisioned on the PXM.
■ Attachment Circuit (AC)—The AC is the link between a customer edge (CE) device and a provider
edge (PE) device that connects a user network with the service provider network. The ends of the


AC are the Ethernet Interfaces on either end of the network. Multiple ACs can be configured on an
Ethernet Interface, but an AC can be associated to only one Ethernet Interface.
■ Virtual Service Instance (VSI)—The VSI is the Ethernet bridge function entity of a service instance
on a PE. The VSI forwards Layer 2 frames based on MAC addresses and VLAN tags. The
following VSI types are supported:
□ Virtual Private Wire Service (VPWS)—Point to point connectivity between an AC endpoint
and a PW endpoint (end-to-end service may involve two such VSIs, one on the ingress PXM
and another on another PXM in the network).
□ VLAN cross-connect—Point-to-point service between two AC endpoints on the same PXM,
such as for hairpin connections.
□ Virtual Private LAN service (VPLS)—Multipoint-to-multipoint service between numerous ACs
and PWs.
■ Pseudowire (PW)—The PW is a bidirectional virtual connection between VSIs on two PEs. A PW,
also called an emulated circuit, consists of two unidirectional MPLS virtual circuits (VCs).
■ Multi-Protocol Label Switching (MPLS) Tunnel—The MPLS tunnel defines the endpoints of LSPs
and enables packet switching on PXMs at intermediate nodes.
■ Label Switched Path (LSP)—LSPs are unidirectional paths that are co-routed in pairs between the
network interfaces of nodes across a network.
■ Network Interface—The Network Interface is the Ethernet interface on the network side that maps
the service to the ODU container for transport over the network. The Network Interface is the
demarcation point between the packet layer and the OTN layer, representing the aggregate higher
rate interface into which multiple Ethernet PWs are multiplexed/de-multiplexed before handing
over/from the OTN layer interfaces for GFP encapsulation/de-encapsulation and then further ODU
encapsulation/de-encapsulation.

Ethernet Private Line (EPL), Ethernet Virtual Private Line (EVPL), and
Ethernet Local Area Network (E-LAN) Services
Service can be provisioned as an Ethernet private line or as an Ethernet virtual private line:
■ Ethernet private line (EPL)—Each Ethernet Interface (port) is mapped to a single, dedicated AC, as
in Figure 4-41: Ethernet Private Line (EPL) Services on page 4-65.
■ Ethernet virtual private line (EVPL)—Virtual private line service in which multiple ACs are mapped
to an Ethernet Interface (port), as in Figure 4-42: Ethernet Virtual Private Line (EVPL) Services on
page 4-65.
■ Port-based Ethernet LAN (EP-LAN)—All to one bundling, where all service frames are associated
to one EVC at the UNI. In a port-based (or private) service, all UNIs are configured for all-to-one
bundling, and all service frames are mapped to the EVC, regardless of CE-VLAN ID. EP-LAN


allows any UNI to forward Ethernet frames to any other UNI. The key advantage of a port-based
service is that the subscriber and the service provider do not have to coordinate VLAN IDs.
■ VLAN-based Ethernet LAN (EVP-LAN)—Service multiplexing, where multiple EVCs are associated
to a UNI. In a VLAN-based (or virtual private) service, CE-VLAN IDs are explicitly mapped to the
EVC at each UNI. The key advantage of a VLAN-based service is that VLAN-based services can
share UNIs and IDs.

Figure 4-41 Ethernet Private Line (EPL) Services

Figure 4-42 Ethernet Virtual Private Line (EVPL) Services


The PXM supports E-LAN for packet services. E-LAN service is realized via Virtual Private LAN service
(VPLS) and MPLS: VPLS enables geographically separated LAN segments to be interconnected as a
single bridged domain over an MPLS network. The full functions of the traditional LAN such as MAC
address learning, aging, and switching are emulated across all the remotely connected LAN segments
that are part of a single bridged domain. VPLS delivers an Ethernet service that can span one or more
metro areas and that provides connectivity between multiple sites as if these sites were attached to the
same Ethernet LAN.

Figure 4-43 Logical Elements of E-LAN Implementation in a Network

Multipoint connections are supported via VSIs associated with multiple ACs and PWs. The VSI connects
to CE devices via ACs and to other VSIs via point-to-point pseudowires (PWs). A set of VSIs (with one
VSI per PE) interconnected via PWs defines a VPLS instance.
The figure above shows an example network of three nodes using E-LAN services. Note that the nodes
PE 1 and PE 3 are not physically connected, but via use of MPLS, a service from PE 1 can be routed
through PE 2 to PE 3. From the end customer perspective, services can still be routed from PE 1 to PE 3.

MAC Learning
MAC learning attributes are supported on AC and PW entities, and for VSIs configured as VLSR. The VSI
maintains a Layer 2 forwarding database (FDB) to forward customer frames to the appropriate
destinations based on destination MAC addresses. The VSI learns MAC source addresses based on
frames received on ACs/PWs and dynamically updates the associated FDB.
When a VSI receives a unicast frame from an AC with a known unicast destination MAC, i.e., an entry
exists in the forwarding database (FDB) for the destination MAC, it forwards the frame over exactly one
point-to-point PW or one AC associated with the destination MAC address. In contrast, when a VSI
receives a broadcast frame, a multicast frame, or a unicast frame with an unknown destination MAC from
an AC, it forwards the frame to all ACs and all PWs except the AC on which the frame was received.


Similarly, when a VSI receives a broadcast frame or a multicast frame or a unicast frame with an
unknown destination MAC from a PW, it forwards the frame on all ACs only. To prevent loops in a full
mesh VPLS, a VSI does not forward traffic from one PW to another in the same VPLS (due to
enforcement of the split-horizon rule). The set of PWs that are not allowed to forward traffic to each other
is said to form a split-horizon group. Both the AC and PW support a split horizon group ID in order to
prevent loops. (Note that split horizon groups for ACs must be manually configured by the user.)
A split horizon group is a collection of bridge ports. Traffic cannot flow between members of a split
horizon group. The restriction applies to all types of traffic, including broadcast, multicast, unknown
unicast, and known unicast. If a packet is received on a bridge port that is a member of a split horizon
group, that packet will not be sent out on any other port in the same split horizon group.
The PW and AC also support a MAC flap action attribute for situations in which a source MAC address
repeatedly appears on different interfaces of a VSI (known as a MAC flap). PWs and ACs support a Flap
Action Clear parameter in order to clear a MAC flap.
For AC and PW entities, MAC learning is always enabled. For VLSR VSIs, MAC learning can be enabled
or disabled by the user. The VSI also supports the MAC limit value, and the MAC limit action once the
value is reached. The MAC limit action can be configured to one of the following:
■ Do Not Learn Do Not Flood—The system will not learn the new source MAC address and will not
flood the unknown unicast frames with unknown destination MAC address. If the incoming frame’s
destination MAC is known, then the system will forward the frame following the normal forwarding
behavior.
■ Do Not Learn Flood—The system will not learn the new source MAC address and will flood all
unknown unicast frames with unknown destination MAC addresses. If the incoming frame’s
destination MAC is known, then the system will forward the frame following the normal forwarding
behavior.
■ Do Not Learn Do Not Forward—The system will not learn the new source MAC address.
Furthermore, the system will stop forwarding for the VSI: the system will drop all frames including
the ones with known destination MAC addresses for this particular VSI (likewise for a particular AC
limit).
If the VSI's MAC limit notification is enabled, the VSI will raise an alarm if the MAC limit is reached.
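
The forwarding rules above (known-unicast forwarding, flooding of unknown/broadcast/multicast traffic, and the split-horizon restriction) can be summarized in a simplified Python sketch; the class and attribute names are assumptions for illustration, not the PXM data model, and the MAC-limit actions are reduced to a simple learn-or-not decision.

class Port:
    def __init__(self, name, kind, split_horizon_group=None):
        self.name = name
        self.kind = kind                      # "AC" or "PW"
        self.split_horizon_group = split_horizon_group

class Vsi:
    def __init__(self, ports, mac_limit=10):
        self.ports = ports
        self.fdb = {}                         # MAC address -> learned port
        self.mac_limit = mac_limit

    def _learn(self, src_mac, in_port):
        if src_mac in self.fdb or len(self.fdb) < self.mac_limit:
            self.fdb[src_mac] = in_port       # learn/refresh the source MAC

    def forward(self, src_mac, dst_mac, in_port):
        """Return the list of ports the frame is sent out of."""
        self._learn(src_mac, in_port)
        out = self.fdb.get(dst_mac)
        if out is not None:
            return [out]                      # known unicast: exactly one port
        # Unknown unicast / broadcast / multicast: flood, honoring split horizon.
        flood = []
        for p in self.ports:
            if p is in_port:
                continue                      # never flood back out the ingress port
            same_group = (p.split_horizon_group is not None and
                          p.split_horizon_group == in_port.split_horizon_group)
            if same_group:
                continue                      # split-horizon rule (e.g., PW to PW)
            flood.append(p)
        return flood

ac1, ac2 = Port("AC1", "AC"), Port("AC2", "AC")
pw1 = Port("PW1", "PW", split_horizon_group=1)
pw2 = Port("PW2", "PW", split_horizon_group=1)
vsi = Vsi([ac1, ac2, pw1, pw2])
# A frame with an unknown destination arriving on PW1 floods to the ACs only:
print([p.name for p in vsi.forward("aa", "bb", pw1)])   # ['AC1', 'AC2']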

MPLS and LSP Elements


At the head and tail ends of a service, the user creates an MPLS tunnel and two unidirectional LSPs (a
forward LSP and a reverse LSP) and then specifies the next hop for the LSPs. The next hop entity
identifies which PXM is on the other end of the ODUflexi. At any intermediate nodes, the user must create
the mid-point LSPs.
The figure below shows the MPLS and LSP elements at the end nodes and the intermediate nodes.


Figure 4-44 MPLS and LSP Elements in the Network

Note the following for MPLS-TP:


■ LSPs are unprotected, statically provisioned, point-to-point paths between the network interfaces of
PXMs across a network.
■ LSP services support only class-based queuing (CBQ).
■ A pseudowire can be moved from one MPLS tunnel to another.
■ The next hop of a midpoint LSP can be modified.

Traffic Management and Quality of Service


One of the features of packet switching is the ability to allow large amounts of traffic at the ingress,
tagging the traffic flows based on class of service (CoS), and then at the egress managing congestion
and queuing traffic for delivery.
Figure 4-45: Traffic Management in the Network on page 4-69 shows an overview of the Traffic
Management elements used for packet services, and where each element is implemented in the network.


Figure 4-45 Traffic Management in the Network

The following sections describe the Traffic Management/Quality of Service elements supported for packet
services in the Infinera network:
■ Class of Service (CoS) Mapping on page 4-69
■ Metering and Bandwidth Profiles on page 4-70
■ Queuing and Congestion Management on page 4-72
■ Scheduling on page 4-74
■ Shaping on page 4-75
■ Connection Admission Control (CAC) on page 4-76

Class of Service (CoS) Mapping


The PXM’s class of service (CoS) feature enables the user to separate traffic into different traffic classes
to support various levels of throughput and packet loss when network congestion occurs. This allows
traffic loss under congestion to be managed according to rules that the user configures.
For packet interfaces, the CoS features are used to provide multiple classes of service for different
applications. On the system, the user can configure multiple traffic classes for transmitting packets, define
which packets are placed into each output queue, schedule the transmission service level for each
queue, and manage congestion at the queue level.


The CoS feature works by examining the traffic as it ingresses the network. Traffic is classified into
defined service groups to provide special treatment across the network. As traffic leaves the network at
the egress, the user can configure the system to tag the packets with a different CoS identifier if needed.
From a functionality perspective, the CoS components are:
■ Classification—Packet classification is the process of examining an incoming packet. In PXM,
classifiers associate the incoming packet with a traffic class and a drop precedence and, based on
the associated traffic class, assign the packet to output queues. The PXM can determine the traffic
class based on the CoS identifier in the packet header or based on the AC classification identifiers
(i.e., service).
■ Traffic Class—The traffic classes affect the forwarding, scheduling, and marking policies applied to
the packet as it transits the system. The PXM supports five traffic classes: 0, 2, 4, 6, and 7, with 0
being the lowest priority and 7 being the highest network priority. The traffic class can be defined at
the AC level or at the Ethernet Interface level (if the traffic class is defined at both the AC and the
Ethernet Interface levels, the AC-level traffic class is used for the service).
■ Drop Precedence/Color—Drop precedence or color control the priority of dropping a packet. Loss
priority may affect how the packets are scheduled without affecting the packet’s relative ordering in
the traffic stream. The PXM supports two drop precedence levels: high and low, with high being a
higher likelihood to be dropped.
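
A hedged Python sketch of the classification step follows; the PCP-to-class mapping and the DEI handling shown here are illustrative assumptions, not the PXM's exact rules.

SUPPORTED_CLASSES = (0, 2, 4, 6, 7)          # five classes, 0 lowest priority

def classify(pcp=None, ac_traffic_class=None, ethernet_if_class=0, dei=0):
    if ac_traffic_class is not None:          # AC-level class overrides the interface default
        tc = ac_traffic_class
    elif pcp is not None:
        tc = max(c for c in SUPPORTED_CLASSES if c <= pcp)   # illustrative mapping
    else:
        tc = ethernet_if_class
    # TC-6/TC-7 use a single drop precedence; TC-0/2/4 honor the DEI bit.
    drop_precedence = "low" if tc in (6, 7) or dei == 0 else "high"
    return tc, drop_precedence

print(classify(pcp=5, dei=1))              # (4, 'high')
print(classify(ac_traffic_class=7, dei=1)) # (7, 'low')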

Metering and Bandwidth Profiles


In order to regulate a packet flow (to cap or clip its ingress/egress rate), the flow is monitored to
ascertain whether it complies with the traffic contract. Metering is the process of measuring the temporal
properties (e.g., rate) of a traffic stream selected by a classifier. An ingress bandwidth profile is used to
regulate the amount of ingress traffic at the Ethernet Interface, and an egress bandwidth profile is used to
regulate the amount of traffic at the egress. The PXM supports both Internet Engineering Task Force
(IETF) and Metro Ethernet Forum (MEF) meters.
Metering is implemented using a one- or two-level leaky token bucket algorithm. Packets are colored as
follows:
■ Green—Compliant packets, packets conforming to committed information rate (CIR).
■ Yellow—Exceeded packets, packets that are over CIR but conforming to excess information rate
(EIR).
■ Red—Violating packets, packets that are over EIR.
The following metering types are supported:
■ Single rate three color meter (srTCM)—For srTCM, the following traffic parameters are configured:
□ Committed Information Rate (CIR)
□ Committed Burst Size (CBS)
□ Excess Burst Size (EBS)

Note: For srTCM metering, if the EBS value is zero, packets of all sizes may be recognized as
yellow.


■ Two rate three color meter (trTCM)—For trTCM, the following traffic parameters are configured:
□ Committed Information Rate (CIR)
□ Committed Burst Size (CBS)
□ Excess Burst Size (EBS)
□ Excess Information Rate (EIR)
For packet services on the PXM, the user creates bandwidth profiles to specify the set of parameters
(e.g., CIR, CBS, EIR, EBS, etc.) for the metering algorithm. The bandwidth profile is used to characterize
service frames for the purpose of metering or rate enforcement (i.e., policing):
■ An Ingress Bandwidth Profile is used to regulate the amount of ingress traffic at the ingress PXM
Ethernet Interface.
■ An Egress Bandwidth Profile is used to regulate the amount of egress traffic at the egress PXM
Ethernet Interface.
For MEF-compliant metering, the PXM supports a coupling flag, which can be enabled or disabled.
In addition to the above, meters can operate in color aware or color blind mode:
■ Color Aware—A color aware meter is used when each service frame already has a level of
compliance (i.e., a color) associated with it and that color is taken into account in determining the
level of compliance by the meter. The color on the packet will be used to direct the packets to the
appropriate bucket. Excess green packets will either become yellow or red. Excess yellow packets
will become red.
■ Color Blind—A meter is said to be in color blind mode when the color (if any) already associated
with each service frame (green or yellow, from the DEI field in the VLAN tag) is ignored by the
meter. Metering uses its own mechanism to determine the packet color, and the metering result can
be written onto the packet on the way out.
The following metering actions are supported:
■ None—No action is taken with respect to setting of drop precedence even if the packets are Yellow
Compliant.
■ Remark drop precedence—The drop precedence of packets that fall in the Yellow category is set.
The drop action for red and yellow frames is based on global configuration (i.e., at the PXM level).
Only red packets are dropped; dropping of yellow packets can be achieved by setting EIR=0.
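
The following Python sketch shows a color-blind two-rate three-color meter in the spirit of the description above; it is an illustration under simplified assumptions (token buckets refilled on each call), not the PXM's metering implementation.

import time

class TrTcm:
    def __init__(self, cir_bps, cbs_bytes, eir_bps, ebs_bytes):
        self.cir, self.cbs = cir_bps / 8.0, cbs_bytes     # committed rate/burst
        self.eir, self.ebs = eir_bps / 8.0, ebs_bytes     # excess rate/burst
        self.tc, self.te = cbs_bytes, ebs_bytes           # token bucket levels
        self.last = time.monotonic()

    def color(self, frame_bytes):
        now = time.monotonic()
        elapsed = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + elapsed * self.cir)   # refill committed bucket
        self.te = min(self.ebs, self.te + elapsed * self.eir)   # refill excess bucket
        if frame_bytes <= self.tc:
            self.tc -= frame_bytes
            return "green"                 # conforms to CIR
        if frame_bytes <= self.te:
            self.te -= frame_bytes
            return "yellow"                # exceeds CIR but conforms to EIR
        return "red"                       # violates EIR

meter = TrTcm(cir_bps=10_000_000, cbs_bytes=16_000, eir_bps=5_000_000, ebs_bytes=16_000)
print([meter.color(1500) for _ in range(25)])   # greens, then yellows, then reds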
The following tables provide the meter rate and burst size granularity supported by the PXM.

Table 4-4 PXM Meter Rate Granularity


From (Kbps) To (Kbps) Granularity (Kbps)
18 2308 18.312
2380 4542 36.624
4762 9230 73.248
9344 18688 146

18980 37084 292
37440 74295 585
74944 148717 1171
149952 297561 2343
299968 595249 4687
600000 1181250 9375
1200000 2362500 18750
2400000 4725000 37500
4800000 9450000 75000
9600000 18900000 150000
19200000 37800000 300000
38400000 76200000 600000
76800000 150000000 1200000

Table 4-5 PXM Meter Burst Size Granularity


Start (Kbits) End (Kbits) Granularity (Kbits)
1 131 1
132 262 2
263 524 4
525 1048 8
1064 2112 16
2128 4224 32
4256 8448 65
8449 16777 131
16778 33030 263

Queuing and Congestion Management


Queue management is the set of mechanisms used for managing the length of queues by marking or
dropping packets when necessary or appropriate for congestion control. A queuing system can be
decomposed into three distinct but related functional elements: queues, schedulers, and droppers (figure
below).


Figure 4-46 Queuing Elements for Packet Services

A queue stores packets and controls the packet departure and ordering of traffic streams. A first-in-first-
out (FIFO) queue enqueues a packet to the tail of the queue and dequeues a packet from the head of the
queue. Packets are dequeued in the order in which they were enqueued. The output of a queue is
connected to an input of the scheduler. A scheduler determines the departure time for each packet that
arrives at one of its inputs, based on a service discipline (see Scheduling on page 4-74).
The PXM supports two methods of queuing (see figures below):
■ Class-based queuing (CBQ) reserves a queue for each traffic class; traffic belonging to a traffic
class is directed to the queue reserved for that traffic class. CBQ provides a simple and scalable
QoS architecture (e.g., the number of queues remains constant as the number of flows increases).
However, CBQ can only provide a coarse QoS and is incapable of separating competing flows (one
misbehaving flow can degrade the QoS of well-behaved flows).
■ Enhanced class-based queuing (ECBQ) reserves a separate queue for each flow, and traffic
belonging to a flow is directed to the queue reserved for that flow. ECBQ enables a granular QoS
and also allows separation of competing flows. However, ECBQ is less scalable than CBQ (the
number of queues grows linearly with the number of flows).
In addition, the PXM supports single and dual drop precedence as follows:
■ TC-7 and TC-6 traffic classes support a single drop precedence.
■ TC-4, TC-2, and TC-0 traffic classes support dual drop precedence.


Figure 4-47 Class-based Queuing (CBQ)

Figure 4-48 Enhanced Class-based Queuing (ECBQ)

In addition to the above queuing techniques, queue management is used to anticipate congestion before
it occurs and attempt to avoid congestion. The PXM supports two queue management techniques:
■ Tail Drop (TD), which drops packets when a queue is full until congestion is eliminated. Tail drop
treats all traffic flows equally and does not differentiate between classes of service.
■ Weighted random early detection (WRED), which maintains an average queue length for each
queue configured for WRED. The PXM supports minimum and maximum thresholds for green and
yellow traffic.
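
A simple Python sketch of a WRED drop decision follows; the thresholds and linear drop curve are assumptions for illustration, not the PXM's exact behavior.

import random

WRED_THRESHOLDS = {"green": (40, 80), "yellow": (20, 60)}   # illustrative min/max per color

def wred_drop(avg_queue_len, color):
    lo, hi = WRED_THRESHOLDS[color]
    if avg_queue_len < lo:
        return False                      # below the minimum threshold: never drop
    if avg_queue_len >= hi:
        return True                       # above the maximum threshold: always drop
    drop_probability = (avg_queue_len - lo) / (hi - lo)   # linear ramp in between
    return random.random() < drop_probability

print(wred_drop(10, "yellow"))   # False: below the yellow minimum threshold
print(wred_drop(70, "yellow"))   # True: above the yellow maximum threshold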

Scheduling
The scheduling function determines the departure time of each packet that arrives at one of its inputs. A
scheduler has one or more inputs and exactly one output. Each input is connected to an upstream
element (such as a queue or another scheduler output) and a set of parameters that affects the


scheduling behavior of packets received at that input. A scheduler may be configured through one or
more parameters for each of its inputs that influence the scheduling behavior of packets received at that
input.
For the PXM, the scheduling parameters are configured via the Bandwidth Resource Profile.
Scheduling disciplines may be classified into the following broad categories:
■ Strict priority (SP) scheduling discipline assigns a priority to each scheduler input with respect to all
other inputs feeding into the same scheduler. A SP scheduler serves a higher priority input before a
lower priority input. An undesirable side-effect of SP scheduling is that a misbehaving higher
priority input can starve a lower priority input. To prevent starvation of lower priority inputs, it is
recommended to meter the traffic at network edge entry points. As an additional safeguard,
higher priority scheduler inputs may also be rate limited (e.g., shaped to a rate).
■ Fair queuing (FQ) is a scheduling discipline that allows multiple scheduler inputs to fairly share the
link capacity. A generalization of FQ is called weighted fair queuing (WFQ). Unlike an FQ
scheduler, a WFQ scheduler allows different inputs to have different bandwidth shares.
As described above, the PXM supports 5 traffic classes (TC-0, TC-2, TC-4, TC-6, and TC-7, with
TC-0 being the lowest priority and TC-7 being the highest network priority). The PXM supports the
5P3D traffic class model as specified in IEEE 802.1Q:
■ TC-7 and TC-6 are scheduled using SP and provide low latency/low jitter performance.
■ TC-4 and TC-2 are scheduled using WFQ and provide low drop probability performance.
■ TC-0 provides best-effort performance.
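
The 5P3D scheduling decision can be sketched as follows (illustrative Python only; the WFQ behavior is approximated with a simple credit counter rather than a true WFQ implementation):

from collections import deque

def pick_next(queues, wfq_credits, wfq_weights={4: 3, 2: 1}):
    for tc in (7, 6):                              # strict-priority classes served first
        if queues[tc]:
            return tc, queues[tc].popleft()
    wfq_ready = [tc for tc in (4, 2) if queues[tc]]
    if wfq_ready:
        tc = max(wfq_ready, key=lambda t: wfq_credits[t])   # most accumulated credit
        wfq_credits[tc] -= 1                       # charge the class that is served
        for t in wfq_ready:
            wfq_credits[t] += wfq_weights[t] / sum(wfq_weights.values())
        return tc, queues[tc].popleft()
    if queues[0]:
        return 0, queues[0].popleft()              # best-effort class served last
    return None, None

queues = {tc: deque() for tc in (7, 6, 4, 2, 0)}
queues[6].append("voice"); queues[4].append("video"); queues[0].append("bulk")
credits = {4: 0.0, 2: 0.0}
print(pick_next(queues, credits))   # (6, 'voice'): the strict-priority class is served first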

Shaping
Traffic shaping is the process of delaying packets within a traffic stream to achieve conformance to some
predefined temporal profile (shaping the egress traffic to smooth out possible bursts). For example,
a minimum service rate parameter for a scheduler input may be specified and realized with a token bucket
rate shaper configured with CIR>0 and CBS=0; the CIR rate limiter on a scheduler input ensures that
packets on that input do not exceed the configured minimum service rate. A maximum service rate limit
for a scheduler input may be specified and realized through a token bucket rate shaper configured with
EIR>0 and EBS=0; the EIR rate limiter ensures that packets on that scheduler input do not exceed the
configured maximum service rate.
For the PXM, the shaping parameters are configured via the Bandwidth Resource Profile.
Note that shaping applies only for services configured for enhanced class-based queuing (ECBQ); class-
based queuing (CBQ) does not use shaping (see Queuing and Congestion Management on page 4-72).
Table 4-6: PXM Flow Shaper Rate Granularity on page 4-76 and Table 4-7: PXM Flow Shaper Burst
Size Granularity on page 4-76 provide the flow shaper rate and burst size granularity supported by the
PXM.


Table 4-6 PXM Flow Shaper Rate Granularity


From (Kbps) To (Kbps) Granularity (Kbps)
390 25000 390.625
25000 50000 781.250
50000 100000 1562.500
100000 200000 3125.000
200000 400000 6250.000
400000 800000 12500.000
800000 1600000 25000.000
1600000 3200000 50000.000
3200000 6400000 100000.000
6400000 12800000 200000.000
12800000 25600000 400000.000
25600000 51200000 800000.000
51200000 102400000 1600000.000

Table 4-7 PXM Flow Shaper Burst Size Granularity


Start (Kbits) End (Kbits) Granularity (Kbits)
1024 523264 1024
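
As an illustration of how a requested rate maps onto the granularity in Table 4-6, the following Python sketch rounds a CIR up to the next supported step within its band; the rounding direction is an assumption, and only the first few bands are listed.

import math

SHAPER_BANDS = [                 # (from_kbps, to_kbps, granularity_kbps), abridged from Table 4-6
    (390, 25_000, 390.625),
    (25_000, 50_000, 781.250),
    (50_000, 100_000, 1562.500),
    (100_000, 200_000, 3125.000),
]

def quantize_kbps(rate_kbps):
    for lo, hi, gran in SHAPER_BANDS:
        if lo <= rate_kbps <= hi:
            return math.ceil(rate_kbps / gran) * gran   # next multiple of the band granularity
    raise ValueError("rate outside the abridged band list")

print(quantize_kbps(30_000))   # 30468.75 (39 x 781.25)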

Connection Admission Control (CAC)


Connection Admission Control (CAC) is the set of actions taken by the network during the connection set-
up to determine whether a connection request can be accepted or should be rejected (or if a request for
re-allocation can be accommodated) based on available bandwidth/resources.
CAC is configured in the Bandwidth Resource Profile by specifying CIR and EIR. (Note the CIR
parameter in the Bandwidth Resource Profile is distinct from the CIR parameter specified in the
Bandwidth Profile for metering.) To configure CAC and perform checks:
■ The user configures the service rate via the CIR/EIR parameters on the Bandwidth Resource
Profile.
■ The system maps the user-provided CIR/EIR values to a supported granularity rate (see Table 4-6:
PXM Flow Shaper Rate Granularity on page 4-76) and burst size (see Table 4-7: PXM Flow Shaper
Burst Size Granularity on page 4-76).
■ Once the CIR/EIR values are mapped, the system performs the following checks:
□ For each interface, the sum of the mapped CIR values (for all services) should be less than or
equal to the interface speed.


□ The maximum of the mapped CIR + mapped EIR values (across all services) should be less than the interface speed.
■ If the above CAC checks are successful, the service is allowed.

Note: For ECBQ, the value specified for CAC is also used for the shapers (this does not apply to CBQ,
because CBQ does not use shaping); see Queuing and Congestion Management on page 4-72 and
Shaping on page 4-75.

CAC checks are performed at each Network Interface and Ethernet Interface, as shown in Figure 4-49:
Connection Admission Control (CAC) Checks in the Network on page 4-77.

Figure 4-49 Connection Admission Control (CAC) Checks in the Network
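
The two CAC checks can be sketched in a few lines of Python; the names and the example services are assumptions for illustration, and the rates are taken to be already mapped to the supported granularity.

def cac_admit(services, new_service, interface_speed_gbps):
    """services / new_service: (mapped CIR, mapped EIR) tuples in Gbps."""
    candidate = services + [new_service]
    total_cir = sum(cir for cir, _eir in candidate)
    worst_cir_plus_eir = max(cir + eir for cir, eir in candidate)
    # Check 1: sum of CIRs <= interface speed; Check 2: max(CIR + EIR) < interface speed.
    return (total_cir <= interface_speed_gbps and
            worst_cir_plus_eir < interface_speed_gbps)

# Two existing services (30G CIR + 10G EIR, 30G CIR) on a 100GbE interface:
existing = [(30, 10), (30, 0)]
print(cac_admit(existing, (35, 20), 100))   # True: both checks pass
print(cac_admit(existing, (25, 80), 100))   # False: fails the CIR + EIR check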

Layer 2 Control Protocol (L2CP) Handling


For each Ethernet Interface, the user assigns an L2CP profile. The L2CP profiles (four EPL L2CP profiles
and two EVPL L2CP profiles) are auto-created on the node. For each L2CP profile, L2CP frames are
discarded or tunneled on a per-protocol basis. Table 4-8: Layer 2 Control Protocol (L2CP) Profiles on
page 4-78 shows the L2CP profiles supported by the DTN-X and lists the behavior of the L2CP for each
type of packet.

Note: Any of the L2CP profiles can be applied to an Ethernet interface, regardless of whether a
service is EPL or EVPL (e.g., an EVPL L2CP profile can be applied on an Ethernet interface that


carries an EPL service). If no L2CP profile is specified for an Ethernet interface, the EPL L2CP Profile
2 is applied (discard all).

Table 4-8 Layer 2 Control Protocol (L2CP) Profiles

Packet Type | EPL L2CP Profile 1 | EPL L2CP Profile 2 (Default) | EPL L2CP Profile 3 | EPL L2CP Profile 4 | EVPL L2CP Profile 1 | EVPL L2CP Profile 2
STP-RSTP-MSTP | Tunnel | Discard | Tunnel | Tunnel | Discard | Discard
PAUSE | Discard | Discard | Discard | Discard | Discard | Discard
LACP-LAMP | Discard | Discard | Discard | Tunnel | Discard | Discard
LINK-OAM | Discard | Discard | Discard | Tunnel | Discard | Discard
PORT-AUTHENTICATION | Discard | Discard | Discard | Tunnel | Discard | Discard
E-LMI | Discard | Discard | Discard | Tunnel | Discard | Discard
LLDP | Discard | Discard | Discard | Tunnel | Discard | Discard
PTP-PEER-DELAY | Discard | Discard | Discard | Tunnel | Discard | Discard
ESMC | Discard | Discard | Discard | Tunnel | Discard | Discard
GARP-GMRP | Discard | Discard | Tunnel | Tunnel | Tunnel | Discard

Treatment of Packets Through the Network


At the ingress, incoming packets are filtered at the Ethernet Interface according to the allowed frame type
configured by the user. The filter settings specify which frame types are allowed: untagged (UT) frames,
priority tagged (PT) frames, and VLAN tagged (VT) frames. The following filter settings are supported:
■ Tagged Frames—Only VLAN tagged frames are accepted (VT frames are accepted, PT and UT
frames are discarded)
■ Untagged and Priority Tagged Frames—Non-VLAN tagged frames only (UT and PT frames are
accepted, VT frames are discarded)
■ All—UT, PT, and VT frames (or a mix of them) are accepted
For allowed frames, the treatment of the packet depends on the port type configuration of the Ethernet
Interface:
■ 802.1Q—Used for C-VLAN encapsulation. Packets will have a single (outer) tag protocol identifier
(TPID). The default TPID for the outer tag is 0x8100.
■ 802.1ad—Used for C-VLAN and S-VLAN encapsulation. These packets are double-tagged (outer
and inner TPIDs). The default TPID for the outer tag is 0x88a8; the default TPID for the inner tag is 0x8100.
Based on the port type and the TPID(s) of the incoming packet, the PXM identifies the packet and treats
the packet as shown in Table 4-9: Treatment of Incoming Packets Based on Ethernet Interface Type and


TPID(s) on page 4-79. This behavior is performed on a port-by-port basis (per Ethernet Interface,
depending on the port’s interface type).

Table 4-9 Treatment of Incoming Packets Based on Ethernet Interface Type and TPID(s)

Incoming Packet Tag(s) (outer, inner) | Resulting Packet Treatment (Identified Tag Format)

Port type: 802.1Q with outer TPID set to the default value (0x8100)
Untagged | Untag
Single tagged packet of 0x8100 TPID | C-tag
Single tagged packet of 0x9200 TPID | Untag
Double tagged packet (0x9200, 0x8100) | Untag
Double tagged packet (0x8100, 0x88a8) | C-tag
Double tagged packet (0x8100, 0x9200) | C-tag
Double tagged packet (0x8100, 0x8100) | C-tag
Single priority tagged packet of 0x8100 TPID | Cprio-tag

Port type: 802.1Q with outer TPID set to a custom value (0x9200 is used in this table as an example custom value)
Untagged | Untag
Single tagged packet of 0x9200 TPID | C-tag
Single tagged packet of 0x8100 TPID | Untag
Double tagged packet (0x8100, 0x9200) | Untag
Double tagged packet (0x9200, 0x88a8) | C-tag
Double tagged packet (0x9200, 0x8100) | C-tag
Double tagged packet (0x9200, 0x9200) | C-tag
Single priority tagged packet of 0x9200 TPID | Cprio-tag

Port type: 802.1ad with outer/inner TPIDs set to the default values (0x88a8, 0x8100)
Untagged | Untag
Single tagged packet of 0x8100 TPID | C-tag
Single tagged packet of 0x88a8 TPID | S-tag
Double tagged packet (0x88a8, 0x8100) | S-C tag
Single tagged packet of 0x9200 TPID | Untag
Double tagged packet (0x8100, 0x88a8) | C-tag
Double tagged packet (0x8100, 0x9200) | C-tag
Double tagged packet (0x8100, 0x8100) | C-tag
Single priority tagged packet of 0x8100 TPID | Cprio-tag
Single priority tagged packet of 0x88a8 TPID | Sprio-tag
Double tagged packet (0x88a8, 0x88a8) | S-tag
Double tagged packet (0x88a8, 0x9200) | S-tag

Port type: 802.1ad with outer/inner TPIDs set to custom values (0x9200, 0x9300 are used in this table as example custom values)
Untagged | Untag
Single tagged packet of 0x9300 TPID | C-tag
Single tagged packet of 0x9200 TPID | S-tag
Double tagged packet (0x9200, 0x9300) | S-C tag
Single tagged packet of 0x88a8 TPID | Untag
Double tagged packet (0x88a8, 0x9300) | Untag
Double tagged packet (0x88a8, 0x8100) | Untag
Double tagged packet (0x9300, 0x9200) | C-tag
Double tagged packet (0x9300, 0x88a8) | C-tag
Double tagged packet (0x9300, 0x9300) | C-tag
Single priority tagged packet of 0x9300 TPID | Cprio-tag
Single priority tagged packet of 0x9200 TPID | Sprio-tag
Double tagged packet (0x9200, 0x9200) | S-tag
Double tagged packet (0x9200, 0x88a8) | S-tag

Port type: 802.1ad with outer/inner TPIDs set to the same value (0x88a8, 0x88a8 is used in this table as an example)
Untagged | Untag
Single tagged packet of 0x8100 TPID | Untag
Single tagged packet of 0x88a8 TPID | S-tag
Double tagged packet (0x88a8, 0x8100) | S-tag
Single tagged packet of 0x9200 TPID | Untag
Double tagged packet (0x8100, 0x88a8) | Untag
Double tagged packet (0x8100, 0x9200) | Untag
Double tagged packet (0x8100, 0x8100) | Untag
Single priority tagged packet of 0x8100 TPID | Untag
Single priority tagged packet of 0x88a8 TPID | Sprio-tag
Double tagged packet (0x88a8, 0x88a8) | SC-tag
Double tagged packet (0x88a8, 0x9200) | S-tag

At the packet ingress, the PXM determines the packet’s inner and outer TPID and based on the identified
tag format (Table 4-9: Treatment of Incoming Packets Based on Ethernet Interface Type and TPID(s) on
page 4-79), the Ethernet Interface supports the following ingress actions:


■ For Ethernet interfaces configured with interface type 802.1Q: none, push, pop, swap
■ For Ethernet interfaces configured with interface type 802.1ad: none, pop, swap
Once the Ethernet Interface has identified the packet and performed the configured ingress action, the
packet flow continues to the AC. As shown in Figure 4-50: Ingress VLAN Edit and Egress VLAN Edit on
the PXM on page 4-81, it is at the AC ingress that the PXM performs the ingress VLAN edit (IVE) for
incoming packets.

Figure 4-50 Ingress VLAN Edit and Egress VLAN Edit on the PXM

As shown in Figure 4-50: Ingress VLAN Edit and Egress VLAN Edit on the PXM on page 4-81, the PXM
performs the egress VLAN edit (EVE) at the egress of the AC.
The Ethernet Interface supports the following egress actions (supported for both 802.1Q and 802.1ad
interface types): none, push, pop, swap.
The PXM supports TPID editing: at the egress, the PXM supports overwriting of the TPID on outgoing
frames. The same TPID attributes that are used to identify incoming frames are used when overwriting
the TPIDs of outgoing frames.
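
As an illustration of the identification logic in Table 4-9, the following Python sketch covers only the first block of the table (an 802.1Q port with the default outer TPID of 0x8100); it is a simplified example, not a full reimplementation of the table.

def identify_8021q_default(tpids, priority_tagged=False):
    """tpids: tuple of TPIDs as they appear on the frame, outermost first."""
    if not tpids:
        return "Untag"                     # untagged frame
    if tpids[0] == 0x8100:
        return "Cprio-tag" if priority_tagged else "C-tag"
    return "Untag"                         # unrecognized outer TPID: treated as untagged

print(identify_8021q_default(()))                 # Untag
print(identify_8021q_default((0x8100,)))          # C-tag
print(identify_8021q_default((0x9200, 0x8100)))   # Untag
print(identify_8021q_default((0x8100, 0x88a8)))   # C-tag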

Ethernet OAM
The DTN-X introduces Ethernet Operations, Administration, and Maintenance (OAM) on Layer 2 Ethernet
Services (EVCs). Ethernet OAM supports Ethernet connectivity fault management functions, such as fault
detection and fault notification, as defined in IEEE 802.1Q and ITU-T Y.1731.
Ethernet OAM is supported at the following levels:
■ Service OAM, which monitors the entire EVC service inside the service provider network
■ Link OAM, which provides link monitoring capabilities for the connectivity links between the customer edge and the provider edge.


Figure 4-51 Service and Link OAM

The following topics describe Ethernet OAM architecture, its components and the hierarchy of Ethernet
OAM managed objects:
■ Ethernet OAM Architecture on page 4-82
■ Ethernet OAM Managed Object Hierarchy on page 4-87

Ethernet OAM Architecture


Key entities of Ethernet OAM architecture are as described below:
■ Maintenance Domain (MD): Maintenance domain is defined as a sub-network over which an EVC
is being monitored and is defined by operational or contractual boundaries. Maintenance Domains
are identified by MD name and MD level. There are eight defined nested MD levels (0-7). Higher
levels (such as 7,6,5) provide a broader OAM reach compared to lower levels (such as 2,1,0). See
Maintenance Domain on page 4-82.
■ Maintenance Association (MA): It represents a part of the end-to-end Ethernet service within a
Maintenance Domain. A maintenance association is defined as a service monitoring session between
two MEPs within a domain. See Maintenance Association on page 4-83.
■ Maintenance End Point (MEP): An MEP is an end point of a maintenance association and defines
the boundary of the maintenance domain at that level for a given Ethernet Service. Service OAM
MEPs are attached to the Attachment Circuit associated with an EVC, and Link OAM MEPs are
attached to the Ethernet Interface associated with an EVC. See Maintenance End Point on page 4-84.
■ Remote Maintenance End Point (RMEP): RMEPs are the remote MEPs for a local MEP. They are
configured on other participating Attachment Circuits (ACs) associated with an EVC. See Remote
MEP (RMEP) on page 4-87.

Maintenance Domain
A Maintenance Domain is defined as a sub-network over which an EVC is being monitored and is defined
by operational or contractual boundaries.


Each maintenance domain is assigned a unique maintenance level (in the range of 0 to 7). Maintenance
domain names along with their levels are used to define the hierarchy that exists among domains. Higher
levels (such as 7,6,5) provide a broader OAM reach compared to lower levels (such as 2,1,0). Typically
customers have larger maintenance domains and would have a higher level such as 7. Operators would
have the smallest domains with lower levels such as 0 or 1. Service provider domains would be in
between them in size.
Maintenance domains may nest within another, but should not intersect. In case of nested domains, the
outer domain must have a higher maintenance level than the domain(s) nested within it.

Figure 4-52 Maintenance Domains

A maximum of three maintenance domain levels are supported per service on a given PXM. Following
are the guidelines to determine the levels:
■ MD Level 6: MEF Subscriber Level equivalence.
■ MD Level 4: MEF EVC Level equivalence.
■ MD Level 2: MEF Operator Level equivalence.
Infinera PXM supports setting Shared and Independent maintenance domain levels. In case of shared
levels, the maintenance domain roles and their corresponding entities need to be agreed upon by the
administrators.

Maintenance Association
A maintenance association represents a part of the end to end Ethernet service within a Maintenance
Domain. It is defined by a set of Maintenance End Points (MEPs) at the edge of the domain.


Figure 4-53 Maintenance Association

Each maintenance association entity is identified by MA name (that is unique within a maintenance
domain) and corresponds to an Ethernet Service.
As part of Connectivity Check Messaging, an MEP attached to an MA sends periodic CCM messages to
the remote MEP(s). The CCM interval (3.33ms, 10ms, 100ms, 1 second, or 10 seconds; default value
1 second) is configured on the Maintenance Association.

Maintenance End Point


A Maintenance End Point (MEP) is an end point of a maintenance association and defines the boundary
of the maintenance domain at that level for a given Ethernet Service. MEPs are responsible for confining
Connectivity Fault Messages (CFM) within the domain.
An MEP is associated with one maintenance association which in turn is associated with one
maintenance domain. Hierarchically, a maintenance domain can have one or more maintenance
associations (each monitoring a particular EVC) and each maintenance association has set of MEPs
under it, attached to all the end points of that EVC.
During MEP creation, the interface type to which the MEP is to be associated i.e. Attachment Circuit (AC)
or Ethernet Client Interface is selected. Service OAM MEPs are created on the Attachment Circuit
associated with an EVC and Link maintenance MEPs are created on the Ethernet Client Interface
associated with an EVC.

Figure 4-54 Maintenance End Point

If Continuity Check is enabled on an MEP and the interval for continuity check is defined, the MEP sends
periodic Continuity Check Messages (CCM) to remote MEP(s) of the maintenance association to which
this MEP is attached. In addition, the MEP also injects maintenance signals such as ETH-AIS and ETH-RDI.
MEPs are directional and are classified as Up MEP or Down MEP.


■ Up MEP - This MEP transmits CCM messages toward the Switch Fabric/ Bridge
■ Down MEP - This MEP transmits CCM messages away from Switch Fabric/Bridge

Figure 4-55 Up and Down MEPs

All MEPs under a maintenance association are required to have a unique MEP ID. All MEPs under a
maintenance association should be either Up MEPs or Down MEPs.
Continuity Check Messaging
Once an MEP entity is created and associated to an interface, CCM PDUs are sent at the configured
CCM interval (as defined in the maintenance association). The MEP also expects CCM messages from
remote MEP(s) at the same interval as defined in the maintenance association.
CCM PDUs that are generated include the Port Status and Interface Status as Type-Length-Values (TLVs)
so that the recipient Remote MEPs can act upon them as needed. The ability to include the Port/Interface
status TLVs is available on both Up and Down MEPs to allow for unidirectional faults to be propagated to
the remote end.
The following CCM processing rules are followed by MEPs in accordance with the OAM architecture:
■ An MEP at a particular maintenance domain level transparently passes Service OAM (SOAM)
traffic at a higher maintenance domain level
■ An MEP at a particular maintenance domain level terminates SOAM traffic at its own maintenance
domain level
■ An MEP at a particular maintenance domain level discards SOAM traffic at a lower maintenance
domain level.
This results in a nesting requirement where a maintenance association with a maintenance domain level
cannot exceed the boundary of a maintenance association with a higher maintenance domain level.
The following Ethernet Continuity based alarms with respect to CCM handling are supported:


■ CCM LoC Failure


■ CCM-UNEXP-PERIOD
■ CCM-RDI
■ CCM-MISMERGE
■ CCM-UNEXP-LEVEL
■ CCM-UNEXP-MEP
■ CCM-REMOTE-MAC-ERR
For more information on these alarms, refer to the DTN and DTN-X Alarm and Trouble Clearing Guide .
Ethernet AIS
The ability to enable or disable Ethernet Alarm Indication Signal (ETH-AIS) generation per ITU-T Y.1731
is available on an MEP. When an MEP detects CCM loss or an underlying transport layer detects
service affecting fault(s), an ETH-AIS is generated towards the client layer periodically (as defined by the
AIS Interval) at a maintenance domain level equal to that of the client layer maintenance domain level.
■ ETH-AIS is generated towards the client layer periodically as defined by the AIS Interval (supported
values range from 1 second to 1 minute)
■ Untagged, IEEE 802.1Q and IEEE 802.1ad AIS Frames are supported.
■ Based on the AIS frame type, provisioning of Outer/Inner Tag VLAN-ID(s) is supported
■ Priority and Frame Drop Eligibility can be set on an ETH-AIS PDU

Note: Down MEPs do not support ETH-AIS generation. However, they support ETH-AIS monitoring
and report any alarms when such a condition is detected.

Note: The ETH-AIS PDU is generated from the highest maintenance domain level present. Internally, the
AIS indication is propagated from the lower to the higher maintenance levels; once the highest maintenance
level is identified, the ETH-AIS PDU is sent from that level.
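As a rough illustration of the notes above, the following sketch (hypothetical names, not the IQ NOS implementation) emits ETH-AIS at the configured AIS interval from the highest maintenance domain level present when CCM loss or a transport-layer fault is detected.

# Simplified model of ETH-AIS generation: on CCM loss or a server-layer fault,
# ETH-AIS is emitted periodically (AIS interval of 1 s to 1 min) toward the
# client layer, from the highest maintenance domain level present.
from dataclasses import dataclass

@dataclass
class AisConfig:
    ais_interval_s: int          # supported range per the text: 1 to 60 seconds
    client_md_levels: list[int]  # MD levels of the client maintenance associations

def ais_tx_level(cfg: AisConfig) -> int:
    """AIS is sent from the highest maintenance domain level present."""
    return max(cfg.client_md_levels)

def should_send_ais(ccm_loss: bool, transport_fault: bool) -> bool:
    return ccm_loss or transport_fault

cfg = AisConfig(ais_interval_s=1, client_md_levels=[3, 5])
if should_send_ais(ccm_loss=True, transport_fault=False):
    print(f"Send ETH-AIS at MD level {ais_tx_level(cfg)} every {cfg.ais_interval_s}s")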

Ethernet RDI
When any outstanding Ethernet Continuity based alarms are present on the local MEP and the
corresponding defect’s priority is greater than or equal to the Lowest Fault Defect Priority configured on
the MEP, an Ethernet Remote Defect Indication (RDI) bit is set on the transmitted CCM PDU.
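The RDI behavior reduces to a priority comparison, as in the following minimal sketch (hypothetical names).

# Minimal sketch: the RDI bit in transmitted CCMs is set while any outstanding
# continuity defect has a priority greater than or equal to the MEP's configured
# Lowest Fault Defect Priority. Names are illustrative only.
def rdi_bit(outstanding_defect_priorities: list[int],
            lowest_fault_defect_priority: int) -> bool:
    return any(p >= lowest_fault_defect_priority
               for p in outstanding_defect_priorities)

# Example: defects with priorities 2 and 4 are outstanding; the threshold is 3.
print(rdi_bit([2, 4], lowest_fault_defect_priority=3))  # True -> set RDI in CCM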
Ethernet Client Signal Failure (CSF) on MEP
The Ethernet CSF signal informs a peer MEP of the detection of a failure or defect in communication with
a client when the client itself does not support a means of notification to its peer, such as ETH-AIS or the
RDI function of ETH-CC.
Ethernet CSF is supported on an OAM MEP for an Ethernet Private Line (EPL) Ethernet Virtual Connection
(port-based E-Line service).
Note the following for Ethernet CSF:


■ Ethernet Interfaces on the PXM support the tributary disable action, which triggers transmit laser
shutdown on the PXM port when an ETH-CSF alarm is raised on an MEP whose parent Ethernet
Interface port is enabled for the tributary disable action.
■ To enable the tributary disable action on CSF, the PXM-1-100GE requires a cold reboot in order to
upgrade to the required firmware version. (This is not required for the PXM-16-10GE.)
■ CSF-related attributes are configurable only on Up MEPs associated with an EP-Line service.
■ Ethernet CSF is supported only when both ACs have an ingress match type of "Match Interface."
■ CSF messaging is applicable only on Up MEPs.
■ A port can support either Ethernet CSF transmission or Tributary Disable Action:
□ An Ethernet port cannot transmit Ethernet CSF towards the line side if the port itself is
already under Tributary Disable Action due to receiving a CSF message.
□ Conversely, an Ethernet port cannot perform its Tributary Disable Action upon receiving a
CSF message if the port is already transmitting Ethernet CSF toward the line side.
■ MD level is encoded in two places on a CSF PDU, and the MD levels from these two places must
match:
□ The destination MAC address
□ The PDU OAM header
If the port receives a CSF PDU where the MD levels do not match in these two places, the PDU is
forwarded instead of being dropped (a minimal consistency check is sketched below).
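A minimal sketch of that consistency check follows. It assumes the IEEE 802.1ag/Y.1731 convention that the CFM group destination MAC address 01-80-C2-00-00-3x carries the MD level in its last nibble; that encoding and the helper names are assumptions for illustration, not statements about the PXM implementation.

# Sketch of the MD-level consistency check on a received CSF PDU, assuming the
# 802.1ag group destination address 01-80-C2-00-00-3x (x = MD level, 0-7).
def md_level_from_dest_mac(dest_mac: str) -> int:
    # e.g. "01-80-C2-00-00-35" -> level 5 (assumed encoding, see note above)
    return int(dest_mac.split("-")[-1], 16) & 0x0F

def process_csf_pdu(dest_mac: str, oam_header_md_level: int) -> str:
    if md_level_from_dest_mac(dest_mac) == oam_header_md_level:
        return "process"   # levels agree: handle the CSF PDU
    return "forward"       # mismatch: the PDU is forwarded, not dropped

print(process_csf_pdu("01-80-C2-00-00-35", oam_header_md_level=5))  # process
print(process_csf_pdu("01-80-C2-00-00-35", oam_header_md_level=3))  # forward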
Remote MEP (RMEP)
RMEPs are remote MEPs for a local MEP. They are configured on other participating Attachment Circuits
(ACs) associated with an EVC. For example, if there are five end points in an EVC, then each MEP in the
five ACs should have four remote MEPs (RMEPs) configured for proper CCM transmission.
The following types of RMEPs are supported:
■ Manual - Users can manually create RMEPs from the management interfaces
■ Auto-created - RMEPs are auto-created by the network element when the CCM frame is received.

Ethernet OAM Managed Object Hierarchy


The figure below shows the hierarchy of Ethernet OAM managed objects on a DTN-X with PXM.


Figure 4-56 Ethernet OAM Managed Object Hierarchy

Scalability for Packet Services on the DTN-X


The table below lists the supported scalability for packet services on the PXM/DTN-X.

Table 4-10 PXM Scalability


Hardware Support
Maximum number of PXMs per XTC-10: 9 (2 per OTM2)
Maximum number of PXMs per XTC-4: 5 (2 per OTM2)
Maximum number of PXMs per XTC-2/XTC-2E: 4 (2 per OTSM/OTXM)

Provisioning Support
Maximum number of VLANs per port: 4,000
Note: There is no limitation on the VLAN ID used. VLAN IDs can range from 0-4095.
Maximum number of service instances per PXM:
■ 2,000 per PXM-16-10GE
■ 1,000 per PXM-1-100GE
Maximum number of service instances per XTC:
For XTC-10:
■ 15,000 non-restorable services
■ 4,000 restorable services
For XTC-4:
■ 10,000 non-restorable services
■ 4,000 restorable services
For XTC-2/XTC-2E:
■ 4,000 restorable services
Maximum number of pseudowires (PWs) per PXM: 2,000 per PXM-16-10GE; 1,000 per PXM-1-100GE
Maximum number of PWs per XTC: 14,000 per XTC-10; 8,000 per XTC-4
Maximum number of PWs per ODUflexi SNC: 2,000 for PXM-16-10GE; 1,000 for PXM-1-100GE
Maximum number of 1 Port D-SNCP per PXM: 10 (the same maximum for PXM-16-10GE and PXM-1-100GE)
Maximum number of SNCs per PXM: 10 ODUflexi SNCs
Switching capacity: 200Gbps packet switching
Packet buffer memory: 3GB memory available per PXM (2GB memory can be used per queue)
Maximum number of queues: 5 for class-based scheduler (CBQ); 2,000 for enhanced class-based scheduler (ECBQ)
Note: The CBQ maximum is per port (a port being a client Ethernet interface or a network SNC-based
interface). The ECBQ maximum is per interface; queues are shared from a common pool.
Ethernet frame MTU/MRU: 64 - 9216 bytes

MPLS
Maximum number of label switched paths (LSPs) per PXM: 2,000
Maximum number of MPLS tunnels per PXM: 2,000

Ethernet LAN
Maximum number of multi-point (EP-LAN and EVP-LAN) service instances per PXM: 1,000
Maximum number of endpoints per E-LAN service: 24
Maximum number of MAC entries per PXM: 64,000 dynamic MAC entries; 1,000 static MAC entries

Ethernet OAM
Maximum number of MEPs per PXM: 500
Maximum number of AISs per PXM: 150

Meter Values
Meter rate range: 1 Mbps - 100Gbps
Meter rate granularity: see Table 4-4: PXM Meter Rate Granularity on page 4-71
Meter burst size range: 1 - 33,030 kbits (128 Bytes - 4 MBytes)
Meter burst size granularity: see Table 4-5: PXM Meter Burst Size Granularity on page 4-72

Shaper Values
Shaper rate range: 1 Mbps - 100Gbps
Shaper rate granularity: see Table 4-6: PXM Flow Shaper Rate Granularity on page 4-76
Shaper burst size range: 20,000 - 512 KBytes
Shaper burst size granularity: 1024 Bytes

PXM Standard Compliance


Table 4-11: PXM Standard Compliance on page 4-91 lists the standards to which the PXM is compliant.


Table 4-11 PXM Standard Compliance


Standard Title
Metro Ethernet Forum (MEF)
10.2 Ethernet Services Attributes
23.1 Class of Service
26.1 External Network Network Interface (ENNI)
Internet Engineering Task Force (IETF)
RFC 2697 A Single Rate Three Color Marker
RFC 2819 Remote Network Monitoring Management Information Base
RFC 2863 The Interfaces Group MIB
RFC 3273 Remote Network Monitoring Management Information Base for High Capacity Networks
RFC 3985 Pseudo Wire Emulation Edge-to-Edge (PWE3) Architecture
RFC 4115 A Differentiated Service Two-Rate, Three-Color Marker with Efficient Handling of in-Profile
Traffic
RFC 4448 Encapsulation Methods for Transport of Ethernet over MPLS Networks
RFC 4664 Framework for Layer 2 Virtual Private Networks (L2VPNs)
RFC 5601 Pseudowire (PW) Management Information Base (MIB)
Institute of Electrical and Electronics Engineers (IEEE)
802.1Q Virtual LANs
802.1ad Provider Bridges


FlexILS Service Provisioning


FlexILS services are end-to-end optical light paths that are designed for a specific optical slice or super
channel across the Infinera FlexILS network. FlexILS service provisioning is described in the following
sections:
■ Manual Optical Cross-connects on page 4-92
■ Manual Optical Cross-connects for SLTE Configurations on page 4-97
■ Traffic Engineering (TE) Links on page 4-104
■ Optically Engineered Lightpaths (OELs) on page 4-107
■ Optical Subnetwork Connections (O-SNCs) on page 4-109
■ Service Provisioning with OFx-100 and FMM-C-5 on page 4-115
■ Provisioning Configurations with FMP-C on page 4-118
■ Automatic Tuning of Line Module Super Channels on page 4-120

Manual Optical Cross-connects


FlexILS nodes support manual provisioning mode for optical cross-connects, wherein the optical cross-
connects are manually configured in each FlexILS node along the circuit’s route. This mode provides
users full control over all circuit resources, including network elements, cards, and super channel
endpoints. Manual optical cross-connects can be assigned a circuit ID to correlate multiple optical cross-
connects in multiple nodes forming an end-to-end circuit.
The following sections describe the types of manual optical cross-connects supported by FlexILS nodes:
■ Add/Drop Optical Cross-connect on page 4-92
■ Express Optical Cross-connect on page 4-96

Add/Drop Optical Cross-connect


The optical add/drop cross-connect is a bidirectional optical cross-connect that associates an add/drop
tributary-side super channel endpoint on the FRM, FSM or FMM-C-5 to the line-side super channel
endpoint on the FRM. The figures below show example configurations with add/drop optical cross-
connects:
■ For configurations with FSM, an add/drop optical cross-connect is between the tributary-side super
channel CTP endpoint on the FSM and the line-side super channel CTP endpoint on the FRM, see
Figure 4-57: Add/Drop Optical Cross-connect (with FSM and FRM) on page 4-93.
■ For configurations with FRM only, or with FMM-F250 and FRM, an add/drop optical cross-connect
is between the tributary-side super channel CTP endpoint on the FRM and the line-side super
channel CTP endpoint on the FRM, see Figure 4-58: Add/Drop Optical Cross-connect (FRM only)
on page 4-94 and Figure 4-59: Add/Drop Optical Cross-connect (with FMM-F250 and FRM) on
page 4-94.


■ For configurations with FMM-C-5 and FRM, an add/drop optical cross-connect is between the
tributary-side super channel CTP endpoint on the FMM-C-5 and the line-side super channel CTP
endpoint on the FRM, see Figure 4-60: Add/Drop Optical Cross-connect (example with FMM-C-5
and FRM-4D) on page 4-95. See Service Provisioning with OFx-100 and FMM-C-5 on page 4-115
for additional information on service provisioning with FMM-C-5.
■ For configurations with an XT(S)-3300/ XT(S)-3600, FBM and FRM, an add/drop optical cross-
connect is between the tributary-side super channel CTP endpoint on the FBM and the line-side
super channel CTP endpoint on the FRM. See Figure 4-62: Add/Drop optical cross-connect
between FBM and FRM (XT-3300/XT-3600 configuration) on page 4-96.
■ For configurations with an OFx-1200, FBM and FRM, an add/drop optical cross-connect is between
the tributary-side super channel CTP endpoint on the FBM and the line-side super channel CTP
endpoint on the FRM. See Figure 4-63: Add/Drop optical cross-connect between FBM and FRM
(OFx-1200 configuration) on page 4-96.

Figure 4-57 Add/Drop Optical Cross-connect (with FSM and FRM)


Figure 4-58 Add/Drop Optical Cross-connect (FRM only)

Figure 4-59 Add/Drop Optical Cross-connect (with FMM-F250 and FRM)


Figure 4-60 Add/Drop Optical Cross-connect (example with FMM-C-5 and FRM-4D)

Figure 4-61 Add/Drop optical cross-connect on FRM (Sample XT-3300/XTS-3300 configuration)


Figure 4-62 Add/Drop optical cross-connect between FBM and FRM (XT-3300/XT-3600 configuration)

Figure 4-63 Add/Drop optical cross-connect between FBM and FRM (OFx-1200 configuration)

Express Optical Cross-connect


The optical express cross-connect is a bidirectional optical cross-connect that associates the FRM band
endpoints on two FRMs. Figure 4-64: Express Optical Cross-connect on page 4-97 shows an example
configuration with an express optical cross-connect.


Figure 4-64 Express Optical Cross-connect

Manual Optical Cross-connects for SLTE Configurations


In addition to standard manual optical cross-connects as described in Manual Optical Cross-connects on
page 4-92, an FRM-9D/FRM-20X configured for SLTE can also support the following special manual
optical cross-connects (see SLTE Configuration with FlexILS Nodes for information on FlexILS SLTE
configurations):

Note: For FRMs, SLTE mode is supported on the FRM-9D and FRM-20X only; SLTE is not supported
for FRM-4D.

■ Channel blocking optical cross-connects for cases where a certain portion of the optical spectrum is
not available for service provisioning. See Channel Blocking Optical Cross-connects on page 4-
98.
■ Addition of ASE idlers to the line system. See ASE Idler Optical Cross-connects on page 4-99.
■ Dynamic WSS resizing manual optical cross-connects in SLTE configuration. See Dynamic WSS
Resizing on page 4-100.
■ Split spectrum mode for manual optical cross-connects in SLTE configuration. See Split Spectrum
Mode for Manual Optical Cross-Connects on SOFx-500.


Figure 4-65: FlexILS SLTE Manual Optical Cross-connects on page 4-98 shows an example of a
FlexILS wave spectrum that includes ASE idlers and channel blocking.

Figure 4-65 FlexILS SLTE Manual Optical Cross-connects

Channel Blocking Optical Cross-connects


Channel blocking optical cross-connects are supported for SLTE configurations where a certain portion of
the optical spectrum is not available for service provisioning, where specific frequency slots used by
FlexILS modules are outside the supported operating range of the submarine line system, or where
specific frequency slots have poor optical performance due to physical characteristics and reach
limitations of the submarine line system.
For such cases, FlexILS SLTE terminal nodes support channel blocking optical cross-connects. For
example, in Figure 4-65: FlexILS SLTE Manual Optical Cross-connects on page 4-98 Super Channel #X
uses channel blocking to block carriers at the edges of the super channel that are outside of the
supported operating range of the submarine line system. Super Channel #Y uses channel blocking to
block frequency slots that have poor optical performance.
Channel blocking is implemented at the FRM-9D WSS by customizing the optical switching passband that
is associated with an SOFX/SOFM (for add/drop connections) or with another FRM-9D (for express
connections). The user creates an optical cross-connect on the FRM-9D and specifies the frequency slots
to be used in the optical cross-connect. To block channels, the user omits the frequency slots to be
blocked from the list used by the optical cross-connect.
Note the following for channel blocking optical cross-connects:
■ Channel blocking optical cross-connects are supported only for FRM-9Ds in SLTE operating mode.
■ Channel blocking optical cross-connects are supported only for SCGs with interface type
configured for Infinera Wave.
■ Channel blocking optical cross-connects are supported for both the multiplexing and de-multiplexing
directions.


■ Channel blocking is allowed on any slots in a super channel (in the middle range of the super
channel, or at the ends of the super channel, etc.). The blocked slices can be used by any other
optical cross-connect.
■ When specifying the frequency slot list, the selected slices to be used in the channel blocking
optical cross-connect must be within the range of the selected super channel.
■ Multiple pass bands are supported in a single super channel. However, each passband must
contain a minimum of 3 contiguous slices within the super channel.
■ The blocked bands can be from 1 to 20 contiguous slices.
■ On the associated SOFM/SOFX, the channels which have been blocked via the channel blocking
optical cross-connect must be administratively locked in order to prevent alarming on those
channels.
■ Pre-emphasis is applicable to channel blocking optical cross-connects to compensate for signal
quality deviations over long distances. The FRM-9D can apply a fixed attenuation of 0 to 18dB to each
frequency slot (12.5GHz granularity) across the C-band spectrum.
■ Channel blocking optical cross-connects are supported for both contiguous spectrum (CS) super
channels and for split spectrum (SS) super channels.

Note: See Split Spectrum Mode for Manual Optical Cross-Connects on SOFx-500) for information on
Split Spectrum mode.

ASE Idler Optical Cross-connects


In addition to supporting manual optical cross-connects that are sourced from an SOFX/SOFM, an
FRM-9D in SLTE mode can also support a manual optical cross-connect whose source is an ASE idler.
For each ASE idler channel, an optical passband is provisioned on the FRM-9D by creating an add/drop
optical cross-connect on the FRM-9D tributary port with user-specified frequency slot lists.
For example, in Figure 4-65: FlexILS SLTE Manual Optical Cross-connects on page 4-98, Super Channel
#Z and Super Channel #A are configured as ASE idler passbands.
Note the following for ASE idler optical cross-connects:
■ ASE idler optical cross-connects are supported only for FRM-9Ds in SLTE operating mode.
■ ASE idler optical cross-connects are created in the multiplexing direction (not the de-multiplexing
direction), except in the case of express ASE idler optical cross-connects, where cross-connects
are created in both the multiplexing direction and de-multiplexing direction.
■ ASE idler optical cross-connects are supported on ASE idler tributary/system ports and on FRM-9D
line ports (for ASE idler express connections between FRM-9Ds).
■ ASE idler optical cross-connects are supported only for SCGs with interface type configured for
ASE Idler.
■ For ASE idler optical cross-connects, the super channel number is set to NONE. The user must
explicitly specify the frequency slot list.
■ For ASE idler optical cross-connects, the frequency slot plan must be set to ASE-IDLER.


■ The range of slices can be selected anywhere in the spectrum, but the ASE idler optical cross-
connects must contain a minimum of 3 and a maximum of 40 contiguous slices.
■ Pre-emphasis is applicable to ASE idler optical cross-connects.
■ For ASE idler optical cross-connects (either for add/drop or for express connections between two
FRM-9Ds), only one passband is supported in each ASE idler optical cross-connect. To create
multiple passbands, the user can create multiple ASE idler optical cross-connects from the same FSM
tributary (for add/drop cross-connects) or between the two FRM-9Ds (for SLTE express cross-
connects).

Dynamic WSS Resizing


FlexILS service provisioning supports WSS resizing for manual optical cross-connects on an FRM-9D in
SLTE mode (see SLTE Configuration with FlexILS Nodes). Dynamic WSS resizing refers to a non-service-
affecting resize operation performed by the wavelength selective switch (WSS) on the FRM-9D.
Dynamic WSS Resizing is supported for both split spectrum and contiguous spectrum cross-connects,
and for both normal and channel blocking optical cross-connects. The following resizing operations are
supported.
■ Replacing an ASE idler channel with a new super channel
■ Replacing an existing super channel with an ASE idler channel
■ Widening or narrowing an existing super channel
Starting in Release 20.0, dynamic WSS resizing with an FRM-9D associated with XT(S)-3600, AOFx-1200, or
SOFx-1200 is supported. This introduces the option to resize and release portions of the spectrum for an
SOFx-500 set to Infinera Wave mode and using an optical cross-connect of type FSP-250GHz in the
FRM-9D, resizing an existing Infinera Wave SOFx-500 (FSP-250GHz) to a smaller size to
accommodate insertion of ICE 4 carriers. This is supported for contiguous spectrum cross-connects. The
following resizing options are now supported:
■ Replacing an ASE idler channel with a new AOFx-500/SOFx-500 or ICE 4 super channel
■ Replacing an existing super channel with an ASE idler channel
■ Widening or narrowing an existing AOFx-500/SOFx-500 or ICE 4 super channel
■ Replacing an AOFx-500/SOFx-500 super channel with an ICE 4 super channel
Dynamic WSS resizing is supported by FRM-9Ds in SLTE mode only, and only for the following types of
traffic:
■ Add/drop manual optical cross-connections—cross-connects between the tributary-side super
channel CTP endpoint on the FRM and the line-side super channel CTP endpoint on the FRM.
■ Express manual optical cross-connections—cross-connects between two line-side super channel
CTP endpoints on the FRM (note that WSS resize is supported only for traffic originating on an
SOFM/SOFX)
■ ASE idler manual optical cross-connections

Note: Dynamic WSS resizing is supported only for traffic originating from the following line
modules:


□ AOFx-500, SOFx-500
□ ICE 4 modules - XT(S)-3300, XT(S)-3600, AOFx-1200, SOFx-1200

The dynamic WSS resizing operation is performed by editing an existing optical manual cross-connect via
the Frequency Slot List parameter and the Possible Frequency Slot List parameter:
■ Frequency Slot List (FSL)—Defines the frequency slots to be used by an optical manual cross-
connect. The frequency slots specified in the FSL must be a subset of the frequency slots specified
in the Possible Frequency Slot List.
■ Possible Frequency Slot List (PFSL)—Defines the outermost boundary of the frequency slots that
can be used by the optical manual cross-connect. The default range of the PFSL is the entire
frequency slot list in the associated super channel.
To shrink a passband, the user edits the cross-connect's FSL to specify a narrower range of slices
(alternatively, the user can delete the PFSL and corresponding FSL entries). To widen a passband, the
user first adds frequency slot entries to the PFSL and then edits the FSL values to include the additional
frequency slots, staying within the PFSL range. During WSS resizing, a passband cannot be shrunk below
3 slices; it can instead be deleted.
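The FSL/PFSL relationship described above can be illustrated with a minimal validation sketch. It applies only the constraints stated in this section (the FSL must be a subset of the PFSL, a passband spans 3 to 40 contiguous 12.5GHz slices, and the FSL cannot reuse slices of another passband); the function and parameter names are hypothetical, not IQ NOS APIs.

# Illustrative check of a Frequency Slot List (FSL) edit against the
# Possible Frequency Slot List (PFSL) and the passband-size rules in this section.
MIN_SLICES, MAX_SLICES = 3, 40  # per passband, 12.5 GHz per slice

def validate_fsl_edit(new_fsl: set[int], pfsl: set[int],
                      slices_in_use: set[int]) -> list[str]:
    errors = []
    if not new_fsl <= pfsl:
        errors.append("FSL must be a subset of the PFSL")
    if not (MIN_SLICES <= len(new_fsl) <= MAX_SLICES):
        errors.append(f"passband must be {MIN_SLICES}-{MAX_SLICES} slices")
    if new_fsl & slices_in_use:
        errors.append("FSL overlaps slices already used by another passband")
    return errors

# Example: widen a passband from slices 10-14 to 10-17 within a PFSL of 10-20.
pfsl = set(range(10, 21))
print(validate_fsl_edit(set(range(10, 18)), pfsl, slices_in_use=set()))  # []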
Note the following for dynamic WSS resizing:
■ Both super channel CTP endpoints associated with the manual optical cross-connect must be in
the locked or maintenance administrative state.
■ The passband can be changed with a granularity of 1 slice (12.5GHz):
□ For contiguous spectrum optical cross-connects, the passband must be a minimum of 3
slices and a maximum of 40 slices.
□ For split spectrum optical cross-connects, the passband must be a minimum of 3 slices and
a maximum of 40 slices (with a channel blocking super channel in between).
□ For ASE idler cross-connects, the passband must be a minimum of 3 slices and a maximum
of 40 slices.
■ The values specified in an optical cross-connect’s FSL must be a sub-set of the values in the
PFSL.
■ For data-carrying optical cross-connects, the value range of both the PFSL and the FSL must be
within the frequency slot range supported by the provisioned super channel number.
(Note that this does not apply to ASE idler optical cross-connects, which can span across super
channels.)
■ Starting in Release 20.0, for data-carrying optical cross-connects with ICE 4 line modules, both the
PFSL and the FSL support any frequency range within the 40 slice limit.
■ The FSL cannot contain any slice(s) that are already used in an existing passband. (Note that for
ASE idler cross-connects, the PFSL can contain slices that are also specified in the PFSL of
another optical ASE idler cross-connect.)
■ If the re-provisioning of the passband uses any frequency slot not included in the PFSL, then the
re-provision operation will be rejected.


■ For data-carrying optical cross-connects, any passband narrowing operation (where slices are
removed from the FSL) will impact traffic on the removed slices.
■ Multiple entries are not supported for ASE idler cross-connects: ASE idler cross-connects support
only one entry in the PFSL parameter and one entry in the FSL parameter. Starting in Release 20.0,
this applies to super channels on both AOFx-500/SOFx-500 and XT(S)-3600/AOFx-1200/SOFx-1200.
Also, for ASE idler cross-connects the PFSL parameter cannot be edited; only the FSL can be edited.
■ For contiguous spectrum and split spectrum data-carrying optical cross-connects, the PFSL and
FSL parameters support multiple entries to specify ranges and groups of frequency slots (e.g.,
“-274&10&-254&10”). Note the following for multiple entries in these fields:
□ For contiguous spectrum optical cross-connects, it is not supported to have the ratio of n:m
entries (more than one entry in the PFSL parameter and a different number greater than one
in the FSL parameter). The following ratios are supported for the number of entries in the
PFSL to the number of entries in the FSL:
1:1 (one entry in the PFSL parameter to one entry in FSL parameter)
1:n (one entry in the PFSL parameter to n entries in FSL parameter)
n:n (the same number of entries in both PFSL and FSL parameters).
□ For split spectrum optical cross-connects, the PFSL and FSL parameters must have more
than one entry (to specify a range of slices from both of the super channels in the split
spectrum cross-connect). Therefore, for split spectrum only the ratio of n:n is supported (the
same number of entries in both PFSL and FSL parameters, where n is greater than 1).
Neither the 1:n, 1:1, nor m:n ratios are supported for split spectrum optical cross-connects.
□ The FSL parameter values can be edited to merge multiple entries into one entry, or to split a
single entry into multiple entries. Merging or splitting entries in the FSL parameter will affect
traffic. For the FSL parameter, in a single operation either the value of an entry can be edited or
an entry can be added/removed, but not both at the same time. Note that this feature is not
applicable for Gen4 super channels.
□ The entries in the PFSL parameter cannot be edited; entries in the PFSL can only be added or
deleted.
□ For the PFSL parameter, multiple entries can specify overlapping frequency slots.
□ For the PFSL parameter, an entry can be added only to the end of the list in the parameter
field. For deleting, an item can be deleted from any position in the list.
□ In the case of split spectrum optical cross-connects, an item in the PFSL parameter cannot
span both parts of the split spectrum super channel.
■ When resizing ASE idler cross-connects, the values in the new (resized) FSL must have at least
one slice in common with the values in the existing/previous FSL.
■ After an upgrade, existing optical cross-connects will have the same value for the Possible
Frequency Slot List and the Frequency Slot List for FRMs associated with an SOFx-500.


Minimized guard band support


Minimized guard band support is introduced to optimally utilize and maximize the bandwidth between a set
of nodes in the network by reducing the guard band between the edge carriers of two line modules. This
support is applicable for all Gen4 line modules in SLTE mode.

Figure 4-66 Example - Minimized guard band

The example spectrum above illustrates a configuration where two adjacent SCHs are separated by a
12.5 GHz guard band. This 12.5 GHz guard band is required irrespective of whether the two SCHs come
from cross-connections on the same port or from adjacent ports. When the carriers are multiplexed
together and passed through a single port (as in the case of an FMP-C), the 12.5 GHz guard band between
the SCHs can be removed. A guard band of 4 GHz is required between the last carrier of the first super
channel and the first carrier of the next super channel (C4 to C1' in the second spectrum above) to handle
tuning considerations between the two carrier sources, leading to an effective bandwidth saving of
8.5 GHz compared to individual SCHs. However, the composite SCH still needs a guard band of 12.5 GHz
(6.25 GHz on either side) for it to be cross-connected through the WSS.
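The bandwidth saving quoted above follows directly from the two guard band values. The following sketch reworks the arithmetic; the 75GHz super channel widths used in the example are hypothetical.

# Worked arithmetic for the minimized guard band case described above.
SEPARATE_GUARD_GHZ = 12.5   # guard band between two independent SCHs
MINIMIZED_GUARD_GHZ = 4.0   # C4-to-C1' spacing inside the composite SCH
WSS_EDGE_GUARD_GHZ = 12.5   # 6.25 GHz on either side of the composite SCH

saving_ghz = SEPARATE_GUARD_GHZ - MINIMIZED_GUARD_GHZ
print(f"Inter-SCH saving: {saving_ghz} GHz")  # 8.5 GHz, as stated above

def composite_width(sch_widths_ghz: list[float]) -> float:
    """Width of a composite SCH built from several SCHs multiplexed on one port
    (e.g., via an FMP-C), using the minimized 4 GHz inter-SCH guard band."""
    internal_guards = MINIMIZED_GUARD_GHZ * (len(sch_widths_ghz) - 1)
    return sum(sch_widths_ghz) + internal_guards

# Hypothetical example: two 75 GHz SCHs combined into one composite passband.
print(composite_width([75.0, 75.0]))  # 154.0 GHz (plus the 12.5 GHz WSS guard)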

Alpha bin configuration


The alpha bin of a carrier defines the sharpness of the carrier's edges and therefore the effective width of
the carrier. The 'Alpha bin' attribute defines the excess bandwidth value, or roll-off factor. The default
alpha bin value is 8 (Alphabin_8) for both terrestrial and submarine applications. Release 20.0 supports
configuration of a new alpha bin value, Alphabin_2 (value 2), on the SCG properties of the SCG PTP for the
XTS-3300 line modules. The closer the α value is to zero, the sharper the roll-off and the less bandwidth
the carrier utilizes.

Note: The Alphabin_2 configurations for XTS-3300 are applicable for XTS3312-YN-EZC15 PONs
only.

This alpha bin is used with the minimized guard band support to optimally tune the carriers for the most
effective utilization of the spectrum, typically when combined with an FMP-C for multiplexing carriers
from two different line modules into a single contiguous passband, thereby minimizing the inter-SCH
guard band between the two line modules.
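As a rough illustration of why a smaller roll-off narrows the carrier, the following sketch applies the standard excess-bandwidth relation for a pulse-shaped carrier (occupied width is approximately the baud rate multiplied by 1 + α). The baud rate and α values are hypothetical examples; they are not the published mapping of Infinera alpha bin values.

# Illustrative only: occupied optical width of a carrier with roll-off factor
# alpha, using the standard excess-bandwidth relation width ~ baud * (1 + alpha).
# The baud rate and alpha values below are hypothetical, not Infinera specs.
def occupied_width_ghz(baud_gbd: float, alpha: float) -> float:
    return baud_gbd * (1.0 + alpha)

baud = 33.0  # GBd, hypothetical example
for alpha in (0.2, 0.05):  # a sharper roll-off (smaller alpha) narrows the carrier
    print(f"alpha={alpha}: ~{occupied_width_ghz(baud, alpha):.1f} GHz")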

Traffic Engineering (TE) Links


A traffic engineering (TE) link indicates bandwidth available over a link:
■ Digital TE link—For links between digital nodes (DTNs, DTN-Xs or XTs), a digital TE link
represents bandwidth available between the line ports of the line modules at either end of a link.
■ Optical TE link—For links between FlexILS nodes (FlexILS ROADM nodes or DTN-Xs with
ROADM), an optical TE link represents bandwidth available between the line ports of the FRMs at
either end of a link.
Optical TE links are created automatically once an FRM is associated to an IAM/IRM and the GMPLS
control channel is established. Optical TE links can be associated to the following GMPLS control
channels:
■ GMPLS control channel over OSC
■ GMPLS control channel over GRE
■ SD-FEC Overhead IGCC (for XT(S)-3300/OFx-1200/XT(S)-3600/MXP-400)
Figure 4-67: Optical and Digital TE links and SNCs (DTN-X with ROADM sample configuration) on page
4-105 shows the relationship between optical and digital TE links, and between optical and digital SNCs
for DTN-X.
Figure 4-68: Optical, Digital TE Links and Optical SNCs (XT-3300/XTS-3300 sample configuration) on
page 4-105 and Figure 4-69: Optical TE Links and Optical SNCs (ICE 4 modules and FBM/FRM sample
configuration) on page 4-106 show the relationship between optical and digital TE links, and between
optical and digital SNCs for ICE 4 line modules (XT(S)-3300/OFx-1200/XT(S)-3600).


Figure 4-67 Optical and Digital TE links and SNCs (DTN-X with ROADM sample configuration)

Figure 4-68 Optical, Digital TE Links and Optical SNCs (XT-3300/XTS-3300 sample configuration)


Figure 4-69 Optical TE Links and Optical SNCs (ICE 4 modules and FBM/FRM sample configuration)

Note the following about optical and digital TE links and service provisioning:
■ A digital TE link between the FlexILS line modules (AOFXs in Figure 4-67: Optical and Digital TE
links and SNCs (DTN-X with ROADM sample configuration) on page 4-105) is supported only after
an optical connection is defined between the associated FRMs, such as an optical SNC (O-SNC) or
an optical cross-connect.

Note: For deployments in which GMPLS control channel over GRE is enabled, digital SNCs or
cross-connects can be provisioned without first provisioning an optical cross-connect in the
FlexILS link. Once the GMPLS control channel over GRE is brought up and the FRM is
connected to the IAM on each end of the link, connecting the FlexILS line module (AOFM/
AOFX/SOFM/SOFX) to the FRM at each end of the link will bring up the digital TE link. An
optical cross-connect can then be configured to bring up the data path.

■ Digital services can use optical TE links to traverse through the network, meaning that a digital TE
link might traverse the network contained within an optical TE link. An optical TE link can be used
by many different services:
□ Optical SNCs and optical cross-connects
□ Digital TE links (and any digital SNCs or cross-connects associated with the digital TE link)

Note: Digital SNC creation is not supported for XT(S)-3300 configurations.

■ Failure of an optical TE link will affect all optical SNCs or optical cross-connects associated with it,
thereby affecting any associated digital TE links and in turn their associated digital SNCs or digital
cross-connects.
■ Optical TE links are created between the following node types:


□ DTN-X with ROADM (see DTN-X with ROADM)


□ FlexILS ROADM (see FlexILS Reconfigurable Optical Add/Drop Multiplexer (ROADM))
■ Optical TE links are not created between amplifier nodes:
□ FlexILS Optical Line Amplifier (an amplifier node with an MTC-9/MTC-6 as Main Chassis,
see FlexILS Optical Line Amplifier)
□ Optical Amplifier (an amplifier node with an OTC as Main Chassis)

Optically Engineered Lightpaths (OELs)


An optically engineered lightpath (OEL) is an end-to-end optical path that is optically reachable for a given
rate, modulation, and frequency slot type. The OEL is defined between a source node and a destination
node. OELs can be created in two ways:
■ User-created via any of the management interfaces
■ Planned using the Network Planning System (NPS), and then imported via DNA.
Once the OEL is created (or created and imported via NPS/DNA), the OEL is specified by the user when
creating an optical SNC (and optionally an optical cross-connect as well).
Note that an OEL can traverse multiple optical TE links, as shown in Figure 4-70: Optical TE Links, OELs,
and Optical SNCs in a DTN-X Network on page 4-107.

Figure 4-70 Optical TE Links, OELs, and Optical SNCs in a DTN-X Network

An OEL is also designed for a given module type in terms of the optical characteristics. For example, an
OEL might be defined for line modules with enhanced reach characteristics, which is indicated by a line
module PON with “C6.” An OEL created for C6 PONs would support a path using any of C6 line modules:
■ AOLX-500-T4-n-C6
■ AOLX-500B-T4-n-C6
■ AOLM-500-T4-n-C6
■ AOLM-500B-T4-n-C6


An explicit route can be defined for the OEL in case the user wants the OEL to traverse specific optical
TE links.

Figure 4-71 Optical TE Links and FRM end-point based Optical SNCs in an ICE 4 Network (XT-3300
example)

Figure 4-72 Optical TE Links and FBM end-point based Optical SNCs in an ICE 4 Network

Starting R18.2.1, OELs can be used to define the work and restoration path of a restorable optical SNC
on ICE 4 line modules.
■ An explicit route can be defined for an OEL if the user wants the OEL to traverse specific optical
TE links.
■ The work path of an optical SNC can be constrained to an OEL and by setting Frequency Slot


■ The Restored Path of a restorable optical SNC refers to the recovery path setup in order to restore
the user traffic when the working path gets faulted.
■ The Restored Path can be constrained to a list of OELs selected during O-SNC creation
For more information see Optical Restoration on Optical Subnetwork Connections on page 4-113.

Optical Subnetwork Connections (O-SNCs)


Similar to digital SNCs (GMPLS Signaled Subnetwork Connections (SNCs) on page 4-10), IQ NOS
supports signaled provisioning for optical subnetwork connections (O-SNCs), where the user specifies the
endpoints for an optical service, and the optical service is automatically provisioned across the network.
IQ NOS GMPLS control protocol computes the circuit route through the Intelligent Transport Network and
establishes the circuit by automatically configuring the optical cross-connects in each FlexILS node along
the path.
An O-SNC can be created only when:
■ Optical TE links are established along the path between the desired endpoints for the O-SNC (see
Traffic Engineering (TE) Links on page 4-104).
■ At least one OEL is created between the line-side ports of the FRMs that terminate the desired O-
SNC (see Optically Engineered Lightpaths (OELs) on page 4-107).
FlexILS nodes support the following as endpoints on O-SNC:
■ FRM add/drop (tributary) SCH port to FRM add/drop (tributary) SCH port
■ FSM add/drop (tributary) SCH port to FSM add/drop (tributary) SCH port
■ FMM-C-5 add/drop (tributary) SCH port to FMM-C-5 add/drop (tributary) SCH port
■ FBM add/drop (tributary) SCH port to FBM add/drop (tributary) SCH port
Figure 4-73: Optical SNCs Using FRM and FSM Endpoints on page 4-110 shows an example network
configuration, along with example O-SNC routes using FRM and FSM endpoints.


Figure 4-73 Optical SNCs Using FRM and FSM Endpoints

Figure 4-74: Optical SNCs Using FRM and FBM Endpoints on page 4-110 shows an example network
configuration, along with example O-SNC routes using FBM and FRM endpoints.

Figure 4-74 Optical SNCs Using FRM and FBM Endpoints

O-SNCP in SLTE Configurations


The OPSM is supported in the following configurations for SLTE applications:


■ For an SLTE optical span between subsea point of presence (POP) stations. In this configuration,
the OPSM provides O-SNCP between two IAMs configured for one of the SLTE modes (SLTE
Mode 1, SLTE-TLA, or TLA).
■ For links between a cable landing station (CLS) and a point of presence (POP). In this
configuration, the OPSM provides O-SNCP between two IAMs configured for one of the SLTE
modes (SLTE Mode 1, SLTE-TLA, or TLA).
■ For links with a mix of OCGs (over SOLx2 modules via BMM2) and super channels (over SOFx
modules).

Figure 4-75 O-SNCP for an SLTE Optical Span

Figure 4-76 O-SNCP between a Subsea CLS and a POP

In SLTE applications, the OPSM is supported with IAMs; the IAM’s line port is connected to the OPSM to
optically protect the OTS (C-Band + OSC) signal via two line ports of the OPSM. The OPSM optical
switches provide optical protection that is agnostic to data rate, modulation format, and number of optical
channels. Each optical switch in the OPSM modules supports the extended C-Band (4.8THz) optical
window for optical protection.


Figure 4-77 OSNCP for SOLx2 (through BMM2) and SOFx

Release 16.3 introduces OPSM-2 based optical protection for a network with a mix of OCGs (over SOLx2
modules via BMM2) and super channels (over SOFx modules).
For more information on configurations required for this network, see OPSM-2 protection for SOLx2 (via
BMM2) and SOFx.

Tributary-side O-SNCP
For terrestrial applications, the OPSM supports tributary side protection wherein the OPSM is deployed
between an AOFx-500 and an FMM-F250/FRM-9D.

Figure 4-78 Example of Tributary-side O-SNCP with OPSM and AOFx-500

Note the following for tributary-side O-SNCP applications:


■ The AOFx-500 SCG’s Interface Type must be set to Open Wave (see Open Wave Line Module
Configuration).
■ The FRM SCG’s Interface Type must be set to Infinera Wave, and the FMM SCG's Interface Type
must be set to Manual Mode 1 (see Manual Mode 1 Configuration).
■ The AOFx-500's line in/out port is connected to the OPSM facility (FAC) port, and the Provisioned
Neighbor TP on the OPSM PTP must be configured as the AOFx-500 SCG.
■ The FMM-F250's add/drop port is connected to the OPSM line port.
■ The FRM SCG's Provisioned Neighbor TP must be configured to the FMM Line SCG PTP to bring
up the link.
■ Both FRMs connected to the AOFx-500 will have a cross-connect for the same SCH.
■ Auto-discovery is not supported between the AOFx-500, FMM-F250, FSP-C, and FRM-9D in this
configuration while the AOFx-500 is in the OpenWave mode.

Optical Restoration on Optical Subnetwork Connections


FlexILS optical networks support automatic creation of signaled end-to-end optical paths (i.e., super
channel based O-SNCs) between optical add/drop endpoints located across the network. The optical
paths can originate on ICE 4 line modules (i.e., XTC-based OFx-1200 line modules, XT(S)-3300, or
XT(S)-3600).
When fiber cuts or equipment failures occur on the Optical SNC path:
■ The super channels, and the higher-layer services (Layer 1/Layer 2) mapped onto them, are
disrupted. These higher-layer services can be independently protected through digital protection or
restoration schemes implemented in their respective layers.
■ The directionless multiplexing module (FBM) is used to dynamically restore the optical SNCs
connected over one degree onto another degree. A restored path is set up in order to restore the
traffic when the working path is faulted.
IQ NOS supports automatic restoration recovery mechanisms on GMPLS provisioned O-SNCs when any
failures occur along the O-SNC path. Under normal operation, the state of each SNC is maintained by a
signaling protocol, and traffic is carried along a working path. When a datapath failure occurs, all the
impacted O-SNCs automatically detect the failure at their endpoints. O-SNCs configured for restoration
are automatically re-signaled along a different, functional path, called the restoration path.
■ Working Path refers to the original planned O-SNC path setup using the user specified constraints
such as OEL
■ Restored Path of a restorable optical SNC refers to the recovery path setup in order to restore the
user traffic when the working path gets faulted. Restored Path can be constrained to a list of up to
three OELs, either during O-SNC creation or by modifying after the O-SNC is created (as it is not
service affecting)


Figure 4-79 Optical Restoration on O-SNCs

OSNC Restoration Configurations


The following Optical Subnetwork Connection Restoration configurations are supported:
■ Optical SNC Restoration is only supported in networks with OSNCs created between FBM tributary
endpoints and optionally through an FMM-C-12. The following FBM and FMM-C-12 modules are
supported:
□ FBM-SLCDC-8-2-USB
□ FBM-SLCDC-8-4-USB
□ FBM-SLCDC-8-8-USB
□ FMM-C-12-EC-TR
■ Restoration of Optical SNCs is supported in configurations where ICE 4 line module (in Open Wave
mode) and Line System Modules are co-located on the same network element.
■ Restoration of Optical SNCs is supported in configurations where ICE 4 line module (in Open Wave
mode) and Line System Modules are located on different network elements.
■ Restoration is not supported on Optical SNCs whose working route consists of SLTE based optical
TE-Links.

Provisioning Considerations
To provision OSNC Restoration, users need to perform the following:
■ Create an OEL for the workpath with preferred attributes
■ Create multiple OEL paths with the same set of attributes
■ Create an O-SNC service (work path) starting from the tributary of the FBM at the near end to the
tributary of the FBM at the far end
■ Auto-Restore - Set the value of this attribute to ‘yes’ to enable Dynamic GMPLS SNC Restoration
for an SNC. This attribute may be modified at any time.


■ Auto-Reversion - Enable automatic reversion to revert the restorable SNC back to its original
working path after a restoration event.
■ Use Preferred Restoration Route Info - Check this option to configure the inclusion and exclusion
list that should be used as a first option when restoring the SNC. This attribute may be modified at
any time. Preferred Restoration constraints take effect only if auto restoration is enabled.
■ Priority - Set the priority value from 0-7. At the network element level, each priority level is
assigned a hold-off timer value to indicate how long GMPLS should wait before attempting to
restore the SNC (see Restoration Priority on page 4-142). The priority attribute can only be set for
SNCs enabled with auto-restore. The default value for the priority attribute is zero. The priority
attribute may be modified at any time, even after creation of the SNC (a minimal sketch of this
hold-off behavior follows).
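The priority-to-hold-off behavior can be sketched as follows. The timer values are placeholders only; the actual per-priority hold-off timers are configured at the network element level (see Restoration Priority on page 4-142), and the function names are hypothetical.

# Illustrative sketch: each restoration priority (0-7) maps to a hold-off timer
# that GMPLS waits before attempting to restore the O-SNC. The values below are
# placeholders, not network element defaults.
import time

HOLD_OFF_S = {p: 1.0 * p for p in range(8)}  # hypothetical mapping

def restore_after_hold_off(snc_name: str, priority: int, auto_restore: bool) -> None:
    if not auto_restore:
        return  # restoration is attempted only when Auto-Restore is enabled
    time.sleep(HOLD_OFF_S[priority])
    print(f"Signaling restored path for {snc_name} (priority {priority})")

restore_after_hold_off("OSNC-1", priority=0, auto_restore=True)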

OSNC Restoration Triggers


The following path level faults act as trigger for Optical SNC Restoration:
■ OTS-BDI-P, OTS-LOL
■ OMS-FDI-P, OMS-BDI-P
■ SCH-FDI-P, SCH-BDI-P, SCH-OCI
■ OTS/Band OLOS, Band OLOS
■ FRM plug-out

Manual Reversion of Optical Restoration


Manual restoration is supported on a Restorable Optical SNC currently on the Working Path or Restored
Path. This action ignores the current fault conditions on the working path and restoration is attempted
immediately.
A manual restoration action results in tearing down the current working path and setting up a new restored
path. If the manual restoration action fails, the system sets up the original working path again and reverts
traffic to it.

Service Provisioning with OFx-100 and FMM-C-5


The FMM-C-5 supports connections between an OFx-100 and the following modules (listed under the
FMM-C-5 operating mode that supports the connection):
■ FlexILS Mode:
□ FRM-4D (directly to the FRM-4D Add/Drop port)
□ FRM-4D (through a BPP housed in MPC to the FRM-4D System port)
□ FRM-9D (through an FSP-C housed in FPC/MPC-6 to the FRM-9D System port)
□ FRM-20X (through an FSP-C housed in the FPC/MPC-6 to the FRM-20X System Port)
■ Gen 2 Mode:


□ BMM2-8-CXH2-MS
□ BMM2H-4-R3-MS
□ BMM2-8-CH3-MS
□ BMM2-8-CEH3
■ Gen 1 Mode:
□ BMM2C-16-CH
□ BMM-4-CX2-MS-A
□ BMM-4-CX3-MS-A
□ BMM-8-CXH2-MS
□ BMM-8-CXH3-M
□ BMM1H-4-CX2

Note: With the exception of the BMM2C-16-CH, all of the BMMs listed for Gen 1 mode
require:
□ 15dB of pad between the FMM-C-5 and the BMM OCG
□ 7dB of pad between the FMM-C-5 transmit side (Tx) and the BMM OCG receive side (Rx)

For configurations from OFx-100 to FMM-C-5 to FRM, where the FMM-C-5 operating mode is set to
FlexILS mode, an add/drop optical cross-connect is required between the tributary-side super channel
CTP endpoint on the FMM-C-5 and the line-side super channel CTP endpoint on the FRM, as shown by
the orange line in the figure below.
Note that the user can create this optical cross-connect either by provisioning the manual optical cross-
connect between the FMM-C-5 and the FRM, or by provisioning an optical SNC from the FMM-C-5 to
another FMM-C-5 in the network. In the case of OFx-100 to FMM-C-5 to FRM, the OFx-100's super
channel number is provisioned as part of the optical cross-connect/SNC provisioning (the user must
specify the super channel number while creating the optical cross-connect/SNC, and the OFx-100 is
automatically configured accordingly).


Figure 4-80 Add/Drop Optical Cross-connect (example with FMM-C-5 and FRM-4D)

For configurations from OFx-100 to FMM-C-5 to BMM, where the FMM-C-5 operating mode is set to
either Gen 1 or Gen 2 mode, there is no associated optical cross-connect (see the figure below). With this
BMM configuration, the user must configure the super channel CTP on the OFx-100 to match the OCG
number on the BMM port.
The user must specify OFx-100's super channel as the OCG number and the carrier pair within the OCG
in the format OCGn-<carrier pair>, where n = 1-16 and the carrier pair can be one of the following pairs:
1-2, 3-4, 5-6, 7-8, or 9-10. For example, to specify carriers 3 and 4 in OCG 2, the user would configure
the OFx-100's super channel number to OCG2-3-4.

Note: For configuring OFx-100 for OCGs 5-8 or 13-16, the carrier pair 3-4 is not supported, due to the
optimization of the OFx-100 for the ITU 50GHz channel plan.
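A small helper (hypothetical, not an IQ NOS interface) can illustrate the OCGn-<carrier pair> format and the excluded carrier pair noted above.

# Sketch of building/validating the OFx-100 super channel value in the
# OCGn-<carrier pair> form described above. Helper names are hypothetical.
VALID_PAIRS = {(1, 2), (3, 4), (5, 6), (7, 8), (9, 10)}
# Per the note above, carrier pair 3-4 is unavailable for OCGs 5-8 and 13-16.
PAIR_3_4_EXCLUDED_OCGS = set(range(5, 9)) | set(range(13, 17))

def ofx100_super_channel(ocg: int, pair: tuple[int, int]) -> str:
    if not 1 <= ocg <= 16:
        raise ValueError("OCG number must be 1-16")
    if pair not in VALID_PAIRS:
        raise ValueError("carrier pair must be one of 1-2, 3-4, 5-6, 7-8, or 9-10")
    if pair == (3, 4) and ocg in PAIR_3_4_EXCLUDED_OCGS:
        raise ValueError("carrier pair 3-4 is not supported for OCGs 5-8 and 13-16")
    return f"OCG{ocg}-{pair[0]}-{pair[1]}"

print(ofx100_super_channel(2, (3, 4)))   # "OCG2-3-4", as in the example above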


Figure 4-81 Example Configuration with OFx-100, FMM-C-5, and BMM2C

Provisioning Configurations with FMP-C


The Fiber Multiplexing Panel (FMP) is a pluggable module that is installed in an FPC and provides
colorless add/drop fiber multiplexing. The FMP-C (FMP-C-8-4-LC-MPO) is a half-width module that
connects an AOFx-500 to/from an FRM-9D via fiber ribbon cables (using LC and MPO connectors) for
multiplexing of add/drop traffic (see Fiber Multiplexing Panel (FMP) in Infinera Passive Equipment
Hardware Description Guide for more information).
Figure 4-82: Example Configuration of a DTN-X with FlexILS Using FMP-C on page 4-119 shows an
example configuration of a DTN-X with FlexILS in which the AOFx-500 is connected to an FRM-9D via an
FMP-C.


Figure 4-82 Example Configuration of a DTN-X with FlexILS Using FMP-C

Note the following for configurations where the FlexILS line module (AOFx-500) is connected to an FRM
via an FMP-C:
■ FMP-C is supported only for FRMs in Native-Automated mode (FMP-C is not supported for FRMs
in SLTE mode).
■ A FlexILS line module (AOFx-500) with an FMP-C connection will ramp up to operational power
only after the user has created an optical cross-connect or an optical SNC on the line module.
■ For configurations where the FlexILS line module (AOFx-500) is connected to an FRM via an FMP-
C, specific associations and provisioning steps are required for FMP-C connections in order to
prevent mis-connections. For FlexILS line module connections via FMP-C, the following three
associations are required (see the procedure below):
□ AOFx-500 to FMP-C
□ FMP-C to FRM-9D
□ FRM-9D SCH CTP to AOFx-500 SCH CTP
In order for optical cross-connects or optical SNCs to come into service, the following provisioning steps
are required to connect an AOFx-500 to an FMP-C:


1. Lock the line module equipment.


2. On the line module’s SCG, configure the Line System Mode parameter to SCG passive multiplexing
(in TL1, this is performed via the ED-SCG command, setting
LINESYSMODE=MODESCGPASSIVEMUX_1).
3. Associate the FMP-C tributary port to the line module’s SCG port (in TL1, this is performed via the ED-
SCG command on the FMP-C tributary port, using the PROVNBRTP parameter to specify the AID of
the line module SCG port).

Note: Once the line module is configured for SCG passive multiplexing or the line module
is associated to an FMP-C, the line module cannot be unlocked until both configurations are
performed.

4. Unlock the line module. (This completes the FMP-C to line module association.)
5. Associate the FMP-C line port to the FRM-9D tributary port. (In TL1, this is performed via the ED-SCG
command on the FRM add/drop SCG port and setting the PROVFPMPO value to the AID of FMP-C
MPO port.)
6. Repeat Step 1 through Step 5 at the far-end node.
7. Create the optical cross-connect/SNC on the AOFx-500 super channel (the AOFx-500 super channel
number must match the optical cross-connect/SNC’s super channel number).
8. Associate the FRM-9D tributary super channel to the AOFx-500 super channel. (Note that the FRM
SCH can be associated to the AOFx-500 SCH only after a cross-connect/SNC has been created in
Step 7.) To associate the FRM-9D super channel to the AOFx-500 super channel, use Associated
Client SCH CTP parameter on the FRM super channel. (In TL1, this is performed via the ED-SCH
command on the FRM-9D, specifying the AOFx-500 super channel in the CLIENTSCHCTP
parameter.)

Note: If the optical cross-connect/SNC created in Step 7 is subsequently locked after the
FRM-9D tributary SCH CTP is associated to the AOFx-500 SCH CTP, the association will be lost and
Step 8 will need to be repeated after the cross-connect/SNC is unlocked. Likewise, if the cross-
connect/SNC is deleted and re-created, the association will be lost and Step 8 will need to be
repeated.
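The ordering constraints in the procedure and notes above can be captured in a small sketch. This is a simplified model with hypothetical names, not a management interface.

# Simplified model of the FMP-C provisioning dependencies described above:
# the line module can be unlocked only after both the SCG passive multiplexing
# mode and the FMP-C association are set, and the FRM SCH to AOFx-500 SCH
# association is valid only while the cross-connect/SNC exists and is unlocked.
from dataclasses import dataclass

@dataclass
class FmpcProvisioningState:
    scg_passive_mux: bool = False      # Step 2 (LINESYSMODE)
    fmpc_associated: bool = False      # Step 3 (PROVNBRTP)
    xcon_created: bool = False         # Step 7
    sch_associated: bool = False       # Step 8 (CLIENTSCHCTP)

    def can_unlock_line_module(self) -> bool:
        return self.scg_passive_mux and self.fmpc_associated

    def associate_frm_sch(self) -> None:
        if not self.xcon_created:
            raise RuntimeError("create the cross-connect/SNC (Step 7) first")
        self.sch_associated = True

    def on_xcon_locked_or_recreated(self) -> None:
        # The association is lost and Step 8 must be repeated.
        self.sch_associated = False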

Automatic Tuning of Line Module Super Channels


For cases in which an optical cross-connect is created between the line-side super channel of an OFx
and the trib-side super channel of an FRM or FSM, if the super channel number of the optical cross-
connect does not match the provisioned super channel number on the line module, the super channel
number on the line module is automatically changed to match the super channel number of the optical
cross-connect.
Once the super channel is automatically configured to match the optical cross-connect, the optical service
is turned up without user intervention and without requiring the line module to be locked or manually re-
configured.


Note: Automatic super channel tuning is not supported for ICE 4 line modules (i.e. XT(S)-3300,
OFx-1200 and XT(S)-3600).

Note the following for automatic super channel tuning:


■ Automatic tuning is supported only when the required super channel number is supported by the
line module. For example, the AOFM-500-T4-1-C5 supports super channel 1, 2, 3, or 4. If super
channel number 6 is required by the optical cross-connect, automatic tuning will not take place and
the line module’s super channel will report a configuration mismatch alarm.
■ Automatic tuning requires that the correct fiber connections and associations (including Auto-
discovery) are complete.


IQ NOS Digital Protection Services


The Infinera Intelligent Transport Network system provides the option to protect service against facility
failures, fiber cuts in the network, and equipment failures at the ingress or egress points of the network.
This section covers the following supported types of digital path protection for client services:
■ Digital Subnetwork Connection Protection (D-SNCP)— Defines and provisions a protection circuit
for either SNCs or manually-created cross-connect circuits. In this protection architecture, a
dedicated protection circuit path is provisioned for each protection group. See Digital Subnetwork
Connection Protection (D-SNCP) on page 4-122.
■ Dynamic GMPLS Circuit Restoration—An optional protection feature for GMPLS-provisioned
SNCs, in which an SNC is dynamically restored from its source point to its end point upon detection
of a network fault. In this protection architecture, a recovery circuit path is automatically signaled
only after detection of a network failure. See Dynamic GMPLS Circuit Restoration on page 4-140.
■ Multi-layer Recovery—Protection that combines both the D-SNCP and GMPLS circuit restoration
protection mechanisms. See Multi-layer Recovery in DTNs on page 4-167 and Multi-layer Recovery
in DTN-X on page 4-146.
■ Fast Shared Mesh Protection (FastSMP™)—Protection that uses shared, pre-determined protection resources to ensure fast recovery of routes from failures. See Fast Shared Mesh Protection (FastSMP™) on page 4-153.
■ Optical Subnetwork Connection Protection (O-SNCP)—Protection via the Optical Protection Switch
Module (OPSM). See Optical Subnetwork Connection Protection (O-SNCP) on page 4-165.

Note: Digital SNC provisioning and service protection are not supported for XT(S)-3300 in the current release.

Digital Subnetwork Connection Protection (D-SNCP)


IQ NOS supports provisioning of datapath protection groups to provide path protection of client services.
Users can create and delete protection groups, request manual protection switching, apply locks on a
protected circuit, and clear those locks.
The D-SNCP feature allows client equipment to connect to the transport network through only a single interface, and provides failure protection for either SNCs or manually-created cross-connect circuits via redundant transmission paths through the network. Each path is independently configured through the network to ensure route diversity.
There are two types of D-SNCP, both of which can be implemented for manual cross-connects and for
dynamically signaled SNCs:
■ 2 Port D-SNCP (see below)
■ 1 Port D-SNCP (see 1 Port D-SNCP on page 4-126)
See the following sections for a description of D-SNCP functions and behavior:


■ D-SNCP Protection Groups and Protection Units (see D-SNCP Protection Groups and Protection
Units on page 4-132)
■ Switching Hierarchy and Criteria (see Switching Hierarchy and Criteria on page 4-134)
■ D-SNCP through Third-Party Networks (see D-SNCP through Third-Party Networks on page 4-137)
■ D-SNCP Automatic Alarm Adjustment (see D-SNCP Automatic Alarm Adjustment on page 4-140)

Note: It is possible to use different types of D-SNCP at each end of a circuit or route. In other words,
one end of a route can be protected using 2 Port D-SNCP, while the other end of the route is
protected with 1 Port D-SNCP.

Note: D-SNCP of both types can be applied to the client/tributary endpoints of Multi-point
Configuration (see Multi-point Configuration on page 4-23).

2 Port D-SNCP

Note: Previously, this protection scheme was referred to as “Dual TAM D-SNCP” in Infinera technical documentation. However, because this type of protection also applies to configurations without a TAM (for example, future support of protection for DTN-Xs that use TIMs instead of TAMs), it is now referred to as “2 Port D-SNCP.”

2 Port protection offers the highest level of service protection on all interface points within the optical
network, including the client ports. 2 Port D-SNCP provides end-to-end protection of optical services and
protects against TAM/TIM failures by using two TAMs or TIMs connected to client equipment at each end
of the network path.
In 2 Port D-SNCP, a Y-cable (optical signal splitter/combiner) is connected to the client equipment at
either end of the network. As shown in Figure 4-83: 2 Port D-SNCP (DTN Example) on page 4-125, the Y-
cable at the ingress point directs two identical copies of the client signal to two different TOM interfaces
on the originating node. These two interfaces receive the duplicate client signals and encapsulate each
signal into a DTP wrapper (for MTC/DTC endpoint) or an ODUk (for XTC endpoints) for transmission
through the network. Each signal is transported independently to the destination node, typically along
diverse routes.

Note: In 2 Port D-SNCP, the ingress endpoints must be on the same physical chassis of the
originating node. Likewise, the egress endpoints must be on the same chassis of the terminating
node. In other words, a Y-cable used for 2 Port D-SNCP can't be connected to termination points on
two different chassis.

At the remote end, the destination node monitors both signal paths and, depending on signal quality,
switches the appropriate signal towards the client equipment. Only one of the two digital path-level
signals is enabled at the egress.
In the event of a datapath failure (due to facility or equipment failures), an automatic protection switch
mechanism at the destination node switches the redundant copy of the client signal to transmit on the Y-
cable at the egress, with sub-50ms switching speeds.


Note: Hybrid cascaded 2 Port D-SNCP Protection Circuits may experience double switching under
certain conditions.

User-generated switching requests are also supported. 2 Port D-SNCP can be configured to be auto-revertive, so that traffic is switched back to the working unit within 50ms once the working unit comes back into service and the wait to restore (WTR) timer expires.
Datapath protection groups provisioned for revertive protection will automatically revert the service back
to its original path after the restored path becomes available and the WTR timer expires. The WTR timer
is configurable between 5 and 120 minutes (with 5 minutes as the default).
To provide protection services, the control plane of the line modules and TEMs in which active and
standby protection units reside should be fully operational. Protection service is unavailable with line
module or TEM equipment failure, line module or TEM removal, or circuit pack to circuit pack control bus
failures. The only exception to this requirement is in the case of a protection switch triggered by removal
of a line module or TEM containing the active protection unit. In this case, only the former standby
protection unit’s control plane needs to be fully operational.
Note the following for specific TOMs with 2 Port D-SNCP:
■ For switchovers on a TOM-100G-L10X or TOM-100G-S10X, if the tributary disable action is set to Laser Off, protection switch times can exceed 50ms. For these 100GbE TOMs, it is recommended to set the tributary disable action to Insert Idle Signal. (See Tributary Disable Action on page 3-41.)
■ 2 Port D-SNCP is not supported for TOM-100G-SR10 and TOM-40G-SR4.
Note the following for 2 Port D-SNCP for endpoints on the XTC:
■ See DTN-X Service Capabilities on page A-1 for the 2 Port D-SNCP capabilities for the services
on the XTC-10, XTC-4, XTC-2, and XTC-2E.
■ 2 Port D-SNCP is not supported for OTU4 transport without FEC service on the TIM-1-100G/
TIM-1-100GX, nor for OC-768/STM-256 services on the TIM-1-40GM.
■ 2 Port D-SNCP is not supported for ODU multiplexing services (see ODU Multiplexing on page 4-
48.)
■ 2 Port D-SNCP on TIM-1-100GE-Q is supported only when using TOM-100G-Q-LR4 modules.
■ The two client signals in 2 Port D-SNCP can each employ different network mapping values. For
example, for a 100GbE service type, the working route can use VCAT (ODU2i-10v) service
mapping while the protect route uses non-VCAT (ODU4i) service mapping, or vice-versa.
■ For 2 Port D-SNCP with paths that use VCAT network mappings, a protection switch will be
triggered if there is a fault detected on any of the constituent ODUs of the GTP.
■ For 2 Port D-SNCP of endpoints on the XTC, the signal degrade (SD) protection switch trigger is detected by the software on the line module. Therefore, if a line module is warm rebooting, the line module will not respond to an SD condition until the line module’s software is present and running. Similarly, if an SD condition is detected and a protection switch occurs successfully, and the line module is then warm rebooted at the time when the SD condition clears, the line module will not acknowledge the condition as cleared until rebooting is completed, so only one path (the protection path) is seen as available. This means that a fault on the protect path may affect traffic until the line module completes its reboot and acknowledges that the SD condition is cleared.
■ For software releases lower than IQ NOS R18.2, restorable SNCs cannot be part of the protection
group.
Note the following for 2 Port D-SNCP for endpoints on the DTC/MTC:
■ 2 Port D-SNCP is not supported for endpoints on the TAM-2-10GT nor DICM-T-2-10GT, nor for
SNCs in which the source endpoint is a receive electrical TOM (TOM-1.485HD-RX or
TOM-1.4835HD-RX) and the destination endpoint is a transmit electrical TOM (TOM-1.485HD-TX
or TOM-1.4835HD-TX).
■ 2 Port D-SNCP is supported for OC-768/STM-256 services, but it is not supported for 4x10Gbps
services.
■ When provisioning 2 Port D-SNCP on the TAM-8-1G, use protection units belonging to different
tributary port pairs. A tributary port pair is (1a, 1b), (2a, 2b), (3a, 3b) or (4a, 4b), and traffic on each
of these port pairs is mapped together into a single 2.5Gbps digital path. If both the protection units
belong to the same port pair (1a and 1b, for example), there would be no effective protection in
case of any path failure along the circuit.
Figure 4-83: 2 Port D-SNCP (DTN Example) on page 4-125 and Figure 4-84: 2 Port D-SNCP (DTN-X
Example) on page 4-126 illustrate the configuration of 2 Port D-SNCP on DTN and DTN-X, respectively.

Figure 4-83 2 Port D-SNCP (DTN Example)


Figure 4-84 2 Port D-SNCP (DTN-X Example)

1 Port D-SNCP

Just like 2 Port D-SNCP, 1 Port D-SNCP generates duplicate client signals and encapsulates each signal into an Infinera wrapper for transmission through the network over two dedicated and diverse 1+1 “working” and “protect” service paths, and then monitors both the working and protect services to select the optimal service at the far end's client interface. 1 Port D-SNCP protection is supported for services originating on DTC, MTC, and XTC:
■ For services originating on a DTC/MTC, the duplicate client signals are each encapsulated into a
DTP wrapper for transmission through the network, and the optimal service is selected at the far-
end's client interface on the far-end line module or TEM.
■ For services originating on an XTC, the duplicate client signals are each encapsulated into the
ODUki Infinera wrapper for transmission through the network, and the optimal service is selected in
the switch fabric at the far-end XTC.
The figures below show the path of 1 Port D-SNCP for DTN-X and DTN, respectively.


Figure 4-85 1 Port D-SNCP on DTN

Figure 4-86 1 Port D-SNCP on DTN-X (XTC-4/XTC-10)

Unlike 2 Port D-SNCP, which requires dual TAMs or TIMs at the ingress and egress points, 1 Port D-SNCP reduces network deployment costs by eliminating dual TAMs/TIMs and Y-cables at network ingress and egress, providing a true MSPP-like UPSR/SNCP protection implementation on the Intelligent Transport Network.


Figure 4-87 1 Port D-SNCP in a DTN Network

Note the following for 1 Port D-SNCP:


■ See DTN-X Service Capabilities on page A-1 for a full list of the DTN-X services and modules
that support 1 Port D-SNCP.
■ For switchovers on a TOM-40G-SR4, TOM-100G-L10X, TOM-100G-S10X, or TOM-100G-SR10, if
the tributary disable action is set to Laser Off, protection switch times can exceed 50ms. For these
100GbE or 40GbE TOMs, it is recommended to set the tributary disable action to Insert Idle Signal.
(See Tributary Disable Action on page 3-41.)
■ For switchovers on a TIM-1-100GE-Q, TIM2-2-100GM or TIM2-2-100GX, if the tributary disable
action on the TOM is set to Laser Off, protection switch times can exceed 50ms. For 100GbE
TOMs on the TIM-1-100GE-Q, TIM2-2-100GM or TIM2-2-100GX, it is recommended to set the
tributary disable action to Insert Idle Signal. (See Tributary Disable Action on page 3-41.)
■ Because 1 Port D-SNCP uses diverse routes originating from the same TAM/TIM, it does not
protect against TAM, TIM, or TOM failures at the originating node.
■ Because SNCs at the OC-12/STM-4 rate and at the OC-3/STM-1 rate are not supported between an endpoint on a TAM-8-2.5GM and an endpoint on a TAM-4-2.5G, do not combine TAM-8-2.5GM and TAM-4-2.5G when specifying endpoints for 1 Port D-SNCP or 2 Port D-SNCP at the OC-12/STM-4 rate or the OC-3/STM-1 rate.
■ For PXM services the reliable TP is the ODUflexi TP, instead of being the tributary PTP as it is for
TIM services. (This means that an empty 1 Port D-SNCP is not supported in the case of PXM
services.)
■ Hairpin cross-connects can be added to a 1 Port D-SNCP if the adjacent cross-connect is not a
hairpin cross-connect.

1 Port D-SNCP for Line-side Terminating SNCs


For line-side terminating SNCs (see Line-side Terminating SNCs on page 4-12), 1 Port D-SNCP is
supported on:
■ The tributary-side endpoints on DTC, MTC, and all XTC chassis types
■ The line-side endpoints on DTC, MTC, and XTC-4/XTC-10 chassis types


Note: Line-side 1 Port D-SNCP is supported for ODU2i_10v VCAT manual cross connects on the TIM-1-100GE of an XTC-4/XTC-10; see 1 Port D-SNCP on Line Side for ODU2i-10V VCAT Services on page 4-130.

All client tributaries can be optionally configured to generate DTP-AIS (for DTC/MTC endpoints) or ODU-
AIS (for XTC endpoints) downstream if there is any fault on the client. Because of this, 1 Port D-SNCP is
supported for SNCs or cross-connects that traverse third-party networks.

Figure 4-88 1 Port D-SNCP across a Third-party Network

Note: For 1GbE, 10G Clear Channel, and 2.5G Clear Channel, the trigger for switching will be
tributary OLOS and client LOS.
1 Port D-SNCP for 1GbE or 1G Fibre Channel (1GFC) services on the TAM-8-2.5GM can make use of
the flexible mapping of tributary port to DTPCTP. As described in 1GFC and 1GbE Service Provisioning
on page 4-14, when creating 1GbE and 1GFC services on the TAM-8-2.5GM, the DTN allows for flexible
mapping of tributary port to DTPCTP, so that the user is able to specify the virtual channel in the DTPCTP
to which the service should be mapped, as long as no service is already provisioned on the virtual
channel.
1 Port D-SNCP can be configured for endpoints on the TAM-2-10GT for 10Gbps SNCs across a Layer 1
OPN. Note the following constraints for 1 Port D-SNCP over Layer 1 OPN:
■ 1 Port D-SNCP is not supported for 2.5Gbps services on the TAM-2-10GT. To protect SNCs across
Layer 1 OPN, services should be configured as 10Gbps SNCs.
■ 1 Port D-SNCP is not supported on TAM-2-10GT tributaries that are configured as TE endpoints of
a Layer 1 OPN TE link. (And the converse is also true: TAM-2-10GT tributaries that are configured
for 1 Port D-SNCP cannot be configured as TE endpoints of a Layer 1 OPN TE link). This means
that 1 Port D-SNCP is configured on the TAM-2-10GT tributaries at the Provider Edge, since the
TAM-2-10GT tributaries on the Customer Edge are configured as TE endpoints for the TE link.
■ 1 Port D-SNCP is not supported on a 10Gbps DTPCTP of a TAM-2-10GT if a 2.5Gbps service is
originating/terminating on a constituent 2.5Gbps DTPCTP.


1 Port D-SNCP on Line Side for ODU2i-10V VCAT Services


The TIM-1-100GE supports line-side 1 Port D-SNCP for ODU2i_10v VCAT manual cross connects. 1 Port
D-SNCP is supported for manual cross connects with line-side GTP endpoints on the AOLM/OFx-500/
OFx-1200, for single OCG only.

Figure 4-89 Example Network Configuration with Line-side 1 Port D-SNCP for ODU2i_10v VCAT

Configurable Fault Isolation Layer for 1 Port D-SNCP


For 1 Port D-SNCP services with ODUk endpoints on the XTC, the user can configure the fault isolation
layer to be used to monitor faults and trigger protection switching for the protection group:
■ ODUkP (PATH; default value)—For services that traverse third party networks, the ODUkP layer
should be used to detect faults and trigger switches for the 1 Port D-SNCP. If ODUkP level failure
is monitored, the 1 Port D-SNCP is equivalent to SNC/Ne.
■ ODUkPi (iPATH)—The Infinera extension to the standard ODUk path. For services that are entirely within the Infinera network, the ODUkPi layer should be used to detect faults and trigger
switches for the 1 Port D-SNCP. The ODUkPi layer is used only for fault isolation; no performance
monitoring data is collected for ODUkPi, and there is no additional configuration capability for
ODUkPi. The iPATH layer for a standard ODUk based service will always have an associated
ODUk PATH layer. If ODUkPi level failure is monitored, the 1 Port D-SNCP is equivalent to
SNC/Ni.
Figure 4-90: Fault Isolation Layers Configured in Two Example Networks on page 4-131 shows two
example networks using different fault isolation layers.


Figure 4-90 Fault Isolation Layers Configured in Two Example Networks

Note the following about configuring the fault isolation layer to iPATH:
■ iPATH fault isolation is supported only for 1 Port D-SNCP.
■ iPATH layer fault isolation cannot be used over a third party network or on a data path that includes
OTUk to OTUk TIM ports.

Protection Switch Laser Control for 1 Port D-SNCP


The DTN and DTN-X support the Protection Switch Laser Control feature, which is a configurable option
on each Ethernet interface to keep the laser on and insert idle code groups during the process of
executing a 1 Port D-SNCP protection switch. Protection Switch Laser Control is supported for 1GbE and
10GbE TAM interfaces, and for 1GbE, 10GbE, 40GbE, and 100GbE TIM interfaces. Furthermore, the
Protection Switch Laser Control feature is supported only for Ethernet interfaces where the tributary
disable action is set to Shut Down Laser (Laser Off).
■ When the Protection Switch Laser Control is set to Enable Laser, the laser will remain on during a
protection switch unless one of the following occurs:
□ Both the working and protection paths are down.
□ There is a failure of the incoming tributary signal to the far end Ethernet port.
■ When the Protection Switch Laser Control is set to Enable Laser on the Ethernet interface, the TIM/TAM will continue to send idle code groups until the protection switch completes. Subsequently, if the system determines that a protection switch will not recover the signal (that is, if both the working and protection paths are faulted), the laser will be shut down. (Note that when the TIM/TAM transitions between the payload signal and idle groups, some corrupted frames will be transmitted towards the client equipment.)
Protection Switch Laser Control is supported for the GbE interfaces on the following TIMs and TAMs:
■ TIM-1-100GE
■ TIM-1B-100GE
■ TIM-5-10GM
■ TIM-5-10GX
■ TIM-5B-10GM
■ TIM-16-2.5GM
■ TAM-2-10G
■ TAM-2-10GR
■ TAM-2-10GM
■ DICM-T-2-10GM
■ TAM-8-1G
Protection Switch Laser Control is not supported on TIM2-2-100GM, TIM2-2-100GX, TIM2-18-10GM and
TIM2-18-10GX.
Note the following for Protection Switch Laser Control:
■ In the TL1 interface, this feature is called Ethernet Protection Switching Laser Control (EPSLC).
■ The Protection Switch Laser Control feature applies only for Ethernet interfaces configured for 1
Port D-SNCP.
■ Idle cell insertion is not supported for Ethernet interfaces on the TAM-2-10G and TAM-2-10GR. For these TAMs, the laser will be kept on when Protection Switch Laser Control is enabled, but because the incoming client signal is unavailable, an indeterminate signal will be sent downstream.
■ For DTN-X nodes upgrading from pre-Release 15.3 to Release 15.3 or higher, any TIMs installed
before the upgrade will require a service-affecting cold reset to enable the functionality introduced
in Release 15.3.
■ For TAM-8-1G endpoints only, for local and remote side tributary ports involved in 1 Port D-SNCP,
if the Protection Switch Laser Control is set to Enable Laser, then it is also required to enable AIS
on Client Signal Failure on both the local and remote side tributary ports.
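As a rough illustration only, the EPSLC keyword noted above might be set on the Ethernet client facility with an edit command of the general form shown below. Everything other than the EPSLC keyword is a placeholder or assumption (the command modifier, AID, CTAG, and value keywords are not confirmed here), and the exact syntax must be taken from the TL1 reference for this release.

    ED-<ETHERNET-CLIENT-TYPE>:<TID>:<TRIB-AID>:<CTAG>:::EPSLC=<value corresponding to Enable Laser or Disable>;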

D-SNCP Protection Groups and Protection Units


Digital SNCP services are configured using Datapath Protection Groups (DPGs). A DPG is a pair of
client/tributary ports (in 2 Port D-SNCP) or a pair of DTF or ODU paths (in 1 Port D-SNCP) that is
designated to transmit or receive the protected/duplicated client signals. Each designated client signal is
called a protection unit (PU). To configure Digital SNCP services, DPGs must be defined on each node at
both ends of the network (origin and destination).


In Digital SNCP, one PU in each DPG is identified as the ‘Working’ PU, and the remaining PU is identified
as the ‘Protect’ PU. This designation, called the PU Configured State, identifies the Working path - the
path used in the absence of network failures - and the Protect path - the path used in the event of a
network failure.
During normal operation, both the Working and Protect PUs at the origin node send any datapath traffic they receive from the client side to the network interfaces of the node, resulting in two transmission paths through the Intelligent Transport Network. The Working and Protect paths are generally routed through completely diverse paths through the network.
At the destination node, the receiving node terminates both paths on the far-end Working and Protect
PUs. The receiving node evaluates the quality of both signals received on the DPG, and enables only one
of the two PUs to actively transmit traffic to the far-end client. In the absence of any prior protection switch
activity, the Working PU is the active PU at the destination node. The other PU exists in a standby state
(in 2 Port D-SNCP, the Protect PU will power off its transmission laser in the standby state).
For both 1 Port Digital SNCP and 2 Port Digital SNCP, the path chosen as the working path is a local decision, meaning that each end of the circuit independently chooses which signal to use; each end of the circuit may pick a different path as the working path.
In the case of an outage of both the Working PU and the Protect PU, both 1 Port D-SNCP and 2 Port D-SNCP ensure that, when possible, the Working PU is shown as the active PU irrespective of whether traffic is up. In the case of a local node power cycle, a remote line module reset or power cycle, or any other case where both PUs fail:
■ Traffic is switched to the Working path, even if the failure is present in both the Working path and
the Protect path.
■ If the failure is cleared first in the Working path, traffic will recover on the Working path immediately.
■ If the failure is cleared first in the Protection path, traffic will not recover on the Protection path
immediately.
□ Within a few seconds, if the failure is cleared in the Working path, traffic will recover on the
Working path immediately.
□ After a few seconds, if the failure is not cleared in the Working path, traffic will switch to the
Protection path and traffic will recover on the Protection path.

Protection Units: Configured State vs Actual State


A protection unit is characterized by two types of states, as described below:
■ Configured State
The configured state reflects the user-preferred Working/Protect designation for the PU. During the
creation of each protection group, users must designate one protection unit as the Working PU and
the other protection unit as the Protect PU. The Working PU is the preferred PU for transmitting the
traffic signal to the client in the absence of network failure. The Protect PU provides protection
functionality in that it only transmits the traffic signal to the client when a protection switch occurs.
■ Actual State
The actual state of a protection unit is a system-derived state that reflects the actual operating state
of the PU. The actual state of a PU may be one of the following:


□ Active—The PU is currently providing full service, carrying datapath traffic in both directions.
□ Hot Standby—The PU is not active, but is healthy and able to provide protection service if
called upon.
□ Cold Standby—The PU is not active, and its operational state renders it unable to provide
protection service if called upon.

Switching Hierarchy and Criteria


DPGs support many of the switching criteria and hierarchies specified in the GR-1400, G.841 (SONET/SDH), G.709, G.871, and G.873.1 specifications. As in UPSR/SNCP, the protection mechanism is provided by a ‘head-end bridge’ (the Y-cable on the transmit side in 2 Port D-SNCP, for example) and a ‘tail-end switch’ (the transmitting PU at the receiving end of the PG).
Switching requests are part of a switching hierarchy and can be escalated by the status of the received
path, or assigned by the user at the tail-end of the DPG. Requests supported within a DPG are escalated
with the following priority:
1. Lockout request, see Lockout of Protect and Lockout of Working Switching Request on page 4-134
2. Network and Service State (Automatic) request, see Network/Service State (Automatic) Switching Request on page 4-135
3. Manual request, see Manual Switching Request on page 4-136
4. Wait to Restore (Automatic) request, see Wait to Restore Request on page 4-136
Higher priority switch requests override lower priority switch requests.

Warning: Traffic Disruption Risk


All switches (manual, automatic, lockout) cause a 50ms (or less) interruption to traffic.
■ For D-SNCP using the TOM-40G-SR4, TOM-100G-L10X, TOM-100G-S10X,
or TOM-100G-SR10, if the tributary disable action is set to Laser Off,
protection switch times can exceed 50ms.
■ For D-SNCP over OTM-1200 carrying Ethernet services, if the tributary disable action is set to Laser Off, protection switch times can exceed 50ms.
For these 100GbE or 40GbE TOMs, or for OTM-1200 with Ethernet services, it is recommended to set the tributary disable action to Insert Idle Signal. (See Tributary Disable Action on page 3-41.)

Lockout of Protect and Lockout of Working Switching Request


A lockout is a user-generated request to prevent a PU (within a DPG) from ever becoming active. The
lockout commands are useful in supporting maintenance operations by an operator. There are two types
of lockout requests:
■ Lockout of Protect—Applied to a DPG, this command prevents the Protect PU from becoming
active under all circumstances.
■ Lockout of Working—Applied to a DPG, this command prevents the Working PU from being active,
under all circumstances.


In either case, if the PU being locked out by the command is currently active, a protection switch to the
other PU shall occur, regardless of the state of the other PU (or of the state of the traffic being carried by
the PU). After the lockout-induced switch, traffic cannot be moved back to the locked-out PU until the
lockout command is cleared.

Note: If a failure occurs on the Protect circuit while a Lockout of Working is in effect, traffic cannot
switch to the Working circuit until the lockout is cleared. Conversely, if a failure occurs on the Working
circuit while a Lockout of Protect is in effect, traffic cannot switch to the Protect circuit until the lockout
is cleared. Both cases can result in loss of traffic.

Note: Manual Lockout of Working switches are non-revertive. If a lockout of working is issued and
then cleared by the user, traffic transmission will continue on the protection route until either an
automatic switch request is triggered (due to a failure), or a user-initiated switch request (manual or
lockout) is issued.

Users can also issue a request to clear a lockout. A user-initiated Clear command removes a lockout switching request; however, network-, service-, or equipment-generated switching requests are not cleared by the Clear command. It is also important to note that manual Lockout of Working requests are non-revertive, meaning that if a user-generated Lockout of Working is assigned to a PG, the traffic will switch to the Standby PU and path, and upon the issuance of a clear request, traffic does not automatically switch back to the Working path.
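As a rough sketch only: in TL1-managed transport systems, lockout and manual switch requests are typically issued and released with GR-833-style protection switching commands of the general form shown below. The verbs, modifiers, AID formats, and switch-command keywords here are generic placeholders rather than confirmed IQ NOS syntax, and the actual commands must be taken from the TL1 reference for this release.

    OPR-PROTNSW-<MOD>:<TID>:<AID>:<CTAG>::<SWITCH-COMMAND>;   (issue a lockout or manual switch request)
    RLS-PROTNSW-<MOD>:<TID>:<AID>:<CTAG>;                     (clear the user-initiated request)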

Network/Service State (Automatic) Switching Request


A network or service state switching request is an automatic request generated based on the quality or state of the service, and on the state of the PU terminating the service. The following events can trigger automatic protection switching:
■ Client-side faults, such as LOS, LOF, AIS, LOA, LF, LOSYNC, etc.
■ Line-side/network faults, such as SD, ODU-AIS, DTP-AIS, C-band OLOS, OTS OLOS, de-
encapsulated LOSYNC, etc.
■ Equipment faults on modules in the terminating PU's datapath (BMM, FRM, OAM, ORM, IAM, IRM,
line module, TIM/TIM2, TAM, OTM, TOM, etc.) may also generate path or line-level faults,
escalating the switching request on the remote end as well. Additionally, other equipment related
defects/failures that generate switching requests include:
□ A module removal defect
□ Any defect that (after integration) maps to an EQPTFAIL alarm and that affects a PU's
datapath traffic
□ Any defect that (after integration) maps to an EQPT-PARTFAIL alarm and that affects a PU's
datapath traffic
■ Administrative operations such as locking or cold resetting the facilities or modules along the path
of service that would cause an AIS condition on the path.
In addition, the following switching parameters can be configured for 2 Port D-SNCP and would trigger
automatic protection switching:


■ Protection Switch for Client Rx Fault (FACRXPSTRIG in TL1)—Indicates whether a client facility
receive fault should be considered as a trigger for a protection switch. The default is for this feature
to be enabled.

Note: If traffic is running on the protection path of a revertive 2 Port Digital SNCP, changing this
parameter will cause traffic to switch to the active working path. It is recommended that this
parameter be set upon creating 2 Port Digital SNCP, or immediately after.
■ Switch to Work PU After Dual Outage (WKGPUAFTDLOUT in TL1)—Indicates whether traffic
should be switched to the working protection unit after a dual outage. The default is for this
feature to be disabled.

Note: For GNM and DNA, these two parameters are not configurable when creating the 2 Port D-SNCP; they can be edited only after the 2 Port D-SNCP is created. (TL1 does support configuring these parameters when using the ENT-FFP-TRIB command to create a 2 Port D-SNCP.)
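A hedged TL1 sketch of setting these parameters at creation time is shown below. Only the ENT-FFP-TRIB command name and the FACRXPSTRIG and WKGPUAFTDLOUT keywords come from this section; the AID ordering, value keywords, and any additional mandatory parameters are placeholders that must be confirmed against the TL1 reference for this release.

    ENT-FFP-TRIB:<TID>:<WORKING-TRIB-AID>,<PROTECT-TRIB-AID>:<CTAG>:::FACRXPSTRIG=<ENABLED|DISABLED>,WKGPUAFTDLOUT=<ENABLED|DISABLED>;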

Manual Switching Request


A manual switch is a user-generated, non-latching (no associated state) protection switch request. A
manual switch command results in a protection switch if there are no higher priority requests in effect on a
standby PU. If a higher priority request is in effect, the manual switch link is grayed out in the user
interface.

Wait to Restore Request


A Wait to Restore request is a system-generated request that is issued when a network or service state request clears and the PG is provisioned for revertive operation. The Wait to Restore request uses a provisionable timer that begins counting when the network/service request clears, and the traffic reverts back to the preferred route upon expiration of the timer.
Datapath Protection Groups configured for revertive protection will automatically revert the service back to
its original path after the wait to restore (WTR) timer expires. The WTR request and associated timer is
initialized and begins counting when all higher priority network or system requests are cleared. The WTR
timer is provisionable between 5 and 120 minutes (in 1-minute increments, with a default of 5 minutes).
Clearing of user-initiated requests (Manual and Lockout) does not initiate a WTR request, and therefore the traffic does not automatically revert to the working route when these requests are cleared.

Protection Switch Alarm Reporting


The Protection Switch Alarm Reporting feature is enabled at the network element level. (In TL1, this is
called Enhanced Protection Switching and is enabled or disabled via the ENHPROTSW parameter in the
ED-SYS command.)
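As a hedged illustration, enabling the feature in TL1 uses the ED-SYS command and ENHPROTSW parameter named above; the TID, AID, CTAG, and value keywords shown below are placeholders only and should be confirmed against the TL1 reference for this release.

    ED-SYS:<TID>:<AID>:<CTAG>:::ENHPROTSW=<ENABLED|DISABLED>;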
The network does the following when Protection Switch Alarm Reporting is enabled:
■ Indicate the protection switch request state for protection groups and tributaries involved in a manual protection switch. (The protection switch request state for manual switches is indicated by “Manual” in the tributary or PG properties.)
■ Allow the user to manually clear switching states.


■ Allow the user to start the WTR timer upon manual protection switches (Lockout/Manual).
■ Allow the user to clear standing conditions from auto switch requests.

D-SNCP through Third-Party Networks


The Fault Escalation feature is used to provide Digital SNCP protection of services that originate and
terminate on two Infinera Intelligent Transport Networks, but are separated by a third-party WDM or
SONET/SDH network. Faults from the foreign network are escalated at the encapsulation point of the
Infinera network, enabling Digital SNCP protection of services transmitted through disjoint Infinera
Intelligent Transport Networks.
Without Fault Escalation, client faults are mapped into a valid wrapper and are therefore transparent to
the terminating node, and so will not trigger a protection switch. The Fault Escalation feature enables the
node interface at the terminating edge of the network to escalate a client failure into an AIS failure at the
terminating point, hence allowing 1 Port Digital SNCP protection switching to function seamlessly through
the third party network. See 1 Port D-SNCP on page 4-126 for more information.

Client Signal Fail (CSF) as a Protection Trigger


Client Signal Fail (CSF) is an OTN/G.709 defined indicator in the ODU path overhead that conveys to
downstream equipment that a non-OTN client signal has failed. The DTN-X can use the CSF indicator as
a protection switching trigger. The CSF indicator may be generated by the DTN-X at signal ingress or by
other third-party OTN equipment.
The user can set up the CSF trigger independently for the working PU and the protection PU, meaning
that CSF can be used as a protection switch trigger on one PU and not the other. For example, as shown
in the figure below, if one PU is transported across the network using interfaces on Infinera line modules
only, that PU should not use CSF as a protection switch trigger. However, if the other PU does traverse
native non-OTU clients, CSF should be used as a protection switch trigger.
The figure shows an example configuration where CSF as a protection trigger is enabled over a
protection unit in order to detect faults and trigger a protection switch when traversing native, non-OTU
clients.


Figure 4-91 Using CSF as a Protection Trigger over Third-Party Networks

Note the following about using CSF as a trigger for protection switching:
■ CSF is supported as a switch trigger only for endpoints on an XTC chassis of a DTN-X running
Release 10.0 or higher.
□ See Figure 4-92: Protection Switching for Mixed DTN/DTN-X Network on page 4-139 below for protection switch handling in networks with DTNs and DTN-Xs via back-to-back TIM-TAM connections.
■ CSF is supported as a switch trigger for both 1 Port D-SNCP and 2 Port D-SNCP.
■ CSF can be enabled as a switch trigger even after the protection group is created.
■ CSF can be enabled as a switch trigger on a per-PU basis (working PU only or protect PU only), or
for both PUs by setting the following values for the protection group:
□ Work—Use CSF as switch trigger only on work PU.
□ Protect—Use CSF as switch trigger only on protect PU.
□ Enabled—Use CSF as switch trigger on both work and protect PUs.
□ Disabled—Do not use CSF as a switch trigger on any PU.
■ CSF as a trigger for protection switching is not supported for services with ODUki and ODUflex network mapping types.
CSF is supported only for endpoints on an XTC chassis of a DTN-X. However, for network configurations that use both DTNs and DTN-Xs with back-to-back TIM-TAM connections as shown in Figure 4-92: Protection Switching for Mixed DTN/DTN-X Network on page 4-139, CSF is used as a protection switch trigger for endpoints on the DTN-X, and DTP-AIS is used as a protection switch trigger for endpoints on the DTN.

Note: Network configurations with back-to-back TIM2-TAM connections are not supported.

Figure 4-92 Protection Switching for Mixed DTN/DTN-X Network

For example, with the fiber break shown between DTN B and DTN-X D in Figure 4-92: Protection Switching for Mixed DTN/DTN-X Network on page 4-139:
■ The interface labeled “D1” on DTN-X D detects the fiber break as a client failure and DTN-X D sets
the CSF indicator to “1”.
■ If the PU on the interface labeled “F1” on DTN-X F is configured to use CSF as a protection switch
trigger, then the incoming CSF indicator will trigger a protection switch.
■ At the other (DTN) side of the network, if the interface labeled “B2” on DTN B is configured for fault
escalation, DTN B will escalate the client failure into a DTP.AIS failure, which triggers a protection
switch at DTN A.

ODUk AIS for ODUk Encapsulated Clients


For native non-OTN services originating on an MTC/DTC that are ODUk encapsulated (e.g., OC-192 to
ODU2 encapsulation, see OTN Adaptation Services on page 4-21), in a network with DTNs and DTN-Xs
as shown in Figure 4-93: ODUk AIS for ODUk Encapsulated Clients in Mixed DTN/DTN-X Network on
page 4-140, OTUk client ports on the MTC/DTC can be configured to generate ODUk AIS so that failures
on the DTN side of the network can be detected on the DTN-X side of the network to trigger a protection switch at the far end. See Encapsulated Client Disable Action on Egress (DTN) on page 3-48 for information on all of the supported encapsulated client disable actions.
In the case shown in Figure 4-93: ODUk AIS for ODUk Encapsulated Clients in Mixed DTN/DTN-X Network on page 4-140, the interface labeled “B2” on DTN B is configured for an Encapsulated Client Disable Action of “ODUk AIS”. So in the case of a failure on the DTN side, DTN B would generate an ODUk AIS signal on client port B2 toward DTN-X D, thus triggering a protection switch at DTN-X F. (Note that if a failure occurs on the DTN-X side, ODUk AIS is sent from the DTN-X side to the DTN side of the network, which prompts the DTN to trigger a protection switch on the DTN side.)

Figure 4-93 ODUk AIS for ODUk Encapsulated Clients in Mixed DTN/DTN-X Network

For information on the TAMs and service types that support ODUk AIS for ODUk Encapsulated Clients,
see Encapsulated Client Disable Action on Egress (DTN) on page 3-48.

D-SNCP Automatic Alarm Adjustment


If a client or tributary termination point experiences a service-affecting failure with severity Critical, Major, or Minor, but a protection switch successfully occurs in association with the failure, the alarm for the facility is adjusted to Minor and non-service affecting. A similar adjustment is automatically applied if there is a failure associated with a standby PU (in which case no protection switch is required). This automatic alarm adjustment avoids false reporting of critical alarms, ensuring that critical alarms are reported only if service is down after a protection switch fails.

Dynamic GMPLS Circuit Restoration


IQ NOS supports automatic restoration of GMPLS-provisioned SNCs upon detection of a datapath failure.
Figure 4-94: Dynamic GMPLS Circuit Restoration on page 4-141 illustrates at a high level how this feature operates. Under normal operation, the state of each SNC is maintained by a signaling protocol,
and traffic is carried along a ‘working route’. When a datapath failure occurs, all the impacted SNCs
automatically detect the failure at their endpoints. At the source node, circuits configured for restoration
are automatically re-signaled along a different, functional path, called the ‘restoration route’.
If multiple circuits are impacted simultaneously, the circuits are restored in a sequence based on the user-
configured restoration priority level assigned to each SNC (see Restoration Priority on page 4-142).

Note: Dynamic GMPLS circuit restoration only applies to SNCs; it does not impact manual cross-
connects.

Note: See DTN-X Service Capabilities on page A-1 for a full list of the DTN-X services and
modules that support GMPLS restoration.

Note: Dynamic GMPLS circuit restoration is not supported for SNCs in which the source endpoint is a
receive electrical TOM (TOM-1.485HD-RX or TOM-1.4835HD-RX) and the destination endpoint is a
transmit electrical TOM (TOM-1.485HD-TX or TOM-1.4835HD-TX).

Figure 4-94 Dynamic GMPLS Circuit Restoration

SNCs that are configured for restoration can also be configured for reversion to the original working path,
as described in the sections below.

Warning: Traffic Disruption Risk


When an SNC is restored along an alternate path, note that the network elements
associated with the SNC (including the local, remote, and intermediate network elements
used for the alternate path as well as those used for the original path) will have updated
their databases accordingly. However, until a backup operation is performed, the backup
databases on these network elements will not contain information for the restored
alternate path of the SNC. This must be taken into consideration before performing a
database restoration on any of these network elements. Traffic will be affected if an older
database is restored after an SNC auto restoration has occurred.


Restoration Priority
SNCs can be configured with a restoration priority value from 0-7 to be used by GMPLS in the case of
SNC restoration. At the network element level, the restoration priorities 0-7 can be associated with a hold-
off timer setting from 0 to 86400 seconds (i.e., 24 hours), so that restoration priority levels can affect the
order in which GMPLS attempts to restore SNCs.
For example, if the restoration hold-off timer for restoration priority level 3 is set to 40 seconds, GMPLS
will wait 40 seconds before attempting to restore all SNCs with priority level 3; if the hold-off timer for
priority level 0 is set to 0 seconds, GMPLS will immediately attempt to restore all priority 0 SNCs. The time at which each restoration is initiated is therefore determined by the hold-off timer value associated with the SNC’s restoration priority.

Operational Details
When Dynamic GMPLS SNC Restoration is triggered (see Automatic Restoration Triggers on page 4-144
for a complete list of trigger events), the network element at the source endpoint takes the following
actions to restore the impacted circuit:
■ The network element tries to determine if the fault occurred at the destination network element or
source network element:
□ If the source node detects a line-side fault, it will try to restore the circuit irrespective of
whether the fault is at a source, intermediate, or destination node.
□ If the detected fault occurred at either a source or destination tributary/client, restoration will
not be attempted.
□ If the detected fault occurred at either a source or destination line module or TEM,
restoration will be attempted if the fault is attributed to a network fault on the SNC.

Note: Locking the source line module of an SNC does not trigger GMPLS restoration, but locking the destination line module does trigger a GMPLS restoration attempt. Restoration will be continuously attempted until the fault is cleared in the destination line module that houses the tributary.
■ Based on the restoration priority assigned to the SNC and the hold-off timer value associated
with the priority, GMPLS will wait the duration of the hold-off time before attempting to restore
the SNC (see Restoration Priority on page 4-142).
■ The source network element releases the SNC (if it is not already in a released state) and
begins to compute the restoration route. GMPLS attempts to restore SNCs as follows:
□ If the SNC has been configured to use Preferred Restoration route information, GMPLS
will use the configured inclusion/exclusion lists, along with other regular constraints, to
configure the restoration route for the SNC.
□ If either Preferred Restoration route information is not specified for the SNC, or a route
cannot be computed with the Preferred Restoration route information, GMPLS will
compute a diverse route in the following sequence:


1. GMPLS attempts to find a restoration path that is node and fiber diverse from the
protect path.
2. If a restoration path that is node and fiber diverse from the protect path cannot be
computed, GMPLS attempts to find a restoration path that is fiber diverse from protect
path.
3. If a restoration path that is node and fiber diverse or only fiber diverse from the protect path cannot be computed, GMPLS attempts to find a restoration path where at least one fiber link is diverse from the entire protect path.
4. If a restoration path with at least one fiber link diverse from the protect path cannot be
computed, the SNC enters a set up failed state. GMPLS again attempts to find a
restoration path by following the above sequence after some time. The time between
attempts to find a restoration path in this case increases based on the number of
attempts.
■ Event logs are generated when SNC restoration starts and completes. If the SNC fails restoration, an SNCFAIL alarm is declared, and the SNC’s operational state is set to disabled. The network element then proceeds with normal SNC setup retry procedures. When the SNC is successfully restored, the SNCFAIL alarm is cleared, and the SNC’s operational state is set to enabled.

Note: In the event of a removal or failure of the TAM/TIM/TIM2/TOM at the destination node,
restoration will be continuously attempted until the equipment is replaced.
■ Infinera nodes support a Restore Path Active (RESTPATHACTIVE) condition for auto-
restorable SNCs (both revertive and non-revertive) that indicates when the SNC has been
restored by GMPLS to a route other than the working route. Reporting for this alarm is disabled
by default. When reporting for this alarm is enabled, the alarm is raised on the local end of the
SNC when an unlocked, auto-restorable (either revertive or non-revertive) SNC is on a route
other than the configured working path; the alarm is cleared when the SNC is reverted back to
the original working path (for revertive SNCs), when the SNC is locked, or if the SNC is
converted from restorable to unprotected.
■ If the SNC was configured for automatic reversion, the originally configured working path of the
SNC is maintained and is continuously monitored for its health by checking the fault bits and
equipment state.
□ Once the original working path of the SNC demonstrates ten fault-free seconds, the SNC
goes into the Wait to Restore (WTR) state. GMPLS will continue to monitor the original
working path for the time configured in the Wait to Restore timer.
□ If the original working path of the SNC shows no faults during the Wait to Restore time,
traffic is switched back to the original working path (bidirectionally on both the Local and
Remote ends) and the path used for restoration is deleted.

Note: For SNCs using LM-80 OCH TE link, all nodes traversed by the SNC must be running software
that is Release 7.0 or higher.


Note: If there is a Pre-FEC Signal Degrade condition on a super channel, then the TE link bandwidth
is reduced.

Note: To manually revert an SNC back to its original working route, see Manual Operations on
Restorable SNCs on page 4-145.

Automatic Restoration Triggers


All of the following defects trigger the automatic re-signaling of SNCs configured for dynamic GMPLS
SNC restoration:
■ For SNCs with endpoints on MTC/DTC:
□ LOF-DTP
□ BER-based Signal Fail based on post-FEC DTP BIP calculations.
□ AIS-DTP
□ BDI-DTP
□ Post-FEC-BER based on DTS calculations that is raised as an alarm on channel CTP.
■ For SNCs with endpoints on XTC:
□ ODU_RX_AIS
□ ODU_RX_BDI
□ ODU_RX_LCK
□ ODU_RX_OCI
□ ODU_TX_AIS
□ ODU_TX_BDI
□ ODU_RX_LOF
□ ODU_RX_LOMF
□ ODU_RX_SD
□ AIS-ODUk-iPATH
□ BDI-ODUk-iPATH

Note: An SNC is also automatically restored if one of the cross-connects in its route is manually
released.

Note: Dynamic GMPLS SNC Restoration is primarily designed to provide traffic restoration utilizing
available alternate route bandwidth in the event of a fiber cut or module failure/removal. Performing a
BMM reseat or cold reset will trigger the restoration process. Due to the additional BMM boot time
requirements associated with these actions, local node SNC restoration may be delayed until the boot
process is completed.


Provisioning Considerations
To provision Dynamic GMPLS SNC Restoration, users set the following attributes during SNC
provisioning:
■ Auto-Restore - Set the value of this attribute to ‘yes’ to enable Dynamic GMPLS SNC Restoration
for an SNC. This attribute may be modified at any time.
■ Auto-Reversion - Enable automatic reversion to revert the restorable SNC back to its original
working path after a restoration event.
■ Use Preferred Restoration Route Info - Check this option to configure the inclusion and exclusion
list that should be used as a first option when restoring the SNC. This attribute may be modified at
any time. Preferred Restoration constraints take effect only if auto restoration is enabled.
■ Priority - Set the priority value from 0-7. At the network element level, each priority level is assigned a hold-off timer value to indicate how long GMPLS should wait before attempting to restore the SNC (see Restoration Priority on page 4-142). The priority attribute can be set only for SNCs enabled with auto-restore. The default value for the priority attribute is zero. The priority attribute may be modified at any time, even after creation of the SNC.

Note: In the case of a back-to-back link between a Gen 3 10G TIM and a Gen 4 10G TIM (TIM-5-10GM/GX or TIM2-18-10GM/GX), when specifying the inclusion list for that particular link, both the instance ID and the timeslots must be selected.

Manual Operations on Restorable SNCs


Because restoration routes are not defined prior to a datapath failure, there are no operational
equivalents to the Lockout and Manual Switching Requests supported by D-SNCP digital path protection.
(See Switching Hierarchy and Criteria on page 4-134 for more information on the D-SNCP switching
requests.) However, the user may effect the following manual operations on restorable SNCs:
■ Disable Dynamic GMPLS SNC Restoration - Users can disable the auto-restore feature by setting
the SNC to “Unprotected,” or by putting the SNC into maintenance state. This operation is useful
when performing diagnostics or other maintenance operations on the restoration route.
■ Manual Reversion - There are two cases in which an SNC can be manually reverted to its working
path after a restoration event:
□ If the SNC is not configured for automatic reversion, a manual reversion can be performed (note that in this case there would be no existing working path). Once the manual revert command is issued, a new working path is created and monitored for 10 fault-free seconds. After 10 fault-free seconds, traffic is reverted to the working path and the restoration path is torn down.
□ The Wait to Restore period starts only when the working path becomes fault free. When WTR is in progress, a manual revert command can be issued so that traffic reverts to the working path immediately, after which the restoration path is torn down.


Multi-layer Recovery in DTN-X


The two protection schemes, Digital Subnetwork Connection Protection (D-SNCP) and Dynamic GMPLS
Circuit Restoration, can be used in combination so that the Working PU or Protect PU of a 1 Port D-SNCP
in a DTN-X network can be configured as GMPLS restorable. Therefore, an SNC can be protected by
both D-SNCP and by GMPLS auto-restoration to provide multi-layer recovery in case of multiple network
failures. At any point, two healthy paths are available when Multi-layer Recovery is set up.
The DTN-X supports Multi-layer Recovery for 1 Port D-SNCPs. The work path and protect path SNCs in a
1 port D-SNCP can be configured as revertive or non-revertive, restorable SNCs.
IQ NOS supports Multi-layer Recovery for:
■ 1 Port D-SNCPs through TIM2-2-100GM, TIM2-2-100GX on OTM-1200 carrying 100GbE services
or ODU4 switching services over OFx-1200 line modules.
■ 1 Port D-SNCPs through TIM2-18-10GM and TIM2-18-10GX on OTM-1200 carrying 10GbE LAN
services, ODU2/ODU2e switching services or SONET OC-192/SDH STM-64 services over
OFx-1200 line modules.
With multi-layer recovery in DTN-X,
■ the Working protection unit (PU) in D-SNCP can be configured as restorable. So if a D-SNCP’s
Working PU experiences a fault, the SNC will switch to its Protect PU and additionally, GMPLS will
set up a restoration path for the Working PU. Therefore, if a subsequent fault occurs on the Protect
PU while the original Working PU is still in a fault state, the traffic will switch back to the restoration
route of the Working PU.
■ the Protect PU in the D-SNCP can be configured as restorable. So, if a D-SNCP's Protect PU
experiences a fault, GMPLS will set up a restoration path for the Protect PU. If a subsequent fault
occurs on the Working PU while the original Protect PU is still in a fault state, the traffic will switch
to the restoration route of the Protect PU.
When the work and protect SNCs in the 1 Port D-SNCP are configured as revertive, restorable SNCs and a
fault occurs on the Working or Protect PU, GMPLS sets up a restoration path for the Working or Protect PU
as applicable. The originally configured working or protect path of the SNC is maintained and is
continuously monitored for its health by checking the fault bits and equipment state. Once the fault on
the working or protect route (as applicable) is cleared and the route demonstrates a fault-free condition, a
Wait to Restore (WTR) timer is started for the restorable SNC for auto-reversion. If the original working or
protect PU of the SNC shows no faults during the Wait to Restore time, traffic is switched back to the
original working or protect PU as applicable (bidirectionally on both the Local and Remote ends) and the
path used for restoration is deleted.
When the work and protect SNCs in the 1 Port D-SNCP are configured as non-revertive, restorable SNCs and
a fault occurs on the Working or Protect PU, GMPLS sets up a restoration path for the Working or Protect
PU as applicable. This restoration path then becomes part of the 1 Port D-SNCP and the originally
configured working or protect PU of the SNC is deleted.
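The contrast between the revertive and non-revertive cases can be captured in a brief sketch. The Python fragment below is illustrative only; the dsncp and gmpls objects and all of their methods are hypothetical stand-ins for the behavior described in the two paragraphs above.

# Minimal sketch (not Infinera code) of handling a Working PU fault for revertive
# versus non-revertive restorable SNCs in a 1 Port D-SNCP.
def handle_working_pu_fault(dsncp, gmpls, revertive):
    # The D-SNCP switches traffic to the Protect PU ...
    dsncp.switch_to(dsncp.protect_pu)
    # ... and GMPLS sets up a restoration path for the faulted Working PU.
    restoration_path = gmpls.setup_restoration_path(dsncp.working_pu)

    if revertive:
        # Revertive: the original working path is kept and monitored via its
        # fault bits and equipment state; once it is fault free, a WTR timer runs.
        dsncp.monitor(dsncp.working_pu)
        dsncp.start_wtr_when_fault_free(dsncp.working_pu)
        # On WTR expiry, traffic reverts to the original Working PU (bidirectionally)
        # and the restoration path is deleted.
        dsncp.on_wtr_expiry(revert_to=dsncp.working_pu, delete=restoration_path)
    else:
        # Non-revertive: the restoration path becomes part of the 1 Port D-SNCP
        # and the originally configured Working PU is deleted.
        dsncp.replace_pu(old=dsncp.working_pu, new=restoration_path)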

Note: While creating a multi-layer recovery service on DTN-X, it is recommended to do the following:
■ Create a Work SNC, ensure it is the working route and set the work SNC to maintenance state
■ Create the 1-port PG


■ Create the Protect SNC


■ Change the state on the work SNC from maintenance to unlocked state

Note: Prior to deleting or re-configuring a multi-layer recovery service to a revertive restorable SNC,
ensure that the work SNC of the multi-layer recovery service is the working route and then delete the
1-port PG.

The following sections illustrate Multi-layer recovery:
■ Multi-layer Recovery for Revertive PG with Non-Revertive Restorable SNCs on page 4-147
■ Multi-layer Recovery for Revertive PG with Revertive Restorable SNCs on page 4-148
■ Network Resiliency against Multiple Failures on page 4-152

Multi-layer Recovery for Revertive PG with Non-Revertive Restorable SNCs


Figure 4-95: 1 Port DSNCP with non-revertive restorable SNC: Failure on Work path on page 4-147 to
Figure 4-97: 1 Port DSNCP with non-revertive restorable SNC: Work path is deleted on page 4-148
illustrate how multi-layer recovery works to protect traffic in the case of a revertive 1 Port D-SNCP PG
deployed with non-revertive restorable SNCs as the Working PU and Protect PU.

Figure 4-95 1 Port DSNCP with non-revertive restorable SNC: Failure on Work path

A failure takes place on the work path (W-PU) in the above sample network configuration.


Figure 4-96 1 Port DSNCP with non-revertive restorable SNC: Switch to protect path on failure of work
path

On work PU (W-PU) failure, the SNC switches to the Protect path (P-PU). GMPLS will set up a restoration
path (W'-PU) for the Working PU.

Figure 4-97 1 Port DSNCP with non-revertive restorable SNC: Work path is deleted

The originally configured working path (W-PU) of the SNC is deleted as the SNC is non-revertive. The
protect path (P-PU) is the active path and the restoration working path (W'-PU) is on standby.

Multi-layer Recovery for Revertive PG with Revertive Restorable SNCs


Figure 4-98: 1 Port DSNCP with revertive restorable SNC: Failure on Work path on page 4-149 to
Figure 4-102: 1 Port DSNCP with revertive restorable SNC: Delete work restoration path on page 4-151
illustrate how multi-layer recovery works to protect traffic in the case of a revertive 1 Port D-SNCP PG
deployed with revertive restorable SNCs as the Working PU and Protect PU.


Figure 4-98 1 Port DSNCP with revertive restorable SNC: Failure on Work path

A failure takes place on the work path (W-PU) in the above sample network configuration.

Figure 4-99 1 Port DSNCP with revertive restorable SNC: Switch to Protect PU on failure of Working path

On work PU (W-PU) failure, the SNC switches to the Protect path (P-PU). GMPLS will set up a restoration
path for the Working PU. The originally configured working path (W-PU) of the SNC is maintained and is
continuously monitored for its health by checking the fault bits and equipment state.


Figure 4-100 1 Port DSNCP with revertive restorable SNC: Switch to Work Restoration path on failure of
Protect path

If a subsequent fault occurs on the Protect PU while the original Working PU is still in a fault state, the
traffic switches back to the restoration route of the Working PU (W'-PU).

Figure 4-101 1 Port DSNCP with revertive restorable SNC: Reversion to healed Work Path

Once the fault on the work path (W-PU) is cleared and a fault-free state is maintained, a Wait to Restore
(WTR) timer is started for the restorable SNC for auto-reversion, and traffic is switched back to the original
working PU (i.e. from W'-PU to W-PU) after the WTR expires.


Figure 4-102 1 Port DSNCP with revertive restorable SNC: Delete work restoration path

The work restoration path (W'-PU) is deleted. The work PU (W-PU) is active and the Protect PU (P-PU)
has a failure.

Figure 4-103 1 Port DSNCP with revertive restorable SNC: Protect path failure

Since the Protect PU (P-PU) is failed, GMPLS computes a restoration path (P'-PU) for the Protect PU.


Figure 4-104 1 Port DSNCP with revertive restorable SNC: Work and Protect Path failure

On a subsequent work path (W-PU) failure (when the protect path is also failed), traffic switches from the
work path (W-PU) to the protect restoration path (P'-PU). A work restoration path (W'-PU) is created and
is on standby.

Network Resiliency against Multiple Failures


Multi-layer Recovery provides resiliency against multiple failures by performing restorations over a 1 Port
D-SNCP. The number of restorations that can be performed is configurable by the user from the
management interfaces.
The Maximum MLR Restoration Attempts attribute on the management interfaces sets the number of
restoration attempts in a DTN-X Multi-layer Recovery configuration. This attribute is applicable only to
protection groups created on XTCs and sets the restoration attempts for revertive SNCs only.
The Maximum MLR Restoration attempts can be configured to:
■ A value 1 to 6: Restricts the number of restoration attempts to a value in the range of 1 to 6
■ A value NA: Sets no restriction on restoration attempts. The failed work and/or protect path is
restored until no fault-free path is available.
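As a hedged illustration of how this attribute constrains restoration, the following Python sketch (not product code; the helper name is hypothetical) checks whether another restoration attempt is allowed for a configured value of 1 to 6 or NA.

# Minimal sketch of interpreting the Maximum MLR Restoration Attempts attribute:
# 1-6 caps the number of restoration attempts, while NA removes the restriction.
def restoration_allowed(max_attempts, attempts_so_far):
    """Return True if another restoration attempt may be made."""
    if max_attempts == "NA":               # NA: no restriction on restoration attempts
        return True
    if not 1 <= int(max_attempts) <= 6:
        raise ValueError("Maximum MLR Restoration Attempts must be 1-6 or NA")
    return attempts_so_far < int(max_attempts)

# Example: with the attribute set to 3, a fourth restoration is not attempted.
assert restoration_allowed(3, 2) is True
assert restoration_allowed(3, 3) is False
assert restoration_allowed("NA", 10) is True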
The figure below illustrates Multi-layer Recovery for four failures in a network. The following terms are
used in the figure:
■ W- Indicates the working path
■ P- Indicates the protect path
■ W'- Indicates the work restoration path
■ P' - Indicates the protect restoration path


Figure 4-105 Multi-Layer Recovery in DTN-X illustrated with four fiber cuts in a sample network

Note the following about Multi-layer recovery on DTN-X:


■ Independent WTR timers can be configured on the 1 Port D-SNCP protection group and the
revertive SNC.
■ Manual switch operation is supported for both Working and Protect PUs for 1 Port D-SNCP.
■ Any routing constraints are honored during restoration as part of multi-layer recovery.
■ The switch time is not affected by the multi-layer recovery schemes, and the switch is completed
within 50ms.
■ For D-SNCP using the TOM-100G-LR4, protection switch times may exceed 50ms.

Fast Shared Mesh Protection (FastSMP™)


For services with endpoints on the XTC-10 and XTC-4, IQ NOS supports Fast Shared Mesh Protection
(FastSMP). Shared Mesh Protection is a protection service mechanism that allows protection groups to
share protect path resources across multiple protection groups. Infinera’s FastSMP supports protection
switching in less than 50ms.


Note: See DTN-X Service Capabilities on page A-1 for the specific services that support FastSMP.

Note the following for FastSMP:


■ FastSMP is not supported for VCAT services.
■ FastSMP is supported for SLTE links as well as terrestrial links.
■ A protection time of sub-50ms for a single failure is supported for protection paths of up to 2000km.
The protection time beyond 2000km is 50ms plus the speed-of-light roundtrip delay for the distance
beyond 2000km. For example, if the distance is 3000km, the protection switching time is 50ms
+ the 10ms roundtrip delay for the extra 1000km = 60ms (see the worked example after this list).
■ FastSMP is a licensed feature. A license is required for each TIM and each line module that is used
in a FastSMP working or protection path.
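The switching-time rule in the protection-time bullet above can be expressed as a short worked example. The sketch below assumes roughly 5 microseconds of fiber propagation delay per kilometer, which reproduces the 3000km example (50ms + 10ms = 60ms); it is an illustration, not product code.

# Worked example (illustrative only) of the FastSMP switch-time rule above.
FIBER_DELAY_US_PER_KM = 5          # assumed one-way propagation delay per km of fiber

def fastsmp_switch_time_ms(protect_path_km):
    """Sub-50ms up to 2000km; beyond that, add the round-trip delay of the excess."""
    if protect_path_km <= 2000:
        return 50.0
    excess_km = protect_path_km - 2000
    roundtrip_ms = 2 * excess_km * FIBER_DELAY_US_PER_KM / 1000.0
    return 50.0 + roundtrip_ms

print(fastsmp_switch_time_ms(3000))   # 60.0 ms, as in the example above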
The following sections describe the provisioning and functioning of FastSMP:
■ FastSMP Resource Sharing on page 4-154
■ Preemption for Prioritized FastSMP Services on page 4-156
■ Multiple Protect Paths for a FastSMP Protection Group on page 4-157
■ Provisioning FastSMP on page 4-158
■ FastSMP Operations on page 4-160
■ Switch Request Priorities on page 4-162
■ FastSMP Protection Switching Events/Alarms on page 4-163
■ FastSMP for FlexILS SLTE Links on page 4-164
■ Manually Configured Shared Risk Resource Group (SRRG) on page 4-165

FastSMP Resource Sharing


Unlike GMPLS Circuit Restoration (see Dynamic GMPLS Circuit Restoration on page 4-140), which does
not compute a protection path until a failure occurs, FastSMP pre-computes the protection paths at the time
of FastSMP protection group configuration. The protection path bandwidth along the path is not used until
the protect path becomes active upon a work path failure.
With FastSMP protection, transport network bandwidth (ODU timeslots) is marked as a protection
resource without consuming actual bandwidth until the protection path is activated. Because the bandwidth
on the protection path is marked but not actually taken until a protection switch is required, the resources
on the protection path can be shared by multiple paths as protection bandwidth.

Note: For performance reasons, an overbooking ratio of 10 is recommended: for any network
resource (e.g., link bandwidth), the total protection bandwidth configured on the resource should be a
maximum of 10 times the actual available bandwidth. For example, if a link has 500Gbps of capacity,
that link should be provisioned for no more than 5Tbps (10x500Gbps) of total protection bandwidth.
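The recommended overbooking limit can be checked with simple arithmetic. The following sketch is illustrative only and simply encodes the 10:1 guideline from the note above.

# Illustrative check (not product code) of the recommended 10:1 overbooking ratio
# for FastSMP protection bandwidth on a link.
RECOMMENDED_OVERBOOKING_RATIO = 10

def max_protection_bandwidth_gbps(link_capacity_gbps):
    return RECOMMENDED_OVERBOOKING_RATIO * link_capacity_gbps

# A 500Gbps link should carry no more than 5Tbps of configured protection bandwidth.
assert max_protection_bandwidth_gbps(500) == 5000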

Figure 4-106: FastSMP Working Paths Sharing Protection Resources on page 4-155 shows an example
network with two FastSMP protection groups using a shared protection resource (e.g., timeslots).


Figure 4-106 FastSMP Working Paths Sharing Protection Resources

Each logical protection path is configured to register network resources when it is established, but no
actual protection bandwidth is consumed until the protection path is activated. Link states (availability of
protection bandwidth and paths) are maintained on the network elements. If a working path incurs a fault,
traffic is switched to the protection path and only then is the bandwidth on the protection path activated,
as shown in Figure 4-107: FastSMP Activated Protection Path on page 4-155, in which a fault occurs on
Working Path #1 and traffic is moved to Activated Protection Path #1.

Figure 4-107 FastSMP Activated Protection Path

As shown in Figure 4-107: FastSMP Activated Protection Path on page 4-155, if a network resource is
used as a protection resource for multiple FastSMP protection groups, the resource can be used by any


of the protection groups that might need it. This optimizes the bandwidth used by FastSMP protection
services.
When a shared protection path resource is activated as in Figure 4-107: FastSMP Activated Protection
Path on page 4-155, the node that detects the failure sends an SMP activation protocol message to the
head-end indicating the failure on the path. On receiving the failure message, the head-end node selects
the least-cost protect path in the protection group and activates the protect path end-to-end by sending an
SMP activation protocol message. SMP protection switching is bidirectional (i.e., the head-end will not
select the activated protect path until it receives confirmation through the SMP activation protocol that the
tail-end has switched).
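The activation exchange described above can be sketched as follows. This Python fragment is illustrative only; the node objects, their methods, and the message names are hypothetical placeholders for the SMP activation protocol behavior described in the text.

# Minimal sketch (not Infinera code) of the bidirectional SMP activation flow.
def on_failure_detected(detecting_node, head_end):
    # The node that detects the failure reports it to the head-end.
    head_end.receive("SMP_ACTIVATION_FAILURE_REPORT", detecting_node.failed_path)

def head_end_activation(head_end, tail_end, protection_group):
    # The head-end picks the least-cost protect path in the protection group ...
    protect_path = min(protection_group.available_protect_paths(), key=lambda p: p.cost)
    # ... and activates it end-to-end with an SMP activation message.
    tail_end.receive("SMP_ACTIVATION_REQUEST", protect_path)
    # Bidirectional switching: the head-end selects the protect path only after
    # the tail-end confirms that it has switched.
    if tail_end.confirmed(protect_path):
        head_end.select(protect_path)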

Preemption for Prioritized FastSMP Services


As described above, a network resource can be used as a protection resource for multiple FastSMP
protection groups. If the protection resource is not currently used, it is available to any of its associated
protection groups in case of a fault, as shown in Figure 4-107: FastSMP Activated Protection Path on
page 4-155. However, in order to ensure that the protection resources are available to high-priority traffic,
each FastSMP protection group is configured as high priority or low priority.
High priority traffic is able to preempt low priority traffic in the case where the high priority FastSMP
protection group incurs a fault and requires a shared protection resource in use by a low priority FastSMP
protection group.
Each FastSMP protection group maintains the list of available protection paths in priority order based on
protection type, lowest cost, etc. In case of failure on a high priority work/protect path, FastSMP picks the
lowest cost path even if it is being occupied by another lower priority circuit.
For example, as shown in Figure 4-108: FastSMP Preempting Lower Priority Protection Group on page 4-
157, when a low-priority protection group activates a shared protection resource and a high-priority
protection group subsequently requires that protection resource, the lower-priority traffic is preempted to
protect the high-priority traffic. In Figure 4-108: FastSMP Preempting Lower Priority Protection Group on
page 4-157, FastSMP protection group A is low priority, so when FastSMP protection group B experiences
a fault on the working path, the protection resources are allocated to FastSMP protection group B.
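The preemption rule can be sketched as a simple path-selection function. The fragment below is illustrative only; the path and protection group objects and their methods are hypothetical, and the logic mirrors the description above: the lowest-cost protect path is chosen, and a lower-priority occupant is preempted when the requesting group is high priority.

# Minimal sketch (not Infinera code) of FastSMP preemption.
HIGH, LOW = "high", "low"

def pick_protect_path(protection_group, candidate_paths):
    usable = []
    for path in candidate_paths:
        occupant = path.current_occupant()            # None if the resource is unused
        if occupant is None:
            usable.append(path)
        elif protection_group.priority == HIGH and occupant.priority == LOW:
            usable.append(path)                       # low-priority traffic can be preempted
    if not usable:
        return None
    chosen = min(usable, key=lambda p: p.cost)        # lowest-cost path wins
    if chosen.current_occupant() is not None:
        chosen.preempt()                              # displace the lower-priority circuit
    return chosen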


Figure 4-108 FastSMP Preempting Lower Priority Protection Group

Multiple Protect Paths for a FastSMP Protection Group


In addition to sharing protection paths between FastSMP protection groups, FastSMP also supports
multiple protection paths for a single FastSMP protection group, as shown in Figure 4-109: FastSMP
Protection Group with Multiple Protection Paths on page 4-158. A FastSMP protection group can be
configured with up to 5 protection paths. This is useful in the case shown in Figure 4-108: FastSMP
Preempting Lower Priority Protection Group on page 4-157 where protection group A has been
preempted from its protection path by a higher priority protection group. In the configuration in Figure
4-109: FastSMP Protection Group with Multiple Protection Paths on page 4-158, FastSMP Group A has
protection path #1 which can be used if the traffic is preempted off of protection path #2.
In case of a fiber cut on the FastSMP Group A working path, traffic is switched to protect path #1 or #2
depending on the priority and bandwidth availability of the two protection paths. If either of the protect
paths is not in service or does not have available bandwidth, traffic is switched to the other protect path.


Figure 4-109 FastSMP Protection Group with Multiple Protection Paths

Provisioning FastSMP
FastSMP is configured using GMPLS circuits (SNCs).

Note: See DTN-X Service Capabilities on page A-1 for the specific services that support FastSMP.

To provision a FastSMP protection group:


■ The user first creates a working path via a GMPLS created SNC.

Note: All SNCs in the FastSMP protection group must originate from the head-end node of the
FastSMP (each working/protect SNC must use the head-end node as the source endpoint of
the SNC). See below for information on designating a node as head-end of a FastSMP
protection group.

■ The user then creates the FastSMP protection group using the reliable tributary termination point
AID. The FastSMP protection group must be created at both the head end and the tail end of the
service. When creating the FastSMP protection group at each end of the service, the user must
configure in the FastSMP protection group whether the supporting node is the head end or the tail
end:
□ Head End—The head end of the service, from which all protection parameters are
configured and from which user operations must be performed for the FastSMP protection
group: This includes creation of working and protect paths, protection switches, and


reversions. In addition, the head end triggers the activation of the protect path in case of
protection switches.
□ Tail End—The tail end of the FastSMP service.
■ Once the working path and FastSMP protection group are created, the user creates one or more
protection paths, which the user can configure for diversity from the working path as described
below.
FastSMP supports the following provisioning features:
■ SNC diversity: For GMPLS created SNC protection paths, the following diversity options can be
specified by the user:
□ Any—(default) The diversity is automatically configured by GMPLS. GMPLS attempts to find
a diverse path, using first end-to-end node diversity, then end-to-end fiber diversity, then
segment fiber diversity (see below for descriptions of these diversity types). GMPLS makes
three attempts to find a route with each diversity type before moving on to the next diversity
type (a sketch of this fallback order follows this list).
□ End-to-end node diverse—The working path and protection path do not include any of the
same nodes, except at the head end and tail ends of the service.

Note: The protect path will be physically diverse from the nodes of the working path, but
guaranteed protection is available only for fiber failures.

□ End-to-end fiber diverse—The working path and protection path do not include any of the
same fibers.
□ Segment node diverse—The path is protected against a subset of working path nodes, for
cases where the network topology doesn’t have a single end-to-end node diverse protection
path. (Two or more protect paths are required to cover all work path node risks.) For
segment node diverse protection, the working segment to be protected is configured when
creating the protection SNC.

Note: The protect path will be physically diverse from the nodes of the specified working
path segment, but guaranteed protection is available only for fiber failures.

□ Segment fiber diverse—The path is protected against a subset of working path fibers, for
cases where the network topology doesn’t have a single end-to-end fiber diverse protection
path. (Two or more protect paths are required to cover all work path fiber risks.) For segment
fiber diverse protection, the working segment to be protected is configured when creating the
protection SNC.
□ Custom—For FastSMP paths that are GMPLS-created SNCs, when creating the protection
SNC the user must specify in the Inclusion List field the network resources (nodes, fibers,
channels, TE interfaces, instance IDs, or time slots) that are to be included in the protect
path. Inclusion list is supported for strict route only (the end-to-end path must be provided in


the Inclusion List; path segments are not supported in Inclusion List for FastSMP protect
path SNCs).
■ SNC inclusion list down to timeslot granularity: When the custom diversity option is specified, the user
can specify the desired route down to the time slot granularity. (The user can create a list of nodes,
fibers, channels, TE interfaces, instance IDs, and time slots to be included in the protection route.)
■ Pre-calculated protection path for faster protection: All protection paths are pre-calculated or
provisioned before a failure occurs. New protection paths can be added based on user requests or
network conditions (such as when all existing protection paths are impacted or in use due to concurrent
network failures). For performance reasons, an overbooking ratio of 10 is recommended: A network
resource (e.g., link bandwidth) can be provisioned as a protection resource for up to a maximum of
10 times the actual available bandwidth. For example, if a link has 500Gbps of capacity, that link
should be provisioned for no more than 5Tbps (10x500Gbps) of total protection bandwidth for
FastSMP.
■ Support for revertive protection: FastSMP supports auto-revertive protection, wherein traffic is
switched back to the working route once the working route comes back into service and the wait to
restore (WTR) timer expires.
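For the Any diversity option described earlier in this list, the fallback order can be sketched as follows. The fragment is illustrative only; the route_with() helper is a hypothetical routing call that returns None when no route is found with the requested diversity type.

# Minimal sketch (not Infinera code) of the "Any" diversity fallback:
# three routing attempts per diversity type, in order.
DIVERSITY_FALLBACK_ORDER = [
    "end-to-end node diverse",
    "end-to-end fiber diverse",
    "segment fiber diverse",
]

def find_protect_route(route_with, attempts_per_type=3):
    for diversity_type in DIVERSITY_FALLBACK_ORDER:
        for _ in range(attempts_per_type):
            route = route_with(diversity_type)    # returns None if no route is found
            if route is not None:
                return diversity_type, route
    return None, None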

FastSMP Operations
FastSMP switch operations result in a switchover at both the head-end and the tail-end to the same path.
For revertive FastSMP protection groups, the clearing of pending switch commands results in immediate
reversion to the default work path when it is healthy.
The following automatic operations are supported for FastSMP:
■ Network and service state—A network or service-generated fault (e.g., fiber cut, equipment failures,
OLOS, etc.) is an automatic switching request based on the quality or state of the service, and on
the state of the path terminating the service. FastSMP protection applies to both unidirectional and
bidirectional failures.
■ Wait to restore (WTR) request—A wait to restore request is a system-generated request when a
work path failure clears, and the FastSMP protection group is provisioned as revertive. The WTR
request uses a provisionable timer that begins counting when the work path heals; traffic is reverted
back to the work path upon expiration of the timer. FastSMP protection groups configured for
revertive protection will automatically revert the service back to its original path after the wait to
restore (WTR) timer expires. The WTR request and associated timer is initialized and begins
counting when all higher priority network or system requests are cleared. The WTR timer is
provisionable between 5 and 120 minutes (in 1-minute increments, with a default of 5 minutes).
Clearing of user-initiated requests (manual and lockout) do not initiate a WTR request.
The following user-initiated operations are supported for FastSMP and are described in the following
sections:
■ Lockouts on page 4-161
■ Manual Switch on page 4-161
■ Forced Switch on page 4-162

Note: User-initiated operations are performed at the head-end of the FastSMP service.


Note: The priorities for FastSMP operations are described in Switch Request Priorities on page 4-
162.

Lockouts
The following user-initiated lockout operations are supported for FastSMP:
■ Lockout of protect—Applied to the FastSMP protection group, this command prevents the protect
path from becoming active under all circumstances (the user must specify which protection unit
within the protection group is to be locked out). Multiple protection paths can be locked out
simultaneously.
■ Lockout of working—Applied to the FastSMP protection group, this command prevents traffic from
being switched from the working path under all circumstances. If the working path incurs any faults,
traffic will not be switched to a protection path.
■ Clear lockout—Applied to the FastSMP protection group, this command clears any existing lockout
operations on the protection group (either lockout of protection or lockout of working). A user-
initiated Clear command removes lockout switching requests; however, network-, service- or
equipment-generated switching requests are not cleared by the Clear command.
Note the following for lockout requests:
■ Lockout requests are the highest priority user command, so a lockout request is always honored
and will overwrite any previous command in effect for the protection unit.
■ A lockout request raises an alarm on the FastSMP protection unit.
■ If the protect path being locked out by the command is currently active, a protection switch to the
other path occurs, regardless of the state of the other path (or of the traffic being carried by that
path). After the lockout-induced switch, traffic cannot be moved back to the locked-out path until
the lockout command is cleared.
■ If a failure occurs on the working path while a lockout of working is in effect, traffic cannot switch to
any of the configured protect path(s) until the lockout is cleared. Conversely, if a failure occurs on
the working circuit while a lockout of protect is in effect, traffic cannot switch to the protect circuit
until the lockout is cleared. Both cases can result in loss of traffic.

Manual Switch
A manual switch is a user-initiated command to switch from the active working path to the specified
protect path. A manual switch results in a protection switch if there are no higher priority requests in effect
on the alternative path.
Note the following for manual switch requests:
■ A manual switch request raises an alarm on the FastSMP protection group.
■ A manual switch is not allowed for switching away from a protect path.
■ A manual switch request is forgotten in the following circumstances:
□ If the manual switch cannot occur at the time of the request, the manual switch request will
be denied and disregarded.


□ If the manual switch succeeds and then a subsequent fault occurs on the specified protection
path, traffic will be automatically switched away from the protection path and the manual
switch request will be disregarded (meaning that traffic will not be switched back to the
protection path once the fault clears).
■ A manual switch request is rejected in the following circumstances:
□ If the specified protect path is in the lockout or forced switch state.
□ If a lockout of work operation is in effect.
□ If the specified protection path is not healthy.

Forced Switch
A forced switch is a user-initiated command to switch from an active path to a specified protect path.
Note the following for forced switch requests:
■ A forced switch request raises an alarm on the FastSMP protection group (any existing switch
alarm on the protection group will be cleared).
■ If the forced switch cannot be performed at the time of the request, the request is remembered by
the system until the switch can occur, or until a user clears the forced switch request.
■ A forced switch request is rejected in the following circumstances:
□ Back to back forced switches are not allowed.
□ The specified protection path is currently in lockout state.
□ A lockout of work request is currently in effect.
■ In the following cases, a forced switch request will not complete until the protection path becomes
available:
□ The specified protection path is not healthy.
□ There is a failure on the protection path or the XGCC0 control channel is not available.
□ The path is currently in use by a high priority circuit.
□ The protection path contains a link with an OTUki TTI mismatch.
□ One of the nodes along the protection path is performing an upgrade of the Fast Control
Plane (FCP).

Switch Request Priorities


If multiple switch requests are present for the FastSMP protection group, the switch requests are
performed according to the priority shown below. Request priorities apply on a per-service basis and not
across different services, meaning that a low-priority operation on a high-priority circuit would still take
precedence over a high-priority operation on a low priority path. The switch request priorities are (from
highest to lowest):
1. Lockout of protection (highest priority)
2. Signal Fail on protection path (SF-P)


3. Forced switch
4. Signal Fail on working (SF-W)
5. Manual switch
6. Wait to restore request
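The priority order above can be encoded directly, as in the following illustrative sketch (not product code), which selects the pending request that should be acted on first.

# Illustrative encoding of the FastSMP switch request priorities listed above;
# a lower number means a higher priority.
SWITCH_REQUEST_PRIORITY = {
    "LOCKOUT_OF_PROTECTION": 1,   # highest priority
    "SF_P": 2,                    # Signal Fail on protection path
    "FORCED_SWITCH": 3,
    "SF_W": 4,                    # Signal Fail on working path
    "MANUAL_SWITCH": 5,
    "WAIT_TO_RESTORE": 6,         # lowest priority
}

def highest_priority_request(pending_requests):
    """Return the pending request that should be acted on first."""
    return min(pending_requests, key=lambda r: SWITCH_REQUEST_PRIORITY[r])

# Example: a forced switch outranks a manual switch, but not SF-P.
assert highest_priority_request(["MANUAL_SWITCH", "FORCED_SWITCH", "SF_P"]) == "SF_P"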

FastSMP Protection Switching Events/Alarms


Table 4-12: Alarms and Events for FastSMP Switching Operations on page 4-163 describes the alarms
and events that are reported on the FastSMP protection group upon protection switching operations. Note
that ASPS can be used to configure the default severity of these conditions, and also to configure the
event conditions as alarms (see Alarm Severity Profile Setting (ASPS) on page 2-9).

Table 4-12 Alarms and Events for FastSMP Switching Operations


■ WKSWPR (Switch Trigger: Automatic; Alarm/Event: Event; Default Severity: Not Alarmed (NA))
Raised against the FastSMP protection group when traffic switches to a protection path, or when traffic
switches from one protect path to another protect path due to a failure or user operation.
■ WKSWBK (Switch Trigger: Automatic; Alarm/Event: Event; Default Severity: Not Alarmed (NA))
Raised against the FastSMP protection group when traffic switches back to the working path due to
automatic reversion, manual reversion, or release of a manual/forced operation in revertive mode.
■ LOCKOUTOFPR (Switch Trigger: Lockout; Alarm/Event: Alarm; Default Severity: MN)
Raised against the protect SNC for a user-initiated lockout of protect operation (standing condition).
If the active path is locked out, the traffic switches back to the work path with a WKSWBK condition, or
the traffic switches to a different protect path with a WKSWPR condition if the work path is faulty.
■ LOCKOUTOFWK (Switch Trigger: Lockout; Alarm/Event: Alarm; Default Severity: MN)
Raised against the working SNC for a user-initiated lockout of working operation (standing condition).
If traffic is on the protect path, it switches back to the work path with a WKSWBK condition.
■ FRCDWKSWPR (Switch Trigger: Forced Switch; Alarm/Event: Alarm; Default Severity: MN)
Raised against the FastSMP protection group for a user-initiated forced switch. Traffic switches to
protection if the protection path is healthy (standing condition).
■ FRCDWKSWBK (Switch Trigger: Forced Switch; Alarm/Event: Alarm; Default Severity: MN)
Raised against the FastSMP protection group for a user-initiated forced switch. Traffic switches back to
the working path if it is healthy (standing condition).


■ MANWKSWPR (Switch Trigger: Manual Switch; Alarm/Event: Event; Default Severity: Not Alarmed (NA))
Raised against the FastSMP protection group for a user-initiated manual switch when traffic switches to
the protection path, if the protection path is healthy.
■ MANWKSWBK (Switch Trigger: Manual Switch; Alarm/Event: Event; Default Severity: Not Alarmed (NA))
Raised against the FastSMP protection group for a user-initiated manual switch when traffic switches to
the working path, if the working path is healthy.

FastSMP for FlexILS SLTE Links


The DTN-X automatically uses the SLTE IAM band PTP as a fiber ID (control link) and shows it as an optical
layer resource in the digital TE link. For these links, there is no OSC control channel present. The IGCC
link is used for FastSMP activations and deactivations, as in terrestrial deployments (see Inter-node
Control Plane (IGCC)). FastSMP is supported over point-to-point (single hop) FlexILS SLTE links, which
means that a failure on the FlexILS SLTE link can be protected by a FastSMP protection path. Figure
4-110: FastSMP over FlexILS SLTE Link (Point to Point) on page 4-164 shows an example FlexILS SLTE
point to point link that can be protected by FastSMP.

Figure 4-110 FastSMP over FlexILS SLTE Link (Point to Point)

In addition to FastSMP over point to point SLTE links, FastSMP is also supported for Optical Express
over SLTE links.

Figure 4-111 FastSMP over FlexILS SLTE Links (with Optical Express)


In this configuration, GMPLS is disabled, so the user must manually configure the fiber AIDs in the SLTE
Optical Express as a shared risk resource group (SRRG) for FastSMP; see Manually Configured Shared
Risk Resource Group (SRRG) on page 4-165.

Manually Configured Shared Risk Resource Group (SRRG)


One of the preconditions for effective deployment of FastSMP service is that the protect paths of different
circuits should not share timeslots if their respective work paths are routed over TE links that share a
common resource (fiber). The resources of TE links are discovered automatically. However, in SLTE
mode, where GMPLS is disabled, the network may not be able to discover all the physical resources of the
digital TE link(s). In this scenario, when the network is not able to discover these resources, the user can
specify the resources via the management interfaces.
FastSMP shared risk resource groups (SRRGs) can be manually configured for SLTE scenarios where
GMPLS cannot automatically discover the links:
■ SLTE links that carry both ILS2 and FlexILS services multiplexed via passive Line Multiplexing
Module (LMM).
■ SLTE links with Optical Express.
In these scenarios the user can manually configure the SRRG list of the TE interface with the AID of the
last element before the SLTE fiber, which can be the following:
■ IAM/IRM OSC PTP
■ FRM OSC PTP
■ BMM OSC PTP
■ LMM LINE PTP
The SRRG list will be available in the TE link resource list and can be used while creating the FastSMP
protection path and in the inclusion/exclusion list when creating an SNC.

Optical Subnetwork Connection Protection (O-SNCP)


The Optical Protection Switch Module (OPSM) is an MTC-9/MTC-6 module that provides unidirectional
optical protection (see Optical Protection Switch Module (OPSM)). The OPSM provides a pair of optical
switches, each of which bi-casts an input signal in the transmit direction and selects one of the two inputs
in the receive direction based on the presence or absence of optical power on the input signals.
■ For SLTE applications, the OPSM can be deployed in conjunction with an IAM at FlexILS ROADM,
Optical Line Amplifier, or DTN-X with FlexILS nodes (see O-SNCP in SLTE Configurations on page
4-110).
■ For terrestrial applications, the OPSM supports tributary side protection wherein the OPSM is
deployed between an AOFx-500 and an FMM-F250/FRM-9D (see Tributary-side O-SNCP on page
4-112).
■ OPSM protection is supported for all ICE 4 line modules EXCEPT (see Figure 11).


By default, O-SNCP protection groups are non-revertive, meaning that if a fault on the working path
causes a protection switch to the protect path, traffic will not automatically revert back to the working path
once the fault on the working path clears. However, the user can configure the O-SNCP protection group
for revertive switching. For revertive protection groups, traffic will be automatically switched back to the
working path once a fault on the working path has cleared and the wait to restore (WTR) period has
elapsed. The WTR period is a soaking time that can be configured from 5 minutes to 2 days (with a
default of 120 minutes). If the OLOS condition clears and OLOS is not detected on the working path for
the WTR period, an O-SNCP protection group configured for auto-reversion will revert back to the working
path. If OLOS is detected before the WTR period is complete, the WTR timer is reset and the soak period
will not begin again until the next time the working path’s OLOS condition clears. Once the WTR is
configured, the OPSM carries out the configured WTR behavior irrespective of the controller card
availability. If the user changes the WTR value when the WTR timer is already running, the new WTR
value takes effect. In case the new WTR value is less than the WTR time already elapsed, the WTR times
out immediately and the traffic reverts back to the work path.
The OPSM optical switches support both automatic and manual protection switching, and lockout of
working and protection:
■ Automatic protection switching—If the active line port detects OLOS failure and if the standby port
is clear of both OLOS failure and of lockout from the management interfaces, the OPSM will
automatically switch to the standby port.
■ Manual switching—Each optical switch on the OPSM modules support manual switch operations
initiated by the user. A manual switch causes the OPSM to switch from the active port to the
standby port. If the standby port has been locked out and/or has an OLOS fault, the manual switch
operation is rejected.
■ Lockout of working—Prevents traffic from switching to the working port. If the working line is
currently the active route, traffic will be switched to the protect port. The traffic does not switch back
to the locked out path even if the other leg has OLOS. On clearing of lockout, the traffic will auto-
switch back to the other path if the active path has OLOS. Lockout has a higher priority than auto-
switch or manual.
■ Lockout of protection—Prevents traffic from switching to the protect port. If the protect line is
currently the active route, traffic will be switched to the working port. The traffic does not switch
back to the locked out path even if the other leg has OLOS. Lockout has a higher priority than
automatic switching or manual switching.
In addition, the OPSM supports latching: Even if the module loses electrical power, each optical switch
will remain latched in its current (active) position and continue to allow optical power to pass through as a
passive device. In this case, the OPSM rejects any switch requests until the module is back online.


Multi-layer Recovery in DTNs


The two protection schemes, Digital Subnetwork Connection Protection (D-SNCP) and Dynamic GMPLS
Circuit Restoration, can be used in combination so that the Working PU of a D-SNCP (either 2 Port D-
SNCP or 1 Port D-SNCP) can be configured as GMPLS restorable. Therefore, an SNC can be protected
by both D-SNCP and by GMPLS auto-restoration to provide multi-layer recovery in case of a network
failure.
The DTN supports Multi-layer Recovery for 1 Port D-SNCPs. Multi-layer Recovery for 1 Port D-SNCP is
supported only if the work path SNC is configured as a non-revertive, restorable SNC. It is not supported
if the work path SNC is configured as a revertive, restorable SNC.
With multi-layer recovery, the Working PU in D-SNCP can be configured as restorable. So if a D-SNCP’s
Working PU experiences a fault, the SNC will switch to its Protect PU and additionally, GMPLS will set up
a restoration path for the Working PU. Therefore, if a subsequent fault occurs on the Protect PU while the
original Working PU is still in a fault state, the traffic will switch back to the restoration route of the
Working PU.
Figure 4-112: Multi-layer Recovery for Revertive PG with Revertive Restorable SNCs on page 4-167
shows how multi-layer recovery works to protect traffic in the case of a revertive 2 Port D-SNCP PG
deployed with a restorable SNC as the Working PU with automatic reversion set up with GMPLS
restoration on the Working PU.

Figure 4-112 Multi-layer Recovery for Revertive PG with Revertive Restorable SNCs

1. If a fault triggers a protection switch on the SNC1 working path, an automatic protection switch routes
traffic to SNC2.
2. The faulted SNC1 is restored to the RestoredRoute. At this point, the PG WTR shall not be started as
the working SNC1 is still on the RestoredRoute. If there is a fault on SNC2, traffic is switched back to
SNC1 on the RestoredRoute. Otherwise, if there is no fault on SNC2, the DTN waits for the fault to
clear on SNC1’s WorkingRoute.


3. The fault is cleared on the WorkingRoute of SNC1. WTR is started for the restorable SNC1 for auto-
reversion. At the expiry of WTR, SNC1 is successfully reverted back to the WorkingRoute.
4. As soon as SNC1 is restored back to its WorkingRoute, the WTR on the protection group is started.
Upon the expiry of the protection group WTR timer, traffic is reverted back to its original working path
of SNC1.
Figure 4-113: Multi-layer Recovery with Revertive PG with Non-revertive Restorable SNC on page 4-168
shows how multi-layer recovery works to protect traffic in the case of a revertive 2 Port D-SNCP PG
deployed with a restorable SNC as the Working PU with no automatic reversion.

Figure 4-113 Multi-layer Recovery with Revertive PG with Non-revertive Restorable SNC

1. If a fault triggers a protection switch on the SNC1 working path, an automatic protection switch routes
traffic to SNC2.
2. The faulted SNC1 is restored to the RestoredRoute. At this point, the PG WTR is started.
3. Upon the expiry of the protection group WTR timer, traffic is reverted back to SNC1 using SNC1’s
RestoredRoute.
Figure 4-114: 1 Port D-SNCP with Restorable SNCs on page 4-168 shows how multi-layer recovery
works to protect traffic in the case of a 1 Port D-SNCP deployed with restorable SNCs.

Figure 4-114 1 Port D-SNCP with Restorable SNCs


1. If a fault triggers a protection switch on the working path of the SNC Working route, an automatic
protection switch routes traffic to the SNC Protect route.
2. The faulted SNC Work is restored to the restored path of the SNC Work, and the original working path
of the SNC Work is deleted as part of GMPLS restoration. At this point, the PG WTR is started.
3. Upon the expiry of the protection group WTR timer, traffic is reverted back to the newly-routed SNC
Work restored path.
The following guidelines apply to multi-layer recovery:
■ GMPLS restoration is not supported on the SNC corresponding to the Protect PU of the protection
group.
■ GMPLS restorable SNCs with auto reversion are supported only for 2 Port D-SNCP protection.
■ For 2 Port D-SNCP, GMPLS restoration/reversion is supported only on the SNC corresponding to
the Working PU of the D-SNCP protection group.
■ For 1 Port D-SNCP, GMPLS restoration is supported only on the SNC corresponding to the
Working PU of the D-SNCP protection group. Reversion is not supported for 1 Port D-SNCP; if
there is a revertive restorable SNC already provisioned, it cannot be included in the 1 Port D-SNCP
for multi-layer recovery.
■ Independent WTR timers can be configured on the 2 Port D-SNCP protection group and the
revertive SNC.
■ Multi-layer recovery is configurable on all the client interfaces. However, it is not supported for
Layer 1 OPN applications.
■ Manual switch operation is supported for both Working and Protect PUs for 1 Port D-SNCP.
■ The switch time is not affected due to the multi-layer recovery schemes and will be completed
within 50ms.
■ For D-SNCP using the TOM-40G-SR4, TOM-100G-L10X, TOM-100G-S10X, or TOM-100G-SR10,
if the tributary disable action is set to Laser Off, protection switch times can exceed 50ms. For
these 100GbE or 40GbE TOMs, it is recommended to set the tributary disable action to Insert Idle
Signal. (See Tributary Disable Action on page 3-41.)


Dual chassis Y-cable protection (DC-YCP)


The XT-3600 chassis provides support for dual chassis Y-cable protection with the following features:
■ Supports SCG and line (band) protection
■ Supports provisioning protected services across two XT-3600 chassis for client/Trib or Network/
Line failure
■ Supports a port rate of 100G with the payload types 100GbE and ODU4
■ Supports both revertive and non-revertive switching
■ Supports creation of Protection group with work and protect PUs across different XT-3600 chassis
■ Supports protection only between two chassis that are "paired".
The user has to pair the chassis first and then create protected services between the paired
chassis.
■ Supports protection on any two ports across a paired chassis. This means that the system allows
the configuration of DC-YCP between any two ports of the paired chassis.

Note: In case of XT-3600 with power saving mode enabled, DC-YCP cannot be configured
between the turned off ports.

In the following figure, chassis A and chassis B, when paired, can have DC-YCP between them; similarly
for chassis C and D. One chassis can be paired with only one other chassis, with any combination of
node controller or shelf controller supported. For example, if one node has an XTC as node controller
with an XT-3600 shelf controller, and another node has an XT-3600 as both node controller and shelf
controller, the XT-3600 chassis across the two nodes can be paired and DC-YCP can be configured
between them.
In the figure, for DC-YCP1, Chassis A:P1 is the working port and Chassis B:P1 is the protect port.
Similarly, for DC-YCP2, Chassis B:P3 is the working port and Chassis A:P2 is the protect port.


Figure 4-115 Configuration showing DC-YCP between any two ports of the paired chassis

■ Supports any chassis pairing on a multi-chassis node. This means that the system provides
flexibility to pair any XT-3600 chassis with another XT-3600 chassis within the same multi-chassis
node.
■ Supports provisioning DC-YCP across Hybrid multi-chassis. This means that the system supports
the DC-YCP feature across different types of chassis in a hybrid multi-chassis node. The DC-YCP
can be configured between any of the CX-10E and CX-100E chassis belonging to the same multi-
chassis node. However, the client types/payload should be the same for the two PUs belonging to
different chassis.

Preconditions to be followed in DC-YCP


There are certain preconditions to be followed when creating a DC-YCP or pairing of two chassis. The
following list describes these preconditions.
Preconditions for creating DC-YCP:
■ Ensure that the chassis are paired
■ Ensure that TOMs have the same service type
■ Ensure that a combination of local and remote SNCs is not created
■ It is recommended to create services of the same payload type and then create a DC-YCP
between two paired chassis
■ In case of DC-YCP with GbE services, the FEC mode diagnostics setting should be the same on
both XT-3600 chassis. For example, the FEC mode should be set to Enabled on both or to Disabled on both.
Preconditions for chassis pairing:


■ The chassis to be paired should not be part of another paired group


■ The pairing of chassis is allowed only if both chassis belong to the same chassis family
■ The chassis to be paired are connected by a Node Controller Cable (NCC)

DC-YCP protection switching


The DC-YCP switching can be triggered on any of the following path failures:
■ Client failure
■ Bi-directional fiber-cut
■ Uni-directional fiber-cut

Note: The direction of protection switching (PSDIRN in TL1) is not indicated for XT-3600.

DC-YCP switching in client failure


The following Figure shows a scenario of protection switching upon detecting a client failure.


Figure 4-116 DC-YCP switching upon detecting a client failure

The following sequence of events/actions are triggered when a client failure is detected and protection
switching is being initiated.
1. The transmit lasers on chassis A towards CPE1 and on chassis X towards CPE2 are ON. A client
failure such as LOSYNC is detected on chassis A.
2. Upon detection of LOSYNC, the system initiates a protection switch on DC-YCP between chassis A
and chassis B. The transmit laser on chassis B is turned ON towards CPE1 and the transmit laser on
chassis A is shut down.
3. Protection switching for client-side faults takes place when the FACRXPSTRIG (in TL1) or Protection
Switch for Client Rx fault (in GNM/DNA) parameter is enabled; it is enabled by default. The user can
disable this parameter, in which case protection switching for client-side faults does not take place
(see the sketch after this sequence).


4. A replacement signal (AIS in this case) is sent downstream towards chassis X.


5. Upon detection of AIS from the network, the system performs a protection switch on DC-YCP between
chassis X and chassis Y. The transmit laser on chassis Y towards CPE2 is turned ON and the
transmit laser on chassis X is shut down.
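The client-fault trigger described in step 3 above can be sketched as a simple decision. The fragment below is illustrative only; the dc_ycp object and its methods are hypothetical, and the default value of the trigger parameter mirrors the text (enabled by default).

# Minimal sketch (not Infinera code) of the client-fault switching decision:
# the Protection Switch for Client Rx fault trigger (FACRXPSTRIG in TL1) is
# enabled by default and can be disabled by the user.
def handle_client_fault(dc_ycp, fault, client_rx_switch_trigger_enabled=True):
    """Initiate a DC-YCP protection switch for a client-side fault such as LOSYNC."""
    if not client_rx_switch_trigger_enabled:
        return False                       # client-side faults do not trigger switching
    dc_ycp.switch_to_protect()             # turn on the protect chassis transmit laser
    dc_ycp.shut_down_working_laser()       # shut down the working chassis transmit laser
    dc_ycp.send_replacement_signal("AIS")  # send AIS downstream towards the far end
    return True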

DC-YCP switching upon Bidirectional network failure


The following Figure shows the protection switching upon bidirectional network failure such as fiber cut.

Figure 4-117 DC-YCP switching upon detecting a Bidirectional fibercut

The following sequence of events/actions are triggered when a bidirectional network failure is detected
and protection switching is being initiated.
1. The transmit lasers on chassis A towards CPE1 and on chassis X towards CPE2 are ON. A
bidirectional failure (fibercut) is detected on the datapath between chassis A to chassis X.
2. An OLOS condition is detected on chassis A and chassis X.


3. Upon detection of the OLOS condition, the system initiates a protection switch on DC-YCP at each end.
The transmit laser on chassis B is turned ON towards CPE1 and the transmit laser on chassis A is
shut down.
4. Similarly, the system performs a protection switch on DC-YCP between chassis X and chassis Y. The
transmit laser on chassis Y towards CPE2 is turned ON and the transmit laser on chassis X is
shut down.

DC-YCP switching upon Unidirectional network failure


The following Figure shows the protection switching upon unidirectional network failure such as fiber cut.

Figure 4-118 DC-YCP switching upon detecting a unidirectional fibercut

The following sequence of events/actions are triggered when a unidirectional network failure is detected
and the protection switching is being initiated.
1. The transmit lasers on chassis A towards CPE1 and on chassis X towards CPE2 are ON. A
unidirectional failure (fibercut) is detected on the datapath between chassis A to chassis X.
2. An OLOS condition is detected on chassis X.


3. Upon detection of OLOS, the system performs a protection switch on DC-YCP between chassis X and
chassis Y. The transmit laser on chassis Y towards CPE2 is turned ON and the transmit laser on chassis
X is shut down.



CHAPTER 5

Performance Monitoring and Management

IQ NOS provides extensive performance monitoring (PM) to provide early detection of service
degradation before a service outage occurs. The performance monitoring capabilities allow users to
proactively detect problems and correct them before end-user complaints are registered. Performance
monitoring is also needed to ensure contractual Service Level Agreements between the customer and the
end user.
IQ NOS provides performance monitoring functions in compliance with GR-820. The following features
are supported:

Note: Please see the Infinera GNM Performance Management Guide for detailed information on PM
data supported on Infinera nodes.

■ Extensive performance data collection at every node, including optical performance monitoring
data, FEC PM data, native client signal PM data at the tributary ports, Ethernet PM collection for
Ethernet services, and Optical Supervisory Channel (OSC) performance monitoring data.
■ Retrieval of the current and historical 15-minute bins, the current 24-hour bin, and real-time bins for
Regenerator Section - Unavailable Seconds (RS-UAS) in both the receive and transmit directions.
The monitoring of RS-UAS is disabled by default, and can be enabled or disabled by the user for
each SDH facility. TCA/TCE are supported.
■ Comprehensive PM data collection functions, including,


□ Real-time PM data collection for real-time troubleshooting (see Real-time PM Data Collection
on page 5-3)
□ Historical PM data collection for service quality trend analysis (see Historical PM Data
Collection on page 5-3)
□ Threshold crossing notifications for early detection of degradation in service quality (see PM
Thresholding on page 5-4)
□ Invalid data flag indicator per managed object per period (see Suspect Interval Marking on
page 5-5)
□ Performance monitoring event logging for troubleshooting (see PM Logging on page 5-6)
■ Flexible PM data reporting and customization options to meet diverse customer needs, including,
□ Automatic and periodic transfer of PM data in CSV format, enabling customers to integrate
with their management applications (see PM Data Export on page 5-5)
□ Customization of PM data collection (see PM Data Configuration on page 5-6)
■ Via the DNA, display of network-wide PM data for any selected circuit (see the DNA documentation
set for more information)
■ Network Latency Measurement for ODUk paths (see DTN-X Network Latency Measurement on
page 5-7)


PM Data Collection
IQ NOS collects digital PM data and optical PM data.
■ For the optical PM data, IQ NOS utilizes gauges to collect the PM data. The gauge attribute type,
as defined in the ITU X.721 specification, indicates the current value of the PM parameter and is of
type float. The gauge value may increase or decrease by an arbitrary amount and it does not wrap
around. It is a read-only attribute.
■ For the digital PM data, IQ NOS uses counters to collect the PM data. The counter value is a non-
negative integer that is set to zero at the beginning of every collection interval. The counter size is
selected in such a way that the counter does not roll over within the collection period.
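The difference between the two collection types can be sketched as follows. This Python fragment is illustrative only; the class names are hypothetical and simply model the gauge and counter behavior described above.

# Minimal sketch (not Infinera code) of the two PM collection types: an optical
# gauge (float value that may rise or fall, no wrap) and a digital counter
# (non-negative, reset to zero at each collection interval).
class OpticalGauge:
    def __init__(self):
        self.value = 0.0                       # read-only from the management side

    def sample(self, hardware_reading):
        self.value = float(hardware_reading)   # may increase or decrease arbitrarily

class DigitalCounter:
    def __init__(self):
        self.count = 0

    def increment(self, errors=1):
        self.count += errors                   # sized so it does not roll over in the interval

    def start_new_interval(self):
        self.count = 0                         # reset at the beginning of every interval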

Real-time PM Data Collection


IQ NOS supports real-time PM data retrieval which is useful for real-time troubleshooting. Real-time data
can be retrieved by the management applications at any time.
IQ NOS provides real-time PM data for some of the optical and digital PM parameters. The real-time
optical PM data indicates the state of the hardware (value of the PM parameter) at the time of its retrieval.
The real-time digital PM data is essentially the value of the digital PM counter at the time of its retrieval.
The value of the counters will roll over after the upper bound is reached.

Historical PM Data Collection


In addition to the real-time PM data, IQ NOS provides historical PM data archived locally in the network
element enabling service quality trend analysis. IQ NOS collects the historical PM data at the following
intervals:
■ 15 minutes
■ 24 hours
IQ NOS maintains the following historical counters/gauges:
■ Current 15-minute and ninety-six previous 15-minute counters/gauges
■ Current 24-hour and seven previous 24-hour counters/gauges
The historical PM data is not asynchronously reported to the management applications. It must be
retrieved by the users through management applications.
Note that the historical counters/gauges are supported only for some PM parameters, but not for all.
The historical (current and previous) optical PM data is derived by taking several snapshots of the
hardware status. In other words, the optical PM parameter value is read from the hardware every five
seconds within a PM period, and minimum, maximum and average values are derived from all the
readings. The duration of the reading itself is one second. Thus the historical optical PM data is the
minimum, maximum and average of the PM parameter values within a given period.
The historical digital PM data is essentially the value of the counter at the end of the given PM period.
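The derivation of the historical optical values can be sketched with a few lines of Python. The fragment is illustrative only; it assumes a list of readings sampled every five seconds within the bin, as described above.

# Minimal sketch (not Infinera code) of deriving historical optical PM values:
# the parameter is read from hardware every five seconds within the PM period and
# the minimum, maximum and average of those readings are reported for the bin.
def summarize_optical_bin(readings):
    """readings: values sampled every 5 seconds within a 15-minute or 24-hour bin."""
    if not readings:
        return None
    return {
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
    }

# Example: three 5-second samples of an optical power gauge (dBm).
print(summarize_optical_bin([-12.1, -12.4, -11.9]))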


PM Thresholding
PM thresholding provides an early detection of faults before significant effects are felt by the end users.
Degradation of service can be detected by monitoring error rates. Threshold mechanisms on counters
and gauges allow the monitoring of such trends to provide a warning to users when the monitored value
exceeds, or is outside the range of, the configured thresholds.
IQ NOS supports thresholding for both optical PM gauges and digital PM counters. During the PM period,
if the current value of a performance monitoring parameter reaches or exceeds the corresponding configured
threshold value, threshold crossing notifications are sent to the management applications.
■ Optical PM Thresholding
IQ NOS performs thresholding on some optical PM parameters by utilizing high and low threshold
values. Note that the thresholds are configurable for some PM parameters; for others, the system
utilizes pre-defined threshold values. An alarm is reported when the measured value of an optical
PM parameter is outside the range of its configured threshold values. The alarms are automatically
cleared by IQ NOS when the recorded value of the optical PM parameter is within the acceptable
range.
■ Digital PM Thresholding
IQ NOS performs thresholding on some digital PM data utilizing high threshold values which are
user-configurable. The Threshold Crossing Alert (TCA) is reported when a PM counter, within a
collection period, exceeds the corresponding threshold value. When a threshold is crossed, IQ
NOS continues to count the errors during that accumulation period. TCAs are transient in nature
and are reported as events which are logged in the event log as described in Event Log on page
2-26. The TCAs do not have corresponding clearing events since the PM counter is reset at the
beginning of each period. (A sketch of both thresholding behaviors follows below.)
Note that PM thresholding is supported for some of the PM parameters, but not for all.
When a PM threshold value is modified, the new threshold will be used for generating associated TCAs in
the next complete PM interval. The current PM interval will not use the new threshold. This means that:
■ If TCA reporting is enabled after a PM threshold is modified to a value lower than the current PM
count, TCAs are not raised in the current PM interval. The new threshold will be used only in the
next complete PM interval.
■ If TCA reporting is enabled before a PM threshold is modified to a value lower than the current PM
count, TCAs are raised in the current PM interval.
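The two behaviors can be summarized with the sketch below (assumed helper names, not the IQ NOS implementation): gauge alarms set while the value is outside the configured range and clear automatically when it returns, while counter TCAs are raised at most once per interval and have no clearing event because the counter resets at the start of the next period.

```python
def gauge_alarm_active(value, low, high):
    """Optical PM gauge: alarm is active while the value is outside [low, high]."""
    return value < low or value > high          # clears automatically when back in range


def counter_tca_due(count, high, tca_already_sent):
    """Digital PM counter: at most one TCA per collection period."""
    # The counter keeps accumulating after the crossing; no clearing event is sent
    # because the counter is reset at the beginning of the next period.
    return count >= high and not tca_already_sent
```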

Customizable Severity Levels for TCAs and TCCs


IQ NOS supports customizable severity levels for TCAs and TCCs via the Alarm Severity Profile Setting
(ASPS) feature. See Alarm Severity Profile Setting (ASPS) on page 2-9 for more information on setting
TCA/TCC severities.

Enhanced PM Reporting for ORMs


The network element supports enhanced PM reporting for ORMs, a feature which is enabled/disabled
independently for each ORM. When enabled, the network element will include the “(OLOS)” qualifier
alongside the ORM’s real-time OTS OPR and C-band OPR value when the value is below the OLOS
alarm threshold value. The default OLOS alarm threshold for the ORM-CXH1 is -13 dBm. For all other
ORMs, the threshold is -12 dBm.
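A small illustration of this reporting rule follows (the function name is an assumption; the threshold values are taken from the text above):

```python
def format_opr(opr_dbm, orm_type):
    """Append the "(OLOS)" qualifier when the reading is below the OLOS threshold."""
    threshold = -13.0 if orm_type == "ORM-CXH1" else -12.0
    qualifier = " (OLOS)" if opr_dbm < threshold else ""
    return f"{opr_dbm:.1f} dBm{qualifier}"

# Example: format_opr(-14.2, "ORM-CXH1") returns "-14.2 dBm (OLOS)"
```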

Suspect Interval Marking


IQ NOS marks the PM data for a given managed object collected in 15-minute and 24-hour periods as
suspect or invalid by maintaining an invalid data flag (IDF). The IDF is maintained per managed object
per period basis. The IDF is retrievable by management applications and is used to communicate to the
user the validity of the collected PM data. The PM data is marked invalid under the following conditions:
■ User resets the PM counter through management applications.
■ User puts the equipment in the locked or maintenance state.
■ User warm resets, cold resets, or physically re-seats the module (e.g., BMM, OAM, ORM, RAM,
GAM, DLM, AXLM-80, TEM, TAM, TOM, etc.)

Note: Warm reset, cold reset, or switchover of a controller module does not mark the PM as invalid
since the other modules continue to collect the PM, and since the controller module collects the PM
from the other modules once the controller module reset is complete.
■ The period of PM data accumulation changes by +/- 10 seconds (e.g., the user changes the date and/or
time during the period).
■ Loss of PM data due to system restart or hardware failure.
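A simplified sketch of the marking logic follows (condition names and the data structure are illustrative assumptions):

```python
SUSPECT_CONDITIONS = {
    "counter_reset",      # user resets the PM counter
    "equipment_locked",   # equipment placed in the locked or maintenance state
    "module_reset",       # warm/cold reset or physical re-seat of the module
    "time_change",        # accumulation period changed by +/- 10 seconds
    "data_loss",          # system restart or hardware failure
}

def mark_suspect(idf_store, managed_object, period, condition):
    """Set the invalid data flag (IDF) for one managed object and PM period."""
    if condition in SUSPECT_CONDITIONS:
        idf_store[(managed_object, period)] = True

# Management applications read idf_store[(mo, period)] to judge data validity.
```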

PM Data Export
Users can export PM data, manually or periodically, as CSV-format flat files to a user-specified external
FTP server. Users can use these flat files to integrate PM data analysis into their management
applications or simply view the PM data through spreadsheet applications. For the PM data flat file
format, see the DNA documentation set.
Users can schedule the TOD (time of day) at which the network element automatically transfers the PM
data to the user-specified server. Users can configure primary and secondary server addresses. If the
data transfer to the primary server fails, the PM data is transferred to the secondary FTP server.
Alternatively, Infinera nodes can be configured to transfer PM data files simultaneously to both the
primary and secondary FTP servers. (Simultaneous transfer requires that both servers are configured
correctly.)
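The primary/secondary fallback behavior can be illustrated with Python's standard ftplib (a sketch only; host names, credentials, and the file name are placeholders, and the node's actual transfer logic is not shown in this guide):

```python
from ftplib import FTP, all_errors

PRIMARY = ("primary.example.com", "pmuser", "secret")      # placeholder servers
SECONDARY = ("secondary.example.com", "pmuser", "secret")

def upload_pm_csv(filename):
    for host, user, password in (PRIMARY, SECONDARY):
        try:
            with FTP(host) as ftp, open(filename, "rb") as fh:
                ftp.login(user, password)
                ftp.storbinary("STOR " + filename, fh)
            return host                        # uploaded; stop after the first success
        except all_errors:
            continue                           # primary failed; try the secondary
    raise RuntimeError("PM data upload failed on both configured servers")
```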
When a compiled file transfer is initiated by the user, the node will first verify the FTP server configuration
before compiling the file. See Verifying FTP Connectivity for Debug, PM, and DB Backup on page 7-19
for more information.


PM Data Configuration
IQ NOS allows users to customize PM data collection on the network element. Users can configure PM
data collection through management applications. IQ NOS supports the following configuration options:
■ Reset the current 15-minute and 24-hour counters at any time per managed object.
■ Change the default threshold values according to the customer’s error monitoring needs.
■ Enable or disable the PM threshold crossing alarm and TCA reporting per attribute per managed
object.
■ Set the severity level of TCA notifications.
■ Configure the frequency of PM flat file uploads to the configured FTP servers.
■ Enable or disable PM data collection per managed object entity.

PM On Software Upgrade and Rollback


IQ NOS does not support PM data conversion on software upgrade or rollback. The user must save all
PM data by uploading it to a server prior to performing a software upgrade or rollback.

PM Logging
As described in Event Log on page 2-26, IQ NOS maintains a wrap-around historical event log that tracks
all changes that occur within the system. Following are some PM related events that are logged in the
event buffer:
■ User changes PM thresholds
■ User resets PM counters
■ Threshold crossing alert (TCA) is generated
■ User configures periodic uploading of PM data to the client machine


DTN-X Network Latency Measurement


The DTN-X supports a PM measurement for the network latency incurred between ODUk connection
termination points within a DTN-X network. This is supported for services with
endpoints on the TIM-5-10GM/TIM-5-10GX of an XTC-10 or XTC-4 chassis (for all service types
supported by the TIM-5-10GM/TIM-5-10GX), and includes measurements between any two peer ODUk
termination points.
The latency measurement is configured on ODUk termination points. By default, the latency
measurement is disabled on the ODUk. A user can configure an ODUk as an initiator or a responder
for the latency measurement. An ODUk configured as the initiator will include a delay measurement (DM)
bit in its overhead and begin measuring the elapsed time until the far-end ODUk configured as a responder
loops back the DM signal. Once the initiating ODUk receives the signal back from the far end, the initiator
ODUk reports the number of frame periods (in microseconds) that have passed, as sketched below.
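The reported value can be converted to time as in the hedged sketch below (the frame period depends on the ODUk rate and is supplied by the caller here; the accuracy notes later in this section still apply):

```python
def latency_microseconds(frame_periods, frame_period_us):
    """Convert the reported number of ODUk frame periods into microseconds."""
    # Per the accuracy note in this section, the result may be off by as much as
    # two frame periods depending on the phase of the transmitted/received frames.
    return frame_periods * frame_period_us
```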
Network latency measuring is supported for both end-to-end ODUk path and for ODUk path segments
within the DTN-X network boundary. The network latency test can be performed in either the facility
direction (towards the client network side) or in the terminal direction (towards the Infinera network side).
The latency values can be retrieved via the following PM parameters. Both real-time and historical data
(15-minute and 1-day) are collected:
■ PSLATFAC—Path segment latency, facility direction
■ PSLATTERM—Path segment latency, terminal direction
■ PELATFAC—Path end-to-end latency, facility direction
■ PELATTERM—Path end-to-end latency, terminal direction
In addition, the ODUk supports high and low thresholding for each latency measurement to support
alarming if the latency values are out of the configured range.
Note the following for the Network Latency Measurement feature:
■ Network latency measurements do not affect service on the ODUk path.
■ For any ODUk path, only one latency measurement can be performed at any given time, due to the
fact that the DM overhead bit is required for the test and can be used for only one latency test at a
time.
■ An ODUk can act as a latency measurement initiator or responder only when the ODUk is
configured for the following service modes:
□ For ODUk in adaptation mode, initiating/responding is supported only in the facility direction
(for both end-to-end ODUk path and for segments within the end-to-end ODUk path).
□ For ODUk in switching mode, initiating/responding is supported only for segments within the
end-to-end ODUk path (in both terminal and facility direction).
□ For ODUk in network wrapper mode (or service mode set to none), initiating/responding in
the terminal direction is supported only for segments within the end-to-end ODUk path.

■ The latency measurement involves filtering (i.e., accepting) of the received DM bit for persistency. If
random bit errors corrupt the DM bit, the acceptance time will be longer and will be accounted for in
the overall latency results.
■ The latency values are measured in units of ODU frames (of the appropriate rate). More
specifically, depending on the phase difference between the transmitted and received ODU frames, an
error of as much as two ODU frames is possible.
■ The delay measurement can be inaccurate during periods of errors in the network; large
measurement values are possible.

Note: Latency measurement and configuration of latency thresholds are not supported on
tributary ODUk on the XT-3600.


gRPC PM Telemetry
General Remote Procedure Calls (gRPC) is the management interface used to collect telemetry PM
data from a network element. The gRPC telemetry streaming feature provides network monitoring
functions in which data, such as Performance Monitoring (PM), alarms, or events, is streamed
continuously from the device at a prescribed interval.
The gRPC transport method uses HTTP/2 bidirectional streaming between the gRPC client (the
collector) and the gRPC server (the device); the device in this case is a network element. A gRPC
session is a single connection from the gRPC client to the gRPC server.

Figure 5-1 gRPC Client/Server

gRPC PM Telemetry is supported on XT-3300 and MTC-6/MTC-9 chassis. The PM reporting time interval
is configurable and PM data is reported and streamed to the subscribed gRPC client at the configured
interval.
By default, gRPC is disabled; it can be enabled via the NETCONF, RESTCONF, TL1, and CLI interfaces.

Infinera Corporation Overview Guide Release 20.0 V001

Infinera Proprietary and Confidential


5-10 gRPC PM Telemetry

Overview Guide Release 20.0 V001 Infinera Corporation

Infinera Proprietary and Confidential


CHAPTER 6

Security and Access Management

The IQ NOS security and access management features comply with Telcordia GR-815-CORE standard.
The supported features include:
■ User identification to indicate the logged in user or process (see User Identification on page 6-3).
■ User authentication to verify and validate the authenticity of the logged in user (see Authentication
on page 6-4).
■ User access control to prevent intrusion (see Access Control on page 6-5).
■ Resource access control by defining multiple access privileges (see Authorization on page 6-6).
■ Security audit logs to monitor unauthorized activities (see Security Audit Log on page 6-7).
■ Security functions and parameters to implement site-specific security policies (see Security
Administration on page 6-8).
■ Secure Shell (SSH v2) protection of management traffic (see Secure Shell (SSHv2) and Secure
FTP (SFTP) on page 6-9).
■ Secure Copy Protocol for upload/download of PM data, debug files, configuration database, and
software images (see Secure Copy Protocol (SCP) on page 6-11)
■ RADIUS enabled storage of user name and password information in a centralized location (see
Remote Authentication Dial-In User Service (RADIUS) on page 6-12).
■ The Terminal Access Controller Access Control System Plus (TACACS+), a security protocol
similar to RADIUS which allows remote authentication (see Terminal Access Controller Access-
Control System Plus (TACACS+) on page 6-14).

■ IP Security via the Encapsulating Security Payload (ESP) protocol in order to protect Optical
Supervisory Channel (OSC) control links in an Infinera network (see IP Security over OSC on page
6-15).
■ Media Access Control Security (MACSec) to provide point-to-point security on Ethernet links
between the nodes (see Media Access Control Security (MACSec) on page 6-17).
■ Serial port disabling via management interfaces in order to prevent unauthorized access from the
node site (see Serial Port Disabling on page 6-27).
■ DCN port block for XT(S)-3300 network elements (see DCN Port Block for Layer 3 Traffic on page
6-29).
■ ACLI session disabling to prevent unauthorized access (see ACLI Session Disabling on page
6-30).
■ Verified software image to prevent systems from booting up with malicious software (see Verified
software image on page 6-31).
■ Signed images to provide integrity and authenticity of Infinera software (see Signed Images on page
6-32).

User Identification
Each network element user is assigned a unique user ID. The user ID is case-sensitive and contains 4 to
10 alphanumeric characters. The user specifies this ID (referred to as user login ID) to log into the
network element.
By default, IQ NOS creates three user accounts with the following user login IDs:
■ secadmin
An account with the security administrator privilege enabled. The default password is Infinera1 and
the user is required to change the password at first login. This user login ID is used for initial login
to the network element.
■ netadmin
An account with the network administrator privilege enabled. The default password is Infinera1 and
the user is required to change the password at first login. Additionally, this account is disabled by
default. It must be enabled by the user with security administrator privilege through the TL1
Interface or GNM. This account is used to turn up the network element.
■ emsadmin
An account with all privileges enabled. The default password is Infinera1. This account is disabled
by default. It must be enabled by the user with security administrator privilege through the TL1
Interface or GNM. The DNA server communicates with the network element using this account,
referred to as the DNA account, when it is started without requiring additional configuration. Users
can create additional DNA accounts which the DNA server can use to connect to the network
element. These accounts must have the DNA access capability enabled during creation.
A single user can open multiple sessions. IQ NOS maintains a list of all current active sessions.

Note: IQ NOS supports a maximum of 30 active user sessions at any given time. All login attempts
beyond 30 sessions will be denied and a warning message is displayed.

Authentication
IQ NOS supports standards-based authentication features. These features ensure that only authorized
users log into the network element through management interfaces. IQ NOS also supports remote and
centralized RADIUS for user authentication (see Remote Authentication Dial-In User Service (RADIUS)
on page 6-12 for more information).
Each time the user logs in, the user must enter a user ID and password. For the initial login, the user
specifies the default password set by the security administrator. The user must then create a new
password based on the following requirements.
■ The password must contain:
□ 8 to 32 alphanumeric characters
□ At least one capital letter
□ At least one numeric character
□ At least one of the following special characters (no other special characters are allowed; see
the sketch after this list):
!@#$%^()_+|~{}[]?–
■ The password must not contain:
□ The associated user ID
□ Blank spaces
■ The passwords are case-sensitive and must be entered exactly as specified.
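The rules above can be summarized with the following illustrative validator (an assumption, not the IQ NOS implementation; the trailing dash in the special-character list is treated as a plain hyphen here):

```python
SPECIALS = set("!@#$%^()_+|~{}[]?-")
ALLOWED = set("abcdefghijklmnopqrstuvwxyz"
              "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
              "0123456789") | SPECIALS

def password_ok(password, user_id):
    return (8 <= len(password) <= 32
            and all(ch in ALLOWED for ch in password)   # no other characters
            and any(ch.isupper() for ch in password)    # at least one capital letter
            and any(ch.isdigit() for ch in password)    # at least one numeric character
            and any(ch in SPECIALS for ch in password)  # at least one special character
            and " " not in password                     # no blank spaces
            and user_id not in password)                # must not contain the user ID
```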
The password is stored in the network element database in a one-way encrypted form.
Password rotation is implemented to prevent users from re-using the same password: users are
forced to choose passwords different from previously used passwords. The number of history passwords
stored is configurable.
Infinera nodes support a configurable network element password digest type. Infinera nodes support the
following password digest schemes: MD5, SHA-256, SHA-384, and SHA-512. When the password digest
type is changed, the following will be the behavior observed on the system:
■ All password histories housed in the configuration database are reset.
■ The default password for all users is reset to the default user password that is specified by the
admin user at the time that the password digest type is configured.
■ All users are prompted upon next login to change the password.
The node will notify all currently logged in users (GNM, DNA, TL1) about the change in the password
digest type via notification; existing sessions are not terminated.
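A minimal sketch of one-way storage with a configurable digest type is shown below, using Python's hashlib with the scheme names listed above; the salting shown is an assumption, since the node's exact storage format is not described in this guide.

```python
import hashlib
import os

def digest_password(password, scheme="sha512"):     # "md5", "sha256", "sha384", "sha512"
    salt = os.urandom(16)                            # assumed per-user salt
    h = hashlib.new(scheme, salt + password.encode("utf-8"))
    return salt.hex() + ":" + h.hexdigest()          # stored one-way form
```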

Access Control
In addition to user login ID validation and password authentication, IQ NOS supports access control
features to ensure that the session requester is trusted, such as:
■ Detection of unsuccessful user logins; if the number of unsuccessful login attempts exceeds the configured
limit, the session is terminated and a security event is logged in the security audit
log.
■ User session is automatically terminated when the cable connecting the user computer and the
network element is physically removed. The user must follow the regular login procedure after the
cable is reconnected.
■ The activity of each user session is monitored. If, for a configurable period of time, no data is
exchanged between the user and the network element, the user session is timed-out and the
session is automatically terminated.

Authorization
Multiple access privileges are defined to restrict user access to resources. Each access privilege allows a
specific set of actions to be performed. One or more access privileges is assigned to each user account.
For the description of the managed objects, see Managed Objects on page 3-3.
The levels of access privileges are:
■ Monitoring Access (MA)—Allows the user to monitor the network element; cannot modify anything
on the network element (read-only privilege). The Monitoring Access is provided to all users by
default.
■ Security Administrator (SA)—Allows the user to perform network element security management
and administration related tasks.
■ Network Administrator (NA)—Allows the user to monitor the network element, manage equipment,
turn up the network element, provision services, and administer various network-related functions, such
as Auto-discovery and topology.
■ Network Engineer (NE)—Allows the user to monitor the network element and manage equipment.
■ Provisioning (PR)—Allows the user to monitor the network element, configure facility endpoints,
and provision services.
■ Turn-up and Test (TT)—Allows the user to monitor, turn-up, and troubleshoot the network element
and fix network problems.
■ Restricted Access (RA)—Allows the user to disable Automatic Laser Shutdown (ALS) operation. A
user may not disable the ALS feature unless the user’s account is configured with “Restricted
Access” privileges.
For the specific actions allowed for each access privilege group, refer to the GNM Security Management
Guide or the DTN and DTN-X TL1 User Guide .

Security Audit Log


IQ NOS maintains an independent and persistent circular audit log that records all system configuration
activities and security related events, such as unauthorized attempts and excessive authentication
attempts. The audit log provides traceability of all system-impacting changes.
The audit logs include system configuration activities and security related activities performed by the user.
These activities include:
■ Creating and deleting managed objects
■ Updating an attribute of the managed object
■ Invalid login attempts
■ Unauthorized attempts to access resources due to restrictions imposed by the user access
privilege
■ Updates to the user's security parameters, such as the password, user access privilege, password
aging time, etc.
■ Updates to the network element security parameters such as maximum number of invalid login
attempts, and inactivity time-out interval
Each audit log entry includes the following minimum set of information:
■ User login ID of the user who performed the action, along with terminal, port and network address
information
■ Date and Time of the operation
■ Action performed
■ Instance of the managed object on which the action was performed
■ Result of the operation performed
The audit logs are maintained in a circular buffer and hence the oldest records are overwritten. Audit logs
are preserved across system reboots. Although users cannot modify audit logs, users with any access
privilege can view the audit logs through the management applications.

Security Administration
IQ NOS defines a set of security administration functions and parameters that are used to implement site-
specific policies. Security administration can be performed only by users with security administrator
privilege. The supported features include:
■ View all users currently logged on
■ Disable and enable a user account (this operation is allowed only when the user is not logged on)
■ Modify user account parameters, including access privilege and password expiry time
■ Delete a user account and its attributes, including password
■ Reset any user password to system default password
■ Set the password change policy to allow users to change their own passwords, or to require all
password changes be performed by a security administrator
■ Specify whether or not new users need to change the account password upon first login
■ Monitor security audit logs to detect unauthorized access
■ Monitor the security alarms and events raised by the network element and take appropriate actions
■ Configure system-wide security administration parameters:
□ Default password
□ Inactivity time-out period
□ Maximum number of invalid login attempts allowed
□ Number of history passwords
□ Advisory warning message displayed to the user after successful login to the network
element
■ Perform network-wide user administration, including:
□ View user accounts across the managed network
□ Add new user accounts to multiple nodes, in a single operation
□ Update user account information on multiple nodes, in a single operation
□ View and modify attributes common to multiple user accounts, in a single operation
□ Clone multiple user accounts (with the same privileges, associations and permissions as that
of an existing user account) to one or more network elements, or to all network elements
within an administrative domain
□ Export multiple user account information
□ Import multiple user account information that was previously exported
□ Delete multiple user accounts from one or more network elements, in a single operation

Secure Shell (SSHv2) and Secure FTP (SFTP)


IQ NOS provides the option to secure management plane communications using the Secure Shell version
2 (SSHv2) and Secure File Transfer Protocol (SFTP) protocols on a per-network element basis. The
SFTP port number is configurable (possible values are 1-65535; the default port number is 22).
This feature provides a secure, encrypted channel between the management interfaces (GNM, DNA, and
TL1 clients) and any network element, thereby protecting the system from the following security threats:
■ System integrity threats:
□ Unauthorized manipulation of system configuration files or system database files
□ Unauthorized manipulation of data in transit (communication integrity)
■ Confidentiality threats:
□ Eavesdropping
□ Session recording and disclosure
□ Privacy violations
□ Snooping
□ Password hacking
■ Service threats:
□ Session hi-jacking
□ Theft of service

Note: For maximum management traffic protection, configure the network element and DCN ports
behind a firewall.

Figure 6-1 SSHv2-secured Management

The IQ NOS implementation of SSHv2 is based on the IETF SSHv2 OpenSSH Toolkit solution. It
provides the following types of communication protection:
■ Data Encryption—Symmetric data encryption is based on the Advanced Encryption Standard
(AES) defined by NIST. A 256 bit key length is supported.

Note: A user with Security Administrator privileges can issue a command via GNM, DNA, and/or TL1
to regenerate SSHv2 keys. When this command is invoked, the node will create new SSH keys. Note
that this command applies to public/private SSH key pairs, and that this command will terminate all
existing SSH sessions, including SFTP sessions and transfers.

■ Data Integrity—The network element supports the Message Authentication Code (MAC) feature of
SSHv2 to ensure data integrity between the management client and the network element. The
hmac-sha1 algorithm with a 256-bit key is supported.

Note: The following SSHv2 Clients are supported by the Infinera node:
■ PuTTY
■ OpenSSH Client
■ F-Secure SSH Client
■ Tera Term Pro

Users with the secadmin privilege can selectively enable SSHv2-based security on a per-node basis for
each of the management interface ports (that is, to protect communications via TL1, Telnet, file transfer,
or XML). By default, enhanced security is not enabled.

Note: The SSH enhanced security feature may be enabled at any time. However, if the enhanced
security flag is updated during run-time, existing sessions continue to function in their earlier mode. Any
newly established sessions will operate according to the new security setting. You must perform a
warm or cold reboot of the active controller module in order to apply the security changes to existing
sessions.

Network elements functioning as Gateway Network Elements (GNEs) or Subtending Network Elements
(SNEs) also support the SSHv2 enhanced security feature. Traffic passed by a GNE to clients (for
example, traffic coming from SNEs) observes the security settings on the GNE. If necessary, the GNE
performs the required encryption/decryption on behalf of an SNE.

Secure Copy Protocol (SCP)


Infinera nodes support the Secure Copy Protocol (SCP) network protocol for upload/download of PM
data, debug files, configuration database files, and software images. SCP is a system-wide setting that
is disabled by default and can be enabled via the node’s default security profile settings. When SCP is
enabled, the node will use SCP instead of FTP/SFTP for upload/download of all applicable files, for both
scheduled and manual file transfers. The SCP setting does not affect other security settings, nor does it
affect other configuration settings for file transfer, such as primary and secondary server IP addresses,
user name/password, file path settings, etc.
Note that the node acts as SCP client, and any upload or download request will be initiated by the node:
■ File upload is initiated by the node and the file is uploaded (pushed) out to the remote SCP server
from the node.
■ File download is initiated from the node and files are downloaded (pulled) from an external/remote
SCP server to the node.
Because the Infinera node acts as the SCP client, the remote SCP server must be configured
correctly: the IP address and file path must be valid, the server port must be open, and SCP must be enabled
on the SCP server.

Remote Authentication Dial-In User Service (RADIUS)


Each network element can be configured to use its local settings for user authentication, or to use the
Remote Authentication Dial-In User Service (RADIUS) capability. RADIUS is a standard for remote
authentication and storage of user name and password information that centralizes user account
maintenance on a RADIUS server. The network element can be configured to use any of three redundant
RADIUS servers to authenticate the username and password. The redundant RADIUS servers are polled
for authentication requests in a round-robin fashion based on the availability of each server; the network
element attempts to access the next server only if the previous server is either busy or unavailable. If all
attempts to poll the servers are unsuccessful, authentication is done by the local network element (if so
configured), as sketched below.
RADIUS servers can be configured with IPv6 or IPv4 addresses.
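The authentication order can be sketched as follows (helper names and the server client object are assumptions):

```python
def authenticate(user, password, radius_servers, local_check, local_fallback=True):
    """Try each configured RADIUS server in turn, then fall back to local settings."""
    for server in radius_servers:                    # round-robin over the redundant servers
        answer = server.authenticate(user, password) # assumed client method
        if answer in ("accept", "reject"):
            return answer == "accept"                # a server answered; stop here
        # "busy" or "unreachable": move on to the next configured server
    if local_fallback:
        return local_check(user, password)           # local NE user database
    return False
```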

Note: Infinera network elements and DNA inter-operate with FreeRadius version 1.1.0 and may not
be compatible with RADIUS servers that use vendor-specific attributes.

Note: The user account names and passwords on the RADIUS server(s) must comply with the same
rules and constraints for user names and passwords on the DTN (see User Identification on page 6-3
and Authentication on page 6-4 for the requirements for valid user names and passwords on the
DTN). In addition, all user accounts must have a privilege level of “MA” or higher in order to be
compatible with Infinera nodes (see Authorization on page 6-6 for information on privilege levels).

Note: Prior to Release 19.0, the default value for IP address of RADIUS Servers was 0.0.0.0.
However, starting release 19.0, the default value for IP address of RADIUS Server1 will be 0.0.0.1,
RADIUS Server2 will be 0.0.0.2, RADIUS Server3 will be 0.0.0.3. In case of IPv6 being selected, the
default IP address of RADIUS Server1 will be 0100::1, RADIUS Server2 will be 0100::2, RADIUS
Server3 will be 0100::3. During upgrade to release 19.0, the previous default of 0.0.0.0 is auto
migrated to the new default values.

An Infinera network element can be configured to authenticate users according to the local settings or via
the configured RADIUS servers. In addition, the network element can be configured to authenticate users
first according to the RADIUS settings, and then according to the local settings on the network element if
no RADIUS server can be contacted. Figure 6-2: Infinera Network with RADIUS on page 6-13 shows an
example Infinera network with redundant RADIUS servers.

Figure 6-2 Infinera Network with RADIUS

Terminal Access Controller Access-Control System Plus (TACACS+)
TACACS+ authentication and authorization are supported on IQ NOS network elements installed with IQ
NOS R16.3 and above.
The Terminal Access Controller Access Control System Plus (TACACS+) is a security protocol similar to
RADIUS which allows remote authentication. The TACACS+ protocol is the latest generation of TACACS.
TACACS is a simple UDP based access control protocol originally developed by BBN for the MILNET.
TACACS+ is a CISCO designed extension to TACACS. TACACS+ provides access control for routers,
network access servers and other networked computing devices through one or more centralized servers.
TACACS+ provides separate authentication and authorization services.
TACACS+ separates the functions of authentication, authorization, and accounting, and encrypts all
traffic between the NAS and the daemon. It allows for arbitrary-length and arbitrary-content authentication
exchanges, which allows any authentication mechanism to be utilized with TACACS+ clients. It is
extensible to provide for site customization and future development features, and uses TCP to ensure
reliable delivery. The protocol allows the TACACS+ client to request very fine-grained access control and
allows the daemon to respond to each component of that request. The daemon should listen at port 49,
which is the "LOGIN" port assigned for the TACACS protocol. This port is reserved in the assigned
numbers RFC for both UDP and TCP. Current TACACS and extended TACACS implementations use
port 49.
The system allows both TACACS+ and RADIUS servers to co-exist on the network element. For example,
the network element can use a RADIUS server for authentication and TACACS+ for authorization. Users
are allowed to use either RADIUS or TACACS+ (but not both simultaneously).

Note: Prior to Release 19.0, the default value for IP address of TACACS+ Servers was 0.0.0.0.
However, starting release 19.0, the default value for IP address of TACACS+ Server1 will be 0.0.0.1,
TACACS+ Server2 will be 0.0.0.2, TACACS+ Server3 will be 0.0.0.3. In case of IPv6 being selected,
the default IP address of TACACS+ Server1 will be 0100::1, TACACS+ Server2 will be 0100::2,
TACACS+ Server3 will be 0100::3. During upgrade to release 19.0, the previous default of 0.0.0.0 is
auto migrated to the new default values.

IP Security over OSC


Infinera nodes support IP security via the Encapsulating Security Payload (ESP) protocol in order to protect
control traffic over Optical Supervisory Channel (OSC) links in an Infinera network. IP Security over OSC
is a configurable option that protects link management traffic as well as routing and signaling traffic
(GMPLS).
The following node types support IP Security over OSC:
■ DTN-X
■ DTN
■ ROADM (Reconfigurable Optical Add/Drop Multiplexer)
■ OLA (Optical Line Amplifier)
■ OA (Optical Amplifier)
■ XT
The following traffic types can be protected:
■ OSPF—Open Shortest Path First routing messages for network topology discovery and route
computation.
■ RSVP—Resource Reservation Protocol messages for establishing circuits along routes computed
by OSPF.
■ ADAPT—Link management (link optical control) traffic.

Note: Network management traffic including GNE-SNE traffic, GNM sessions, and TL1 sessions
(including via craft port) can already be protected end-to-end via Secure Shell (SSH; see Secure
Shell (SSHv2) and Secure FTP (SFTP) on page 6-9).

The following algorithms are supported:


■ For IP security authentication, the HMAC-SHA-256 algorithm is supported.
■ For IP security encryption, the AES-256 algorithm is supported (cipher block chaining mode).
IP Security over OSC requires selectors (user-specified parameter values that define the connection) and
security associations (SAs; uni-directional logical connections between two communicating nodes). IP
security must be configured on the nodes at both ends of the link for the link to be protected. IP security
parameters can be configured via GNM, DNA, and/or TL1.
■ Infinera nodes use an SPI (Security Parameters Index), an arbitrary 32-bit value that is used by a
receiver to identify the SA to which an incoming packet should be bound. SA SPI values are unique
for each link. The SPI value used for outbound traffic at node A must be the same SPI value used
for inbound traffic at node B. However, note that the SPI value used for traffic from node A to node
B must be different from the SPI value used in the opposite direction (from node B to node A).
■ IP security is enabled on a link if at least one selector and one SA are created and are in the in-
service state. To properly bring up secured control traffic on the link, both ends of the link must be
configured with selectors and SAs, and the SAs at each end of the link must match (the key and
SPI value used for outbound traffic at node A must be the same key and SPI value used for inbound
traffic at node B; see the sketch at the end of this section).

Note: In addition to the support of ASCII (alphanumeric) values for authentication and
encryption keys in previous releases, Infinera nodes also support hexadecimal values for the
authentication and encryption keys. A key must be all ASCII characters or all hexadecimal
characters (there cannot be a mix of ASCII and hexadecimal characters in one key). It is
allowed for one key to be of one character type (e.g., ASCII) and another key to be of the other
character type (e.g., hexadecimal). For hexadecimal keys, the value must be 64 hexadecimal
characters. For the TL1 interface, the hexadecimal entries must begin with “0x” followed by 64
hexadecimal characters.

■ IP security is disabled if there are no selectors or SAs created on the link, or if all SAs and
selectors are in the out-of-service state.
To enable IP Security over OSC:
■ Create a selector on each node in the connection. Create a different selector for each type of traffic
to be protected (RSVP, OSPF, and/or ADAPT).
■ On each node, create a security association (SA) for each type of protected traffic (RSVP, OSPF,
and/or ADAPT):
□ For OSPF, create an inbound and outbound SA to/from each adjacent node (one inbound/
outbound SA for every fiber direction). (OSPF SAs are unidirectional.)
□ For RSVP, create an inbound and outbound SA to/from every other node in the signaling
domain. (RSVP SAs are unidirectional.)
□ For ADAPT, create a single SA to each adjacent node. (ADAPT SAs are bidirectional.)
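The SA consistency and key-format rules described in this section can be modeled with the sketch below (the data representation and function names are illustrative assumptions):

```python
import string

def key_format_ok(key):
    """A key is either all-ASCII alphanumeric or exactly 64 hexadecimal characters."""
    hex_ok = len(key) == 64 and all(c in string.hexdigits for c in key)
    ascii_ok = key.isascii() and key.isalnum()
    return hex_ok or ascii_ok

def sas_consistent(a_outbound, b_inbound, a_inbound):
    """Each SA is a dict with 'spi' and 'key' fields (assumed representation)."""
    match = (a_outbound["spi"] == b_inbound["spi"]       # A's outbound matches B's inbound
             and a_outbound["key"] == b_inbound["key"])
    directions_differ = a_outbound["spi"] != a_inbound["spi"]  # opposite directions use different SPIs
    return match and directions_differ and key_format_ok(a_outbound["key"])
```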

Media Access Control Security (MACSec)


Media Access Control Security (MACSec) is a Layer 2 security technology that provides point-to-point
security on Ethernet links between the nodes. In common with IPSec and SSL, MACSec defines a
security infrastructure to provide data confidentiality, data integrity, and data origin authentication. By
assuring that a frame comes from the station that claimed to send it, MACSec can mitigate attacks on
Layer 2 protocols.
MACSec protects communications using several configurable techniques. Data origin is authenticated
and data is transported over secured channels. Frames are validated as MACSec Ethernet frames. The
integrity of frame content is verified on receipt. Frame sequence is monitored using an independent
replay protection counter. Invalid frames are discarded or monitored. Data traffic carried within the
MACSec frame is encrypted and decrypted using an industry-standard cipher suite. For Infinera use
cases, point-to-point MACSec applications are supported.
The XT supports the following MACSec features:
■ MACSec supports encryption/decryption on a per-port basis for 10 GbE and 100 GbE ports, when client data is
transported over the line between two XT-3300s
■ Client site data encryption/decryption is supported on a per 10 GbE and 100 GbE port basis
■ IEEE 802.1AEbw-2013 based Layer 2 MACSec encryption: AES-256 GCM (Galois Counter Mode)
cipher suite with 256-bit keys for encryption/decryption
■ IKEv2 standard key management scheme
■ IEEE8021-SECY-MIB (802.1ae MIB)
■ Security policy administration
■ Support for data encryption/decryption on user interfaces (DNA, GNM, and TL1)
■ Configuration of NE-wide policies

MACSec Frame Format


MACSec provides secure MAC service on a frame-by-frame basis, using cryptographic methods
supported by the system. The figure below depicts a high-level overview of the MACSec frame. The
following are some of the important points to be noted:
■ Confidentiality is provided only on the Ethernet payload. The rest of the MAC frame is kept in clear text.
■ Integrity (authentication) is provided on the full MAC frame (including the MAC SA and DA). This allows
for a full MAC-frame-level integrity check to detect malicious (or accidental) modification of the
frame while in flight. The result of the integrity computation is a fixed-length ICV which is appended at the end of the
MAC frame. (A sketch of this split appears after the figures below.)
■ In the context of the XT, the encryption is performed only towards the line side. The XT does not support
encrypting traffic towards the CPE.

■ The Security Tag (SecTAG) encodes various information in the frame including data plane
indicators (such as Association Number) which allows the remote end to use appropriate keys to
decrypt the incoming traffic.
■ The VLAN tags are part of the encrypted data (MSDU).

Figure 6-3 MAC Service Data Unit (MSDU) and MAC Protocol Data Units (MPDU)

Figure 6-4 MACSec Frame - Breakdown of Individual Frame Elements
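The confidentiality/integrity split can be illustrated conceptually with AES-256 GCM from the widely used cryptography package (a sketch only, not the 802.1AE wire format; nonce handling and field layout are simplified assumptions):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def protect_frame(key, dst_mac, src_mac, sectag, payload):
    """Encrypt only the payload; authenticate the addresses and SecTAG as well."""
    nonce = os.urandom(12)               # real MACSec derives this from the SCI/packet number
    aad = dst_mac + src_mac + sectag     # sent in clear text but covered by the ICV
    ct_and_icv = AESGCM(key).encrypt(nonce, payload, aad)   # GCM tag acts as the ICV
    return dst_mac + src_mac + sectag + ct_and_icv, nonce

key = AESGCM.generate_key(bit_length=256)   # AES-256 GCM, matching the cipher suite above
```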

MACSec Deployment
MACSec is deployed for point-to-point configuration for the XT-3300. Encryption is performed by the mapper
on both 10GbE and 100GbE clients. The encryption is configured on the TribGige and is on a per-port basis.
The XT client 10GbE and 100GbE ports can be individually encrypted, subject to user configuration. Security
Associations (SAs) are created between the A-End and Z-End peers by exchange of keys as per IKE.
The figure below shows a point-to-point XT deployment. Any switch/router connected to the XT client
ports transmits/receives Ethernet frames. Every port capable of performing encryption implements a
Security Entity (SecY). Uni-directional Secure Channels (SCs) provide point-to-point secure
communication, which can be persistent/long-lived as long as the SecY exists. The secure
communication on a Secure Channel is realized through a chain of Secure Associations (SAs). An SA
associates a particular cryptographic key with a Secure Channel. SAs can be statically administered or
dynamically generated/exchanged through protocols such as IKEv2 or IEEE 802.1X. Secure Channels
also persist across any SA changes.
As seen in the figure below, Port X (on Node A) and Port Y (on Node B) implement/model a Secure
Entity. Two unidirectional Secure Channels exist: NodeA-Port X -> NodeB-Port Y and NodeB-Port Y ->
NodeA-Port X. SAs (SAn and SAm) are generated and exchanged between the SecY instances.

Figure 6-5 Example scenario for MACSec Deployment in XT

The following figure provides a data plane centric depiction of MACSec. There are two scenarios.
1. The unencrypted Ethernet traffic from the Client router/switch is received by the XT. MACSec
confidentiality and integrity functions are performed between the two XTs (see Sec and ICV added to
the MAC frame).
2. The MACSec encrypted Ethernet traffic is received by the XT. A second MACSec encryption is
performed by the XT. In this case, the incoming client MSDU becomes the MPDU for the XT MACSec.

Figure 6-6 Example scenario for MACSec Encryption and Double SecTAG-ing

Software licensing on MACSec


Ethernet MACSec is supported on the XT platform. The encryption controls are available on every Line-
GigEClientCTP allowing 10 GbE or 100 GbE encryption. As part of the MACSec implementation, the
system also supports MACSec management model including representation of Security Entity (SecY),
Secure Channels (SC) and Security Associations (SA).
There are two classes of licenses that are supported for MACSec:
■ Slot/Port Level Licenses: Licenses are available for 10 GbE and 100 GbE ports. Every port license
allows encryption services to be enabled on the specified port.
■ Chassis Level License: Licenses are available for 10 GbE and 100 GbE ports. This class of license
enables the ability to turn on MACSec service on all the ports of the chassis.
Software licensing on MACSec is supported through Infinera DNA and ILM.
■ For DNA, refer to the DNA License Management Guide.

Note: It is recommended to configure the NTP server before creating/ installing or performing
operations on the certificate.

Data Encryption
Data Encryption on the XT uses the AES-256 cipher suite as per the IEEE 802.1AEbw-2013 specification. AES-
GCM is an authenticated encryption with associated data (AEAD) cipher providing both confidentiality and
data origin authentication. AES-GCM is efficient and secure. It allows hardware implementations which
can achieve high speeds with low cost and low latency, as the mode can be pipelined. Applications that
require high data throughput can benefit from these high-speed implementations. AES-GCM has been
specified as a mode that can be used with IPsec ESP and 802.1AE Media Access Control (MAC)
Security [IEEE8021AE].

Note: If MACSec encryption has to be enabled on all the 10G or 100G ports, it is recommended to
wait a few seconds before enabling it on each consecutive port.

Internet Key Exchange (IKE)


Internet Key Exchange Protocol Version 2 (IKEv2) is based on Diffie-Hellman key exchange for exchange
or derivation of symmetric keys (SAs) between the concerned end-points.
IKEv2 is a UDP based protocol where an IKE daemon listens for incoming IKE sessions on Port 500. The
protocol implements its own timeout and retry mechanism. IKE messages have sequence numbers and
multiple IKE requests are allowed in transit. The protocol exchanges are always in pairs of messages.
The two broad categories of messages are:
■ IKE messages to create Security Associations (SA). This includes both the IKE SA (the SA to
secure the control session over which IKE messages are being exchanged) as well as the MACSec
SAs (SA for the data plane traffic).
■ Informational messages - Control messages between the peers to convey errors or notifications.
Informational messages are also used for deletion of SA and null messages for the detection of
peer aliveness.
IQ NOS also supports IKEv2 with a pre-shared key (PSK) for peer authentication. The system supports the
following PSK encoding schemes:
■ ASCII character encoding
■ HEX encoding
The same PSK has to be used at both ends, even if they are encoded in different PSK encoding
schemes. The user can choose the type of PSK encoding scheme.
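A small illustration of comparing PSKs supplied in the two encodings (the function name is an assumption):

```python
def psk_bytes(psk, encoding):
    """Normalize an ASCII- or HEX-encoded PSK to raw bytes for comparison."""
    if encoding == "HEX":
        return bytes.fromhex(psk)
    return psk.encode("ascii")

# psk_bytes("secret01", "ASCII") == psk_bytes("7365637265743031", "HEX")  -> True
```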

Migration between Pre-Shared Key and Certificate Based Authentication


Changing the IKE authentication mechanism from PSK to X.509 (or vice-versa) has to be performed with
caution, in particular when IKE SAs already exist and the corresponding Child SAs support
traffic-carrying data plane security entities.
Refer to the XT Task Oriented Procedures Guide for a detailed description of the steps to change the
authentication mechanism.

Certificate Management
In public key infrastructure (PKI), users of a public key need confidence that the associated private key is
owned by the correct remote entity with which an encryption or digital signature relationship will be established/
used. This is achieved through public key certificates.
A certificate is a data structure which binds public keys to subjects. The binding is asserted by having a
trusted Certificate Authority (CA) digitally sign each certificate. The certificates are typically housed in
repositories, which are systems or collections of distributed systems that store certificates and certificate
revocation lists (CRLs) and serve as a means of distributing these certificates and CRLs to end entities.
X.509 is one type of certificate that is commonly used. It was standardized by ITU-T (ISO/IEC)
[16]. The standard has gone through three revisions (v1 - 1988, v2 - 1993, v3 - 1996) developed by ITU-T
(ISO/IEC) along with ANSI. The X.509 format also allows for extension attributes in the certificate which
convey such data as additional subject identification information, key attribute information, policy
information, and certification path constraints.
IKEv2 is used for key exchange and X.509 certificates for authentication, where the CERTs are
exchanged through IKE. There are different classes/categories of X.509 certificates that are stored in the
XT.
■ Personal certificates: The X.509 Certificate that represents a particular XT (local). Every XT that
acts as an IKE peer - Chassis, OCG/SCH, will have one or more X.509 Certificates that are its own,
which it would distribute to other peers participating in the PKI system (or the system of nodes
within which the XT needs to prove its identity and authenticity).
■ Peer certificates: This is a collection of certificates installed on an XT which represents the
identities of all the peers it expects to communicate with. During the process of authentication, the
XT receives certificates from peers. The XT then compares the certificate it received with the list
of peer certificates that are installed. This ensures that the peer is one among the multiple peers
the XT is supposed to communicate with.
■ CA certificates: A list of X.509 certificates of well known CAs (for example: DigiCert, VeriSign, and
so on) stored on the XT. This is used for the purposes of signature validation if signatures are
present in the CERT that are sent by the peer.
■ Organization/customer CA certificates: These are CA certificates which are owned by the
customers deploying the XT. This could also be the Infinera default CA certificate. The primary purpose
is to sign certificates that are generated locally on the XT.
The X.509 certificates are created outside the XT by the user and then imported and installed on the XT
through DNA, GNM, or CLI. The associated private key is also installed on the XT. The CERT and private
key export to the XT is performed through management interfaces over SSH (or TLS). Once the CERT is
installed on the XT, the system performs local validation (both syntactical and for correctness). The user
can optionally choose to sign the certificate with one of their Root CA certificates.
The system supports the ability to install and process X.509 certificates. The following are some of the
features supported related to configuration and management of X.509 certificates:
■ Supports X.509 v3 certificates
■ Supports the following X.509 certificate types

□ PKCS#7
□ PKCS#12
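As a hedged example, an externally created PKCS#12 bundle could be inspected with the cryptography package before installation (the file name and password are placeholders; this is not the XT's internal validation logic):

```python
from cryptography.hazmat.primitives.serialization import pkcs12

with open("node-cert.p12", "rb") as fh:
    key, cert, extra_ca_certs = pkcs12.load_key_and_certificates(
        fh.read(), b"export-password")
print(cert.subject, cert.not_valid_after)   # basic syntactic and validity checks
```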

Secure Web Connection


Transport Layer Security (TLS) is a cryptographic protocol that provides communication security over a
communication network. The TLS provides privacy and data integrity between communicating network
elements.
An SSL/TLS connection between a client and server is set up by a handshake. Only the communication between
the management station and the GNE is secured; the GNE-to-SNE communication is not through a secure
channel.
After the connection is established, keys are used to securely send messages to each other. This
handshake process happens in three phases: Hello, Certificate Exchange, and Key Exchange.
The certificates used for these exchanges can be imported to the network element. In this case, the X.509
certificates created outside the network element by the user will be installed on the network element. The
associated private key is also pushed to the network element. The certificate and private key export to
the network element is done through management interfaces over SSH.

Applications using TLS


RESTCONF
RESTCONF is a REST-like protocol running over HTTP for accessing data described in YANG. A
RESTCONF interaction consists of an HTTP request sent by a RESTCONF client and an HTTP response
sent by the server. Both of these contain a required set of expected HTTP headers and may contain a
request or response body. The message body can be encoded in XML or JSON.
When the RESTCONF client connects with a GNE, the TLS connection is terminated at the Frontend
Web Server. Based on the URI, the Frontend Web Server forwards the request using HTTP to a Backend
Web Server. The Backend Web Server uses various processes to address the request from the client.
When the RESTCONF client connects with an SNE via a GNE, the TLS connection is terminated at the
GNE's Frontend Web Server. Based on the URI, the Frontend Web Server forwards the request using HTTP to the
SNE's Frontend Server or to its Backend Web Server, which invokes the same mechanism as in the case
of the GNE.

Figure 6-7 Example configuration of Access to GNE/SNE

Graphical Node Manager (GNM)


Infinera's Graphical Node Manager (GNM) is a node-level management application that provides users
with on-site access and control of Infinera DTN, DTN-X, XT, FlexROADM, Optical Amplifier, and Optical
Line Amplifier network elements. GNM provides fault management, configuration management, service
provisioning, performance management, and security management (FCPS) functionality across local and
remote network elements.
The initial connection from the GNM client to a network element is supported over TLS.

Serial Port Disabling


The DTN-X, DTN, Optical Amplifier, and FlexILS nodes support serial port disabling via management
interfaces in order to prevent unauthorized access from the node site. Serial port disabling is a system-
wide setting that is accessed via the NE-Wide Security Settings on GNM/DNA or the SET-ATTR-
SECUDFLT command in TL1. By default, these ports are enabled.

Note: If the serial ports are disabled, any session using the ports will be lost.

The following ports are configured with this setting:


■ Craft RS-232 Serial Port (DCE) on the MCM, IMM, OMM, XCM, and XCM-H.
■ Craft RS-232 Serial Port (DTE) on the Input/Output (I/O) Panel of the DTC, MTC, XTC-4, and
MTC-9; on the Input/Output Timing and Alarm Panel of the XTC-10; and on the Input/Output Alarm
Panel of the Optical Amplifier.

Note: Disabling the serial port does not block access to commissioning command line interface
(CCLI) during boot-up; disabling the serial port blocks only access to the administrative command line
interface (ACLI) after boot-up (i.e., the port is in the disabled state after boot-up).


DCN Port Disabling


The DTN-X, DTN, Optical Amplifier, and FlexILS nodes support DCN port disabling via management
interfaces in order to prevent unauthorized access to the node. DCN port disabling is a system-wide
setting that is accessed from the Network Element Properties window in GNM/DNA or the ED-SYS
command in TL1. The DCN is enabled by default.

Note: If the DCN port is disabled, any session using the port will be lost.


DCN Port Block for Layer 3 Traffic


The DCN interface is used for user management traffic. The DCN port block feature is used to restrict the
Layer 3 traffic that enters an XT(S)-3300 network element and that may be consumed by the Management
CPU or forwarded to a subtending network element.
Access Control Filters (ACF) are used to achieve the DCN port block. This feature enables users to:
■ secure the interfaces on the network element and restrict inbound IP access to a network element,
protecting it from access by hosts that do not have permission.
■ specify which hosts or groups of hosts can access and manage a network element by IP address,
simplifying operations.
■ gather statistics on the allowed application ports and IP addresses.
The current release supports creation of ACF rules, viewing ACF counters, clearing counter values and
deletion of ACF rules on all IQ NOS nodes.

Note: In the case of an XTC-2, XTC-2E, MTC-6, MTC-9, XTC-4, XTC-10, OTC, or DTC chassis with Layer 3
switching capability, Access Control Filters are unable to process (i.e., allow or block) any packets
from the SNE to the GNE’s router ID or from the GNE to the SNE’s router ID.
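
The following is a conceptual Python model of how an ACF rule set might admit or block inbound Layer 3 traffic and count hits; the field names and matching behavior are illustrative only and do not reflect the actual IQ NOS ACF implementation.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

# Conceptual model only; the real ACF rule attributes are configured through
# the management interfaces and are not reproduced here.
@dataclass
class AcfRule:
    source: str       # permitted source host or subnet, e.g. "10.1.0.0/24"
    dest_port: int    # application (TCP/UDP) port the rule matches
    action: str       # "allow" or "block"
    hits: int = 0     # counter, analogous to viewing/clearing ACF counters

def evaluate(rules, src_ip, dst_port, default_action="block"):
    """Return the action for an inbound packet and update rule counters."""
    for rule in rules:
        if ip_address(src_ip) in ip_network(rule.source) and dst_port == rule.dest_port:
            rule.hits += 1
            return rule.action
    return default_action

rules = [AcfRule("10.1.0.0/24", 443, "allow"),   # management hosts over HTTPS
         AcfRule("0.0.0.0/0", 23, "block")]      # block Telnet from anywhere

print(evaluate(rules, "10.1.0.7", 443))    # allow
print(evaluate(rules, "192.0.2.9", 443))   # block (default action)
```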


ACLI Session Disabling


The DTN-X, DTN, Optical Amplifier, and FlexILS nodes support administrative command line interface
(ACLI) session disabling via management interfaces in order to prevent unauthorized access to the
node’s debugging interface. ACLI session disabling is a system-wide setting that is accessed via the NE-
Wide Security Settings on GNM/DNA or the SET-ATTR-SECUDFLT command in TL1. By default, ACLI
sessions are enabled.

Note: If ACLI sessions are disabled, any open ACLI sessions remain active. Any subsequent ACLI
login requests are blocked.


Verified Software Image


The verified software image feature is implemented for the security of Infinera IQ NOS network elements, to
ensure the integrity of Infinera software running on the various platforms. It prevents systems from booting up
with malicious software inserted in the images on an Infinera device. The verification process proceeds in
the following sequence:
■ The network element image is created at build time with a SHA-256 hash value. The
management interface displays this hash for the downloaded image.
■ The Infinera software distribution portal, the Customer Web Portal (https://support.infinera.com/
images/), displays the hash of each released software image.
■ The user manually compares the computed hash value with the value displayed in the portal (see
the illustrative sketch below).
■ If the hashes match, the user can continue with the installation and upgrade. If the hashes do not
match, the user should delete the downloaded image and retry.

Verified software image is supported on XT-500 and MTC-6/MTC-9 chassis.
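
A minimal sketch of the manual hash check described above is shown below; the image file name and published hash are placeholders to be replaced with the downloaded image and the value shown on the portal or management interface.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a downloaded software image file."""
    digest = hashlib.sha256()
    with open(path, "rb") as image:
        for chunk in iter(lambda: image.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder file name and hash; use the image you downloaded and the hash
# published on the Customer Web Portal (or shown by the management interface).
downloaded_image = "example_image.tar.gz"
published_hash = "0123abcd..."  # truncated placeholder

if sha256_of(downloaded_image) == published_hash:
    print("Hashes match: continue with installation/upgrade.")
else:
    print("Hash mismatch: delete the downloaded image and retry.")
```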


Signed Images
The Signed Images feature provides integrity and authenticity of Infinera software during software downloads
and system boot-up using Infinera Digital Signatures. Images are digitally signed by Infinera before release, so
users can verify that the software originates from Infinera and that no one has tampered with it.
The signature verification process starts with the network element computing a hash of the software
or component it wants to verify. The network element also holds a copy of the public key (ISK) that
corresponds to the private key with which the signature was generated. The network element uses the public
key to decrypt the signature, recovering the original hash, and verifies that it is identical to the hash it
computed. A match indicates that the signature verification is successful.
The Signed image feature is supported on XT(S)-3300 and XT(S)-3600.
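
For illustration only, the sketch below shows the general shape of such a verification using the third-party Python cryptography package, assuming an RSA public key with PKCS#1 v1.5/SHA-256 signatures and placeholder file names; the actual key handling, signature format, and padding used for Infinera signed images are not specified here.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Illustrative file names only; assumes an RSA key and PKCS#1 v1.5 padding.
with open("infinera_public_key.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

with open("example_image.tar.gz", "rb") as f:
    image_bytes = f.read()
with open("example_image.sig", "rb") as f:
    signature = f.read()

try:
    # verify() hashes the image, recovers the hash from the signature with the
    # public key, and compares the two, raising InvalidSignature on mismatch.
    public_key.verify(signature, image_bytes,
                      padding.PKCS1v15(), hashes.SHA256())
    print("Signature verification successful.")
except InvalidSignature:
    print("Signature verification failed; do not install the image.")
```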



CHAPTER 7

Software Configuration Management

IQ NOS provides the following capabilities to manage software and database images on the Infinera
nodes:
■ Downloading Software on page 7-2
■ Maintaining Software on page 7-3
■ Software Image Directory Structure on page 7-7
■ Maintaining the Database on page 7-10
■ Uploading Debug Information on page 7-17
■ Verifying FTP Connectivity for Debug, PM, and DB Backup on page 7-19


Downloading Software
IQ NOS, operating DTN-Xs, DTNs, Optical Amplifiers, and FlexILS nodes, is packaged into a single
software image. The software image includes the software components required for all the circuit packs in
the Infinera network elements.
Users can remotely download the software image from a user specified FTP server, to the controller(s)
(IMM, XCM, etc.) of one or more network elements within an administrative domain. Once users
download the software image to the controller module and then separately initiate the software upgrade
procedure, the software is automatically distributed to the remaining circuit packs within the chassis.
A network element can store up to two versions of the software image (including the current version), at
the same time.

Note: Earlier versions of IQ NOS supported up to three versions of software on a network element.
When upgrading a network element storing three versions of software, the system will prompt you to
reduce the number of software images residing on the network element.

Software downloads to systems with multiple chassis and/or redundant controllers occur in the following
manner:
■ Redundant controllers only—The software download is restricted to the active controller, after
which the software image is automatically replicated to the standby controller.
■ Multi-chassis only—The software download is restricted to the active controller on the Main
Chassis. Upon initiation of the software upgrade procedure, the software image is distributed to the
remaining controllers in the system.
■ Multi-chassis with redundant controllers—The software download is restricted to the active
controller on the Main Chassis. Once the new software is successfully activated on the Main
Chassis active controller, its image is automatically distributed to the remaining controllers in the
system, including the redundant controller on the Main Chassis.
Users may download software images on a node-by-node basis, or perform bulk download of software
images to multiple network elements within the Infinera Intelligent Transport Network. The bulk download
feature allows for fast and easy distribution of a software image to all the network elements in
administrative domains connected via an OSC.


Maintaining Software
The network elements support in-service software upgrade and reversion. The software upgrade/revert
operation lets users activate a different software version from the one currently active. The following
software operations are supported:
■ Install New Software—This operation lets users activate the new software image version with an
empty database. The software image may be older or newer than the active version.

Note: Do not attempt to reboot the system while it is coming up with an empty database. This may
corrupt the database and cause the controller module to re-boot repeatedly.

■ Upgrade Software—This operation lets users activate the new software image version with the
previously active database. The previously active database version must be compatible or
migratable with the new software image version.

Note: For detailed traffic, FPGA upgrade, and operational effects associated with upgrading to a
specific software image version, refer to the applicable Software Release Notes.

Note: Do not physically unseat a TAM-1-40GE or TAM-1-40GR when a firmware upgrade is in
progress.

Note: For information on preparing for a software upgrade, see Nodal Software Pre-Upgrade
Verification on page 7-5.

■ Activate Software and Database—This operation lets users activate a different software image and
database version. The image version may be older or newer than the active software image version.
The database version and the software version must be the same to activate the software and
database. Before upgrading the software, the new database image must be downloaded to the
network element.

Note: Before performing software Revert from Release 19.0 to pre-Release 19.0 or fresh installing
pre-Release 19.0 on a Release 19.0 system, remove and/or delete all new Release 19.0 specific
features (equipment and services).

■ Restart Software with Empty Database—This operation lets users activate the current software
image with an empty database.

Note: Do not attempt to reboot the system while it is coming up with an empty database. This may
corrupt the database and cause the controller module to reboot repeatedly.

■ Uncompress Software—This operation lets users uncompress the software image to enable faster
software upgrade.
In general, upgrading the software does not affect existing service. However, if the new software image
version includes a different Firmware/Field Programmable Gate Array (FPGA) version than the one
currently active, it could impact existing services. If this occurs, a warning message is displayed.


Users must upgrade software on a node-by-node basis. Therefore, at any given time, the network
elements within a network may be running at least two software image versions. These different images
must be compatible. In the presence of multiple software versions, the network provides functions that are
common to all the network elements.
The software upgrade procedure executes the following steps:
1. Verifies that the software and database versions are compatible. If they are not compatible, the
upgrade procedure is not allowed.
2. Validates the uncompressed software image. If the software image is invalid, the upgrade procedure
is not allowed.
3. Decompresses the software image. If there is not enough memory on the network element to store
the decompressed image, the software image is not decompressed.
4. Reboots the network element so that the new software image becomes active. If the reboot fails, the
upgrade procedure is aborted and the software reverts to the previously active software image version.
5. When the new software image is activated, updates the format of the Event Log and Alarm table
alarms, if necessary.
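
The following self-contained sketch mirrors that sequence with stand-in checks; all names are hypothetical and do not correspond to IQ NOS commands or interfaces.

```python
# Illustrative model of the upgrade sequence described above; the checks are
# simple booleans here instead of real hardware and image validation.
class UpgradeNotAllowed(Exception):
    pass

def upgrade(checks):
    """Run the upgrade steps in order; refuse or revert on any failure."""
    for step in ("versions_compatible", "image_valid", "enough_memory"):
        if not checks[step]:
            raise UpgradeNotAllowed(f"pre-check failed: {step}")
    print("decompressing software image...")
    if not checks["reboot_ok"]:
        print("reboot failed: reverting to previously active image")
        return "reverted"
    print("updating Event Log and Alarm table formats if necessary")
    return "upgraded"

print(upgrade({"versions_compatible": True, "image_valid": True,
               "enough_memory": True, "reboot_ok": True}))
```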

Note: When the software is upgraded, the PM historical data is not converted to the new format (if
there is a change in the format) and it is not persisted. Therefore, before you upgrade the software,
you must upload and save the PM data in your local servers.

In general, if the upgrade procedure is aborted, the software reverts to the previously active version. The
procedure reports events and alarms indicating the cause of the failure.
The following list outlines software upgrade behavior on systems with multiple chassis and/or redundant
controllers:
■ Redundant controllers only—The software upgrade is restricted to the active controller. Once the
new software is successfully activated on the Main Chassis active controller its image is
automatically replicated to the standby controller.
■ Multi-chassis only—The software upgrade is restricted to the active controller on the Main Chassis.
Once the new software is successfully activated on the Main Chassis active controller, its image is
automatically distributed to the remaining controllers in the system.
■ Multi-chassis with redundant controllers—The software upgrade is restricted to the active controller
on the Main Chassis. Once the new software is successfully activated on the Main Chassis active
controller, its image is automatically distributed to the remaining controllers in the system, including
the redundant controller on the Main Chassis.
During the upgrade process, communication with the clients and other network elements within the
network is interrupted.

Nodal Software Efficient Update


Starting with upgrades from Release 11.0 software, the nodal software update process automatically
bypasses those hardware FRUs that do not have any software changes and/or enhancements in the
target upgrade release, thereby reducing the total nodal upgrade time and avoiding unnecessary updates
or issues.

Nodal Software Pre-Upgrade Verification


Starting with upgrades from Release 11.0, Nodal Software Pre-Upgrade Verification is supported for
software upgrades of the network element. At the time of a nodal software upgrade (from Release 11.0 or
higher to any higher maintenance or major release), the user can initiate a Prepare for Upgrade
command, which causes the node to distribute the new software to the shelf controllers and perform pre-
upgrade checks on the modules to verify that the upgrade will complete successfully. These checks are
performed without initiating the actual upgrade. In this way, the user can prepare for a software upgrade
without performing the upgrade, and without losing management connectivity.
Nodal Software Pre-Upgrade Verification is supported for software upgrades of the following node types:
■ DTN-X (a node with an XTC main chassis/XCM node controller), including multi-chassis
configurations with multiple DTCs/MTCs (with MCM shelf controller), OTCs (with OMM shelf
controller), and MTC-9/MTC-6s (with IMM shelf controller).
■ FlexILS Optical Line Amplifier and FlexILS ROADM (nodes with an MTC-9/MTC-6 main
chassis/IMM node controller), including multi-chassis configurations with multiple MTC-9s/MTC-6s
(with IMM shelf controller) and OTC expansion chassis (with OMM shelf controller).
■ DTN (a node with a DTC main chassis/MCM node controller). The feature is supported for all node
DTN configurations, including multi-chassis configurations with multiple DTCs/MTCs (with MCM
shelf controller) and OTCs (with OMM shelf controller).
■ Optical Amplifier (a node with an OTC main chassis/OMM node controller), including multi-chassis
configurations with multiple OTCs (with OMM shelf controller).
■ XT (a node with an XT-500/MTC-6/MTC-9/DTC/MTC main chassis including multi-chassis
configurations with XT-500/MTC-6/MTC-9/DTC/MTC OR a node with XT(S)-3300/MTC-6/MTC-9
main chassis including multi-chassis configurations with XT(S)-3300 chassis).

Remote Hardware FPGA Upgrade


Infinera nodes feature the use of Field Programmable Gate Array (FPGA) logic chips within many of the
Infinera hardware modules. Each FPGA contains an “image”, a set of programmed instructions by which
a circuit pack operates. When available, updates to the FPGA image are provided through new software
releases, and may be downloaded to the circuit packs remotely. Compared to the traditional method of
upgrading hardware logic devices through replacement, repair, and return of hardware modules, the
remote upgrade feature offers cost savings and minimized service impact.

Note: For details on which modules contain FPGAs and the firmware update information for each
release and module, as well as which updates require cold reboots of the module, please refer to the
Release Notes for the specific release.


Critical information about FPGA image upgrades is provided in the Software Release Notes. Specifically,
the Software Release Notes identify:
■ If the release contains any FPGA upgrades, and if so, for what modules
■ The functional changes made by each FPGA upgrade
■ Whether the FPGA upgrade is service impacting
■ If the FPGA upgrade is recommended, required, or optional
When a user performs a software upgrade, all non-service affecting FPGA upgrades are automatically
activated. Service-affecting FPGA upgrades are not activated until the user targets each individual
module with a cold-reboot, or removes/reinserts the module into the chassis. After performing a software
upgrade, users may check for pending FPGA upgrades using one of the following methods before
activating FPGA upgrades on a per-module basis:
■ Equipment Manager tool (in DNA or GNM)
■ RTRV-EQPT TL1 command with the SAFWUPG parameter
This allows users to perform hardware upgrade operations within a planned maintenance and service
disruption window.

Note: If there is an incompatibility between the firmware version on a given module compared to what
the current version of the software can support, no new services may be added to the node. If all
firmware versions are compatible with the current software image version (even if the software image
contains firmware upgrades) then users may use, add, and subtract services indefinitely.


Software Image Directory Structure


The following section describes the Software Images and the directory structure.

Software Images
Starting with Release 16.2, the software image files required to install IQ NOS software are split based on the
chassis type; that is, every chassis type has its own software image file. The software image file is
downloaded from the FTP server as described below:
■ The software image file is first downloaded for the main chassis.
■ The main chassis then downloads the software image from the FTP server based on the expansion chassis
type. For all subsequent expansion chassis of the same chassis type, the already downloaded software
image is reused.
In Release 18.2, verified software image support is implemented for the security of IQ NOS network elements to
ensure the integrity of Infinera software that runs on the various platforms. This prevents systems from booting
up with malicious software inserted in the images on an Infinera device.
The verification process proceeds in the following sequence:
■ The software image includes a SHA-256 hash value.
■ Management interfaces display the hash of the downloaded image.
■ The user can manually compare the hash value displayed in management interfaces with the hash
present in the MetaR_<Release_Number>.<Build.Number>.txt.sha256 file in the software image
download directory. If the hashes match, the user can continue with the installation and upgrade. If
the hashes do not match, the user should delete the downloaded image and retry.
In order for the software image to be downloaded, the FTP server must be reachable at all times while
software maintenance operations are in progress. It is also required that the software image files are
stored in a defined directory on the FTP server, as described in Table 7-1 below.
Starting with R19.0, the chassis software image includes a tar ball which contains individual Field Replaceable
Unit (FRU) based tar balls for some FRUs, and a tar ball for the controller card and the remaining FRUs
supported on that chassis.
In Release 19.0.2, signed software images are implemented on XT(S)-3300/XT(S)-3600 network elements to
ensure the integrity and authenticity of Infinera software during software downloads and system boot-up
using Infinera Digital Signatures.
The verification process for a signed image proceeds in the following sequence:
■ The signed software includes a SHA-256 hash value.
■ Management interfaces display the hash of the MetaR file.
■ The MetaR file contains the hash of all the image types, and the network element software internally
verifies the hash of the downloaded image against the hash in the MetaR file.
■ The user can manually compare the hash value displayed in management interfaces with the hash
present in the MetaR_<Release_Number>.<Build.Number>.txt.sha256 file in the software image
download directory. If the hashes match, the user can continue with the installation and upgrade. If
the hashes do not match, the user has to delete the downloaded software image and retry.

Note: Software downgrade from a release supporting FRU-based images (R19.0 and later) to a
release that does not support FRU-based images (releases prior to R19.0) results in an unstable
system. Ensure that the FTP server contains both the "From" IQ NOS software version and the "To"
IQ NOS version.

FRU-based installers are supported for the following:

Table 7-1 Software Image directory structure on FTP server


Chassis Type: DTN
FTP Folder Location: <ftp folder path>/DTN
File Names:
    <Rel No>.yyyy.BMM.ppc.tar.gz
    <Rel No>.yyyy.CMM.ppc.tar.gz
    <Rel No>.yyyy.DLM.ppc.tar.gz
    <Rel No>.yyyy.MCM.ppc.tar.gz
    <Rel No>.yyyy.ppc.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256

Chassis Type: XTC-2
FTP Folder Location: <ftp folder path>/XTC2
File Names:
    <Rel No>.yyyy.x86.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256

Chassis Type: DTNX
FTP Folder Location: <ftp folder path>/DTNX
File Names:
    <Rel No>.yyyy.AOFX1200.x86.gz
    <Rel No>.yyyy.OLM.ppc.tar.gz
    <Rel No>.yyyy.OTM.ppc.tar.gz
    <Rel No>.yyyy.OTM1200.x86.gz
    <Rel No>.yyyy.ppc.tar.gz
    <Rel No>.yyyy.XCM.ppc.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256

Chassis Type: MTC-9/MTC-6
FTP Folder Location: <ftp folder path>/ITN
File Names:
    <Rel No>.yyyy.ppc.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256

Chassis Type: OTC
FTP Folder Location: <ftp folder path>/OLA
File Names:
    <Rel No>.yyyy.ppc.tar.gz
    <Rel No>.yyyy.BMM.ppc.tar.gz
    <Rel No>.yyyy.DSC.ppc.tar.gz
    <Rel No>.yyyy.OAM.ppc.tar.gz
    <Rel No>.yyyy.ORM.ppc.tar.gz
    <Rel No>.yyyy.RAM.ppc.tar.gz
    <Rel No>.yyyy.SCM.ppc.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256

Chassis Type: XT-500S/XT-500F
FTP Folder Location: <ftp folder path>/XT
File Names:
    <Rel No>.yyyy.x86.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256

Chassis Type: XT(S)-3300
FTP Folder Location: <ftp folder path>/XT3300
File Names:
    <Rel No>.yyyy.x86.tar.gz
    <Rel No>.yyyy.XT3300.x86.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256
    tar_manifest.txt
    SignatureR_<Rel No>.yyyy.txt (for signed software image)

Chassis Type: XT(S)-3600
FTP Folder Location: <ftp folder path>/XT3600
File Names:
    <Rel No>.yyyy.x86.tar.gz
    <Rel No>.yyyy.XT3600.x86.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256
    tar_manifest.txt
    SignatureR_<Rel No>.yyyy.txt (for signed software image)

Chassis Type: Miscellaneous (contains pre-upgrade check installers)
FTP Folder Location: <ftp folder path>/MISC
File Names:
    <Rel No>.yyyy.pre_upgradeu.tar.gz
    MetaR_<Rel No>.yyyy.txt
    MetaR_<Rel No>.yyyy.txt.sha256

Where <Rel No> is the IQ NOS software release number (for example, 19.0),
where yyyy is the build number (for example, 0611), and
where the MetaR file contains the hash information for all software images. The value of the hash in the
MetaR_<Rel No>.yyyy.txt.sha256 file is to be compared with the hash of the downloaded software image
displayed in management interfaces.
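
As an illustration of the naming convention in Table 7-1 (not an Infinera tool), the expected folder and MetaR file names for a given chassis type, release, and build could be assembled as follows; the FTP root shown is a placeholder.

```python
# Folder names per chassis type, as listed in Table 7-1.
FOLDER_BY_CHASSIS = {
    "DTN": "DTN", "XTC-2": "XTC2", "DTNX": "DTNX", "MTC-9/MTC-6": "ITN",
    "OTC": "OLA", "XT-500S/XT-500F": "XT",
    "XT(S)-3300": "XT3300", "XT(S)-3600": "XT3600",
}

def meta_file_names(ftp_root, chassis_type, rel_no, build):
    """Return the folder and MetaR file names expected on the FTP server."""
    folder = f"{ftp_root}/{FOLDER_BY_CHASSIS[chassis_type]}"
    meta = f"MetaR_{rel_no}.{build}.txt"
    return folder, meta, meta + ".sha256"

# Example with placeholder values.
print(meta_file_names("/sw-images", "XT(S)-3300", "19.0", "0611"))
# ('/sw-images/XT3300', 'MetaR_19.0.0611.txt', 'MetaR_19.0.0611.txt.sha256')
```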


Maintaining the Database


To ensure that the correct database is activated on a network element, the database image includes this
information:
■ The database version. This is used to check its compatibility with the software image version. The
database image version must be equal to the software image version.
■ The backplane ID of the network element on which the database was created.
The following database operations are supported:
■ Downloading the Database on page 7-10
■ Backing up the Database on page 7-10
■ Restoring the Database on page 7-11

Downloading the Database


Users can download the previously backed up database file to the network element from a specified FTP
server. Up to three database versions (including the current one) can be stored on the network element at
a time. The downloaded database file does not change the current active database. It is simply stored in
the persistent memory of the network element.

Backing up the Database


There are two database backup modes:
■ Manual Database Backup—Users can manually back up the current database image at any time.
The current database image can be transferred to a specified FTP server or stored locally
on the network element.
■ Scheduled Database Backup—Users can schedule automatic database backups to occur at a
specified time on a specified day, at either daily or weekly intervals. For example, users can
schedule database backups to occur every day at 5pm. Users can also specify a primary and
secondary FTP server to store the backup file. By default, the database is backed up to the primary
server; however, if that server is unavailable, the database is backed up to the secondary server.

Note: It is not recommended to perform any provisioning-related activities on a network
element when a database backup operation is in progress. New provisioning requests may be
rejected by the network element until the database backup operation is complete.

In both modes, the current active database is backed up, not any previously saved database files.
In the case of a multi-chassis system, a database backup operation is restricted to the active
controller module on the Main Chassis. For a system with redundant controllers, a database
backup operation is restricted to the active controller module.
The database file that has been backed up contains:


■ Database file, which includes configuration information stored in the persistent memory on the
network element.
■ Alarm table stored in the persistent memory of the network element.
■ Event Log stored in the persistent memory of the network element.
Infinera nodes can be configured to transfer database backup files simultaneously to both the
primary and secondary FTP servers. (Simultaneous transfer requires that both servers are
configured correctly.)
When a compiled file transfer is initiated by the user, the node will first verify the FTP server
configuration before compiling the file. See Verifying FTP Connectivity for Debug, PM, and DB
Backup on page 7-19 for more information.

Restoring the Database


Users can perform the restore operation to activate a new database image file with the current active
software image version. The new database image file and the software image must be of the same
version and compatible with the network element. The restore operation restarts the network element and
activates the new database image. Users can restore the database at system reboot time or at any time
during normal operation.

If the restore operation fails, the software rolls back to the previously active database image and an alarm
is raised indicating the failure of the restore operation. When the database is successfully restored, the
alarm is cleared. Users can manually restore the database.

Note: For FlexILS nodes, database restoration is supported only when the active IMM controller
module is in the primary IMM slot (slot 9 of the MTC-9 chassis or slot 6 of the MTC-6). If the active
IMM is in the redundant IMM slot, the user must first do a switchover, thereby making the IMM in the
primary IMM slot the active controller module and the IMM in the redundant IMM slot the standby
controller module. Once the IMM in the primary IMM slot is made the active controller module, the
database can be restored.

Depending on the differences between the two databases, the database restore operation could affect
service. The database restoration procedure:
■ Restores the configuration data as per the restored database. The configuration data in the
restored database may differ from the current hardware configuration. In such scenarios, in
general, the configuration data takes precedence over the hardware.

Note: For restoring a database on a node which currently has a 2 Port D-SNCP service (with y-
cable fibers connected in the work and protect tributaries), if the database restoration will
change the 2 Port D-SNCP service to non-protected on the node, it is recommended that the
protect leg fiber be removed before the database is restored on the node.

■ Restores the alarms in the Alarm table by verifying the current alarm condition status. For example,
if there is an alarm entry in the restored Alarm table but the condition is cleared, that alarm is
cleared from the current Alarm table. On the other hand, if the alarm condition still exists, the
corresponding alarm entry is stored in the current Alarm table with the original time stamp.

Note: The data in the Event Log is not restored.

The database image can be restored at system reboot time or at any time during normal operation.
The following list outlines database restoration behavior on systems with multiple chassis and/or
redundant controllers:
■ Redundant controllers only—The database is first restored on the active controller, and from there,
automatically synchronized to the standby controller.
■ Multi-chassis only—The database restore operation is restricted to the active controller on the Main
Chassis.
■ Multi-chassis with redundant controllers—The database restore operation is restricted to the active
controller on the Main Chassis. From there, it is automatically synchronized to the standby
controller on the Main Chassis only.
Following is the description of some scenarios where the configuration data in the restored database
differs from the current hardware configuration and how they are handled:
■ Scenario 1: The restored database contains a managed equipment object, but there is no
corresponding hardware present in the chassis. In this scenario, the corresponding equipment is
considered to be pre-configured (refer to Equipment Pre-configuration on page 3-35).
For example, consider the following sequence of operations:
□ Backup database
□ Remove a circuit pack from the chassis
□ Restore the previously backed up database.
After the database restoration, the removed circuit pack is pre-configured.
■ Scenario 2: If the restored database does not contain a managed equipment object, but the
hardware is present in the network element, the managed equipment object is created in the
database as in equipment auto-configuration (refer to Equipment Auto-configuration on page 3-35).
For example, consider the following sequence of operations:
□ Backup database
□ Install a new circuit pack
□ Restore the previously backed up database.
In this case, after database restoration, the newly inserted circuit pack is auto-configured.
■ Scenario 3: If the managed equipment object exists in the database and the corresponding
hardware equipment is present in the network element, but there is a configuration mismatch, an
equipment mismatch alarm is reported and the operational state of the equipment is changed to
out-of-service (see Operational State on page 3-39).


■ Scenario 4: If the restored database contains manual cross-connect configuration information but
there is no cross-connect configured in the hardware, then IQ NOS provisions the corresponding
manual cross-connect (provided the required data path resources exist) according to the
configuration information in the restored database.
For example, consider the following sequence of operations:
□ Backup the database
□ Delete a manual cross-connect
□ Restore the database
In this case, the manual cross-connect that was deleted after the database backup is recreated.
■ Scenario 5: If the restored database does not contain a manual cross-connect configuration, but a
manual cross-connect is provisioned in the hardware, then the manual cross-connect is torn down
(deleted) as per the configuration information in the restored database.
For example, consider the following sequence of operations:
□ Backup the database
□ Create a manual cross-connect
□ Restore the database
In this scenario, the manual cross-connect that was created after the database backup is deleted.
■ Scenario 6: If the restored database does not contain SNC configuration information, but an SNC is
provisioned in the hardware, then the SNC is torn down (released) by releasing the signaled cross-
connects (see GMPLS Signaled Subnetwork Connections (SNCs) on page 4-10) along the SNC
path. However, it takes approximately 45 minutes to release the signaled cross-connects. Note that
the SNC configuration information is stored on the source node only. The intermediate nodes
contain only the signaled cross-connects.
For example, consider an SNC that spans three nodes: Node A, Node B and Node C and Node A
is the source node. Consider the following sequence of operations:
□ Backup the database on Node A
□ Create an SNC from Node A to Node C passing through Node B which results in
corresponding signaled cross-connects being created on Node B and Node C
□ Restore the database on Node A
In this case, the restored database on Node A does not contain the SNC configuration information.
However, Node B and Node C have signaled cross-connects, which are released after approximately
45 minutes to match the restored database on Node A.
Consider the following sequence of operations for the same network configuration as in the previous
example:
□ Backup the database on Node B
□ Create an SNC from Node A to Node C passing through Node B which results in
corresponding signaled cross-connects being created on Node B and Node C


□ Restore the database on Node B, which results in the signaled cross-connect corresponding to
the SNC created after the database backup being deleted.
In this scenario, since Node A contains the SNC configuration, the deleted signaled
cross-connect on Node B is recreated. However, it may take up to 15 minutes for the SNC to come
back up.

Database and Line Module Branding


Both the IQ NOS software and database are stored within the controller module flash memory, and
persist in the controller module even when the module is not powered or installed within the system. To
prevent a system from operating off a controller module with an inappropriate version of IQ NOS and/or
database, the IQ NOS software and hardware work together to “brand” the controller modules and line
modules. The system supports the following two types of branding:
■ Database Branding - A process by which the software and database residing on a controller
module are marked as belonging to a specific network element. This prevents the chassis from
booting off an inappropriate controller module - that is, a controller module that was not specifically
configured to work with the given chassis.
■ Line Module Branding - A process by which the line modules in a network element are marked to
work with the specific controller module. This prevents the line modules from operating off a
controller module that might have been properly branded for the chassis, but contains a stale
database.

Database Branding and Rebranding


There are two levels of database branding: primary and secondary. The primary database brand is the
system chassis serial number, which is stored on both an EEPROM on the chassis backplane, as well as
within the controller module database. Should the chassis EEPROM become unavailable, or change as
the result of an emergency chassis replacement, the system will retrieve the Input/Output (I/O) Panel
serial number as a secondary database brand. During controller module initialization, the system brand is
checked against the brand located within the controller module database. If the brand matches, the
controller module will complete its boot sequence.
If the controller module database brand does not match, the controller module will not complete its
initialization as the active controller module without user intervention. When this happens, the user has
several choices in order to continue:
■ If there is a database present on the controller module, the user may perform one of the following
actions:
□ Delete the database and then either bring up an empty database or perform a local/remote
database restore
□ Rebrand the system so that the chassis accepts the controller module database

Note: For XTCs, an XCM cannot be rebranded from an XTC-4 to an XTC-10 and vice versa. Instead,
the user must delete the XCM’s database and bring up the XCM with an empty database or perform a
database restore on the XCM.


■ If there is no database present on the controller module, the user may perform one of the following
actions:
□ Bring up an empty database
□ Perform a local or remote database restore

Note: Do not attempt to reboot the system while it is coming up with an empty database. This may
corrupt the database and cause the controller module to re-boot repeatedly.

If the database brand does not match upon inserting a redundant controller module, the redundant
controller module will not boot, and a branding mismatch alarm (BRAND-MSMT) will be raised. To re-
brand a redundant controller module, the user must intervene with the “Make Standby” command. This
command forces the redundant controller module to format its flash and re-install its software from the
active controller. The redundant controller module then reboots and synchronizes the rest of its state (i.e.,
its database) from the active controller module, before entering the standby state.
Re-branding is useful for providing pre-configured controller modules with a user specific “template”
database. It also enables emergency chassis replacement without requiring re-configuration.

Note: Rebranding will overwrite the configuration of a system and should be used only by
experienced operational personnel.

For further details on the procedure to “rebrand” or recommission a controller module, refer to the DTN
Turn-up and Test Guide.

Line Module Branding


The line module branding process stores the serial number of active and standby controller modules on
the line modules’ static Random Access Memory. During controller module initialization, the line module
brands are checked against the controller module serial number. If the brands match, the controller
module will complete its boot sequence and apply the configuration stored within its database to the line
modules. If the line module brands do not match, the controller module will not complete its initialization,
and a branding mismatch alarm (DBMSMT) will be raised. When this happens, the user has several
choices to clear the alarm:
■ Apply the ‘Force Sync’ action to synchronize the configuration stored within the controller module
database to the line modules.
■ Download a different database that was backed up earlier, apply the Force Sync action to restore
the newly downloaded database, and then restart the controller module.
■ Re-seat the line module. This action clears the serial branding from the line modules, applies the
configuration stored within the controller module database to the line modules, and finally re-brands
the line modules with the controller module serial number.

Note: If the network element is configured to support cross-connects, re-seating the line modules can
affect traffic.
■ Cold boot the line module, either manually or by power cycling the chassis.


Note: If the network element is configured to support cross-connects, cold-booting the line modules
can affect traffic.

Although new services may be provisioned even during the event of a line module brand mismatch, it is
highly recommended that line module brand mismatch alarms are addressed immediately, without
performing any new service provisioning. Once a line module brand mismatch alarm occurs, the following
critical functions are disabled, which can lead to quickly growing inconsistencies between the controller
module database and the physical network element state:
■ Performance monitoring is disabled on the affected line modules.
■ Alarm reporting is disabled on the affected line modules.
■ New services provisioned after the mismatch alarm occurs are not written to the controller module
database until a Force Sync operation is carried out.


Uploading Debug Information


If there is a software crash, the network element stores core dump files as well as pertinent debug
information which is preserved over restart. This information is used by Infinera Technical Support for
failure analysis. You must specify the primary and secondary FTP server to which the debug information
must be uploaded. By default, the debug information is uploaded to the primary FTP server; however, if
that server is unavailable, the debug information is uploaded to the secondary FTP server. When a
compiled file transfer is initiated by the user, the node will first verify the FTP server configuration before
compiling the file. See Verifying FTP Connectivity for Debug, PM, and DB Backup on page 7-19 for
more information.
This information is uploaded in a tar.gz file format.
There are two methods to upload the debug information:
■ Automatic Upload—The basic debug information (FDR details) is uploaded automatically to the
specified FTP server. This upload is triggered 1 hour after a successful reboot of the active
controller module or a successful switchover.
■ Manual Upload—The debug information can be manually uploaded to the specified FTP server at
any time. This method can be used when any line modules or cards on the Expansion Chassis
crash. The AID of the equipment for which the detailed debug information is to be uploaded can be
specified.

Note: The DNA’s Digital Link Viewer application can be used to transfer the debug logs for all of the
controller modules and/or the BMMs, OAMs, ORMs, and Raman amplifiers on all of the nodes along
a span. In addition, the Digital Link Viewer can collect the logs for the line modules on all of the nodes
along the digital segment. See the DNA Administrator Guide for more information.

There are additional controls for the debug information that is transferred from a node controller and from
line modules on an XTC. The default setting streamlines the debug information transferred from these
modules in order to minimize the amount of time required for the FTP transfer. Alternatively, the user can
specify that full debug information is to be sent from the node controller or from XTC line modules:
■ Default—For XTC line modules (OFx, OLx, OLx2, etc.), only the most recent 1000 records are
retrieved from the DSP Field Data Recorder (FDR); for a node controller (XCM, MCM, OMM, etc.),
only limited GMPLS data is retrieved. This default mode minimizes the amount of time required for
debug file transfer from the node.
■ Complete LM DSP FDR—For retrieving debug information from the XTC line modules, the user can
specify that all DSP FDR records are to be retrieved (not just the latest 1000 records). (For GMPLS
data, the default/limited data is retrieved as with default setting.)
■ Complete GMPLS Data —For retrieving debug information from the node controller, the user can
specify that all GMPLS data is to be retrieved (i.e., topology nodes, TE links, control links,
backplane connectivity, and tributary/line payload capacity). (For XTC line module data, the default
1000 DSP FDR records are retrieved as with default setting.)


Automatic Saving of Debug Logs Before Warm Reset


For all modules containing a central processor complex (CPC), such as the BMM, OTM, OAM, FSM,
ORM, line modules, etc., when a user warm resets the module, the module will first save any debug logs
before beginning the warm reset process. Any debug logs in the module’s volatile memory are saved to
the flash memory of the controller module so that the debug information is maintained across the
module’s warm reset. Note that saving the debug logs causes a small delay between when the warm
reset is requested by the user and when the module begins the warm reset process. During this delay the
module will display the status message “Reset Card Initiated. Card will reset after pre-shutdown
diagnostics.”


Verifying FTP Connectivity for Debug, PM, and DB Backup


Before sending any compiled file (i.e., debug files, PM files, and database backups), the node must first
compile the file, a process which takes several minutes and cannot be canceled once initiated. Because
of this, when a compiled file transfer is initiated by the user, the node will first verify the FTP (or SFTP)
server configuration before compiling the file. Likewise for software downloads and database restoration
downloads, the node will perform a server connectivity check before beginning the download. Users can
also enable a connectivity check with the configured servers prior to initiating the file transfer.
The node does the following before compiling and sending the debug file, and before downloading
software or database restore files:
■ The node verifies that the primary FTP server is configured and reachable and that authentication
details for the primary FTP server can be validated.
■ If the primary FTP server is not configured and reachable, the node will check for configuration and
reachability of a secondary FTP server and that authentication details for the secondary FTP server
can be validated.
If neither server can be reached/authenticated, the node will abort the file transfer and report a transfer
failure to the user.
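
The sketch below illustrates the primary/secondary reachability and authentication check in Python using ftplib; the server addresses and credentials are placeholders, and this is a simplified model rather than the node's actual implementation.

```python
import ftplib

# Placeholder server definitions; not actual node configuration parameters.
SERVERS = [
    {"name": "primary",   "host": "ftp-primary.example.net",
     "user": "backup", "password": "secret"},
    {"name": "secondary", "host": "ftp-secondary.example.net",
     "user": "backup", "password": "secret"},
]

def first_reachable_server(servers, timeout=10):
    """Return the first server that is reachable and accepts the credentials."""
    for server in servers:
        try:
            with ftplib.FTP(server["host"], timeout=timeout) as ftp:
                ftp.login(server["user"], server["password"])
                return server["name"]
        except ftplib.all_errors:
            continue   # try the next configured server
    return None

target = first_reachable_server(SERVERS)
if target is None:
    print("Neither server reachable/authenticated: abort and report failure.")
else:
    print(f"Compile the file and transfer it to the {target} server.")
```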

Note: This pre-file compilation check is performed only for transfers that are manually initiated by a
user; it is not performed for automatic, scheduled transfers.



CHAPTER 8

IQ NOS GMPLS Control Plane


Overview

IQ NOS provides an intelligent GMPLS control plane architecture that enables automated end-to-end
management of transport capacity across the Infinera Intelligent Transport Network resulting in a rapid,
error-free service turn-up and operational simplicity. With a simple “point-and-click” approach to
provisioning, users need only identify the A and Z service endpoints, and the intelligent control plane
automatically configures the intermediate network elements to route the transport capacity, without
manual intervention.
The GMPLS control plane provides several benefits, including:
■ Rapid, real-time end-to-end service provisioning
■ Traffic engineering/bandwidth management at the digital layer
■ Multi-service support
■ Simplified service provisioning independent of network topology
■ Automatic protection capabilities
The GMPLS control plane implementation is based on two key industry standard protocols: Open
Shortest Path First - Traffic Engineering (OSPF-TE), an IP routing protocol, and Resource Reservation
Protocol - Traffic Engineering (RSVP-TE), a GMPLS signaling protocol. The OSPF-TE performs network
topology discovery and route computation. The RSVP-TE signaling protocol establishes a circuit along
the route computed by the OSPF-TE. An end-to-end circuit set up by GMPLS control plane within a
routing domain is referred to as a Subnetwork Connection (SNC).
The GMPLS control plane does the following:


■ Supports dynamically signaled SNC provisioning.


■ Allows SNC to be provisioned between any two tributary ports of the same type.
■ Allows SNCs to be provisioned from tributary end points to line side endpoints if line side
termination is enabled on the line modules (see Line-side Terminating SNCs on page 4-12).
■ Supports point-to-point, linear add/drop, hub and spoke, ring, and mesh topologies (see Network
Topologies for a description of these topologies).
■ Supports service pre-provisioning (pre-provisioned service becomes operational upon installation of
the hardware equipment).
■ Provides traffic engineering control, utilizing constraint-based source routing.

Note: Creation of digital services is not supported on XT(S)-3300.

IQ NOS also features, at the user’s option, dynamic restoration of GMPLS-provisioned SNCs for DTN
services. See Dynamic GMPLS Circuit Restoration on page 4-140 for complete details on this feature.
The system control plane is certified for GMPLS signaling domains consisting of up to 1000 network
elements (up to 333 of which can be DTN-Xs/DTNs), configured in a number of topologies, including
those utilizing multi-fiber junction sites with up to eight degrees of connectivity. Contact Infinera before
attempting to build networks that exceed this number of network elements.


OSPF-TE Routing Protocol


IQ NOS utilizes the OSPF-TE routing protocol to discover the Intelligent Transport Network topology, and
to perform route computation utilizing the Constrained Shortest Path First (CSPF) algorithm. The OSPF-
TE implementation is based on OSPF v2 (IETF RFC 2178 and RFC 3630).
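
To make constraint-based route computation concrete, the simplified sketch below (not the IQ NOS implementation) runs a least-cost search over provisioned link costs while honoring an exclusion list of nodes, in the spirit of the traffic engineering constraints described later in this chapter; all node names and costs are hypothetical.

```python
import heapq

def cspf(links, source, destination, excluded=frozenset()):
    """Least-cost path over (node, node, cost) links, skipping excluded nodes."""
    graph = {}
    for a, b, cost in links:
        graph.setdefault(a, []).append((b, cost))
        graph.setdefault(b, []).append((a, cost))   # bidirectional links

    queue = [(0, source, [source])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited or node in excluded:
            continue
        visited.add(node)
        for neighbor, link_cost in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (cost + link_cost, neighbor, path + [neighbor]))
    return None   # no route satisfies the constraints

# Example topology with hypothetical provisioned link costs.
links = [("A", "B", 10), ("B", "C", 10), ("A", "D", 10), ("D", "C", 25)]
print(cspf(links, "A", "C"))                   # (20, ['A', 'B', 'C'])
print(cspf(links, "A", "C", excluded={"B"}))   # (35, ['A', 'D', 'C'])
```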

Network Topology
IQ NOS utilizes the OSPF-TE protocol to discover the Intelligent Transport Network topology. It models
the Intelligent Transport Network topology by defining the following elements:
■ A routing node, which corresponds to a network element within the Intelligent Transport Network.
■ A control link, which corresponds to OSC control between adjacent routing nodes or network
elements. There is one bidirectional control link per fiber (or from a TL1 perspective there will be
two uni-directional control link entries per fiber).
■ A GMPLS link, which corresponds to transport capacity between adjacent DTN-X/DTN
nodes. There is one bidirectional GMPLS link per fiber (or from a TL1 perspective there will be two
uni-directional GMPLS link entries per fiber). Each GMPLS link supports up to 8Tbps (8000Gbps)
transport capacity between DTN-Xs, which maps to 16 OCGs or 16 Traffic Engineering (TE) links.
Systems with LM-80s and CMMs support TE links on each of the ten LM-80 OCH ports (ten OCH
ports per OCG), for up to 160 TE links with up to 40G capacity on each channel (with QPSK
polarization multiplexing), totaling 6.4Tbps transport capacity.
IQ NOS defines two topology maps:
■ Physical Network Topology (see Physical Network Topology on page 8-3)
■ Service Provisioning Topology (see Service Provisioning Topology on page 8-4)

Physical Network Topology


The physical network topology is defined by the topology of the OSC, which provides the communication
path for the routing and signaling protocols between network elements. The physical network topology
mirrors the physical fiber connectivity between the network elements, and thus the topology elements
include all network elements and the control links which correspond to the fiber connecting the network
elements. (See Figure 8-1: Physical Network Topology on page 8-3.)

Figure 8-1 Physical Network Topology

However, independent of the physical fiber connectivity, users can create topology partitions, where each
partition represents a continuous routing and signaling domain. The topology partitions are created by
disabling the OSPF interface. In Figure 8-2: Network with GMPLS Topology Partition on page 8-4,
Domain 1 and Domain 2 are two topology partitions created by disabling GMPLS between network
element C and network element D.

Note: SNCs spanning two topology partitions are not supported as they are operated as two separate
networks. However, the user can make use of the line-side terminating SNC capability to make
separate SNCs in the two partitioned domains to realize a single end-to-end customer circuit (see
Line-side Terminating SNCs on page 4-12).

Figure 8-2 Network with GMPLS Topology Partition

Service Provisioning Topology


The service provisioning topology is a higher layer logical topology providing users a view of topological
nodes where services can be terminated or groomed, and the associated digital links between them. In
an Intelligent Transport Network, the service provisioning topology consists of DTN-Xs, DTNs, and digital
links between them. Thus, in a service provisioning topology, all Optical Amplifiers are eliminated. Figure
8-3: Service Provisioning Topology on page 8-4 illustrates the service provisioning topology of the
physical topology shown in Figure 8-1: Physical Network Topology on page 8-3.

Figure 8-3 Service Provisioning Topology

Users can view the physical network topology, referred to as physical view, and service provisioning
topology, referred to as provisioning view, through the management applications.
In summary, physical topology represents the actual physical OTS fiber connectivity between the network
elements and the topology of the control plane traffic (e.g., OSPF-TE messages) and management plane
traffic (messages exchanged between the network element and the management application, such as
DNA), whereas the service provisioning topology represents the Traffic Engineering (TE) capacity
available to provision data plane (client) traffic through the OCGs.


Traffic Engineering
IQ NOS supports several traffic engineering parameters both at the link level and node level. This rich set
of traffic engineering parameters enables users to create networks that are utilized most efficiently.
The node and equipment level traffic engineering parameters include:
■ Inclusion List—Specifies an ordered list of nodes through which an SNC must pass. The inclusion
list is ordered and must flow from source to destination. This capability is used to constrain an SNC
to traverse certain network elements in a particular order. For example, in the network shown in
Figure 8-4: Example Network for SNC Routing on page 8-5, an SNC from node A and node C
can use either node B or node D as an intermediate node. The inclusion list can specify node B in
order to mandate a route with source as A, one of the intermediate nodes as B, and destination as
node C. This allows the traffic to be dropped at site B in the future. Optical carrier groups (OCGs)
or fiber links can also be included in the inclusion list, but the channel number of the OCG must be
specified. The inclusion list is configurable through the management applications.

Figure 8-4 Example Network for SNC Routing

■ Exclusion List—Specifies a list of nodes through which an SNC must not pass. For example, the
exclusion list can be used to avoid congested nodes. The exclusion list is not ordered and it is
configurable through the management applications. OCGs cannot be specified as part of the
exclusion list.
■ Use Installed Equipment Only—IQ NOS supports equipment pre-provisioning, where equipment
is pre-provisioned but not yet installed. This constraint enables an SNC to pass through installed
equipment only. Users can specify this through the management applications. Note this option
applies only to line modules and TEMs. BMMs must be installed on all nodes.
■ Disable Traffic Engineering Link—As described in Optical Transport Layers (ILS or ILS2) of DTN
and DTN-X System Description Guide, the DTN employs two-stage optical multiplexing where
transport capacity is added to the GMPLS link by adding OCGs (line modules/LM-80s). Using this
constraint users can disable the use of an OCG to set up dynamically signaled SNC circuits.
However, the OCG can be used to set up manual cross-connects. For example, users may want to
set aside some bandwidth for manual cross-connect provisioning. This constraint is configurable
through the management applications.
■ Switching Capacity—This parameter considers the switching/grooming capacity of the DTN. See
Bandwidth Grooming in DTN and DTN-X System Description Guide for a complete description of
the supported switching and grooming capabilities.
■ Allow Multi-hop SNC—Specifies whether the SNC may utilize multi-hop bandwidth grooming.
The GMPLS link level traffic engineering parameters include:
■ Link Cost—The cost of the GMPLS link can be provisioned through the management applications.
A route with least cost is selected. Users can use this to control how the traffic is routed.
■ Link Inclusion List—Specifies an ordered list of control links an SNC must pass through. This is
similar to the node inclusion list described earlier. For a higher degree of granularity, users may
specify specific 10Gbps channels or 2.5Gbps sub-channels for inclusion. If a channel or sub-
channel is specified, the specified link should be a GMPLS link (OCG). Otherwise, it must be a
fiber/OCG.
■ Local DLM Routing—If this option is selected, the SNC ensures that add/drop cross-connects on
the source and destination nodes utilize the same line module for tributary to line cross-connects. If
this option is not selected, no such constraints are applied.

Note: If the chassis is configured in Mesh mode (DTC-B and MTC-A only), there is no option to select
an intermediate LM when creating a cross-connect, and no option to allow an intermediate LM when
creating an SNC.

■ Link Exclusion List—Specifies a list of fibers/OCGs the SNC must not pass through. This is similar
to the node exclusion list described earlier.
■ Link Capacity—The link capacity is another parameter that is considered during route computation.
IQ NOS maintains the following information based on the hardware state and user configuration
information, which is retrievable through the management applications:
□ Maximum capacity of the link based on the installed hardware
□ Usable capacity of the link based on the hardware and software state
□ Available capacity of the link for the new service requests
Additionally, users can provision the admin weight or cost for the control link. The control link cost
denotes the desirability of the link to route control traffic and management traffic. The lower (numerically)
the cost, the more desirable the link is.
All the traffic engineering parameters described above are exchanged between the network elements as
part of the topology database updates.
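
To make the constraint semantics concrete, the following Python sketch (illustrative only, not IQ NOS code) shows how an ordered inclusion list and an unordered exclusion list might be checked against a candidate SNC route; the node names are hypothetical.

    # Illustrative sketch only (not Infinera software): how node-level inclusion/
    # exclusion constraints might be checked against a candidate SNC route.
    def route_satisfies_constraints(route, inclusion_list, exclusion_list):
        """route: ordered list of node names from source to destination."""
        # The exclusion list is unordered: no excluded node may appear anywhere.
        if any(node in exclusion_list for node in route):
            return False
        # The inclusion list is ordered: the listed nodes must appear in the same
        # relative order from source to destination.
        positions = []
        for node in inclusion_list:
            if node not in route:
                return False
            positions.append(route.index(node))
        return positions == sorted(positions)

    # Example: an SNC from A to C that must traverse B and avoid D.
    print(route_satisfies_constraints(["A", "B", "C"], ["B"], ["D"]))   # True
    print(route_satisfies_constraints(["A", "D", "C"], ["B"], ["D"]))   # False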


Constrained Shortest Path Route Computation


The OSPF-TE performs SNC route computation utilizing the CSPF (constrained shortest path first) algorithm.
The CSPF provides the following benefits:
■ Route SNCs around known bottlenecks or points of congestion in the network.
■ Provide precise control over how traffic is rerouted when the primary path is faced with single or
multiple failures.
■ Provide more efficient use of available aggregate bandwidth and long-haul fiber by ensuring that
subsets of the network do not become over-utilized while other subsets of the network along
potential alternate paths are under-utilized.
The CSPF considers all the traffic engineering parameters described in Traffic Engineering on page 8-5
while performing SNC route computation. In the presence of multiple routes, the least cost route (based
on the cost of the GMPLS link configured by the user) is selected.
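
The following Python sketch illustrates the general shape of a constrained shortest-path computation: links that violate the constraints are pruned first, and a least-cost search is then run over the remaining graph using the user-configured link cost. It is a simplified illustration under assumed data structures, not the IQ NOS implementation.

    # Minimal CSPF-style sketch: prune links that fail the constraints, then run a
    # least-cost (Dijkstra) search over what remains using the configured link cost.
    import heapq

    def cspf(links, src, dst, min_bw=0.0, excluded_nodes=frozenset()):
        """links: iterable of (node_a, node_b, cost, available_bw) tuples."""
        graph = {}
        for a, b, cost, bw in links:
            if bw < min_bw or a in excluded_nodes or b in excluded_nodes:
                continue  # constraint pruning happens before the shortest-path run
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))
        heap, seen = [(0, src, [src])], set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, link_cost in graph.get(node, []):
                if nbr not in seen:
                    heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))
        return None  # no route satisfies the constraints

    links = [("A", "B", 10, 100), ("B", "C", 10, 100), ("A", "D", 5, 100), ("D", "C", 5, 10)]
    print(cspf(links, "A", "C", min_bw=40))  # (20, ['A', 'B', 'C']): low-bandwidth path is pruned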


GMPLS Signaling (RSVP-TE)


The RSVP-TE signaling protocol is used to establish an SNC along the route computed by the OSPF-TE.
The computed route is specified as an explicit route object in the RSVP-TE signaling messages. The
SNC is established when the RSVP-TE signaling messages are exchanged successfully between all
nodes. If the SNC setup fails due to failures in the network, IQ NOS reports appropriate error messages
through the management applications and retries the SNC setup periodically until the setup is successful
or the user chooses to delete the SNC. For every retry, a new route is computed and an attempt is made to
set up the SNC along the new route computed by the OSPF-TE.
Once established, the SNC is not deleted unless the user explicitly requests its deletion.

Note: Because of the shelf controller behaviors (see Shelf Controller Behavior on page 3-2), SNC
creation, restoration, and deletion require that the chassis on which the SNC originates and the
chassis on which the SNC terminates are reachable in the network. However, existing traffic is not
impacted if the chassis becomes unreachable.
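
The setup-and-retry behavior described above can be summarized by the following sketch; compute_route() and signal_rsvp_te() are placeholder names used for illustration and do not correspond to actual IQ NOS interfaces.

    # Conceptual sketch of the setup-and-retry behaviour described above.
    import time

    def establish_snc(snc, compute_route, signal_rsvp_te, retry_interval_s=30):
        while not snc.get("deleted"):
            route = compute_route(snc)        # OSPF-TE/CSPF recomputes a route on every attempt
            if route is not None and signal_rsvp_te(snc, route):  # RSVP-TE exchange along the explicit route
                snc["route"] = route
                return True                   # the SNC stays up until the user deletes it
            time.sleep(retry_interval_s)      # report the error, then retry periodically
        return False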


Handling Fault Conditions


The GMPLS control plane monitors and detects fault conditions that impact service availability and takes
the necessary precautions. Following are some faults that are detected by the GMPLS control plane:
■ Lower-layer hardware or connectivity failures resulting in a reduction of bandwidth availability: such
fault conditions result in an OSPF-TE protocol advertisement of the new available bandwidth.
However, the SNCs which are already established are neither deleted nor rerouted. When the fault
condition is cleared, the SNCs resume their operation. (Note that restorable SNCs are rerouted in
the presence of such a fault condition.)
■ Faults, such as fiber cuts, resulting in topology partition: such fault conditions result in topology
database updates. However, the SNCs that span partitioned topologies will not provide service.
The SNC becomes operational after the fault condition is cleared.


Topology Configuration Guidelines


The OSPF V2 (RFC 2178) does not specify any guidelines for the number of routers in an area or the
best way to design an OSPF network. Users must design the OSPF networks based on their specific
application and/or constraints. Every network element in the network adds routing control traffic to OSPF
and increases the load on the CSPF computation algorithm.
Note that all Control and GMPLS links, by default, are associated with area 0.0.0.0. The area ID is not
configurable.

Control Link Configuration


The control link between adjacent network elements is enabled by:
■ Provisioning the BMMs/FRMs on each network element.
■ Creation of chassis, super channel/digital wrapper and IGCC for XT(S)-3300 configurations
■ Provisioning the OSC IP address on either side of the control link. The OSC IP address has to be
routable and unique within a routing and signaling domain. However, it can be an internal
(unregistered) IP address. Also, the subnetwork mask has to be identical on both ends of the
control link. This ensures that both ends of the control link are on the same subnet.
■ Provisioning the control link (OSPF) cost, the hello and dead intervals, and the link name as per the
desired network design. Note that the configuration of the OSPF admin cost and the link name is
optional.
■ Enabling the OSPF interface on each end of the control link.
■ Unlocking the GMPLS CC associated with the IGCC
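
As an illustration only, the provisioning data for one end of a control link might be summarized as follows; the attribute names and values are examples and do not correspond to actual management object names.

    # Illustrative control-link provisioning record; attribute names are examples only.
    control_link = {
        "near_end_osc_ip": "10.10.1.1",   # must be routable and unique in the domain
        "far_end_osc_ip": "10.10.1.2",
        "subnet_mask": "255.255.255.252", # identical on both ends of the control link
        "name": "SITE-A_to_SITE-B",       # optional
        "ospf_cost": 10,                  # optional admin cost
        "hello_interval_s": 10,
        "dead_interval_s": 40,
        "ospf_enabled": True,             # enable OSPF on each end of the link
    }
    assert control_link["dead_interval_s"] > control_link["hello_interval_s"]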

GMPLS Link Configuration


The GMPLS link includes the various configurable traffic engineering parameters as described in Traffic
Engineering on page 8-5.


Out-of-band GMPLS
Out-of-band GMPLS for OTS enables circuit provisioning in cases where in-band OSC is unavailable
(e.g., submarine applications). Out-of-band GMPLS separates the control plane traffic from data plane
traffic, thus enabling management connectivity to remote network elements so that circuit provisioning
capabilities are available even with Submarine Line Terminal Equipment (SLTE) applications.

Note: Out-of-band GMPLS for OTS is supported only by nodes running Release 6.0 or higher.

Figure 8-5: Out-of-band GMPLS Used in a Submarine Application on page 8-11 shows an example
application of Out-of-band GMPLS, in which in-band OSC is unavailable due to an SLTE configuration.

Figure 8-5 Out-of-band GMPLS Used in a Submarine Application

Out-of-band GMPLS is supported via the DCN, AUX, or CRAFT interface on the DTN-X or DTN, and is
configured through the management interfaces by first creating a GRE tunnel and then editing the OSC
properties to associate the OSC to the GRE tunnel. (Once the OSC is associated with the GRE tunnel,
the OSC cannot be associated with an IP address.) When Out-of-band GMPLS is enabled on the OSC,
all GMPLS messages will be sent out of band.

Note: The craft port on XTC chassis does not support GRE tunnels for Out-of-band GMPLS.

Note the following about the Out-of-band GMPLS feature:


■ When Out-of-band GMPLS is enabled, only GMPLS/routing information is carried out of band.
Other OSC messages, such as Automated Gain Control (AGC) messages, will continue to be
transmitted through in-band OSC.
■ Out-of-band GMPLS can be enabled per BMM direction in a node. For example, Out-of-band
GMPLS can be enabled in one direction to traverse a submarine link, but this same node can use
in-band GMPLS via the OSC in the other direction over terrestrial links.
■ If the network interface (DCN, AUX or CRAFT) carrying out-of-band GMPLS messages
experiences a failure, out-of-band GMPLS will not be available and circuits cannot be provisioned
on the nodes that are configured for out-of-band GMPLS.


■ Multiple GRE tunnels are supported over the same physical interface (i.e., DCN, AUX, or CRAFT),
but only one GRE tunnel can be associated per BMM direction.
■ For Optical Amplifiers, a GRE tunnel can be created only via the TL1 interface. For DTN-Xs and
DTNs, GRE tunnels can be created via GNM, DNA, or TL1.
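
The following sketch illustrates the order of operations and the per-BMM-direction nature of the configuration described above; the object and field names are examples only, not actual management object names.

    # Illustrative order of operations for out-of-band GMPLS; names are examples only.
    gre_tunnel = {
        "id": 1,
        "interface": "DCN",               # DCN, AUX, or CRAFT
        "local_ip": "192.0.2.10",
        "remote_ip": "192.0.2.20",        # far-end node carrying the other tunnel endpoint
    }

    osc_west = {
        "bmm_direction": "west",          # out-of-band GMPLS is enabled per BMM direction
        "gre_tunnel_id": gre_tunnel["id"],# once associated, the OSC has no IP address of its own
        "ip_address": None,
    }

    osc_east = {
        "bmm_direction": "east",          # terrestrial direction keeps in-band GMPLS via the OSC
        "gre_tunnel_id": None,
        "ip_address": "10.20.1.1",
    }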

CHAPTER 9

IQ NOS Management Plane


Overview

IQ NOS provides a highly available, reliable, and redundant management plane communications path
which connects the network operations centers (NOCs) to the physical transport network and meets the
diverse customers’ needs. The management plane includes:
■ Direct DCN (Data Communications Network) access where the NOC is connected to the network
element through a DCN network which is typically an IP-based network. The DCN is designed in
such a way that there is no single point of failure within the DCN network. (See DCN
Communication Path on page 9-2.)
■ In-band access through a Gateway Network Element (GNE) where a network element is accessed
through another network element that acts as a gateway and transports the management traffic
over the OSC control link between the network elements. (See Gateway Network Element on page
9-8.)
■ Static routing to access external networks that are not within the DCN network. (See Static Routing
on page 9-12.)
■ Telemetry access utilizing a dial-up modem which provides users remote access through the serial
port on the network element.
IQ NOS management plane supports Network Time Protocol (NTP) to provide accurate time stamping of
alarms, events and reports from the network element. (See Time-of-Day Synchronization on page 9-14.)


DCN Communication Path


As described in Management Ports, Infinera nodes provide two redundant DCN ports:
■ Auto-negotiating 10/100Mbps Ethernet RJ-45 ports on the XTC-2/XTC-2E/MTC/DTC/OTC/
XT(S)-3300, and on MTC-9/MTC-6 with IMM-B.
■ Auto-negotiating 10/100/1000Mbps Ethernet RJ-45 ports on the XTC-4/XTC-10, and on MTC-9
with IMM.

Note: DCN ports can also be configured for 100Mbps full duplex (with auto-negotiation disabled). This
configuration is performed as a part of node commissioning (see DCN Port Configuration on page 9-
5 for more information).

In redundant configurations:
■ For XTC-10, the DCN-A port is controlled by the XCM in shelf A slot 6B; DCN-B is controlled by the
XCM in shelf B slot 6B.
■ For XTC-4, the DCN-A port is controlled by the XCM in slot 5A; DCN-B is controlled by the XCM in
slot 5B.
■ For DTC/MTC, the DCN-A port is controlled by the MCM in slot 7A; DCN-B is controlled by the
MCM in slot 7B.
■ For OTC, the DCN-A port is controlled by the OMM in slot 1A; DCN-B is controlled by the OMM in
slot 1B.
■ For MTC-9/MTC-6, the DCN port is on the IMM; each IMM has a single DCN port. DCN
redundancy is achieved by installing a redundant IMM.
■ For XTC-2/XTC-2E, the DCN port is on the XCM-H; each XCM-H has a single DCN port. DCN
redundancy is achieved by installing a redundant XCM-H.
As shown in Figure 9-1: Redundant DCN Connectivity (DTN Example) on page 9-3, Ethernet cables
from each of the DCN ports must be connected to a single Ethernet switch or hub (no other physical
connectivity from the DCN port is supported).
In an environment that has redundantly equipped and serviceable controllers, it is the active controller
module that processes the DCN management traffic and that determines which DCN port is active. As
described in the following sections, port selection depends upon not only which controller module is active
but also the state of the connected DCN links. Only one DCN IP address is specified, and it is mapped by
the active controller to whichever link has been selected for operation. The DCN IP address is
configurable through the CCLI application during network element turn-up.


Figure 9-1 Redundant DCN Connectivity (DTN Example)

DCN Link Failure Recovery


In the example shown in Figure 9-1: Redundant DCN Connectivity (DTN Example) on page 9-3, when both
an active and a standby controller module are present, the active controller module processes the
management traffic received through the DCN-A port. The DCN IP address is mapped
to the MAC address of the active controller module.
When there is a failure in the link between the DCN-A port and the switch/hub, as shown in Figure 9-2:
DCN Link Failure Recovery on page 9-4, the active controller module detects the failure by monitoring
the DCN-A port link status. Upon detecting link failure, the active controller module disables its Ethernet
link to the DCN-A port and enables the Ethernet link between itself and the standby controller module.


Then the active controller module sends a gratuitous ARP request (i.e., an ARP request for the network
element's DCN IP address) through the standby controller module in order to refresh the ARP entry in the
switch so that the DCN IP address maps to the MAC address of the standby controller module. At this
point the active controller module receives the management traffic through the DCN-B port.

Figure 9-2 DCN Link Failure Recovery

Note: Link failures between the switch/hub and the DCN routers are not detected by the network
element, nor will any redundant path be provided by the network element. It is assumed that the user
will deploy routers that provide the necessary redundancy to handle such failures.
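
For illustration, a gratuitous ARP is simply an ARP request for the node's own DCN IP address, as in the following sketch written with the Scapy library (an assumption made for the example; Scapy is not part of IQ NOS).

    # Sketch only: what the gratuitous ARP described above looks like on the wire.
    from scapy.all import ARP, Ether, sendp

    def send_gratuitous_arp(dcn_ip, controller_mac, iface):
        # An ARP request for the node's own DCN IP, sourced from the controller
        # module now attached to the in-service DCN link; the switch refreshes its
        # ARP/MAC table so the DCN IP maps to this controller's MAC address.
        frame = Ether(src=controller_mac, dst="ff:ff:ff:ff:ff:ff") / ARP(
            op=1, hwsrc=controller_mac, psrc=dcn_ip, pdst=dcn_ip)
        sendp(frame, iface=iface, verbose=False)

    # Example (requires privileges): send_gratuitous_arp("192.0.2.50", "00:11:22:33:44:55", "eth0")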


Controller Module Failure Recovery


As described in DCN Link Failure Recovery on page 9-3, assume that the active controller module is
receiving the management traffic through the DCN-A port. If the active controller module fails, as shown
in Figure 9-3: Controller Module Failure Recovery on page 9-5, the standby controller module becomes
active, and sends a gratuitous ARP request in order to refresh the ARP entry in the switch so that the DCN IP
address maps to the MAC address of the now-active controller module. At this point the now-active
controller module is receiving the management traffic through the DCN-B port and is also processing the
packets.

Figure 9-3 Controller Module Failure Recovery

DCN Port Configuration


Infinera nodes support configuration of the DCN ports to 100Mbps, full-duplex operation (as examples,
the standard configuration for DCN ports on the XTC-10 is 10/100/1000Mbps, auto-negotiating operation;
and for the DTC/XT(S)-3300 it is 10/100Mbps, auto-negotiating operation). The DCN port can be
configured via the CCLI interface at the time of node commissioning to the fixed 100Mbps rate with auto-
negotiation disabled. This setting applies to the DCN port on the active node controller module and to the
DCN port on the standby node controller module, if the node has a standby controller module. This
setting is supported for DTN-X, FlexILS, DTN, XT and Optical Amplifier nodes. The DCN configuration
persists through software upgrades, software reverts, database backup/restore, node power cycles, and
control module reboots/switchovers.
Starting with IQ NOS Release 17.1, the Infinera management interfaces and CCLI support configuration of a default
route for the DCN subnet on IPv4/IPv6 DTN, DTN-X, ROADM, OLA, or XT network elements running IQ
NOS R17.1. As part of the DCN route configuration, the following are specified:
■ DCN Destination (IPv4 and IPv6): The host IP of the subnet to which the DCN packets are routed.
For default routing, the destination on IPv4 network elements should be 0.0.0.0 and the
destination on IPv6 network elements should be ::
■ DCN Subnet Mask (IPv4): For default routing, the subnet mask on IPv4 nodes should be 0.0.0.0.
■ DCN Prefix Length (IPv6): For default routing, the prefix length for the destination
network should be 0.
■ Route Cost (IPv4 and IPv6): The cost of the route; a default value can be specified for IPv4
and IPv6 network elements.
■ Route Type (IPv4 and IPv6): The route type can be defined as Local (i.e., a route for which the next
hop is the final destination) or Distributed (i.e., a route for which the next hop is not the final
destination). For IPv4 or IPv6 nodes, the Route Type can be configured as either 'Local' or
'Distributed'; the default value is 'Local'.
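
For example, default-route entries corresponding to the parameters above might look as follows; the next-hop addresses are illustrative only.

    # Example default-route entries matching the parameters above (illustrative values).
    ipv4_default_route = {
        "destination": "0.0.0.0",   # default-route destination on IPv4 network elements
        "subnet_mask": "0.0.0.0",
        "next_hop": "192.0.2.1",    # DCN gateway router (example address)
        "cost": 1,
        "route_type": "Local",      # next hop is the final destination
    }

    ipv6_default_route = {
        "destination": "::",        # default-route destination on IPv6 network elements
        "prefix_length": 0,
        "next_hop": "2001:db8::1",  # example address
        "cost": 1,
        "route_type": "Local",
    }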

Note: If the user reconfigures any of the configuration parameters in the CCLI interface, the DCN port
configuration setting will also need to be reconfigured for 100Mbps full duplex (auto-negotiation
disabled). Otherwise, if any parameters are configured in the CCLI interface and the DCN port
configuration for auto-negotiation is not set, the default setting of auto-negotiation “enabled” will be
applied.

Note: If a controller module (or both controller modules) fails in a node and is replaced, the user
needs to ensure the correct database is used to bring up the new controller module, and the CCLI
configuration will need to be performed again to provision the DCN ports with 100Mbps full duplex
(auto-negotiation disabled).

Note: When any IPv4 or IPv6 DCN related parameters (such as IP Address/Netmask/Gateway/
Destination/Prefix) are changed from the management interfaces, the standby management module
(if plugged in) undergoes a warm reboot. If the DCN is connected to the standby management
module, then the network element may be unreachable for a short duration. If any of the DCN
configuration attributes are changed to an IP address that doesn't exist in the network, the node will
accept the IP address as long as the IP address is in the correct range and is valid. The node will not
be reachable until:
■ The node is accessed via the new DCN IP address.


■ The network is changed in accordance with the node configuration. In this case, the only way
to gain access to the node is to physically connect to the node via local network or by directly
connecting a cable to the node.

The DCN cables supported depend on the type of chassis:


■ The XTC and MTC-9/MTC-6 support only RJ-45 Ethernet straight-through cables for DCN ports.
■ The MTC, DTC, and OTC support only cross-over RJ-45 Ethernet cables for DCN ports.


Gateway Network Element


IQ NOS provides Gateway Network Element (GNE) capability, similar to the one defined in the GR-253
specification, to support in-band access to the network element as opposed to DCN access. In-band
access is typically used where either DCN access is unavailable (e.g., intermediate huts where a Digital
Repeater might be installed) or when DCN bandwidth needs to be conserved.

Note: It is recommended that every signaling domain have at least two GNEs with DCN capability to
enable management traffic to find/use a redundant path if the primary DCN path fails.

Note: Configuring an Optical Amplifier as a GNE is not supported.

Additionally, IQ NOS has enhanced the GNE capability in order to support a variety of management
protocols. The enhanced GNE capability provided by IQ NOS is called Management Application Proxy,
often referred to as MAP. Hence, the MAP provides the ability to manage those network elements that
are not directly DCN addressable through the network elements that are directly DCN addressable.
The MAP supports the following functions (also see Figure 9-4: Management Application Proxy Function
on page 9-9):
■ GNE—The GNE is a network element that is directly IP addressable from the DCN. The GNE
provides management proxy services to any network element within the same routing domain as
the GNE. The GNE provides management proxy service to any management traffic received via its
DCN, OSC, or craft interfaces. The GNE can be accessed from the DCN through an IPv4 or IPv6
address.
■ Subtending Network Element (SNE)—This is a network element that does not have physical
connectivity to the DCN and is not directly IP addressable from the DCN. The SNE is capable of
providing management proxy support to any management traffic received through its craft and OSC
interfaces. The proxy functionality is optional, and can be enabled/disabled by the user. The proxy
session between the GNE and SNE is supported over IPv4 only.


Figure 9-4 Management Application Proxy Function

The MAP provides proxy services to the following protocols and enables various accessibility options as
described below:
■ HTTP Protocol—The MAP service on the GNE and SNE network elements relays HTTP protocol
messages by listening to a dedicated HTTP Proxy port 10080. This capability enables the DNA and
GNM applications to access all network elements within the purview of the GNE through the DCN
ports. Also, it enables the GNM to access all network elements within the purview of a network
element through the craft Ethernet and craft serial interfaces.
■ XML/TCP Protocol—The MAP service on the GNE and SNE network elements relays XML/TCP
protocol messages by listening to a dedicated XML/TCP Proxy port 15073. This capability enables
the DNA and GNM applications to securely access all network elements within the purview of the
GNE through the DCN ports. Also, it enables the GNM to access all network elements within the
purview of a network element through the craft Ethernet and craft serial interfaces.
■ Telnet Protocol—The MAP service on the GNE and SNE relays Telnet protocol messages by
listening to a dedicated Telnet Proxy port 10023. This capability enables the Telnet sessions to be
launched from the DNA and GNM applications to access all network elements within the purview of
the GNE through the DCN ports. Similarly, it enables the Telnet session to be launched from the
GNM to access all network elements within the purview of a network element through the craft
Ethernet and craft serial interfaces.
■ FTP Protocol—The MAP service on GNE and SNE relays FTP protocol messages by listening to a
dedicated FTP Proxy port 10021. This capability enables the communication between the FTP
client on the SNE and the DNA or external FTP Server through the GNE. The FTP client is
used to upload performance monitoring data, download software, and so on.
■ TL1 Protocol—The MAP service on GNE and SNE relays TL1 protocol messages by listening to a
dedicated TL1 Proxy port 9090. This capability enables TL1 terminal users to access all network
elements within the purview of the network element through a single connection to the GNE.
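
As an illustration, a management client reaches an SNE by connecting to the corresponding proxy port on the GNE; the sketch below opens a TL1 proxy session on port 9090. The way the target SNE is identified in the command is illustrative only; consult the TL1 documentation for the exact syntax.

    # Sketch of a management client opening a TL1 session through the GNE TL1 proxy port.
    import socket

    def open_tl1_proxy_session(gne_dcn_ip, username, password, target_tid="TARGET-SNE"):
        # Connect to the GNE's dedicated TL1 proxy port (9090, per the list above).
        with socket.create_connection((gne_dcn_ip, 9090), timeout=10) as sock:
            cmd = "ACT-USER:{}:{}:100::{};\r\n".format(target_tid, username, password)  # illustrative
            sock.sendall(cmd.encode())
            return sock.recv(4096).decode(errors="replace")

    # Example (requires a reachable GNE): open_tl1_proxy_session("192.0.2.100", "user", "pass")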


Note: There is no specific limitation on the number of SNEs that a GNE can support; instead, the
GNE is limited only by the number of proxy sessions it can support. Each GNM session to an SNE
requires one proxy XML session at the relevant GNE, and each DNA server managing an SNE
requires one proxy XML session at the relevant GNE.

The number of proxy sessions supported by the GNE depends on the type of controller module in the
Main Chassis of the GNE:
■ An XCM, IMM, or MCM-C node controller can support a maximum of 150 proxy XML sessions, 150
proxy TL1 sessions, and 10 proxy FTP sessions.
■ An MCM-B node controller can support a maximum of 60 proxy XML sessions, 60 proxy TL1
sessions, and 10 proxy FTP sessions.
■ An XTMM node controller can support a maximum of 50 proxy XML sessions, 50 proxy TL1
sessions, and 10 proxy FTP sessions.

Configuration Settings
IQ NOS provides several configuration options so that the users can design their DCN and management
communication access to meet their needs. Following are the various configuration options provided:
■ MAP Enabled—Users must set this option to enable MAP services on a network element.
■ Primary GNE IP Address—The Primary GNE IP Address should be configured on all network
elements. The Primary GNE IP Address is the router ID (also known as the GMPLS node ID) of the
GNE in the same domain as the network element being configured. If more than one GNE exists in
the same domain, it is recommended that the closest GNE to this node (in terms of hops) should be
selected as the primary GNE. The main function of the primary GNE is to provide FTP services
routing for SNEs that do not have direct DCN connectivity and for GNEs experiencing DCN
connectivity failure. FTP services include uploading historical performance monitoring data,
uploading database backups, and downloading software.
■ Secondary GNE IP Address—As with Primary GNE IP Address parameter, the Secondary GNE IP
Address is configured on all network elements. The Secondary GNE IP Address is the router ID
(also known as the GMPLS ID) of the GNE within the same domain as the GNE or SNE. The
Secondary GNE is used if the Primary GNE is unavailable. For Secondary GNE, it is recommended
to choose the GNE which:
□ Is the next closest network element in terms of number of hops from the network element
being configured.
□ Provides a completely separate path to the management station from the network element.
In other words, the inability to reach the Primary GNE should never mean that the Secondary
GNE is also unreachable and vice-versa.
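
An illustrative set of MAP-related settings for an SNE is shown below; the addresses are examples only.

    # Illustrative MAP-related settings for an SNE; addresses are examples.
    sne_map_config = {
        "map_enabled": True,
        "primary_gne_ip": "10.1.1.1",    # GMPLS router ID of the closest GNE (in hops)
        "secondary_gne_ip": "10.1.1.9",  # next closest GNE, on a diverse path to the NOC
    }
    # The two GNEs should never become unreachable for the same reason.
    assert sne_map_config["primary_gne_ip"] != sne_map_config["secondary_gne_ip"]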

Note: Provisioning a primary and secondary GNE IP address is required for a Subtending Network
Element (SNE) to enable FTP services. Additionally, it is recommended that both the primary and
secondary GNE IP address be provisioned on each GNE to ensure FTP services continue to function
in the event of an interruption of DCN service to the GNE. For each GNE, the closest alternate GNE
GMPLS Router ID should be used for the primary GNE and the next closest alternate GNE GMPLS
Router ID should be used for the secondary GNE.

Note: For DNA connectivity purposes, the DNA uses all of the GNEs in the signaling domain
(including those that are configured to be primary and secondary GNE addresses) in a round robin
manner. Because of this, the DNA may achieve connectivity with an SNE via the OSC by way of a
GNE other than the primary or secondary GNE that is configured on the SNE.


Static Routing
IQ NOS provides static routing capability. One application of static routes is to enable the network
elements to reach external networks that are not part of the DCN network. As shown in Figure 9-5: Using
Static Routing to Reach External Networks (IPv4 Examples) on page 9-12, the NTP Server may be
located in external networks, outside of the DCN network. In this scenario, users can configure the static
routes to external networks.
The destination address of static routes can be configured to an IPv6 address or an IPv4 address.

Figure 9-5 Using Static Routing to Reach External Networks (IPv4 Examples)

Another application of static routing is to enable the routing of the management traffic between two
topology partitions (see Network Topology on page 8-3). There might be a need to create topology
partitions within a single physical network. In such situations, users can still have the management
communication path between two topology partitions (created by disabling the GMPLS link) by
configuring static routes to reach network elements in other topology partitions.


The configured static routes can also be assigned a cost so that the network can be designed to select an
optimal path. Additionally, users can configure the ability to advertise static routes within the routing and
signaling domain.

Note: Users can configure the ability to advertise the IPv4 static routes within the GMPLS domain via
the OSPF protocol. This functionality is not available for IPv6 static routes, for which only local static
routes are supported.

Note: Starting IQ NOS R17.1, static routes for IPv4/IPv6 network elements can be configured to
forward traffic to the default DCN subnet. This configuration is enabled or disabled from the Black
Hole Route attribute during static route creation and is applicable only when a default DCN subnet is
configured. When Black Hole Route is enabled, traffic is discarded and not sent to the default subnet. If Black
Hole Route is disabled, traffic is forwarded to the default subnet.
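
For example, static-route entries for the scenarios above might look as follows; all addresses and costs are illustrative.

    # Illustrative static-route entries for the scenarios above (example addresses).
    static_routes = [
        {   # reach an external NTP server that sits outside the DCN network
            "destination": "198.51.100.10", "subnet_mask": "255.255.255.255",
            "next_hop": "192.0.2.1", "cost": 5,
            "advertise_in_ospf": True,     # IPv4 static routes may be advertised via OSPF
            "black_hole": False,
        },
        {   # IPv6 static route: local only, not advertised
            "destination": "2001:db8:ffff::20", "prefix_length": 128,
            "next_hop": "2001:db8::1", "cost": 5,
            "advertise_in_ospf": False,
            "black_hole": False,
        },
    ]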


Time-of-Day Synchronization
IQ NOS provides accurate and synchronized timestamps on events and alarms, ensuring proper ordering
of alarms and events at both the network element and network levels. The synchronized time stamp
eases the network-level debugging and eliminates the inaccuracies caused by the manual configuration
of system time on each network element. Additionally, the time stamp complies with the Coordinated
Universal Time (UTC) format defined in ISO 8601 and includes granularity down to seconds.
IQ NOS supports Time-of-Day Synchronization by implementing a Network Time Protocol (NTP) client,
which ensures that the IQ NOS system time is synchronized with the specified NTP Server operating in the
customer network, and therefore with UTC. IQ NOS also implements an NTP Server, so that one
network element may act as an NTP Server to the other network elements that do not have access to the
external NTP Server. As shown in Figure 9-6: NTP Server Configuration on page 9-14, typically a GNE
is configured to synchronize to an external NTP Server in the customer network and the SNEs are
configured to synchronize to the GNE.
In order to support NTP server redundancy, a node can be configured with up to three NTP servers (with
IPv6 or IPv4 addresses). When multiple NTP instances are configured on the node, the node determines
which of these instances to use as the active source of timing, based on the NTP selection and clustering
algorithms. If the active NTP instance experiences a fault, the node ensures that another of the
configured NTP servers is available as a timing source. It is recommended for Subtending Network
Elements (SNEs) to use the Gateway Network Elements (GNE) as the primary NTP server.

Note: If the DCN ports are inaccessible, another route is selected: first via the gateway NE (primary, then
secondary), then via the in-band OSC, and lastly via a static route (which can take approximately 20
minutes to take effect).
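
For illustration, the following sketch shows an example NTP server list for a GNE and an SNE, together with a simplified stand-in for the server-selection behavior described above; the real NTP selection and clustering algorithms are more involved, and the addresses are examples only.

    # Illustrative NTP configuration for a GNE and an SNE (example addresses).
    gne_ntp_servers = ["203.0.113.5", "203.0.113.6", "10.1.1.9"]   # external servers first
    sne_ntp_servers = ["10.1.1.1", "10.1.1.9", "203.0.113.5"]      # primary GNE listed first

    def pick_active_source(servers, reachable):
        # Stand-in for the NTP selection/clustering algorithms: fall back to the
        # next configured server when the active source is faulted.
        for server in servers:
            if reachable(server):
                return server
        return None  # free-run on the local 23ppm clock until a server is reachable again

    print(pick_active_source(sne_ntp_servers, lambda ip: ip != "10.1.1.1"))  # '10.1.1.9'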

Figure 9-6 NTP Server Configuration

The Infinera nodes also provide a local clock with an accuracy of 23ppm, or about a minute per month. If
the GNE (with NTP enabled) fails to access the external NTP Server, IQ NOS NTP (Client and Server)
uses the local clock as a time reference. When the connectivity to the external NTP Server is restored, IQ
NOS NTP Client and Server on the GNE re-synchronizes with the external NTP Server, and the new
synchronized time is propagated to all the network elements within the routing domain.


Following are some recommendations for configuring the NTP Server within an Intelligent Transport
Network:
■ Configure an external NTP Server with Stratum Level 4 or higher for each routing domain of an
Intelligent Transport Network.
■ Configure the GNE network element to point to the external NTP Server.
■ Configure the SNEs to point to the GNE as the NTP Server.
The active controller module on the Main Chassis synchronizes to the external NTP server, and itself acts
as a time server for the rest of the modules on the Main Chassis and Expansion Chassis. All of the
modules on the Main Chassis and Expansion Chassis (including the standby controller modules in
redundant controller configurations) synchronize their time settings with the time of day on the active
controller module on the Main Chassis. For multi-chassis systems, if the inter-chassis communication
links fail between the Main Chassis and Expansion Chassis, the modules on the Expansion Chassis
derive the time from the local clocks on the modules themselves.
Date and time change commands apply only to the active controller module on the Main Chassis. Once
the request is successfully applied, the remaining circuit packs on the Expansion Chassis synchronize
with the changed time automatically.
The standby controller synchronizes its internal time-of-day clock to the active controller’s clock using
NTP. If a controller switch occurs, the standby controller automatically becomes the NTP Server, as part
of the transition from standby to active, without the need for a reboot.
The active controller module on the Main Chassis synchronizes to the external NTP server using a “back-
off” algorithm to send consecutive requests to the external NTP server, so that if the controller module
compares its time to the NTP Server’s time and finds that the two times are in sync, the controller module
will wait for a longer period of time before synchronizing to the NTP Server the next time. This means that
the time between consecutive requests may be as high as 512 seconds (~9 minutes).
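
The following sketch illustrates the back-off idea; the doubling rule and the 64-second starting interval are assumptions made for the example, while the 512-second cap comes from the text above.

    # Sketch of an exponential "back-off" polling rule; bounds shown are illustrative.
    def next_poll_interval(current_s, in_sync, min_s=64, max_s=512):
        if in_sync:
            return min(current_s * 2, max_s)   # times agree: wait longer before the next request
        return min_s                           # times drifted: poll more frequently again

    interval = 64
    for agree in (True, True, True, True):
        interval = next_poll_interval(interval, agree)
    print(interval)  # 512 seconds (~9 minutes) is the longest gap between requests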

Note: When changing the time on the active controller module, it may take up to a minute for the
modules on the node to sync up to the new time. Standby controller modules have an additional soak
period before changing their time to match the new time. If the system switches to the standby
controller module during this soak period, all of the modules will re-sync their time to match the now-
active controller module that is still using the previous time setting.

NTP Authentication
Starting with Release 19.0, IQ NOS supports authentication of the NTP servers to prevent tampering with the
timestamps logged by Infinera devices. The NTP server and client (Infinera network element) are
configured with a common trusted key. The key is installed on the client (Infinera network
element) using a key identifier with a value ranging from 1 to 65534; the key also specifies the type of
algorithm (MD5/SHA1) and a password. During the request and response, the server calculates the hash
values of the packets using the algorithm specified for the key and the NTP packet content, and fills the hash
values into the packet authentication information. The client then verifies whether the packets were sent by a trusted
NTP source or have been modified, based on the authentication information. Authentication is successful if the key
identifier, the type of algorithm, and the password match the server configuration. IQ NOS supports NTP
authentication for up to three configured server IP addresses.
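
For illustration, symmetric-key NTP authentication is commonly implemented by computing a digest over the shared key followed by the NTP packet, as sketched below; this is a generic example of that convention, not the exact IQ NOS implementation.

    # Sketch of symmetric-key NTP packet authentication (digest over key + packet).
    import hashlib

    TRUSTED_KEYS = {42: ("MD5", b"example-shared-secret")}   # key identifier in the range 1-65534

    def compute_mac(key_id, ntp_packet):
        algo, secret = TRUSTED_KEYS[key_id]
        digest = hashlib.md5 if algo == "MD5" else hashlib.sha1
        return digest(secret + ntp_packet).digest()

    def verify(key_id, ntp_packet, received_mac):
        # Authentication succeeds only when key ID, algorithm, and password all match.
        return key_id in TRUSTED_KEYS and compute_mac(key_id, ntp_packet) == received_mac

    packet = b"\x23" + b"\x00" * 47                     # placeholder 48-byte NTP header
    mac = compute_mac(42, packet)
    print(verify(42, packet, mac), verify(42, packet, b"tampered"))  # True False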

Note: Prior to Release 19.0, the default value for IP address of NTP Servers was 0.0.0.0. However,
starting Release 19.0, the default value for IP address of NTP Server1 will be 0.0.0.1, NTP Server2
will be 0.0.0.2, NTP Server3 will be 0.0.0.3. In case of IPv6 being selected, the default IP address of
NTP Server1 will be 0100::1, NTP Server2 will be 0100::2, NTP Server3 will be 0100::3. During
upgrade to release 19.0, the previous default of 0.0.0.0 is auto migrated to the new default values.

APPENDIX A:

DTN-X Service Capabilities

This appendix lists the service provisioning and diagnostic capabilities for each service type supported by
the DTN-X:
■ 100GbE TIM/TIM2/MXP/LIM Services on page A-2
■ 100G OTN TIM/TIM2s/MXP/LIM Services on page A-6
■ 40G TIM Services on page A-10
■ 10G TIM/TIM2/MXP, SONET, SDH, and Ethernet Services on page A-13
■ 10G TIM Services (10GCC, 10.3GCC, and cDTF) on page A-16
■ 10G TIM/TIM2/MXP OTN Services on page A-20
■ Sub-10G TIM Services on page A-24
■ Packet Services on page A-27
In addition, this appendix provides the adaptation capabilities supported by the DTN-X:
■ DTN-X Adaptation Services on page A-29

Note: If Latency measurement is enabled in a direction (Terminal/Facility), PRBS (Generation/
Monitoring) is not allowed in the same direction. The same applies vice versa.


100GbE TIM/TIM2/MXP/LIM Services


The following table shows the service provisioning and diagnostic capabilities for 100GbE services
supported by 100G TIMs/TIM2s, LIMs and MXP on the DTN-X.

Note: Support is the same for all XTC chassis types, except where noted.

Table A-1 Provisioning, Protection, and Diagnostic Support for 100GbE Services on the DTN-X
Service Type
100GbE 100GbE 100GbE
(ODU4i) (ODU4) (ODU2i-10v)
Supporting Chassis XTC-10 XTC-10 XTC-10
Types XTC-4 XTC-4 XTC-4
XTC-2 XTC-2
XTC-2E XTC-2E
Supporting TIMs/ TIM-1-100GE TIM-1-100GX TIM-1-100GE
LIMs/MXP TIM-1B-100GE TIM-1-100GM TIM-1B-100GE
TIM-1-100GE-Q TIM2-2-100GM TIM-1-100GE-Q
TIM-1-100GX TIM2-2-100GX LIM-1-100GE
TIM-1-100GM MXP-400
LIM-1-100GE
TIM2-2-100GM ■ TIM-1-100GM is
TIM2-2-100GX
supported on
XTC-10 or XTC-4
■ TIM-1-100GM is only
supported on
XTC-10 or XTC-4 ■ TIM2-2-100GM and
TIM2-2-100GX are
only
supported on
■ TIM2-2-100GM and XTC-10 or XTC-4
TIM2-2-100GX are
supported on ■ MXP-400 is
supported on
XTC-10 or XTC-4
XTC-2 / XTC-2E
only

Mapping GMPi Standard G.709 adaptation GMPi


ODUk ODU4i ODU4 ODU2i-10v


Table A-1 Provisioning, Protection, and Diagnostic Support for 100GbE Services on the DTN-X
(continued)
Service Type
100GbE 100GbE 100GbE
(ODU4i) (ODU4) (ODU2i-10v)
Tributary slots used 1, 80 80 Ten groups of 8.
2, 3, 4 (Line-side ODU3i+ not (Line-side ODU3i+ not
applicable for 100GbE) applicable for ODU4) Note: For GMPLS
SNCs, all TS must be
on the same line
module. For manual
cross-connects, the
groups can be on
different line modules.

GMPLS Restoration Yes Yes Yes

Note: VCAT will restore


as VCAT only and non-
VCAT will restore as
Non-VCAT only.

Support Line-side Yes Yes No


Terminating SNC

1
HO ODU4i uses 80 tributary slots
HO ODU3i+ uses 40 tributary slots
HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90,
ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200,
ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
2
OTU3i+ is not supported for XTC-2/XTC-2E.
3
4
3QAM is not supported on XTC-2/XTC-2E.


Table A-1 Provisioning, Protection, and Diagnostic Support for 100GbE Services on the DTN-X
(continued)
Service Type
100GbE 100GbE 100GbE
(ODU4i) (ODU4) (ODU2i-10v)
1 Port D-SNCP (either Yes Yes Yes
with SNCs or cross- (Supported on TIM-1-100GE, (Supported on TIM-1-100GM,
connects) TIM-1-100GE-Q, TIM-1-100GX, Note: Both working and
TIM-1-100GM, TIM-1-100GX, TIM2-2-100GM and protection paths need to
TIM-1B-100GE, TIM2-2-100GX) be either VCAT or non-
TIM2-2-100GM, VCAT.
TIM2-2-100GX and
LIM-1-100GE only)
Note: 1 Port D-SCNP
on 100GbE
(ODU2i-10v) services
over a TIM-1-100GM/
TIM-1-100GX are not
supported on XTC-2
and XTC-2E

2 Port D-SNCP (either Yes Yes Yes


with SNCs or cross- (Supported on TIM-1-100GE, (Supported on TIM-1-100GM,
connects) TIM-1-100GE-Q, TIM-1-100GX, Note: Both working and
TIM-1-100GM, TIM-1-100GX, TIM2-2-100GM , protection paths need to
TIM-1B-100GE, TIM2-2-100GX and MXP-400 be either VCAT or non-
LIM-1-100GE, TIM2-2-100GM only) VCAT.
and TIM2-2-100GX only)
Note: 2 Port D-SCNP
on 100GbE
(ODU2i-10v) services
over a TIM-1-100GM/
TIM-1-100GX are not
supported on XTC-2
and XTC-2E

Line Side Protection Yes Yes Yes


group (either with (Supported only on
SNCs or cross- TIM-1-100GE/
connects) TIM-1B-100GE/
TIM-1-100GE-Q via manual
Note: Not cross-connects with a single
supported on OCG)
XTC-2/XTC-2E.


Table A-1 Provisioning, Protection, and Diagnostic Support for 100GbE Services on the DTN-X
(continued)
Service Type
100GbE 100GbE 100GbE
(ODU4i) (ODU4) (ODU2i-10v)
FastSMP Protection Yes No No
(XTC-10 and XTC-4 only)
Note: Not
supported on Note: Not supported on
XTC-2/XTC-2E. TIM2-2-100GM and
TIM2-2-100GX

Latency Measurement No No No
(Yes - Applicable for
TIM2-2-100GM/GX)
CTP PRBS IEEE 802.3 82.2.17 Only IEEE 802.3 82.2.17 Only IEEE 802.3 82.2.17 Only
Generation and generation, no monitoring generation, no monitoring generation, no monitoring
Monitoring Towards
the Client Interface
CTP PRBS IEEE 802.3 82.2.17 Only IEEE 802.3 82.2.17 Only IEEE 802.3 82.2.17 Only
Generation and generation, no monitoring generation, no monitoring generation, no monitoring
Monitoring Towards
the Network
ODUk Wrapper PRBS Not available PRBS-31 Not available
Generation and
Monitoring Towards
the Network
Loopbacks Supported Facility and Terminal Facility and Terminal Facility and Terminal
by Client CTP Object
Loopbacks Supported Facility Facility Facility
by ODUk Object
(Provided at OXM for
TIMs and at TIM2 for
TIM2-2-100GM and
TIM2-2-100GX)


100G OTN TIM/TIM2s/MXP/LIM Services


The following table shows the service provisioning and diagnostic capabilities for OTN services supported
by 100G TIMs/TIM2s/MXP and LIMs on the DTN-X.

Note: Support is the same for all XTC chassis types, except where noted.

Table A-2 Provisioning, Protection, and Diagnostic Support for 100G OTN Services on the DTN-X
Service Type
ODU4 Switching Transparent ODU2 inside a ODU2e inside ODU0 inside a ODU1 inside a
Service OTU4 w/o channelized a channelized channelized channelized
FEC OTU4 OTU4 OTU4 OTU4
(ODU (ODU (ODU (ODU
Multiplexing) Multiplexing) Multiplexing) Multiplexing)
Supporting XTC-10 XTC-10 XTC-10 XTC-10 XTC-10 XTC-10
Chassis Types XTC-4 XTC-4 XTC-4 XTC-4 XTC-4 XTC-4
XTC-2 XTC-2 XTC-2 XTC-2 XTC-2
XTC-2E XTC-2E XTC-2E XTC-2E XTC-2E
Supporting TIMs/ TIM-1-100G TIM-1-100G TIM-1-100GX TIM-1-100GX TIM-1-100GX TIM-1-100GX
LIMs/MXP TIM-1-100GM TIM-1-100GM LIM-1-100GX LIM-1-100GX LIM-1-100GX LIM-1-100GX
TIM-1-100GX TIM-1-100GX TIM2-2-100GX TIM2-2-100GX TIM2-2-100GX TIM2-2-100GX
LIM-1-100GX LIM-1-100GX
LIM-1-100GM LIM-1-100GM
TIM2-2-100GM
TIM2-2-100GX
MXP-400
■ TIM2-2-100GM
and
TIM2-2-100GX
are supported
on XTC-10
and XTC-4
■ TIM-1-100G,
TIM-1-100GM,
and
LIM-1-100GM
are supported
on XTC-10 or
XTC-4 only
■ MXP-400 is
supported on
XTC-2/
XTC-2E only

Mapping Standard G.709 GMPi Standard G. Standard G. Standard G. Standard G.


adaptation 709 adaptation 709 adaptation 709 adaptation 709 adaptation


Table A-2 Provisioning, Protection, and Diagnostic Support for 100G OTN Services on the DTN-X (continued)
Service Type
ODU4 Switching Transparent ODU2 inside a ODU2e inside ODU0 inside a ODU1 inside a
Service OTU4 w/o channelized a channelized channelized channelized
FEC OTU4 OTU4 OTU4 OTU4
(ODU (ODU (ODU (ODU
Multiplexing) Multiplexing) Multiplexing) Multiplexing)
ODUk ODU4 ODU2i-10v ODU2 ODU2e ODU0 ODU1
Tributary slots 80 Ten groups of 8 8 1 2
used 5, 6, 7, 8 (Line-side ODU3i+ not 8.
applicable for ODU4)
Note: For
GMPLS
SNCs, all
TS must
be on the
same line
module.
For
manual
cross-
connects,
the
groups
can be
on
different
line
modules.

GMPLS Yes No Yes Yes Yes Yes


Restoration (Not supported (Not supported (Not supported (Not supported
for for for for
TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX)
Support Line-side Yes No Yes Yes Yes Yes
Terminating SNC

5
HO ODU4i uses 80 tributary slots
HO ODU3i+ uses 40 tributary slots
HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90,
ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200,
ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
6
OTU3i+ is not supported for XTC-2/XTC-2E.
7
8
3QAM is not supported on XTC-2/XTC-2E.


Table A-2 Provisioning, Protection, and Diagnostic Support for 100G OTN Services on the DTN-X (continued)
Service Type
ODU4 Switching Transparent ODU2 inside a ODU2e inside ODU0 inside a ODU1 inside a
Service OTU4 w/o channelized a channelized channelized channelized
FEC OTU4 OTU4 OTU4 OTU4
(ODU (ODU (ODU (ODU
Multiplexing) Multiplexing) Multiplexing) Multiplexing)
1 Port D-SNCP Yes No Yes Yes Yes Yes
(either with SNCs (Not supported (Not supported (Not supported (Not supported
or cross-connects) for for for for
TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX)
Note: Not
supported on
TIM-1-100GE-
Q.

2 Port D-SNCP Yes No No No No No


(either with SNCs (Supported on
or cross-connects) MXP-400)

Note: Not
supported on
TIM-1-100GE-
Q.

Line Side Yes No Yes Yes No No


Protection group
(either with SNCs
or cross-connects)

Note: Not
supported on
XTC-2/
XTC-2E.

FastSMP Yes No Yes Yes No Yes


Protection (Not supported (Not supported (Not supported (Not supported
for for for for
Note: Not TIM2-2-100GM/ TIM2-2-100GM/ TIM2-2-100GM/ TIM2-2-100GM/
supported on GX) GX) GX) GX)
XTC-2/
XTC-2E.

Latency No No No No No No
Measurement (Applicable for (Applicable for (Applicable for (Applicable for (Applicable for
TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX) TIM2-2-100GX)


Table A-2 Provisioning, Protection, and Diagnostic Support for 100G OTN Services on the DTN-X (continued)
Service Type
ODU4 Switching Transparent ODU2 inside a ODU2e inside ODU0 inside a ODU1 inside a
Service OTU4 w/o channelized a channelized channelized channelized
FEC OTU4 OTU4 OTU4 OTU4
(ODU (ODU (ODU (ODU
Multiplexing) Multiplexing) Multiplexing) Multiplexing)
CTP PRBS PRBS-31 Not available ODUk: ODUk: ODUk: ODUk:
Generation and Both generation and PRBS-31 PRBS-31 PRBS-31 PRBS-31
Monitoring monitoring supported ODUj: ODUj: ODUj: ODUj:
Towards the Client for MXP-400 PRBS-31 PRBS-31 PRBS-31 PRBS-31
Interface (inverted) (inverted) (inverted) (inverted)
CTP PRBS PRBS-31 Not available ODUj: ODUj: ODUj: ODUj:
Generation and Both generation and PRBS-31 PRBS-31 PRBS-31 PRBS-31
Monitoring monitoring supported (inverted) (inverted) (inverted) (inverted)
Towards the for MXP-400
Network
ODUk Wrapper Not applicable Not available Not applicable Not applicable Not applicable Not applicable
PRBS Generation
and Monitoring
Towards the
Network
Loopbacks Facility and Terminal Facility and Facility Facility Facility Facility
Supported by Terminal
Client CTP Object
Loopbacks Facility Facility Facility Facility Facility Facility
Supported by loopback at loopback at loopback at loopback at
ODUk Object both ODUj and both ODUj and both ODUj and both ODUj and
(Provided at OXM ODUk ODUk ODUk ODUk
for TIMs and at
TIM2s for
TIM2-2-100GM
and
TIM2-2-100GX)


40G TIM Services


The following table shows the service provisioning and diagnostic capabilities for services supported by
40G TIMs on the DTN-X.

Note: 40G services are not supported on the XTC-2/XTC-2E.

Table A-3 Provisioning, Protection, and Diagnostic Support for 40G Services on the DTN-X
Service Type
40GbE 40GbE (ODU2i-4v) ODU3 ODU3e1 ODU3e2 OC-768/
(ODU3i) switching switching switching STM-256
service service service
Supporting XTC-10 XTC-10 XTC-10 XTC-10 XTC-10 XTC-10
Chassis Types XTC-4 XTC-4 XTC-4 XTC-4 XTC-4 XTC-4
Supporting TIMs TIM-1-40GE TIM-1-40GE TIM-1-40G TIM-1-40G TIM-1-40G TIM-1-40GM
Mapping GMPi GMPi Standard G. G.Sup43 G.Sup43 AMP, BMP
709
adaptation
ODUk ODU3i ODU2i-4v ODU3 ODU3e1 ODU3e2 ODU3
Tributary slots 32 Four groups of 8 31 32 32 31
used 9, 10, 11, 12

Note: All TS
must be on the
same line
module).

9
HO ODU4i uses 80 tributary slots
HO ODU3i+ uses 40 tributary slots
HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90,
ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200,
ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
10
OTU3i+ is not supported for XTC-2/XTC-2E.
11
12
3QAM is not supported on XTC-2/XTC-2E.


Table A-3 Provisioning, Protection, and Diagnostic Support for 40G Services on the DTN-X (continued)
Service Type
40GbE 40GbE (ODU2i-4v) ODU3 ODU3e1 ODU3e2 OC-768/
(ODU3i) switching switching switching STM-256
service service service
GMPLS Yes Yes Yes Yes Yes Yes
Restoration
Note: VCAT
will restore as
VCAT only
and non-VCAT
will restore as
non-VCAT
only.

Support Line-side Yes No Yes Yes Yes Yes


Terminating SNC
1 Port D-SNCP Yes Yes Yes Yes Yes No
(either with SNCs
or cross-connects) Note: Both
working and
protection
paths need to
be either
VCAT or non-
VCAT.

2 Port D-SNCP Yes Yes Yes Yes Yes No


(either with SNCs
or cross-connects) Note: Both
working and
protection
paths need to
be either
VCAT or non-
VCAT.

Line side No No No No No No
Protection group
(either with SNCs
or cross-connects)
FastSMP Yes No No No No Yes
Protection
Latency No No No No No No
Measurement


Table A-3 Provisioning, Protection, and Diagnostic Support for 40G Services on the DTN-X (continued)
Service Type
40GbE 40GbE (ODU2i-4v) ODU3 ODU3e1 ODU3e2 OC-768/
(ODU3i) switching switching switching STM-256
service service service
CTP PRBS IEEE 802.3 IEEE 802.3 82.2.17 PRBS-31 PRBS-31 PRBS-31 Framed
Generation and 82.2.17 Only Only generation, PRBS-31 in
Monitoring generation, no no monitoring the SDH
Towards the monitoring Payload
Client Interface
CTP PRBS IEEE 802.3 IEEE 802.3 82.2.17 PRBS-31 PRBS-31 PRBS-31 Framed
Generation and 82.2.17 Only Only generation, PRBS-31 in
Monitoring generation, no no monitoring the SDH
Towards the monitoring Payload
Network
ODUk Wrapper Not available Not available Not Not Not PRBS-31
PRBS Generation applicable applicable applicable (inverted)
and Monitoring
Towards the
Network
Loopbacks Facility and Facility and Facility and Facility and Facility and Facility and
Supported by Terminal Terminal Terminal Terminal Terminal Terminal
Client CTP Object
Loopbacks Facility Facility Facility Facility Facility Facility
Supported by
ODUk Object
(Provided at OXM)


10G TIM/TIM2/MXP, SONET, SDH, and Ethernet Services


The following table shows the service provisioning and diagnostic capabilities for OC-192, STM-64,
10GbE LAN, and 10GbE WAN services supported by 10G TIMs/TIM2s/MXP, SONET, SDH, and Ethernet
Services on the DTN-X.

Note: Support is the same for all XTC chassis types, except where noted.

Table A-4 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (SONET/SDH and 10GbE LAN/WAN)
Service types: OC-192/STM-64, 10GbE WAN, 10GbE LAN (ODU2e), 10GbE LAN (ODU1e)

Supporting TIMs, MXP:
  OC-192/STM-64 and 10GbE WAN: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX, MXP-400 (TIM2-18-10GM and TIM2-18-10GX are supported on XTC-10 and XTC-4; MXP-400 is supported on XTC-2/XTC-2E only)
  10GbE LAN (ODU2e): TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX (TIM2-18-10GM and TIM2-18-10GX are supported on XTC-10 and XTC-4); MXP-400 is supported on XTC-2/XTC-2E only
  10GbE LAN (ODU1e): TIM-5-10GM, TIM-5-10GX
Supporting Chassis Types: XTC-10, XTC-4, XTC-2, XTC-2E for all service types.
Mapping: AMP, BMP for OC-192/STM-64 and 10GbE WAN; 16FS+BMP for 10GbE LAN (ODU2e); BMP for 10GbE LAN (ODU1e).
ODUk: ODU2 for OC-192/STM-64 and 10GbE WAN; ODU2e for 10GbE LAN (ODU2e); ODU1e for 10GbE LAN (ODU1e).
Tributary slots used (see footnotes 13-16): 8 for all service types.
GMPLS Restoration: Yes for all service types.
Support Line-side Terminating SNC: Yes for all service types.
1 Port D-SNCP (either with SNCs or cross-connects): Yes for all service types.
2 Port D-SNCP (either with SNCs or cross-connects): Yes for all service types (also supported on MXP-400).
Line side Protection group (either with SNCs or cross-connects): Yes (XTC-10 and XTC-4 only) for all service types. Note: Not supported on XTC-2/XTC-2E; hence no MXP-400 support.
FastSMP Protection: Yes (XTC-10 and XTC-4 only) for all service types. Note: Not supported on XTC-2/XTC-2E. Note: Not supported on TIM2-18-10GM and TIM2-18-10GX.
Latency Measurement: Yes (XTC-10 and XTC-4 only) for all service types. Note: Not supported on XTC-2/XTC-2E.
CTP PRBS Generation and Monitoring Towards the Client Interface: Not available for OC-192/STM-64 and 10GbE WAN; IEEE Test Pattern for 10GbE LAN (ODU2e) and 10GbE LAN (ODU1e). Note: Not available for TIM2-18-10GM and TIM2-18-10GX.
CTP PRBS Generation and Monitoring Towards the Network: Not available for OC-192/STM-64 and 10GbE WAN; IEEE Test Pattern for 10GbE LAN (ODU2e) and 10GbE LAN (ODU1e). Both generation and monitoring are supported for MXP-400. Note: Not supported on TIM2-18-10GM and TIM2-18-10GX.
ODUk Wrapper PRBS Generation and Monitoring Towards the Network: PRBS-31 (inverted) for all service types.
Loopbacks Supported by Client CTP Object: Facility and Terminal for all service types.
Loopbacks Supported by ODUk Object (Provided at OXM for TIMs and at TIM2 for TIM2-18-10GM and TIM2-18-10GX): Facility for all service types.

13. HO ODU4i uses 80 tributary slots. HO ODU3i+ uses 40 tributary slots. HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
14. OTU3i+ is not supported for XTC-2/XTC-2E.
15.
16. 3QAM is not supported on XTC-2/XTC-2E.
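
The tributary-slot footnotes above (repeated under the tables that follow) give the capacity of each high-order container and, together with the per-service slot counts, determine how many clients fit on a line side. A minimal Python sketch of that lookup is shown below; the helper name is illustrative, and the figures are taken directly from the footnotes and the tables.

# Tributary-slot capacity of the high-order containers, as listed in the
# footnotes to Table A-4 (including the reduced-capacity ODUCni variants).
HO_TRIB_SLOTS = {
    "ODU3i+": 40,
    "ODU4i": 80,
    "ODUC1i-15": 60, "ODUC1i": 80,
    "ODUC2i-22.5": 90, "ODUC2i-30": 120, "ODUC2i-37.5": 150, "ODUC2i": 160,
    "ODUC3i-45": 180, "ODUC3i-50": 200, "ODUC3i-52.5": 210, "ODUC3i": 240,
    "ODUC4i-67.5": 270, "ODUC4i-75": 300,
}

def max_clients(ho_odu: str, client_slots: int) -> int:
    """Return how many clients of 'client_slots' tributary slots fit in the
    given high-order ODU (ignoring any per-TIM or per-chassis restrictions
    called out in the tables)."""
    return HO_TRIB_SLOTS[ho_odu] // client_slots

# Example: ODU2/ODU2e clients occupy 8 tributary slots (Table A-4), so an
# ODU4i high-order container can carry at most 80 // 8 = 10 of them.
print(max_clients("ODU4i", 8))    # -> 10
print(max_clients("ODU3i+", 8))   # -> 5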


10G TIM Services (10GCC, 10.3GCC, and cDTF)


The following table shows the service provisioning and diagnostic capabilities for 10G Clear Channel,
10.3G Clear Channel, and cDTF services supported by 10G TIMs on the DTN-X.

Note: Support is the same for all XTC chassis types, except where noted.

Table A-5 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (10GCC, and cDTF)
Service types: 10G Clear Channel (ODU2), 10.3G Clear Channel (ODU2e), 10.3G Clear Channel (ODU1e), 10.3G Clear Channel (ODU2i), cDTF

Supporting TIMs: TIM-5-10GM, TIM-5-10GX for all service types.
Supporting Chassis Types: XTC-10, XTC-4, XTC-2, XTC-2E for all service types.
Mapping: AMP, BMP for 10G Clear Channel (ODU2); 16FS+BMP for 10.3G Clear Channel (ODU2e); BMP for 10.3G Clear Channel (ODU1e); GMPi for 10.3G Clear Channel (ODU2i) and cDTF.
ODUk: ODU2 for 10G Clear Channel (ODU2); ODU2e for 10.3G Clear Channel (ODU2e); ODU1e for 10.3G Clear Channel (ODU1e); ODU2i for 10.3G Clear Channel (ODU2i); ODUFlexi for cDTF.
Tributary slots used (see footnotes 17-20): 8 for 10G Clear Channel (ODU2), 10.3G Clear Channel (ODU2e), and 10.3G Clear Channel (ODU1e); 9 for 10.3G Clear Channel (ODU2i) and cDTF.
GMPLS Restoration: Yes for all service types.
Support Line-side Terminating SNC: Yes for all service types.
1 Port D-SNCP (either with SNCs or cross-connects): Yes for all service types.
2 Port D-SNCP (either with SNCs or cross-connects): Yes for all service types.
Line side Protection group (either with SNCs or cross-connects): Yes (XTC-10 and XTC-4 only) for all service types except cDTF, which is No. Note: Not supported on XTC-2/XTC-2E.
FastSMP Protection: Yes (XTC-10 and XTC-4 only) for all service types. Note: Not supported on XTC-2/XTC-2E.
Latency Measurement: Yes (XTC-10 and XTC-4 only) for all service types. Note: Not supported on XTC-2/XTC-2E.
CTP PRBS Generation and Monitoring Towards the Client Interface: Not available for all service types.
CTP PRBS Generation and Monitoring Towards the Network: Not available for all service types.
ODUk Wrapper PRBS Generation and Monitoring Towards the Network: PRBS-31 (inverted) for 10G Clear Channel (ODU2), 10.3G Clear Channel (ODU2e), and 10.3G Clear Channel (ODU1e); Not available for 10.3G Clear Channel (ODU2i) and cDTF.
Loopbacks Supported by Client CTP Object: Facility and Terminal for all service types.
Loopbacks Supported by ODUk Object (Provided at OXM): Facility for all service types.

17. HO ODU4i uses 80 tributary slots. HO ODU3i+ uses 40 tributary slots. HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
18. OTU3i+ is not supported for XTC-2/XTC-2E.
19.
20. 3QAM is not supported on XTC-2/XTC-2E.


10G TIM Fibre Channel Services


The following table shows the service provisioning and diagnostic capabilities for 10G and 8G Fibre
Channel services supported by 10G TIMs on the DTN-X.

Note: Support is the same for all XTC chassis types, except where noted.

Table A-6 Provisioning, Protection, and Diagnostic Support for Fibre Channel Services on the DTN-X (8GFC and 10GFC)
Service types: 8G Fibre Channel, 10G Fibre Channel

Supporting TIMs: TIM-5-10GM, TIM-5-10GX for both service types.
Supporting Chassis Types: XTC-10, XTC-4 for both service types.
Mapping: GMPi for both service types.
ODUk: ODUFlexi for both service types.
Tributary slots used (see footnotes 21-24): 7 for 8G Fibre Channel; 9 for 10G Fibre Channel.
GMPLS Restoration: No for both service types.
Support Line-side Terminating SNC: Yes for both service types.
1 Port D-SNCP (either with SNCs or cross-connects): No for both service types.
2 Port D-SNCP (either with SNCs or cross-connects): No for both service types.
Line-side Protection group (either with SNCs or cross-connects): No for both service types.
FastSMP Protection: No for both service types.
Latency Measurement: No for both service types.
CTP PRBS Generation and Monitoring Towards the Client Interface: Scrambled jitter pattern (JSPAT), defined in INCITS Fibre Channel Physical Interface-4 (FC-PI-4), for 8G Fibre Channel; IEEE Test Pattern (IEEE 802.3 Clause 49.2.8) for 10G Fibre Channel.
CTP PRBS Generation and Monitoring Towards the Network: Scrambled jitter pattern (JSPAT), defined in INCITS Fibre Channel Physical Interface-4 (FC-PI-4), for 8G Fibre Channel; IEEE Test Pattern (IEEE 802.3 Clause 49.2.8) for 10G Fibre Channel.
ODUk Wrapper PRBS Generation and Monitoring Towards the Network: Not available for both service types.
Loopbacks Supported by Client CTP Object: Facility and Terminal for both service types.
Loopbacks Supported by ODUk Object (Provided at OXM): Facility for both service types.

21. HO ODU4i uses 80 tributary slots. HO ODU3i+ uses 40 tributary slots. HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
22. OTU3i+ is not supported for XTC-2/XTC-2E.
23.
24. 3QAM is not supported on XTC-2/XTC-2E.


10G TIM/TIM2/MXP OTN Services


The following table shows the service provisioning and diagnostic capabilities for Transparent OTUk with
FEC, ODUk Switching Services, and ODUk Inside Channelized OTUk services supported by 10G TIMs/
TIM2s/MXP on the DTN-X.

Note: Support is the same for all XTC chassis types, except where noted.

Table A-7 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (Transparent OTUk with FEC, ODUk Switching Services, and ODUk Inside Channelized OTUk)
Service types: Transparent OTUk with FEC (where k = 1e, 2, or 2e), ODU1e switching service, ODU2 switching service, ODU2e switching service, ODU0 inside Channelized OTU2 (ODU Multiplexing), ODU1 inside Channelized OTU2 (ODU Multiplexing)

Supporting TIMs, MXP:
  Transparent OTUk with FEC: TIM-5-10GM, TIM-5-10GX
  ODU1e switching service: TIM-5-10GM, TIM-5-10GX
  ODU2 switching service and ODU2e switching service: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX, MXP-400 (TIM2-18-10GM and TIM2-18-10GX are supported on XTC-10 and XTC-4; MXP-400 is supported on XTC-2 and XTC-2E only)
  ODU0 inside Channelized OTU2 and ODU1 inside Channelized OTU2: TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX (supported on XTC-10 and XTC-4)
Supporting Chassis Types: XTC-10, XTC-4, XTC-2, XTC-2E for all service types.
Mapping: GMPi for Transparent OTUk with FEC; standard G.709 adaptation for all other service types.
ODUk: ODUFlexi for Transparent OTUk with FEC; ODU1e for the ODU1e switching service; ODU2 for the ODU2 switching service; ODU2e for the ODU2e switching service; ODU0 for ODU0 inside Channelized OTU2; ODU1 for ODU1 inside Channelized OTU2.
Tributary slots used (see footnotes 25-28): 9 for Transparent OTUk with FEC; 8 for the ODU1e, ODU2, and ODU2e switching services; 1 for ODU0 inside Channelized OTU2; 2 for ODU1 inside Channelized OTU2.
GMPLS Restoration: Yes for all service types (not supported on TIM2-18-10GM and TIM2-18-10GX).
Support Line-side Terminating SNC: Yes for all service types.
1 Port D-SNCP (either with SNCs or cross-connects): Yes for all service types.
2 Port D-SNCP (either with SNCs or cross-connects): Yes for Transparent OTUk with FEC and the ODU1e, ODU2, and ODU2e switching services (supported on MXP-400 for the ODU2 and ODU2e switching services); No for ODU0 and ODU1 inside Channelized OTU2.
Line side Protection group (either with SNCs or cross-connects): Yes (XTC-10 and XTC-4 only) for the ODU2 and ODU2e switching services; No for the other service types. Note: Not supported on XTC-2/XTC-2E.
FastSMP Protection: Yes (XTC-10 and XTC-4 only) for Transparent OTUk with FEC and the ODU1e, ODU2, and ODU2e switching services; No for ODU0 and ODU1 inside Channelized OTU2. Note: Not supported on XTC-2/XTC-2E. Note: Not supported on TIM2-18-10GM and TIM2-18-10GX.
Latency Measurement: Yes (XTC-10 and XTC-4 only) for all service types. Note: Not supported on XTC-2/XTC-2E.
CTP PRBS Generation and Monitoring Towards the Client Interface: PRBS-31 (inverted) at ODUk for Transparent OTUk with FEC; PRBS-31 (inverted) for the ODU1e, ODU2, and ODU2e switching services (both generation and monitoring supported for MXP-400 for the ODU2 and ODU2e switching services); PRBS-31 (inverted) at either ODUk/j for ODU0 and ODU1 inside Channelized OTU2 (not supported on TIM-5-10GX for HO-ODUk multiplexing services only).
CTP PRBS Generation and Monitoring Towards the Network: Not available for Transparent OTUk with FEC; PRBS-31 (inverted) for the ODU1e, ODU2, and ODU2e switching services (both generation and monitoring supported for MXP-400 for the ODU2 and ODU2e switching services); PRBS-31 (inverted) only at ODUj for ODU0 and ODU1 inside Channelized OTU2.
ODUk Wrapper PRBS Generation and Monitoring Towards the Network: Not available for Transparent OTUk with FEC; Not applicable for all other service types.
Loopbacks Supported by Client CTP Object: Facility and Terminal for all service types.
Loopbacks Supported by ODUk Object (Provided at OXM for TIMs and at TIM2 for TIM2-18-10GM and TIM2-18-10GX): Facility for all service types; for ODU0 and ODU1 inside Channelized OTU2, the facility loopback is at ODUj, with no loopback at ODUk.

25. HO ODU4i uses 80 tributary slots. HO ODU3i+ uses 40 tributary slots. HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
26. OTU3i+ is not supported for XTC-2/XTC-2E.
27.
28. 3QAM is not supported on XTC-2/XTC-2E.
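
Table A-7 shows one tributary slot used for ODU0 and two for ODU1 inside a channelized OTU2, while the ODU2-based services use all eight slots of an ODU2. A short Python sketch of that slot accounting, assuming standard G.709 slot counts (the function name is illustrative only):

ODU2_TRIB_SLOTS = 8                       # slots available in the channelized OTU2 payload
LO_ODU_SLOTS = {"ODU0": 1, "ODU1": 2}     # per the tributary-slots row of Table A-7

def fits_in_otu2(clients):
    """clients: dict of low-order ODU type -> count, e.g. {"ODU0": 4, "ODU1": 2}.
    Returns (fits, slots_used)."""
    used = sum(LO_ODU_SLOTS[odu] * count for odu, count in clients.items())
    return used <= ODU2_TRIB_SLOTS, used

# Eight ODU0s (8 slots) or four ODU1s (8 slots) fill the OTU2 exactly;
# mixes are fine as long as the slot total stays at or below eight.
print(fits_in_otu2({"ODU0": 8}))             # (True, 8)
print(fits_in_otu2({"ODU1": 4}))             # (True, 8)
print(fits_in_otu2({"ODU0": 4, "ODU1": 3}))  # (False, 10)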


Sub-10G TIM Services


The following table shows the service provisioning and diagnostic capabilities for sub-10G services
supported by the TIM-16-2.5GM on the DTN-X.

Note: Support is the same for all XTC chassis types, except where noted.

Note: Starting with IQ NOS Release 17.1, non-bookended OC-3 and OC-12 services are supported between a TIM-16-2.5GM (at one end) and a TIM-1-100GX or TIM-5-10GX (at the other end).

Table A-8 Provisioning, Protection, and Diagnostic Support for sub-10G Services on the DTN-X
Service types: 1GbE, OC-48/STM-16, OC-3/STM-1, OC-12/STM-4, 2GFC, 4GFC

Supporting Chassis Types: XTC-10, XTC-4, XTC-2, XTC-2E for all service types.
Supporting TIM: TIM-16-2.5GM for all service types.
Mapping: TTT+GMP for all service types except OC-48/STM-16, which uses BMP.
ODUk: ODU0 for 1GbE, OC-3/STM-1, and OC-12/STM-4; ODU1 for OC-48/STM-16 and 2GFC; ODUflexi for 4GFC.
Tributary slots used (see footnotes 29-32): 1 for 1GbE, OC-3/STM-1, and OC-12/STM-4; 2 for OC-48/STM-16 and 2GFC; 4 for 4GFC.
GMPLS Restoration: Yes for 1GbE and OC-48/STM-16; No for the other service types.
Support Line-side Terminating SNC: Yes for all service types.
1 Port D-SNCP (either with SNCs or cross-connects): Yes for 1GbE and OC-48/STM-16; No for the other service types.
2 Port D-SNCP (either with SNCs or cross-connects): Yes for 1GbE and OC-48/STM-16; No for the other service types.
Line side Protection group (either with SNCs or cross-connects): No for all service types.
FastSMP Protection: Yes for 1GbE and OC-48/STM-16; No for the other service types.
Latency Measurement: No for all service types.
CTP PRBS Generation and Monitoring Towards the Client Interface: Unframed PRBS-31 for all service types.
CTP PRBS Generation and Monitoring Towards the Network: Not available for all service types.
ODUk Wrapper PRBS Generation and Monitoring Towards the Network: PRBS-31 (inverted) for all service types.
Loopbacks Supported by Client CTP Object: Facility and Terminal for all service types.
Loopbacks Supported by ODUk Object (Provided at OXM): Facility for all service types.

29. HO ODU4i uses 80 tributary slots. HO ODU3i+ uses 40 tributary slots. HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
30. OTU3i+ is not supported for XTC-2/XTC-2E.
31.
32. 3QAM is not supported on XTC-2/XTC-2E.


Packet Services
The following table shows the service provisioning and diagnostic capabilities for packet services on the
DTN-X.

Note: Support is the same for all XTC chassis types, except where noted.

Table A-9 Provisioning, Protection, and Diagnostic Support for Packet Services on the DTN-X
Service types: 1G Switched Packet Services, 10G Switched Packet Services, 100G Switched Packet Services

Supporting Chassis Types: XTC-10, XTC-4, XTC-2, XTC-2E for all service types.
Supporting PXM: PXM-16-10GE for 1G and 10G Switched Packet Services; PXM-1-100GE for 100G Switched Packet Services.
Mapping: GFP-F+GMPi for all service types.
ODUk: ODUflexi-n, n=1 to 80, for all service types. Note: The line side may constrain the maximum ODUflexi size; up to 10 different OTN paths and a total of 100G of OTN bandwidth are supported (see the sketch following this table).
Tributary slots used (see footnotes 33-36): n, n=1 to 80, with up to 10 different OTN paths, for all service types.
GMPLS Restoration: Yes for all service types.
Support Line-side Terminating SNC: Yes for all service types (XTC-10 and XTC-4 only for 10G and 100G Switched Packet Services).
1 Port D-SNCP (either with SNCs or cross-connects): Yes (XTC-10 and XTC-4 only) for all service types. Note: Not supported on XTC-2/XTC-2E.
2 Port D-SNCP (either with SNCs or cross-connects): No for all service types.
Line side Protection group (either with SNCs or cross-connects): No for all service types.
FastSMP Protection: Yes (XTC-10 and XTC-4 only) for all service types. Note: Not supported on XTC-2/XTC-2E.
Latency Measurement: No for all service types.
CTP PRBS Generation and Monitoring Towards the Client Interface: Not available for all service types.
CTP PRBS Generation and Monitoring Towards the Network: Not available for all service types.
ODUk Wrapper PRBS Generation and Monitoring Towards the Network: Not available for all service types.
Loopbacks Supported by Client CTP Object: Facility and Terminal for 1G and 10G Switched Packet Services; Terminal for 100G Switched Packet Services.
Loopbacks Supported by ODUk Object (Provided at OXM): Facility for all service types.

33. HO ODU4i uses 80 tributary slots. HO ODU3i+ uses 40 tributary slots. HO ODUCni uses the following tributary slots: ODUC1i-15: 60, ODUC1i: 80, ODUC2i-22.5: 90, ODUC2i-30: 120, ODUC2i-37.5: 150, ODUC2i: 160, ODUC3i-45: 180, ODUC3i-50: 200, ODUC3i-52.5: 210, ODUC3i: 240, ODUC4i-67.5: 270, ODUC4i-75: 300
34. OTU3i+ is not supported for XTC-2/XTC-2E.
35.
36. 3QAM is not supported on XTC-2/XTC-2E.
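
As noted in the ODUk row of Table A-9, the line side bounds a packet-service ODUflexi plan: each ODUflexi-n may use 1 to 80 tributary slots, at most 10 different OTN paths may be used, and the total OTN bandwidth may not exceed 100G. The following Python sketch is a hypothetical validation helper; it assumes, per the 80-slot/100G high-order container described in footnote 33, that one tributary slot corresponds to roughly 1.25 Gb/s.

SLOT_GBPS = 100 / 80      # ~1.25 Gb/s per tributary slot (80 slots = 100G); assumption derived from footnote 33
MAX_PATHS = 10            # at most 10 different OTN paths per packet service
MAX_TOTAL_GBPS = 100      # total OTN bandwidth cap

def validate_oduflexi_plan(slot_counts):
    """slot_counts: list of ODUflexi-n sizes (n = tributary slots per OTN path).
    Returns (ok, reason)."""
    if len(slot_counts) > MAX_PATHS:
        return False, "more than 10 OTN paths"
    if any(n < 1 or n > 80 for n in slot_counts):
        return False, "each ODUflexi-n must use between 1 and 80 tributary slots"
    total = sum(slot_counts) * SLOT_GBPS
    if total > MAX_TOTAL_GBPS:
        return False, f"total OTN bandwidth {total:.1f}G exceeds 100G"
    return True, f"ok: {total:.1f}G across {len(slot_counts)} path(s)"

# Example: ten 8-slot paths = 80 slots = 100G, which is allowed; an 11th path is not.
print(validate_oduflexi_plan([8] * 10))
print(validate_oduflexi_plan([8] * 11))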


DTN-X Adaptation Services


The following table shows the adaptation services supported by the DTN-X.

Note: Support is the same for all XTC chassis types, except where noted. 100GbE to OTU4 adaptation services are supported for XT(S)-3600 only.

Note: The following applies to TIM2s:


■ TIM2-18-10GM, TIM2-18-10GX, TIM2-2-100GM and TIM2-2-100GX are supported only on
XTC-10 and XTC-4
■ FastSMP is not supported on TIM2-18-10GM, TIM2-18-10GX, TIM2-2-100GM and
TIM2-2-100GX
■ Latency Measurement is supported on TIM2-18-10GM, TIM2-18-10GX

Note: The following are not supported for XTC-2/XTC-2E chassis:


■ FastSMP
■ Latency Measurement
■ Line-side OTU3i+
■ 40Gbps Services

Table A-10 Provisioning, Protection, and Diagnostic Support for 10G Services on the DTN-X (SONET/SDH, 10GbE LAN/WAN)
Columns: Service A, Service Z, TIM A, TIM Z, Mapping, ODUk, OTU4i Trib Slots (see footnote 37), OTU3i+ Trib Slots (see footnote 38), GMPLS Restore, FastSMP, Latency Measure

1G
Service A: 1GbE; Service Z: ODU0 inside Channelized OTU2; TIM A: TIM-16-2.5GM; TIM Z: TIM-5-10GX, TIM2-18-10GX; Mapping: TTT+GMP; ODUk: ODU0; OTU4i Trib Slots: 1; OTU3i+ Trib Slots: 1; GMPLS Restore: Yes; FastSMP: No; Latency Measure: No
Service A: 1GbE; Service Z: ODU0 inside Channelized OTU4; TIM A: TIM-16-2.5GM; TIM Z: TIM-1-100GX or LIM-1-100GX, TIM2-2-100GX; Mapping: TTT+GMP; ODUk: ODU0; OTU4i Trib Slots: 1; OTU3i+ Trib Slots: 1; GMPLS Restore: Yes; FastSMP: No; Latency Measure: No

2.5G
Service A: OC-48/STM-16; Service Z: ODU1 inside Channelized OTU2; TIM A: TIM-16-2.5GM; TIM Z: TIM-5-10GX, TIM2-18-10GX; Mapping: BMP; ODUk: ODU1; OTU4i Trib Slots: 2; OTU3i+ Trib Slots: 2; GMPLS Restore: Yes; FastSMP: No; Latency Measure: No
Service A: OC-48/STM-16; Service Z: ODU1 inside Channelized OTU4; TIM A: TIM-16-2.5GM; TIM Z: TIM-1-100GX, TIM2-2-100GX; Mapping: BMP; ODUk: ODU1; OTU4i Trib Slots: 2; OTU3i+ Trib Slots: 2; GMPLS Restore: Yes; FastSMP: No; Latency Measure: No

10G
Service A: OC-192/STM-64; Service Z: ODU2 switching service; TIM A: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, or TIM2-18-10GX; TIM Z: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, or TIM2-18-10GX; Mapping: AMP, BMP; ODUk: ODU2; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: Yes (XTC-10 or XTC-4 only); Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10GbE WAN; Service Z: ODU2 switching service; TIM A: TIM-5-10GM or TIM-5-10GX; TIM Z: TIM-5-10GM or TIM-5-10GX; Mapping: AMP, BMP; ODUk: ODU2; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: Yes (XTC-10 or XTC-4 only); Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10GbE LAN (MXP-400 is supported on XTC-2/XTC-2E only); Service Z: ODU2e switching service; TIM A: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX, or MXP-400 (with TIM2s as TIM Z); TIM Z: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, TIM2-18-10GX, or MXP-400 (with TIM2s as TIM Z); Mapping: 16FS+BMP; ODUk: ODU2e; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: Yes (XTC-10 or XTC-4 only); Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10GbE LAN; Service Z: ODU1e switching service; TIM A: TIM-5-10GM or TIM-5-10GX; TIM Z: TIM-5-10GM or TIM-5-10GX; Mapping: BMP; ODUk: ODU1e; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: Yes (XTC-10 or XTC-4 only); Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10G Clear Channel; Service Z: ODU2 switching service; TIM A: TIM-5-10GM or TIM-5-10GX; TIM Z: TIM-5-10GM or TIM-5-10GX; Mapping: AMP, BMP; ODUk: ODU2; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: Yes (XTC-10 or XTC-4 only); Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10.3G Clear Channel; Service Z: ODU2e switching service; TIM A: TIM-5-10GM or TIM-5-10GX; TIM Z: TIM-5-10GM or TIM-5-10GX; Mapping: 16FS+BMP; ODUk: ODU2e; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: Yes (XTC-10 or XTC-4 only); Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10.3G Clear Channel; Service Z: ODU1e switching service; TIM A: TIM-5-10GM or TIM-5-10GX; TIM Z: TIM-5-10GM or TIM-5-10GX; Mapping: BMP; ODUk: ODU1e; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: Yes (XTC-10 or XTC-4 only); Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: OC-192/STM-64; Service Z: ODU2 inside a channelized OTU4; TIM A: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, or TIM2-18-10GX; TIM Z: TIM-1-100GX, LIM-1-100GX, or TIM2-2-100GX; Mapping: AMP, BMP; ODUk: ODU2; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: No; Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10GbE WAN; Service Z: ODU2 inside a channelized OTU4; TIM A: TIM-5-10GM or TIM-5-10GX; TIM Z: TIM-1-100GX, LIM-1-100GX, or TIM2-2-100GX; Mapping: AMP, BMP; ODUk: ODU2; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: No; Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10GbE LAN; Service Z: ODU2e inside a channelized OTU4; TIM A: TIM-5-10GM, TIM-5-10GX, TIM2-18-10GM, or TIM2-18-10GX; TIM Z: TIM-1-100GX, LIM-1-100GX, or TIM2-2-100GX; Mapping: 16FS+BMP; ODUk: ODU2e; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: No; Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10G Clear Channel; Service Z: ODU2 inside a channelized OTU4; TIM A: TIM-5-10GM or TIM-5-10GX; TIM Z: TIM-1-100GX or LIM-1-100GX; Mapping: AMP, BMP; ODUk: ODU2; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: No; Latency Measure: Yes (XTC-10 or XTC-4 only)
Service A: 10.3G Clear Channel; Service Z: ODU2e inside a channelized OTU4; TIM A: TIM-5-10GM or TIM-5-10GX; TIM Z: TIM-1-100GX or LIM-1-100GX; Mapping: 16FS+BMP; ODUk: ODU2e; OTU4i Trib Slots: 8; OTU3i+ Trib Slots: 8; GMPLS Restore: Yes; FastSMP: No; Latency Measure: Yes (XTC-10 or XTC-4 only)

40G (XTC-10 or XTC-4 only)
Service A: OC-768/STM-256; Service Z: ODU3 switching service; TIM A: TIM-1-40GM; TIM Z: TIM-1-40G; Mapping: AMP, BMP; ODUk: ODU3; OTU4i Trib Slots: 31; OTU3i+ Trib Slots: 31; GMPLS Restore: Yes; FastSMP: No; Latency Measure: No

100G
Service A: 100GbE LAN (MXP-400 is supported on XTC-2/XTC-2E only); Service Z: ODU4 switching service; TIM A: TIM-1-100GM, TIM-1-100GX, TIM2-2-100GM, TIM2-2-100GX, or MXP-400 (with TIM2s as TIM Z); TIM Z: TIM-1-100GM, TIM-1-100GX, TIM2-2-100GM, TIM2-2-100GX, or MXP-400 (with TIM2s as TIM A); Mapping: AMP, BMP; ODUk: ODU4; OTU4i Trib Slots: 80; OTU3i+ Trib Slots: N/A; GMPLS Restore: Yes; FastSMP: No; Latency Measure: No

37. The number of tributary slots used with OTU4i line-side (DC-PM-QPSK). OTU4i has 80 tributary slots.
38. The number of tributary slots used with OTU3i+ line-side (SC-PM-QPSK, DC-PM-BPSK). OTU3i+ has 40 tributary slots.
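
Footnotes 37 and 38 give 80 tributary slots for an OTU4i line side and 40 for an OTU3i+ line side, and the rows above give the slots each adapted service consumes. The short Python sketch below (illustrative names only) combines the two to estimate how many services of each type one line-side carrier can hold, ignoring the TIM and chassis restrictions noted in the table.

LINE_SIDE_SLOTS = {"OTU4i": 80, "OTU3i+": 40}   # footnotes 37 and 38

# Tributary slots per adapted service, as listed in Table A-10.
SERVICE_SLOTS = {
    "ODU0 (1GbE)": 1,
    "ODU1 (OC-48/STM-16)": 2,
    "ODU2 / ODU2e / ODU1e (10G)": 8,
    "ODU3 (OC-768/STM-256)": 31,
    "ODU4 (100GbE)": 80,
}

def capacity(line_side: str) -> dict:
    """Maximum count of each service type on one line-side carrier,
    based purely on tributary-slot arithmetic."""
    slots = LINE_SIDE_SLOTS[line_side]
    return {svc: slots // need for svc, need in SERVICE_SLOTS.items()}

print(capacity("OTU4i"))    # e.g. ten 8-slot 10G services, or one ODU4
print(capacity("OTU3i+"))   # e.g. five 8-slot 10G services; an ODU4 does not fit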



APPENDIX B:

XT Service Capabilities

The following table shows the service provisioning and diagnostic capabilities for 100GbE and 10GbE
services supported by XT(S)-3300 and XT(S)-3600.

Table B-1 Provisioning, Protection, and Diagnostic Support for GbE Services on XT
Service types: 100GbE (ODU4), 100GbE, 10GbE, 10GbE (ODU2e), OTU4, ODU2 inside a channelized OTU4 (ODU Multiplexing), ODU2e inside a channelized OTU4 (ODU Multiplexing)

Supporting Node Types: XT(S)-3600 for 100GbE (ODU4), 10GbE, and OTU4; XT(S)-3300 for 100GbE and 10GbE (ODU2e); XTC-10, XTC-4, XTC-2, and XTC-2E for ODU2 and ODU2e inside a channelized OTU4.
Mapping: GMPi for 100GbE (ODU4); NA for 100GbE; standard G.709 adaptation for both 10GbE service types; ODU4 for OTU4; standard G.709 adaptation for ODU2 and ODU2e inside a channelized OTU4.
ODUk: ODU4, ODU4i for 100GbE (ODU4); NA for 100GbE; ODU2i for both 10GbE service types; ODU4 for OTU4; ODU2 and ODU2e, respectively, for ODU2 and ODU2e inside a channelized OTU4.
Tributary slots used: 80 for 100GbE (ODU4), both 10GbE service types, and OTU4; NA for 100GbE; 8 for ODU2 and ODU2e inside a channelized OTU4.
GMPLS Restoration: Yes for 100GbE (ODU4), both 10GbE service types, OTU4, and ODU2e inside a channelized OTU4; No for 100GbE and ODU2 inside a channelized OTU4.
Support Line-side Terminating SNC: Yes for 100GbE (ODU4), both 10GbE service types, and ODU2 and ODU2e inside a channelized OTU4; No for 100GbE and OTU4.
1 Port D-SNCP (either with SNCs or cross-connects): Yes for ODU2 and ODU2e inside a channelized OTU4; No for the other service types.
2 Port D-SNCP (either with SNCs or cross-connects): Yes for 100GbE (ODU4) and OTU4 (supported only through dual-chassis Y-cable protection on XT(S)-3600); No for the other service types.
Line Side Protection group (either with SNCs or cross-connects): Yes for 100GbE (ODU4), both 10GbE service types, and ODU2 and ODU2e inside a channelized OTU4; No for 100GbE and OTU4.
FastSMP Protection: Yes for ODU2 and ODU2e inside a channelized OTU4; No for the other service types.
Latency Measurement: No for all service types.
CTP PRBS Generation and Monitoring Towards the Client Interface: Yes for 100GbE (ODU4) and OTU4; No for 100GbE and both 10GbE service types; for ODU2 and ODU2e inside a channelized OTU4, PRBS-31 at ODUk and PRBS-31 (inverted) at ODUj.
CTP PRBS Generation and Monitoring Towards the Network: Yes for 100GbE (ODU4) and OTU4; No for 100GbE and both 10GbE service types; for ODU2 and ODU2e inside a channelized OTU4, PRBS-31 (inverted) at ODUj.
ODUk Wrapper PRBS Generation and Monitoring Towards the Network: Not available for 100GbE (ODU4) and OTU4; NA for 100GbE; PRBS-31 for both 10GbE service types; Not applicable for ODU2 and ODU2e inside a channelized OTU4.
TTI: Yes for all service types except 100GbE, which is NA.
Loopbacks Supported by Client CTP Object: Facility for all service types; Terminal is additionally supported for four of the service types.
Loopbacks Supported by Line CTP Object: Facility for all service types.
Loopbacks Supported by OCG PTP or SCG PTP Object (OCG PTP is applicable for XT-500S and SCG PTP for XT-500F, XT(S)-3300, and XT(S)-3600): Terminal for all service types.
Loopbacks Supported by Tributary ODUk CTP: Facility for all service types except 100GbE, which is NA.
