450-3709-010 (MCP R4.2 Engineering Guide) 12.03


Manage, Control and Plan

Engineering Guide
Release 4.2

What’s inside...
Introduction
Deployment options
How to size and engineer MCP
Engineering guidelines
Procedures and guidelines for different network sizes
Ordering information
Appendix A - Deployment examples
Appendix B - Scale and memory values used during optimization of MCP for managed network size

450-3709-010 - Standard Issue 12.03


June 2020
Copyright© 2016-2020 Ciena® Corporation. All rights reserved.
LEGAL NOTICES
THIS DOCUMENT CONTAINS CONFIDENTIAL AND TRADE SECRET INFORMATION OF CIENA
CORPORATION AND ITS RECEIPT OR POSSESSION DOES NOT CONVEY ANY RIGHTS TO REPRODUCE
OR DISCLOSE ITS CONTENTS, OR TO MANUFACTURE, USE, OR SELL ANYTHING THAT IT MAY DESCRIBE.
REPRODUCTION, DISCLOSURE, OR USE IN WHOLE OR IN PART WITHOUT THE SPECIFIC WRITTEN
AUTHORIZATION OF CIENA CORPORATION IS STRICTLY FORBIDDEN.
EVERY EFFORT HAS BEEN MADE TO ENSURE THAT THE INFORMATION IN THIS DOCUMENT IS
COMPLETE AND ACCURATE AT THE TIME OF PUBLISHING; HOWEVER, THE INFORMATION CONTAINED IN
THIS DOCUMENT IS SUBJECT TO CHANGE.
While the information in this document is believed to be accurate and reliable, except as otherwise expressly agreed
to in writing CIENA PROVIDES THIS DOCUMENT “AS IS” WITHOUT WARRANTY OR CONDITION OF ANY
KIND, EITHER EXPRESS OR IMPLIED. The information and/or products described in this document are subject to
change without notice. For the most up-to-date technical publications, visit www.ciena.com.
Copyright© 2016-2020 Ciena® Corporation. All Rights Reserved
Use or disclosure of data contained in this document is subject to the Legal Notices and restrictions in this section
and, unless governed by a valid license agreement signed between you and Ciena, the Licensing Agreement that
follows.
The material contained in this document is also protected by copyright laws of the United States of America and
other countries. It may not be reproduced or distributed in any form by any means, altered in any fashion, or stored
in a data base or retrieval system, without express written permission of the Ciena Corporation.
Security
Ciena® cannot be responsible for unauthorized use of equipment and will not make allowance or credit for
unauthorized use or access.
Contacting Ciena
Corporate Headquarters 410-694-5700 or 800-921-1144 www.ciena.com
Customer Technical Support/Warranty www.ciena.com/support/
Sales and General Information North America: 1-800-207-3714 E-mail: [email protected]
International: +44 20 7012 5555
In North America 410-694-5700 or 800-207-3714 E-mail: [email protected]
In Europe +44-207-012-5500 (UK) E-mail: [email protected]
In Asia +81-3-3248-4680 (Japan) E-mail: [email protected]
In India +91-22-42419600 E-mail: [email protected]
In Latin America 011-5255-1719-0220 (Mexico City) E-mail: [email protected]
Training E-mail: [email protected]
For additional office locations and phone numbers, please visit the Ciena web site at www.ciena.com.

Manage, Control and Plan Engineering Guide


Release 4.2 450-3709-010 Standard Issue 12.03
Copyright© 2016-2020 Ciena® Corporation June 2020
READ THIS LICENSE AGREEMENT (“LICENSE”) CAREFULLY BEFORE INSTALLING OR USING CIENA
SOFTWARE OR DOCUMENTATION. THIS LICENSE IS AN AGREEMENT BETWEEN YOU AND CIENA
COMMUNICATIONS, INC. (OR, AS APPLICABLE, SUCH OTHER CIENA CORPORATION AFFILIATE
LICENSOR) (“CIENA”) GOVERNING YOUR RIGHTS TO USE THE SOFTWARE. BY INSTALLING OR USING
THE SOFTWARE, YOU ACKNOWLEDGE THAT YOU HAVE READ THIS LICENSE AND AGREE TO BE BOUND
BY IT.
1. License Grant. Ciena may provide “Software” to you either (1) embedded within or running on a hardware
product or (2) as a standalone application, and Software includes upgrades acquired by you from Ciena or a Ciena
authorized reseller. Subject to these terms, and payment of all applicable License fees including any usage-based
fees, Ciena grants you, as end user, a non-exclusive, non-transferable, personal License to use the Software only in
object code form and only for its intended use as evidenced by the applicable product documentation. Unless the
context does not permit, Software also includes associated documentation.
2. Open Source and Third Party Licenses. Software excludes any open source or third-party programs supplied
by Ciena under a separate license, and you agree to be bound by the terms of any such license. If a separate
license is not provided, any open source and third party programs are considered “Software” and their use
governed by the terms of this License.
3. Title. You are granted no title or ownership rights in or to the Software. Unless specifically authorized by Ciena in
writing, you are not authorized to create any derivative works based upon the Software. Title to the Software,
including any copies or derivative works based thereon, and to all copyrights, patents, trade secrets and other
intellectual property rights in or to the Software, are and shall remain the property of Ciena and/or its licensors.
Ciena's licensors are third party beneficiaries of this License. Ciena reserves to itself and its licensors all rights in
the Software not expressly granted to you.
4. Confidentiality. The Software contains trade secrets of Ciena. Such trade secrets include, without limitation, the
design, structure and logic of individual Software programs, their interactions with other portions of the Software,
internal and external interfaces, and the programming techniques employed. The Software and related technical
and commercial information, and other information received in connection with the purchase and use of the
Software that a reasonable person would recognize as being confidential, are all confidential information of Ciena
(“Confidential Information”).
5. Obligations. You shall:
i) Hold the Software and Confidential Information in strict confidence for the benefit of Ciena using your best efforts
to protect the Software and Confidential Information from unauthorized disclosure or use, and treat the Software
and Confidential Information with the same degree of care as you do your own similar information, but no less than
reasonable care;
ii) Keep a current record of the location of each copy of the Software you make;
iii) Use the Software only in accordance with the authorized usage level;
iv) Preserve intact any copyright, trademark, logo, legend or other notice of ownership on any original or copies of
the Software, and affix to each copy of the Software you make, in the same form and location, a reproduction of the
copyright notices, trademarks, and all other proprietary legends and/or logos appearing on the original copy of the
Software delivered to you; and
v) Issue instructions to your authorized personnel to whom Software is disclosed, advising them of the confidential
nature of the Software and provide them with a summary of the requirements of this License.
6. Restrictions. You shall not:
i) Use the Software or Confidential Information a) for any purpose other than your own internal business purposes;
and b) other than as expressly permitted by this License;
ii) Allow anyone other than your authorized personnel who need to use the Software in connection with your rights
or obligations under this License to have access to the Software;
iii) Make any copies of the Software except such limited number of copies, in machine readable form only, as may
be reasonably necessary for execution in accordance with the authorized usage level or for archival purposes only;
iv) Make any modifications, enhancements, adaptations, derivative works, or translations to or of the Software;
v) Reverse engineer, disassemble, reverse translate, decompile, or in any other manner decode the Software;
vi) Make full or partial copies of the associated documentation or other printed or machine-readable matter provided
with the Software unless it was supplied by Ciena in a form intended for reproduction;
vii) Export or re-export the Software from the country in which it was received from Ciena or its authorized reseller
unless authorized by Ciena in writing; or

viii) Publish the results of any benchmark tests run on the Software.
7. Audit: Upon Ciena's reasonable request you shall permit Ciena to audit the use of the Software to ensure
compliance with this License.
8. U.S. Government Use. The Software is provided to the Government only with restricted rights and limited rights.
Use, duplication, or disclosure by the Government is subject to restrictions set forth in FAR Sections 52-227-14 and
52-227-19 or DFARS Section 52.227-7013(C)(1)(ii), as applicable. The Software and any accompanying technical
data (collectively “Materials”) are commercial within the meaning of applicable Federal acquisition regulations. The
Materials were developed fully at private expense. U.S. Government use of the Materials is restricted by this
License, and all other U.S. Government use is prohibited. In accordance with FAR 12.212 and DFAR Supplement
227.7202, the Software is commercial computer software and the use of the Software is further restricted by this
License.
9. Term of License. This License is effective until the applicable subscription period expires or the License is
terminated. You may terminate this License by giving written notice to Ciena. This License will terminate
immediately if (i) you breach any term or condition of this License or (ii) you become insolvent, cease to carry on
business in the ordinary course, have a receiver appointed, enter into liquidation or bankruptcy, or any analogous
process in your home country. Termination shall be without prejudice to any other rights or remedies Ciena may
have. Upon any termination of this License you shall destroy and erase all copies of the Software in your
possession or control, and forward written certification to Ciena that all such copies of Software have been
destroyed or erased. Your obligations to hold the Confidential Information in confidence, as provided in this License,
shall survive the termination of this License.
10. Compliance with laws. You agree to comply with all laws related to your installation and use of the Software.
Software is subject to U.S. export control laws, and may be subject to export or import regulations in other
countries. If Ciena authorizes you to import or export the Software in writing, you shall obtain all necessary licenses
or permits and comply with all applicable laws.
11. Limitation of Liability. ANY LIABILITY OF CIENA SHALL BE LIMITED IN THE AGGREGATE TO THE
AMOUNTS PAID BY YOU TO CIENA OR ITS AUTHORIZED RESELLER FOR THE SOFTWARE. THIS
LIMITATION APPLIES TO ALL CAUSES OF ACTION, INCLUDING WITHOUT LIMITATION BREACH OF
CONTRACT, BREACH OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER
TORTS. THE LIMITATIONS OF LIABILITY DESCRIBED IN THIS SECTION ALSO APPLY TO ANY LICENSOR OF
CIENA. NEITHER CIENA NOR ANY OF ITS LICENSORS SHALL BE LIABLE FOR ANY INJURY, LOSS OR
DAMAGE, WHETHER INDIRECT, SPECIAL, INCIDENTAL OR CONSEQUENTIAL INCLUDING WITHOUT
LIMITATION ANY LOST PROFITS, CONTRACTS, DATA OR PROGRAMS, AND THE COST OF RECOVERING
SUCH DATA OR PROGRAMS, EVEN IF INFORMED OF THE POSSIBILITY OF SUCH DAMAGES IN ADVANCE.
12. General. Ciena may assign this License to an affiliate or to a purchaser of the intellectual property rights in the
Software. You shall not assign or transfer this License or any rights hereunder, and any attempt to do so will be void.
This License shall be governed by the laws of the State of New York without regard to conflict of laws provisions.
The U.N. Convention on Contracts for the International Sale of Goods shall not apply hereto. This License
constitutes the complete and exclusive agreement between the parties relating to the license for the Software and
supersedes all proposals, communications, purchase orders, and prior agreements, verbal or written, between the
parties. If any portion hereof is found to be void or unenforceable, the remaining provisions shall remain in full force
and effect.


Publication history

June 2020
Issue 12.03 of the MCP Engineering Guide, 450-3709-010.

March 2020
Issue 12.02 of the MCP Engineering Guide, 450-3709-010.

September 2019
Issue 10.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.

July 2019
Issue 8.03 of the Blue Planet MCP Engineering Guide, 450-3709-010. This
issue applies to both the MCP 3.0 software load and the MCP 3.0.1 software
load.

April 2019
Issue 08.02 of the Blue Planet MCP Engineering Guide, 450-3709-010. This
issue applies to both the MCP 3.0 software load and the MCP 3.0.1 software
load.

February 2019
Issue 08.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.

November 2018
Issue 07.04 of the Blue Planet MCP Engineering Guide, 450-3709-010.

October 2018
Issue 07.03 of the Blue Planet MCP Engineering Guide, 450-3709-010. This
issue applies to both the MCP 18.06.00 software load and the MCP 18.06.01
software load.

September 2018
Issue 07.02 of the Blue Planet MCP Engineering Guide, 450-3709-010.

September 2018
Issue 07.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.


June 2018
Issue 06.03 of the Blue Planet MCP Engineering Guide, 450-3709-010.

April 2018
Issue 06.02 of the Blue Planet MCP Engineering Guide, 450-3709-010.

December 2017
Issue 05.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.

November 2017
Issue 04.02 of the Blue Planet MCP Engineering Guide, 450-3709-010.

September 2017
Issue 04.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.

July 2017
Issue 03.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.


Contents

About this document ix

Introduction 1-1
Product overview 1-1
Documentation 1-2
Device support 1-2
External License server support 1-3
Upgrades 1-3

Deployment options 2-1


What’s new in deployment options? 2-1
Supported deployment options 2-2
Single-host MCP configuration 2-3
Single-host MCP configuration with Geographical Redundancy (GR) 2-5
Multi-host MCP configuration 2-7
Multi-host MCP configuration with Geographical Redundancy (GR) 2-9

How to size and engineer MCP 3-1


Determine the total number of NEUs/NEs to be managed 3-1
Determine the total number of services to be managed 3-2
Determine number and size of MCP hosts (single-host or multi-host) 3-2
Determine if Geographical Redundancy (GR) is required 3-2
Determine License Server requirements 3-3
Determine whether physical servers or VMs will be used 3-3
Review and understand all remaining requirements 3-5

Engineering guidelines 4-1


Sizing and engineering of MCP hosts 4-1
Network element equivalent units (NEUs) 4-1
MCP host configuration, CPU, RAM and storage sizing 4-4
Disks, storage space and file systems 4-8
LAN/WAN requirements 4-15
MCP deployments on physical servers or VMs 4-18
Operating system requirements 4-19
Operating system 4-19
Domain Name Service (DNS) 4-21
Hostname 4-21


Site IP 4-21
Kernel parameters 4-21
Network Time Protocol (NTP) 4-21
BPI Installer 4-22
Port and protocol requirements 4-23
MCP user interface (UI) 4-28
External authentication 4-29

Procedures and guidelines for different network sizes 5-1


Operational guidelines 5-1
Device Management 5-1
Installation 5-1
Network maps 5-2
Administration 5-2
Services 5-2
REST API clients 5-2
Historical PM collection 5-3
Hybrid deployments with OneControl/MCP 5-3
Optimizing MCP for managed network size 5-4
Procedure to tune OS for optimal swap usage 5-5
Procedure to tune memory settings and scale of selected apps 5-6
Procedure to enable support for new NE types 5-10

Ordering information 6-1


Product codes 6-1
Licenses 6-5
MCP licenses 6-5
Network element licenses 6-5
Upgrades 6-6

Appendix A - Deployment examples 4-1


Deployment examples and hardware 4-1
Example - Managing up to 20,000 NEUs/10,000 NEs on HP blades with VMs 4-1
Example - Managing up to 20,000 NEUs/15,000 NEs on Oracle X7-2 servers 4-3
Example - Managing up to 10,000 NEUs/NEs on HP blades with VMs 4-4
Example - Managing up to 200 NEUs/NEs on a single-host VM 4-5
Example - Lab only single-host VM 4-6

Appendix B - Scale and memory values used during optimization of MCP for managed network size 4-1


About this document

This document provides the engineering guidelines for the Manage, Control
and Plan (MCP) product.


Introduction

This chapter provides a brief overview of the Manage, Control and Plan (MCP) product.

Product overview
MCP is Ciena’s next-generation, multi-layer Software Defined Networking (SDN)
and Network Management System (NMS) platform with integrated network planning
functionality. It combines a web-scale platform, industry-leading SDN
functionality, and open interfaces.

MCP consists of a series of software modules (microservices) that combine to
implement use cases for element management, network management with integrated
network planning functionality, and software-defined networking. The
foundation of MCP is a web-scale platform that leverages several open-source
technologies to allow for redundancy and scalability. To leverage the
microservices nature of the platform, the MCP software installation is based
on the use of Docker containers.

All of these microservices expose an open and documented set of REST/JSON
APIs. These APIs can be used directly by external systems, such as
orchestration systems and Operations Support Systems (OSS), to access data
provided by, and perform operations on, MCP. The same APIs are leveraged to
provide an HTML5-based graphical user interface for user-driven operations.
The MCP UI is a lightweight web-based client that can be accessed using
industry-standard web browsers.


Documentation
The most recent documentation is available on the Ciena Portal.
• As a registered user with a my.ciena.com account, log into
https://my.ciena.com.
• Navigate to Documentation > Manage Control and Plan (MCP) > Release
#.
• This location contains MCP documents and/or pointers to additional
locations where documents can be downloaded and how they can be
viewed.
• In this release of MCP, some documents have moved from PDF format to HTML
format.

The following documents are available in PDF format:

• MCP Engineering Guide
• MCP Release Notes (for MCP 4.2)

The following documents are available in HTML format. They are packaged
within the MCP software as online help. The package can also be downloaded
for offline viewing using a web browser:
• For details on the functions and features
— MCP Release Notes (for MCP 4.2.x and later releases)
— MCP Administration Guide
— MCP API Guide
— MCP Geo-Redundancy Guide
— MCP Security Guide
— MCP Troubleshooting Guide
— MCP User Guide
• For details on the installations and upgrades:
— MCP Installation Guide
— MCP Upgrade Guide

Device support
For the latest list of devices, configurations and features supported, refer to
MCP Release Notes.


External License server support

The License Server software component used to manage MCP licenses is
decoupled from the MCP software.
• License Server 7.3.1 is the minimum required for use in conjunction with
MCP 4.2 (at the time of this document release). The external License
Server must be installed/upgraded to this minimum release before
installing/upgrading MCP 4.2.
• For the most recent guidance, consult the Ciena Portal:
— As a registered user with a my.ciena.com account, log into
https://my.ciena.com.
— Navigate to Software > Standalone License Server.
— To facilitate the search, sort the results by part number by clicking on
the CIENA PART # column heading.
— Find Part# TOC_S_LICENSE and open the file
(ELSFullSupportMatrixByProduct.pdf). Use this file to identify the
supported License Server release(s) for your MCP release.
— Documentation for the License Server is also available at this location.

Upgrades
The following upgrade paths are supported (at the time of this document
release). For the most recent guidance, refer to MCP Release Notes.
• MCP 3.0 to MCP 4.2
• MCP 3.0.1 to MCP 4.2
• MCP 4.0 to MCP 4.2
• MCP 4.0.1 to MCP 4.2

Unless otherwise indicated by Ciena, the MCP release being upgraded from
can be at any MCP patch level before the upgrade process is started.
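The supported upgrade paths above amount to a simple membership check on the base release, since patch level does not matter unless Ciena indicates otherwise. The following is an illustrative sketch only (not a Ciena tool), useful in a pre-upgrade validation script; the release strings come directly from the list above.

```python
# Supported direct-upgrade source releases to MCP 4.2, per this guide.
# Any patch level of a supported source release is acceptable, so only
# the base release is checked.
SUPPORTED_SOURCES = {"3.0", "3.0.1", "4.0", "4.0.1"}

def base_release(version: str) -> str:
    """Reduce a version string to its base release.

    3.0.1 and 4.0.1 are distinct releases in their own right, so a
    three-component match is tried first; anything deeper is treated
    as a patch of the two-component release.
    """
    parts = version.split(".")
    three = ".".join(parts[:3])
    return three if three in SUPPORTED_SOURCES else ".".join(parts[:2])

def can_upgrade(current: str) -> bool:
    """True if a direct upgrade from `current` to MCP 4.2 is supported."""
    return base_release(current) in SUPPORTED_SOURCES

print(can_upgrade("3.0.1"))  # True
print(can_upgrade("3.0.2"))  # True: a patch of the supported 3.0 release
print(can_upgrade("2.1"))    # False: not a supported source release
```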


Deployment options

This chapter details the Manage, Control and Plan (MCP) deployment options.

What’s new in deployment options?

This release of MCP supports the same deployment options as MCP 4.0.


Supported deployment options

MCP can be deployed in either a single-host configuration (for smaller
networks) or a multi-host configuration (for larger networks).

A single-host configuration consists of one MCP host. A multi-host
configuration consists of multiple MCP hosts at the same site and provides
local site redundancy through N+M clustering, where all nodes in the cluster
are active. The minimum cluster size for a multi-host configuration is three
hosts (a 2+1 cluster).

To users of the MCP user interface (UI) and clients of the MCP northbound
API, the MCP multi-host configuration appears as a single logical unit. It is
accessed using a virtual Site IP address, which redirects requests to the
appropriate node in the cluster. If a node in the cluster goes down, requests
are redirected to a surviving node, making the loss of nodes in the cluster
seamless to clients of the northbound API.

For both single-host and multi-host deployments, geographical redundancy (GR)
configurations are also supported. A GR deployment consists of two
independent, geographically diverse sites, each deployed with the same
VM/server configuration. The two sites communicate using secure IPsec
tunnels. Clients connect to the Active site, which actively manages the
network and maintains sessions to the managed devices. A subset of data is
replicated automatically from the Active site to the Standby site. Clients
can connect to the Standby site to perform a subset of read-only operations.
The Standby site also enables seamless fault management during DCN outages or
server maintenance at the Active site. If the Active site fails, the system
administrator takes action to make the Standby site the Active site.

Note: License Server GR is supported only in conjunction with License Server
HA mode (GR cannot be used without HA mode). If License Server GR is used, 4
external servers/VMs are required for the License Server.

Note: License Server HA should be used for all deployments with more
than 200 NEUs/NEs.
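The License Server footprint rules stated in the notes above (no HA: 1 server; HA: 2 servers plus a virtual IP, i.e. 3 IP addresses per site; GR only on top of HA, doubling both counts across two sites) can be summarized in a small sketch. This is an illustration of the rules in this chapter, not a Ciena-provided calculator.

```python
def license_server_footprint(ha: bool, gr: bool) -> tuple:
    """Return (servers, ip_addresses) for the external License Server.

    Rules from this chapter:
      - no HA: 1 server, 1 IP (a License Server failure needs manual
        intervention)
      - HA: 2 servers + 1 virtual IP = 3 IP addresses per site
      - GR: supported only with HA; two sites, so both counts double
    """
    if gr and not ha:
        raise ValueError("License Server GR is supported only with HA mode")
    sites = 2 if gr else 1
    servers_per_site = 2 if ha else 1
    ips_per_site = 3 if ha else 1
    return (sites * servers_per_site, sites * ips_per_site)

print(license_server_footprint(ha=True, gr=False))  # (2, 3)
print(license_server_footprint(ha=True, gr=True))   # (4, 6)
```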


Single-host MCP configuration

Table 2-1 on page 2-3 provides details for single-host MCP deployments.

Table 2-1
Single-host MCP deployments

Configuration: Single-host MCP managing up to 200 NEUs/NEs
• Target use: Deployments with a smaller number of NEs (up to 200 NEUs/NEs).
  If the network size is anticipated to grow beyond the supported limits of a
  single-host configuration (1,000 NEUs/NEs), a multi-host configuration
  should be used instead.
• No. of VMs/servers required, MCP: 1
• No. of VMs/servers required, License Server: 1 external VM/server.
  Failure of the License Server requires manual intervention.
• Figure reference: Figure 2-1 on page 2-4

Configuration: Single-host MCP managing 200-1,000 NEUs/NEs
• Target use: Deployments with a smaller number of NEs (up to 1,000
  NEUs/NEs). If the network size is anticipated to grow beyond the supported
  limits of a single-host configuration (1,000 NEUs/NEs), a multi-host
  configuration should be used instead.
• No. of VMs/servers required, MCP: 1
• No. of VMs/servers required, License Server: 2 external VMs/servers
  (License Server HA).
  - License Server HA mode provides survivability of 1 License Server
    failure at the same site.
  - 3 IP addresses are required (1 for each License Server and 1 virtual IP
    address).
  - MCP and NEs reference the License Server using the virtual IP address.
• Figure reference: Figure 2-2 on page 2-4


Figure 2-1
MCP single-host configuration (up to 200 NEUs/NEs)

Figure 2-2
MCP single-host configuration (greater than 200 NEUs/NEs)


Single-host MCP configuration with Geographical Redundancy (GR)

A single-host configuration with GR consists of two independent single-host
configurations, each at a geographically diverse site.

Table 2-2 on page 2-5 provides details for single-host MCP deployments with
GR.

Table 2-2
Single-host MCP deployments with GR

Configuration: Single-host MCP with GR
• Target use: Deployments with a smaller number of NEs (up to 1,000
  NEUs/NEs). If the network size is anticipated to grow beyond the supported
  limits of a single-host configuration (1,000 NEUs/NEs), a multi-host
  configuration should be used instead.
• No. of VMs/servers required, MCP: 2 (1 at each site)
• No. of VMs/servers required, License Server: 4 external VMs/servers
  (License Server HA+GR).
  - License Server HA mode provides survivability of 1 License Server
    failure at the same site. License Server GR mode provides survivability
    of a License Server site failure.
  - 6 IP addresses are required, 3 at each site (1 for each License Server
    and 1 virtual IP address).
  - MCP is configured with both License Server virtual IP addresses.
  - NEs that support 2 License Servers reference both License Server virtual
    IP addresses. NEs that support 1 License Server reference the active
    License Server virtual IP address.
  Note: If VMs are used for the License Server, it is recommended that all 4
  LS VMs be deployed on different physical servers, in order to allow the
  License Server HA feature to protect against the loss of a single physical
  server. Alternatively, 1 physical server can be used at each GR site to
  house the 2 License Server VMs in HA mode (i.e. 4 VMs total on 2 physical
  servers). However, this option does not protect against failure of
  hardware components shared between the 2 VMs.
• Figure reference: Figure 2-3 on page 2-6


Figure 2-3
MCP single-host GR configuration


Multi-host MCP configuration

A multi-host configuration consists of multiple MCP hosts co-located at the
same site and working together to manage the network. All hosts are active.
This configuration provides local redundancy and survivability of a host
failure. It also allows for future network growth: instead of requiring a
bigger host with more resources, MCP supports network growth via the
horizontal addition of more MCP hosts.

For multi-host configurations in production deployments, if VMs are used for
MCP, they must be deployed on different physical servers/blades in order to
allow the high-availability feature of MCP to protect the cluster from the
loss of a single physical server/blade. The loss may be due to a hardware
failure, power interruption, intentional shutdown, etc.

In this release of MCP, a multi-host configuration consists of 3 hosts (a
2+1 cluster), 4 hosts (a 3+1 cluster), or 6 hosts (a 5+1 cluster). This
configuration provides survivability of 1 host failure.
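The supported cluster sizes above map directly from host count to N+M configuration, and every supported cluster is N+1 (it survives the loss of one host). A minimal sketch of that rule, for illustration only:

```python
# Supported multi-host cluster sizes in this MCP release, per this chapter.
# Every supported cluster is N+1, i.e. it survives the loss of 1 host.
CLUSTERS = {3: "2+1", 4: "3+1", 6: "5+1"}

def cluster_config(hosts: int) -> str:
    """Return the N+M configuration for a supported host count."""
    if hosts not in CLUSTERS:
        raise ValueError(f"unsupported multi-host cluster size: {hosts}")
    return CLUSTERS[hosts]

print(cluster_config(3))  # 2+1
print(cluster_config(6))  # 5+1
```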

Table 2-3 on page 2-7 provides details for multi-host MCP deployments.

Table 2-3
Multi-host MCP deployments

Configuration: Multi-host MCP
• Target use: Deployments with a larger number of NEs (greater than 1,000
  NEUs/NEs)
• No. of VMs/servers required, MCP: Depends on network size:
  - 3 (for a 2+1 config)
  - 4 (for a 3+1 config)
  - 6 (for a 5+1 config)
• No. of VMs/servers required, License Server: 2 external VMs/servers
  (License Server HA).
  - License Server HA mode provides survivability of 1 License Server
    failure at the same site.
  - 3 IP addresses are required (1 for each License Server and 1 virtual IP
    address).
  - MCP and NEs reference the License Server using the virtual IP address.
• Figure reference: Figure 2-4 on page 2-8


Figure 2-4
MCP 2+1 multi-host configuration


Multi-host MCP configuration with Geographical Redundancy (GR)

A multi-host configuration with GR consists of two independent multi-host
configurations, each at a geographically diverse site.

In this release of MCP, a multi-host GR configuration consists of 6, 8 or 12
hosts (a 2+1, 3+1 or 5+1 cluster at one site, and another 2+1, 3+1 or 5+1
cluster at a second site). This configuration provides survivability of 1
host failure at the Active site, as well as survivability of an entire site
failure.

Table 2-4 on page 2-9 provides details for multi-host MCP deployments with
GR.

Table 2-4
Multi-host MCP deployments with GR

Configuration: Multi-host MCP with GR
• Target use: Deployments with a larger number of NEs (greater than 1,000
  NEUs/NEs)
• No. of VMs/servers required, MCP: Depends on network size:
  - 6 (for a 2+1 config, 3 at each site)
  - 8 (for a 3+1 config, 4 at each site)
  - 12 (for a 5+1 config, 6 at each site)
• No. of VMs/servers required, License Server: 4 external VMs/servers
  (License Server HA+GR).
  - License Server HA mode provides survivability of 1 License Server
    failure at the same site. License Server GR mode provides survivability
    of a License Server site failure.
  - 6 IP addresses are required, 3 at each site (1 for each License Server
    and 1 virtual IP address).
  - MCP is configured with both License Server virtual IP addresses.
  - NEs that support 2 License Servers reference both License Server virtual
    IP addresses. NEs that support 1 License Server reference the active
    License Server virtual IP address.
  Note: If VMs are used for the License Server, it is recommended that all 4
  LS VMs be deployed on different physical servers, in order to allow the
  License Server HA feature to protect against the loss of a single physical
  server. Alternatively, 1 physical server can be used at each GR site to
  house the 2 License Server VMs in HA mode (i.e. 4 VMs total on 2 physical
  servers). However, this option does not protect against failure of
  hardware components shared between the 2 VMs.
• Figure reference: Figure 2-5 on page 2-10


Figure 2-5
MCP 2+1 multi-host GR configuration


How to size and engineer MCP

This chapter provides an overview of how to size and engineer your MCP
deployment.

Determine the total number of NEUs/NEs to be managed


The main factor that determines the type and size of an MCP deployment is
the number and types of NEs deployed in the network to be managed.

Different types of NEs can place a different load on the software because
some NE types are capable of reporting a larger number of ports and
connections than other NE types. Because of this, the concept of a network
element (NE) equivalent unit is used. In order to engineer your network, you
must calculate the total number of NEs, and the total number of NE equivalent
units in your network.

Use Table 4-1 on page 4-2 to calculate the following for the network to be
managed by MCP (taking into account any planned growth of the network):
• Total NEs - An NE is a device that will be enrolled into MCP (for 6500 TIDc
NEs only the primary shelf is enrolled into MCP; all shelves in a 6500 TIDc
NE are managed as one NE).
• Total NEUs - For NE types that can have multiple shelves, such as 6500
TIDc NEs, first determine the total number of shelves of each 6500 type.
For all other NE types determine the total number of NEs of each type.
Then multiply each count by the applicable NEU value, and add the
results to obtain the total.
• L0 total NEs - The subset of the total NE count that is Layer0 devices (eg.
Waveserver, Waveserver Ai, Waveserver 5, 6500 Photonic, RLS, 8180
photonic).
• L1 total OTN CP shelves - The total number of Layer 1 OTN control plane
enabled shelves (eg. all 6500 OTN CP enabled shelves, all 54xx CP NEs).
• L2 total NEs - The subset of the total NE count that is Layer 2 devices (eg.
39xx, 51xx, 8700, 6200, 8180 packet, Z-series).
• L2 total 6200 NEs - The subset of the total NE count that is 6200 devices.
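As a worked illustration of the counting rules above, the totals can be computed mechanically. The NEU values used below are examples drawn from Table 4-1; the inventory itself is hypothetical:

```python
# Sketch of the NEU calculation described above. NEU values are examples
# taken from Table 4-1; the inventory counts are hypothetical.
NEU_VALUES = {
    "3904": 0.2,
    "5160": 3,
    "Waveserver Ai": 3,
    "6500 OTN CP shelf": 45,
}

# Hypothetical inventory: NE type (or shelf type, for multi-shelf 6500
# TIDc NEs, where shelves are counted individually) -> count.
inventory = {
    "3904": 100,
    "5160": 20,
    "Waveserver Ai": 10,
    "6500 OTN CP shelf": 4,
}

# Multiply each count by its NEU value and sum for the network total.
total_neus = sum(NEU_VALUES[t] * n for t, n in inventory.items())
print(total_neus)  # 100*0.2 + 20*3 + 10*3 + 4*45 = 290.0
```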


Determine the total number of services to be managed


The type and size of an MCP deployment is also determined by the number
of services expected to be managed in the network.

Determine the following about the network to be managed (taking into account
any planned growth of the network):
• L0 - Max wavelength services present in the network.
• L1 - Max services present in the network.
• L2 - Max packet service endpoints present in the network.
• L2 - Max unprotected LSPs present in the network.
• L3 - Max services present in the network.

Determine number and size of MCP hosts (single-host or multi-host)


Use Table 4-2 on page 4-5 and Table 4-3 on page 4-7 to decide which MCP
host configuration meets the criteria identified so far.

Identify the smallest configuration that will support the total NEU/NE/service
counts as calculated in “Determine the total number of NEUs/NEs to be
managed” on page 3-1 and in “Determine the total number of services to be
managed” on page 3-2. Smaller networks (under 1,000 NEU/NEs) can usually
be managed by an MCP single-host configuration. Larger networks usually
require an MCP multi-host configuration.
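The selection logic can be sketched as follows. The capacity tuples below are an illustrative subset of the limits in Table 4-2 and Table 4-3 (a real check must apply all maximums in those tables, including the per-layer NE and service limits):

```python
# Hypothetical sketch of "identify the smallest configuration that supports
# the calculated counts". Capacities are an illustrative subset of
# Tables 4-2/4-3; all per-layer maximums must also be checked in practice.
CONFIGS = [
    # (name, max NEUs, max NEs), ordered smallest to largest
    ("single-host 16vCPU/64GB", 200, 200),
    ("single-host 16vCPU/96GB", 500, 500),
    ("single-host 24vCPU/128GB", 1000, 1000),
    ("multi-host 2+1 32vCPU/96GB", 5000, 5000),
    ("multi-host 3+1 40vCPU/128GB", 20000, 10000),
]

def smallest_config(total_neus, total_nes):
    for name, max_neus, max_nes in CONFIGS:
        if total_neus <= max_neus and total_nes <= max_nes:
            return name
    raise ValueError("network exceeds the largest configuration listed")

print(smallest_config(290, 134))    # single-host 16vCPU/96GB
print(smallest_config(3000, 3000))  # multi-host 2+1 32vCPU/96GB
```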

Take note of the required MCP host configuration:


• Single-host or Multi-host?
• If multi-host, number of MCP hosts required (2+1, 3+1, 5+1)?
• vCPUs per MCP host?
• RAM per MCP host?
• Disks and storage space per MCP host?

Review “Storage space for historical PMs and NE backups” on page 4-13 to
determine if additional storage space is needed.

Review “Storage space for PinPoint” on page 4-14 to determine if additional
storage space is needed.

Determine if Geographical Redundancy (GR) is required


Review and understand the deployment options in Chapter 2, “Deployment
options”.


Determine if GR is required for the deployment. If GR is required, the same
number/configuration of MCP and License Server hosts planned for the main
site must also be planned for the standby site.

Determine License Server requirements


Review and understand the deployment options in Chapter 2, “Deployment
options”.

Determine the License Server requirements for the deployment. Is License
Server HA mode required? Is License Server GR mode required?

The main factor that determines the number of required external License
Server VMs/servers is what type of MCP deployment is used (single-host or
multi-host; GR or non-GR).

Refer to the External License Server User Guide for hardware requirements.

Determine whether physical servers or VMs will be used


MCP is deployed on host(s) running a Linux operating system. These hosts
can be:
• Physical hardware servers - Based on Intel/x86_64 CPUs. These are
sometimes referred to as bare metal installs. The operating system is
installed directly on the physical server (no virtualization software).
• Virtual Machines (VMs) provided from an existing IT VM farm
infrastructure - In some customer networks, an IT infrastructure already
exists for the deployment of new VMs to house software applications. In
this scenario, the existing infrastructure can be used to turn up the VMs
required for MCP.
• Virtual Machines (VMs) built on dedicated hardware - Customers
who prefer to use VMs, but who do not have an existing IT infrastructure
to provide those VMs, may choose to build the MCP VMs from server
hardware and virtualization software such as VMware ESXi.

Note: In all cases the servers or VMs used as MCP hosts must meet all
MCP resource requirements. This includes all requirements in this
document (most notably the CPU benchmark requirements, Docker
storage disk speed requirements, and DCN delay and bandwidth
requirements).

Note: If using bare metal servers, MCP has been tested/evaluated on
specific server hardware models. Deploying MCP on a bare metal server
not detailed in this document requires prior approval from Ciena.


Note: If using VMs, additional hardware resources (CPU/RAM/storage)
are required by the virtualization software. These resources must be
planned for, above and beyond what is required for each MCP host (refer
to the virtualization software documentation for these requirements).

The choice of using bare metal vs VMs for MCP depends on the existing
customer environment and IT skill set. Both approaches have benefits and
considerations:
• Virtual Machines (VMs)
— Require knowledge of virtualization software and associated IT
administration.
— Require some additional hardware resources (CPU/RAM/storage) for
the virtualization software (eg. VMware ESXi).
— May incur licensing costs for the virtualization software in use.
— Provide a level of abstraction for the operating system from the
hardware model in use, eliminating OS dependencies/requirements
for hardware drivers, etc. This can simplify IT management of OS
images.
— Can be easily integrated into existing IT operational practices where
VM infrastructures are already in use.
• Physical hardware servers (bare metal)
— Do not require knowledge of virtualization software and associated IT
administration.
— Allow all hardware resources to be used by the software application.
— Can introduce more complexity for IT management and administration
of OS images and hardware if the other applications in the customer
environment consist primarily of VMs.
— Specific hardware models need to be evaluated or tested to ensure no
conflicts between software applications and hardware specific drivers.
See “MCP deployments on physical servers or VMs” on page 4-18.

“Appendix A - Deployment examples” on page 7-1 provides some examples
of deployment scenarios, for different network sizes.


Review and understand all remaining requirements


Review and understand all remaining engineering considerations that affect
planning and sizing of the MCP deployment:
• “Sizing and engineering of MCP hosts” on page 4-1
• “Operating system requirements” on page 4-19
• “Port and protocol requirements” on page 4-23
• “MCP user interface (UI)” on page 4-28
• “External authentication” on page 4-29

Review and understand all remaining engineering considerations that affect


installation and operation of the MCP deployment:
• “Operational guidelines” on page 5-1
• “Optimizing MCP for managed network size” on page 5-4


Engineering guidelines

This chapter details the Manage, Control and Plan (MCP) engineering
guidelines.

Sizing and engineering of MCP hosts


This section provides details on the sizing and engineering of MCP hosts.

MCP is deployed on host(s) running a Linux operating system. These hosts
can be physical servers, or VMs (Virtual Machines) in a virtualized
environment.

The sizing requirements in this section apply to both types of installs. For more
details see “Determine whether physical servers or VMs will be used” on page
3-3, and “MCP deployments on physical servers or VMs” on page 4-18.

Network element equivalent units (NEUs)


The first step to determining MCP host requirements is determining the
number of NEs and NEUs in the network to be managed. Table 4-1 on page
4-2 defines the number of NE equivalent units that must be used to represent
each NE type, in order to calculate the total number of NE equivalent units you
have in your network.


Table 4-1
NEU value to use for each NE type
NE type and configuration NEU value

3902, 3903, 3903x, 3904, 3905 - 0.2

3906mvi - 0.2

3920 - 0.3

3916, 3926m, 3928, 3930, 3931, 3932, - 0.5


3938vi, 3940, 3942

3960 - 1

5142 - 1

5150 - 1.5

5160 - 3

5162 - 4

5170 - 4

* 5171 - 4*

* 8180 Layer2 4*

Layer0 book-ended config over 6500 photonics 1

8700 4-slot - 15
8700 10-slot - 45

6200 Packet only shelf 1

* Z-Series - See Note 3 *

Waveserver - 1

Waveserver Ai (Note 2) - 3

* Waveserver 5 - 3*

6500 D-series (2-slot) - 0.15

6500 S-series/D-series Broadband/Photonic shelf (Note 1) - With/without L0 Control Plane - 1

6500 S-series (32-slot, 14-slot, 8-slot) PKT/OTN shelf (Note 1, Note 2):
PKT/OTN X-Conn - 1200G - 1 (per shelf)
PKT/OTN X-Conn - 3200G - 2 (per shelf)
OTN Control Plane enabled shelf - 45 (per shelf)

6500 T-series (12-slot) (Note 1, Note 2) - 45

6500 T-series (24-slot) (Note 1, Note 2) - 90


Table 4-1
NEU value to use for each NE type (continued)
NE type and configuration NEU value

* 6500 S-series shelf with PTS cards (Note 1, Note 2):
With no Control Plane (eg. MPLS only) - 2 (per shelf) *
With Control Plane - 45 (per shelf) *

* RLS - 3*

5410 1.2TB (Optical) (Note 2) With Control Plane 20

5410 5TB (Optical) (Note 2) With Control Plane 20

5430 3.6TB (Note 2) With Control Plane 45

5430 15TB (Note 2) With Control Plane 45

Note 1: Each shelf in a consolidated TID must be counted (for example, a consolidated TID with 5 shelves has an
NEU value of 5).
Note 2: The equivalent units value provided is for a network element (NE) that is fully loaded. If the NE is not fully
loaded, the value can be multiplied by the percentage of the NE bandwidth that will be used.
Note 3: MCP 4.2 introduces support for Z-Series devices for select device management functionalities only.
Scalability testing with this release of MCP has been done with up to 200 Z-series devices managed.

* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
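Note 2 above can be applied as a simple scaling step. The function name and the 50% load figure below are illustrative:

```python
# Note 2 of Table 4-1: for NE types marked with that note, the NEU value
# applies to a fully loaded NE and may be scaled by the fraction of NE
# bandwidth expected to be used. Function name is illustrative.
def effective_neu(full_neu, load_fraction=1.0):
    """Scale a fully-loaded NEU value by expected bandwidth utilization."""
    return full_neu * load_fraction

# A 6500 T-series 24-slot shelf is 90 NEUs fully loaded (Table 4-1);
# at 50% expected bandwidth it counts as 45 NEUs.
print(effective_neu(90, 0.5))  # 45.0
```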


MCP host configuration, CPU, RAM and storage sizing


MCP can be deployed in either a multi-host configuration or a single-host
configuration.
• Multi-host configuration
— An MCP multi-host configuration consists of 3 or more hosts. These
hosts are co-located in the same data center.
— The multi-host configuration can be used to manage a variety of
network sizes. Table 4-2 on page 4-5 provides the sizing requirements
for each MCP host in a multi-host configuration, for managing various
network sizes.
— The sizing options listed include 3 host (2+1), 4 host (3+1), and 6 host
(5+1) multi-host configurations. Use the configuration that meets or
exceeds the network size to be managed. If future network growth is
expected to reach a number of NEs/services requiring a 4 host (3+1)
or 6 host (5+1) multi-host config, MCP should be deployed in this
config from initial install.
— For multi-host configurations in production deployments, if VMs are
used for MCP, they must be deployed on different physical servers/
blades in order to allow the high-availability feature of MCP to protect
the cluster from the loss of a single physical server/blade. The loss
may be due to a hardware failure, power interruption, intentional
shutdown, etc.
• Single-host configuration
— An MCP single-host configuration consists of 1 host.
— The single-host configuration is targeted at management of smaller
network sizes. Table 4-3 on page 4-7 provides the sizing requirements
for the MCP host in a single-host configuration.

Note: There is no migration path from an MCP single-host configuration
to an MCP multi-host configuration. If network growth beyond 1,000 NEs
is expected, an MCP multi-host configuration should be used (which
consists of 3 or more hosts).

Note: MCP can be used in conjunction with the web-based Site Manager
craft interface for 6500 NE types. When this is done, the web-based Site
Manager must be installed on its own separate host. Do not install it on the
same VM/server as MCP (it is not supported co-resident with MCP). Refer
to the MCP Installation Guide, for web-based Site Manager hardware
requirements.


Table 4-2
MCP host sizing requirements for different network sizes - Multi-host deployments
Host resource requirements for multi-host deployment
Multi-host configs consist of 3, 4 or 6 MCP hosts (these requirements are per host) (Note 1, Note 2, Note 3, Note 4, Note 5)

Number of MCP hosts required  6 (5+1)  4 (3+1)  3 (2+1)

Virtual CPUs (per MCP host) 40 40 16 40 32 40 32 16 16 8

RAM (per MCP host) 128GB 128GB 64GB 128GB 128GB 96GB 96GB 96GB 64GB 64GB

Disks & storage space (per host) - See “Disks, storage space and file systems” on page 4-8 for detailed requirements
OS disk  500GB (if OS is on a separate disk)  500GB (Note 6)
Docker storage disk  4TB  4TB  2TB  4TB  4TB  2TB  2TB  2TB  1TB  (Note 6)

Network bandwidth and delay See “LAN/WAN requirements” on page 4-15 for details.

Deployment type  Production and lab  Lab only

Maximum concurrent MCP clients (REST API and MCP UI)(Note 7)

Total concurrent MCP clients 200 100 100 100 100 100 100 100 100 10

Max NEUs/NEs enrolled in MCP multi-host config (Note 8, Note 9)

Total NEUs 30,000 20,000 1,000 20,000 15,000 10,000 5,000 2,000 500 20
(total managed by multi-host)

Total NEs 30,000 15,000 1,000 10,000 10,000 10,000 5,000 2,000 500 20
(total managed by multi-host)

Per NE type max (see Note 8 for full details):
L0 - max total NEs  5,000 *  5,000 *  (same as total NEs)  2,000  (same as total NEs)
L1 - max OTN CP shelves  500  300  150  150  80  80  10
L2 - max total NEs  (same as total NEs)  (same as total NEs)  (same as total NEs)  10,000  (same as total NEs)
L2 - max 6200 NEs  5,000  5,000  (same as total NEs)  4,000  (same as total NEs)

Max services

L0 - Max wavelength services 10,000 10,000 2,500 10,000 10,000 10,000 8,000 6,000 1,200 500

L1 - Max services 50,000 35,000 2,500 20,000 15,000 10,000 8,000 7,000 1,000 500

L2 - Max packet service 200,000 100,000 11,000 65,000 65,000 65,000 25,000 15,000 3,500 500
endpoints

L2 - Max unprotected LSPs 75,000 50,000 7,500 30,000 30,000 30,000 18,000 10,000 2,500 500

* L3 - Max services See Note 10 *


Table 4-2
MCP host sizing requirements for different network sizes - Multi-host deployments (continued)
Note 1: The physical CPU performance must be equal to or better than an E5-2640v4 (2.4GHz/10-core). Only Intel/x86_64 processor
based platforms/VMs are supported. AMD processors are not supported for use with MCP.
Note 2: The number of vCPUs available on a physical CPU is equal to the number of threads or logical processors. For example,
a system with 2xE5-2640v4 (2.4GHz/10-core) CPUs has 40 vCPUs total (since the cores are dual threaded: 2 CPUs * 10 cores * 2
threads/core). The required CPU resources must be fully reserved for MCP, and not be oversubscribed.
Note 3: For multi-host configurations in production deployments, the MCP VMs must be deployed on different physical servers/
blades in order to allow the high-availability feature of MCP to protect the cluster from the loss of a single physical server/blade. The
loss may be due to a hardware failure, power interruption, intentional shutdown, etc.
Note 4: A minimum of 64GB RAM is currently required for any MCP installation (production or lab installs). All MCP installations
must be deployed with the minimum required RAM for the network size to be managed, as detailed in this document. Using less
RAM is not supported and can result in degraded performance and platform instability.
Note 5: The License Server component is decoupled from the MCP software. The License Server is deployed on external servers,
see External License Server User Guide for the hardware requirements.
Note 6: This configuration is supported for lab deployments only, with limited NEs (for fresh installs only, not upgrades). In this
deployment, 1x500GB disk can be used to house both the OS, as well as the docker storage disk contents. See “Example - Lab
only single-host VM” on page 7-6 for details on the disk configuration for this scenario.
Note 7: In addition to the max concurrent clients guidelines, MCP supports up to a max of 500 defined users.
Note 8: MCP NE scale values are expressed in terms of total NEs managed, total NEUs managed, as well as specific maximums
on a per NE type basis. All the maximums apply. Eg. if a 2+1 config with 32 vCPU, 96G RAM is being used to manage a network
with only 6500 and Waveserver NE types, then a maximum of 2,000 NEs / 5,000 NEUs can be managed (since the L0 max NE value
is lower than the total NE value).
Note 9: If future network growth is expected to reach a number of NEs/services requiring a 4 host (3+1) or 6 host (5+1) multi-host
config, then MCP should be deployed in this host config from initial install.
Note 10: MCP 4.2 introduces support for L3VPN services. The following scalability testing has been done with this release of MCP:
- MCP Configurations: To date L3 scalability has been characterized for MCP host configurations with 40vCPU/128G (2+1,
3+1 or 5+1). For guidance on any other MCP configurations contact Ciena.
- Number of NEs with L3 enabled: 1,500
- 49 VRFs built between sets of 2 NEs (49 * 750 NE sets = 36750 total VRFs)
- A VRF (virtual routing and forwarding), is a Layer3 service construct; VRFs are members in a L3VPN service.

* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
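The vCPU arithmetic in Note 2 can be expressed directly (the function name is illustrative):

```python
# Note 2 of Table 4-2: available vCPUs on a physical platform equal
# sockets x cores per socket x threads per core. Name is illustrative.
def vcpu_count(sockets, cores_per_socket, threads_per_core):
    return sockets * cores_per_socket * threads_per_core

# 2 x E5-2640v4 (10-core, dual-threaded) = 40 vCPUs
print(vcpu_count(2, 10, 2))  # 40
```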


Table 4-3
MCP host sizing requirements for different network sizes - Single-host deployments
Host resource requirements for single-host deployment
Single-host configs consist of 1 MCP host (Note 1, Note 2, Note 3, Note 4)

Number of MCP hosts required 1 1 1 1

CPU and RAM:
Virtual CPUs  24 vCPUs  16 vCPUs  16 vCPUs  8 vCPUs
RAM  128 GB  96GB  64GB  64GB

Disks & storage space - See “Disks, storage space and file systems” on page 4-8 for detailed requirements
OS disk  500GB (if OS is on a separate disk)  500 GB (Note 5)
Docker storage disk  1 TB  1 TB  1 TB

Network bandwidth and delay See “LAN/WAN requirements” on page 4-15 for details.

Deployment type Production and lab Lab only

Max NEUs/NEs enrolled in MCP single-host config

Total NEUs enrolled 1,000 500 200 20

Total NEs enrolled 1,000 500 200 20

Maximum concurrent MCP clients (REST API and MCP UI) (Note 6)

Total concurrent MCP clients 30 30 25 5

Max services

L0 - Max wavelength services 2,000 2,000 1,500 300

L1 - Max services 1,500 1,500 1,000 300

L2 - Max packet service endpoints 8,000 8,000 7,000 300

Note 1: The physical CPU performance must be equal to or better than an E5-2640v4 (2.4GHz/10-core). Only Intel/x86_64
processor based platforms/VMs are supported. AMD processors are not supported for use with MCP.
Note 2: The number of vCPUs available on a physical CPU is equal to the number of threads or logical processors. For example,
a system with 2xE5-2640v4 (2.4GHz/10-core) CPUs has 40 vCPUs total (since the cores are dual threaded: 2 CPUs * 10 cores
* 2 threads/core). The required CPU resources must be fully reserved for MCP, and not be oversubscribed.
Note 3: A minimum of 64GB RAM is currently required for any MCP installation (production or lab installs). All MCP installations
must be deployed with the minimum required RAM for the network size to be managed, as detailed in this document. Using less
RAM is not supported and can result in degraded performance and platform instability.
Note 4: The License Server component is decoupled from the MCP software. The License Server is deployed on external servers,
see External License Server User Guide for the hardware requirements.
Note 5: This configuration is supported for lab deployments only, with limited NEs (for fresh installs only, not upgrades). In this
deployment, 1x500GB disk can be used to house both the OS, as well as the docker storage disk contents. See “Example - Lab
only single-host VM” on page 7-6 for details on the disk configuration for this scenario.
Note 6: In addition to the max concurrent clients guidelines, MCP supports up to a max of 500 defined users.

* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.


Disks, storage space and file systems

Attention: The MCP multi-host configuration architecture is designed to
provide local redundancy and survivability of a host failure. This model allows
failures to be managed at a host level (instead of a component or disk level).

As a result, the use of RAID disk configurations is unnecessary and not
recommended for use in conjunction with MCP. These configurations can
introduce delays in read/write operations that affect MCP functionality and
performance. Although unnecessary, hardware RAID10 (and hardware
RAID0) can be used if desired. All other forms of hardware or software RAID
are not supported for use with MCP.

The storage space requirements for MCP are expressed in terms of two
categories:
• storage space for the host operating system (OS)
• storage space and physical disk requirements for the Docker storage disk

The MCP software architecture is based on the use of Docker containers, and
the use of a high performance Docker thin pool. The Docker thin pool is the
storage space used by Docker for image and container management (for
details see, “More on the Docker thin pool” on page 4-9).

When deploying software applications in a virtualized environment, some
corporate infrastructures require that the host operating system of a VM (i.e.
the root space of a VM) be kept on a separate space/disk from the application
itself. Other corporate infrastructures have no such requirement. This
document provides guidance for disk configuration in both these scenarios.


More on the Docker thin pool

The MCP software architecture is based on the use of Docker containers, and
the use of a high performance Docker thin pool. The Docker thin pool is the
storage space used by Docker for image and container management.

When Docker is used on hosts with Red Hat Enterprise Linux, it uses the
devicemapper storage driver as the storage backend, which defaults to a
configuration mode known as loop-lvm. While this mode is designed to work
out-of-the-box with no additional configuration, production deployments should
not be run under loop-lvm mode, as it does not configure the thin pool for
optimal performance. The configuration required for production deployments is
the direct-lvm configuration mode. This mode uses block devices to create the
thin pool.

MCP uses and configures Docker in this direct-lvm optimal mode. No direct
user configuration is required for this. It is done automatically by the MCP
installation software, as long as the Docker storage disk is configured as per
the requirements in this document (i.e. the required unallocated PE space is
made available inside the volume group with name vg_sys, or with the name
specified by the user in the MCP installation procedures, as per Table 4-5 on
page 4-11).
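A pre-install sanity check for the unallocated PE space can be sketched as below. The `vgs` output shown is a hypothetical sample; on a real host the text would come from running `vgs --units g`:

```python
# Illustrative check that the Docker storage volume group (default name
# vg_sys) has the 150 GB of unallocated PE space the MCP installer needs
# for the direct-lvm thin pool. The sample output below is hypothetical.
sample_vgs_output = """\
  VG     #PV #LV #SN Attr   VSize    VFree
  vg_sys   1   3   0 wz--n- 4000.00g 150.00g
"""

def vg_free_gb(vgs_output, vg_name="vg_sys"):
    """Parse vgs-style output and return the free space (GB) of vg_name."""
    for line in vgs_output.splitlines():
        fields = line.split()
        if fields and fields[0] == vg_name:
            return float(fields[-1].rstrip("g"))
    raise ValueError(f"volume group {vg_name} not found")

free = vg_free_gb(sample_vgs_output)
print(free >= 150)  # True: enough unallocated PE space for the thin pool
```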

Storage space and file systems when separate OS disk is used


This section details the disk configuration requirements when deploying MCP
on a host where the host operating system is deployed on a separate space/
disk.

Table 4-4 on page 4-10 specifies the storage requirements for the host OS.

Table 4-5 on page 4-11 specifies the storage requirements for the Docker
storage disk.


Table 4-4
Requirements for host operating system (OS) disk

Physical disk(s)

Disk size 500 GB

File systems & mount points (Note 1)

File system (FS)  FS mount point  Size
boot  1 GB
/ (root file system) (Note 2, Note 3)  All remaining space (minimum: 50 GB)
/var/log/ciena (Note 2)  150 GB
swap  16G (for hosts with up to 64G RAM), 24G (for hosts with 96G RAM), 32G (for hosts with 128G RAM or more)

Note 1: MCP can be deployed on physical servers, or VMs (Virtual Machines) in a virtualized
environment. If physical servers are used, it should be noted that some server hardware types (eg. Oracle
X6-2 servers) require that a physical bootbios partition of 1MB be present for the operating system
installation and successful operation (consult your hardware documentation and your Linux
operating system documentation for details).
Note 2: It is recommended that all free space left on the disk be assigned to root, after assigning all
other identified file system mount points. If root is being partitioned to a finer granularity (i.e. separate
partitions created for /home, etc.), the following considerations should be taken into account:
• root - An absolute minimum of 50GB must be assigned to root. This minimum size accounts
only for the scenario where the root space is used primarily for operating system files, and space
required for MCP related files outside the Docker storage disk and outside of MCP logging.
• /var - If /var is created as a separate partition, an absolute minimum of 20 GB must be assigned to
/var (not taking into account space required for /var/log/ciena).
• /var/log/ciena - /var/log/ciena is now the default location used for MCP logging purposes.
• /home/bpuser - A minimum of 20 GB must be free/available for use by the bpuser userid in its home
directory during the MCP installation (bpuser is the owner of the MCP software). By default the
bpuser home directory is set to /home/bpuser (i.e. if /home is created as a separate partition, assign
a minimum of 20GB).
Note 3: The use of LVM (Logical Volume Manager) for the root file system is optional.
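The minimums in Table 4-4 and its notes can be checked mechanically. The function names and the example layout are illustrative; the thresholds come from the table:

```python
# Sketch of the OS-disk sizing rules in Table 4-4 (all sizes in GB).
# The layout dict is a hypothetical example; minimums are from the table.
def required_swap_gb(ram_gb):
    # Table 4-4: 16G up to 64G RAM, 24G for 96G RAM, 32G for 128G+ RAM.
    # Treatment of RAM sizes between those tiers is an assumption here.
    if ram_gb <= 64:
        return 16
    if ram_gb <= 96:
        return 24
    return 32

def check_os_disk(layout, ram_gb):
    """Return a list of violations for a proposed OS-disk layout."""
    problems = []
    if layout.get("/boot", 0) < 1:
        problems.append("/boot must be at least 1 GB")
    if layout.get("/", 0) < 50:
        problems.append("root must be at least 50 GB")
    if layout.get("/var/log/ciena", 0) < 150:
        problems.append("/var/log/ciena must be 150 GB")
    if layout.get("swap", 0) < required_swap_gb(ram_gb):
        problems.append("swap too small for this RAM size")
    return problems

layout = {"/boot": 1, "/": 301, "/var/log/ciena": 150, "swap": 24}
print(check_os_disk(layout, ram_gb=96))  # [] -> layout meets the minimums
```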


Table 4-5
Requirements for Docker storage disk - When OS is on separate disk
Physical disk(s)

Disk size Depends on host size (use the number/size of disks needed to get the total required disk space for the host size
chosen, as detailed in Table 4-2 on page 4-5).

Disk speed Disk(s) must be very fast and directly attached. The only supported options meeting these speed specifications are:
• Local solid state disks (SSD) with:
— 4K random read of at least 85,000 IOPS or better
— 4K random write of at least 43,000 IOPS or better
— sequential read/write of up to at least 500 MB/s or better
— Eg. Intel® SSD Data Center S3710 Series drives meet these specifications
• SAN (Storage Area Network):
— directly attached via: Fiber Channel (at least 8Gb/s), or iSCSI (dedicated network; at least 10 Gb/s)
— 4K random read of at least 85,000 IOPS or better; 4K random write of at least 43,000 IOPS or better; sequential
read/write of up to at least 500 MB/s or better

Volume group

VG name  Volume group name can be set to any name during OS installation (must be the same on all hosts). It is used
by the MCP installation software, which assumes the name vg_sys. If it is different, the MCP installation procedures
include steps where the non-default VG name can be entered by the user.

VG size VG configured to use the entire space on the disk(s).

VG space allocation  VG configured to have:
• 3 logical volumes (LV), one for each file system / mount point in this table
• unallocated PE (physical extents) space, with size as per this table (used by MCP for configuration of the Docker
thin pool); the free space must exist inside the VG

File systems & mount points

FS mount point (Note 1, Note 2)   LV with 4TB         LV with 2TB         LV with 1TB       FS type
/opt/ciena/bp2 (Note 3)           3,500 GB (Note 4)   1,500 GB (Note 5)   500 GB (Note 6)   ext4
/opt/ciena/data/docker            150 GB              150 GB              150 GB            ext4
/opt/ciena/loads                  200 GB              200 GB              200 GB            ext4
Unallocated PE space in VG        150 GB              150 GB              150 GB            -

Note 1: File system type recommendation is ext4. For guidance on other possible file system types, contact Ciena.
Note 2: In a multi-host config, logical volume names for each required file system mount point must be the same on all hosts.
Note 3: No space included for PinPoint, see “Storage space for PinPoint” on page 4-14.
Note 4: Includes 1.5TB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
Note 5: Includes 600GB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
Note 6: Includes 250GB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
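The LV sizes plus the unallocated PE space in Table 4-5 are intended to consume the full volume group (for example, 1,500 + 150 + 200 + 150 = 2,000 GB for the 2TB column). A planned layout can be sanity-checked with a small helper; this is an illustrative sketch, not part of the MCP installation tooling, and the function name is our own:

```python
def vg_layout_fits(lv_sizes_gb, unallocated_pe_gb, disk_gb):
    """Return True if the planned logical volumes plus the unallocated
    PE space reserved for the Docker thin pool fit inside the VG."""
    return sum(lv_sizes_gb) + unallocated_pe_gb <= disk_gb

# 2TB column from Table 4-5: bp2 + docker + loads LVs, 150 GB unallocated
print(vg_layout_fits([1500, 150, 200], 150, 2000))  # → True
```

The same check applies to the 4TB and 1TB columns, whose rows also sum exactly to the disk size.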

Storage space and file systems when OS is on same disk


This section details the disk configuration requirements when deploying MCP
on a host where the host operating system is deployed on the same disk as
the application.

Table 4-6 on page 4-12 specifies the storage requirements for the host OS and
the application when deployed on the same space/disk.


Table 4-6
Requirements for Docker storage disk - When OS is on same disk
Physical disk(s)

Disk size Depends on host size (use the number/size of disks needed to get the total required disk space for the host size
chosen, as detailed in Table 4-2 on page 4-5).

Disk speed Disk(s) must be very fast and directly attached. The only supported options meeting these speed specifications are:
• Local solid state disks (SSD) with:
— 4K random read of at least 85,000 IOPS or better
— 4K random write of at least 43,000 IOPS or better
— sequential read/write of up to at least 500 MB/s or better
— Eg. Intel® SSD Data Center S3710 Series drives meet these specifications
• SAN (Storage Area Network):
— directly attached via: Fiber Channel (at least 8Gb/s), or iSCSI (dedicated network; at least 10 Gb/s)
— 4K random read of at least 85,000 IOPS or better; 4K random write of at least 43,000 IOPS or better;
sequential read/write of up to at least 500 MB/s or better

Volume group

VG name The volume group name can be set to any name during OS installation (it must be the same on all hosts) and is used by the MCP installation software. MCP assumes the name is vg_sys. If it is different, the MCP installation procedures include steps where the non-default VG name can be entered by the user.

VG size VG configured to use the entire space on the disk(s).

VG space allocation VG configured to have:
• 6 logical volumes (LV), one for each file system / mount point in this table (boot is outside VG)
• unallocated PE (physical extents) space, with size as per this table (used by MCP for configuration of the Docker thin pool); the free space must exist inside the VG

File systems & mount points

FS mount point (Note 1, Note 2, Note 3)   LV with 4TB         LV with 2TB           LV with 1TB       FS type
boot                                      1 GB                1 GB                  1 GB              -
/ (root file system)                      70 GB               70 GB                 70 GB             -
/var/log/ciena                            150 GB              150 GB                80 GB
swap                                      32 GB               16 GB (if 64GB RAM)   16 GB             -
                                                              24 GB (if 96GB RAM)
/opt/ciena/bp2 (Note 4)                   3,350 GB (Note 5)   1,350 GB (Note 6)     500 GB (Note 7)   ext4
/opt/ciena/data/docker                    150 GB              150 GB                70 GB             ext4
/opt/ciena/loads                          100 GB              100 GB                100 GB            ext4
Unallocated PE space in VG                150 GB              150 GB                150 GB            -

Note 1: File system type recommendation is ext4. For guidance on other possible file system types, contact Ciena.
Note 2: If root is being partitioned to a finer granularity, refer to guidelines in Note 2 of Table 4-4 on page 4-10.
Note 3: In a multi-host config, logical volume names for each required file system mount point must be the same on all hosts.
Note 4: No space included for PinPoint, see “Storage space for PinPoint” on page 4-14.
Note 5: Includes 1.5TB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
Note 6: Includes 600GB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
Note 7: Includes 250GB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.


Storage space for historical PMs and NE backups


The disk space required for storing NE PMs and NE backups depends on the
type and number of NEs managed and on the number of days of history kept.
The disk space sizing and partitioning guidelines in Tables 4-2, 4-3, 4-5 and
4-6 provide guidance on how much storage space is needed for your
deployment, and how much space should be allocated to the /opt/ciena/bp2
file system. These values assume that a specific maximum amount of space
will be used to store NE historical PMs and NE backups.

To determine whether more storage space is required, you must calculate the
actual storage requirements based on the type and number of NEs in your
network. Use the values in Table 4-7 on page 4-13 to calculate the space
needed for NE historical PMs and NE backups, then compare the result to the
amount set aside for this purpose in the guidelines provided. If it is more,
the storage space for the deployment must be increased and the extra space
allocated to the /opt/ciena/bp2 file system.

Table 4-7
Storage space required for NE PMs and NE backups
NE type                        NE data size per day (PMs plus 1 NE backup per day)
6500 (per shelf)               70 Mbytes
Waveserver WL3                 4 Mbytes
Waveserver Ai, Waveserver 5    10 Mbytes
5410 (Optical), 5430           Available upon request, contact Ciena.
39xx, 51xx
8700
6200

Example #1

If you have:
• PM retention period of 7 days
• 1 NE backup per day with max 7 backups kept
• NEs in network - 100 x 6500 Photonic TIDc NEs with 5 shelves each
• MCP host configuration where each host has 1TB storage space

Then:
• The storage space required would be approximately 240 GB (100 NEs * 5
shelves per NE * 7 days * 70 Mbytes per day)


• Comparing that value to Table 4-6 on page 4-12, we see that the 1TB
config allows for up to 250GB of space to be used for NE PMs/backups
• No additional storage is needed

Example #2

If you have:
• PM retention period of 7 days
• 1 NE backup per day with max 7 backups kept
• NEs in network - 1,500 x 6500 Photonic NEs with 1 shelf each, and 200
Waveserver Ai NEs
• MCP host configuration where each host has 2TB storage space

Then:
• The storage space required would be approximately 735GB (1,500 NEs *
1 shelf per NE * 7 days * 70 Mbytes per day; plus 200 NEs * 7 days *
10 Mbytes per day)
• Comparing that value to Table 4-6 on page 4-12, we see that the 2TB
config allows for up to 600GB of space to be used for NE PMs/backups
• The storage space for each MCP host must be increased by an additional
135GB, therefore 2.13TB is required for each MCP host
• The /opt/ciena/bp2 file system on each MCP host should be allocated
1,485GB (1,350GB + 135GB)
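The arithmetic in both examples can be generalized from the per-day values in Table 4-7. The sketch below is an illustrative helper, not Ciena tooling; the dictionary keys are labels of our own choosing, and results are in Mbytes (for 6500, count shelves rather than network elements):

```python
# Per-day NE data sizes (PMs plus 1 NE backup per day) from Table 4-7, in Mbytes
MB_PER_DAY = {"6500_shelf": 70, "waveserver_wl3": 4, "waveserver_ai": 10}

def pm_backup_space_mb(counts, retention_days):
    """Total space for NE historical PMs and NE backups, in Mbytes.
    counts maps a Table 4-7 NE type to the number of units."""
    return retention_days * sum(MB_PER_DAY[ne] * n for ne, n in counts.items())

# Example #2: 1,500 single-shelf 6500s plus 200 Waveserver Ai, 7-day retention
print(pm_backup_space_mb({"6500_shelf": 1500, "waveserver_ai": 200}, 7))  # → 749000
```

Compare the computed total against the space reserved for PMs/backups in the chosen host configuration (250GB, 600GB or 1.5TB), as in the examples above.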

Storage space for PinPoint


PinPoint is optional, add-on functionality for fiber fault troubleshooting and
importing KML/KMZ fiber files. This functionality is enabled via the use of an
optional license.

In this MCP release, PinPoint can be used to identify an approximate
geographic location for fiber faults. Third party websites or applications can
then be used to identify a corresponding street level address. No additional
storage space is required when PinPoint is used in this mode.

In some deployment scenarios, it may be possible to enable zooming to the
street level map directly within the PinPoint map. This mode of operation
would need to be planned and customized on a case by case basis (targeted
street level map data would be required, and additional storage space must
be planned for and allocated for your MCP deployment). For details, contact
Ciena.


LAN/WAN requirements

Attention: NAT (Network Address Translation) is not currently supported.

Network delay and bandwidth requirements


Table 4-8 on page 4-15 details the bandwidth requirements between MCP
components.

Table 4-9 on page 4-16 details the bandwidth requirements between MCP and
managed devices.

Table 4-8
MCP bandwidth requirements
Communications channel                             Recommended bandwidth                       Maximum DCN delay (Note 1)
MCP host <-> MCP host (multi-host config)          Between MCP hosts in a multi-host           2 ms
                                                   config (Note 2): 1 Gb/s
MCP <-> Northbound clients                         10 Mb/s per client                          300 ms
(MCP UI or API client)
MCP hosts at GR Site A <-> MCP hosts at GR Site B  Between GR sites (Note 3):                  300 ms
                                                   • 400 Mb/s - For networks with more
                                                     than 2,000 NEUs/NEs
                                                   • 100 Mb/s - For networks with between
                                                     1,000 - 2,000 NEUs/NEs
                                                   • 16 Mb/s - For networks with less than
                                                     1,000 NEUs/NEs
MCP <-> Managed NEs                                See Table 4-9 on page 4-16                  300 ms

Note 1: DCN delay is defined as Round-trip time, RTT; utilities such as ping can be used to help estimate average
RTT between hosts. DCN segments should have no packet loss.
Note 2: When MCP is deployed in a multi-host configuration, all the MCP hosts must be on the same subnet and
must meet the bandwidth and DCN delay requirements (typically this implies they must all be located at the same
data center on the same switch).
Note 3: Using less than the minimum bandwidth required between MCP GR sites may result in data
synchronization delays and failures between the sites. This can impact the ability of the standby site to assume full
management control after a GR switch-over.
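Note 1 suggests utilities such as ping for estimating average RTT. On Linux, the average can be pulled out of the iputils ping summary line programmatically; the helper below is an illustrative sketch that assumes the common `rtt min/avg/max/mdev` summary format:

```python
import re

def avg_rtt_ms(ping_output):
    """Extract the average round-trip time, in ms, from an iputils ping
    summary line; returns None if no summary line is found."""
    m = re.search(r"min/avg/max/[a-z]+ = [\d.]+/([\d.]+)/", ping_output)
    return float(m.group(1)) if m else None

summary = "rtt min/avg/max/mdev = 0.412/1.513/2.832/0.944 ms"
print(avg_rtt_ms(summary))  # → 1.513
```

The extracted average can then be compared against the 2 ms (intra-site) or 300 ms (GR/client/NE) limits in Table 4-8.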


Table 4-9
MCP to NE bandwidth requirements
Communications channel                                             Recommended bandwidth (Note 1, Note 2)
MCP <-> 6500                                                       200 kbits/s per shelf (Note 3)
MCP <-> 3902, 3903, 3903x, 3904, 3905, 3906mvi, 3916, 3920,        40 kbits/s
        3926m, 3928, 3930, 3931, 3932, 3938vi, 3940, 3942, 3960,
        5142, 5150, 5160, 5170, 8700, 6200
MCP <-> 5410 Optical, 5430                                         1.5 Mb/s
MCP <-> Waveserver WL3                                             40 kbits/s
MCP <-> Waveserver Ai, Waveserver 5                                80-100 kbits/s
MCP <-> RLS                                                        80-100 kbits/s

Note 1: The average delay on any segments of the DCN should not exceed 300 ms, and these DCN segments
should have no packet loss.
Note 2: All DCN, DCC, and bandwidth rules must be respected on the NEs being managed. This includes the
scenario where NEs are set up in a GNE (Gateway Network Element) configuration. GNEs must be configured to
support the total bandwidth required to manage all remote NEs through the GNE.
Note 3: Each shelf in a consolidated TID must be counted (for example, a consolidated TID with 5 shelves requires
5x200 kbits/s).
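Per Note 2, a GNE must be sized for the total management bandwidth of all remote NEs reached through it. The totals from Table 4-9 can be summed with a short helper; this is an illustrative sketch (the dictionary keys are our own labels, and the upper end of each 80-100 kbit/s range is used):

```python
# Per-NE management bandwidth from Table 4-9, in kbit/s (upper end of ranges)
KBITS = {"6500_shelf": 200, "saos_l2": 40, "5430": 1500,
         "waveserver_wl3": 40, "waveserver_ai": 100, "rls": 100}

def required_kbits(counts):
    """Total MCP-to-NE bandwidth for the NEs reached through one GNE
    (per Note 3, each shelf of a consolidated 6500 TID counts)."""
    return sum(KBITS[ne] * n for ne, n in counts.items())

# e.g. a GNE fronting a 5-shelf consolidated 6500 TID and four SAOS switches
print(required_kbits({"6500_shelf": 5, "saos_l2": 4}))  # → 1160
```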

Multiple network interfaces


Multiple network interfaces can be configured on the MCP hosts but only one
can be assigned to carry all MCP network management traffic (the use of
special network interface configurations such as bonded interfaces is not
supported yet).

The network interface to be used will be automatically determined from the
configuration of the IP addresses and sub-networks assigned to the MCP
hosts (the first network interface assigned to the specified IP address is used
by default).

IP connectivity and subnet


IP connections are required between all components:
• Between all MCP hosts - IPv4 is required
• Between each MCP host and any MCP clients - IPv4 is required
• Between each MCP host and any managed NEs - IPv4 or IPv6 is
supported (IPv6 is supported as long as the NE in question supports
management via IPv6).

If both IPv4 and IPv6 are used, a single network interface must be used on the
MCP hosts for both. This interface is configured to handle both IPv4 and IPv6
communications, and is commonly referred to as a dual stack configuration.


IP address allocation
Installing MCP includes use of one or more IP addresses for the physical
server(s)/VM(s) where MCP is installed:
• These IPs are chosen by the customer (based on the network subnet the
server/VM is located in).
• For MCP single-host configurations, 1 IP address must be allocated for the
MCP host (that same IP is used when setting the Site IP as well). For MCP
multi-host configurations at least 4 IP addresses must be allocated (1 for
each of the MCP hosts and 1 for the site IP).
• All IPs assigned to MCP hosts in a single-host or multi-host configuration,
including the Site IP, must be on the same subnet. As a result, a subnet of
/32 is not supported for use with MCP (/32 implies 1 IP per subnet).
• If it is not already set up and in place by the IT system administrator,
additional IPs may also be required for setting up the virtualization
infrastructure (eg. setting up VMware ESXi), or for hardware management
ports (eg. Oracle network management ILOM port).
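The same-subnet rule for host and Site IPs can be verified up front with the standard library; the helper below is an illustrative sketch (the addresses shown are hypothetical), and it rejects /32 since that subnet holds only one address:

```python
import ipaddress

def same_subnet(host_ips, site_ip, prefix):
    """Check that all MCP host IPs and the Site IP fall in one IPv4
    subnet; a /32 is rejected (it implies 1 IP per subnet)."""
    if prefix >= 32:
        return False
    nets = {ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
            for ip in list(host_ips) + [site_ip]}
    return len(nets) == 1

# 3+1 multi-host example: 4 host IPs plus the Site IP on a /24
hosts = ["10.1.2.11", "10.1.2.12", "10.1.2.13", "10.1.2.14"]
print(same_subnet(hosts, "10.1.2.10", 24))  # → True
```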

MCP also uses a private IP range for communications between software
components in its internal architecture.
• The default private IP range used in MCP is 172.16.0.0/16 (i.e. translates
to all IPs in the 172.16.0.0 - 172.16.255.255 range).
• This default can be changed via a configuration file during MCP
installation. The format of the entry used in the configuration file is
172.16.0.0/16/24.
• It must not overlap with any other IP range used in the customer's network
(i.e. the private IP range used by MCP must not be routed by the
customer’s DCN).
• The range used by MCP should only be changed if the 172.16.0.0/16
range is currently in use within the customer’s network. If changed,
another 172.x.0.0/16 range must be chosen, where x equals any value
from 16 to 31 (with configuration file entry in the format 172.x.0.0/16/24).
• For geographically redundant (GR) deployments, the 2 sites must use
different ranges. For example, if the default range of 172.16.0.0/16 is used
when installing the first site, a different range, such as 172.17.0.0/16,
should be used when installing the second site.
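The constraints on the private range (a 172.x.0.0/16 range with x from 16 to 31, no overlap with routed customer subnets) can be checked with the standard library `ipaddress` module. This is an illustrative validation sketch, not part of the MCP installer:

```python
import ipaddress

def valid_mcp_private_range(candidate, customer_routed):
    """Candidate must be a 172.x.0.0/16 range with x in 16..31 and must
    not overlap any subnet routed in the customer's DCN."""
    net = ipaddress.ip_network(candidate)
    octets = net.network_address.packed
    if net.prefixlen != 16 or octets[0] != 172 or not 16 <= octets[1] <= 31:
        return False
    return not any(net.overlaps(ipaddress.ip_network(r)) for r in customer_routed)

# 172.16.0.0/16 is already routed, so pick the next range for a GR site
print(valid_mcp_private_range("172.17.0.0/16", ["172.16.0.0/16", "10.0.0.0/8"]))  # → True
```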

MTU size of network interface


The maximum transmission unit (MTU) of a network interface is the size of the
largest packet or frame that can be sent in a single network transaction. The
performance and scalability data provided for MCP assumes an MTU size of
1500. The performance and scalability of MCP may be affected if an MTU size
smaller than 1500 is used.
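A quick audit of interface MTUs against the assumed 1500 can be scripted; the helper below is an illustrative sketch (on Linux, the live value for each interface can be read from /sys/class/net/&lt;iface&gt;/mtu):

```python
def mtu_warnings(iface_mtus, required=1500):
    """Return the interfaces whose MTU is below the 1500 assumed by the
    MCP performance and scalability data. iface_mtus maps name -> MTU."""
    return sorted(name for name, mtu in iface_mtus.items() if mtu < required)

print(mtu_warnings({"eth0": 1500, "eth1": 1400}))  # → ['eth1']
```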


MCP deployments on physical servers or VMs


MCP is deployed on host(s) running a Linux operating system. These hosts
can be physical servers, or VMs (Virtual Machines) in a virtualized
environment.

The choice of using physical servers (bare metal) vs VMs for MCP depends
on the existing customer environment and IT skill set. Both approaches have
benefits and considerations. See “Determine whether physical servers or VMs
will be used” on page 3-3 for details on how to decide whether physical
servers or VMs are the best option for your environment.

If using bare metal servers, MCP has been tested/evaluated on specific server
hardware models. Deploying MCP on a bare metal server not detailed in this
document requires prior approval from Ciena. In this release, MCP has been
tested/evaluated on the following hardware models:
• Oracle X6-2 - Eg. 3 of these servers can be used to create a 2+1 multi-
host configuration matching the specs of the 40 vCPU/96G (2+1) column
in Table 4-2 on page 4-5 (when equipped with 2xE5-2630v4 10-core 2.2
GHz CPUs, 96G RAM, and 5x400GB SSD disks).
• Oracle X7-2 - Eg. 4 of these servers can be used to create a 3+1 multi-
host configuration matching the specs of the 40 vCPU/64G (3+1) column
in Table 4-2 on page 4-5 (when equipped with 2 Intel Xeon Silver 4114 10-
core 2.2 GHz CPUs, 64G RAM, and 3x800GB SSD storage space).
• Oracle X8-2 - Eg. 4 of these servers can be used to create a 3+1 multi-
host configuration matching the specs of the 40 vCPU/64G (3+1) column
in Table 4-2 on page 4-5 (when equipped with 2 Intel Xeon Gold 5218 16-
core 2.3 GHz CPUs, 64G RAM, and 3x800GB SSD storage space; note
this configuration will have slightly more vCPUs than required for MCP).
• HP BL460c G9 blades - Eg. 3 of these blades can be used to create a 2+1
multi-host configuration matching the specs of the 40 vCPU/96G (2+1)
column in Table 4-2 on page 4-5 (when equipped with 2xE5-2640v4 10-
core 2.4GHz CPUs, 96G RAM, and 2x1.2TB SSD disks)


Operating system requirements


Operating system
MCP is supported on the following operating system configuration:
• Operating system - See Table 4-10 on page 4-20
• Ciena System bundle -
— 2020.13.0 bundle is the recommended system bundle for use in
conjunction with MCP 4.2 (at the time of this document release).
2020.18.1 bundle is the recommended system bundle for use in
conjunction with MCP 4.2.2.
— For the most recent guidance, consult the Ciena Portal:
– As a registered user with a my.ciena.com account, log into
https://my.ciena.com.
– Navigate to Software > Manage Control and Plan (MCP).
– To facilitate the search, sort the results by part number by clicking
on the CIENA PART # column heading.
– Find Part# MCP_4_2 and open the associated PDF file in the
RELEASE INFO column
(MCP_4.2_Manifest_Download_Readme.pdf). Use this manifest
file to identify the recommended system bundle.
– Alternatively, find Part# TOC_S_BUNDLE and open the file
(RecommendedSystemBundleAndOSByProduct.pdf). Use this file
to identify the recommended system bundle for your MCP release.
• Virtualization software -
— MCP has been tested in virtualized environments based on
OpenStack and VMware ESXi (minimum: VMware ESXi 5.5 or later;
refer to VMware documentation and to your hardware platform
documentation to verify minimum VMware ESXi release required).
— When deploying in a virtualized environment, the physical storage and
all physical resources associated with the VMs must be static. Using
software applications that dynamically move, migrate or re-allocate the
physical storage or resources associated with the VMs is not
supported (eg. VMware VMotion or any application that does dynamic
replication of the VM to different hardware).

Table 4-10 on page 4-20 details the operating system configurations
supported for MCP 4.2.


Table 4-10
Operating system support for MCP 4.2
Each entry below gives: operating system (Note 1, Note 2, Note 7, Note 8); OS release (Note 3); OS package set (Note 3, Note 4); SELinux (Secure Linux) mode required; supported for new MCP installs (Note 5); supported for MCP upgrades from earlier release.

• Red Hat Enterprise Linux 7.2; Red Hat Enterprise Linux or Oracle Linux 7.3; Red Hat Enterprise Linux, Oracle Linux, or CentOS 7.4
— OS package set: 64-bit Server, Infrastructure Server option (minimum package set)
— SELinux mode required: Disabled
— Supported for new MCP installs: Not Recommended *. This is the last MCP release that will allow new MCP installs on Linux 7.2/7.3/7.4. Upgrades to future MCP releases will require the OS to be running a minimum of Linux 7.5.
— Supported for MCP upgrades from earlier release: Supported. After the MCP upgrade is complete, it is strongly recommended that the OS release be updated as a separate activity (see Note 6). This is the last MCP release that will support upgrades on systems currently running Linux 7.2/7.3/7.4. Upgrades to future MCP releases will require the OS to be running a minimum of Linux 7.5.

• Red Hat Enterprise Linux, Oracle Linux, or CentOS 7.5, 7.6, or 7.7 *
— OS package set: 64-bit Server, Infrastructure Server option (minimum package set)
— SELinux mode required: Disabled, Permissive or Enforcing
— Supported for new MCP installs: Recommended
— Supported for MCP upgrades from earlier release: Supported

Note 1: When MCP is deployed in a multi-host configuration (with or without GR), all MCP hosts must run the same
operating system type and version.
Note 2: MCP has been verified on operating systems with the language set to English. Using other languages is
not currently supported.
Note 3: MCP has been verified against specific sets of operating system releases/packages. MCP installations are
supported on the operating system releases identified in this table. Operating system vendors periodically provide
updates (general updates and security updates). Ciena regularly evaluates these new updates and their
compatibility with MCP. For more details contact Ciena.
Note 4: MCP has been verified against specific sets of operating system packages. Installation of 3rd party
applications is not supported co-resident with MCP, unless approved by Ciena.
Note 5: Industry support and security updates for Linux releases are periodically capped by operating system
vendors. It is strongly recommended that fresh installs of MCP be performed on the most recent OS release that
has been tested with MCP. OS releases identified as “Not recommended” in this table may not be supported in
future releases of MCP.
Note 6: The operating system release should not be changed during an MCP upgrade. MCP release upgrades and
operating system release updates should be treated as separate activities. If the OS release currently in use is listed
in this table as Not Recommended for new MCP installs, it is strongly recommended that the OS release be updated
as a separate activity following the MCP upgrade.
Note 7: MCP provides support for CIS Level1/Level2 Benchmark OS hardening. For details on what policies are
supported, and the MCP procedures that must be applied before installing MCP on a hardened system, contact
Ciena.
Note 8: The use of the operating system firewalld service to manage/maintain rules in iptables is not currently
supported. MCP currently makes use of the operating system iptables functionality (and manages/maintains rules
dynamically via its bpfirewall service).

* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.


Domain Name Service (DNS)


MCP is supported in environments with or without DNS (Domain Name
Service). MCP hosts have no requirements for hostname/IP lookup.

Hostname
In an MCP multi-host configuration, the hostname of each MCP host must be
unique within the configuration.

Site IP
A site IP address must be defined for all configurations.

In an MCP multi-host configuration, an additional virtual Site IP address must
be allocated for use by MCP (in addition to the IP addresses assigned to the
individual MCP hosts). This Site IP must be on the same subnet as the MCP
hosts. To users of the northbound API (for example, MCP UI, orchestration,
and OSS), an MCP multi-host configuration looks like a single logical unit.
This is achieved via the use of a virtual Site IP address. This Site IP address
redirects requests to the appropriate host in the cluster. If one host in the
cluster goes down, requests are automatically redirected. This process makes
the loss of a host in the cluster seamless to clients of the northbound API. The
Site IP is also used by select MCP device management functionality (eg. PM
retrieval from devices).

In an MCP single-host configuration, a Site IP address must also be defined
(to enable functionality like PM retrieval). In this scenario, no additional IP
address needs to be allocated. The Site IP address can be defined as the
same as the MCP host IP address.

Kernel parameters
All required kernel parameter updates are applied automatically when the
Ciena System bundle is installed.

Network Time Protocol (NTP)


An NTP server timing source must be reachable by all MCP hosts during MCP
installation.

The date and time between all hosts where MCP is installed, and between
MCP and managed devices, must be synchronized. This is achieved through
the use of an NTP server timing source. The MCP installation procedures
include steps where one or more NTP servers must be specified (MCP does
not push NTP server settings to managed devices; this must be configured
separately on the devices).


NTP timing sources and method


In environments where a corporate NTP timing source is available, MCP hosts
and managed devices can be configured as NTP clients of this source.
Alternatively, public NTP servers can be used for MCP hosts that have Internet
access.

Only one method of timing synchronization should be used for MCP hosts.
This method is NTP, using the ntpd service. Ensure that all other timing
synchronization methods are disabled so as not to conflict/interfere with NTP
(including but not limited to methods such as VMware Tools time
synchronization). During MCP installation, the BPI installer software
configures the ntpd service for use by MCP (and also disables the chronyd
service, an alternate NTP service that exists by default on the operating
system).

Number of NTP timing sources defined


The number of NTP server timing sources that should be defined, in order
from most to least preferred, is:
• 4 NTP servers - protects against sources reporting incorrect time (referred
to as a falseticker), or sources being unreachable
• 3 NTP servers - minimum number required to allow ntpd to detect if one
source is providing an incorrect time (falseticker)
• 1 NTP server - no risk of conflicting timing sources, but provides no
redundancy

Note: Defining 2 NTP servers is not supported and will be blocked by the
BPI installer software. With 2 sources defined, it is not possible to
determine which timing source is accurate if they conflict.
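The documented choices can be encoded directly. This tiny check is an illustrative helper, not part of the BPI installer; it treats counts other than 1, 3 or 4 as out of policy, which for counts above 4 is an assumption on our part:

```python
def ntp_source_count_ok(n):
    """4 NTP servers preferred, 3 is the minimum for falseticker
    detection, 1 is acceptable; 2 is blocked by the BPI installer
    because conflicting sources cannot be arbitrated."""
    return n in (1, 3, 4)

print(ntp_source_count_ok(2))  # → False
print(ntp_source_count_ok(4))  # → True
```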

BPI Installer
MCP is installed using the BPI installer software. The MCP installation
procedures include steps where the BPI installer software is used to validate
that the hosts used meet MCP engineering requirements. This validation step
checks for free space in target locations using three built-in profiles: small,
medium, and large. These profiles are defined as follows (sizes are in GB):
• Free space in / (root): {small: 50, medium: 50, large: 50}
• Free space in /var/log/ciena: {small: 80, medium: 150, large: 150}
• Free space in /opt/ciena/bp2: {small: 500, medium: 1350, large: 3350}
• Free space in /opt/ciena/data/docker: {small: 70, medium: 150, large: 150}
• Free space in /opt/ciena/loads: {small: 100, medium: 100, large: 100}
• Free space in volume group for Docker thin pool creation: 150 GB
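The profile values above can be transcribed into a table and compared against measured free space before running the installer's own validation. This is an illustrative sketch (the dictionary layout and function name are our own, not BPI internals):

```python
# Free-space checks (GB) for the BPI installer's built-in profiles,
# transcribed from the list above.
PROFILES = {
    "/":                      {"small": 50,  "medium": 50,   "large": 50},
    "/var/log/ciena":         {"small": 80,  "medium": 150,  "large": 150},
    "/opt/ciena/bp2":         {"small": 500, "medium": 1350, "large": 3350},
    "/opt/ciena/data/docker": {"small": 70,  "medium": 150,  "large": 150},
    "/opt/ciena/loads":       {"small": 100, "medium": 100,  "large": 100},
}
THIN_POOL_FREE_GB = 150  # unallocated VG space, same for all profiles

def shortfalls(profile, free_gb):
    """Compare measured free space (GB per mount point) against a
    profile; returns {mount: missing_gb} for anything under target."""
    return {m: req[profile] - free_gb.get(m, 0)
            for m, req in PROFILES.items()
            if free_gb.get(m, 0) < req[profile]}

free = {"/": 60, "/var/log/ciena": 150, "/opt/ciena/bp2": 1000,
        "/opt/ciena/data/docker": 150, "/opt/ciena/loads": 100}
print(shortfalls("medium", free))  # → {'/opt/ciena/bp2': 350}
```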


A fourth profile, mcp-prod-mini, also exists, to be used for smaller lab
deployments. This profile checks for 500 GB in / (root), and 150 GB free space
for Docker thin pool creation.

Port and protocol requirements


This section details the ports used by MCP. The port information provided is
valid for usage with simple single firewall configurations. If your deployment
involves a more complex firewall configuration (such as a dual firewall layer),
contact Ciena® Corporation for additional information.

Table 4-11 on page 4-23 and Table 4-12 on page 4-28 list the ports used by
MCP.

Attention: MCP can be deployed in either a multi-host configuration or a
single-host configuration. An MCP multi-host configuration consists of 3 or
more hosts. These hosts are co-located in the same data center, and there
must not be a firewall between these hosts.

Attention: The following protocols must also be allowed/enabled between
hosts (some VM infrastructure environments block these by default):
- GRE traffic (IP protocol ID 47) - GRE tunnels are used between hosts in a
multi-host config.
- Gratuitous ARP (GARP) - Used for site IP functionality.

Table 4-11
Port information for MCP
Source SrcPort Destination DstPort Proto Description
See Note 1, Note 2, Note 3. For requirements between MCP GR sites, see also Table 4-12 on page 4-28.
Between MCP clients (MCP UI or REST API) and MCP hosts
MCP UI or REST API client any MCP 443 HTTPS Communication between client and MCP (TLS authenticated).
* MCP UI any MCP 80 * HTTP Communication between MCP UI client and MCP.
Websocket client for notifications any MCP (websocket) 80 or 443 WS or WSS Websocket notifications (only if developing a client to receive MCP notifications).

Between the License Server(s) and any MCP/devices requiring licenses
License administrator browser any License Server(s) 4200 HTTP (Optional) Required if direct access to the external License Server UI is desired. The MCP UI provides the ability to query licenses for the License Server(s) it points to.
Device any License Server(s) 7071, 7072 HTTPS (7071), HTTP (7072) Flexera licensing server. Device licenses.
* MCP any License Server(s) 7071, 7072, 7073 * HTTPS (7071), HTTP (7072) Flexera licensing server. MCP licenses.
* MCP UI any License Server(s) 7071 * HTTPS Flexera licensing server. Used on first launch of MCP UI Licensing page to load certificate (cached by browser for future launches).
License Server(s) - HA any License Server(s) - HA 7071, 7072 HTTPS (7071), HTTP (7072) When License Server HA mode is enabled (Note 3, Note 4).
License Server(s) - HA any License Server(s) - HA 2224 TCP When License Server HA mode is enabled (Note 3, Note 4).
License Server(s) - HA any License Server(s) - HA 5404, 5405 UDP When License Server HA mode is enabled (Note 3, Note 4).
License Server(s) - GR site 7071, 7073 License Server(s) - GR site 7071, 7073 HTTPS When License Server GR mode is used (Note 5).
License Server(s) - GR site 22 License Server(s) - GR site 22 RSYNC over SSH When License Server GR mode is used (Note 5).
Between MCP hosts and external applications (only if used)
MCP any External NTP 123 UDP NTP.
Device any or 123 MCP 123 UDP NTP. If using any MCP host as a timing source for any other device.
MCP any External RADIUS 1812 UDP RADIUS. Authentication.
External RADIUS 1812 MCP any UDP RADIUS response. Stateful reply.
Between MCP hosts and 6500 NEs
MCP any 6500 22 TCP Standard SSH port. Recommended for
troubleshooting & tech support (even when
not being used for device management).
6500 any Server for NE Backups 22 TCP NE Maintenance Profile SFTP. Target server for 6500 NE backups.
MCP any 6500 161 UDP NE Profile SNMP. Device management for
6500, only if Packet Fabric cards present.
6500 any MCP 162 UDP SNMP. Trap destination set on 6500, only if
Packet Fabric cards present.

MCP any 6500 161 UDP SNMP. Used for contact with NE before enrolling.
6500 any MCP 161 UDP SNMP. Used for contact with NE before enrolling.

MCP any NE 162 UDP


6500 any MCP 2023/ TCP PM retrievals from device.
2555
MCP UI any MCP 3800 TCP NE comms log
MCP UI any Web-based Site 8080 or HTTP/ Port/protocol as defined in MCP policy for
Manager Server 8443 HTTPS 6500 NE craft launch during MCP installation.
Default port defined on Web-based Site
Manager Server for HTTP/HTTPS access is
8080/8443. This can be changed.
Web-based Site any 6500 22 TCP SSH. Comms Site Manager to device.
Manager Server
MCP any 6500 20000 TCP NE Profile TL1. Device management (SSH).
MCP any 6500 20000 TCP TL1. Used when gaining association with the
MCP any 6500 20001 TCP NE.

MCP any 6500 20002 TCP NE Profile CLI. CLI for 6500, only if Packet
Fabric cards present (SSH).
* Between MCP hosts and RLS NEs
* MCP | any | RLS NE | 22 * | TCP | NE Maintenance Profile SFTP. Target server for RLS NE backups (MCP or other).
* MCP | any | RLS NE | 22 * | TCP | NE Profile CLI. Device management (SSH).
* MCP | any | RLS NE | 830 * | TCP | NE Profile NETCONF. Device management.
* RLS NE | any | MCP | 2023 * | TCP | PM retrievals from device.
Between MCP hosts and Waveserver, Waveserver Ai and Waveserver 5 NEs
Waveserver NE | any | Server for NE Backups | 22 | TCP | NE Maintenance Profile SFTP. Target server for Waveserver NE backups (MCP or other).
MCP | any | Waveserver NE | 22 | TCP | NE Profile CLI. Device management (SSH).
MCP | any | Waveserver NE | 443 | RESTCONF | NE Profile RESTCONF. Device management (comms MCP to device).
Waveserver NE | any | MCP | 443 | RESTCONF | Device management (comms device to MCP).
Waveserver NE | any | MCP | 2023 | TCP | PM retrievals from device.


Between MCP hosts and Layer 2 devices running SAOS 6.x/8.x software (51xx, 39xx, 8700)
L2 Device | any | Server for NE Backups | 22 | TCP | NE Maintenance Profile SFTP. Target server for L2 device backups (MCP or other).
MCP | any | L2 Device | 22 | TCP | NE Profile CLI. L2 device CLI (SSH).
MCP | any | L2 Device | 161 | UDP | NE Profile SNMP. Device management.
L2 Device | any | MCP | 162 | UDP | SNMP. Trap destination set on L2 device.
L2 Device | any | MCP | 1163 * | UDP | SNMP INFORMS. Required only if INFORMS are enabled on the managed device and configured for usage with MCP.
L2 Device | any | MCP | 2023 | TCP | PM retrievals from device.
* Between MCP hosts and Layer 2 devices running SAOS 10.x software (51xx, 8180)
* L2 Device | any | Server for NE Backups | 22 * | TCP | NE Maintenance Profile SFTP. Target server for L2 device backups (MCP or other).
* MCP | any | L2 Device | 22 * | TCP | NE Profile CLI. L2 device CLI (SSH).
* MCP | any | L2 Device | 830 * | TCP | NE Profile NETCONF. Device management for SAOS 10.x devices.
* L2 Device | any | MCP | 2023 * | TCP | PM retrievals from device. Also used for certificate transfers by SAOS 10.x devices.
* MCP | any | L2 Device | 6702 * | TCP | gNMI (gRPC). Fault management for SAOS 10.x devices.
Between MCP hosts and 6200 Packet only NEs
6200 | any | Server for NE Backups | 22 | TCP | NE Maintenance Profile SFTP. Target server for 6200 NE backups (MCP or other).
MCP | any | 6200 | 161 | UDP | NE Profile SNMP. Device management.
6200 | any | MCP | 162 | UDP | SNMP. Trap destination set on 6200.
MCP | any | 6200 | 20080 | HTTP | NE Profile HTTP. Device management (comms MCP to device).
6200 | any | MCP | 80 | HTTP | Device management (comms device to MCP).


Between MCP hosts and 5430/5410 NEs
5430/5410 | any | Server for NE Backups | 22 | TCP | NE Maintenance Profile SFTP. Target server for 5430/5410 NE backups (MCP or other).
MCP | any | 5430/5410 | 22 | TCP | NE Profile CLI. 5430/5410 device CLI (SSH).
MCP | any | 5430/5410 | 80 | HTTP | NE Profile CORBA. Used to retrieve CORBA IOR if not retrievable via SFTP.
MCP | any | 5430/5410 | 161 | UDP | NE Profile SNMP. Device management, only if eSLM cards present.
MCP | any | 5430/5410 | 683 | CORBA | Device management (comms MCP to device). Port as configured on device. Default is 683.
5430/5410 | any | MCP | 5435 * | TCP | PM retrievals from device.
5430/5410 | any | MCP | 12234 | CORBA | Device management (comms device to MCP).
Note 1: All TCP ports have a bidirectional data flow unless otherwise noted.
Note 2: For multi-host configurations: required ports between MCP and managed devices, or MCP and clients,
apply to all of the MCP host IPs; required ports between MCP and clients apply to the MCP site IP as well.
Note 3: When 2 external License Servers are deployed and License Server HA mode is enabled, Virtual Router
Redundancy Protocol (VRRP) is used to implement the virtual IP address that is used by MCP and the NE to
reference the License Server(s). IGMP-based multicast forwarding is used. Both License Servers, as well as the
virtual IP address, must be on the same subnet. See the External License Server User Guide for details.
Note 4: When 2 external License Servers are deployed and License Server HA mode is enabled, required ports
involving the License Server(s) apply to the virtual IP address as well.
Note 5: When License Servers GR mode is used in conjunction with License Server HA mode, required ports
involving the License Server(s) apply to both License Server host IPs at each site.

* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.


Table 4-12
Port information for MCP (between GR sites)

Source | SrcPort | Destination | DstPort | Proto | Description
MCP (any host at SiteA/SiteB) | any | MCP (any host at SiteB/SiteA) | 22 | TCP | SSH.
MCP (any host at SiteA/SiteB) | any | MCP (any host at SiteB/SiteA) | 443 | HTTPS |
MCP (any host at SiteA/SiteB) | any | MCP (any host at SiteB/SiteA) | 500 | UDP | Standard IPSec IKE port.

IPsec tunnels are used between GR sites.
In addition to the required ports, any firewall present on the network path between the endpoints where IPSec traffic
flows must be configured to allow IPSec traffic to be passed (including public/core network). Some of the settings
are vendor-specific router settings, but in general this can usually be achieved on firewalls by doing the following:
• UDP port 500 should be open for traffic flow, to allow Internet Security Association and Key Management
Protocol (ISAKMP/IKE) traffic to be forwarded through the firewall.
• ACL lists should be adjusted to permit IP protocol IDs 50 and 51 on both inbound and outbound firewall filters.
IPSec data traffic does not use Layer 4, so there is no concept of TCP/UDP/port for this traffic (it must therefore
specifically be enabled in firewalls/VPN gateways/routers).
• IP protocol ID 50 should be set to allow IPSec Encapsulating Security Payload (ESP) traffic to be forwarded.
• IP protocol ID 51 should be set to allow Authentication Header (AH) traffic to be forwarded.
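On a Linux-based firewall, the bullets above can be sketched as iptables rules. This is an illustrative sketch only, written to a file for review rather than applied directly; the chain names and directions assume a simple filter setup, and vendor firewalls will use their own syntax.

```shell
# Sketch: iptables rules matching the IPSec bullets above.
# Written to a script for review; do NOT apply blindly on a production firewall.
cat > allow-ipsec.sh <<'EOF'
#!/bin/sh
# Allow IKE/ISAKMP (UDP 500) in both directions.
iptables -A INPUT  -p udp --dport 500 -j ACCEPT
iptables -A OUTPUT -p udp --dport 500 -j ACCEPT
# Permit ESP (IP protocol 50) and AH (IP protocol 51).
# These are not TCP/UDP, so they are matched by protocol, not port.
iptables -A INPUT  -p esp -j ACCEPT
iptables -A OUTPUT -p esp -j ACCEPT
iptables -A INPUT  -p ah  -j ACCEPT
iptables -A OUTPUT -p ah  -j ACCEPT
EOF
echo "review allow-ipsec.sh before running it on the firewall host"
```

NAT traversal (Note 2 below) would additionally require UDP 4500 to be opened end to end.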

Note 1: The use of PAT (Port Address Translation) between GR sites is not supported. ESP (IP protocol 50) is used
for encryption. Since ESP does not use Layer 4 (no TCP/UDP/port), it will be dropped by devices that do PAT
(packets can't be assigned a unique port, so PAT will fail).
Note 2: The use of VPNs with NAT (Network Address Translation) devices on the network path between GR sites
is not recommended, as it requires a much more complex configuration to successfully establish IPSec tunnels. If
the network path (public or private) between GR sites includes VPN routers with NAT devices, the entire path must
be configured to do NAT traversal (e.g. using standard UDP ESP port 4500).

MCP user interface (UI)


The MCP UI is a web user interface. The following browsers are officially
supported:
• Google Chrome (minimum version 74.0.3729.169)
• Mozilla Firefox (minimum version 62.0.2)

It is recommended that the browser used for the MCP UI runs on a platform
with:
• CPU - 64-bit CPU with performance equivalent to or better than an Intel
Dual Core 2.2 GHz
• RAM - 2GB above the minimum requirements of the platform’s operating
system
• Storage - no significant amount of storage space is required

For example, a Windows 10 64-bit operating system generally requires a
minimum of 2 GB of RAM. If running the MCP UI in a Chrome browser on a
Windows 10 64-bit PC, 4 GB of RAM is recommended (2 GB + 2 GB).


External authentication
MCP supports the use of an external RADIUS or an external LDAP server for
authentication. These applications must be installed on separate platforms;
they are not supported co-resident with MCP.

For details refer to the MCP API Reference Guide and the MCP Administration
Guide.



Procedures and guidelines for different network sizes

This chapter details the post-installation procedures to optimize Manage,
Control and Plan (MCP) for the network size being managed, as well as the
operational guidelines that should be taken into account for different network
sizes.

Operational guidelines
The following guidelines should be taken into account.

Device Management
Device management considerations include:
• When network elements are pre-enrolled in bulk from a file on the MCP UI,
it is recommended that the file contain a maximum of 100 entries.
• When selecting multiple NEs in Pending state to Enroll, it is recommended
that a maximum of 100 network elements are selected. Those NEs should
be allowed to complete to Connected and Synchronized state before the
next set of NEs is enrolled.
• When network elements are enrolled in bulk using the REST API interface,
it is recommended that a maximum of 500 devices be enrolled at the same
time.
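The 100-entry limit for file-based pre-enrollment can be respected by splitting a larger list into chunks before uploading each chunk separately. A sketch using GNU coreutils; the filename and entry format here are made-up demo data, not a format MCP prescribes:

```shell
# Sketch: split a large pre-enrollment list into 100-entry chunks so each
# upload stays within the recommended limit.
# "ne_preenroll.csv" and the "ne-N" entries are hypothetical demo data.
seq 1 250 | sed 's/^/ne-/' > ne_preenroll.csv        # 250 demo entries
split -l 100 -d --additional-suffix=.csv ne_preenroll.csv chunk_
wc -l chunk_*.csv                                     # chunks of 100, 100 and 50
```

Each resulting chunk_NN.csv file can then be uploaded on the MCP UI one at a time, waiting for each batch to reach Connected and Synchronized state before the next.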

Installation
Installation considerations include:
• Following the installation of MCP, wait 15-20 minutes before logging into
MCP and performing any actions (to allow for all software components to
complete their initialization).


Network maps
Map considerations include:
• MCP supports the loading of data to be used as backgrounds for the
network map. The storage size taken up by this data depends on both the
map size and the level of detail. If map data including street-level details
is used, the region covered by the map should be limited (e.g. if using map
data for North American regions with dense street-level details, a
maximum of approximately 2 states/provinces can be accommodated; if
street-level details are not included, a much larger region can be
accommodated).

Administration
Administration considerations include:
• When performing an MCP restoration from an MCP backup file, the time
to complete the restore will depend on the size of the managed network.
For large networks this may take several hours to complete.
• For MCP deployments in a geographically redundant configuration, if
there is a failure of the active site, the standby site is activated. This
activation triggers restoration of certain MCP components followed by a
network sync. The time to complete the activation will depend on the size
of the managed network. For large networks this may take several hours
to complete. While the list of network elements is immediately reported in
the Enrollment page, the Dashboard and Network Elements page will not
be fully updated until completion of the restore and network sync.

Services
Services considerations include:
• In a GR configuration, the total number of Transport or Packet services
displayed in the MCP UI may not exactly match between the Active and
the Standby site. Stitching of services is done independently on each site
(and dynamic activity in the managed network may result in some services
being re-stitched dynamically).

REST API clients


REST API considerations include:
• MCP supports the retrieval of historical PM data and historical alarm data
for managed NEs using the REST API interface. These API calls allow
filters to be applied that target the type and time interval of data to be
returned. REST API clients performing bulk queries for this type of
historical data should always apply filters that contain the scope of data
returned to a specific NE and a specific time interval (as an example, the
last 30 minutes on a specific NE). Doing bulk queries of all available PMs
on all managed NEs is not recommended. In all cases a maximum of
10,000 records can be returned by any one API call. If the response would


be too large, an API error response code is returned, indicating that the
size limit for the query has been exceeded. In this case, additional filters
should be applied to the API call to further reduce the scope of data
returned.
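As an illustration of the "one NE, one time window" scoping above, a bulk query string can be assembled before it is sent. The resource path and parameter names below are hypothetical placeholders only; the real resource names and filters must be taken from the MCP API Reference Guide.

```shell
# Sketch: build a scoped historical-PM query (one NE, one 30-minute window)
# instead of an unscoped bulk query.
# The path "/hypothetical/pm/history" and the parameter names neName/startTime/
# endTime are HYPOTHETICAL placeholders, not documented MCP API names.
NE_NAME="6500-siteA-01"            # hypothetical NE name
START="2020-06-01T12:00:00Z"
END="2020-06-01T12:30:00Z"
QUERY="neName=${NE_NAME}&startTime=${START}&endTime=${END}"
echo "GET /hypothetical/pm/history?${QUERY}"
```

An unfiltered query across all managed NEs risks exceeding the 10,000-record limit and receiving an API error response instead of data.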

Historical PM collection
PM considerations include:
• The disk space required for storing historical NE PMs can be significant
for certain NE types (eg. 6500). This should be taken into account when
planning storage space for an MCP deployment (see “Storage space for
historical PMs and NE backups” on page 4-13). For all deployments, once
MCP is installed, the HPM Retention Period should be immediately set to
match the number of days desired and planned for.

Hybrid deployments with OneControl/MCP


Considerations include:
• In some deployment scenarios, NEs may be under the management of
multiple Ciena network management software applications (eg. Ciena’s
OneControl management software may still be in use in conjunction with
MCP, during a transition period). In these scenarios:
— NE backups should only be scheduled and taking place from one NMS
instance (eg. either OneControl or MCP)
— PM collection should only be configured and taking place within one
instance within each NMS type (eg. only enabled on one OneControl
server if multiple servers are in use or in a GR configuration; only
enabled on one MCP server if multiple servers are in use).
— In all cases, settings must be configured to ensure that managed
NEs do not receive NE backup or PM collection requests from
multiple sources at the same time. Concurrent requests can cause the
NE to fail to respond as expected and can impact software features
that rely on PM collection data.


Optimizing MCP for managed network size


Attention: This section applies to both MCP single-host and MCP multi-host
deployments. For GR deployments, procedures must be run on both the
Active and Standby sites.

When MCP is installed, it is engineered with a set of default configuration
values. The following procedures must be run for all MCP installations. These
procedures optimize the MCP configuration to the size of the network being
managed.

Two procedures are required when MCP is installed:
• Procedure to tune OS for optimal swap usage
• Procedure to tune memory settings and scale of selected apps

One optional procedure is available for existing MCP installations:
• Procedure to enable support for new NE types


Procedure to tune OS for optimal swap usage


When and why to use this procedure
This procedure must be run for all MCP installations or upgrades. It applies to
both MCP single-host and MCP multi-host deployments. For GR
deployments, this procedure must be run on both the Active and Standby
sites.

It can be done at any time without impact; however, any swap allocated and
in use will not be released until the next time the MCP hosts are rebooted. As
such, it is recommended that these steps be completed prior to installing or
upgrading MCP.

In this procedure:
• The operating system is configured to optimize usage of memory vs swap
space.

Requirements
Before you start this procedure, make sure that you
• can log in to all MCP hosts using the root account

Steps
1 Log in to the host0 VM as root.
2 Check the current Linux profile.
tuned-adm active
3 If not already configured as such, set the Linux profile to throughput-
performance.
tuned-adm profile throughput-performance
4 Note that current available Linux profiles can be listed if required.
tuned-adm list
5 Repeat steps 1 to 4 for all hosts in your multi-host configuration.
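Steps 2 and 3 can be combined into a simple check. The sketch below parses the assumed `tuned-adm active` output format ("Current active profile: <name>"); the line is stubbed here so the parsing logic can be shown without tuned installed.

```shell
# Sketch: decide whether the profile change in step 3 is needed.
# Assumption: "tuned-adm active" prints "Current active profile: <name>".
active_line="Current active profile: balanced"   # normally: active_line="$(tuned-adm active)"
profile="${active_line##*: }"
if [ "$profile" != "throughput-performance" ]; then
  echo "run: tuned-adm profile throughput-performance"
else
  echo "profile already set"
fi
```

The same check can be repeated per host (step 5), for example over SSH in a multi-host configuration.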


Procedure to tune memory settings and scale of selected apps


When and why to use this procedure
This procedure must be run following all MCP installations or upgrades. It
applies to both MCP single-host and MCP multi-host deployments. For GR
deployments, this procedure must be run on both the Active and Standby
sites.

The Installation Guide or Upgrade Guide indicates where in the install or
upgrade sequence this procedure must be run.

In this procedure:
• Support is enabled for NE types that will be in the managed network.
• The number of instances is adjusted, for one or more apps within the MCP
solution, to adjust to the size of the managed network.
• The memory settings are adjusted, for one or more apps within the MCP
solution, to adjust to the size of the managed network.

Requirements
Before you start this procedure, make sure that you
• can log in to Host 0 using the bpadmin account
• can log into https://my.ciena.com as a registered user with a my.ciena.com
account
• know which of the supported MCP VM configurations is deployed (this will
determine which tuning profile filename to use):
— multi-host or single-host?
— number of vCPUs per MCP host?
— amount of RAM per MCP host?
• know which NE types will be managed in the managed network
• identify the name of the RA component in MCP that is used to manage the
NE types in the managed network, using Table 5-1:


Table 5-1
Name of MCP RA component for each NE type

NE type | Product family name on MCP UI | Name of RA
8180 | Ciena8180 | bpraciena8180
8700 and Packet devices running SAOS 8.x | PN8x | bpraciena8700
MPB | MPB | bpracienampbraman
Packet devices running SAOS 6.x | PN6x | bpracienapacket
RLS | RLS | bpracienarls
Packet devices running SAOS 10.x (except the 8180) | PN10x | bpracienasaos10x
Waveserver, Waveserver Ai, Waveserver 5 | CienaWaveserver | bpracienawaveserver
SubCom | SubCom | bprasubcom
5410/5430 | Ciena54xx | raciena54xx
6200 | Ciena6200 | raciena6200
6500 | Ciena6500 | raciena6500
Z-series | CienaZSeries | razseries

Steps
Downloading MCP tuning profiles package

1 If you have already downloaded the MCP 4.2 Tuning Profiles, skip to step
7. Otherwise continue to step 2.
2 As a registered user with a my.ciena.com account, log into
https://my.ciena.com.
3 Navigate to Software > Manage Control and Plan (MCP).
4 To facilitate the search, sort the results by part number by clicking on the
CIENA PART # column heading.
5 Find Part# MCP_4_2 and open the associated PDF file in the RELEASE
INFO column (MCP_4.2_Manifest_Download_Readme.pdf). Use this
manifest file to identify the MCP 4.2 Tuning Profiles part number
(MCP_TUN_4.2-*).
6 Find the MCP 4.2 Tuning Profiles part number in the list and download the
associated software file (mcp-solution-tuning-profiles-4.2-*.ext.tar).


Extracting the tuning profile files

7 Log in to Host 0 as bpadmin user.


8 As the bpadmin user, use SFTP to transfer the tuning profile files to the
/opt/ciena/loads/4.2 directory of Host 0.
9 Change the directory by entering:
cd /opt/ciena/loads/4.2
10 Extract the contents of the tuning tar file by entering:
tar -xf mcp-solution-tuning-profiles-4.2-*.ext.tar
Identifying the filename of the tuning profile that applies to your MCP configuration

11 List the extracted files


cd mcp-solution-tuning-profiles-4.2-*
ls
The list of files will look similar to the following:
mh_2_plus_1_16_64.yml
mh_2_plus_1_16_96.yml
mh_2_plus_1_32_128.yml
mh_2_plus_1_32_96.yml
mh_2_plus_1_40_128.yml
mh_2_plus_1_40_96.yml
mh_3_plus_1_16_64.yml
mh_3_plus_1_40_128.yml
mh_5_plus_1_40_128.yml
sh_24_128.yml
sh_16_64.yml
sh_16_96.yml
12 Use the information you gathered in the requirements about the MCP VM
configurations to identify which tuning profile applies to your MCP
configuration (multi-host or single-host? number of vCPUs per MCP host?
amount of RAM per MCP host?).
For multi-host deployments, the filenames use the following format
mh_<#hosts>_<#vcpus>_<ram>.yml
For single-host deployments, the filenames use the following format
sh_<#vcpus>_<ram>.yml
Example: If the deployment is a multi-host 2+1 where each host has 40
vCPUs and 128GB RAM, then the tuning profile to use is:
mh_2_plus_1_40_128.yml
Take note of the filename for your MCP configuration (will be used later).
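The filename selection in step 12 can be sketched as a small derivation from the deployment shape gathered in the requirements (the values below are example inputs):

```shell
# Sketch: derive the tuning profile filename from the deployment shape.
MULTI_HOST=yes   # multi-host deployment?
HOSTS=2          # N in an N+1 multi-host configuration
VCPUS=40         # vCPUs per MCP host
RAM_GB=128       # RAM per MCP host, in GB
if [ "$MULTI_HOST" = yes ]; then
  PROFILE="mh_${HOSTS}_plus_1_${VCPUS}_${RAM_GB}.yml"
else
  PROFILE="sh_${VCPUS}_${RAM_GB}.yml"
fi
echo "$PROFILE"   # mh_2_plus_1_40_128.yml
```

The derived name must match one of the files extracted in step 11; only the listed profiles are supported.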

Manage, Control and Plan Engineering Guide


Release 4.2 450-3709-010 Standard Issue 12.03
Copyright© 2016-2020 Ciena® Corporation June 2020
Procedures and guidelines for different network sizes 5-9

Choosing NE types to enable for MCP

Note: If this is the Standby site in an MCP GR configuration, ensure that
the RAs enabled are the same as those enabled on the Active site.
13 Use a text editor to edit the tuning profile you identified in step 12
(example: if using vi, then enter sudo vi <filename> ).
14 Use the information you gathered in the requirements about the NE types
to be managed to identify which RAs need to be enabled.
The beginning of the tuning profile contains a section for each RA type
similar to the following
- category: 'scale'
application: 'raciena6500'
solution: 'mcp'
scale: 3
apply: no
For every RA that needs to be enabled, change the apply line to yes. Make
this change only for those RAs that need to be enabled (do not edit any
other lines in the file). For example, if the raciena6500 RA needs to be
enabled then the entry after the edit will look similar to the following (the
scale number may be different)
- category: 'scale'
application: 'raciena6500'
solution: 'mcp'
scale: 3
apply: yes
15 Save the file.
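For scripted deployments, the edit in steps 13 to 15 can also be done non-interactively. The sketch below uses awk to flip "apply: no" to "apply: yes" only inside the block of one named RA; the sample file, its indentation, and the exact block layout are assumptions based on the excerpt in step 14, so verify the edited file before applying it.

```shell
# Sketch: enable one RA (raciena6500) in a tuning profile without vi.
# profile_sample.yml is demo data mirroring the block layout shown in step 14.
cat > profile_sample.yml <<'EOF'
- category: 'scale'
  application: 'raciena6500'
  solution: 'mcp'
  scale: 3
  apply: no
- category: 'scale'
  application: 'raciena6200'
  solution: 'mcp'
  scale: 2
  apply: no
EOF
awk -v ra=raciena6500 '
  /application:/ { in_ra = ($0 ~ ra) }        # track which RA block we are in
  in_ra && /apply: no/ { sub(/no/, "yes") }   # enable only the named RA
  { print }
' profile_sample.yml > profile_sample.edited.yml
```

Only the raciena6500 block gains "apply: yes"; all other blocks, including raciena6200, are left untouched, matching the "do not edit any other lines" rule in step 14.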
Applying the MCP tuning profile

16 Change the directory by entering:


cd /home/bpadmin/bpi
17 Apply the MCP tuning profile (enter the command on a single line)
./bpi --tune /opt/ciena/loads/4.2/mcp-solution-tuning-
profiles-4.2-*/<filename>
where <filename> is the one you identified in step 12 and edited in steps
13 to 15.

Note: Following the application of the tuning profile on 3+1 or 5+1 multi-
host configurations, it is possible that the MCP System Services page
(Nagios page) reports a critical error against the datomic service
indicating that "dataomic instance N exits in the last hour". In this scenario,
the error message can be ignored and will clear on its own within
approximately 1 hour.


Procedure to enable support for new NE types


When and why to use this procedure
This procedure must be run if a new, supported NE type needs to be
managed by MCP, and that NE type is not currently displayed as an option
when enrolling new NEs. It applies to both MCP single-host and MCP multi-
host deployments. For GR deployments, this procedure must be run on both
the Active and Standby sites.

In this procedure:
• Support is enabled for NE types that were not turned on at MCP
installation time.

Requirements
Before you start this procedure, make sure that you
• can log in to Host 0 using the bpadmin account
• know which of the supported MCP VM configurations is deployed (this will
determine which tuning profile filename to use):
— multi-host or single-host?
— number of vCPUs per MCP host?
— amount of RAM per MCP host?
• know which NE types will be managed in the managed network
• identify the name of the RA component in MCP that is used to manage the
new NE type to be enabled, using Table 5-1.

Steps
Identifying the filename of the tuning profile that applies to your MCP configuration

1 Log in to Host 0 as bpadmin user.


2 Change the directory by entering:
cd /opt/ciena/loads/4.2


3 List the extracted files


cd mcp-solution-tuning-profiles-4.2-*
ls
The list of files will look similar to the following:
mh_2_plus_1_16_64.yml
mh_2_plus_1_16_96.yml
mh_2_plus_1_32_128.yml
mh_2_plus_1_32_96.yml
mh_2_plus_1_40_128.yml
mh_2_plus_1_40_96.yml
mh_3_plus_1_16_64.yml
mh_3_plus_1_40_128.yml
mh_5_plus_1_40_128.yml
sh_24_128.yml
sh_16_64.yml
sh_16_96.yml
4 Use the information you gathered in the requirements about the MCP VM
configurations to identify which tuning profile applies to your MCP
configuration (multi-host or single-host? number of vCPUs per MCP host?
amount of RAM per MCP host?).
For multi-host deployments, the filenames use the following format
mh_<#hosts>_<#vcpus>_<ram>.yml
For single-host deployments, the filenames use the following format
sh_<#vcpus>_<ram>.yml
Example: If the deployment is a multi-host 2+1 where each host has 40
vCPUs and 128GB RAM, then the tuning profile to use is:
mh_2_plus_1_40_128.yml
Take note of the filename for your MCP configuration (will be used later).
Choosing NE types to enable for MCP

Note: This procedure assumes that the tuning profile was already edited
and applied during MCP installation (and therefore still contains the
changes to the file that were done and used to trigger the initial tuning
using “Procedure to tune memory settings and scale of selected apps” on
page 5-6).
Note: If this is the Standby site in an MCP GR configuration, ensure that
the RAs enabled are the same as those enabled on the Active site.
5 Use a text editor to edit the tuning profile you identified in step 4 (example:
if using vi, then enter sudo vi <filename> ).


6 Use the information you gathered in the requirements about the NE types
to be managed to identify which new RA(s) need to be enabled.
The beginning of the tuning profile contains a section for each RA type
similar to the following
- category: 'scale'
application: 'raciena6500'
solution: 'mcp'
scale: 3
apply: no
For every RA that needs to be enabled, change the apply line to yes. Make
this change only for those RAs that need to be enabled (do not edit any
other lines in the file). For example, if the raciena6500 RA needs to be
enabled then the entry after the edit will look similar to the following (the
scale number may be different)
- category: 'scale'
application: 'raciena6500'
solution: 'mcp'
scale: 3
apply: yes
7 Save the file.
Applying the RA scaling of the MCP tuning profile

8 Change the directory by entering:


cd /home/bpadmin/bpi
9 Apply the RA scaling of the MCP tuning profile (enter the command on a
single line)
./bpi --tune /opt/ciena/loads/4.2/mcp-solution-tuning-
profiles-4.2-*/<filename> --playbook-args='--tags
tuning:validate_scale,tuning:execute_scale'
where <filename> is the one you identified in step 4 and edited in steps 5
to 7.


Ordering information

This chapter details the Manage, Control and Plan (MCP) 4.2 ordering
information.

Product codes
Customers can choose a perpetual model or an annual subscription model:
• Perpetual Model -
— MCP Software Perpetual License - This includes a perpetual software
license specific for this software release only. It also includes a 1 year
warranty.
— Perpetual RTU - Right to use the software on the current managed
network size.
— A support subscription (one of the following)
– MCP Select Support (formerly known as Smart Support) - This is
a per year subscription. It allows access to upgrades on an if/when
available basis. It also provides technical support and extended
warranty.
– MCP Comprehensive Support - A higher support level. Includes in-
region support, and enhanced support response and resolution
times.
– MCP Premier Support - The highest support level. Includes
support out of a dedicated team, and best in industry support
response and resolution times.


• Annual Subscription Model -
— MCP Software Annual Subscription License - This is a per year
subscription. It provides the right to use the software during the
subscription term period. It allows access to upgrades on an if/when
available basis during the subscription period, and includes warranty
and technical support during the subscription period. Either the Select,
Comprehensive, or Premier support options can be chosen.
— Annual subscription RTU - Right to use the software on the current
managed network size (per year).

For additional details on support subscription levels, please contact Ciena.

Note that RTUs and support subscriptions are variably priced depending on
various factors (e.g. network size, migration from earlier Ciena management
software platforms, etc.).

Customers can choose between MCP Base functionality or MCP Plus
functionality. In addition, optional Enhanced Troubleshooting capabilities can
be enabled on either MCP Base or MCP Plus deployments:
• MCP Base - MCP Base provides a full range of Manage, Control and Plan
capabilities. It also includes use of the Wave Line Synchronizer Liquid
Spectrum App.
• MCP Plus - MCP Plus provides all the functionality in MCP Base, plus
additional capabilities. In this release of MCP, the following additional
capabilities are enabled with MCP Plus:
— Liquid Spectrum App - Planning Tool Calibrator
— Liquid Spectrum App - Channel Margin Gauge
— Liquid Spectrum App - Bandwidth Optimizer
• MCP Base with Enhanced Troubleshooting - MCP Base, plus
additional enhanced troubleshooting capabilities. In this release of MCP,
the following additional capabilities are enabled with MCP Enhanced
Troubleshooting:
— PinPoint - Fiber fault troubleshooting and ability to import KML/KMZ
fiber files
• MCP Plus with Enhanced Troubleshooting - MCP Plus, plus the
additional enhanced troubleshooting capabilities.


Table 6-1 on page 6-3 lists the MCP product codes for customers using the
Perpetual ordering model. Identify the option and feature set needed to find
the row that applies to your deployment.

Table 6-1
MCP Perpetual product codes
For each feature set, order the RTU and license(s) listed, plus one of the three
support tiers (Select, Comprehensive, or Premier, each per year; see Note 2).

Perpetual (no GR)
• Base
— RTU and license: S16-RTU-MCPBA, 1 x S16-LIC-MCPBA0420
— Support: 80M-MCPBA-SEL / 80M-MCPBA-COM / 80M-MCPBA-PREM
• Base with Enhanced Troubleshooting
— RTU and license: S16-RTU-MCPBAET, 1 x S16-LIC-MCPBAET0420
— Support: 80M-MCPBAET-SEL / 80M-MCPBAET-COM / 80M-MCPBAET-PREM
• Plus
— RTU and license: S16-RTU-MCPPL, 1 x S16-LIC-MCPPL0420
— Support: 80M-MCPPL-SEL / 80M-MCPPL-COM / 80M-MCPPL-PREM
• Plus with Enhanced Troubleshooting
— RTU and license: S16-RTU-MCPPLET, 1 x S16-LIC-MCPPLET0420
— Support: 80M-MCPPLET-SEL / 80M-MCPPLET-COM / 80M-MCPPLET-PREM

Perpetual (with GR)
• Base with GR
— RTU and license: S16-RTU-MCPBAG, 2 x S16-LIC-MCPBAG0420
— Support: 80M-MCPBAG-SEL / 80M-MCPBAG-COM / 80M-MCPBAG-PREM
• Base with GR with Enhanced Troubleshooting
— RTU and license: S16-RTU-MCPBAGET, 2 x S16-LIC-MCPBAGET0420
— Support: 80M-MCPBAGET-SEL / 80M-MCPBAGET-COM / 80M-MCPBAGET-PREM
• Plus with GR
— RTU and license: S16-RTU-MCPPLG, 2 x S16-LIC-MCPPLG0420
— Support: 80M-MCPPLG-SEL / 80M-MCPPLG-COM / 80M-MCPPLG-PREM
• Plus with GR with Enhanced Troubleshooting
— RTU and license: S16-RTU-MCPPLGET, 2 x S16-LIC-MCPPLGET0420
— Support: 80M-MCPPLGET-SEL / 80M-MCPPLGET-COM / 80M-MCPPLGET-PREM

Perpetual Tier Uplift (Note 1)
• Enhanced Troubleshooting only (no GR)
— RTU and license: S16-RTU-MCPETR, 1 x S16-LIC-MCPETR0420
— Support: 80M-MCPETR-SEL / 80M-MCPETR-COM / 80M-MCPETR-PREM
• Enhanced Troubleshooting only (with GR)
— RTU and license: S16-RTU-MCPETR, 2 x S16-LIC-MCPETR0420
— Support: 80M-MCPETR-SEL / 80M-MCPETR-COM / 80M-MCPETR-PREM

Note 1: The Enhanced Troubleshooting only license is available for customers who already have MCP ordered/deployed, and choose to add
Enhanced Troubleshooting functionality after.
Note 2: MCP RTUs are virtual (no paper RTU is shipped). This reduces cost and time for both the customer and Ciena and it is
environmentally friendly. For customers with a legacy procurement-receiving process that cannot accept virtual RTUs, a standard RTU can
be ordered (replace S16-RTU-x codes with S16-RTUS-x).


Table 6-2 on page 6-4 lists the MCP product codes for customers using the
Annual subscription ordering model. Identify the option and feature set
needed to find the row that applies to your deployment.

Table 6-2
MCP Annual subscription product codes
For each feature set, order the RTU and license(s) for one of the three support
tiers (Select, Comprehensive, or Premier, each per year; see Note 2).

Annual (no GR)
• Base
— Select: S16-RTU-MCPBA-SEL-1Y, 1 x S16-LIC-MCPBA-SEL-1Y
— Comprehensive: S16-RTU-MCPBA-COM-1Y, 1 x S16-LIC-MCPBA-COM-1Y
— Premier: S16-RTU-MCPBA-PRM-1Y, 1 x S16-LIC-MCPBA-PREM-1Y
• Base with Enhanced Troubleshooting
— Select: S16-RTU-MCPBAET-SEL, 1 x S16-LIC-MCPBAET-SEL
— Comprehensive: S16-RTU-MCPBAET-COM, 1 x S16-LIC-MCPBAET-COM
— Premier: S16-RTU-MCPBAET-PRM, 1 x S16-LIC-MCPBAET-PREM
• Plus
— Select: S16-RTU-MCPPL-SEL-1Y, 1 x S16-LIC-MCPPL-SEL-1Y
— Comprehensive: S16-RTU-MCPPL-COM-1Y, 1 x S16-LIC-MCPPL-COM-1Y
— Premier: S16-RTU-MCPPL-PRM-1Y, 1 x S16-LIC-MCPPL-PREM-1Y
• Plus with Enhanced Troubleshooting
— Select: S16-RTU-MCPPLET-SEL, 1 x S16-LIC-MCPPLET-SEL
— Comprehensive: S16-RTU-MCPPLET-COM, 1 x S16-LIC-MCPPLET-COM
— Premier: S16-RTU-MCPPLET-PRM, 1 x S16-LIC-MCPPLET-PREM

Annual (with GR)
• Base with GR
— Select: S16-RTU-MCPBAG-SEL-1Y, 2 x S16-LIC-MCPBAG-SEL-1Y
— Comprehensive: S16-RTU-MCPBAG-COM-1Y, 2 x S16-LIC-MCPBAG-COM-1Y
— Premier: S16-RTU-MCPBAG-PRM-1Y, 2 x S16-LIC-MCPBAG-PREM-1Y
• Base with GR with Enhanced Troubleshooting
— Select: S16-RTU-MCPBAGET-SEL, 2 x S16-LIC-MCPBAGET-SEL
— Comprehensive: S16-RTU-MCPBAGET-COM, 2 x S16-LIC-MCPBAGET-COM
— Premier: S16-RTU-MCPBAGET-PRM, 2 x S16-LIC-MCPBAGET-PREM
• Plus with GR
— Select: S16-RTU-MCPPLG-SEL-1Y, 2 x S16-LIC-MCPPLG-SEL-1Y
— Comprehensive: S16-RTU-MCPPLG-COM-1Y, 2 x S16-LIC-MCPPLG-COM-1Y
— Premier: S16-RTU-MCPPLG-PRM-1Y, 2 x S16-LIC-MCPPLG-PREM-1Y
• Plus with GR with Enhanced Troubleshooting
— Select: S16-RTU-MCPPLGET-SEL, 2 x S16-LIC-MCPPLGET-SEL
— Comprehensive: S16-RTU-MCPPLGET-COM, 2 x S16-LIC-MCPPLGET-COM
— Premier: S16-RTU-MCPPLGET-PRM, 2 x S16-LIC-MCPPLGET-PREM

Annual Tier Uplift (Note 1)
• Enhanced Troubleshooting only (no GR)
— Select: S16-RTU-MCPETR-SEL, 1 x S16-LIC-MCPETR-SEL
— Comprehensive: S16-RTU-MCPETR-COM, 1 x S16-LIC-MCPETR-COM
— Premier: S16-RTU-MCPETR-PRM, 1 x S16-LIC-MCPETR-PREM
• Enhanced Troubleshooting only (with GR)
— Select: S16-RTU-MCPETR-SEL, 2 x S16-LIC-MCPETR-SEL
— Comprehensive: S16-RTU-MCPETR-COM, 2 x S16-LIC-MCPETR-COM
— Premier: S16-RTU-MCPETR-PRM, 2 x S16-LIC-MCPETR-PREM

Note 1: The Enhanced Troubleshooting only license is available for customers who already have MCP ordered/deployed, and choose to add
Enhanced Troubleshooting functionality after.
Note 2: MCP RTUs are virtual (no paper RTU is shipped). This reduces cost and time for both the customer and Ciena and it is
environmentally friendly. For customers with a legacy procurement-receiving process that cannot accept virtual RTUs, a standard RTU can
be ordered (replace S16-RTU-x codes with S16-RTUS-x).


Licenses
In this release and recent releases of MCP, the License Server component
used to manage licenses for both MCP and NEs is decoupled from the MCP
software. The License Server is deployed on external VMs/servers. See
External License Server User Guide for hardware requirements, operating
system requirements, and install procedures.

MCP licenses
MCP uses Ciena’s licensing model, which applies to both software and
hardware platforms.

The licensing process consists of the following steps:


1 Order - Customer places order with Ciena.
2 Generate - Ciena generates licenses for the ordered items. The date of
license generation is aligned with the ship date on the purchase order.
3 Receive - Ciena sends an email to the customer that contains the license
Activation Code.
4 Register - Customer registers the license server. This is a one-time
operation. The External License Server installation procedures include
steps for generating a Registration ID file. This Registration ID file is used
to register the license server on the Ciena portal.
5 Activate - Customer activates the license and downloads the license file.
The license is activated on the Ciena portal, by associating the activation
code(s) received with the registered license server (license file format:
ciena-<LSname>-<yymmddHHMM>.lic).
6 Install - Customer installs/loads the license file on the License Server.
A valid license file must be loaded into the License Server to allow MCP
installation and operation. During MCP installation, the installer enters the IP
address of the License Server that MCP will point to.
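The license file naming convention above can be checked mechanically. The
following is a minimal sketch assuming nothing beyond the documented
ciena-&lt;LSname&gt;-&lt;yymmddHHMM&gt;.lic pattern; the helper name and parsing logic
are illustrative only and are not part of MCP or the License Server:

```python
import re
from datetime import datetime

# License files downloaded from the Ciena portal follow the pattern
# ciena-<LSname>-<yymmddHHMM>.lic (see the Activate step above).
# This only checks the file *name*; license contents are validated
# by the License Server itself.
LIC_NAME = re.compile(r"^ciena-(?P<lsname>.+)-(?P<stamp>\d{10})\.lic$")

def parse_license_filename(filename: str):
    """Return (license server name, activation timestamp) or None."""
    m = LIC_NAME.match(filename)
    if m is None:
        return None
    stamp = datetime.strptime(m.group("stamp"), "%y%m%d%H%M")
    return m.group("lsname"), stamp

# "myls01" here is a hypothetical license server name.
print(parse_license_filename("ciena-myls01-2006151430.lic"))
```

A quick check like this can catch a truncated or renamed download before the
file is loaded onto the License Server.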

Network element licenses


The license servers deployed to provide MCP licenses can also be used to
host licenses for those network element (NE) types that support them. Using
the NE craft interface, each NE can be configured to point to the IP
address(es) of the license server(s).

In the current MCP release, 6500 and Waveserver NE types can be pointed
to the license servers using MCP. If this is done:
• MCP provides the ability to automatically set the license server IP address
on the NE (IPv4 NEs only). This is done using the license server
commissioning policy functionality. This policy can be enabled/disabled,
and has a parameter to control whether the value should be overwritten by
MCP if it is already populated on the NE.
• If License Server HA mode is enabled, NEs are configured to point to the
virtual IP address of the LS HA cluster.


• If License Server HA+GR is used, NEs that support 2 License Servers
reference both License Server virtual IP addresses (e.g. 6500 NE types
running release 12.4 or later). NEs that support 1 License Server
reference the active License Server virtual IP address.

Upgrades
If upgrading to MCP 4.2 from an earlier MCP release:
• The same product order codes are used for both fresh installs and
upgrades of MCP.
• Customers using the perpetual model
— Place an order for the new order codes to allow for MCP 4.2 installs/
upgrades.
• Customers using the annual subscription model
— If the upgrade is during the 1-year annual subscription period,
customers do not need to contact Ciena to have a new/updated annual
subscription license generated for them. These customers can access
the Ciena portal and simply re-download a new copy of the license file
for the existing site. This new license will enable the customer to
upgrade to the latest supported active MCP release (the annual
subscription license already loaded will not enable MCP 4.2 installs
until an updated copy of the license file is downloaded from the Ciena
portal).
— If the subscription period has ended, or is nearing the end, contact
Ciena to renew the annual subscription for another year to continue to
use the software.


Appendix A - Deployment examples

This chapter provides examples of how MCP can be deployed on specific
hardware platform models.

Deployment examples and hardware
In some customer networks, an IT infrastructure already exists for the
deployment of new VMs to house software applications. In this scenario, the
existing infrastructure can be used to turn up the VMs required for MCP, if that
infrastructure can provide VMs that meet all MCP resource requirements. This
includes all requirements in this document (most notably the CPU benchmark
requirements, Docker storage disk speed requirements, and DCN delay and
bandwidth requirements).

For customer networks where there is no existing infrastructure to provide
VMs for MCP to be hosted on, customers may choose to build the required
MCP VMs from server hardware and virtualization software such as VMware
ESXi.

Alternatively, MCP is also supported, in a limited fashion, for deployments
directly on hardware platforms without any virtualization software. These are
sometimes referred to as bare metal installs. In this release, support for bare
metal installs is only available on specific hardware models (see “MCP
deployments on physical servers or VMs” on page 4-18).

This chapter provides some examples for various scenarios, for different
network sizes. For multi-host configurations in production deployments, the 3
(or 4) hosts should be deployed on different hardware in order to take
advantage of the local redundancy benefits a multi-host configuration
provides (e.g. protecting against hardware failure of 1 VM).

Example - Managing up to 20,000 NEUs/10,000 NEs on HP blades with VMs
Table 7-1 on page 7-2 shows how to use HP BL660c G9 blades to create an
MCP multi-host configuration (2+1), to manage up to 20,000 NEUs/10,000
NEs.


Table 7-1
Example deployment for network with 20,000 NEUs/10,000 NEs
Physical enclosure and components

Enclosure HP BladeSystem c3000 or c7000 (c3000 fits 4 full-height blades; c7000 fits 8 full-height blades)

Blades 3 blades, with 1 VM configured on each blade

Interconnect Any interconnect that provides a minimum of 1Gb/s (and that does not use NAT’ing)

VMs/Blades (all 3 configured the same way)

MCP Host 0 VM MCP Host 1 VM MCP Host 2 VM

Blade model BL660c G9 BL660c G9 BL660c G9

Blade size Full-height Full-height Full-height

ESXi version ESXi 5.5 or later ESXi 5.5 or later ESXi 5.5 or later

CPUs (Note 1) 2 x E5-4627v4 (10-core, 2.6 GHz) or 2 x E5-4650v4 (14-core, 2.2 GHz);
total vCPUs: 40 (same on all three blades)

RAM (Note 2) 128 GB 128 GB 128 GB

Disks 4x1.2TB local SSDs 4x1.2TB local SSDs 4x1.2TB local SSDs

Disk speeds must meet requirements in Table 4-5 on page 4-11.

Storage configuration (all 3 VMs configured the same way)

Disks and volume groups - Configure the 4 disks on each blade/VM the same way. Disks split into:
• Boot partition - first 1 GB of Disk0
• Volume Group for OS - next 500 GB of Disk0 (this VG can have any name)
• Volume Group for Docker - remaining 700 GB on Disk0 + all 3.6 TB space on Disks 1/2/3 (if the
VG name is not vg_sys, note the name so it can be entered during the installation procedures)

Volume Group for OS - Create:
• swap - 32 GB
• / (root file system) - 318 GB (LVM optional)
• /var/log/ciena - 150 GB

Volume Group for Docker - Create 3 logical volumes inside the volume group:
• /opt/ciena/bp2 - 3.8 TB
• /opt/ciena/data/docker - 150 GB
• /opt/ciena/loads - 200 GB
Ensure that after creating the 3 LVs, at least 150 GB is left unallocated within the VG. If needed,
lower /opt/ciena/bp2 slightly to achieve this.

Note 1: The speed of the E5-4650v4 CPU is lower than recommended for MCP; however, this is balanced out by
the extra cores it has. As a result, it provides similar performance to the E5-4627v4.
Note 2: When using 64 GB RAM or higher, no extra RAM is required to account for blade/VM management overhead.
Therefore, in this example, the total RAM needed for the blade is equal to the total RAM required by MCP.
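The unallocated-space requirement in the Docker volume group can be
sanity-checked with simple arithmetic. The following is an illustrative sketch
(not an MCP tool) using the sizes from Table 7-1; 1 TB is approximated as
1000 GB, so the exact usable capacity reported by LVM will differ slightly:

```python
# Docker volume group capacity from Table 7-1, in GB:
# remaining 700 GB on Disk0 plus all 3.6 TB on Disks 1/2/3.
vg_docker_capacity = 700 + 3 * 1200

# The three logical volumes created inside the Docker VG.
logical_volumes = {
    "/opt/ciena/bp2": 3800,
    "/opt/ciena/data/docker": 150,
    "/opt/ciena/loads": 200,
}

unallocated = vg_docker_capacity - sum(logical_volumes.values())
print(f"unallocated: {unallocated} GB")
# The table requires at least 150 GB left free in the VG; if this
# fails, shrink /opt/ciena/bp2 slightly.
assert unallocated >= 150
```

The same budgeting applies to the other deployment examples in this chapter:
total LV sizes plus the required unallocated headroom must fit within the
volume group capacity.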


Example - Managing up to 20,000 NEUs/15,000 NEs on Oracle X7-2 servers
Table 7-2 on page 7-3 shows how to use Oracle X7-2 rack mount servers to
create an MCP multi-host configuration (3+1), to manage up to 20,000 NEUs/
15,000 NEs.

Table 7-2
Example deployment for network with 15,000 NEUs/NEs
Physical servers

Server model 4 Oracle X7-2 rack mount servers

Interconnect Network interface cards on these servers must be connected to a switch/router at a minimum of
1Gb/s, on the same subnet, and must meet all latency and bandwidth requirements (see “Network
delay and bandwidth requirements” on page 4-15).

Server configuration (all 4 servers configured the same way)

Server 0 Server 1 Server 2 Server 3

Virtualization None (this is not a VM based deployment; each server has the operating system directly installed)

CPUs 2 x Intel Xeon Silver 4114 (10-core, 2.2 GHz); total vCPUs: 40 (same on all four servers)

RAM 128 GB 128 GB 128 GB 128 GB

Disks 5x800GB local SSDs 5x800GB local SSDs 5x800GB local SSDs 5x800GB local SSDs

Disk speeds must meet requirements in Table 4-5 on page 4-11.

Storage configuration (all 4 servers configured the same way)

Disks and volume groups - Storage space split into:
• Boot partition - 1 GB
• Volume Group for OS and Docker - all remaining space (if the VG name is not vg_sys, note the
name so it can be entered during the installation procedures)

Volume Group for OS and Docker - Create 6 logical volumes inside the volume group:
• swap - 24 GB
• / (root file system) - 70 GB
• /var/log/ciena - 150 GB
• /opt/ciena/bp2 - 3,350 GB
• /opt/ciena/data/docker - 150 GB
• /opt/ciena/loads - 100 GB
Ensure that after creating the 6 LVs, approximately 150 GB is left unallocated within the VG.


Example - Managing up to 10,000 NEUs/NEs on HP blades with VMs


Table 7-3 on page 7-4 shows how to use HP BL460c G9 blades to create an
MCP multi-host configuration (2+1), to manage up to 10,000 NEUs/NEs.

Table 7-3
Example deployment for network with 10,000 NEUs/NEs
Physical enclosure and components

Enclosure HP BladeSystem c3000 or c7000 (c3000 fits 8 half-height blades; c7000 fits 16 half-height blades)

Blades 3 blades, with 1 VM configured on each blade

Interconnect Any interconnect that provides a minimum of 1Gb/s (and that does not use NAT’ing)

VMs/Blades (all 3 configured the same way)

MCP Host 0 VM MCP Host 1 VM MCP Host 2 VM

Blade model BL460c G9 BL460c G9 BL460c G9

Blade size Half-height Half-height Half-height

ESXi version ESXi 5.5 or later ESXi 5.5 or later ESXi 5.5 or later

CPUs 2 x E5-2640v4 (10-core, 2.4 GHz); total vCPUs: 40 (same on all three blades)

RAM 96 GB 96 GB 96 GB

Disks 2x1.2TB local SSDs 2x1.2TB local SSDs 2x1.2TB local SSDs

Disk speeds must meet requirements in Table 4-5 on page 4-11.

Storage configuration (all 3 VMs configured the same way)

Disks and volume groups - Configure the 2 disks on each blade/VM the same way. Disks split into:
• Boot partition - first 1 GB of Disk0
• Volume Group for OS - next 400 GB of Disk0 (this VG can have any name)
• Volume Group for Docker - remaining 800 GB on Disk0 + all 1.2 TB space on Disk 1 (if the VG
name is not vg_sys, note the name so it can be entered during the installation procedures)

Volume Group for OS - Create:
• swap - 24 GB
• / (root file system) - 218 GB (LVM optional)
• /var/log/ciena - 150 GB

Volume Group for Docker - Create 3 logical volumes inside the volume group:
• /opt/ciena/bp2 - 1.5 TB
• /opt/ciena/data/docker - 150 GB
• /opt/ciena/loads - 200 GB
Ensure that after creating the 3 LVs, at least 150 GB is left unallocated within the VG. If needed,
lower /opt/ciena/bp2 slightly to achieve this.

Note 1: When using 64 GB RAM or higher, no extra RAM is required to account for blade/VM management overhead.
Therefore, in this example, the total RAM needed for the blade is equal to the total RAM required by MCP.


Example - Managing up to 200 NEUs/NEs on a single-host VM


Table 7-4 on page 7-5 shows an example of using a VM from an existing IT
infrastructure to create an MCP single-host deployment, to manage up to 200
NEUs/NEs.

Table 7-4
Example deployment for network with 200 NEUs/NEs

VM provided by IT

Virtual CPUs 16 vCPUs (20 vCPUs if license server is installed co-resident and managing
more than 100 NEs)

RAM 64 GB (80 GB if license server is installed co-resident and managing more than
100 NEs)

Storage space 1 TB - Disk speeds must meet requirements in Table 4-5 on page 4-11.

Storage configuration

Storage space and volume groups - Storage space split into:
• Boot partition - 1 GB
• Volume Group for OS and Docker - all remaining space (if the VG name is
not vg_sys, note the name so it can be entered during the installation
procedures)

Volume Group for OS and Docker - Create 6 logical volumes inside the volume group:
• swap - 16 GB
• / (root file system) - 70 GB
• /var/log/ciena - 80 GB
• /opt/ciena/bp2 - 500 GB
• /opt/ciena/data/docker - 70 GB
• /opt/ciena/loads - 100 GB
Ensure that after creating the 6 LVs, approximately 150 GB is left unallocated
within the VG.


Example - Lab only single-host VM
Table 7-5 on page 7-6 shows an example of using a VM from an existing IT
infrastructure to create a lab-only MCP single-host deployment, to manage up
to 20 NEUs/NEs. This configuration is only supported for testing in a small lab
environment, with a limited number of NEs, for non-performance related
testing (for fresh installs only, not upgrades; the storage space associated with
this configuration is not sufficient for performing upgrades).

Table 7-5
Example deployment for lab with 20 NEs

VM provided by IT

Virtual CPUs 8 vCPUs

RAM 64 GB

Storage space 500 GB - In this deployment, the 500 GB space can be used to house both the
OS and the Docker storage disk contents.
Use for This configuration is supported for lab deployments only, with a limited
number of NEs, for non-performance related testing.

Storage configuration

Storage space and volume groups - Storage space split into:
• Boot partition - 1 GB
• Volume Group for OS and Docker - all remaining space (if the VG name is
not vg_sys, note the name so it can be entered during the installation
procedures)

Volume Group for OS and Docker - Create 5 logical volumes inside the volume group:
• swap - 8 GB
• / (root file system) - 70 GB
• /opt/ciena/bp2 - 120 GB
• /opt/ciena/data/docker - 100 GB
• /opt/ciena/loads - 100 GB
Ensure that after creating the 5 LVs, approximately 100 GB is left unallocated
within the VG.


Appendix B - Scale and memory values used during optimization of MCP
for managed network size

In this release of MCP, the procedures to optimize the MCP configuration for
the size of the network being managed have been automated (including
scaling and memory adjustments for MCP apps). See “Optimizing MCP for
managed network size” on page 5-4 for the new procedures.

This chapter is for reference only and details the scale/memory adjustments
that are applied for each MCP configuration when using the new procedures.
The values detailed here are those applied when using version 4.2-85 of the
MCP tuning profiles, which is the recommended tuning profiles version at the
time of this document release (for the most up to date guidance always consult
the Ciena Portal). See Table 8-1 on page 8-2, and Table 8-2 on page 8-3.


Table 8-1
Number of instances set up by tuning profiles for selected apps (for each network size)
Configuration being deployed (Note 1)

Number of VMs Multi-host Single-host
6 (5+1) 4 (3+1) 3 (2+1) 1

Virtual CPUs 40 40 16 40 32 40 32 16 16 24 16 16

RAM (GB) 128 128 64 128 128 96 96 96 64 128 96 64

RAs (Resource adapters)

bpraciena8180 (Def=0) 3* 1*

bpraciena8700 (Def=0) 15 * 8 4 6 4 2 2 1

bpracienampbraman (Def=0) 3* 1*

bpracienapacket (Def=0) 40 * 32 * 4 16 * 8* 4 2 2 1

bpracienarls (Def=0) 3* 1*

bpracienasaos10x (Def=0) 30 * 20 * 4* 16 * 8* 4 2 2 1

bpracienawaveserver (Def=0) 20 * 20 * 5 8* 6* 4* 4* 2 1

bprasubcom (Def=0) 3* 1*

raciena54xx (Def=0) 10 * 10 * 3* 3* 1*

raciena6200 (Def=0) 15 * 8 4 6 4 2 2 1

raciena6500 (Def=0) 40 * 40 * 7 16 * 14 8* 4 8 4 2

razseries (Def=0) 3* 1*

Other apps

aeprocessor (Def=2) 6 4 3 Default (1) Default (1) Default (1)
collectd (Def=3) Default(3)

elasticsearch (Def=3) Default(3)

heka (Def=3) Default(3)

kafka (Def=3) Default(3)

nrpe (Def=3) Default(3)

nsi (Def=3) Default(3)

Note 1: Any apps not listed are left at their default scale settings. Only those RAs that are selected to be enabled
will be scaled.

* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.


Table 8-2
Memory settings set up by tuning profiles for selected apps (for each network size)
Configuration being deployed (Note 1)

Number of VMs Multi-host Single-host

6 (5+1) 4 (3+1) 3 (2+1) 1

Virtual CPUs 40 40 16 40 32 40 32 16 16 24 16 16

RAM (GB) 128 128 64 128 128 96 96 96 64 128 96 64

App name and heap memory values for each app

pce MAX_HEAP_SIZE 31G 12G 24G 8G 4G 4G

stitcher MAX_HEAP_SIZE 31G 12G 24G 8G 4G 4G

mcpview HEAP_SIZE 10G 6G 8G 2G 2G 2G

bpocore XMS 1024M Default

XMX 8192M

datomic XMS 4096M 256M 256M

XMX 4096M 4096M 4096M

nsi XMX 2048M

kafka - Default (see Note 2 for upgrades)

discovery XMX 2048M Default

pm - Default (see Note 3) Default

cassandra XMS 2048M Default Default

XMN 2048M Default

XMX 8192M 4096M

elasticsearch XMS 2048M Default Default

XMN 1500M 1500M 1500M

XMX 8192M 6144M 6144M

Note 1: Any apps not listed are left at their default settings.
Note 2: If upgrading from a previous release of MCP, there may be custom parameter settings present for kafka that are removed
in this release (offsets.retention.minutes, num.replica.fetchers).
Note 3: Some pm parameters are modified for large configurations (collection.workers).
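The XMS/XMX/XMN rows in Table 8-2 correspond to the standard JVM heap
options -Xms (initial heap), -Xmx (maximum heap), and -Xmn (young
generation). The tuning profiles apply these values automatically; the sketch
below only illustrates what the settings mean, and the helper function is
hypothetical, not an MCP tool:

```python
# Illustrative mapping of the XMS/XMX/XMN table rows onto the
# standard JVM heap flags.
def jvm_heap_flags(xms=None, xmx=None, xmn=None):
    flags = []
    if xms:
        flags.append(f"-Xms{xms}")  # initial heap size
    if xmx:
        flags.append(f"-Xmx{xmx}")  # maximum heap size
    if xmn:
        flags.append(f"-Xmn{xmn}")  # young-generation size
    return " ".join(flags)

# cassandra on the largest multi-host configuration in Table 8-2:
print(jvm_heap_flags(xms="2048M", xmx="8192M", xmn="2048M"))
# -> -Xms2048M -Xmx8192M -Xmn2048M
```

Apps listed with "Default" simply keep the JVM defaults shipped with the
corresponding container.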



Manage, Control and Plan

Engineering Guide

Copyright© 2016-2020 Ciena® Corporation. All rights reserved.

Release 4.2
Publication: 450-3709-010
Document status: Standard
Issue 12.03
Document release date: June 2020

CONTACT CIENA
For additional information, office locations, and phone numbers, please visit the Ciena
web site at www.ciena.com
