450-3709-010 (MCP R4.2 Engineering Guide) 12.03
Engineering Guide
Release 4.2
What’s inside...
Introduction
Deployment options
How to size and engineer MCP
Engineering guidelines
Procedures and guidelines for different network sizes
Ordering information
Appendix A - Deployment examples
Appendix B - Scale and memory values used during optimization of MCP for managed network size
Publication history
June 2020
Issue 12.03 of the MCP Engineering Guide, 450-3709-010.
March 2020
Issue 12.02 of the MCP Engineering Guide, 450-3709-010.
September 2019
Issue 10.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.
July 2019
Issue 8.03 of the Blue Planet MCP Engineering Guide, 450-3709-010. This
issue applies to both the MCP 3.0 software load and the MCP 3.0.1 software
load.
April 2019
Issue 08.02 of the Blue Planet MCP Engineering Guide, 450-3709-010. This
issue applies to both the MCP 3.0 software load and the MCP 3.0.1 software
load.
February 2019
Issue 08.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.
November 2018
Issue 07.04 of the Blue Planet MCP Engineering Guide, 450-3709-010.
October 2018
Issue 07.03 of the Blue Planet MCP Engineering Guide, 450-3709-010. This
issue applies to both the MCP 18.06.00 software load and the MCP 18.06.01
software load.
September 2018
Issue 07.02 of the Blue Planet MCP Engineering Guide, 450-3709-010.
September 2018
Issue 07.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.
June 2018
Issue 06.03 of the Blue Planet MCP Engineering Guide, 450-3709-010.
April 2018
Issue 06.02 of the Blue Planet MCP Engineering Guide, 450-3709-010.
December 2017
Issue 05.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.
November 2017
Issue 04.02 of the Blue Planet MCP Engineering Guide, 450-3709-010.
September 2017
Issue 04.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.
July 2017
Issue 03.01 of the Blue Planet MCP Engineering Guide, 450-3709-010.
Contents
Introduction 1-1
Product overview 1-1
Documentation 1-2
Device support 1-2
External License server support 1-3
Upgrades 1-3
Site IP 4-21
Kernel parameters 4-21
Network Time Protocol (NTP) 4-21
BPI Installer 4-22
Port and protocol requirements 4-23
MCP user interface (UI) 4-28
External authentication 4-29
This document provides the engineering guidelines for the Manage, Control
and Plan (MCP) product.
Introduction
This chapter provides a brief overview of the Manage, Control and Plan (MCP)
product.
Product overview
MCP is Ciena’s next generation multi-layer Software Defined Networking
(SDN) and Network Management System (NMS) platform with integrated
network planning functionality that combines a web-scale platform, industry-
leading SDN functionality and open interfaces.
Documentation
The most recent documentation is available on the Ciena Portal.
• As a registered user with a my.ciena.com account, log into
https://my.ciena.com.
• Navigate to Documentation > Manage Control and Plan (MCP) > Release
#.
• This location contains MCP documents and/or pointers to additional
locations where documents can be downloaded and how they can be
viewed.
• In this release of MCP, some documents have moved from a PDF format to
an HTML format.
The following documents are available in an HTML format. They are packaged
within the MCP software as online help. The package can also be downloaded
for offline viewing using a web browser:
• For details on the functions and features
— MCP Release Notes (for MCP 4.2.x and later releases)
— MCP Administration Guide
— MCP API Guide
— MCP Geo-Redundancy Guide
— MCP Security Guide
— MCP Troubleshooting Guide
— MCP User Guide
• For details on the installations and upgrades:
— MCP Installation Guide
— MCP Upgrade Guide
Device support
For the latest list of devices, configurations and features supported, refer to
MCP Release Notes.
Upgrades
The following upgrade paths are supported (at the time of this document
release). For the most recent guidance, refer to MCP Release Notes.
• MCP 3.0 to MCP 4.2
• MCP 3.0.1 to MCP 4.2
• MCP 4.0 to MCP 4.2
• MCP 4.0.1 to MCP 4.2
Unless otherwise indicated by Ciena, the MCP release being upgraded from
can be at any MCP patch level before the upgrade process is started.
Deployment options
This chapter details the Manage, Control and Plan (MCP) deployment
options.
To users of the MCP user interface (UI) and clients of the MCP northbound
API, the MCP multi-host configuration looks like a single logical unit. It is
accessed using a virtual Site IP address. This Site IP address redirects
requests to the appropriate node in the cluster. If one node in the cluster goes
down, requests are redirected. This process makes the loss of nodes in the
cluster seamless to clients of the northbound API.
Note: License Server HA should be used for all deployments with more
than 200 NEUs/NEs.
Table 2-1
Single-host MCP deployments

Configuration for up to 200 NEUs/NEs:
• Target use - Deployments with a smaller number of NEs (up to 200 NEUs/NEs). If the network size is anticipated to grow beyond the supported limits of a single-host configuration (1,000 NEUs/NEs), a multi-host configuration should be used instead.
• No. of VMs/servers required - MCP: 1
• Figure reference - Figure 2-1 on page 2-4

Configuration for greater than 200 NEUs/NEs (up to 1,000):
• Target use - Deployments with a smaller number of NEs (up to 1,000 NEUs/NEs). If the network size is anticipated to grow beyond the supported limits of a single-host configuration (1,000 NEUs/NEs), a multi-host configuration should be used instead.
• No. of VMs/servers required - MCP: 1
• Figure reference - Figure 2-2 on page 2-4
Figure 2-1
MCP single-host configuration (up to 200 NEUs/NEs)
Figure 2-2
MCP single-host configuration (greater than 200 NEUs/NEs)
Table 2-2 on page 2-5 provides details for single-host MCP deployments with
GR.
Table 2-2
Single-host MCP deployments with GR
Target use - Deployments with a smaller number of NEs (up to 1,000 NEUs/NEs). If the network size is anticipated to
grow beyond the supported limits of a single-host configuration (1,000 NEUs/NEs), a multi-host configuration should
be used instead.
Figure 2-3
MCP single-host GR configuration
Table 2-3 on page 2-7 provides details for multi-host MCP deployments.
Table 2-3
Multi-host MCP deployments
Target use - Deployments with a larger number of NEs (greater than 1,000 NEUs/NEs)
Figure 2-4
MCP 2+1 multi-host configuration
Table 2-4 on page 2-9 provides details for multi-host MCP deployments with
GR.
Table 2-4
Multi-host MCP deployments with GR
Target use - Deployments with a larger number of NEs (greater than 1,000 NEUs/NEs)
Figure 2-5
MCP 2+1 multi-host GR configuration
How to size and engineer MCP
This chapter provides an overview of how to size and engineer your MCP
deployment.
Different types of NEs can place a different load on the software because
some NE types are capable of reporting a larger number of ports and
connections than other NE types. Because of this, the concept of a network
element (NE) equivalent unit is used. In order to engineer your network, you
must calculate the total number of NEs, and the total number of NE equivalent
units in your network.
Use Table 4-1 on page 4-2 to calculate the following for the network to be
managed by MCP (taking into account any planned growth of the network):
• Total NEs - An NE is a device that will be enrolled into MCP (for 6500 TIDc
NEs only the primary shelf is enrolled into MCP; all shelves in a 6500 TIDc
NE are managed as one NE).
• Total NEUs - For NE types that can have multiple shelves, such as 6500
TIDc NEs, first determine the total number of shelves of each 6500 type.
For all other NE types determine the total number of NEs of each type.
Then multiply each count by the applicable NEU value, and add all for the
total.
• L0 total NEs - The subset of the total NE count that is Layer 0 devices (eg.
Waveserver, Waveserver Ai, Waveserver 5, 6500 Photonic, RLS, 8180
photonic).
• L1 total OTN CP shelves - The total number of Layer 1 OTN control plane
enabled shelves (eg. all 6500 OTN CP enabled shelves, all 54xx CP NEs).
• L2 total NEs - The subset of the total NE count that is Layer 2 devices (eg.
39xx, 51xx, 8700, 6200, 8180 packet, Z-series).
• L2 total 6200 NEs - The subset of the total NE count that is 6200 devices.
Determine the following about the network to be managed (taking into account
any planned growth of the network):
• L0 - Max wavelength services present in the network.
• L1 - Max services present in the network.
• L2 - Max packet service endpoints present in the network.
• L2 - Max unprotected LSPs present in the network.
• L3 - Max services present in the network
Identify the smallest configuration that will support the total NEU/NE/service
counts as calculated in “Determine the total number of NEUs/NEs to be
managed” on page 3-1 and in “Determine the total number of services to be
managed” on page 3-2. Smaller networks (under 1,000 NEU/NEs) can usually
be managed by an MCP single-host configuration. Larger networks usually
require an MCP multi-host configuration.
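For illustration, the calculation above can be written out as a short script. The following Python sketch (not part of MCP; the inventory is a made-up example) applies the per-type NEU values from Table 4-1 and indicates which deployment family is likely to apply.

# Sizing sketch (illustration only). NEU values per NE type are taken from
# Table 4-1; the inventory below is a made-up example network.
neu_values = {"3960": 1, "5160": 3, "Waveserver Ai": 3,
              "6500 PKT/OTN 1200G (per shelf)": 1}
inventory = {"3960": 120, "5160": 40, "Waveserver Ai": 80,
             "6500 PKT/OTN 1200G (per shelf)": 300}

total_neus = sum(neu_values[t] * count for t, count in inventory.items())

# For the NE total, a multi-shelf 6500 TIDc counts as one NE even though every
# shelf contributes to the NEU total. Assume the 300 shelves form 100 TIDc NEs.
total_nes = 120 + 40 + 80 + 100

print(f"Total NEUs = {total_neus}, total NEs = {total_nes}")
if total_neus <= 1000 and total_nes <= 1000:
    print("A single-host configuration is usually sufficient (confirm against Table 4-3).")
else:
    print("A multi-host configuration is required (see Table 4-2).")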
Review “Storage space for historical PMs and NE backups” on page 4-13 to
determine if additional storage space is needed.
The main factor that determines the number of required external License
Server VMs/servers is what type of MCP deployment is used (single-host or
multi-host; GR or non-GR).
Refer to the External License Server User Guide for hardware requirements.
Note: In all cases the servers or VMs used as MCP hosts must meet all
MCP resource requirements. This includes all requirements in this
document (most notably the CPU benchmark requirements, Docker
storage disk speed requirements, and DCN delay and bandwidth
requirements).
The choice of using bare metal vs VMs for MCP depends on the existing
customer environment and IT skill set. Both approaches have benefits and
considerations:
• Virtual Machines (VMs)
— Require knowledge of virtualization software and associated IT
administration.
— Require some additional hardware resources (CPU/RAM/storage) for
the virtualization software (eg. VMware ESXi).
— May incur licensing costs for the virtualization software in use.
— Provide a level of abstraction for the operating system from the
hardware model in use, eliminating OS dependencies/requirements
for hardware drivers, etc. This can simplify IT management of OS
images.
— Can be easily integrated into existing IT operational practices where
VM infrastructures are already in use.
• Physical hardware servers (bare metal)
— Do not require knowledge of virtualization software and associated IT
administration.
— Allow all hardware resources to be used by the software application.
— Can introduce more complexity for IT management and administration
of OS images and hardware if the other applications in the customer
environment consist primarily of VMs.
— Specific hardware models need to be evaluated or tested to ensure no
conflicts between software applications and hardware-specific drivers.
See “MCP deployments on physical servers or VMs” on page 4-18.
Engineering guidelines
This chapter details the Manage, Control and Plan (MCP) engineering
guidelines.
The sizing requirements in this section apply to both types of installs. For more
details see “Determine whether physical servers or VMs will be used” on page
3-3, and “MCP deployments on physical servers or VMs” on page 4-18.
Table 4-1
NEU value to use for each NE type

NE type and configuration: NEU value
• 3906mvi: 0.2
• 3920: 0.3
• 3960: 1
• 5142: 1
• 5150: 1.5
• 5160: 3
• 5162: 4
• 5170: 4
• 5171: 4 *
• 8180 Layer 2: 4 *
• 8700, 4-slot: 15
• 8700, 10-slot: 45
• Waveserver: 1
• Waveserver Ai (Note 2): 3
• Waveserver 5: 3 *
• 6500 S-series (32-slot, 14-slot, 8-slot) PKT/OTN shelf (Note 1, Note 2)
— PKT/OTN X-Conn - 1200G: 1 (per shelf)
— PKT/OTN X-Conn - 3200G: 2 (per shelf)
• 6500 S-series shelf with PTS cards (Note 1, Note 2)
— With no Control Plane (eg. MPLS only): 2 (per shelf) *
— With Control Plane: 45 (per shelf) *
• RLS: 3 *
Note 1: Each shelf in a consolidated TID must be counted (for example, a consolidated TID with 5 shelves has an
NEU value of 5).
Note 2: The equivalent units value provided is for a network element (NE) that is fully loaded. If the NE is not fully
loaded, the value can be multiplied by the percentage of the NE bandwidth that will be used.
Note 3: MCP 4.2 introduces support for Z-Series devices for select device management functionalities only.
Scalability testing with this release of MCP has been done with up to 200 Z-series devices managed.
* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
Note: MCP can be used in conjunction with the web-based Site Manager
craft interface for 6500 NE types. When this is done, the web-based Site
Manager must be installed on its own separate host. Do not install it on the
same VM/server as MCP (it is not supported co-resident with MCP). Refer
to the MCP Installation Guide, for web-based Site Manager hardware
requirements.
Table 4-2
MCP host sizing requirements for different network sizes - Multi-host deployments
Host resource requirements for multi-host deployment
Multi-host configs consist of 3, 4 or 6 MCP hosts (these requirements are per host) (Note 1, Note 2, Note 3, Note 4, Note 5)
RAM (per MCP host): 128GB 128GB 64GB 128GB 128GB 96GB 96GB 96GB 64GB 64GB
Disks & storage space (per host): see "Disks, storage space and file systems" on page 4-8 for detailed requirements
— OS disk: 500GB (if OS is on a separate disk); 500GB (Note 6)
— Docker storage disk: 4 TB 4 TB 2 TB 4 TB 4 TB 2 TB 2 TB 2 TB 1 TB (Note 6)
Network bandwidth and delay: see "LAN/WAN requirements" on page 4-15 for details.
Total concurrent MCP clients: 200 100 100 100 100 100 100 100 100 10
Total NEUs (total managed by multi-host): 30,000 20,000 1,000 20,000 15,000 10,000 5,000 2,000 500 20
Total NEs (total managed by multi-host): 30,000 15,000 1,000 10,000 10,000 10,000 5,000 2,000 500 20
Per NE type max (see Note 8 for full details):
— L0 - max total NEs: 5,000 * 5,000 * (same as total NEs) 2,000 (same as total NEs)
— L1 - max OTN CP shelves: 500 300 150 150 80 80 10
— L2 - max total NEs: (same as total NEs) (same as total NEs) (same as total NEs) 10,000 (same as total NEs)
— L2 - max 6200 NEs: 5,000 5,000 (same as total NEs) 4,000 (same as total NEs)
Max services
— L0 - Max wavelength services: 10,000 10,000 2,500 10,000 10,000 10,000 8,000 6,000 1,200 500
— L1 - Max services: 50,000 35,000 2,500 20,000 15,000 10,000 8,000 7,000 1,000 500
— L2 - Max packet service endpoints: 200,000 100,000 11,000 65,000 65,000 65,000 25,000 15,000 3,500 500
— L2 - Max unprotected LSPs: 75,000 50,000 7,500 30,000 30,000 30,000 18,000 10,000 2,500 500
Note 1: The physical CPU performance must be equal to or better than an E5-2640v4 (2.4GHz/10-core). Only Intel/x86_64 processor
based platforms/VMs are supported. AMD processors are not supported for use with MCP.
Note 2: The number of vCPUs available on a physical CPU is equal to the number of threads or logical processors. For example,
a system with 2xE5-2640v4 (2.4GHz/10-core) CPUs has 40 vCPUs total (since the cores are dual threaded: 2 CPUs * 10 cores * 2
threads/core). The required CPU resources must be fully reserved for MCP, and not be oversubscribed.
Note 3: For multi-host configurations in production deployments, the MCP VMs must be deployed on different physical servers/
blades in order to allow the high-availability feature of MCP to protect the cluster from the loss of a single physical server/blade. The
loss may be due to a hardware failure, power interruption, intentional shutdown, etc.
Note 4: A minimum of 64GB RAM is currently required for any MCP installation (production or lab installs). All MCP installations
must be deployed with the minimum required RAM for the network size to be managed, as detailed in this document. Using less
RAM is not supported and can result in degraded performance and platform instability.
Note 5: The License Server component is decoupled from the MCP software. The License Server is deployed on external servers,
see External License Server User Guide for the hardware requirements.
Note 6: This configuration is supported for lab deployments only, with limited NEs (for fresh installs only, not upgrades). In this
deployment, 1x500GB disk can be used to house both the OS, as well as the docker storage disk contents. See “Example - Lab
only single-host VM” on page 7-6 for details on the disk configuration for this scenario.
Note 7: In addition to the max concurrent clients guidelines, MCP supports up to a max of 500 defined users.
Note 8: MCP NE scale values are expressed in terms of total NEs managed, total NEUs managed, as well as specific maximums
on a per NE type basis. All the maximums apply. Eg. if a 2+1 config with 32 vCPU, 96G RAM, is being used to manage a network
with only 6500 and Waveserver NE types, then a maximum of 2,000 NEs / 5,000 NEUs can be managed (since the L0 max NE value
is lower than the total NE value).
Note 9: If future network growth is expected to reach a number of NEs/services requiring a 4 host (3+1) or 6 host (5+1) multi-host
config, then MCP should be deployed in this host config from initial install.
Note 10: MCP 4.2 introduces support for L3VPN services. The following scalability testing has been done with this release of MCP:
- MCP Configurations: To date L3 scalability has been characterized for MCP host configurations with 40vCPU/128G (2+1,
3+1 or 5+1). For guidance on any other MCP configurations contact Ciena.
- Number of NEs with L3 enabled: 1,500
- 49 VRFs built between sets of 2 NEs (49 * 750 NE sets = 36750 total VRFs)
- A VRF (virtual routing and forwarding), is a Layer3 service construct; VRFs are members in a L3VPN service.
* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
Table 4-3
MCP host sizing requirements for different network sizes - Single-host deployments
Host resource requirements for single-host deployment
Single-host configs consist of 1 MCP host (Note 1, Note 2, Note 3, Note 4)
Disks & storage space: see "Disks, storage space and file systems" on page 4-8 for detailed requirements
— OS disk: 500GB (if OS is on a separate disk); 500 GB (Note 5)
— Docker storage disk: 1 TB 1 TB 1 TB
Network bandwidth and delay See “LAN/WAN requirements” on page 4-15 for details.
Maximum concurrent MCP clients (REST API and MCP UI) (Note 6)
Max services
Note 1: The physical CPU performance must be equal to or better than an E5-2640v4 (2.4GHz/10-core). Only Intel/x86_64
processor based platforms/VMs are supported. AMD processors are not supported for use with MCP.
Note 2: The number of vCPUs available on a physical CPU is equal to the number of threads or logical processors. For example,
a system with 2xE5-2640v4 (2.4GHz/10-core) CPUs has 40 vCPUs total (since the cores are dual threaded: 2 CPUs * 10 cores
* 2 threads/core). The required CPU resources must be fully reserved for MCP, and not be oversubscribed.
Note 3: A minimum of 64GB RAM is currently required for any MCP installation (production or lab installs). All MCP installations
must be deployed with the minimum required RAM for the network size to be managed, as detailed in this document. Using less
RAM is not supported and can result in degraded performance and platform instability.
Note 4: The License Server component is decoupled from the MCP software. The License Server is deployed on external servers,
see External License Server User Guide for the hardware requirements.
Note 5: This configuration is supported for lab deployments only, with limited NEs (for fresh installs only, not upgrades). In this
deployment, 1x500GB disk can be used to house both the OS, as well as the docker storage disk contents. See “Example - Lab
only single-host VM” on page 7-6 for details on the disk configuration for this scenario.
Note 6: In addition to the max concurrent clients guidelines, MCP supports up to a max of 500 defined users.
* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
The storage space requirements for MCP are expressed in terms of two
categories:
• storage space for the host operating system (OS)
• storage space and physical disk requirements for the Docker storage disk
The MCP software architecture is based on the use of Docker containers, and
the use of a high performance Docker thin pool. The Docker thin pool is the
storage space used by Docker for image and container management (for
details, see "More on the Docker thin pool" on page 4-9).
More on the Docker thin pool
When Docker is used on hosts with Red Hat Enterprise Linux, it uses the
devicemapper storage driver as the storage backend, which defaults to a
configuration mode known as loop-lvm. While this mode is designed to work
out-of-the-box with no additional configuration, production deployments should
not be run under loop-lvm mode, as it does not configure the thin pool for
optimal performance. The configuration required for production deployments is
the direct-lvm configuration mode. This mode uses block devices to create the
thin pool.
MCP uses and configures Docker in this direct-lvm optimal mode. No direct
user configuration is required for this. It is done automatically by the MCP
installation software, as long as the Docker storage disk is configured as per
the requirements in this document (i.e. the required unallocated PE space is
made available inside the volume group with name vg_sys, or with the name
specified by the user in the MCP installation procedures, as per Table 4-5 on
page 4-11).
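For reference, after installation the effective Docker storage configuration can be spot-checked from the host. The following Python sketch is an assumption-laden illustration only: it relies on the docker CLI being available to the user running it and on the field names printed by a typical docker info report; it is not an MCP tool.

# Post-install sanity check (sketch): confirm Docker is using the devicemapper
# driver with a direct-lvm thin pool rather than loop-lvm. Requires the docker
# CLI and sufficient privileges on the MCP host; field names are those printed
# by a typical "docker info" report.
import subprocess

info = subprocess.run(["docker", "info"], capture_output=True, text=True,
                      check=True).stdout

print("devicemapper driver:", "Storage Driver: devicemapper" in info)
print("loop-lvm in use (should be False):", "Data loop file" in info)
pool = next((line.strip() for line in info.splitlines() if "Pool Name" in line),
            "Pool Name: <none reported>")
print(pool)  # with direct-lvm, a thin pool created in the volume group is reported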
Table 4-4 on page 4-10 specifies the storage requirements for the host OS.
Table 4-5 on page 4-11 specifies the storage requirements for the Docker
storage disk.
Table 4-4
Requirements for host operating system (OS) disk
Physical disk(s)
/var/log/ciena: 150 GB (Note 2)
Note 1: MCP can be deployed on physical servers, or VMs (Virtual Machines) in a virtualized
environment. If physical servers are used, it should be noted that some server hardware types (eg. Oracle
X6-2 servers) require that a physical BIOS boot partition of 1MB be present for the operating system
installation and successful operation (consult your hardware documentation and your Linux
operating system documentation for details).
Note 2: It is recommended that all free space left on the disk be assigned to root, after assigning all
other identified file system mount points. If root is being partitioned to a finer granularity (i.e. separate
partitions created for the /home, etc.), the following considerations should be taken into account:
•root - The absolute minimum of 50GB must be assigned to the root. This minimum size accounts
only for the scenario when the root space is used primarily for: operating system files, and space
required for MCP related files outside the Docker storage disk and outside of MCP logging.
•/var - if /var is created as a separate partition, the absolute minimum of 20 GB must be assigned to
/var (not taking into account space required for /var/log/ciena).
•/var/log/ciena - The /var/log/ciena is now the default location used for MCP logging purposes.
•/home/bpuser - A minimum of 20 GB must be free/available for use by the bpuser userid in its home
directory during the MCP installation (bpuser is the owner of the MCP software). By default the
bpuser home directory is set to /home/bpuser (i.e. if /home is created as a separate partition, assign
a minimum of 20GB).
Note 3: The use of LVM (Logical Volume Manager) for the root file system is optional.
Table 4-5
Requirements for Docker storage disk - When OS is on separate disk
Physical disk(s)
Disk size Depends on host size (use the number/size of disks needed to get the total required disk space for the host size
chosen, as detailed in Table 4-2 on page 4-5).
Disk speed Disk(s) must be very fast and directly attached. The only supported options meeting these speed specifications are:
• Local solid state disks (SSD) with:
— 4K random read of at least 85,000 IOPS or better
— 4K random write of at least 43,000 IOPS or better
— sequential read/write of up to at least 500 MB/s or better
— Eg. Intel® SSD Data Center S3710 Series drives meet these specifications
• SAN (Storage Area Network):
— directly attached via: Fiber Channel (at least 8Gb/s), or iSCSI (dedicated network; at least 10 Gb/s)
— 4K random read of at least 85,000 IOPS or better; 4K random write of at least 43,000 IOPS or better; sequential
read/write of up to at least 500 MB/s or better
Volume group
VG name: Volume group name can be set to any name during OS installation (must be the same on all hosts). Used by MCP
installation software. MCP assumes the name is vg_sys. If it is different, the MCP installation procedures include steps
where the non-default VG name can be entered by the user.
File system (FS) (Note 1, Note 2)
— FS mount point: /opt/ciena/bp2 (Note 3)
— LV size with 4TB disk: 3,500 GB (Note 4)
— LV size with 2TB disk: 1,500 GB (Note 5)
— LV size with 1TB disk: 500 GB (Note 6)
— FS type: ext4
Note 1: File system type recommendation is ext4. For guidance on other possible file system types, contact Ciena.
Note 2: In a multi-host config, logical volume names for each required file system mount point must be the same on all hosts.
Note 3: No space included for PinPoint, see “Storage space for PinPoint” on page 4-14.
Note 4: Includes 1.5TB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
Note 5: Includes 600GB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
Note 6: Includes 250GB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
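The disk speed figures above can be spot-checked informally before committing a host to MCP. The sketch below is an illustration only (it assumes the fio utility is installed and uses an example test-file location on the Docker storage disk); it is not a Ciena qualification procedure.

# Informal 4K random-read spot check with fio (sketch only). The test file path
# is an example location on the Docker storage disk; the directory must exist
# and have at least 1 GB free. Compare the result against the figures above
# (at least 85,000 read IOPS / 43,000 write IOPS).
import json
import subprocess

TEST_FILE = "/opt/ciena/bp2/fio-test.bin"  # example path; remove the file afterwards

result = subprocess.run(
    ["fio", "--name=randread", "--filename=" + TEST_FILE, "--rw=randread",
     "--bs=4k", "--size=1G", "--direct=1", "--ioengine=libaio", "--iodepth=32",
     "--runtime=30", "--time_based", "--output-format=json"],
    capture_output=True, text=True, check=True)

iops = json.loads(result.stdout)["jobs"][0]["read"]["iops"]
print(f"4K random read: {iops:.0f} IOPS (requirement: 85,000 or better)")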
Table 4-6 on page 4-12 specifies the storage requirements for the host OS and
the application when deployed on the same space/disk.
Table 4-6
Requirements for Docker storage disk - When OS is on same disk
Physical disk(s)
Disk size Depends on host size (use the number/size of disks needed to get the total required disk space for the host size
chosen, as detailed in Table 4-2 on page 4-5).
Disk speed Disk(s) must be very fast and directly attached. The only supported options meeting these speed specifications are:
• Local solid state disks (SSD) with:
— 4K random read of at least 85,000 IOPS or better
— 4K random write of at least 43,000 IOPS or better
— sequential read/write of up to at least 500 MB/s or better
— Eg. Intel® SSD Data Center S3710 Series drives meet these specifications
• SAN (Storage Area Network):
— directly attached via: Fiber Channel (at least 8Gb/s), or iSCSI (dedicated network; at least 10 Gb/s)
— 4K random read of at least 85,000 IOPS or better; 4K random write of at least 43,000 IOPS or better;
sequential read/write of up to at least 500 MB/s or better
Volume group
VG name: Volume group name can be set to any name during OS installation (must be the same on all hosts). Used by MCP
installation software. MCP assumes the name is vg_sys. If it is different, the MCP installation procedures include steps
where the non-default VG name can be entered by the user.
File system (FS) (Note 1, Note 2, Note 3)
— FS mount point: boot - LV with 4TB: 1 GB; LV with 2TB: 1 GB; LV with 1TB: 1 GB; FS type: -
— FS mount point: / (root file system) - LV with 4TB: 70 GB; LV with 2TB: 70 GB; LV with 1TB: 70 GB; FS type: -
Note 1: File system type recommendation is ext4. For guidance on other possible file system types, contact Ciena.
Note 2: If root is being partitioned to a finer granularity, refer to guidelines in Note 2 of Table 4-4 on page 4-10.
Note 3: In a multi-host config, logical volume names for each required file system mount point must be the same on all hosts.
Note 4: No space included for PinPoint, see “Storage space for PinPoint” on page 4-14.
Note 5: Includes 1.5TB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
Note 6: Includes 600GB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
Note 7: Includes 250GB for historical PM/NE data, see “Storage space for historical PMs and NE backups” on page 4-13.
In order to determine if more storage space is required, you must calculate the
actual storage requirements based on the type and number of NEs in your
network. Use the values in Table 4-7 on page 4-13 to calculate the space
needed for NE historical PMs and NE backups. The value should then be
compared to determine if it is more than what has been set aside for this
purpose in the guidelines provided. If it is more, the storage space for the
deployment must be increased and the extra space allocated to the
/opt/ciena/bp2 file system.
Table 4-7
Storage space required for NE PMs and NE backups
NE type NE data size per day
(PMs plus 1 NE backup per day)
Example #1
If you have:
• PM retention period of 7 days
• 1 NE backup per day with max 7 backups kept
• NEs in network - 100 x 6500 Photonic TIDc NEs with 5 shelves each
• MCP host configuration where each host has 1TB storage space
Then:
• The storage space required would be approximately 240 GB (100 NEs * 5
shelves per NE * 7 days * 70 Mbytes per day)
• Comparing that value to Table 4-6 on page 4-12, we see that the 1TB
config allows for up to 250GB of space to be used for NE PMs/backups
• No additional storage is needed
Example #2
If you have:
• PM retention period of 7 days
• 1 NE backup per day with max 7 backups kept
• NEs in network - 1,500 x 6500 Photonic NEs with 1 shelf each, and 200
Waveserver Ai NEs
• MCP host configuration where each host has 2TB storage space
Then:
• The storage space required would be approximately 735GB (1,500 NEs *
1 shelf per NE * 7 days * 70 Mbytes per day; plus 200 NEs * 7 days *
10 Mbytes per day)
• Comparing that value to Table 4-6 on page 4-12, we see that the 2TB
config allows for up to 600GB of space to be used for NE PMs/backups
• The storage space for each MCP host must be increased by an additional
135GB, therefore 2.13TB is required for each MCP host
• The /opt/ciena/bp2 file system on each MCP host should be allocated
1,485GB (1,350GB + 135GB)
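The arithmetic used in the two examples can be generalized as follows. This Python sketch is illustrative only and hard-codes the per-day figures quoted in the examples (roughly 70 Mbytes per 6500 Photonic shelf and 10 Mbytes per Waveserver Ai NE); use the values from Table 4-7 for other NE types.

# Storage sizing sketch for historical PMs and NE backups (illustration only).
# Per-day figures are those quoted in the examples; see Table 4-7 for other NE
# types. Allowances are the amounts already included in the Docker storage disk
# sizing (see the notes of Table 4-5 / Table 4-6).
MB_PER_DAY = {"6500 Photonic shelf": 70, "Waveserver Ai": 10}
ALLOWANCE_GB = {"1TB": 250, "2TB": 600, "4TB": 1500}

def required_gb(counts, retention_days):
    """counts maps an NE type (or shelf) name to how many are in the network."""
    total_mb = sum(MB_PER_DAY[t] * n * retention_days for t, n in counts.items())
    return total_mb / 1000.0

# Example #1 above: 100 TIDc NEs x 5 shelves, 7-day retention, 1TB hosts.
need = required_gb({"6500 Photonic shelf": 100 * 5}, retention_days=7)  # ~245 GB
allow = ALLOWANCE_GB["1TB"]
extra = max(0.0, need - allow)
print(f"Required ~{need:.0f} GB; allowance {allow} GB; add {extra:.0f} GB to /opt/ciena/bp2")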
LAN/WAN requirements
Table 4-9 on page 4-16 details the bandwidth requirements between MCP and
managed devices.
Table 4-8
MCP bandwidth requirements
Communications channel Recommended bandwidth Maximum DCN delay (Note 1)
Note 1: DCN delay is defined as Round-trip time, RTT; utilities such as ping can be used to help estimate average
RTT between hosts. DCN segments should have no packet loss.
Note 2: When MCP is deployed in a multi-host configuration, all the MCP hosts must be on the same subnet and
must meet the bandwidth and DCN delay requirements (typically this implies they must all be located at the same
data center on the same switch).
Note 3: Using less than the minimum bandwidth required between MCP GR sites may result in data
synchronization delays and failures between the sites. This can impact the ability of the standby site to assume full
management control after a GR switch-over.
Table 4-9
MCP to NE bandwidth requirements
Communications channel Recommended bandwidth (Note 1, Note 2)
Note 1: The average delay on any segments of the DCN should not exceed 300 ms, and these DCN segments
should have no packet loss.
Note 2: All DCN, DCC, and bandwidth rules must be respected on the NEs being managed. This includes the
scenario where NEs are set up in a GNE (Gateway Network Element) configuration. GNEs must be configured to
support the total bandwidth required to manage all remote NEs through the GNE.
Note 3: Each shelf in a consolidated TID must be counted (for example, a consolidated TID with 5 shelves requires
5x200 kbits/s).
If both IPv4 and IPv6 are used, a single network interface must be used on the
MCP hosts for both. This interface is configured to handle both IPv4 and IPv6
communications, and is commonly referred to as a dual stack configuration.
IP address allocation
Installing MCP includes use of one or more IP addresses for the physical
server(s)/VM(s) where MCP is installed:
• These IPs are chosen by the customer (based on the network subnet the
server/VM is located in).
• For MCP single-host configurations, 1 IP address must be allocated for the
MCP host (that same IP is used when setting the Site IP as well). For MCP
multi-host configurations at least 4 IP addresses must be allocated (1 for
each of the MCP hosts and 1 for the site IP).
• All IPs assigned to MCP hosts in a single-host or multi-host configuration,
including the Site IP, must be on the same subnet. As a result, a subnet of
/32 is not supported for use with MCP (/32 implies 1 IP per subnet).
• If it is not already set up and in place by the IT system administrator,
additional IPs may also be required for setting up the virtualization
infrastructure (eg. setting up VMware ESXi), or for hardware management
ports (eg. Oracle network management ILOM port).
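Because all MCP host IPs and the Site IP must share one subnet, a quick pre-install check along the following lines can catch mis-allocated addresses. The subnet and addresses shown are placeholders; the sketch uses only the Python standard library.

# Sketch: verify that the Site IP and all MCP host IPs fall inside one subnet.
# The subnet and addresses below are placeholders; substitute the allocation
# planned for your deployment.
import ipaddress

subnet = ipaddress.ip_network("10.10.20.0/24")  # a /32 subnet is not supported for MCP
site_ip = ipaddress.ip_address("10.10.20.10")
host_ips = [ipaddress.ip_address(a)
            for a in ("10.10.20.11", "10.10.20.12", "10.10.20.13")]

for ip in [site_ip, *host_ips]:
    if ip not in subnet:
        raise SystemExit(f"{ip} is not in {subnet}; all MCP IPs must share one subnet")
print(f"All {1 + len(host_ips)} addresses are in {subnet}")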
The choice of using physical servers (bare metal) vs VMs for MCP depends
on the existing customer environment and IT skill set. Both approaches have
benefits and considerations. See “Determine whether physical servers or VMs
will be used” on page 3-3 for details on how to decide whether physical
servers or VMs are the best option for your environment.
If using bare metal servers, MCP has been tested/evaluated on specific server
hardware models. Deploying MCP on a bare metal server not detailed in this
document requires prior approval from Ciena. In this release, MCP has been
tested/evaluated on the following hardware models:
• Oracle X6-2 - Eg. 3 of these servers can be used to create a 2+1 multi-
host configuration matching the specs of the 40 vCPU/96G (2+1) column
in Table 4-2 on page 4-5 (when equipped with 2xE5-2630v4 10-core 2.2
GHz CPUs, 96G RAM, and 5x400GB SSD disks).
• Oracle X7-2 - Eg. 4 of these servers can be used to create a 3+1 multi-
host configuration matching the specs of the 40 vCPU/64G (3+1) column
in Table 4-2 on page 4-5 (when equipped with 2 Intel Xeon Silver 4114 10-
core 2.2 GHz CPUs, 64G RAM, and 3x800GB SSD storage space).
• Oracle X8-2 - Eg. 4 of these servers can be used to create a 3+1 multi-
host configuration matching the specs of the 40 vCPU/64G (3+1) column
in Table 4-2 on page 4-5 (when equipped with 2 Intel Xeon Gold 5218 16-
core 2.3 GHz CPUs, 64G RAM, and 3x800GB SSD storage space; note
this configuration will have slightly more vCPUs than required for MCP).
• HP BL460c G9 blades - Eg. 3 of these blades can be used to create a 2+1
multi-host configuration matching the specs of the 40 vCPU/96G (2+1)
column in Table 4-2 on page 4-5 (when equipped with 2xE5-2640v4 10-
core 2.4GHz CPUs, 96G RAM, and 2x1.2TB SSD disks)
Table 4-10
Operating system support for MCP 4.2
Table columns: Operating system (Note 1, Note 2, Note 7, Note 8); OS release (Note 3); OS package set (Note 3, Note 4);
SELinux (Secure Linux) mode required; Supported for new MCP installs (Note 5); Supported for MCP upgrades from earlier release
Note 1: When MCP is deployed in a multi-host configuration (with or without GR), all MCP hosts must run the same
operating system type and version.
Note 2: MCP has been verified on operating systems with the language set to English. Using other languages is
not currently supported.
Note 3: MCP has been verified against specific sets of operating system releases/packages. MCP installations are
supported on the operating system releases identified in this table. Operating system vendors periodically provide
updates (general updates and security updates). Ciena regularly evaluates these new updates and their
compatibility with MCP. For more details contact Ciena.
Note 4: MCP has been verified against specific sets of operating system packages. Installation of 3rd party
applications is not supported co-resident with MCP, unless approved by Ciena.
Note 5: Industry support and security updates for Linux releases are periodically capped by operating system
vendors. It is strongly recommended that fresh installs of MCP be performed on the most recent OS release that
has been tested with MCP. OS releases identified as “Not recommended” in this table may not be supported in
future releases of MCP.
Note 6: The operating system release should not be changed during an MCP upgrade. MCP release upgrades and
operating system release updates should be treated as separate activities. If the OS release currently in use is listed
in this table as Not Recommended for new MCP installs, it is strongly recommended that the OS release be updated
as a separate activity following the MCP upgrade.
Note 7: MCP provides support for CIS Level1/Level2 Benchmark OS hardening. For details on what policies are
supported, and the MCP procedures that must be applied before installing MCP on a hardened system, contact
Ciena.
Note 8: The use of the operating system firewalld service to manage/maintain rules in iptables is not currently
supported. MCP currently makes use of the operating system iptables functionality (and manages/maintains rules
dynamically via its bpfirewall service).
* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
Hostname
In an MCP multi-host configuration, the hostname of each MCP host must be
unique within the configuration.
Site IP
A site IP address must be defined for all configurations.
Kernel parameters
All required kernel parameter updates are applied automatically when the
Ciena System bundle is installed.
Network Time Protocol (NTP)
The date and time between all hosts where MCP is installed, and between
MCP and managed devices, must be synchronized. This is achieved through
the use of an NTP server timing source. The MCP installation procedures
include steps where one or more NTP servers must be specified (MCP does
not push NTP server settings to managed devices; this must be configured
separately on the devices).
Only one method of timing synchronization should be used for MCP hosts.
This method is NTP, using the ntpd service. Ensure that all other timing
synchronization methods are disabled so as not to conflict/interfere with NTP
(including but not limited to methods such as VMware Tools time
synchronization). During MCP installation, the BPI installer software
configures the ntpd service for use by MCP (and also disables the chronyd
service, an alternate NTP service that exists by default on the operating
system).
Note: Defining 2 NTP servers is not supported and will be blocked by the
BPI installer software. With 2 sources defined, it is not possible to
determine which timing source is accurate if they conflict.
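After the BPI installer has configured timing, a check along the following lines can confirm that ntpd is the only active time-synchronization service. This sketch assumes a systemd-based host (as used by the supported Linux releases) and is not part of the MCP software.

# Sketch: confirm ntpd is the active time source and chronyd is not running
# (the BPI installer performs the actual configuration during MCP installation).
# Assumes a systemd-based host; run on every MCP host.
import subprocess

def state(service):
    result = subprocess.run(["systemctl", "is-active", service],
                            capture_output=True, text=True)
    return result.stdout.strip() or "unknown"

print("ntpd:", state("ntpd"))        # expected: active
print("chronyd:", state("chronyd"))  # expected: inactive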
BPI Installer
MCP is installed using the BPI installer software. The MCP installation
procedures include steps where the BPI installer software is used to validate
that the hosts used meet MCP engineering requirements. This validation step
checks for free space in target locations using three built-in profiles: small,
medium, and large. These profiles are defined as follows (sizes are in GB):
• Free space in / (root): {small: 50, medium: 50, large: 50}
• Free space in /var/log/ciena: {small: 80, medium: 150, large: 150}
• Free space in /opt/ciena/bp2: {small: 500, medium: 1350, large: 3350}
• Free space in /opt/ciena/data/docker: {small: 70, medium: 150, large: 150}
• Free space in /opt/ciena/loads: {small: 100, medium: 100, large: 100}
• Free space in volume group for Docker thin pool creation: 150 GB
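These free-space figures can also be checked by hand before the installer is run. The sketch below compares the medium profile values listed above against the actual mount points; it is an illustration only, and the authoritative validation remains the BPI installer itself.

# Sketch: compare free space on the installer's target locations against the
# "medium" profile values listed above (sizes in GB). The unallocated space in
# the volume group for the Docker thin pool must be checked separately (eg.
# with vgs); the BPI installer remains the authoritative validation.
import shutil

MEDIUM_PROFILE_GB = {
    "/": 50,
    "/var/log/ciena": 150,
    "/opt/ciena/bp2": 1350,
    "/opt/ciena/data/docker": 150,
    "/opt/ciena/loads": 100,
}

for path, needed_gb in MEDIUM_PROFILE_GB.items():
    try:
        free_gb = shutil.disk_usage(path).free / 1e9
    except FileNotFoundError:
        print(f"{path}: not created/mounted yet")
        continue
    status = "OK" if free_gb >= needed_gb else "SHORT"
    print(f"{path}: {free_gb:.0f} GB free, {needed_gb} GB required -> {status}")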
Table 4-11 on page 4-23 and Table 4-12 on page 4-28 list the ports used by
MCP.
Table 4-11
Port information for MCP
Source SrcPort Destination DstPort Proto Description
See Note 1, Note 2, Note 3. For requirements between MCP GR sites, see also Table 4-12 on page 4-28.
Between MCP clients (MCP UI or REST API) and MCP hosts
MCP UI or REST API client any MCP 443 HTTPS Communication between client and MCP (TLS authenticated).
* MCP UI any MCP 80 * HTTP Communication between MCP UI client and MCP.
Websocket client for notifications any MCP (websocket) 80 or 443 WS or WSS Websocket notifications (only if developing a client to receive MCP notifications).
Between the License Server(s) and any MCP/devices requiring licenses
License administrator browser any License Server(s) 4200 HTTP (Optional) Required if direct access to the external License Server UI is desired. The MCP UI provides the ability to query licenses for the License Server(s) it points to.
Device any License Server(s) 7071 HTTPS, 7072 HTTP Flexera licensing server. Device licenses.
* MCP any License Server(s) 7071 HTTPS, 7072 HTTP, 7073 * Flexera licensing server. MCP licenses.
* MCP UI any License Server(s) 7071 * HTTPS Flexera licensing server. Used on first launch of MCP UI Licensing page to load certificate (cached by browser for future launches).
License Server(s) - HA any License Server(s) - HA 7071 HTTPS, 7072 HTTP When License Server HA mode is enabled (Note 3, Note 4)
License Server(s) - HA any License Server(s) - HA 2224 TCP When License Server HA mode is enabled (Note 3, Note 4)
License Server(s) - HA any License Server(s) - HA 5404, 5405 UDP When License Server HA mode is enabled (Note 3, Note 4)
License Server(s) - GR site 7071, 7073 License Server(s) - GR site 7071, 7073 HTTPS When License Server GR mode is used (Note 5)
License Server(s) - GR site 22 License Server(s) - GR site 22 RSYNC over SSH When License Server GR mode is used (Note 5)
Between MCP hosts and external applications (only if used)
MCP any External NTP 123 UDP NTP.
Device any or 123 MCP 123 UDP NTP. If using any MCP host as a timing source for any other device.
MCP any External RADIUS 1812 UDP RADIUS. Authentication.
External RADIUS 1812 MCP any UDP RADIUS response. Stateful reply.
Between MCP hosts and 6500 NEs
MCP any 6500 22 TCP Standard SSH port. Recommended for troubleshooting & tech support (even when not being used for device management).
6500 any Server for NE Backups 22 TCP NE Maintenance Profile SFTP. Target server for 6500 NE backups.
MCP any 6500 161 UDP NE Profile SNMP. Device management for 6500, only if Packet Fabric cards present.
6500 any MCP 162 UDP SNMP. Trap destination set on 6500, only if Packet Fabric cards present.
MCP any 6500 161 UDP SNMP. Used for contact with NE before enrolling.
6500 any MCP 161 UDP SNMP. Used for contact with NE before enrolling.
MCP any 6500 20002 TCP NE Profile CLI. CLI for 6500, only if Packet Fabric cards present (SSH).
* Between MCP hosts and RLS NEs
* MCP any RLS NE 22 * TCP NE Maintenance Profile SFTP. Target server for RLS NE backups (MCP or other).
* MCP any RLS NE 22 * TCP NE Profile CLI. Device management (SSH).
* MCP any RLS NE 830 * TCP NE Profile NETCONF. Device management.
* RLS NE any MCP 2023 * TCP PM retrievals from device.
Between MCP hosts and Waveserver, Waveserver Ai and Waveserver 5 NEs
Waveserver NE any Server for NE Backups 22 TCP NE Maintenance Profile SFTP. Target server for Waveserver NE backups (MCP or other).
MCP any Waveserver NE 22 TCP NE Profile CLI. Device management (SSH).
MCP any Waveserver NE 443 RESTCONF NE Profile RESTCONF. Device management (comms MCP to device).
Waveserver NE any MCP 443 RESTCONF Device management (comms device to MCP).
Waveserver NE any MCP 2023 TCP PM retrievals from device.
Between MCP hosts and Layer 2 devices running SAOS 6.x/8.x software (51xx, 39xx, 8700)
L2 Device any Server for NE Backups 22 TCP NE Maintenance Profile SFTP. Target server for L2 device backups (MCP or other).
MCP any L2 Device 22 TCP NE Profile CLI. L2 device CLI (SSH).
MCP any L2 Device 161 UDP NE Profile SNMP. Device management.
L2 Device any MCP 162 UDP SNMP. Trap destination set on L2 device.
L2 Device any MCP 1163 * UDP SNMP INFORMS. Required only if INFORMS are enabled on the managed device and configured for usage with MCP.
L2 Device any MCP 2023 TCP PM retrievals from device.
* Between MCP hosts and Layer 2 devices running SAOS 10.x software (51xx, 8180)
* L2 Device any Server for NE Backups 22 * TCP NE Maintenance Profile SFTP. Target server for L2 device backups (MCP or other).
* MCP any L2 Device 22 * TCP NE Profile CLI. L2 device CLI (SSH).
* MCP any L2 Device 830 * TCP NE Profile NETCONF. Device management for SAOS 10.x devices.
* L2 Device any MCP 2023 * TCP PM retrievals from device. Also used for certificate transfers by SAOS 10.x devices.
* MCP any L2 Device 6702 * TCP gNMI (gRPC). Fault management for SAOS 10.x devices.
Between MCP hosts and 6200 Packet only NEs
6200 any Server for NE Backups 22 TCP NE Maintenance Profile SFTP. Target server for 6200 NE backups (MCP or other).
MCP any 6200 161 UDP NE Profile SNMP. Device management.
6200 any MCP 162 UDP SNMP. Trap destination set on 6200.
MCP any 6200 20080 HTTP NE Profile HTTP. Device management (comms MCP to device).
6200 any MCP 80 HTTP Device management (comms device to MCP).
Between MCP hosts and 5430/5410 NEs
5430/5410 any Server for NE Backups 22 TCP NE Maintenance Profile SFTP. Target server for 5430/5410 NE backups (MCP or other).
MCP any 5430/5410 22 TCP NE Profile CLI. 5430/5410 device CLI (SSH).
MCP any 5430/5410 80 HTTP NE Profile CORBA. Used to retrieve CORBA IOR if not retrievable via SFTP.
MCP any 5430/5410 161 UDP NE Profile SNMP. Device management, only if eSLM cards present.
MCP any 5430/5410 683 CORBA Device management (comms MCP to device). Port as configured on device. Default is 683.
5430/5410 any MCP 5435 * TCP PM retrievals from device.
5430/5410 any MCP 12234 CORBA Device management (comms device to MCP).
Note 1: All TCP ports have a bidirectional data flow unless otherwise noted.
Note 2: For multi-host configurations: required ports between MCP and managed devices, or MCP and clients,
apply to all of the MCP host IPs; required ports between MCP and clients apply to the MCP site IP as well.
Note 3: When 2 external License Servers are deployed and License Server HA mode is enabled, Virtual Router
Redundancy Protocol (VRRP) is used to implement the virtual IP address that is used by MCP and the NE to
reference the License Server(s). IGMP-based multicast forwarding is used. Both License Servers, as well as the
virtual IP address must be on the same subnet. See External License Server User Guide for details.
Note 4: When 2 external License Servers are deployed and License Server HA mode is enabled, required ports
involving the License Server(s) apply to the virtual IP address as well.
Note 5: When License Servers GR mode is used in conjunction with License Server HA mode, required ports
involving the License Server(s) apply to both License Server host IPs at each site.
* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
Table 4-12
Port information for MCP (between GR sites)
Source SrcPort Destination DstPort Proto Description
MCP (any host at SiteA/SiteB) any MCP (any host at SiteB/SiteA) 22 TCP SSH.
MCP (any host at SiteA/SiteB) any MCP (any host at SiteB/SiteA) 443 HTTPS
MCP (any host at SiteA/SiteB) any MCP (any host at SiteB/SiteA) 500 UDP Standard IPSec IKE port.
IPsec tunnels are used between GR sites.
In addition to the required ports, any firewall present on the network path between the endpoints where IPSec traffic
flows must be configured to allow IPSec traffic to be passed (including public/core network). Some of the settings
are vendor specific router settings, but in general, this can usually be achieved on firewalls by doing the following:
• UDP port 500 should be open for traffic flow (standard IPSec IKE port).
• UDP Port 500 should be opened to allow Internet Security Association and Key Management Protocol (ISAKMP)
traffic to be forwarded through firewall.
• ACL lists should be adjusted to permit IP protocol ids 50 and 51 on both inbound and outbound firewall filters.
IPSec data traffic does not use Layer 4, so there is no concept of TCP/UDP/port for this traffic (therefore must
specifically be enabled in firewalls/VPN gateways/routers).
• IP protocol ID 50 should be set to allow IPSec Encapsulating Security Protocol (ESP) traffic to be forwarded.
• IP protocol ID 51 should be set to allow Authentication Header (AH) traffic to be forwarded.
Note 1: The use of PAT (Port Address Translation) between GR sites is not supported. ESP (IP Protocol 50) is used
for encryption. Since ESP does not use Layer 4 (no TCP/UDP/port), it will be dropped by devices that do PAT
(packets can’t be assigned a unique port and therefore PAT will fail).
Note 2: The use of VPNs with NAT (Network Address Translation) devices on the network path between GR sites
is not recommended, as it requires a much more complex configuration to successfully establish IPSec tunnels. If
the network path (public or private) between GR sites includes VPN routers with NAT devices, the entire path must
be configured to do NAT traversal (eg. using standard UDP ESP port 4500). The use of VPNs with NAT devices on
the network path where IPSec is enabled is not recommended.
It is recommended that the browser used for the MCP UI runs on a platform
with:
• CPU - 64-bit CPU with performance equivalent to or better than an Intel
Dual Core 2.2 GHz
• RAM - 2GB above the minimum requirements of the platform’s operating
system
• Storage - no significant amount of storage space is required
External authentication
MCP supports the use of an external RADIUS or an external LDAP server for
authentication. These applications must be installed on separate platforms;
they are not supported co-resident with MCP.
For details refer to the MCP API Reference Guide and the MCP Administration
Guide.
Operational guidelines
The following guidelines should be taken into account.
Device Management
Device management considerations include:
• When network elements are pre-enrolled in bulk from a file on the MCP UI,
it is recommended that the file contain a maximum of 100 entries.
• When selecting multiple NEs in Pending state to Enroll, it is recommended
that a maximum of 100 network elements are selected. Those NEs should
be allowed to complete to Connected and Synchronized state before the
next set of NEs is enrolled.
• When network elements are enrolled in bulk using the REST API interface,
it is recommended that a maximum of 500 devices be enrolled at the same
time.
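For bulk enrollment through the REST API, the 500-device guideline can be enforced by batching the request list. The sketch below is purely illustrative; enroll_batch is a placeholder for whatever REST client code is used, and no specific MCP endpoint is implied.

# Sketch: enroll a large device list in batches of at most 500, per the
# guideline above. enroll_batch() is a placeholder; wire it to your own REST
# client code (no specific MCP endpoint is implied here).
MAX_BATCH = 500

def enroll_batch(devices):
    # Placeholder: issue the bulk-enrollment REST request for this batch here.
    print(f"enrolling {len(devices)} devices")

def enroll_all(devices, batch_size=MAX_BATCH):
    for start in range(0, len(devices), batch_size):
        enroll_batch(devices[start:start + batch_size])

enroll_all([f"ne-{n}" for n in range(1, 1201)])  # 1,200 devices -> 3 batches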
Installation
Installation considerations include:
• Following the installation of MCP, wait 15-20 minutes before logging into
MCP and performing any actions (to allow for all software components to
complete their initialization).
Network maps
Map considerations include:
• MCP supports the loading of data to be used as backgrounds for the
network map. The storage size taken up by this data is a combination of
map size and level of details. If map data including street level details is
used, the region covered by the map should be limited (eg. if using map
data for North American regions with dense street level details, a
maximum of approximately 2 states/provinces can be accommodated; if
street level details are not included a much larger region can be
accommodated).
Administration
Administration considerations include:
• When performing an MCP restoration from an MCP backup file, the time
to complete the restore will depend on the size of the managed network.
For large networks this may take several hours to complete.
• For MCP deployments in a geographically redundant configuration, if
there is a failure of the active site, the standby site is activated. This
activation triggers restoration of certain MCP components followed by a
network sync. The time to complete the activation will depend on the size
of the managed network. For large networks this may take several hours
to complete. While the list of network elements is immediately reported in
the Enrollment page, the Dashboard and Network Elements page will not
be fully updated until completion of the restore and network sync.
Services
Services considerations include:
• In a GR configuration, the total number of Transport or Packet services
displayed in the MCP UI may not exactly match between the Active and
the Standby site. Stitching of services is done independently on each site
(and dynamic activity in the managed network may result in some services
being re-stitched dynamically).
• If the data requested through an API call is determined to be too large,
an API error response code is returned, indicating that the
size limit for the query has been exceeded. In this case, additional filters
should be applied to the API call to further reduce the scope of data
returned.
Historical PM collection
PM considerations include:
• The disk space required for storing historical NE PMs can be significant
for certain NE types (eg. 6500). This should be taken into account when
planning storage space for an MCP deployment (see “Storage space for
historical PMs and NE backups” on page 4-13). For all deployments, once
MCP is installed, the HPM Retention Period should be immediately set to
match the number of days desired and planned for.
This procedure can be done at any time without impact; however, any swap allocated and in
use will not be released until the next time the MCP hosts are rebooted. As
such, it is recommended that these steps be completed prior to installing or
upgrading MCP.
In this procedure:
• The operating system is configured to optimize usage of memory vs swap
space.
Requirements
Before you start this procedure, make sure that you
• can log in to all MCP hosts using the root account
Steps
1 Log in to the host0 VM as root.
2 Check the current Linux profile.
tuned-adm active
3 If not already configured as such, set the Linux profile to throughput-
performance.
tuned-adm profile throughput-performance
4 Note that current available Linux profiles can be listed if required.
tuned-adm list
5 Repeat steps 1 to 4 for all hosts in your multi-host configuration.
In this procedure:
• Support is enabled for NE types that will be in the managed network.
• The number of instances is adjusted, for one or more apps within the MCP
solution, to adjust to the size of the managed network.
• The memory settings are adjusted, for one or more apps within the MCP
solution, to adjust to the size of the managed network.
Requirements
Before you start this procedure, make sure that you
• can log in to Host 0 using the bpadmin account
• can log into https://my.ciena.com as a registered user with a my.ciena.com
account
• know which of the supported MCP VM configurations is deployed (this will
determine which tuning profile filename to use):
— multi-host or single-host?
— number of vCPUs per MCP host?
— amount of RAM per MCP host?
• know which NE types will be managed in the managed network
• identify the name of the RA component in MCP that is used to manage the
NE types in the managed network, using Table 5-1:
Table 5-1
Name of MCP RA component for each NE type
Steps
Downloading MCP tuning profiles package
1 If you have already downloaded the MCP 4.2 Tuning Profiles, skip to step
7. Otherwise continue to step 2.
2 As a registered user with a my.ciena.com account, log into
https://my.ciena.com.
3 Navigate to Software > Manage Control and Plan (MCP).
4 To facilitate the search, sort the results by part number by clicking on the
CIENA PART # column heading.
5 Find Part# MCP_4_2 and open the associated PDF file in the RELEASE
INFO column (MCP_4.2_Manifest_Download_Readme.pdf). Use this
manifest file to identify the MCP 4.2 Tuning Profiles part number
(MCP_TUN_4.2-*).
6 Find the MCP 4.2 Tuning Profiles part number in the list and download the
associated software file (mcp-solution-tuning-profiles-4.2-*.ext.tar).
Note: Following the application of the tuning profile on 3+1 or 5+1 multi-
host configurations, it is possible that the MCP System Services page
(Nagios page) reports a critical error against the datomic service
indicating that "datomic instance N exits in the last hour". In this scenario,
the error message can be ignored and will clear on its own within
approximately 1 hour.
In this procedure:
• Support is enabled for NE types that were not turned on at MCP
installation time.
Requirements
Before you start this procedure, make sure that you
• can log in to Host 0 using the bpadmin account
• know which of the supported MCP VM configurations is deployed (this will
determine which tuning profile filename to use):
— multi-host or single-host?
— number of vCPUs per MCP host?
— amount of RAM per MCP host?
• know which NE types will be managed in the managed network
• identify the name of the RA component in MCP that is used to manage the
new NE type to be enabled, using Table 5-1.
Steps
Identifying the filename of the tuning profile that applies to your MCP configuration
Note: This procedure assumes that the tuning profile was already edited
and applied during MCP installation (and therefore still contains the
changes that were made to trigger the initial tuning using "Procedure to
tune memory settings and scale of selected apps" on page 5-6).
Note: If this is the Standby site in an MCP GR configuration, ensure that
the RAs enabled are the same as those enabled on the Active site.
5 Use a text editor to edit the tuning profile you identified in step 4 (example:
if using vi, then enter sudo vi <filename> ).
6 Use the information you gathered in the requirements about the NE types
to be managed to identify which new RA(s) need to be enabled.
The beginning of the tuning profile contains a section for each RA type
similar to the following
- category: 'scale'
application: 'raciena6500'
solution: 'mcp'
scale: 3
apply: no
For every RA that needs to be enabled, change the apply line to yes. Make
this change only for those RAs that need to be enabled (do not edit any
other lines in the file). For example, if the raciena6500 RA needs to be
enabled, then the entry after the edit will look similar to the following (the
scale number may be different):
- category: 'scale'
application: 'raciena6500'
solution: 'mcp'
scale: 3
apply: yes
7 Save the file.
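Before moving on, the edited file can optionally be checked from the command
line to confirm which RA entries are now flagged for scaling. This is not part
of the official procedure; it is a small sketch in which <filename> is the
tuning profile edited above.

    # Optional check (sketch only): list each scale entry with its application name and apply flag.
    grep -A 4 "category: 'scale'" <filename> | grep -E "application|apply"

Every RA you intend to enable should show apply: yes in the output.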
Applying the RA scaling of the MCP tuning profile
Ordering information 6-
This chapter details the Manage, Control and Plan (MCP) 4.2 ordering
information.
Product codes
Customers can choose a perpetual model or an annual subscription model:
• Perpetual Model -
— MCP Software Perpetual License - This includes a perpetual software
license specific for this software release only. It also includes a 1 year
warranty.
— Perpetual RTU - Right to use the software on the current managed
network size.
— A support subscription (one of the following)
– MCP Select Support (formerly known as Smart Support) - This is
a per year subscription. It allows access to upgrades on an if/when
available basis. It also provides technical support and extended
warranty.
– MCP Comprehensive Support - A higher support level. Includes in-
region support, and enhanced support response and resolution
times.
– MCP Premier Support - The highest support level. Includes
support from a dedicated team, and best-in-industry support
response and resolution times.
Table 6-1 on page 6-3 lists the MCP product codes for customers using the
Perpetual ordering model. Identify the option and feature set needed to find
the row that applies to your deployment.
Table 6-1
MCP Perpetual product codes
Column headings: Option; Feature set; RTU and License(s) to order;
Support to order (select one of the three support tiers, Note 2) -
Select Support (per yr) / Comprehensive Support (per yr) / Premier Support (per yr)
Note 1: The Enhanced Troubleshooting only license is available for customers who already have MCP ordered/deployed and choose to add
Enhanced Troubleshooting functionality afterwards.
Note 2: MCP RTUs are virtual (no paper RTU is shipped). This reduces cost and time for both the customer and Ciena, and is
environmentally friendly. For customers with a legacy procurement-receiving process that cannot accept virtual RTUs, a standard RTU can
be ordered (replace S16-RTU-x codes with S16-RTUS-x).
Table 6-2 on page 6-4 lists the MCP product codes for customers using the
Annual subscription ordering model. Identify the option and feature set
needed to find the row that applies to your deployment.
Table 6-2
MCP Annual subscription product codes
Column headings: Option; Feature set; RTU and License(s) to order
(select one of the three support tiers, Note 2)
Note 1: The Enhanced Troubleshooting only license is available for customers who already have MCP ordered/deployed and choose to add
Enhanced Troubleshooting functionality afterwards.
Note 2: MCP RTUs are virtual (no paper RTU is shipped). This reduces cost and time for both the customer and Ciena, and is
environmentally friendly. For customers with a legacy procurement-receiving process that cannot accept virtual RTUs, a standard RTU can
be ordered (replace S16-RTU-x codes with S16-RTUS-x).
Licenses
In this release and recent releases of MCP, the License Server component
used to manage licenses for both MCP and NEs is decoupled from the MCP
software. The License Server is deployed on external VMs/servers. See the
External License Server User Guide for hardware requirements, operating
system requirements, and install procedures.
MCP licenses
MCP uses Ciena’s licensing model, which applies to both software and
hardware platforms.
In the current MCP release, 6500 and Waveserver NE types can be pointed
to the license servers using MCP. If this is done:
• MCP provides the ability to automatically set the license server IP address
on the NE (IPv4 NEs only). This is done using the license server
commissioning policy functionality. This policy can be enabled/disabled,
and has a parameter to control whether the value should be overwritten by
MCP if it is already populated on the NE.
• If License Server HA mode is enabled, NEs are configured to point to the
virtual IP address of the LS HA cluster.
Upgrades
If upgrading to MCP 4.2 from an earlier MCP release:
• The same product order codes are used for both fresh installs and
upgrades of MCP.
• Customers using the perpetual model
— Place an order for the new order codes to allow for MCP 4.2 installs/
upgrades.
• Customers using the annual subscription model
— If the upgrade is during the 1-year annual subscription period,
customers do not need to contact Ciena to have a new/updated annual
subscription license generated for them. These customers can access
the Ciena portal and simply re-download a new copy of the license file
for the existing site. This new license will enable the customer to
upgrade to the latest supported active MCP release (the annual
subscription license already loaded will not enable MCP 4.2 installs
until an updated copy of the license file is downloaded from the Ciena
portal).
— If the subscription period has ended, or is nearing the end, contact
Ciena to renew the annual subscription for another year to continue to
use the software.
This chapter provides some examples for various scenarios and different
network sizes. For multi-host configurations in production deployments, the 3
(or 4) hosts should be deployed on different hardware in order to take
advantage of the local redundancy benefits a multi-host configuration
provides (e.g., protecting against hardware failure of 1 VM).
Table 7-1
Example deployment for network with 20,000 NEUs/10,000 NEs
Physical enclosure and components
Enclosure HP BladeSystem c3000 or c7000 (c3000 fits 4 full-height blades; c7000 fits 8 full-height blades)
Interconnect Any interconnect that provides a minimum of 1Gb/s (and that does not use NAT’ing)
ESXi version ESXi 5.5 or later (each blade/VM)
Disks 4 x 1.2 TB local SSDs (each blade/VM)
Disks and volume groups Configure the 4 disks on each blade/VM the same way. Disks split into:
• Boot partition - first 1 GB of Disk0
• Volume Group for OS - next 500 GB of Disk0 (this VG can have any name)
• Volume Group for Docker - remaining 700 GB on Disk0 + all 3.6 TB space on Disks 1/2/3 (if the
VG name is not vg_sys, note the name so it can be entered during the installation procedures)
Note 1: The speed of the E5-4650v4 CPU is lower than recommended for MCP; however, this is balanced out by
the extra cores it has. As a result, it provides similar performance to the E5-4627v4.
Note 2: When using 64 GB RAM or higher, no extra RAM is required to account for blade/VM management overhead.
Therefore, in this example, the total RAM needed for the blade is equal to the total RAM required by MCP.
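For readers less familiar with LVM, the disk split described above roughly corresponds to
standard pvcreate/vgcreate commands. The sketch below is illustrative only: the device names,
partition numbers, and the vg_os name are assumptions, and Disk0 is assumed to have already
been partitioned into the 1 GB boot, 500 GB OS, and remaining ~700 GB portions. Always follow
the MCP installation documentation for the authoritative disk preparation steps.

    # Sketch only - /dev/sdX device names and partition numbers are assumptions, not from this guide.
    pvcreate /dev/sda2 /dev/sda3 /dev/sdb /dev/sdc /dev/sdd
    vgcreate vg_os  /dev/sda2                               # VG for OS (any name is acceptable)
    vgcreate vg_sys /dev/sda3 /dev/sdb /dev/sdc /dev/sdd    # VG for Docker (note the name if not vg_sys)
    vgs                                                     # confirm VG names and sizes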
Table 7-2
Example deployment for network with 15,000 NEUs/NEs
Physical servers
Interconnect Network interface cards on these servers must be connected to a switch/router at a minimum of
1Gb/s, on the same subnet, and must meet all latency and bandwidth requirements (see “Network
delay and bandwidth requirements” on page 4-15).
Virtualization None (this is not a VM-based deployment; each server has the operating system installed directly)
CPUs 2 x Intel Xeon Silver 4114 (10-core, 2.2 GHz) per server (total vCPUs per server: 40)
Disks 5 x 800 GB local SSDs (each server)
Table 7-3
Example deployment for network with 10,000 NEUs/NEs
Physical enclosure and components
Enclosure HP BladeSystem c3000 or c7000 (c3000 fits 8 half-height blades; c7000 fits 16 half-height blades)
Interconnect Any interconnect that provides a minimum of 1Gb/s (and that does not use NAT’ing)
ESXi version ESXi 5.5 or later (each blade/VM)
RAM 96 GB (each blade/VM)
Disks 2 x 1.2 TB local SSDs (each blade/VM)
Disks and volume groups Configure the 2 disks on each blade/VM the same way. Disks split into:
• Boot partition - first 1 GB of Disk0
• Volume Group for OS - next 400 GB of Disk0 (this VG can have any name)
• Volume Group for Docker - remaining 800 GB on Disk0 + all 1.2 TB space on Disk 1 (if the VG name is not
vg_sys, note the name so it can be entered during the installation procedures)
Note 1: When using 64 GB RAM or higher, no extra RAM is required to account for blade/VM management overhead. Therefore,
in this example, the total RAM needed for the blade is equal to the total RAM required by MCP.
Table 7-4
Example deployment for network with 200 NEUs/NEs
VM provided by IT
Virtual CPUs 16 vCPUs (20 vCPUs if license server is installed co-resident and managing
more than 100 NEs)
RAM 64 GB (80 GB if license server is installed co-resident and managing more than
100 NEs)
Storage space 1 TB - Disk speeds must meet requirements in Table 4-5 on page 4-11.
Storage configuration
Volume Group for OS and Docker Create 6 logical volumes inside the volume group:
• swap - 16 GB
• / (root file system) - 70 GB
• /var/log/ciena - 80 GB
• /opt/ciena/bp2 - 500 GB
• /opt/ciena/data/docker - 70 GB
• /opt/ciena/loads - 100 GB
Ensure that after creating the 6 LVs, approximately 150 GB is left unallocated
within the VG.
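As an illustration of the layout above, the six logical volumes could be created
along the following lines. This is a sketch only: the LV names are hypothetical,
vg_sys is an assumed volume group name, and file system creation and mounting are
not shown.

    # Sketch only - LV names are illustrative; substitute your actual VG name for vg_sys.
    lvcreate -L 16G  -n lv_swap   vg_sys    # swap
    lvcreate -L 70G  -n lv_root   vg_sys    # / (root file system)
    lvcreate -L 80G  -n lv_log    vg_sys    # /var/log/ciena
    lvcreate -L 500G -n lv_bp2    vg_sys    # /opt/ciena/bp2
    lvcreate -L 70G  -n lv_docker vg_sys    # /opt/ciena/data/docker
    lvcreate -L 100G -n lv_loads  vg_sys    # /opt/ciena/loads
    vgs vg_sys                              # verify approximately 150 GB remains unallocated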
Table 7-5
Example deployment for lab with 20 NEs
VM provided by IT
RAM 64 GB
Storage space 500 GB - In this deployment, the 500 GB space can be used to house both the
OS and the Docker storage disk contents.
Use for This configuration is supported for lab deployments only, with a limited
number of NEs, for non-performance related testing.
Storage configuration
Volume Group for OS and Docker Create 5 logical volumes inside the volume group:
• swap - 8 GB
• / (root file system) - 70 GB
• /opt/ciena/bp2 - 120 GB
• /opt/ciena/data/docker - 100 GB
• /opt/ciena/loads - 100 GB
Ensure that after creating the 5 LVs, approximately 100 GB is left unallocated
within the VG.
This chapter is for reference only and details the scale/memory adjustments
that are applied for each MCP configuration when using the new procedures.
The values detailed here are those applied when using version 4.2-85 of the
MCP tuning profiles, which is the recommended tuning profiles version at the
time of this document's release (for the most up-to-date guidance, always
consult the Ciena Portal). See Table 8-1 on page 8-2 and Table 8-2 on page 8-3.
Table 8-1
Number of instances set up by tuning profiles for selected apps (for each network size)
Configuration being deployed (Note 1)
Number of hosts: 6 (5+1), 4 (3+1), 3 (2+1), or 1
Virtual CPUs 40 40 16 40 32 40 32 16 16 24 16 16
bpraciena8180 (Def=0) 3* 1*
bpraciena8700 (Def=0) 15 * 8 4 6 4 2 2 1
bpracienampbraman (Def=0) 3* 1*
bpracienapacket (Def=0) 40 * 32 * 4 16 * 8* 4 2 2 1
bpracienarls (Def=0) 3* 1*
bpracienasaos10x (Def=0) 30 * 20 * 4* 16 * 8* 4 2 2 1
bpracienawaveserver (Def=0) 20 * 20 * 5 8* 6* 4* 4* 2 1
bprasubcom (Def=0) 3* 1*
raciena54xx (Def=0) 10 * 10 * 3* 3* 1*
raciena6200 (Def=0) 15 * 8 4 6 4 2 2 1
raciena6500 (Def=0) 40 * 40 * 7 16 * 14 8* 4 8 4 2
razseries (Def=0) 3* 1*
Other apps
Note 1: Any apps not listed are left at their default scale settings. Only those RAs that are selected to be enabled
will be scaled.
* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
Table 8-2
Memory settings set up by tuning profiles for selected apps (for each network size)
Configuration being deployed (Note 1)
Virtual CPUs 40 40 16 40 32 40 32 16 16 24 16 16
XMX 8192M
Note 1: Any apps not listed are left at their default settings.
Note 2: If upgrading from a previous release of MCP, there may be custom parameter settings present for kafka that are removed
in this release (offsets.retention.minutes, num.replica.fetchers).
Note 3: Some pm parameters are modified for large configurations (collection.workers).
* In this table, an asterisk indicates that the corresponding value is new/changed since the previous release.
Engineering Guide
Release 4.2
Publication: 450-3709-010
Document status: Standard
Issue 12.03
Document release date: June 2020
CONTACT CIENA
For additional information, office locations, and phone numbers, please visit the Ciena
web site at www.ciena.com