8.1 Cloud Infrastructure Requirements and Questions
Requirement Ids in this section use one prefix per sub-section: INF-GL (General), INF-PI (Physical Infrastructure), INF-RR (Racking Requirements), INF-PR (Power Requirements), INF-CR (Cooling Requirements), INF-CN (Compute Node Requirements), INF-OR (OS and Runtime Requirements), INF-HR (Hypervisor Requirements), INF-AR (Accelerator Requirements), INF-ST (Storage), INF-NG (Network General Requirements), INF-DN (Data Network Requirements), INF-MN (Management Network Requirements), INF-SN (Storage Network Requirements), INF-GP (General Platform Management Requirements), INF-LC (Infrastructure Life Cycle Management), INF-MP (Management Interface Protocol Requirements), and INT-TS (Timing and Synchronization). The sub-sections are grouped under Physical Infrastructure, Virtualization Infrastructure (Compute, Storage, Network), Infrastructure Management, and Timing and Synchronization.
The infrastructure shall come in different form factors capable of covering all possible dish use cases while offering a unified view and a single management and operations interface.
Please describe at a high level the physical form factors and infrastructure options that the vendor can offer in the solution.
Describe the Operations Support System (OSS) and tools used to manage the proposed physical infrastructure. List all
available functions (inventory management, performance management, etc.). Specify if both GUI and CLI can be used for
each function. Explain the boundaries between the VIM, OSS, and Infrastructure Manager roles.
Describe the general power supply and grounding requirements as well as any additional specific power supply requirements.
Racks/frames shall comply with the regional radiated emissions specifications for the types of equipment hosted.
The racks/frames shall support a variety of NFVI Node scales.
Racks/frames shall support different types of environments such as Data Center nodes, EDGE nodes, and Far Edge nodes.
Racks/frames shall include options that are appropriate for deployment in small ruggedized enclosures for outdoor use.
Racks/frames shall support a mapping of an arbitrary number of NFVI Nodes onto one or more physical racks of equipment
(e.g. N:M mapping).
Racks/frames shall support geographical addressing to facilitate ease of maintenance and possible optimizations based on
locality of compute/storage/networking resources. Describe alternatives.
The racks/frames shall comply with regional safety requirements for the types of equipment hosted.
The dimensions of racks/frames shall comply with one of the following NFVI Node profiles: Profile 1: IEC 60297 specifications ;
Profile 2: ETSI ETS 300 119 specifications
Power Requirements
Rack power subsystem should meet all regional/national safety requirements for the location in which it is deployed.
Rack power subsystem should be designed with no single points of failure in order to avoid possible loss of availability of the
data center NFVI node function.
A power failure in a single compute/storage/networking node within the data center NFVI rack shall not impact any other node (in the same rack or in other racks)
Please describe the maximum rack power capacity load supported
Please describe the different Power Distribution Unit (PDU) types available in the rack
Please describe connectivity (single or multiple input) for the PDU
Please describe the different outlets and fuses supported in the PDU
Please describe the PDU redundancy
Please describe the types of power supply units supported by the Compute/Storage/Network Nodes.
The solution shall run on 120 V AC power
The rack power subsystem should provide a means to report power system fault events to the hardware platform management
entity and/or Data Centre infrastructure management software.
The rack power subsystem should be able to support loads of 10 kW per full size rack.
The rack power subsystem should support a variety of energy efficiency levels in order to meet the particular deployment
needs.
A configuration of the rack power subsystem shall support -48VDC input facility feeds in order to provide interoperability with
existing central office infrastructures.
Configurations of the rack power subsystem shall support the three-phase format
Please list the three phase formats supported
Configurations of the rack power subsystem shall support the single phase facility feed
Configurations of the rack power subsystem shall support the high voltage DC facility feed
Configurations of the rack power subsystem shall support redundant input power feeds.
The power distribution unit shall accept power from one or more facility feeds.
The power distribution unit shall provide power from the facility feeds to the compute/storage/network nodes without regulation.
For three phase power inputs, the power distribution unit shall allow load balancing of power drawn from each phase of the
input power.
To facilitate high-power racks, multiple power distribution units shall be supported
Power distribution unit outputs should provide over-current protection by circuit breakers or similar method.
The power shelf shall be capable of measuring and remotely reporting the following operational parameters:
• Output Current for Each Output Voltage
• Output Voltage Value for Each Output Voltage
• Number of Power Supplies Installed in the Shelf
• Maximum Capacity of the Shelf Based on Installed Supplies
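As an illustrative aid, a minimal sketch of how such power-shelf/PSU telemetry could be read out-of-band, assuming a BMC that exposes the DMTF Redfish Power resource (the address, credentials and chassis ID below are placeholders, and the exact properties exposed vary by vendor):

    # Hedged sketch: read PSU telemetry over Redfish; endpoint per the DMTF Power schema,
    # but property coverage is vendor-dependent.
    import requests

    BMC_HOST = "https://192.0.2.10"        # hypothetical BMC address
    AUTH = ("admin", "password")           # placeholder credentials
    CHASSIS = "1"                          # assumed chassis ID

    resp = requests.get(f"{BMC_HOST}/redfish/v1/Chassis/{CHASSIS}/Power",
                        auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    power = resp.json()

    for psu in power.get("PowerSupplies", []):
        # Print asset and electrical data for each installed supply, when exposed
        print(psu.get("Name"), psu.get("Manufacturer"), psu.get("Model"),
              psu.get("SerialNumber"), psu.get("LineInputVoltage"),
              psu.get("LastPowerOutputWatts"))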
Each power supply unit shall be capable of measuring and remotely reporting the following operational parameters:
• Output Current for Each Output Voltage
• Output Voltage Value for Each Output Voltage
Power distribution units shall allow individual outputs to be powered up/down under software control.
Power distribution units shall support programmable power sequencing of the outputs/soft start to prevent large current spikes
when rack power is applied.
The power shelf shall be capable of providing the following information to the rack management or Data Centre infrastructure
management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
Please describe how power supply redundancy may be achieved by installing more than one power supply unit within a
Compute/Storage/Network node.
Each power supply unit shall be capable of measuring and remotely reporting the following operational parameters:
• Output Current for Each Output Voltage
• Output Voltage Value for Each Output Voltage
Each power supply unit shall support generation of alarm events with programmable limits for each of the following conditions:
• Output Over Current for Each Output Voltage
• Output Over/Under Voltage for Each Output Voltage
• Output Loss of Regulation
The power supply unit shall be capable of providing the following information to the rack management or Data Centre infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
The power distribution unit shall accept power from one or more facility feeds.
The power distribution unit shall provide power from the facility feeds to the rack-level power shelf/shelves without regulation.
Power distribution unit shall be capable of measuring and remotely reporting the following operational parameters:
• Input Current
• Input Voltage
• Output Current (each output)
The power distribution unit shall support generation of alarm events with programmable limits for each of the following
conditions:
• Input Over Current
• Input Over/Under Voltage
• Output Over Current (each output)
The power distribution unit shall be capable of providing the following information to the rack management or Data Centre
infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
The power shelf shall accept one or more power supply units for both capacity and redundancy purposes.
Power shelf may accept power from more than one power distribution unit feed.
More than one power shelf may be installed in an NFVI rack for capacity or redundancy purposes.
Output power shall be delivered from the power shelf to compute/storage/network nodes via bus bar or bussed cables.
The power delivered from the power shelf shall be protected from over current by circuit breaker or other similar safety device
The power shelf should be capable of measuring and remotely reporting the following operational parameters:
• Output Current for Each Output Voltage
• Output Voltage Value for Each Output Voltage
• Number of Power Supplies Installed in the Shelf
• Maximum Capacity of the Shelf Based on Installed Supplies
Each power supply unit shall be capable of measuring and remotely reporting the following operational parameters:
• Output Current for Each Output Voltage
• Output Voltage Value for Each Output Voltage
The power shelf shall support generation of alarm events with programmable limits for each of the following conditions:
• Output Over Current for Each Output Voltage
• Output Over/Under Voltage for Each Output Voltage
• Output Loss of Regulation
The power shelf shall be capable of providing the following information to the rack management or Data Centre infrastructure
management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
The power supplies shall be capable of providing the following information to the rack management or Data Centre
infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
Localized energy storage shall be capable of providing power from the time that facility power is lost until the generator (or backup power) is stabilized.
In racks with power shelves, Battery Backup Unit (BBU) units shall be placed within the power shelf power supply unit bays.
In NFVI racks without power shelves, Battery Backup Unit (BBU) units shall be placed within one of the
compute/storage/network node's power supply bays.
Battery Backup Unit (BBU) mechanical form-factor shall be identical to power supply units in order to facilitate
interchangeability.
The Battery Backup Unit (BBU) shall be capable of measuring and remotely reporting the following operational parameters:
• Output Current
• Output Voltage
• Charge Level
• Total Capacity
• Battery Temperature
The Battery Backup Unit (BBU) shall support generation of alarm events with programmable limits for each of the following
conditions:
• Output Over Current
• Battery Over Temperature
The Battery Backup Unit (BBU) shall be capable of providing the following information to the rack management or Data Centre
infrastructure management software for asset tracking purposes:
• Manufacturer
• Model
• Serial Number
• Date of Manufacture
• Rated Capacity
The solution shall include support for both 1+1 and N+N power supply redundancy
Cooling Requirements
Please describe the air cooling mechanisms supported in the proposed solution
Please describe airflow mechanisms utilized in the proposed solution
Please specify the operating temperature ranges for the proposed hardware
The solution shall support forced air cooling
The direction of airflow shall be from the front of the equipment (inlet) to the rear of the equipment (exhaust).
Air filters shall be present at the inlet side of the airflow path in order to protect the equipment from dust and particulates.
Air filters shall be field serviceable without the use of specialized tools.
Normal operating temperature range of the equipment shall be between 10 °C and 35 °C as measured at the inlet.
Blanking panels or air baffles shall be supported in order to reduce recirculation of airflow through the rack.
NFVI equipment shall comply with acoustic limits found within NEBS™ GR-63 [7].
Please describe any options supported by the solution in addition to forced air cooling
Rack level fans shall be field serviceable while the equipment is in operation.
Redundancy within the rack cooling system shall provide appropriate levels of cooling to all the rack equipment while the rack
cooling system is serviced.
Redundancy within the rack cooling system shall provide appropriate levels of cooling to all the rack equipment in the event of
any single fan failure within the rack cooling system.
Rack Fans shall either be guaranteed for a specified service life or shall be serviceable without the use of specialized tools.
Rack-level telemetry shall be available to detect latent fan fault conditions (e.g. fan speed) for each rack fan.
Rack level temperature sensors shall be present at the air inlet and exhaust locations.
Fan speed control shall support autonomous mode where speed is controlled locally within the rack.
Fans shall be field serviceable.
Telemetry shall be available to detect fan speed for each fan.
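For illustration, a minimal sketch of polling per-fan speed telemetry, assuming the platform exposes its sensors through a BMC reachable with ipmitool (output formatting varies by platform):

    # Hedged sketch: list fan sensors via the BMC; assumes ipmitool is installed and the
    # BMC is reachable in-band.
    import subprocess

    def read_fan_sensors():
        out = subprocess.run(["ipmitool", "sdr", "type", "Fan"],
                             capture_output=True, text=True, check=True).stdout
        for line in out.splitlines():
            # A typical line looks like: "FAN1 | 30h | ok | 29.1 | 8400 RPM"
            fields = [f.strip() for f in line.split("|")]
            if len(fields) >= 5:
                print(fields[0], "->", fields[4])

    if __name__ == "__main__":
        read_fan_sensors()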
Compute/storage/network node-level temperature sensors shall be present at the air inlet and exhaust locations.
The compute/storage/network node shall monitor the temperatures of critical components.
The compute/storage/network node shall have the capability to autonomously take action to prevent damage if an over
temperature condition is detected. This may include increasing fan speed, clock throttling, placing some components into low-
power states.
Action that the compute/storage/network node takes when an over temperature condition is detected shall be software
configurable, but with a pre-configured non-changeable default value/behaviour.
Fan speed control shall support autonomous mode where speed is controlled locally within compute/storage/network node.
Compute Node Requirements
The solution shall run on any COTS x86 Hardware
Please specify hardware requirements of configurations that can be offered in terms of:
• Number of x86 CPU sockets per physical server
• RAM per physical server
• Number of network ports / bandwidth per physical server
• Local storage per physical server
Please specify the optimal hardware model for each deployment option (see 2.02 to 2.09)
The compute hardware model for deployment in a data center shall be configured with redundancy
The compute hardware model for deployment in a central office shall be configured with redundancy
Please describe which hardware elements are configured with redundancy (power supplies, fans, NIC’s, management
processors, enclosure switching modules, etc.) to eliminate any hardware related single points of failure and support the
overall solution availability.
Please describe the interfaces implemented in the Compute Domain. (data ports, control ports). Make sure to detail
compliance to latest available ETSI NFV standards and recommendations as well as well-known technologies and protocols
adopted by the telecom industry for NFV.
Please provide the supplier's recommendation on the capabilities supported by the equipment to support virtualization technologies in I/O operations
Please provide details of physical interfaces between compute nodes and leaf/TOR switches
Please describe the core processors utilized in the proposed solution
NFVI node equipment shall include multiple Processors
NFVI compute node equipment shall support commonly used Operating Systems, Hypervisors and third-party software
Please list all the features supported by each processor model proposed
Please describe the features available to provide isolation of each VM's compute resources
Please describe features available to minimize latency when accessing remote memory.
Server shall provide management connectivity separate from functional connectivity (physical and logical)
Servers shall support at least two 1G ethernet interfaces
Equipment shall support electrical SFPs for 1G ports
Servers shall support at least two 10G ethernet interfaces
Servers shall support at least two 25G ethernet interfaces
Equipment shall support electrical SFP+s for 10G ports
Server 10G interfaces shall be capable of auto-negotiating down to 1G Ethernet
Each server shall consume 1,100W or less in Operation
Provide information on power consumption under a fully loaded (100% CPU) and a typical (50% utilization) traffic load. Provide specific information for CU and DU workloads.
Each server shall be mountable in a standard 19" rack
The hardware shall function normally at temperatures up to 35°C (95°F). This maximum temperature is reduced by 1°C/300 m
(1°F/547 ft) above 950 m (3,117 ft).
The hardware shall support redundant transport ports
Loss of a single transport port on any hardware shall not cause any downtime
Compute node shall include multiple NICs for redundancy
Compute node shall include an option for smartNIC
The NFVI solution shall allow for different networking solutions to co-exist on the same compute node providing different
performance or functionalities to upper layer applications (e.g. SR-IOV, OVS-DPDK, Smart NIC, L3 Overlay, Flat network )
The NFVI solution shall allow for compute nodes to be grouped per the virtualized networking solutions such that applications
can be mapped to the right compute node by the VIM automatically
BIOS shall allow option to disable power saving to fine tune real-time performance
BIOS shall allow option to disable CPU dynamic frequency scaling to fine tune real-time performance (e.g. Enhanced Intel
SpeedStep Tech)
BIOS shall allow option to optimize CPU hyper threading policy
BIOS shall allow option to disable unused IO devices on the node (e.g. USB ports, VGA ports) to reduce volume of HW
interrupt
BIOS shall allow option to disable the processor C3 state (i.e. Idle Mode)
BIOS shall allow option to enable Intel VT for Direct I/O (needed for SR-IOV)
BIOS shall allow option to enable/disable Coherency Support (for improved performance of DMA / IO Device communication)
BIOS shall allow option for Memory Power Optimization (to trade-off performance for Power)
BIOS shall allow option to disable Direct Cache Access (to use Data Direct IO instead)
BIOS shall allow option to enable pre-fetcher (to increase cache hit and CPU performance)
Please describe all the BIOS options that can help fine tune performance.
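As an illustrative aid, a minimal sketch of how BIOS tuning attributes of the kind listed above might be set out-of-band through the Redfish Bios resource, assuming a Redfish-capable BMC; the attribute names shown are purely illustrative, since real names are vendor-specific:

    # Hedged sketch: patch pending BIOS settings via Redfish; attribute names are assumed.
    import requests

    BMC = "https://192.0.2.10"             # hypothetical BMC address
    AUTH = ("admin", "password")           # placeholder credentials
    SYSTEM = "1"                           # assumed system ID

    desired = {                            # illustrative attribute names only
        "ProcCStates": "Disabled",         # disable C-states (incl. C3) for real-time tuning
        "SpeedStep": "Disabled",           # disable dynamic frequency scaling
        "HyperThreading": "Enabled",       # hyper-threading policy
        "VTforDirectIO": "Enabled",        # Intel VT-d, needed for SR-IOV
    }

    resp = requests.patch(f"{BMC}/redfish/v1/Systems/{SYSTEM}/Bios/Settings",
                          json={"Attributes": desired},
                          auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()                # settings typically apply on the next reboot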
OS and Runtime Requirements
The solution shall support a fully containerized architecture (containers over bare metal)
The solution shall include an option to work with a virtualized infrastructure using VMs and VNFs
The solution shall include an option to work with a mixed environment of containers and VMs (including containers over VMs).
The solution shall support monitoring and reporting virtual and physical CPUs utilization, memory utilization and hard disk
utilization.
The solution shall support monitoring, and reporting virtual and physical network card and Virtual Switchport throughput and
error packets.
The solution shall support monitoring and reporting virtual hard disk read and write rate
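For illustration, a minimal sketch of collecting the host-level utilization metrics listed above with the psutil library; a real NFVI would report these through its telemetry/VIM interfaces rather than a standalone script:

    # Hedged sketch: gather CPU, memory, disk and NIC counters on a host.
    import psutil

    def collect_host_metrics():
        return {
            "cpu_percent": psutil.cpu_percent(interval=1),
            "memory_percent": psutil.virtual_memory().percent,
            "disk_percent": psutil.disk_usage("/").percent,
            # Per-NIC throughput and error counters
            "nic_counters": {nic: {"bytes_sent": c.bytes_sent,
                                   "bytes_recv": c.bytes_recv,
                                   "errin": c.errin,
                                   "errout": c.errout}
                             for nic, c in psutil.net_io_counters(pernic=True).items()},
            # Aggregate disk read/write counters
            "disk_io": psutil.disk_io_counters()._asdict(),
        }

    if __name__ == "__main__":
        print(collect_host_metrics())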
All fault indications shall include fault time
All fault indications shall include fault location
All fault indications shall include fault type
All fault indications shall include fault level
The solution shall offer an interface for NICs, SmartNICs, and their built-in accelerators to be managed and configured from the VIM or MANO.
The solution shall offer an interface for SmartNIC FPGA accelerators to be managed and configured in software from the VIM or MANO (e.g. FPGA as a service or Accelerator as a service)
SmartNICs should include DDR memory attached to the FPGA to allow local memory access without traversing the PCIe bus.
Operating System shall use a real-time optimized kernel with a pre-emptive scheduler
Operating System shall support dedicating CPU cores for virtual machines, containers and virtual switches
Operating System shall support Huge Pages
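For illustration, a minimal sketch of verifying huge-page and dedicated-CPU (isolcpus) configuration on the host OS; the paths are standard Linux interfaces, but the expected values are deployment-specific:

    # Hedged sketch: inspect huge-page counters and CPU-isolation kernel parameters.

    def read_hugepages():
        with open("/proc/meminfo") as f:
            return {k.strip(): v.strip() for k, v in
                    (line.split(":", 1) for line in f) if k.startswith("HugePages")}

    def read_isolation_params():
        with open("/proc/cmdline") as f:
            cmdline = f.read()
        # e.g. "isolcpus=2-15 nohz_full=2-15 default_hugepagesz=1G hugepagesz=1G hugepages=64"
        return [tok for tok in cmdline.split()
                if tok.startswith(("isolcpus=", "nohz_full=", "hugepages", "default_hugepagesz="))]

    if __name__ == "__main__":
        print(read_hugepages())
        print(read_isolation_params())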
Hypervisor Requirements
Please state which hypervisor and Host OS the solution is based on. Refer to the full product stack with exact releases and components.
Hypervisor shall support an Ubuntu/RHEL Linux-based Host OS, release 16.04 or higher
Hypervisor shall support DPDK release 16.x or higher
Hypervisor shall support a vSwitch based on OVS release 2.5 or higher
Please describe the interfaces implemented in the Hypervisor domain of the proposed solution in accordance with the latest available ETSI NFV standards and recommendations, and by means of well-known technologies and protocols adopted by the telecom industry for NFV.
Please list the management operations provided by the Hypervisor over the Nf-Vi/H interface to the VIM, e.g. create virtual machine, create storage pool, create snapshot, migrate a VM, create virtual network device, etc.
Please describe any specific enhancements developed on the native Hypervisor to achieve improvements in performance, availability, security, etc.
Please describe how container technology (e.g. Docker) may be implemented on top of the Hypervisor. If supported, please indicate which platforms are compatible and supported.
Please list any specific enhancements / customization done on the Hypervisor.
Please specify whether the proposed Hypervisor and Host OS have the ability to use any Guest OS on top. Alternatively, list all types and versions of VM Guest OS that can be hosted. The vendor shall list any limitations and constraints.
Please list the drivers that can be exposed to a Guest OS/VM (e.g. virtio) and whether there are any proprietary drivers that shall be considered per case.
Please provide a list of supported 3rd-party storage solutions the proposed Hypervisor can be installed with.
Please describe how the Hypervisor provides the capability to manage VMs with diverse requirements in terms of CPU, memory, storage, and vNICs.
Please describe the features provided by the Hypervisor to fully exploit the capabilities of the compute domain e.g. address
translation services, allocation of threads to CPUs, allocation of memory to NUMA nodes etc.
Please describe the features provided by the Hypervisor in the context of the virtual Storage/Datastores domain.
Please describe the features provided by the Hypervisor in context of networking domain e.g. vNICs, vSwitches, vRouters etc.
Please list any limitations regarding the maximum number of vNICs that can be assigned to a single VM.
Please provide a recommendation on the maximum number of vNICs that shall be considered for user plane VMs
Please provide a list of the standard optimization technologies for virtualization that are provided, e.g. SR-IOV, DPDK for vSwitch, NUMA, CPU pinning, huge pages, etc.
Please state for which VM types DPDK can be enabled by default.
Please specify whether additional compute resources are required for deploying DPDK versus the expected efficiency gained.
Please describe the need for, and the capability of, binding multiple physical network ports into a single logical network port for network load balancing and network redundancy, to avoid single points of failure on physical network ports. The vendor shall describe any limitations in case SR-IOV and DPDK are deployed.
Please describe all available options for VNF scaling (scale-in, scale-out, etc.)
Please describe the high availability options of the Hypervisor and the high availability level the Hypervisor can guarantee.
Please describe how Hypervisor provides the capability to manage VMs with diverse requirements in terms of CPU, memory,
storage, and vNICs.
Please describe the features provided by the Hypervisor to fully exploit the capabilities of the compute domain e.g. address
translation services, allocation of threads to CPUs, allocation of memory to NUMA nodes etc.
Please describe the features provided by the Hypervisor in the context of the virtual Storage/Datastores domain.
Please describe the features provided by the Hypervisor in context of networking domain e.g. vNICs, vSwitches, vRouters etc.
Please specify the supported version of the vSwitch. Describe the vSwitch features. Provide a list of any proprietary patches
added on OVS and the purpose for each proprietary patch.
Please list the proposed Hypervisor limits in the following areas:
• max vCPUs per VM host
• max memory per VM host
• max CPU sockets per compute
• max physical memory per compute
• max datastore size in TB
• max no of Datastores
• max no of managed blade servers by the hypervisor
• max no of VMs on the same cluster
• max no of virtual switches
The hypervisor domain shall enable the VIM to configure last level cache size allocation.
The hypervisor domain shall enable the VIM to configure cache bandwidth allocation & control.
The hypervisor domain shall be able to notify the VIM performance metrics specified in ETSI GS NFV-TST 008 [8].
The hypervisor domain shall be able to provide the VIM with performance metrics specified in ETSI GS NFV-TST 008 [8] in
response to a query
The hypervisor should implement flexible resource allocation mechanisms to enable the VIM to set policies to ensure failure
isolation under a variety of configurations and workloads.
The Hypervisor (or the underlying Host together with its Hypervisor) shall report all failures detected to the VIM for subsequent
processing, decision making, and/or logging purposes.
In an NFV context, it is the responsibility of the Hypervisor (or the underlying Host together with its Hypervisor) to perform the
hardware failure detection which may have been previously performed by the NF itself (or the NF's underlying OS +
middleware). The intent is to provide parity with the level of failure detection previously performed on the NF prior to
virtualisation.
The hypervisor shall support partitioning of the resources of a compute node.
The hypervisor shall support nested virtualisation.
The hypervisor shall support acceleration requirements as specified in ETSI GS NFV-IFA 002 [2].
The hypervisor shall support partitioning of the resources.
The hypervisor domain shall be able to support memory bandwidth allocation & control.
The hypervisor domain shall support non virtualised timing (e.g. rdtsc/rdtscp instructions).
The hypervisor domain shall support non virtualised synchronization primitives (e.g. monitor/mwait instructions).
The hypervisor domain shall enable the VIM to configure last level cache size allocation.
The hypervisor domain shall enable the VIM to configure cache bandwidth allocation & control.
The hypervisor domain shall support a deployment option where the vswitch functionality is deployed independently from the
served hypervisor.
The hypervisor domain shall support a deployment option where the vswitch functionality is hosted in a VM.
The hypervisor domain shall support a deployment option where VMs are directly connected with each other via their vNICs
(i.e. point-to-point communication), not using any vSwitch, vRouter or eSwitch.
The hypervisor domain shall support a deployment option where VMs on the same compute node can communicate with each
other by sharing memory directly.
The hypervisor domain shall support a deployment option where VMs can communicate with each other, through distributed
memory technology when they reside on different compute nodes (e.g. using RDMA as a cluster technology).
The hypervisor domain shall support a deployment option where VMs can communicate directly with each other through a
serial bus
The hypervisor domain shall provide the Extended Para-virtualised Device (EPD) backend functionality specified in ETSI GS
NFV-IFA 002
The hypervisor domain shall provide the EPD backend functionality for all interfaces specified in ETSI GS NFV-IFA 002
The hypervisor domain shall support a deployment option where dedicated switches can be assigned to VNFC instances to
enable Dynamic Optimization of Packet Flow Routing (DOPFR) as per ETSI GS NFV-IFA 018
The hypervisor domain shall provide the Acceleration Resource Discovery interface specified in ETSI GS NFV-IFA 019
The hypervisor domain shall provide the Acceleration Resource Lifecycle Management interface specified in ETSI GS NFV-
IFA 019
The hypervisor domain shall provide the Acceleration Enabler capability as specified in ETSI GS NFV-IFA 004
The hypervisor shall be configurable with resource utilization thresholds leading to automatic changes of the CPUs' power
state.
The hypervisor shall expose the Advanced Configuration and Power Interface (ACPI) to the VMs.
The hypervisor shall be able to apply power management policies provided by the VIM.
The hypervisor shall be configurable with the time required for a compute resource, to return to a normal operating mode after
leaving a specific power-saving mode.
The hypervisor domain shall be able to notify the VIM performance metrics specified in ETSI GS NFV-TST 008
The hypervisor domain shall be able to provide the VIM with performance metrics specified in ETSI GS NFV-TST 008 in
response to a query
Please describe features of the hypervisor that assist in VM status detection and self-healing
Please describe process flow for migrating a VM affected by HW fault to another machine
Hypervisor shall be able to interface with the node providing NFVI Control/Management functions when this node is in an HA configuration with active standby.
Hypervisor shall detect fault of VM and report alarm within a maximum delay of 1 second.
Hypervisor shall support VM isolation to avoid one VM affecting another VM when running in the same server
Hypervisor shall record VM information and properties regularly to provide necessary log for troubleshooting
Hypervisor shall support cold VM migration
Hypervisor shall support live VM migration.
Hypervisor shall be optimized for Real-Time Applications (e.g. RT-KVM)
Hypervisor shall map priority of processes in guest OS to that in host OS.
Hypervisor shall support virtio and vhost-user vNICs.
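As an illustrative aid, a minimal sketch of attaching a vhost-user vNIC (virtio model) to a VM via libvirt, as typically used with OVS-DPDK; the socket path and domain name are assumptions, and the guest must be hugepage-backed for this to work:

    # Hedged sketch: hot-attach a vhost-user interface to a running libvirt domain.
    import libvirt

    VHOST_SOCK = "/var/run/openvswitch/vhu0"   # assumed OVS vhost-user socket path
    DOMAIN = "vnf-vm-01"                       # hypothetical VM name

    iface_xml = f"""
    <interface type='vhostuser'>
      <source type='unix' path='{VHOST_SOCK}' mode='client'/>
      <model type='virtio'/>
    </interface>
    """

    conn = libvirt.open("qemu:///system")      # requires a running libvirt daemon
    dom = conn.lookupByName(DOMAIN)
    dom.attachDeviceFlags(iface_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
    conn.close()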
Accelerator Requirements
UID: Each acceleration resource shall have a unique identifier.
version: Each acceleration resource shall specify the version of its accelerator.
Type: Each acceleration resource shall have a clear type (e.g. Crypto, FFT, IPSec, etc.).
Capabilities: Each acceleration resource shall indicate its acceleration specific capabilities.
Number of Channels: Each acceleration resource shall indicate how many channels it supports.
Number of Contexts: Each acceleration resource shall indicate how many contexts it supports.
Allows Migration: Each acceleration resource shall indicate if it supports live migration capabilities.
QoS: Each acceleration resource shall indicate the quality of service level it supports.
Data Format: Each acceleration resource shall indicate the data format it operates on.
Re-Programmability: Each acceleration resource shall indicate whether it requires a hardware image to be programmed before it can operate.
Resource Availability: Each acceleration resource shall indicate the level or amount of resources that are currently unused and can be allocated.
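For illustration, a minimal sketch of a data structure mirroring the acceleration-resource attributes above; the field names are chosen here for readability and are not an ETSI-normative data model:

    # Hedged sketch: descriptor a discovery interface might report for one accelerator.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class AcceleratorResource:
        uid: str                      # unique identifier
        version: str                  # accelerator version
        acc_type: str                 # e.g. "Crypto", "FFT", "IPSec"
        capabilities: List[str] = field(default_factory=list)
        num_channels: int = 0
        num_contexts: int = 0
        allows_migration: bool = False
        qos_level: str = "best-effort"
        data_format: str = ""
        reprogrammable: bool = False  # requires a HW image before it can operate
        available_units: int = 0      # unused capacity that can still be allocated

    # Example instance (values are illustrative):
    example = AcceleratorResource(uid="acc-0001", version="1.2", acc_type="IPSec",
                                  capabilities=["AES-GCM", "SHA-256"], num_channels=8,
                                  num_contexts=64, qos_level="guaranteed",
                                  data_format="mbuf", reprogrammable=True,
                                  available_units=4)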
The NFVI shall support the capability of attaching/detaching an accelerator to a virtualisation container (e.g. VM) (see note in
ETSI IFA 4).
The NFVI may support the capability of virtualising/slicing hardware accelerator.
The NFVI shall support the capability to collect accelerator performance metrics.
The NFVI shall support the capability to report acceleration resource's information (see note 2).
The NFVI may support the capability to configure an accelerator.
The NFVI may support the capability of accelerator driver management.
The NFVI should support the capability to reserve acceleration resources.
The NFVI should support the capability of accelerator fault management.
The NFVI shall support the following Acceleration technologies
- Enhanced Platform Awareness (EPA)
- CPU pinning (aka dedicated CPU)
- sharing vCPU
- Virtual Memory Management per NUMA node
- vCPU type configuration (e.g. Xeon or Haswell)
- CPU passthrough
- vCPU schedule policy configuration (e.g. Round Robin)
- Memory page size configuration per vCPU
- PCI device NUMA affinity
- Server Group Affinity
- Server Group anti-Affinity
- Configurable huge page setting
- Cache Allocation Technology (CAT)
- Cache Monitoring Technology (CMT)
- Smart NICs
- FPGAs in smart NICS.
- SR-IOV
- DPDK
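For illustration, a minimal sketch of how several of the technologies above are typically requested from an OpenStack-based VIM through Nova flavor extra specs, assuming OpenStack is the VIM (other VIMs expose equivalent controls differently):

    # Hedged sketch: EPA-related flavor extra specs; keys follow Nova conventions, but the
    # exact set supported depends on the VIM release and deployment.
    epa_extra_specs = {
        "hw:cpu_policy": "dedicated",              # CPU pinning (dedicated vCPUs)
        "hw:cpu_thread_policy": "isolate",         # hyper-threading placement policy
        "hw:numa_nodes": "2",                      # guest NUMA topology
        "hw:mem_page_size": "1GB",                 # huge-page backing
        "hw:pci_numa_affinity_policy": "required", # PCI device NUMA affinity (e.g. SR-IOV NICs)
    }
    # These would typically be applied with the OpenStack CLI, e.g.:
    #   openstack flavor set <flavor> --property hw:cpu_policy=dedicated
    # Server-group affinity/anti-affinity is expressed separately, via scheduler policies on
    # server groups rather than on the flavor itself.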
Storage
Please describe the Storage solution for large Market, Regional and National DC deployments
Please describe the Storage solution for Edge and far Edge deployments.
The solution shall support using a SAN for storage
The solution shall support using local server storage
The solution shall support distributed storage using an SDS
The SAN storage device shall include dual (or more) controllers to ensure reliability of physical links.
The SAN storage device shall include hot spare, redundant backup
Please describe fault tolerance and reliability performance of the proposed SAN storage
The distributed storage solution shall support multiple data replicas on different servers/disks
The distributed storage solution shall maintain consistency across replicas
Please describe how replication is achieved in the proposed Distributed Storage solution
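As an illustrative aid, a minimal sketch of configuring the replica count and write-acknowledgement policy for a pool in a Ceph-based SDS, assuming Ceph is the chosen distributed storage (other SDS products expose equivalent settings):

    # Hedged sketch: create a 3-replica pool on a Ceph cluster reachable from this host.
    import subprocess

    def create_replicated_pool(pool: str, replicas: int = 3):
        # Create the pool, then set how many replicas are kept on different OSDs/servers
        subprocess.run(["ceph", "osd", "pool", "create", pool], check=True)
        subprocess.run(["ceph", "osd", "pool", "set", pool, "size", str(replicas)], check=True)
        # min_size controls how many replicas must acknowledge a write (consistency vs availability)
        subprocess.run(["ceph", "osd", "pool", "set", pool, "min_size", "2"], check=True)

    if __name__ == "__main__":
        create_replicated_pool("vnf-volumes")   # hypothetical pool name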
Network General Requirements
The network switches shall support 10G interfaces
The network switches shall support 25G interfaces
The network switches shall support 40G interfaces
The network switches shall support 100G interfaces
Please describe the number of interfaces of each type 10G/25G/40G/100G supported by each proposed switch model
The solution shall support a Clos architecture
Please provide details of the Clos architecture proposed for each deployment option.
The solution shall include separate network ports for management traffic on all switches and routers proposed
Rack-level systems should have redundant, field replaceable and hot swappable components such as power supplies, etc.
Please list any in-field replaceable components and indicate if they support hot-swap.
Please describe the optimal configurations supported by the solution for User Plane and Control Plane VNFs.
All Network Elements (NE) shall be labelled with identifying information including model number, serial number, and build
revision.
The TOR switches shall fit within standard DC racks.
The Network Fabric for both the Data Plane and the Management Plane architecture shall be based on a 2- or 3-tier industry-standard Clos architecture.
Dual-Stack VNFs shall be supported for tenant workloads
The solution shall include redundant physical network links
Data Network Requirements
The vendor shall state whether leaf switches are deployed in pairs to provide redundancy. Describe when this is utilized in the proposed solution.
The data network shall support L2 and L3 Clos.
Please state the number of ports supported on each data switch.
The data network switches shall support HW-VTEP to enable bandwidth performance features like SR-IOV
Please describe the oversubscription ratio used in dimensioning to maximize a port's usage.
All Border Leaf devices shall feature individual physical interfaces that can present either 100Gb/s or 4x10Gb/s or 4x25Gb/s interfaces, via suitable cables or patch panels, to enable connectivity of server and compute devices within the Border functional block.
All Internet Border routers shall feature individual interfaces with a minimum of 100Gb/s duplex line-rate bandwidth for
northbound connectivity to Border Leaf devices and external networks.
Each network element of the solution shall support IPv4
Each network element of the solution shall support IPv6
Each network element of the solution shall support dual stack IPv4/IPv6 on all interfaces
Each network element of the solution shall support configuration of DSCP on all interfaces
Each network element of the solution shall support link aggregation control protocol (802.3ad)
Each network element of the solution shall support static routing
Each network element of the solution shall support dynamic routing protocol OSPFv2
Each network element of the solution shall support dynamic routing protocol BGP
Each network element of the solution shall support dynamic routing protocol OSPFv3
Each network element of the solution shall support configuration of VLANs per 802.1Q.
Please specify the attributes used to configure different VLANs (e.g. destination IP address, APN, PLMN id ….)
Each network element of the solution shall support configuration of 802.1p prioritization on all interfaces.
Each network element of the solution shall support NETCONF/YANG
Each network element of the solution shall support OpenFlow
Each network element of the solution shall support VxLAN
Each network element of the solution shall support large MTU sizes and jumbo ethernet frames
Each network element of the solution shall support router advertisement, router solicitation, neighbor advertisement and
neighbor solicitation.
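For illustration, a minimal sketch of pushing a VLAN definition to a network element over NETCONF with the ncclient library; the host and credentials are placeholders and the payload is an OpenConfig-style example, since the exact YANG model and target datastore depend on the device:

    # Hedged sketch: configure VLAN 100 on a NETCONF/YANG-capable network element.
    from ncclient import manager

    VLAN_CONFIG = """
    <config>
      <vlans xmlns="http://openconfig.net/yang/vlan">
        <vlan>
          <vlan-id>100</vlan-id>
          <config>
            <vlan-id>100</vlan-id>
            <name>tenant-data</name>
          </config>
        </vlan>
      </vlans>
    </config>
    """

    with manager.connect(host="192.0.2.20", port=830, username="admin",
                         password="password", hostkey_verify=False) as nc:
        nc.edit_config(target="candidate", config=VLAN_CONFIG)
        nc.commit()   # devices without a candidate datastore would use target="running"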
Management Network Requirements
The management plane switch downlinks to the nodes shall be 10/100/1000BASE-T
Please explain how the Leaf Rack Management switch architecture is designed