Telecom 1
240-00-0153
Revision 1.0
© 2019
200 Ames Pond Drive | Tewksbury, MA 01876 | Office: 855.709.0701 | Fax: 978.482.0636
WWW.ALTIOSTAR.COM
vRAN Introduction Training Guide
Copyright
© Altiostar Networks, Inc. 2019. All rights reserved. No part of this document may be
reproduced in any form without the written permission of the copyright owner. The Altiostar
logo is a trademark and a service mark of Altiostar Networks, Inc. All other trademarks are the
property of their respective owners.
Disclaimer
The information contained in this document is the property of Altiostar Networks, Inc. and is
subject to change without notice as part of the company's continuous process and methodology
improvement. Altiostar reserves the right to make changes in the design of its products or
components as progress in engineering and manufacturing may warrant. It is the responsibility
of the customer to satisfy itself that the information contained herein is adequate and
sufficient for a user's particular application. It is the further responsibility of each user to ensure that
all applications of Altiostar products are appropriate and safe based on conditions anticipated
or encountered during use. This document does not create any additional obligation for
Altiostar and does not constitute additional warranties and representations.
List of Figures
Figure 1: Mobile Data Traffic per Month
Figure 2: Average Mobile Connection Traffic per Month
Figure 3: Mobile Operator Landscape
Figure 4: Distributed RAN
Figure 5: Path from Traditional RAN to Virtual RAN
Figure 6: CTO Choice Award
Figure 7: Rakuten Press Release
Figure 8: vRAN Architecture
Figure 9: vRAN Functional Split
Figure 10: Deployment flexibility with Ethernet Fronthaul
Figure 11: High-Level NFV Framework
Figure 12: Cisco NFVI Full POD
Figure 13: Cisco NFVI Micro POD
Figure 14: Detail Cisco NFVI Architecture for Rakuten MNO
Figure 15: Cisco NFVI Node Functions
Figure 16: Physical Reference Architecture for vDU and vCU to Cell Site
Figure 17: GC Site Logical Connectivity with Cell Site and CDC
Figure 18: Logical Deployment view of Rakuten Mobile Network with NFV Architecture
Figure 19: Rakuten vRAN Automation Architecture
Figure 20: VMs Connection to Management Network and Internal Network
Figure 21: Antenna and Flexible Integrated Antenna Site
Figure 22: AHEH Interface
Figure 23: Single RIU Deployment Example
Figure 24: RIU Bottom View
Figure 25: AHEH Port 1 Connectivity for AISG2.0
Figure 26: Macro eNB Management Architecture
Figure 27: eSON Architecture
List of Tables
Table 1: Antenna Technical Data
Table 2: AHEH Technical Data
Table 3: AHEH Interface Detail
Table 4: RIU Technical Specification
Table 5: RIU Connection Quick Reference
Table 6: Sample Mapping of the Information at OSS
This will challenge service providers’ ability to address this demand and satisfy the needs of
their high-value customers.
• RAN architecture evolution aided by NFV technology whereby baseband functions are
decoupled from underlying hardware and deployed on NFV infrastructure.
• Software programmability and Network automation enabled via NFV.
• Reduced time to market as new services can be quickly realized in the SW without the need
for hardware changes.
• Open interfaces to foster the development of a robust hardware and software ecosystem.
• C-RAN-like features for inter-site coordination over packet transport, with no need for
point-to-point dark fiber.
• Software upgradeable to 5G without the need to forklift underlying hardware.
• TIP: Telecom Infra Project - An initiative of Facebook and leading operators around the
world - https://telecominfraproject.com/
Each cell-site may have multiple 3rd party RRHs and Antennas. These are connected to a single
RIU over CPRI. Multiple such RIUs interface with a single instance of a vDU, which can be run
in an edge data center cloud. Multiple such vDU instances interface with a single instance of an
vCU, which can be run in a centralized data center cloud. Multiple such vCU instances interface
with a single instance of vEMS, which can be run in a centralized data center cloud. vDU and
vCU can be run in the same data center cloud.
Using CPRI, it interfaces with the RRH, and using Altiostar's IPC (Inter Process Communication,
over Ethernet) it interfaces with the vDU VNF.
The number of RIUs that can be served by a single instance of the vDU VNF is determined by the
deployment requirements and the virtual resources allocated to that vDU VNF instance.
1.7.4 Virtual Centralized Unit (vCU)
The vCU is a VNF (Virtual Network Function) that runs on an Intel Architecture-based COTS server
running a kernel-based virtual machine (KVM), managed by OpenStack Virtual Infrastructure
Management (VIM) software. The following are the major functions supported by the vCU:
Using Altiostar’s IPC (over IP/Ethernet), it interfaces with the vDU. It interfaces with EPC (Evolved
Packet Core) functions over the 3GPP-based S1 interface and with other eNBs over the 3GPP-based
X2 interface.
The number of vDU VNF instances that can be served by a single instance of the vCU VNF is
determined by the deployment requirements and the virtual resources allocated to that vCU VNF
instance.
2.1 Support for IPv4 and IPv6 for Backhaul and Midhaul Networks
Altiostar vRAN SW supports both IPv4 and IPv6 on all of its networks.
• Use of lower-cost equipment and shared use of infrastructure with fixed access
networks.
• Statistical multiplexing gains.
• Optimization of network performance through monitoring.
Below is an example calculation for the OTA bandwidth requirement for one instance of vCU:
For UL, in addition to the user-plane data, additional bandwidth must be provisioned to
accommodate management-plane traffic related to alarms, notifications, performance counter
reporting, debug and syslog reporting, and so on.
Based on the above calculation, the midhaul transport network bandwidth requirement can be
derived. Note that the above is for illustration purposes only; each operator may have its own
criteria for calculating the average sector/eNB throughput from the OTA peak sector/eNB
throughput.
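The detailed calculation is not reproduced here; the following Python sketch only illustrates how such an estimate can be structured. All figures (average per-sector throughputs, sectors per vCU, and the management-plane overhead factor) are example assumptions, not Altiostar values.

    # Illustrative midhaul bandwidth estimate (all numbers are example assumptions).
    avg_dl_per_sector_mbps = 120      # assumed average DL throughput per sector (Mbps)
    avg_ul_per_sector_mbps = 40       # assumed average UL throughput per sector (Mbps)
    sectors_per_vcu = 12              # assumed number of sectors served by one vCU instance
    mgmt_plane_overhead = 0.05        # assumed 5% UL overhead for alarms, counters, syslogs

    dl_mbps = avg_dl_per_sector_mbps * sectors_per_vcu
    ul_mbps = avg_ul_per_sector_mbps * sectors_per_vcu * (1 + mgmt_plane_overhead)

    print(f"Example midhaul DL requirement: {dl_mbps:.0f} Mbps")
    print(f"Example midhaul UL requirement (incl. management plane): {ul_mbps:.0f} Mbps")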
vDU to OTA transmission latency budget = (edge cloud data center switching delay + jitter) +
fronthaul transport delay + RIU processing time + RRH processing time = 250 usec
vDU to OTA transmission latency budget = (edge cloud data center switching delay + jitter) +
fronthaul transport delay + 45 usec (RIU processing time) + 70 usec (example RRH processing
time) = 250 usec
[(edge cloud data center switching delay + jitter) + fronthaul transport delay] budget = 250 – 45
– 70 = 135 usec
The above formula can be used to verify whether a deployment can meet the fronthaul latency
requirement. For example, if the “edge cloud data center switching delay + jitter” is within
35 usec, then for fiber-based fronthaul up to 20 km of distance can be supported (transmission
delay through the fiber for a given refractive index = 5 usec/km). Alternatively, if the “edge cloud
data center switching delay + jitter” is known, the maximum length of the fronthaul transport
can be determined.
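As a quick arithmetic check of the example above, the following Python sketch derives the maximum fronthaul fiber length from the 250 usec budget; the processing times, the 5 usec/km fiber delay, and the 35 usec switching-plus-jitter figure are the example values quoted in the text.

    TOTAL_BUDGET_US = 250          # vDU to OTA transmission latency budget (usec)
    RIU_PROCESSING_US = 45         # example RIU processing time (usec)
    RRH_PROCESSING_US = 70         # example RRH processing time (usec)
    FIBER_DELAY_US_PER_KM = 5      # fiber propagation delay for the given refractive index

    def max_fiber_length_km(switching_plus_jitter_us):
        """Maximum fronthaul fiber length for a given edge-cloud switching delay + jitter."""
        transport_budget_us = (TOTAL_BUDGET_US - RIU_PROCESSING_US - RRH_PROCESSING_US
                               - switching_plus_jitter_us)
        return transport_budget_us / FIBER_DELAY_US_PER_KM

    print(max_fiber_length_km(35))   # 20.0 km, matching the example above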
Below is an example calculation for the OTA bandwidth requirement for one instance of vDU:
The following table provides capacity information for one eNB, which consists of a single instance
of vCU and two instances of vDU.
Parameter                                                  Value
Number of eNBs                                             1
Number of vCU VNF instances                                1
Number of vDU VNF instances                                2
Total number of sectors supported                          12
Max number of RRC connected users per sector               700
Max number of VoLTE users per sector                       256
Sector throughput with max number of RRC connected UEs     DL: ~120 Mbps, UL: ~40 Mbps
2.8.2 Resource Requirements: vCU
Each instance of vCU will require 6 pCPUs to support up to 12 sectors of 4T4R 20 MHz FDD
radio.
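A minimal sizing sketch based on the figure above (6 pCPUs per 12-sector vCU instance); the 144-sector input is just an example.

    import math

    SECTORS_PER_VCU = 12   # 4T4R 20 MHz FDD sectors supported by one vCU instance
    PCPUS_PER_VCU = 6      # pCPUs required per vCU instance

    def vcu_pcpus(total_sectors):
        """pCPUs needed for the vCU instances serving a given number of sectors."""
        return math.ceil(total_sectors / SECTORS_PER_VCU) * PCPUS_PER_VCU

    print(vcu_pcpus(144))  # e.g. 144 sectors -> 12 vCU instances -> 72 pCPUs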
• OpenStack Concepts
Cisco NFVI is deployed for hosting different VNFs. The NFVI architecture is delivered using points
of delivery (PODs). In simple terms, a POD can be defined as a logical instantiation of workloads
with similar affinity or functions. The NFVI software is supplied by Cisco and is referred to as Cisco
Virtualized Infrastructure Manager (CVIM). The NFVI hardware is from Quanta and is offered in
different SKUs.
The Cisco NFVI deployed for this project supports the following types of PODs:
• Full POD Architecture
• Micro POD Architecture
Full POD Architecture: This is a fully scalable POD with dedicated OpenStack controller, compute,
and storage nodes. The number of compute nodes can be scaled depending on requirements.
Micro POD Architecture: The Micro POD uses three nodes with combined controller, compute,
and storage functions. Any additional nodes function as compute nodes.
• Rebuild – A process that removes all data on the server and replaces it with the specified
image. The server ID and the IP addresses remain the same for the rebuilt instance.
• Resize – A feature of Nova that allows you to convert the flavor of an existing server to a
different flavor.
As explained in the above section, Cisco VIM supports three types of high-level POD
architecture: Full standard POD architecture, Hyper-converged architecture, and Micro POD
architecture. We selected Full standard POD architecture for the CDC and Micro POD
architecture for the GC.
After the installation finishes, Cisco VIM provides OpenStack services using Docker™ containers
to allow for OpenStack services and pod management software updates. The following diagram
shows the functions and services managed by each Cisco NFVI node.
Note: For more details on CDC and GC NFVI Architecture refer to Cisco HLD.
The Physical Reference Architecture for vDU and vCU and the interface information for each
connectivity point are shown below.
Figure 16: Physical Reference Architecture for vDU and vCU to Cell Site
The GC Site Logical Connectivity with Cell Site and CDC is summarized below.
Figure 17: GC Site Logical Connectivity with Cell Site and CDC
The Group Center (GC) NFVI Architecture is described as follows; there are four types of GC
SKUs:
• GC SKU1: Runs the vDU VNF, which requires FPGA acceleration.
• GC SKU2: Runs the other VNFs, which do not require FPGA acceleration; the vCU runs on
this GC SKU2.
• GC SKU4: Runs both the vDU VNF and the other VNFs, acting as a standby compute node.
There are 12 types of GC sites depending on how many cell sites should be handled by the GC.
The Rakuten Mobile Network is a fully virtualized Telecloud that provides a zero-footprint,
fully automated, software-defined RAN network to serve tens of millions of subscribers.
Rakuten is working with partners such as Altiostar, Nokia, Cisco, Innoeye, Quanta, and Intel
on designing, developing, and integrating this new network for launch in 2019.
Figure 18: Logical Deployment view of Rakuten Mobile Network with NFV Architecture
• The Rakuten Mobile Network architecture is hierarchically divided into Central Data
Center (CDC), Group Center (GC) and Cell Site. Between GC and CDC, there’s one
additional level of data center, Zone Center (ZC), which is mainly used for aggregation
where routing functions will be deployed, and hence it’s not in scope of this document.
• CDC will host packet core from Cisco, IMS from Nokia, NFV management from Cisco, OSS
from InnoEye, BSS from NEC, RAN EMS from Altiostar and other signaling platforms.
• Group Center (GC) will host vRAN functions, including Virtualized Distributed Unit (vDU)
and Virtualized Central Unit (vCU) from Altiostar. In the future, it can also provide zero-latency
services with massive capacity to host next-generation applications such as AR, VR, real-time
gaming servers, intelligent caching, and internet offloading.
• Cell Site will host RRH from Nokia and RIU from Altiostar.
• In each data center, all network functions are deployed as VNF instances on one
horizontal NFVI.
• All services will be deployed and managed from a single VNF Orchestrator using the NSO
from Cisco. This will require deployment of an automated workflow for each service so that the
NSO can manage the lifecycle of these services.
Rakuten NFV C-RAN solution is designed to use commercial off-the-shelf Intel based hardware.
The infrastructure management SW requires that the CPU is from the Intel Xeon family.
Rakuten NFV C-RAN solution requires 10GE Top-of-Rack (ToR) switches to establish the
infrastructure and tenant networks. It also requires 1GE switches to establish the different
management networks. The main requirement for these switches is that they are managed
switches; the number of ports per switch depends on the number of compute nodes being
provisioned. As an example, with a cluster of 20 compute servers and two control servers, we
recommend a ToR switch with 48 10GE ports plus 4 40GE ports for the uplink.
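A rough Python sketch of the port-count arithmetic behind this recommendation; the number of 10GE links per node is an assumption for illustration and depends on the actual NIC and bonding layout.

    compute_nodes = 20          # example cluster size from the text
    control_nodes = 2
    links_per_node = 2          # assumed 10GE links from each node to the ToR switch
    uplink_40ge_ports = 4       # 40GE uplink ports recommended in the text

    ports_10ge_needed = (compute_nodes + control_nodes) * links_per_node
    print(f"10GE ports needed: {ports_10ge_needed} of 48 "
          f"(plus {uplink_40ge_ports} x 40GE uplink ports)")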
VNF Manager supports Vi-VNFM interface to the virtual infrastructure manager (VIM) based on
the ETSI NFV MANO specifications.
vCU VNF is a single VM VNF. vCU is connected to the following networks:
• Management Network
• Backhaul Network
• Midhaul Network
Management network connection uses a VirtIO-type port, whereas connectivity to the Backhaul
and Midhaul networks is over SR-IOV ports.
vEMS is a 4 VM VNF, which has 2 Access Node (AN) VMs running in Active/Standby mode and 2
Data Node (DN) VMs running in Active/Standby mode.
These VMs are connected to a management network and another internal network as shown
below.
Altiostar also supports direct integration with G-VNFM of the operator for lifecycle
management of vCU VNF.
In such a scenario, the sectors in the failed Compute Node will not be reachable which leads to
service loss to the customer. During this time, the disaster recovery process is enabled and
creates new VMs on a different Compute Node and the affected sectors come up without
operator intervention to minimize the service downtime for the customers.
The total time for the VMs to be created on a different Compute Node and restoration of full
service on the affected sector is around 5 minutes.
4.1 Antenna
The RRH is tightly attached to the antenna with TCC block set.
Parameter                      16 dBi Antenna       17 dBi Antenna
Frequency Range (MHz)          1730 - 1845          1730 - 1845
Impedance                      50 Ω                 50 Ω
Polarization Type              Dual, Slant ±45˚     Dual, Slant ±45˚
Gain (dBi)                     16.0                 17.0
3 dB Beam-Width, Horizontal    62˚ ± 5˚             62˚ ± 5˚
3 dB Beam-Width, Vertical      10.5˚ ± 1.5˚         7.0˚ ± 1˚
Electrical Down Tilt Range     0˚ - 15˚ / 1˚ step   0˚ - 12˚ / 1˚ step
Table 1: Antenna Technical Data
Parameter                        AHEH
Supported frequency bands        3GPP Band 3
Frequencies                      DL 1805 - 1860 MHz, UL 1710 - 1765 MHz
Number of TX/RX paths/pipes      4/4
Instantaneous bandwidth (IBW)    55 MHz
Occupied bandwidth (OBW)         55 MHz
Supported LTE bandwidths         LTE 5 MHz, 10 MHz, 15 MHz, 20 MHz
Output power                     Max 40 W, Min 8 W
Optical ports                    2 x RP3-01/CPRI, 9.8 Gbps
ALD control interfaces           AISG2.0 and AISG3.0 from ANT1, 2, 3, 4 and RET
                                 (power supply on ANT1 and ANT3)
Other interfaces                 EAC (MDR26)
Tx monitor port                  No
Installation                     Mounted on antenna
Ingress protection class         IP65
Salt fog                         Telcordia GR-487-CORE
Earthquake                       ETSI EN 300 019-2-4, Class 4.1, Zone 4
Wind driven rain                 MIL-STD-810G, Method 506.5, Procedure 1
Operational temperature range    -40°C to 55°C
Surge protection                 Class II, 5 kA
Table 2: AHEH Technical Data
AHEH interfaces are shown in the figure above, and detailed information is provided in the
table below.
The RIU can be mounted on a wall using the supplied mounting bracket assembly and mounting
plate. The installer is responsible for supplying and installing antennas and the associated cables
and materials for site installation, including the GPS, RF, CPRI, alarm, power, and ground
cables/connectors, along with SFPs and other site materials as required.
Consumption
Environmental
Working temperature            -40° to 55° C (-40° to 131° F)
Operating altitude             -60 to 3,000 m (-197 to 9,843 ft)
Relative humidity              5 to 100%
Cooling                        Convection (fanless)
Table 4: RIU Technical Specification
The following connectors are located on the bottom of the RIU unit.
Refer to the table below for details of the connectors used in the RIU unit.
RF port 1 of the RRH supplies the RET control capability via the AISG2.0 control cable.
Ensure that the following details are available to create the SC Profiles for an eNB at the vEMS:
For more details on the self-configuration procedure, refer to the Altiostar Management
Integration Guide.
The OSS is responsible for instantiating EMS instances in the CDC. Rakuten currently has two
CDCs, one for the EAST region and one for the WEST region. EMS instances in a CDC manage
eNBs in the same geographic region as the CDC they belong to. All the eNBs in a particular GC
site will be managed by the same EMS instance. An EMS instance can manage eNBs in more
than one GC site. An EMS instance can manage GC sites (eNBs) from multiple prefectures, as
long as the prefectures belong to the same region. Similarly, GC sites within a prefecture can be
mapped to multiple EMS instances, provided the same GC site is not mapped to more than one
EMS instance.
Note: The eNB management architecture for small (micro) cells will be slightly different, as those
eNBs do not run in GC sites. Instead, small cell eNBs will run in CDCs. There will be separate EMS
instances to manage small cell eNBs.
Rakuten deploys different types of GC sites (for example, Type A to F and Type XA to XF). These
sites are planned for different cell/sector capacities based on hardware and fiber connections
to cell sites. For example, a type A GC is planned for 144 sectors and a type XA for 288 sectors.
Note: These are just sample GC type names and sector capacities. Please get the up-to-date
values for GC site types and sector capacities from Rakuten.
Before performing the vEMS commissioning procedures, ensure that the radio network
planning exercise is complete and that the OSS is provisioned with the following information:
• GC site details
• Mapping of eNBs and the corresponding GC sites managing the eNBs
• Mapping of GC sites and the corresponding vEMS hostnames managing the GC sites
• DHCP servers
Refer to the following sample mapping of the information at the OSS:
EMS (hostname of the vEMS)   GC Site (GC Site ID)   eNBs (20-bit eNB IDs)
vEMS-1                       GC Site – 1            1, 2, 3, 4, 5, 6, 7, 8, 9, 10
                             GC Site – 2            11, 12, 13, 14, 15, 16, 17, 18, 19, 20
                             GC Site – 3            21, 22, 23, 24, 25, 26, 27, 28, 29, 30
                             GC Site – 4            31, 32, 33, 34, 35, 36, 37, 38, 39, 40
                             GC Site – 5            41, 42, 43, 44, 45, 46, 47, 48, 49, 50
vEMS-2                       GC Site – 6            51, 52, 53, 54, 55, 56, 57, 58, 59, 60
                             GC Site – 7            61, 62, 63, 64, 65, 66, 67, 68, 69, 70
                             GC Site – 8            71, 72, 73, 74, 75, 76, 77, 78, 79, 80
                             GC Site – 9            81, 82, 83, 84, 85, 86, 87, 88, 89, 90
                             GC Site – 10           91, 92, 93, 94, 95, 96, 97, 98, 99, 100
vEMS-3                       GC Site – 11           101, 102, 103, 104, 105, 106, 107, 108, 109, 110
                             GC Site – 12           111, 112, 113, 114, 115, 116, 117, 118, 119, 120
                             GC Site – 13           121, 122, 123, 124, 125, 126, 127, 128, 129, 130
                             GC Site – 14           131, 132, 133, 134, 135, 136, 137, 138, 139, 140
                             GC Site – 15           141, 142, 143, 144, 145, 146, 147, 148, 149, 150
Table 6: Sample Mapping of the Information at OSS
The OSS is responsible for assigning a particular GC site to an EMS instance. The OSS needs to
know the maximum sector capacity of the GC sites. The OSS will find an EMS instance that can
accommodate those sectors without exceeding the maximum sector capacity of the EMS
instance. When binding GC sites to EMS instances, only up to 90% of an instance's sector
capacity is used; this leaves room for future expansion of some GC sites. The OSS also needs to
find an EMS instance that can serve the GC based on the region the GC belongs to (EAST or
WEST); the region of the GC is determined from the region of the prefecture where the GC is
located. If there is no EMS instance in the CDC that can take the GC site, the OSS will instantiate
a new EMS instance in the CDC and assign the GC site to it.
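The assignment logic described above can be summarized in the following Python sketch; the class and field names and the capacity of a newly instantiated EMS are illustrative, while the same-region check and the 90% headroom rule follow the text.

    from dataclasses import dataclass
    from typing import List

    CAPACITY_HEADROOM = 0.90  # only 90% of an EMS instance's sector capacity is used for binding

    @dataclass
    class EmsInstance:
        hostname: str
        region: str                 # "EAST" or "WEST"
        max_sectors: int
        assigned_sectors: int = 0

    @dataclass
    class GcSite:
        site_id: str
        region: str                 # derived from the prefecture the GC belongs to
        max_sectors: int

    def assign_gc_site(gc: GcSite, ems_pool: List[EmsInstance]) -> EmsInstance:
        """Bind a GC site to an EMS instance in the same region, respecting the 90% rule."""
        for ems in ems_pool:
            if (ems.region == gc.region and
                    ems.assigned_sectors + gc.max_sectors <= ems.max_sectors * CAPACITY_HEADROOM):
                ems.assigned_sectors += gc.max_sectors
                return ems
        # No suitable EMS instance in the CDC: instantiate a new one (capacity here is illustrative).
        new_ems = EmsInstance(hostname=f"vEMS-{len(ems_pool) + 1}", region=gc.region,
                              max_sectors=1440, assigned_sectors=gc.max_sectors)
        ems_pool.append(new_ems)
        return new_ems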
Based on the information in the table above, the OSS also derives the mapping of eNBs to the
corresponding vEMS hostname managing the eNBs. Upon commissioning the vEMS, all the
eNBs to be managed by this vEMS need to be pre-provisioned in the vEMS by the OSS. For the
steps involved in vEMS commissioning, refer to the following sections.
Note: The OSS needs to configure the FQDN information of the vEMS managing the GC sites on
the DHCP server. For more information, contact the Cisco/InnoEye teams working on the
Automation Project.
• Type of a particular GC site is changed, which changes the max-sector of that GC site
• max-sector value of a particular GC-site type is changed
• New GC site type is added
1. Installation at the radio site is completed (this involves installing the radios and the RIU
units at the site) and the equipment is powered on.
2. On SW boot-up, the RIU identifies its own serial number.
3. It also sends out a DHCPv6 request with the device type set to Macro. The DHCPv6 response
received from the DHCPv6 server contains the EMS FQDN specific to this GC site (the
DHCPv6 server was configured with this FQDN as part of the GC site infrastructure bring-up
by the NSO).
4. The RIU now sends a Power up notification to the EMS with its identity (serial number).
EMS forwards the same notification to the OSS. If this RIU is the first RIU to be powered up
among those to be managed by a given eNB instance, then that eNB instance is not yet running.
5. EMS sends RIU Initialized Notification with Procedural state as “Initialized” to OSS.
Procedural state “Initialized” represents that EMS is not managing any RIU with this
serial number.
6. On reception of this notification, the OSS identifies the eNB ID that is to manage this RIU and
triggers eNB network service instantiation (6) towards the NSO. To do this, it needs the
following parameters:
needs the following parameters:
a. eNB ID (corresponding to the RIU serial number) – part of the radio network
planning output provisioned into the OSS.
b. GC site ID – part of the Radio network planning output provisioned into OSS.
c. EMS FQDN (to be provided as part of vCU VNF instantiation)
d. vCU FQDN (to be provided as part of vDU instantiation)
e. All Static IPv6 addresses for VNF interfaces and FQDNs assigned for the VNF.
The NSO then triggers the relevant VNF instantiations, that is, vCU instantiation (6a) followed
by vDU instantiation (6d).
On completion of instantiation, it gets the VNF record with the assigned IPv6 addresses and
hostnames. It then uses this information to update the DNS server with the DNS records
for the vCU (6b) and the vDU (6d). There is a standard way to convert hostnames to FQDNs
by adding suitable prefixes/suffixes.
7. On successful instantiation, the vDU registers with the vCU VNF using the vCU FQDN
provided to it during its instantiation by the ESC.
8. The vCU sends an eNB up notification to the EMS.
9. EMS starts self-configuration process and sends “SC process created notification” to
OSS.
10. EMS sends “Request eNB license info (eNB ID)” to OSS and in turn receives license
information from OSS.
11. EMS generates eNB license and sends eNB license to vCU.
12. EMS sends “Request eNB configuration (eNB ID)” to OSS and receives eNB configuration
from OSS.
13. EMS sends “Set Configuration eNB” to vCU.
14. vCU in turn sends “vDU configuration” to vDUs.
a. vCU sends “Second Initial Boot Notification” to EMS indicating to EMS that the
configuration is successfully applied.
b. EMS collects the inventory and config data from the vCU and updates the EMS database.
EMS then sends an activation request to the vCU.
c. The EMS sends “Notify SC Process completed” to OSS to indicate that the
commissioning process is concluded.
15. The EMS now responds to the Power up notification from the RIU with the identity
(FQDN) of the vDU that will manage the RIU and its connected Radios.
The RIU then initiates a connection towards this vDU and the vDU then configures the RIU and
radios.
• PM File Processing
• The eNB will send the PM files at 15-minute intervals. EMS will calculate KPIs and
generate PM reports (3GPP format).
• 3GPP PM reports and PM data files received from the eNB will be maintained in EMS for
2 days only.
• EMS will not store any processed PM data in the database.
• Around 1,000 performance counters per Network Element.
• Fault Management
• 0.005 alarms per second per NE.
• Alarms will be forwarded to only one target destination (OSS).
• Only last 30 days of notifications are stored in EMS.
• vEMS Backup
• vEMS backs up configuration and inventory data on a daily basis. Configuration is a full
backup, whereas data (notification or alarm data) is backed up incrementally, with each
incremental backup containing the previous day's data only.
• Only the latest two backup files are maintained in EMS.
• The last 30 days of operation audit information will be stored in EMS.
• 300 UEs can be traced at any instant (3GPP + Vendor specific parameters)
• 20 simultaneous users
• The resource requirements are as follows:
Node          CPU    RAM      Storage
App Node 1    20     20 GB    900 GB
App Node 2    20     20 GB    900 GB
Data Node 1   20     20 GB    3.5 TB
Data Node 2   20     20 GB    3.5 TB
Altiostar’s SON ANR offers the following functionalities (automatic, in closed-loop fashion):
1. Neighbor discovery and addition (both neighbor cell and X2 neighbor).
2. Neighbor parameter updates as part of changes in the network (both neighbor cell and X2
neighbor).
3. Neighbor deletion based on aging/inactivity.
Altiostar’s implementation of the ANR function uses a hybrid architecture, with responsibilities
and decision making shared between the following two modules:
1. ANR module at vCU
2. EMSON module at EMS
Altiostar supports 512 neighbor cells per frequency for up to nine frequencies for a given cell
and 256 X2 entries for a given eNB.
Altiostar SON PCID uses periodic PCID resolution of existing neighbors to detect PCID
confusion. Since using measurement reports for resolving PCIDs consumes bandwidth and
affects UE throughput, the UEs used for such an operation should be chosen judiciously so that
an existing conflict situation can be identified swiftly with the fewest UEs.
Altiostar SON PCID utilizes X2 messages to warn the operator of a possible PCID collision. If an eNB
sees that any of its X2 neighbors has a serving sector with the same PCID as one of its own
serving sectors, the eNB raises a PCID Conflict alarm. The operator is expected to verify whether
there really is a PCID collision and take appropriate remedial action.
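The X2-based check described above amounts to comparing the eNB's own serving-sector PCIDs with those of each X2 neighbor. A minimal Python sketch, assuming simple dictionaries of served-cell information (the data structures are illustrative):

    from typing import Dict, List

    def pcid_collision_alarms(own_cells: Dict[str, int],
                              x2_neighbors: Dict[str, Dict[str, int]]) -> List[str]:
        """Report PCID collisions between this eNB's serving sectors and those of its X2 neighbors.

        own_cells:    {cell_id: pcid} for this eNB's serving sectors
        x2_neighbors: {neighbor_enb_id: {cell_id: pcid}} as learned over X2
        """
        alarms = []
        own_pcids = {pcid: cell for cell, pcid in own_cells.items()}
        for enb_id, cells in x2_neighbors.items():
            for cell_id, pcid in cells.items():
                if pcid in own_pcids:
                    alarms.append(f"PCID Conflict: own cell {own_pcids[pcid]} and cell {cell_id} "
                                  f"of X2 neighbor {enb_id} both use PCID {pcid}")
        return alarms

    # Example: PCID 101 is used both by an own sector and by a neighbor's sector.
    print(pcid_collision_alarms({"cell-1": 101, "cell-2": 205},
                                {"eNB-77": {"cell-A": 101, "cell-B": 333}}))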
The root sequences are grouped into 838 logical root sequences, and the first index of the
allocated RSI block is broadcast in every cell. The aim of assigning RSIs is that each cell must
have a different Root Sequence Index (RSI) to avoid, as far as possible, the reception of false
preambles in adjacent eNBs. It is assumed that the RF planning tool allocates an initial RSI to
every cell such that neighboring cells have unique PRACH preambles.
The objective of the RACH Optimization algorithm is to automatically detect the possibility of an
RSI collision and execute a resolution across neighboring cells. In addition, it also calculates the
Zero Correlation Zone (ZCZ) index, based on the size of the cell, to determine the required
number of RSIs to generate 64 preambles in each cell.
To the extent possible, this feature will assign non-overlapping blocks of RSIs to immediate
neighbor cells. However, if the available pool is already exhausted, the feature will attempt to
avoid PRACH-PRACH frequency resource overlap between immediate neighbors. This is
accomplished by considering the neighbor cells’ frequency offset values upon RSI assignments.
At registration with the eSON (centralized SON server) system, an LTE cell must provide its
initial RSI assignment. In the event that an RSI conflict (defined as a partial or complete overlap
of RSI blocks between two immediate neighbor cells) is detected, the feature is activated only
at the cell with the higher ECGI, i.e. a new RSI block will be assigned if an RSI block with no
overlap is available.
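A simplified Python sketch of the conflict rule described above: an RSI block is treated as a contiguous range of logical root sequence indices (with wrap-around over the 838 logical sequences, an assumption for the sketch), a conflict is any overlap between immediate neighbors, and only the cell with the higher ECGI is reassigned.

    from typing import Set, Tuple

    NUM_LOGICAL_ROOT_SEQUENCES = 838

    def rsi_block(start_rsi: int, num_rsis: int) -> Set[int]:
        """Logical root sequence indices used by a cell (assumed contiguous with wrap-around)."""
        return {(start_rsi + i) % NUM_LOGICAL_ROOT_SEQUENCES for i in range(num_rsis)}

    def rsi_conflict(cell_a: Tuple[int, int], cell_b: Tuple[int, int]) -> bool:
        """True if the (start_rsi, num_rsis) blocks of two immediate neighbors overlap."""
        return bool(rsi_block(*cell_a) & rsi_block(*cell_b))

    def cell_to_reassign(ecgi_a: int, ecgi_b: int) -> int:
        """Only the cell with the higher ECGI gets a new, non-overlapping block."""
        return max(ecgi_a, ecgi_b)

    # Example: two neighbors whose 10-RSI blocks overlap by two indices.
    a, b = (100, 10), (108, 10)
    if rsi_conflict(a, b):
        print("RSI conflict; reassign cell with ECGI", cell_to_reassign(26000001, 26000002))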
each Random Access Response (RAR) based on which it will calculate the cell’s ZCZ index. If the
calculated value is different from the current value, the feature will assign a new ZCZ index to
the cell.
With a higher ZCZ index, a larger number of RSIs is required to generate the cell’s 64 preambles.
Accordingly, in the event the calculated ZCZ index is larger than the current index, the feature
will recalculate the cell’s RSI block. Conversely, if the calculated ZCZ index is smaller than the
current index, the feature will release the additional RSIs, which can then be used for further
allocations to other cells.
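To make the ZCZ-to-RSI relationship concrete, the sketch below computes how many RSIs are needed for 64 preambles, assuming the standard LTE unrestricted-set cyclic shifts (Zadoff-Chu length 839, N_CS values from 3GPP TS 36.211); this is background arithmetic, not Altiostar's specific algorithm.

    import math

    # N_CS values for PRACH preamble formats 0-3, unrestricted set, ZCZ configs 1..15
    # (3GPP TS 36.211).
    NCS_UNRESTRICTED = [13, 15, 18, 22, 26, 32, 38, 46, 59, 76, 93, 119, 167, 279, 419]

    def rsis_needed(zcz_config_index: int) -> int:
        """Number of RSIs needed for 64 preambles (ZCZ configs 1..15, unrestricted set)."""
        n_cs = NCS_UNRESTRICTED[zcz_config_index - 1]
        preambles_per_root = 839 // n_cs        # Zadoff-Chu sequence length is 839
        return math.ceil(64 / preambles_per_root)

    for zcz in (5, 10, 12):
        print(f"ZCZ config {zcz}: {rsis_needed(zcz)} RSIs needed for 64 preambles")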
• Load reporting
• Load balancing action based on handovers
• Adapting handover and/or reselection configuration
The goal of the eSON MLB feature is to offload excess traffic from a highly loaded home cell to its
lightly loaded neighbor cells by tuning the home cell's Cell Individual Offsets (CIOs). Once the
load of the home cell decreases to a level at which it is no longer considered highly loaded, the
algorithm reverts the CIOs to the levels they had prior to the MLB operation.
2. Collect home cell’s Capacity Value (CV*) on a periodic basis and exchange the load
information with neighbor cells.
3. Change the state and CIO value of each cell pair (home cell and one of its neighbor cells)
depending on the current state and load information.
4. When the CIO is changed at the home cell, a corresponding change may be made to the
neighbor cell's CIO, depending on the current guard distance between the handover
boundaries of the two cells. This is called the pairwise operation (a simplified sketch follows
below).
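A minimal sketch of one iteration of this pairwise CIO adjustment; the load threshold, step size, minimum guard distance, and the simple guard-distance model are illustrative values, not Altiostar defaults.

    HIGH_LOAD = 0.8        # illustrative load threshold for "highly loaded"
    CIO_STEP_DB = 1.0      # illustrative CIO adjustment step (dB)
    MIN_GUARD_DB = 2.0     # illustrative minimum guard distance between handover boundaries (dB)

    def adjust_cio_pair(home_load, neighbor_load, home_cio_db, neighbor_cio_db, guard_db):
        """One iteration of the pairwise CIO adjustment for a (home, neighbor) cell pair."""
        if home_load >= HIGH_LOAD and neighbor_load < HIGH_LOAD:
            home_cio_db += CIO_STEP_DB        # push edge traffic toward the lightly loaded neighbor
            guard_db -= CIO_STEP_DB           # the two handover boundaries move closer together
            if guard_db < MIN_GUARD_DB:       # pairwise operation: also adjust the neighbor's CIO
                neighbor_cio_db -= CIO_STEP_DB
                guard_db += CIO_STEP_DB
        return home_cio_db, neighbor_cio_db, guard_db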
A heterogeneous network is comprised of cell types with different sizes, transmit powers and
bandwidths. Load balancing in such a network has to take the capacity disparity between these
cell types into account. In SON terminology this is reflected in what is referred to as the
Capacity Class Value. As an example, if cell type A with the largest bandwidth, highest transmit
power, fastest CPU and largest compute/transport resources has a capacity of 100%, the other
cell types' capacities are scaled down (normalized) with respect to cell type A. For example, a
20 MHz cell has four times the capacity of a 5 MHz cell, so the capacity class value of the former
cell type is defined as 100% and the capacity class value of the latter cell type is defined as 25%.
Thus, with a uniform traffic distribution and uniform deployment of the two cell types across
the network, the desired load split of 80%-20% can be achieved.
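The bandwidth part of this example reduces to a simple normalization, sketched below (transmit power and compute/transport resources would scale the value further).

    def capacity_class_value(cell_bw_mhz, reference_bw_mhz=20.0):
        """Capacity Class Value as a percentage of the reference (largest) cell type,
        considering bandwidth only, as in the example above."""
        return 100.0 * cell_bw_mhz / reference_bw_mhz

    print(capacity_class_value(20))  # 100.0 -> reference cell type
    print(capacity_class_value(5))   # 25.0  -> a 5 MHz cell has one quarter of the capacity
    # With uniform traffic and deployment of the two cell types, the target load split is
    # proportional to 100:25, i.e. 80% / 20%.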
The eSON intra-frequency MLB algorithm is triggered when the home cell's new load information
is received. Upon receiving the home cell's load information, for each neighbor cell the algorithm
decides on:
• The state of the cell pair (home, neighbor), and
• The modifications to the relevant mobility parameters, that is, the CIOs.
MRO functionality is divided into two blocks: the MRO monitoring function and the MRO corrective
action function. The MRO monitoring function monitors and detects radio link failures that happen
due to a too-early handover, a too-late handover, or a handover to a wrong cell. It also detects
ping-pong handovers, which can result in inefficient use of network resources.
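A simplified classification sketch of these event types, assuming the usual inputs (whether an RLF occurred, how long after the handover, and where the UE re-established) are available from the RLF and handover reports; the thresholds and exact criteria are illustrative, not Altiostar's implementation.

    from typing import Optional

    T_STORE_UE_CONTEXT_S = 1.0   # illustrative "shortly after handover" threshold
    PING_PONG_WINDOW_S = 5.0     # illustrative window for detecting ping-pong handovers

    def classify_mobility_event(rlf: bool, time_since_ho_s: Optional[float],
                                source_cell: str, target_cell: str,
                                reestablish_cell: Optional[str],
                                time_back_in_source_s: Optional[float]) -> str:
        """Simplified MRO classification of a single mobility event."""
        if rlf:
            recent_ho = time_since_ho_s is not None and time_since_ho_s <= T_STORE_UE_CONTEXT_S
            if recent_ho and reestablish_cell == source_cell:
                return "too-early handover"      # RLF right after HO, UE re-establishes in the source
            if recent_ho and reestablish_cell not in (source_cell, target_cell):
                return "handover to wrong cell"  # RLF right after HO, UE re-establishes in a third cell
            return "too-late handover"           # RLF before the handover could be executed
        if time_back_in_source_s is not None and time_back_in_source_s <= PING_PONG_WINDOW_S:
            return "ping-pong handover"          # successful HO but the UE bounces straight back
        return "normal"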
The green subbands are recommended for cell edge UE transmissions possibly with power
boosting, and the red subbands (and all remaining unused green subbands) are recommended
for cell center UE transmissions.
Essentially, the feature dynamically provides a protected set of subbands (‘green’ subbands)
with higher SINR levels for the cell edge UEs. This in turn results in higher throughput and
improved cell edge spectral efficiency.
It is important to note that the number of green subbands is also dynamically adjusted based
on the cell load and the real-time feedback on the UE throughputs. As the cell edge load
increases, a higher number of green subbands is required; conversely, as the cell edge load
decreases, the number of green subbands is lowered to provide more protection to the
neighboring cells.
The required inputs with their respective desired periodicities are provided below.
Input Parameters
Input parameter          Default desired report period
Subband CQI report       80 ms per UE
Wideband CQI report      80 ms per UE
UE RSRP measurement      5.12 s per UE
UE throughput            1 s per UE
Accordingly, a subband mask is calculated with a desired periodicity of once every 2 seconds.
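A simplified sketch of how a green-subband mask could be derived from these inputs; the subband count, the load-to-green-count rule, and the CQI-based ranking are illustrative and omit the UE-throughput feedback loop described above.

    from typing import List

    def green_subband_mask(edge_ue_count: int, max_edge_ues: int,
                           edge_avg_subband_cqi: List[float], num_subbands: int = 13) -> List[bool]:
        """Pick how many subbands to protect from the cell-edge load, then protect the
        subbands with the best average CQI reported by cell-edge UEs (illustrative rule)."""
        # More cell-edge load -> more green subbands (capped at half the subbands here).
        load_fraction = min(1.0, edge_ue_count / max(1, max_edge_ues))
        num_green = max(1, round(load_fraction * num_subbands / 2))
        # Rank subbands by cell-edge average CQI and protect the best ones.
        ranked = sorted(range(num_subbands), key=lambda sb: edge_avg_subband_cqi[sb], reverse=True)
        green = set(ranked[:num_green])
        return [sb in green for sb in range(num_subbands)]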
There are different types of MDT, such as Immediate MDT, Logged MDT, RCEF, and RLF.
The MDT procedure activation in E-UTRAN is divided into two categories: Management MDT and
Signaling MDT. Management MDT activation and deactivation is managed by the EMS, and
Signaling MDT activation and deactivation is managed by the MME.
6.7.5 Management MDT
Management MDT activation from the EMS supports the following job types. The interface for
enabling MDT is described in the section below.
• Immediate MDT only
• Logged MDT only
• Immediate MDT and Trace
• RLF reports only
• RCEF reports only
In the eNB, based on the above job type combinations, each cell will support a maximum of 5
management MDT sessions in parallel. Each cell will not allow repeated or duplicate job types in
parallel.
6.7.8 Configuring and Activating MDT for the selected eNBs through EMS
In EMS, MDT configuration is an extension of the Subscriber Trace Configuration. Currently, the
total number of MDT trace recording sessions allowed per job type is 300.
CHAPTER 7: ACRONYMS
7 Acronyms
Term Description
ANR Automatic Neighbor Relations
BH BackHaul
CA Carrier Aggregation
CV Capacity Value
CCV Capacity Class Value
CDC Central Data Center
CFR Call Failure Rate
CIOs Cell Individual Offsets
CoMP Coordinated Multipoint
COTS Commercial Off-The-Shelf
C-RAN Cloud Radio Access Network / Centralized Radio Access Network
DL DownLink
DPD Digital Pre-Distortion
DPDK Data Plane Development Kit
EMS Element Management System
EMSON Element Management Self Organizing Networks
vCU eNB virtual Central Unit
vDU eNB virtual Distributed Unit
FH FrontHaul
GC Group Center
GPON Gigabit Passive Optical Network
GTP-U GPRS Tunnelling Protocol – User plane
GUI Graphical User Interface
HA High Availability
L1 Layer 1
L2 Layer 2
L3 Layer 3
LOS Line-of-Sight
LTE Long Term Evolution
MAC Medium Access Control
MCIM Multi-Cell Interference Management
MDT Minimization of Drive Test
MLB Mobility Load Balancing
MRO Mobility Robustness Optimization
NFV Network Function Virtualization
NIC Network Interface Card
OSS/BSS Operation Support System/ Business Support Systems
PCID Physical Cell Identity
PDCP Packet Data Convergence Protocol
PRACH Physical Random Access Channel
QoS Quality of Service
RH Red Hat
RACH Random Access Channel
RLC Radio Link Control
RLF Radio Link Failure
RRC Radio Resource Control
SON Self Organizing Network
SPM Signal Processing Module
UE User Equipment
UL UpLink
VIM Virtualized Infrastructure Manager
VM Virtual Machine
VNF Virtual Network Function
vRAN Virtual Radio Access Network