Distributed Cloud White Paper - Final
© 2018
Executive Summary
Centralized cloud computing platforms built on huge, monolithic data centers
have served IT and communications networks well for more than a decade. The
seemingly boundless capacity of traditional data centers has supported
massive growth of cloud-based services. But new applications and services
have emerged that reveal the limitations of the stalwart centralized
architecture. In order to meet the needs of existing customers while also
attracting new types of customers, service providers will need to support
applications requiring extremely low latency and extremely high bandwidth to
cloud services. To deliver these new services and optimize existing ones,
operators need an edge cloud architecture that distributes cloud resources
closer to end users at the edge of the network.
The communications industry’s drive for edge computing solutions can be seen
in the expanding activities at industry standards bodies and open source
groups. The European Telecommunications Standards Institute (ETSI), Linux
Foundation, and OpenStack Foundation as well as the Telecom Infra Project
have all launched working groups dedicated to accelerating edge computing
for network operators. Projects include ETSI’s Multi-access Edge Computing
(MEC), the Linux Foundation-hosted Akraino Edge Stack and OpenStack’s new
StarlingX edge computing infrastructure.
Distributed Edge Cloud Topology
The basic topology of distributed edge cloud networks comprises two levels: a central site and many
geographically dispersed edge sites (i.e., edge clouds), which are connected to the central site over
Layer 3 networks. The number of edge clouds in a distributed deployment can range from one to tens or even hundreds of thousands.
[Figure: Edge use cases and latency requirements (~20 ms, ~50 ms, ~100 ms). Far edge servers, edge servers and regional data center servers host workloads such as vRAN VMs and containers, Industrial 4.0 and transportation.]
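To make the topology concrete, the following minimal Python sketch models the two-level structure described above: a central site hosting the system controller, and the edge clouds registered to it over Layer 3 management connectivity. All class and field names are illustrative assumptions, not the data model of any specific platform.

```python
# Illustrative sketch only: a minimal model of the two-level distributed
# edge cloud topology. Names are assumptions made for illustration.
from dataclasses import dataclass, field
from typing import List


@dataclass
class EdgeCloud:
    """A geographically dispersed edge site reachable over a Layer 3 network."""
    name: str
    l3_endpoint: str          # management IP/FQDN of the edge site
    node_count: int = 1       # an edge cloud can range from one node to thousands


@dataclass
class CentralSite:
    """The central site hosting the system controller."""
    name: str
    edge_clouds: List[EdgeCloud] = field(default_factory=list)

    def register(self, edge: EdgeCloud) -> None:
        # Deployments can grow from a handful of edge clouds to
        # tens or even hundreds of thousands of sites.
        self.edge_clouds.append(edge)


central = CentralSite("regional-dc-1")
central.register(EdgeCloud("far-edge-042", "10.20.42.1", node_count=2))
```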
With centralized management tools and APIs, administrators can configure the infrastructure once and synchronize the configuration across the distributed edge clouds. Configuration updates made on the system controller can also be automatically applied to all edge clouds. OpenStack resources can be synchronized and automatically applied during installation. Synchronizing the configuration data saves administrators from having to configure each edge cloud separately, a task that is error prone, with the same steps (and the same errors) potentially repeated thousands of times depending on the size of the deployment.

It is worth noting that there may be circumstances where service providers do not want to configure all distributed edge clouds in the same way. Centralized management tools need to allow for exceptions in the synchronization of configuration data.
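As a minimal sketch of this idea, the Python example below shows how a system controller might merge a shared configuration with per-site exceptions and push the result to each edge cloud. The function names, data shapes and the push_config() transport are assumptions for illustration, not the API of any particular edge cloud platform.

```python
# Illustrative sketch only: synchronize shared configuration to every edge
# cloud while allowing per-site exceptions. push_config() is hypothetical.
from typing import Dict, Iterable

SHARED_CONFIG: Dict[str, str] = {
    "ntp_server": "ntp.central.example.net",
    "dns_server": "10.0.0.53",
    "patch_channel": "stable",
}


def push_config(edge_name: str, config: Dict[str, str]) -> None:
    """Placeholder for the transport (e.g., a call to the edge site's API)."""
    print(f"applying {len(config)} settings to {edge_name}")


def synchronize(edge_clouds: Iterable[str],
                overrides: Dict[str, Dict[str, str]]) -> None:
    """Configure once centrally, then apply to all edge clouds,
    merging any site-specific exceptions on top of the shared settings."""
    for edge in edge_clouds:
        site_config = {**SHARED_CONFIG, **overrides.get(edge, {})}
        push_config(edge, site_config)


synchronize(
    edge_clouds=["far-edge-001", "far-edge-002"],
    overrides={"far-edge-002": {"patch_channel": "early-access"}},
)
```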
In addition to configuring the infrastructure, the status of the edge cloud infrastructure also needs to be managed centrally so that administrators can easily monitor the health of the entire system as well as individual edge clouds. The system controller at the central site needs to aggregate fault and telemetry data from all the edge clouds, including fault alarms, logs and telemetry statistics.
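A hedged sketch of that aggregation step is shown below. Here fetch_alarms() stands in for whatever fault interface each edge cloud actually exposes, and counting alarms by severity is only one possible way to roll the data up into a system-wide view.

```python
# Illustrative sketch only: aggregate alarms from all edge clouds at the
# system controller. fetch_alarms() is a stand-in for the real fault interface.
from collections import Counter
from typing import Dict, List


def fetch_alarms(edge_name: str) -> List[Dict[str, str]]:
    """Stand-in for pulling fault alarms from one edge cloud."""
    return [{"severity": "major", "text": "link degraded"}] if edge_name.endswith("2") else []


def system_health(edge_clouds: List[str]) -> Dict[str, Counter]:
    """Roll up per-site alarm counts into a single system-wide summary."""
    summary: Dict[str, Counter] = {}
    for edge in edge_clouds:
        alarms = fetch_alarms(edge)
        summary[edge] = Counter(a["severity"] for a in alarms)
    return summary


print(system_health(["far-edge-001", "far-edge-002"]))
```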
The user workloads running on the distributed edge clouds also need to be centrally managed. This allows users to launch applications on VMs or containers from different edge cloud sites when needed. It also allows VMs to be migrated from one edge cloud site to another. Being able to centrally manage the edge cloud workloads also assists in fault scenarios across edge sites and disaster recovery efforts.
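As a purely illustrative example, the sketch below selects an edge cloud for a workload and launches it there from the central controller. The launch_vm() helper and the latency-based placement policy are assumptions; in an OpenStack-based edge cloud this would go through each site's own compute API.

```python
# Illustrative sketch only: centrally place a workload on a chosen edge cloud.
# launch_vm() is hypothetical, not a real compute API call.
from typing import Dict


def launch_vm(edge_name: str, image: str, flavor: str) -> str:
    """Stand-in for the per-site compute API call."""
    return f"{edge_name}/vm-{image}"


def place_workload(latencies_ms: Dict[str, float], image: str, flavor: str) -> str:
    """Pick the edge cloud closest to the user (lowest measured latency)
    and launch the workload there from the central controller."""
    target = min(latencies_ms, key=latencies_ms.get)
    return launch_vm(target, image, flavor)


vm_ref = place_workload({"far-edge-001": 18.0, "far-edge-002": 42.0},
                        image="vran-du", flavor="large")
print(vm_ref)
```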
Software updates can be challenging in distributed cloud environments. To make software updates easier and faster, it is necessary to orchestrate software patching across the entire system to ensure bug fixes and new features are applied correctly on each edge cloud. Once the software update has been applied to the system controller at the central site, the update should be automatically applied across each node of every edge cloud. During the update process, it is also important that VMs are automatically migrated to ensure network uptime.
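The following sketch outlines that rollout pattern under stated assumptions: every helper is hypothetical, and a real orchestrator would add health checks, retries and batching. The core loop of migrating VMs off a node before patching it is what keeps services running during the update.

```python
# Illustrative sketch only: roll a software update out from the system
# controller to every node of every edge cloud, live-migrating VMs first.
from typing import Dict, List


def live_migrate_vms(edge: str, node: str) -> None:
    print(f"{edge}/{node}: migrating VMs to other nodes")


def apply_patch(edge: str, node: str, version: str) -> None:
    print(f"{edge}/{node}: patched to {version}")


def orchestrate_update(edge_clouds: Dict[str, List[str]], version: str) -> None:
    """Once the update is staged on the central system controller,
    apply it node by node to each edge cloud."""
    for edge, nodes in edge_clouds.items():
        for node in nodes:
            live_migrate_vms(edge, node)   # keep workloads running
            apply_patch(edge, node, version)


orchestrate_update({"far-edge-001": ["node-0"],
                    "far-edge-002": ["node-0", "node-1"]},
                   version="18.10.1")
```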
Single pane of glass provides system-wide view. Centralized management capabilities must be supported by a single pane of glass view. System administrators need a simple way to see everything that's going on across their entire distributed edge cloud deployment, from infrastructure data synchronization to connectivity and overall health status to software updates, without having to access multiple different interfaces and correlate the information.

Massive scalability is a must. A distributed edge cloud architecture provides unprecedented flexibility for network operators to deploy cloud resources where they are needed most, whether the edge clouds are deployed to optimize existing services or support new applications. To ensure deployment flexibility, a distributed edge cloud solution must be highly scalable to support any size of deployment. The solution needs to be able to scale seamlessly to tens or hundreds of thousands of distributed edge clouds in geographically dispersed locations. The edge clouds themselves need to be scalable from a single node to thousands of nodes.

Edge cloud autonomy. In many cases it's critical that edge clouds are completely autonomous. If connectivity is lost between the central site and an edge cloud site, the edge cloud still needs to perform its mission critical operations and users still need to be able to access the edge cloud. This is a possible scenario if, for example, an edge cloud is located where mobile or satellite network coverage is patchy. But if the infrastructure and workload data is synchronized across all the edge sites, then users will still be able to access their services and the edge cloud will function independently until connectivity is restored.

Zero touch provisioning. Installation and commissioning at the edge sites need to be as simple as possible. Beyond the physical server installation and power-on at the edge site, the remaining installation and commissioning tasks must be as automated as possible, reducing the need for human interaction. From that point, the administrator back at the central site should be able to bring up the cloud environment on the nodes at the edge sites with just one button click.
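A minimal sketch of that one-click bring-up flow, driven entirely from the central site, appears below. Every step name and helper is an assumption made for illustration; the actual commissioning steps depend on the edge cloud platform in use.

```python
# Illustrative sketch only: remaining commissioning steps after edge servers
# are racked and powered on, driven from the central site with no on-site work.
from typing import List


def provision_edge_site(edge_name: str, steps: List[str]) -> None:
    """Run the remaining installation/commissioning steps for one edge site."""
    for step in steps:
        print(f"{edge_name}: {step} ... done")


def one_click_bringup(edge_sites: List[str]) -> None:
    steps = ["discover nodes", "install platform software",
             "apply synchronized configuration", "register with system controller"]
    for edge in edge_sites:
        provision_edge_site(edge, steps)


one_click_bringup(["far-edge-003"])
```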
Next Steps for Edge Cloud Manageability
Initiatives like the StarlingX and Akraino Edge Stack projects have made great strides in reducing
operational complexity of distributed edge cloud deployments, but there is more work to be done.
Priorities should include georedundancy for system controller central sites to ensure highly available
deployments; enhanced security for communication between edge clouds; increased installation
automation to achieve truly zero-touch provisioning; and support for the lifecycle management of both
virtual network functions (VNFs) and container network functions (CNFs) among edge clouds. Other
improvements that will make management easier include the distribution and synchronization of images
across edge clouds as well as the ability to synchronize configuration to a subset of edge clouds.
Disclaimer: The views and opinions expressed in this whitepaper are those of the authors
and do not necessarily reflect the official policy or position of the GSMA or its subsidiaries. © 2018