
WHITEPAPER

Distributed Edge Clouds Are Complex, But Must They Be Difficult?

Published by

© 2018
Executive Summary
Centralized cloud computing platforms built on huge, monolithic data centers
have served IT and communications networks well for more than a decade. The
seemingly boundless capacity of traditional data centers has supported
massive growth of cloud-based services. But new applications and services
have emerged that reveal the limitations of the stalwart centralized
architecture. In order to meet the needs of existing customers while also
attracting new types of customers, service providers will need to support
applications requiring extremely low latency and extremely high bandwidth to
cloud services. To deliver these new services and optimize existing ones,
operators need an edge cloud architecture that distributes cloud resources
closer to end users at the edge of the network.

The communications industry’s drive for edge computing solutions can be seen
in the expanding activities at industry standards bodies and open source
groups. The European Telecommunications Standards Institute (ETSI), Linux
Foundation, and OpenStack Foundation as well as the Telecom Infra Project
have all launched working groups dedicated to accelerating edge computing
for network operators. Projects include ETSI’s Multi-access Edge Computing
(MEC), the Linux Foundation-hosted Akraino Edge Stack and OpenStack’s new
StarlingX edge computing infrastructure.

The biggest initial challenge for distributed edge cloud architecture is
operational complexity. While distributed edge clouds resolve latency and
bandwidth networking issues, deployments will not be feasible for critical
infrastructure operators if the management of edge clouds is so complex that it
results in soaring operational costs.

With distributed edge cloud deployments comprising potentially thousands of
geographically dispersed remote nodes, service providers need comprehensive
management tools for system-wide orchestration to successfully implement the
distributed cloud architecture and deliver new revenue-generating services.
This paper presents the key requirements for distributed edge cloud solutions,
evaluates progress to date in improving manageability and proposes next steps
for accelerating the implementation of edge cloud architectures.
What Are Distributed Edge Clouds?
There are many terms for describing edge computing in critical infrastructure networks, and each one can mean different things to different people. We define distributed edge clouds simply as providing cloud services — compute, storage and networking — close to the end user device with integral system-wide management capabilities. The last point is especially important because without management you have the potential to increase cost with the introduction of complexity. The objective of distributing cloud services to the network edge is to reduce latency and reduce bandwidth requirements in access and backhaul networks, which will not only improve application performance and network efficiency, but also support an emerging set of new services.

By locating cloud resources closer to where applications are consumed and where application data is generated, service providers eliminate the need to backhaul data to the core network for processing. This greatly reduces latency in applications, such as mobile HD video streaming, and enables new real-time applications that were previously not possible to deliver, such as vehicle-to-infrastructure or autonomous vehicle services.

To achieve similar network improvements with a centralized cloud architecture, critical infrastructure operators would have to significantly increase bandwidth in access and backhaul networks, but this is a costly solution that may not even meet low latency requirements. Alternatively, operators could provide more compute and storage resources on edge devices, but this is a far less dynamic solution and results in more costly, complex and power-hungry devices; and in many cases may be impractical due to the size of the devices. The alternatives to distributed edge clouds cannot efficiently mitigate latency and bandwidth restrictions mainly because they are too costly, inflexible and difficult to manage.



Edge Cloud Development Enables Delivery of New Services
Low-latency, high-bandwidth environments are fertile ground for network operators to develop unique real-time services. By distributing cloud resources to the network edge, operators have tremendous opportunities to grow revenue by offering innovative services. In addition, edge clouds minimize the traffic load on backhaul networks by processing data locally, which reduces transport costs.

High-bandwidth content delivery
Distributed edge clouds will transform content delivery services over mobile and fixed networks, such as mobile HD video streaming or security surveillance applications, enabling service providers to offer a higher quality of experience for consumers and businesses. Distributed cloud environments allow network operators to cache and process content locally so that it does not have to be retrieved from the core network, thereby reducing network latency and improving video service quality. Edge clouds can also host real-time analytics that provide insight into current network conditions, enabling operators to route traffic over paths that will deliver the best content experience.

Immersive AR/VR services
Augmented reality and virtual reality promise to create immersive communications experiences. The benefits will not only improve consumer applications like gaming, but they will also impact industries including retail, healthcare and education. But to be viable, these resource-intensive services require data processing and intelligence close to the end user devices.

Enterprise private networks
Network operators can deploy edge compute resources directly on customer premises or in public venues like a sports stadium to create a new breed of specialized services. In a sports arena, for example, network operators can create new experiences for fans by delivering personalized content to their smartphones, representing a welcome new source of revenue that offsets the cost of the new infrastructure.

5G and Industrial IoT
The requirements for 5G networks aim to reduce latency down to a single millisecond to support tactile Internet applications, which are characterized by real-time interaction between humans and machines. Such services are currently not possible via today’s centralized cloud architectures. But the combination of ultra-low latency and 5G speeds (up to 10 Gbps) will enable remote surgeries, new levels of industrial automation, connected vehicle applications and even autonomous vehicles whether they are drones, cars or trucks. Vehicle-to-everything (V2X) communication applications are under development that will facilitate smart city implementations, reduce traffic congestion and improve road safety. In industrial settings, edge cloud deployments will improve the operation of control systems in manufacturing and energy applications as well as enable better patient monitoring in the healthcare sector.

“By distributing cloud resources to the network edge, operators have tremendous opportunities to grow revenue by offering innovative services.”

Distributed Edge Cloud Topology
The basic topology of distributed edge cloud networks comprises two levels: a central site and many
geographically dispersed edge sites (i.e., edge clouds), which are connected to the central site over
Layer 3 networks. The number of edge clouds in a distributed deployment can range from one to tens
of thousands, or even hundreds of thousands.

[Figure: Distributed edge cloud topology — far edge servers, edge servers and regional data center servers hosting VMs and containers for use cases such as vRAN, Industrial 4.0, transportation and multi-access edge. The distributed edge cloud is characterized as fast, reliable, secured and scalable, with edge use case latency requirements of approximately 20 ms, 50 ms and 100 ms.]

The central site acts as the system controller and hosts the system-wide management functions. These centralized functions enable administrators to remotely synchronize the deployment, configuration and management of all the edge clouds.

“To ensure deployment flexibility, a distributed edge cloud solution must be highly scalable to support any size of deployment. The solution needs to be able to scale seamlessly to tens or hundreds of thousands of distributed edge clouds in geographically dispersed locations.”

The edge clouds can run on a variety of hardware form factors, from a single server to multi-server scenarios. Smaller footprint implementations may be limited in terms of power, compute and storage resources and may run a reduced control plane since they will share management functions from the central site. Communication between the remote edge clouds and the central site is supported by REST APIs over Layer 3 networks.
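To make the REST-based communication model concrete, the following is a minimal sketch in which a script at the central site lists its edge clouds and reports their reachability and synchronization state over HTTPS. The controller host, endpoint path, token handling and response fields are illustrative assumptions, not the API of any specific product.

```python
# Minimal sketch: the central site polling its edge clouds over REST, as the
# topology above implies. The controller URL, endpoint path, token handling
# and response fields are illustrative assumptions, not a product API.
import requests

CENTRAL_CONTROLLER = "https://central-controller.example.net"  # hypothetical host
AUTH_TOKEN = "REPLACE_WITH_TOKEN"  # obtained from the platform's identity service


def list_edge_clouds():
    """Ask the central controller for the edge clouds it manages."""
    resp = requests.get(
        f"{CENTRAL_CONTROLLER}/v1/edge-clouds",   # assumed endpoint path
        headers={"X-Auth-Token": AUTH_TOKEN},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["edge_clouds"]             # assumed response shape


def report_status(edge_clouds):
    """Print reachability and sync state for every edge site."""
    for cloud in edge_clouds:
        print(f"{cloud['name']:<20} "
              f"availability={cloud.get('availability', 'unknown'):<8} "
              f"sync={cloud.get('sync_status', 'unknown')}")


if __name__ == "__main__":
    report_status(list_edge_clouds())
```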



Critical Requirements for Distributed Edge Clouds
With edge clouds scalable from small single server solutions to large multi-server solutions, replicated hundreds or thousands of times and spread out over a wide area, the biggest challenge is manageability. How can service providers cost-efficiently manage thousands of distributed edge clouds over diverse network conditions?

To overcome manageability issues, distributed edge cloud solutions require centralized management capabilities, massive scalability, edge cloud autonomy and zero touch provisioning. These features are the key essentials for cost-efficient management. Together, these capabilities will shorten edge cloud deployment times, streamline operations, ensure availability, minimize human errors, and, ultimately, lower overall operating costs to support the business case for distributed edge cloud deployments.

Centralized management of edge cloud infrastructure and workloads. Large-scale deployments of geographically dispersed edge clouds simply cannot be managed manually. Unlike centralized data centers with teams of technicians, administrators, and engineers, most remote edge clouds will not have anyone on site to configure, provision and manage operations. Of course, the servers themselves do need to be physically installed, cabled and powered up on site. But once the servers are up and running, service providers need the ability to remotely manage the cloud infrastructure as well as the application workloads across the entire distributed system from a central site.

It is essential to centrally manage the configuration and status of the edge cloud infrastructure to save time and minimize operational costs. All the components of edge cloud infrastructure need to be configured for how the cloud will be used and what resources will be made available to users. This includes setting user login parameters, establishing the physical nodes that the cloud software will run on, determining what software will be running and what software images will be available to install for the applications, and configuring the storage clusters.

The virtualized applications, whether implemented as containers or virtual machines, also need to be launched and defined according to the resources they will be allowed to use – that is, setting the number of CPU cores needed and the amount of RAM and disk space required. Other administrative configuration tasks include securing the network traffic by creating security groups and security group rules for ingress and egress packet filtering. In an OpenStack-based system, for example, VM or container image definitions, packet filtering and quotas would be handled by elements of Nova, Neutron and Cinder resources, respectively.

“Large-scale deployments of geographically dispersed edge clouds simply cannot be managed manually. Unlike centralized data centers with teams of technicians, administrators, and engineers, most remote edge clouds will not have anyone on site to configure, provision and manage operations.”

With centralized management tools and APIs, administrators can configure the infrastructure once and synchronize the configuration across the distributed edge clouds. Configuration updates made on the system controller can also be automatically applied to all edge clouds. OpenStack resources can be synchronized and automatically applied during installation. Synchronizing the configuration data prevents administrators from having to configure each edge cloud separately, which can be error prone, with the same tasks (errors) potentially repeated thousands of times depending on the size of deployment.

It is worth noting that there may be some circumstances where service providers may not want to configure all distributed edge clouds in the same way. Centralized management tools need to allow for exceptions in the synchronization of configuration data.

In addition to configuring the infrastructure, the status of the edge cloud infrastructure also needs to be managed centrally so that administrators can easily monitor the health of the entire system as well as individual edge clouds. The system controller at the central site needs to aggregate fault and telemetry data from all the edge clouds, including fault alarms, logs and telemetry statistics.

The user workloads running on the distributed edge clouds also need to be centrally managed. This allows users to launch applications on VMs or containers from different edge cloud sites when needed. It also allows VMs to be migrated from one edge cloud site to another. Being able to centrally manage the edge cloud workloads also assists in fault scenarios across edge sites and disaster recovery efforts.

Software updates can be challenging in distributed cloud environments. To make software updates easier and faster, it is necessary to orchestrate software patching across the entire system to ensure bug fixes and new features are applied correctly on each edge cloud. Once the software update has been applied to the system controller at the central site, the update should be automatically applied across each node of every edge cloud. During the update process, it is also important that VMs are automatically migrated to ensure network uptime.

Single pane of glass provides system-wide view. Centralized management capabilities must be supported by a single pane of glass view. System administrators need a simple way to see everything that’s going on across their entire distributed edge cloud deployment, from infrastructure data synchronization to connectivity and overall health status to software updates, without having to access multiple different interfaces and correlate the information.

Massive scalability is a must. A distributed edge cloud architecture provides unprecedented flexibility for network operators to deploy cloud resources where they are needed most, whether the edge clouds are deployed to optimize existing services or support new applications. To ensure deployment flexibility, a distributed edge cloud solution must be highly scalable to support any size of deployment. The solution needs to be able to scale seamlessly to tens or hundreds of thousands of distributed edge clouds in geographically dispersed locations. The edge clouds themselves need to be scalable from a single node to thousands of nodes.

Edge cloud autonomy. In many cases it’s critical that edge clouds are completely autonomous. If connectivity is lost between the central site and an edge cloud site, the edge cloud still needs to perform its mission critical operations and users still need to be able to access the edge cloud. This is a possible scenario if, for example, an edge cloud is located where mobile or satellite network coverage is patchy. But if the infrastructure and workload data is synchronized across all the edge sites, then users will still be able to access their services and the edge cloud will function independently until connectivity is restored.

Zero touch provisioning. Installation and commissioning at the edge sites need to be as simple as possible. Beyond the physical server installation and power-on at the edge site, the remaining installation and commissioning tasks must be as automated as possible, reducing the need for human interaction. From that point, the administrator back at the central site should be able to bring up the cloud environment on the nodes at the edge sites with just one button click.
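As a rough illustration of the “configure once, synchronize everywhere, allow exceptions” workflow described earlier in this section, the sketch below pushes one desired configuration from the central site to every managed edge cloud except those flagged as exceptions. The endpoints, payload fields and site names are assumptions for illustration and do not represent a particular product’s API.

```python
# Rough illustration of "configure once, synchronize everywhere, allow
# exceptions": the central site pushes one desired configuration to every
# managed edge cloud except those that keep site-specific settings. The
# endpoints, payload and site names are assumptions, not a product API.
import requests

CENTRAL = "https://central-controller.example.net"   # hypothetical host
HEADERS = {"X-Auth-Token": "REPLACE_WITH_TOKEN"}

DESIRED_CONFIG = {
    "ntp_servers": ["10.0.0.1", "10.0.0.2"],
    "dns_servers": ["10.0.0.53"],
    "patch_level": "2018.10-r3",
}

# Edge clouds deliberately excluded from synchronization (the "exceptions").
EXCLUDED_SITES = {"edge-lab-07"}


def managed_edge_clouds():
    resp = requests.get(f"{CENTRAL}/v1/edge-clouds", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()["edge_clouds"]


def synchronize():
    for cloud in managed_edge_clouds():
        if cloud["name"] in EXCLUDED_SITES:
            print(f"skipping {cloud['name']} (site-specific configuration)")
            continue
        # One push per site; an unreachable (autonomous) edge cloud would be
        # retried by the controller once connectivity is restored.
        resp = requests.put(
            f"{CENTRAL}/v1/edge-clouds/{cloud['name']}/config",
            json=DESIRED_CONFIG, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        print(f"pushed configuration to {cloud['name']}")


if __name__ == "__main__":
    synchronize()
```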



State of Play for Distributed Edge Clouds
How close is the industry to meeting these requirements for distributed edge clouds? As noted above, many initiatives at open source and industry standards groups are tackling various aspects of edge computing for network operators. Among these efforts, the OpenStack Foundation’s StarlingX project is notable for its work on distributed edge cloud manageability and contribution to other open source projects to broaden community engagement and widen industry support.

As part of OpenStack’s Edge Computing group, the StarlingX project started in May 2018 with seed code from the Wind River® Titanium Cloud™ critical infrastructure platform. The open source project is based on proven technology from the widely deployed Titanium Cloud, which delivers the reliable uptime, performance, security and operational simplicity that will be necessary for distributed edge cloud solutions. StarlingX code will also be contributed to the Linux Foundation’s Akraino Edge Stack project.

To date, StarlingX has demonstrated many critical capabilities, such as synchronizing OpenStack and infrastructure configuration as well as dynamically managing quotas across all edge clouds from the central system controller. The project has also developed a simple installation sequence for edge clouds, which is approaching the goal of zero touch provisioning. The platform can automatically orchestrate software upgrades across edge clouds and aggregate fault alarms and telemetry data. And the project is working on improving the scalability and autonomy of authentication and authorization processes.

Going forward, Titanium Cloud will continue to deliver productized and commercially supported implementations of the StarlingX project.
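As an illustration of the quota-management capability mentioned above, the following hedged sketch uses the openstacksdk cloud layer to apply the same compute quota for one project on several edge clouds. The cloud and project names are assumptions; a real StarlingX deployment would drive this from its central system controller rather than from a standalone script.

```python
# Hedged sketch of centrally driving the same compute quota across several
# edge clouds, using the openstacksdk cloud layer. The cloud names (entries
# in clouds.yaml) and the project name are assumptions; a real StarlingX
# system would apply this from its central system controller.
import openstack

EDGE_CLOUDS = ["edge-site-01", "edge-site-02", "edge-site-03"]  # assumed names
PROJECT = "video-analytics"                                     # assumed project

TARGET_QUOTA = {"cores": 16, "ram": 32768, "instances": 8}      # per-site quota

for cloud_name in EDGE_CLOUDS:
    conn = openstack.connect(cloud=cloud_name)
    # Apply a consistent resource envelope for the project on every edge site.
    conn.set_compute_quotas(PROJECT, **TARGET_QUOTA)
    quotas = conn.get_compute_quotas(PROJECT)
    print(f"{cloud_name}: cores={quotas.cores} ram={quotas.ram} "
          f"instances={quotas.instances}")
```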

Next Steps for Edge Cloud Manageability
Initiatives like the StarlingX and Akraino Edge Stack projects have made great strides in reducing
operational complexity of distributed edge cloud deployments, but there is more work to be done.
Priorities should include georedundancy for system controller central sites to ensure highly available
deployments; enhanced security for communication between edge clouds; increased installation
automation to achieve truly zero-touch provisioning; and support for the lifecycle management of both
virtual network functions (VNFs) and container network functions (CNFs) among edge clouds. Other
improvements that will make management easier include the distribution and synchronization of images
across edge clouds as well as the ability to synchronize configuration to a subset of edge clouds.



Conclusion
Operational complexity is the biggest initial challenge for distributed edge
cloud deployments. Network operators need confidence that they can easily
manage edge clouds to meet service quality commitments without incurring
excessive operating costs. Distributed edge cloud solutions must be designed
to support centralized management, scalability, edge cloud autonomy and zero
touch provisioning. These basic requirements will provide the operational
simplicity, high performance and reliable uptime for distributed edge cloud
deployments so that network operators can seize the opportunity to deliver
new real-time services that generate new sources of revenue.
Wind River® is the world leader in embedded software solutions and a pioneer in edge infrastructure technologies for the telecommunications and communications industries. As service providers transition to software-defined systems that will transform the network, they need innovative technologies they can trust, and Wind River has been used by the top 20 telecommunications equipment providers for nearly four decades. Wind River’s portfolio of scalable, highly reliable, and deployment-ready software solutions can help service providers deliver virtualized services faster and at lower cost for the networks of the future.

Why roll the dice? Get in touch with us now.

www.windriver.com

Produced by the mobile industry for the mobile industry, Mobile World Live is the leading multimedia resource that keeps mobile professionals on top of the news and issues shaping the market. It offers daily breaking news from around the globe. Exclusive video interviews with business leaders and event reports provide comprehensive insight into the latest developments and key issues. All enhanced by incisive analysis from our team of expert commentators. Our responsive website design ensures the best reading experience on any device so readers can keep up-to-date wherever they are.

We also publish five regular eNewsletters to keep the mobile industry up-to-speed: The Mobile World Live Daily, plus weekly newsletters on Mobile Apps, Asia, Mobile Devices and Mobile Money.

What’s more, Mobile World Live produces webinars, the Show Daily publications for all GSMA events and Mobile World Live TV – the award-winning broadcast service of Mobile World Congress and exclusive home to all GSMA event keynote presentations.

Find out more: www.mobileworldlive.com

Disclaimer: The views and opinions expressed in this whitepaper are those of the authors
and do not necessarily reflect the official policy or position of the GSMA or its subsidiaries. © 2018

