Edge Computing
Bibin Sajeevan
Roll No: 34
S6 BCA
Edge computing
Edge computing is a distributed computing paradigm that brings computation and
data storage closer to the location where they are needed, improving response times
and saving bandwidth. The origins of edge computing lie in content delivery networks
that were created in the late 1990s to serve web and video content from edge servers
deployed close to users. In the early 2000s, these networks evolved to host
applications and application components at the edge servers, resulting in the first
commercial edge computing services, which hosted applications such as dealer
locators, shopping carts, real-time data aggregators, and ad insertion engines.
Modern edge computing significantly extends this approach through virtualization
technology that makes it easier to deploy and run a wider range of applications on
edge servers.
Definition
One definition of edge computing is any computing that delivers low latency nearer
to where requests originate. Karim Arabi, in an IEEE DAC 2014 keynote and
subsequently in an invited talk at MIT's MTL Seminar in 2015, defined edge computing
broadly as all computing outside the cloud happening at the edge of the network, and
more specifically in applications where real-time processing of data is required. In his
definition, cloud computing operates on big data while edge computing operates on
"instant data": real-time data generated by sensors or users.
According to The State of the Edge report, edge computing concentrates on servers
"in close proximity to the last mile network." Alex Reznik, chair of the ETSI MEC ISG
standards committee, loosely defines the term: "anything that's not a traditional data
center could be the 'edge' to somebody."
Edge nodes used for game streaming are known as gamelets, which are usually one
or two hops away from the client. Per Anand and Edwin, in the cloud gaming context,
"the edge node is mostly one or two hops away from the mobile client to meet the
response time constraints for real-time games."
Concept
The increase of IoT devices at the edge of the network is producing a massive amount
of data that must be sent to data centers for processing, pushing network bandwidth
requirements to the limit. Despite the improvements in network technology, data
centers cannot guarantee acceptable transfer rates and response times, which can be
a critical requirement for many applications. Furthermore, devices at the edge
constantly consume data coming from the cloud, forcing companies to build content
delivery networks that decentralize data and service provisioning by leveraging
physical proximity to the end user. In a similar way, the aim of edge computing is to
move computation away from data centers towards the edge of the network, exploiting
smart objects, mobile phones, or network gateways to perform tasks and provide
services on behalf of the cloud. By moving services to the edge, it is possible to
provide content caching, service delivery, storage, and IoT management, resulting in
better response times and transfer rates. At the same time, distributing logic across
different network nodes introduces new issues and challenges.
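The response-time gain described above can be sketched with a simple
back-of-envelope model: total response time is round-trip latency plus transfer time
plus processing time. All numbers below are illustrative assumptions, not
measurements; real deployments vary widely.

```python
def response_time_ms(rtt_ms: float, payload_mb: float,
                     bandwidth_mbps: float, compute_ms: float) -> float:
    """Time to ship a payload to a server and get a result back.

    transfer time = payload (megabits) / bandwidth (megabits per second).
    """
    transfer_ms = payload_mb * 8 / bandwidth_mbps * 1000
    return rtt_ms + transfer_ms + compute_ms

# Hypothetical scenario: a 2 MB sensor frame processed at a distant cloud
# data center versus a nearby edge node (which computes a bit more slowly
# but sits on a faster, closer link).
cloud = response_time_ms(rtt_ms=80, payload_mb=2, bandwidth_mbps=50, compute_ms=10)
edge = response_time_ms(rtt_ms=5, payload_mb=2, bandwidth_mbps=200, compute_ms=25)

print(f"cloud: {cloud:.0f} ms, edge: {edge:.0f} ms")  # cloud: 410 ms, edge: 110 ms
```

Under these assumed numbers the transfer time dominates, which is exactly the
bandwidth pressure the paragraph above describes.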
Scalability
Scalability in a distributed network faces several issues. First, it must account for the
heterogeneity of devices, which have differing performance and energy constraints, as
well as the highly dynamic conditions and lower reliability of their connections
compared to the more robust infrastructure of cloud data centers. Moreover, security
requirements may introduce further latency in the communication between nodes,
which can slow down scaling.
Reliability
Management of failovers is crucial to keeping a service alive. If a single node goes
down and becomes unreachable, users should still be able to access the service
without interruption. Moreover, edge computing systems must provide mechanisms to
recover from a failure and to alert the user about the incident. To this end, each device
should maintain the network topology of the entire distributed system, so that failure
detection and recovery become straightforward. Other factors that influence this
aspect are the connection technology in use, which may provide different levels of
reliability, and the accuracy of the data produced at the edge, which could be
unreliable due to particular environmental conditions.
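As a rough illustration of the failover idea, the sketch below is a hypothetical design
(not any standard edge API): each node records the last heartbeat it saw from its
peers, declares a peer dead after a timeout, and redirects requests to the first healthy
fallback, alerting the caller when no node is reachable.

```python
class EdgeNodeRegistry:
    """Heartbeat-based liveness tracking for a set of peer edge nodes."""

    def __init__(self, timeout_s=5.0):
        self.timeout_s = timeout_s
        self.last_seen = {}  # node name -> timestamp of last heartbeat

    def heartbeat(self, node, now):
        """Record that `node` was heard from at time `now` (seconds)."""
        self.last_seen[node] = now

    def alive(self, node, now):
        """A node is alive if it has sent a heartbeat within the timeout."""
        return node in self.last_seen and now - self.last_seen[node] <= self.timeout_s

    def pick_node(self, preferred, fallbacks, now):
        """Route to the preferred node, or fail over to the first live fallback.

        Returns None when nothing is reachable; the caller should then
        alert the user about the incident, as described above.
        """
        for node in [preferred, *fallbacks]:
            if self.alive(node, now):
                return node
        return None
```

For example, if `edge-a` last sent a heartbeat at t=0 and `edge-b` at t=8, then at
t=10 (with a 5-second timeout) `pick_node("edge-a", ["edge-b"], now=10)` fails over
to `edge-b`.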
Applications
Edge application services reduce the volumes of data that must be moved, the
consequent traffic, and the distance that data must travel. That provides lower latency
and reduces transmission costs. Computation offloading for real-time applications,
such as facial recognition algorithms, showed considerable improvements in response
times as demonstrated in early research. Further research showed that using
resource-rich machines called cloudlets near mobile users, offering services typically
found in the cloud, provided improvements in execution time when some of the tasks
are offloaded to the edge node. On the other hand, offloading every task may result in
a slowdown due to transfer times between the device and the nodes, so the optimal
configuration depends on the workload.
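The offloading trade-off above reduces to a simple rule of thumb: offload only when
the round trip, the transfer, and the remote computation together beat local execution.
A minimal sketch, with every parameter an assumed illustrative value:

```python
def should_offload(local_compute_s, remote_compute_s,
                   payload_bits, bandwidth_bps, rtt_s):
    """Offload when shipping the task to an edge node is faster than
    computing it on the device."""
    remote_total_s = rtt_s + payload_bits / bandwidth_bps + remote_compute_s
    return remote_total_s < local_compute_s

# Face-recognition-style task: heavy compute, small payload -> offloading wins.
print(should_offload(local_compute_s=2.0, remote_compute_s=0.1,
                     payload_bits=8e6, bandwidth_bps=100e6, rtt_s=0.01))   # True

# Trivial task with a large payload -> transfer time dominates, keep it local.
print(should_offload(local_compute_s=0.05, remote_compute_s=0.01,
                     payload_bits=800e6, bandwidth_bps=100e6, rtt_s=0.01))  # False
```

This is the workload dependence the paragraph describes: the same device and the
same edge node can favor either local or offloaded execution depending on the ratio
of computation to data volume.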
Another use of the architecture is cloud gaming, where some aspects of a game could
run in the cloud, while the rendered video is transferred to lightweight clients such as
mobile phones, VR headsets, etc. This type of streaming is also known as pixel
streaming. Conventional cloud games may suffer from high latency and insufficient
bandwidth, since the amount of data transferred is huge due to the high resolutions
required by some services.
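The bandwidth claim can be checked with quick arithmetic. The figures below are
assumptions for a common streaming resolution, not properties of any particular
service:

```python
# An uncompressed 1080p stream at 60 fps, 24 bits per pixel:
width, height, fps, bits_per_pixel = 1920, 1080, 60, 24
raw_bps = width * height * fps * bits_per_pixel
print(f"raw 1080p60: {raw_bps / 1e9:.1f} Gbit/s")  # raw 1080p60: 3.0 Gbit/s

# Video compression (an assumed ~100:1 ratio here) brings this down to
# tens of Mbit/s, which is why cloud gaming is feasible at all, yet still
# demanding for many home connections.
compressed_bps = raw_bps / 100
print(f"compressed:  {compressed_bps / 1e6:.0f} Mbit/s")  # compressed: 30 Mbit/s
```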
Other notable applications include connected, autonomous cars, smart cities, Industry
4.0 (smart industry), and home automation systems.