
Computer Communications 151 (2020) 556–580

Contents lists available at ScienceDirect

Computer Communications
journal homepage: www.elsevier.com/locate/comcom

Review

Energy aware edge computing: A survey✩


Congfeng Jiang a,b , Tiantian Fan a,b , Honghao Gao c ,∗, Weisong Shi d , Liangkai Liu d ,
Christophe Cérin e , Jian Wan f
a School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou 310018, China
b Key Laboratory of Complex Systems Modeling and Simulation, Ministry of Education, Hangzhou Dianzi University, Hangzhou 310018, China
c Computing Center, Shanghai University, Shanghai 200444, China
d Department of Computer Science, Wayne State University, Detroit, MI 48202, USA
e Université Paris 13, Sorbonne Paris Cité, LIPN UMR CNRS 7030, 99 avenue Jean-Baptiste Clément, F-93430 Villetaneuse, France
f School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China

ARTICLE INFO

Keywords: Edge computing; Energy efficiency; Computing offloading; Benchmarking; Computation partitioning

ABSTRACT

Edge computing is an emerging paradigm for the increasing computing and networking demands from end devices to smart things. Edge computing allows computation to be offloaded from the cloud data centers to the network edge and edge nodes for lower latency, security, and privacy preservation. Although energy efficiency in cloud data centers has been broadly investigated, energy efficiency in edge computing is largely left uninvestigated due to the complicated interactions between edge devices, edge servers, and cloud data centers. In order to achieve energy efficiency in edge computing, a systematic review of the energy efficiency of edge devices, edge servers, and cloud data centers is required. In this paper, we survey the state-of-the-art research work on energy-aware edge computing, and identify related research challenges and directions, including architecture, operating system, middleware, applications and services, and computation offloading.

Contents

1. Introduction .................................................................................................................................................................................................... 557


2. Energy-aware edge hardware design.................................................................................................................................................................. 557
2.1. Heterogeneous chips integration ............................................................................................................................................................ 558
2.2. Alternative energy storage devices ......................................................................................................................................................... 559
2.3. DVFS support....................................................................................................................................................................................... 560
2.4. Energy resiliency .................................................................................................................................................................................. 561
2.5. Power capping ..................................................................................................................................................................................... 562
3. Energy-aware edge computing architecture ........................................................................................................................................................ 562
3.1. Memory cache systems ......................................................................................................................................................................... 562
3.2. Networking for energy efficient routing and naming ............................................................................................................................... 564
3.3. Compiler support energy profiling ......................................................................................................................................................... 565
3.4. Reprogrammability & reconfiguration..................................................................................................................................................... 565
3.5. Benchmarking & measurements of energy efficiencies.............................................................................................................................. 565
3.6. Software-defined storage infrastructures for energy awareness.................................................................................................................. 566
4. Energy-aware edge OS ..................................................................................................................................................................................... 566
4.1. The edge OS examples .......................................................................................................................................................................... 567
4.2. Energy aware resource management and scheduling................................................................................................................................ 568
4.3. Data encryption and authentication ....................................................................................................................................................... 568
4.4. Infrastructure management.................................................................................................................................................................... 568
4.5. Containerization & virtualization ........................................................................................................................................................... 569
4.6. File system .......................................................................................................................................................................................... 569
5. Energy-aware edge middleware......................................................................................................................................................................... 569

✩ This work is supported by Natural Science Foundation of China (No. 61972118, No. 61972358, and No. 61572163), and Key Research and Development
Program of Zhejiang Province, China (No. 2018C01098 and No. 2019C01059).
∗ Corresponding author.
E-mail address: [email protected] (H. Gao).

https://doi.org/10.1016/j.comcom.2020.01.004
Received 20 July 2019; Received in revised form 8 December 2019; Accepted 2 January 2020
Available online 10 January 2020
0140-3664/© 2020 Elsevier B.V. All rights reserved.
5.1. Message interface supporting energy awareness ...................................................................................................................................... 570


5.2. Cloud adaptivity for more energy saving ................................................................................................................................................ 570
6. Energy aware edge services & applications ....................................................................................................................................................... 570
6.1. Application specific energy aware data analytics ..................................................................................................................................... 571
6.2. Service placement ................................................................................................................................................................................ 571
6.3. Energy efficient edge computing based on machine learning .................................................................................................................... 571
7. Energy aware computing offloading .................................................................................................................................................................. 574
7.1. Computation partitioning before offloading .............................................................................................................................. 574
7.2. Gaming and cooperation between edge and the cloud ............................................................................................................................. 575
8. Conclusions and future work ............................................................................................................................................................................ 575
Declaration of competing interest ...................................................................................................................................................................... 576
References....................................................................................................................................................................................................... 576

1. Introduction

The advances in sensors and wireless communication technologies promote the wide deployment of mobile devices and smart things in industrial and personal scenarios. With higher computing capability and higher storage capacity, these mobile devices and smart things can provide various kinds of services [1], such as data aggregation, real-time local data analytics, content caching, and transmission relay. The devices deployed at the network edge are usually referred to as edge devices. Edge devices with more functionalities and more powerful computing capabilities are referred to as edge servers, since they are closer to the network edge and farther away from the cloud data centers. Edge computing can fully exploit the computing capability of edge devices and edge servers [2,3]. Applications can be executed at the network edge, closer to data sources. Processing the data at the network edge provides shorter response times and less pressure on network bandwidth. As such, edge computing significantly improves the user experience for time-sensitive applications.

In particular, mobile edge computing (MEC) is becoming a promising paradigm to provide more responsive computing services at the network edge [4,5]. However, edge devices are usually resource-constrained, with limited computing capability and power supply. In some cases edge devices are prohibited from executing applications that require a high-power supply and a large amount of data processing [6]. To this end, migrating computing tasks to nearby edge servers or cloud data centers can improve application performance for large-volume data analytics or resource-hungry, computing-intensive processing such as deep learning model training and inference [7]. In an edge computing environment, computing tasks can be offloaded from the cloud data center to edge servers or edge devices for low latency and data privacy preservation.

In an edge computing environment, both devices and servers are usually heterogeneous in terms of hardware capabilities, architectural and programming interoperability, operating systems, and service stacks. Many edge devices are resource-constrained in computing capability, storage capacity, and network connectivity. In many edge computing scenarios, increasing the energy consumption could have a negative impact on power-constrained IoT devices or an edge cloud side with limited power sources. Since billions of edge devices are deployed in edge computing environments, their total energy consumption is immense and as important as that of cloud data centers. For example, for battery-powered devices or power-constrained edge nodes, energy-aware edge computing can extend their lifetime, provide quality-of-service guarantees, or increase system throughput under a specific power budget. Different from energy-aware computing in server systems and cloud data centers, energy awareness in edge computing involves all operations conducted along the data's whole life cycle, including data generation, transmission, aggregation, storage, and processing. Therefore, energy-aware computing is needed in all aspects of edge computing, including architecture, operating system, middleware, service provisioning, and computing offloading.

In edge computing, computation offloading is frequently invoked for latency minimization and quality-of-service guarantees. Specifically, in order to trade off among system overheads, energy consumption, and system performance, tasks may be offloaded to edge devices from the cloud data centers. However, computing offloading may lead to jitter in quality of service, along with energy shifting and redistribution among different edge nodes. Nowadays, energy efficiency has become one of the most important concerns for both cloud servers and mobile devices. Though energy efficiency in cloud data centers has been thoroughly investigated, energy efficiency in edge computing is largely left uninvestigated due to the complicated interactions between edge devices, edge servers, and cloud data centers.

In this paper, we conduct a thorough survey on energy-aware edge computing. From the hardware layer to the application layer, we provide a bottom-up systematic review of the existing work on energy efficiency in edge computing. A systematic view of the survey is presented in Fig. 1.

The remainder of this paper is organized as follows. Section 2 reviews energy-aware hardware design in edge computing. We review the work on energy-aware edge architectures in Section 3 and energy-aware edge operating systems in Section 4. We discuss energy-aware edge middleware in Section 5 and energy-aware applications and services provisioning in Section 6. Energy-aware computing offloading is surveyed in Section 7, and we conclude the paper in Section 8.

2. Energy-aware edge hardware design

Edge devices are widely deployed in various scenarios, such as smart cities, vehicles, industrial workplaces, smart homes, and information and communication infrastructures. However, these edge devices are resource-constrained, with limited power supply, or they are equipped with less powerful computing and storage resources for computing-intensive tasks. Hence, energy efficiency should be a primary design goal for edge devices such as processors, sensors, edge servers, switches, and routers. Energy awareness in the hardware design process can save substantial energy when edge devices are deployed in real-world scenarios. Currently there exist some studies on hardware design dedicated to energy efficiency optimization in edge computing. We present a bottom-up review of the related work on energy-aware hardware design for edge computing, as described in Fig. 2 and Table 1.

Firstly, we review related work on heterogeneous processors and accelerators, which support scalable configurations integrating various sensors, actuators, and power and data transceiver circuits. Secondly, since edge devices are usually power-constrained, it is necessary to deploy lightweight, long-lifespan storage devices in edge computing environments; we review related work on energy storage devices. Thirdly, we give a short review of energy resiliency systems. Fourthly, we review the commonly used approaches, i.e., DVFS and power capping, in edge computing and how they can achieve higher energy efficiency.
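The offload-or-not energy tradeoff raised in the introduction, namely the device energy spent computing a task locally versus the energy spent transmitting its input data and idling while a remote node computes, can be sketched with a toy model. All parameter values below are illustrative assumptions, not measurements from any surveyed system:

```python
# Toy model of the device-side energy tradeoff behind computation offloading.
# Every numeric parameter here is an illustrative assumption.

def local_energy_j(cycles: float, power_w: float, freq_hz: float) -> float:
    """Energy to run a task on the device: execution time * active CPU power."""
    return (cycles / freq_hz) * power_w

def offload_energy_j(bits: float, tx_power_w: float, rate_bps: float,
                     idle_power_w: float, remote_time_s: float) -> float:
    """Device-side energy when offloading: radio transmission of the input
    data plus idling while the edge server finishes the computation."""
    return (bits / rate_bps) * tx_power_w + remote_time_s * idle_power_w

# A hypothetical 2-gigacycle task with 1 MB (8e6 bits) of input data.
e_local = local_energy_j(cycles=2e9, power_w=2.0, freq_hz=1.5e9)
e_offload = offload_energy_j(bits=8e6, tx_power_w=1.0, rate_bps=20e6,
                             idle_power_w=0.1, remote_time_s=0.5)

decision = "offload" if e_offload < e_local else "run locally"
print(f"local: {e_local:.2f} J, offload: {e_offload:.2f} J -> {decision}")
```

Under these made-up parameters offloading wins, but a slower radio link or a larger input payload flips the decision, which is exactly why offloading policies must be energy-aware rather than fixed.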


Table 1
State-of-the-art on energy aware hardware design for edge computing.

Category | Contributions and comparison | Description
Heterogeneous architecture | CPU | Powerful arithmetic logic unit (ALU); complex internal structure; low latency; low energy efficiency.
Heterogeneous architecture | GPU [8–10] | High throughput; parallelization; high peak performance; higher power consumption.
Heterogeneous architecture | FPGA [11] | Low power; low expense; high performance; re-configurability; high energy efficiency.
Heterogeneous architecture | Heterogeneous chips integration [12–25] | High energy efficiency; low latency; flexibility.
Energy storage devices | Lithium-ion battery | Low expense; low capacity; electrode expansion.
Energy storage devices | Si-nanolayer-embedded graphite/carbon hybrids [26] | High Coulombic efficiency; enhanced capacity; excellent capacity retention.
Energy storage devices | Zn battery [27] | Safe; poor rechargeability.
Energy storage devices | Zn–Ni battery [27] | High energy storage intensity.
Energy storage devices | Renewable external energy [28–36] | Least expensive; energy saving; intermittent power.
DVFS support | DVFS of CPU [37–42] | Limited; efficient in saving energy; inefficient in heterogeneous architectures.
DVFS support | DVFS of other components [43–46] | Higher energy efficiency; flexible.
Energy resiliency | Fault detection [47–49] | Necessary for fault tolerance.
Energy resiliency | Fault tolerance [50–56] | Robustness; complex but necessary.
Energy resiliency | Fault recovery [57] | Suitable for emergent situations.
Power capping | Processor power capping [58–61] | High accuracy; minimal application runtime; high energy efficiency.
Power capping | Power capping of other components [62–64] | Less memory bandwidth; high energy efficiency; scalability.
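The DVFS rows of Table 1 rest on the fact that dynamic CPU power scales roughly as P = C · V² · f, so lowering the clock frequency together with the supply voltage can reduce energy per task even though the task runs longer. A minimal Python sketch of that relationship follows; the effective capacitance and the two voltage/frequency operating points are made-up illustrative values, not figures from the surveyed hardware:

```python
# Why DVFS saves energy: dynamic power P = C * V^2 * f, so energy per task
# is P * (cycles / f) = C * V^2 * cycles, independent of f but quadratic
# in V. Running slower at a lower voltage therefore costs less energy.
# C_EFF and the operating points below are illustrative assumptions.

C_EFF = 1e-9  # effective switched capacitance (farads), made-up value

def task_energy_j(cycles: float, volts: float, freq_hz: float) -> float:
    power_w = C_EFF * volts**2 * freq_hz   # dynamic power at this V/f point
    runtime_s = cycles / freq_hz           # lower clock -> longer runtime
    return power_w * runtime_s             # simplifies to C * V^2 * cycles

CYCLES = 1e9
high = task_energy_j(CYCLES, volts=1.2, freq_hz=2.0e9)  # fast, high voltage
low = task_energy_j(CYCLES, volts=0.9, freq_hz=1.0e9)   # slow, low voltage

print(f"high V/f point: {high:.3f} J, low V/f point: {low:.3f} J")
```

The sketch ignores static (leakage) power and deadline constraints, which is why, as Table 1 notes, DVFS of the CPU alone is "limited" and can be inefficient on heterogeneous architectures.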

2.1. Heterogeneous chips integration

In traditional cloud computing data centers and desktop server systems, processors are manufactured for high-performance computing and storage. They are designed for large-scale data processing and highly efficient computing. However, these traditional processors of x86 and ARM architecture require hundreds or even thousands of instructions to complete one unit of processing, which cannot meet the requirements of massive big data processing in ubiquitous edge computing with billions of edge devices. Moreover, due to the power consumption limits of edge devices, instruction execution cannot be accelerated directly by increasing the CPU frequency. Therefore, specially designed hardware for data processing in edge computing is needed, such as graphics processing units (GPUs) [8–10] and field programmable gate arrays (FPGAs) [11].

Fig. 1. Complete view of the energy awareness in edge computing.

Fig. 2. Hardware design aspects of energy aware edge computing.

For example, the computing power and memory bandwidth of a typical GPU may exceed those of today's mainstream CPUs. The CUDA programming framework simplifies programming and development on GPUs and allows code running on a traditional CPU to be migrated directly to the GPU for execution. Moreover, GPUs support highly parallel programmability, as in high energy physics, 3D texture rendering, Bitcoin mining, and hashing encryption; this makes them suitable for big data analysis and edge video analytics on edge devices, because a general computing platform requires excessive supporting hardware peripherals and consumes more energy. Therefore, a specially designed edge computing platform harnessing a powerful GPU card can achieve both reasonable computing power and satisfactory power consumption.

An FPGA is an integrated circuit that contains a series of programmable logic blocks, configured with a hardware description language (HDL), and several memory modules. These logic blocks can be joined together to provide dynamic reconfiguration, which in turn can be used as logic gates to provide complex combinational functions. The FPGA is more flexible than the CPU and GPU, and it can support a variable-depth pipeline structure that provides a large amount of parallel computing resources.

CPU, GPU, and FPGA have different features and advantages in different scenarios. One difference is that FPGAs usually require interaction with low-level hardware design, while programming GPUs does not, especially with a programming framework like CUDA. For example, GPUs have a fixed computational programming model and can accelerate certain classes of compute-intensive applications, while the FPGA's logic can be customized exactly for an application's needs. Since the FPGA has customizable IOs, it can interact with any chip of compatible signal levels, speed, and number of IOs for a broader range of accelerations. In an edge computing environment with heavy network data transmission and operations, the FPGA is suitable for workloads and applications that require very low and predictable latency, such as TCP/IP checksum offloading, data encryption/decryption, and audio codecs. Moreover, the energy efficiency of the FPGA is much better than that of the CPU and GPU, and therefore the FPGA is more suitable for edge computing scenarios where the power budget is limited.

While GPUs have been dominating the market for quite a long time and have been recognized as the most efficient platform for data-intensive workloads, the FPGA also emerges as a promising computing platform, offering high performance in machine learning (ML), artificial intelligence (AI), and deep neural network (DNN) applications while showing improved power consumption. For higher energy efficiency with higher computing capability, typical modern servers are


equipped with heterogeneous architectures with CPUs, GPUs, and other ASIC accelerators [12]. We list the power and performance of some typical CPUs and GPUs in Tables 2 and 3. For example, AMD released the Zen series server processor with a built-in Ryzen APU integrated with the Radeon Vega graphics card [13]. Combining the CPU and the GPU, such processors can significantly improve performance, guarantee visual fidelity, and reduce memory latency.

Similarly, another type of heterogeneous architecture is the processor that integrates the CPU with the FPGA on the same chip to increase the computing capabilities in data centers [14], such as Xeon-FPGA [15], Catapult [16], and POWER8 with FPGA [17]. For example, the Xeon-FPGA is a Xeon E5-26xx v2 processor integrated with an Arria 10 GX FPGA, 8 GB DDR4 memory banks with error correction code, and 128 MB flash. To integrate the FPGA, Intel Xeon provides developers with an FPGA interface manager, drivers, and application programming interfaces (APIs). With the FPGA, Intel Xeon CPU code can be re-used on multiple Intel platforms, and the card can be deployed in a variety of servers.

The Catapult fabric is embedded in each half of a 48-server rack. Each server is attached to a local DRAM and an FPGA in order to maintain the server's homogeneity and avoid network bottlenecks. The FPGAs are directly connected to each other in a 6 × 8 2D ring. These connected FPGAs can also be dynamically reconfigured to redistribute the FPGA service group for the desired functionality. Local communication between the FPGA and the CPU is implemented by PCI Express (PCIe). High-level security protocols and fault handling mechanisms are applied to guarantee runtime reliability.

The FAbRIC POWER8+CAPI heterogeneous computing platform provides shared memory for different processors, with several x86 servers (acting as a gateway node, file server, and build machine for the running FPGA node) and multiple POWER servers [17]. The POWER8 nodes can be regarded as accelerators with a Nallatech 385 A7 Stratix V FPGA adaptor, an Alpha-Data 7V3 Virtex-7 Xilinx-based FPGA adaptor, and an NVIDIA Tesla K40m GPGPU card.

Heterogeneous computing platforms such as ARM-based processors with FPGAs also have both software and hardware programmability, and hardware acceleration can be achieved by integrating these two components. For example, the Zynq-7000 SoC [18] integrates the CPU, Application Specific Standard Parts (ASSP), digital signal processors (DSP), and mixed-signal functionality on the same device, which results in a de facto energy-efficient programmable platform.

Inta et al. [19] propose a generic FPGA–GPU–CPU heterogeneous architecture for cross-correlation video matching, which captures 1024 × 768 pixels/frame at 158 fps (frames per second). However, it cannot reduce the end-to-end delay because the throughput bottleneck of PCIe is not taken into consideration. Hence, Bauer et al. [20] present a novel FPGA–GPU–CPU heterogeneous architecture for real-time pedestrian detection, which is only suitable for intra-frame computing and not for cross-frame image processing. To overcome the shortcomings of the above-mentioned two architectures, Meng et al. [21] propose the first heterogeneous architecture for real-time cardiac optical mapping, with an end-to-end delay of only 1.86 s. Their proposed platform captures 100 × 100 pixels/frame at 1024 fps, which is 273 times that of OpenMP. In their architecture, each processor performs its own duties: FPGAs are used to implement complex algorithms, GPUs are used to parallelize operations, and CPUs are schedulers that coordinate branch tasks for throughput maximization.

Therefore, both traditional high-performance computing and the emerging edge computing paradigm require the architecture to be a co-existence of heterogeneous computing hardware and general-purpose processors [22]. Heterogeneous hardware can greatly reduce the execution time of one or more types of workloads and thereby improve energy efficiency, at the cost of sacrificing some general-purpose computing capability [23–25].

Over the past few years, servers in cloud data centers and IoT platforms have adopted heterogeneous designs. To achieve energy-efficient computing, both heterogeneous processors and other hardware accelerators are integrated into servers of cloud data centers and IoT platforms. For instance, studies have suggested that modern silicon integrated circuits (ICs) can be used to integrate heterogeneous sensors for real-time sensing and diverse signal harvesting. Lindsay et al. [65] propose a scalable IC, which supports a scalable configuration to integrate various sensors, actuators, and power and data transceiver circuits. Roy et al. [66] introduce a highly heterogeneous intelligent signal monitoring system, which is suitable for distributed, interactive, and ubiquitous computing. The system is mainly designed for efficient power management; it integrates NEMS-based crack sensors, energy scavenging devices, a rechargeable sheet battery, low-power Si wireless transceivers, a half-digital ultra-low-power sensor readout circuit, and a highly efficient DC-to-DC converter.

Edge computing systems have the characteristics of fragmentation and heterogeneity. At the hardware level, there are various computing units, such as CPUs, GPUs, FPGAs, and ASICs, varying in computation capability. We summarize the energy and power efficiency of different chips in this subsection to provide a whole picture of these chips for their application in an edge computing scenario. The energy and power efficiency of the underlying hardware must be explored and investigated before edge computing infrastructure deployment. Energy awareness in hardware is essential to build an edge computing platform with the computation capability to meet application demands, while keeping energy consumption and cost within acceptable ranges.

2.2. Alternative energy storage devices

Edge devices are usually power-constrained, with limited computing capability. Therefore, it is important to deploy alternative power supply devices with lighter weight, larger power capacity, and longer life spans.

With a 7%–10% increase in energy density each year, lithium-ion batteries (LIBs) are quite popular in today's mobile device markets. However, studies have shown that the capacity of the LIB is approaching its theoretical limits. Hence, the growing demand for power supply in edge computing requires new technologies and approaches for LIBs. One approach is to apply Si-containing graphite composites to the LIB power supply, which have excellent capacity retention and can reduce electrode expansion. Si-nanolayer-embedded graphite/carbon hybrids (SGC) are fabricated through a chemical vapor deposition (CVD) process with a scalable furnace [26]. Experiments demonstrate that SGC can achieve a high Coulombic efficiency (CE) of 92% and an enhanced reversible capacity of 517 mAh/g at the first cycle. Compared with the traditional graphite anode process, SGC has better capacity retention and can reduce electrode expansion even in the case of high electrode density.

Zinc-based alkaline batteries are an emerging type of power supply and can be considered as an alternative to LIBs, lead–acid, and nickel–hydrogen batteries. However, Zn is severely poor in rechargeability due to dendrite formation. Therefore, Parker et al. [27] modify the Zn electrode into a monolithic, porous, non-periodic structure in order to improve its rechargeability. Such batteries are durable and dischargeable, even to deep levels of discharge. Parker et al. [27] also propose novel Ni–3D zinc-based batteries. These series of batteries can not only expand the range of energy storage and applications, but also prevent certain accidents caused by the thermal runaway of lithium batteries.

Green and renewable energy powered edge devices and edge servers are emerging nowadays, where solar power is most attractive because it is less expensive, more scalable, and easier to deploy [28–33]. Since some solar cells are configured with interfaces for charging batteries and running devices, solar cells can charge different power batteries and devices constantly, which is more usable for the mobile edge computing environment. For example, the α-Si solar cell adopts the α-Si material,

Table 2
Comparison of latest server CPUs and mobile CPUs.

Type | CPU model | Cores | L3 cache (MB) | Base frequency (GHz) | TDP (W)
Mobile | Intel i7-10510Y | 4 | 8 | 1.2 | 7
Mobile | AMD Ryzen 7 3780U | 4 | 4 | 2.3 | 15
Server | Intel Xeon 9282 | 56 | 77 | 2.6 | 400
Server | AMD EPYC 7742 | 64 | 256 | 2.25 | 225

Table 3
Comparison of the latest server GPUs and mobile GPUs.

Type    GPU model                         Cores                                     Single precision  Double precision  TDP (W)
Mobile  AMD Radeon RX5500M                22 Compute Units, 1408 Stream Processors  4.6 TFLOPS        –                 85
Mobile  NVIDIA Quadro RTX 5000 Mobile     384 Tensor Cores, 3072 CUDA cores         9.492 TFLOPS      0.296 TFLOPS      110
Server  NVIDIA Tesla V100s                640 Tensor Cores, 5120 CUDA cores         16.4 TFLOPS       8.2 TFLOPS        250
Server  AMD Radeon Instinct MI60 (32 GB)  64 Compute Units, 4096 Stream Processors  14.7 TFLOPS       7.4 TFLOPS        300
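The TDP figures in Table 2 can be turned into a back-of-the-envelope energy comparison. In the sketch below, the runtimes and the 8x server speedup are assumed values chosen purely for illustration (not measurements), and running at full TDP is a crude upper bound on power:

```python
# Back-of-the-envelope energy comparison using TDP figures from Table 2.
# Energy (J) = power (W) x runtime (s). The runtimes and the 8x speedup
# are assumptions for illustration, not measured values.

def energy_joules(tdp_watts, runtime_s):
    return tdp_watts * runtime_s

mobile_runtime = 80.0                # assumed runtime on the 7 W Intel i7-10510Y
server_runtime = mobile_runtime / 8  # assumed 8x faster on the 400 W Xeon 9282

print(energy_joules(7, mobile_runtime))    # 560.0 J on the mobile part
print(energy_joules(400, server_runtime))  # 4000.0 J on the server part
```

Even granting the server part a large speedup, the mobile-class part can finish the same job on a fraction of the energy, which is one motivation for heterogeneous, mobile-class hardware at the edge.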

which is based on the characteristics of a direct band gap and allows sunlight to be absorbed in a thin layer of a few micrometers.

Though α-Si has been successful in commercial markets, new technologies, such as CdTe and CIGS, were also developed. Based on this p–n structure and the optimization of the original CIGS, scholars have made significant improvements in the energy efficiency of solar cells. Combining the advantages of the above-mentioned two solar cells, CdTe adopts a direct bandgap material and the same p–n heterostructure as CIGS. Bonnet et al. [31] develop CdTe–CdS p–n heterojunction solar cells, which can be fabricated in various ways, such as using high temperature vapor deposition of CdTe. The experiment results show that the energy efficiency of the battery is 6%.

However, solar panels expose the sensors, and thereby lead to potential sensor failures and data privacy leakage. Fiber photovoltaics (PV) can be a potential alternative to solar panels, since PV can charge low power devices and be woven into textiles to create a more powerful battery. Photovoltaics, or the solar power system, generates electricity without saving it to a battery. In order to maximize the energy efficiency of PV, integrating high-pass optical rechargeable fibers with batteries is a suitable solution. Mickelsen et al. [34] propose a global navigation satellite system (GNSS) powered by PV, where about 2 cm of fibers is required per minute to meet the requirement of the output voltage. Wristband high-capacity batteries, which maintain 98% of the original capacity after 500 recharging cycles, are also proposed [35,36].

In edge computing, offloading computation tasks to edge servers can significantly alleviate the computing pressure on mobile devices. However, energy management is challenging due to the unpredictability of energy consumption and harvesting and of the quality of service of upper services. During computation offloading, the system power consumption, such as the local execution power and the offloading transmission power, must be minimized under battery capacity and QoS (Quality of Service) constraints. The different characteristics of energy storage devices and their deployment result in the complexity of the energy optimization problem of edge computing with various constraints. Online and offline algorithms based on different specific optimization theories are of importance for system QoS guarantees.

2.3. DVFS support

The processor's power consumption can be formulated as in Eq. (1) [38]:

    P = C · V² · F    (1)

where P represents the processor's power, C represents the switched capacitance, V represents the supplied voltage, and F represents the running frequency.

The performance of processors and the development of memory technologies have improved greatly over the past decades, while battery technologies have not improved much. Modern processors are manufactured with the capability of dynamically adjusting the running state across different power states. Dynamic Voltage and Frequency Scaling (DVFS) is one common technique that provides the capability to adjust the CPU frequency and voltage according to real-time workloads [37,67]. For example, in order to improve battery efficiency, the latest systems-on-a-chip (SoC) for mobile devices are also equipped with DVFS capability [68].

For instance, the Android Linux kernel has cpufreq and devfreq modules, which can dynamically set the frequencies of the CPU and other DVFS-capable components to reduce power consumption. The control governor tries to balance power consumption and performance by using different frequency algorithms according to different running states. The CPU frequency can be rapidly increased at high workloads and reduced at low workloads.

Liang et al. [39] propose a DVFS governor based on model prediction for the Android system. Firstly, the DVFS governor performs an offline analysis to obtain the optimal CPU frequency range and the memory access rate (MAR) for specific workloads, and the analysis results are referred to as the critical speed (CS). Secondly, the CS is modeled as MAR-CSE. Thirdly, the DVFS governor is configured to select the optimal CPU frequency based on the model and the current running state.

However, DVFS is not suitable for workloads that change very frequently, such as video coding and decoding applications whose inter-frames change very frequently. Raffin et al. [40] propose a playing-aware DVFS (PAD) approach, which enables power-optimized, real-time, and efficient video coding. During the decoding process, better DVFS adjustments are made according to information from the decoding applications instead of the states.

The above-mentioned DVFS-enabled approaches rely solely on CPU utilization without considering user inputs. Yang et al. [41] propose a novel power management method that considers both user inputs and power consumption. Their proposed method is referred to as human and application-driven frequency scaling for processor power efficiency (HAPPE). HAPPE adjusts the frequency and voltage of the processor according to the requirements of users and applications. HAPPE selects the default on-demand frequency when users are not involved. When users are not satisfied with the performance, HAPPE will increase the frequency. When users prefer to save power, HAPPE will return to the default state. Compared with the default DVFS method, experiment results indicate that HAPPE can reduce system power by 25% with higher user satisfaction. However, HAPPE has two major shortcomings. The first is that it is not appropriate for users to provide their inputs in some cases. The second is that it neglects the fact that requirements from different users on the same machine or device can be different.

To overcome HAPPE's shortcomings, Muhuri et al. [42] propose an advanced power management approach based on user satisfaction. This approach is referred to as the perceptual computer power management approach (Per-C PMA). Per-C PMA collects the user's feedback on the last cycle and models the feedback as user satisfaction. Then, during the running process, Per-C PMA recommends the frequency that the user can use and dynamically adjusts the frequency after receiving


the user's feedback. Experiment results indicate that Per-C PMA can reduce the power consumption by 42.26% and 10.84%, and improve the performance by 16% and 10%, compared with ON-DEMAND and HAPPE, respectively.

DVFS was initially proposed to adjust the power of the CPU. However, in the edge computing environment, the modern heterogeneous multiprocessor system-on-chip (MPSoC) is widely deployed, which consists of various components, such as CPU, GPU, DSP, FPGA, and accelerators. Therefore, applying DVFS to the CPU alone is inefficient for power saving on such MPSoC devices, and other hardware components can also save power by adjusting their frequency [43,44]. Begum et al. [45] adjust the frequency of the CPU and memory modules, which improves the performance greatly and reduces the system's power consumption. Wang et al. [46] implement joint CPU–GPU DVFS and propose OPTiC. OPTiC automatically selects the partition and working frequency for the CPU and GPU, which greatly improves the performance and reduces power consumption compared with CPU-only DVFS.

DVFS has been the cornerstone of many software approaches that can meet application performance requirements with high energy efficiency. However, hardware can react faster to external events and perform fine-grained power management across a device. As such, recent technology trends, such as moving voltage converters on chip, favor hardware control for DVFS. Since workload may spike and fall over different periods, it is intuitively possible to adjust the device's working state and power state to save energy during periods without many external requests. However, energy efficiency and QoS are usually conflicting goals in both edge computing and cloud computing environments. DVFS cannot save power and energy while maintaining the QoS guarantee if it is not elaborately configured and invoked according to real-time workload and QoS requirements. Moreover, an edge device's energy consumption consists of different componential portions, such as CPU, memory, and network, where the CPU dominates more than the other components. Systematic energy awareness is much more important than single-component energy reduction without sacrificing system-level performance, especially for edge devices, which are energy-constrained. For example, workload-aware DVFS on edge devices must be elaborated for different edge computing scenarios, where workload patterns and device deployments change significantly from one to another. DVFS also affects performance and power, sometimes in disproportionate amounts. Therefore, each possible DVFS adjustment must consider the power budget and performance goals to determine whether the trade-off is acceptable.

2.4. Energy resiliency

Energy resiliency refers to energy security and energy efficiency under changing and unpredictable demands such as power outage and power scarcity. Since edge devices are powered by either an external power supply or an energy buffer battery, energy resiliency is of importance when the external power supply is discontinuous, or the battery is drained to a non-working condition. The work on energy resiliency can be categorized into three groups, i.e., avoiding power supply failures, coping with the effects caused by failures, and recovering data and computation from energy failures.

Once the edge devices fail, the application will be interrupted and the QoS perceived by users will be degraded. Energy failure can also lead to repeated computations and therefore a waste of energy. Currently, three effective fault tolerance approaches are proposed, including checkpointing, redundancy, and proactive fault tolerance [50].

A checkpoint is an internal event that occurs at a special time slot to restore the previous state for data recovery. Aupy et al. [51] provide an accurate analytical model which calculates the energy consumption and the execution time of the checkpoint after running a quantitative task. In that case, the overhead of the checkpoint can be limited to an acceptable range.

However, redundancy is considered a better approach to avoid failure. A conventional power supply system can maintain a redundant power supply that is parallel to the main power supply line [52]. Most of the time, the redundant power supply remains dormant and in standby mode. When the system requires a maintenance service, the redundant power supply will be used.

Ferreira et al. [53] use state machine replication for fault tolerance. Based on the checkpoint, copies of Message Passing Interface (MPI) processes are made. Casanova et al. [54] adopt two replication methods: one replicates all the instances, and the other replicates entire processes in an instance. Aupy et al. [51] propose to maintain five copies of each data item and keep the copies consistent with each other. When a node fails, data can still be acquired from the other non-failed nodes. Therefore, replication can be considered an efficient method to build a reliable system.

As another fault tolerance approach, proactive fault tolerance can reduce the probability of failure. Sampaio et al. [55] propose a method that can predict the probability of failure, and then perform proactive fault tolerance. Seybold et al. [56] develop a prediction-based fault tolerance method, which can be applied in battery assembly systems to avoid errors caused by failures.

Though the probability of power supply failures is quite low, the consequences will be catastrophic once a failure occurs. Currently, the loss cannot be redeemed even if power can be restored. As such, it is urgent to investigate how to detect and eliminate the causes of failures.

Kong et al. [47] propose a diagnostic method to detect a Micro-Short Circuit (MSC) in a lithium-ion battery cell. The method is based on the variation of recharging charging capacity (RCC) between charges: RCC is calculated by measuring the amount of electric leakage between two charges, the mean voltage is measured, and the MSC resistance is obtained by dividing RCC by the mean voltage. If the resistance is very small, it can be predicted that the battery will be short-circuited shortly.

Zhang et al. [48] propose another MSC detection method based on low-pass filters. They introduce the concept of a median unit, whose open circuit voltage (OCV) or state of charge (SOC) is in the middle of the battery group; the median unit represents the characteristics of normal cells. The OCV is obtained by low-pass filtering the circuit voltage differences between the median unit and the other units. The low-pass filters smooth the SOC data, the MSC current is estimated by recursive least squares (RLS), and finally the MSC resistance is computed according to Ohm's law. Experiment results indicate that the proposed method is accurate in estimating the real-time MSC resistance of the battery.

For an edge computing system with an external power supply, even a minor energy failure can damage the power source and ruin the user experience. Doshi et al. [49] design a system that adopts a master–slave mechanism to detect the phase of a disconnected line. When the line is disconnected, the master near the transformer stops receiving signals from the slave mounted on the pole. Once the master receives no signal response from the slave, it immediately turns off the relay of that respective phase and keeps the remaining phases in the ON condition. After detecting a failure, the module publishes the exact pole location of the line failure together with the faulty phase. As such, users or the management team can take measures to eliminate the fault.

Although there are plenty of ways to prevent failures, it is still important to cope with failures in a timely and effective manner when they occur. Forward Error Recovery (FER) [57] analyzes the impact of the failure on the system for later recovery. The system can be restored to a new state to continue without repeating the previous computation. However, the accuracy of FER can slightly decrease, and FER is usually implemented in situations where service continuity is more important than immediate recovery. When FER is used, accuracy is sacrificed in exchange for a swift action that keeps the system operational.
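The basic checkpoint trade-off discussed above (longer intervals lose more work per failure, shorter intervals pay more checkpoint overhead) can be illustrated with a small Monte-Carlo sketch. This is a simplified model, not the analytical model of Aupy et al. [51]: restart/recovery cost is ignored, failures are exponential, and all parameter values are assumptions.

```python
import random

def simulated_makespan(checkpoint_interval_s, checkpoint_cost_s,
                       mtbf_s, task_length_s, trials=2000, seed=42):
    """Monte-Carlo estimate of total elapsed time (useful work +
    checkpoint overhead + work redone after failures) for a task that
    checkpoints periodically. Recovery/restart cost is ignored."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        done, elapsed = 0.0, 0.0
        next_failure = rng.expovariate(1.0 / mtbf_s)
        while done < task_length_s:
            segment = min(checkpoint_interval_s, task_length_s - done)
            if elapsed + segment + checkpoint_cost_s <= next_failure:
                elapsed += segment + checkpoint_cost_s
                done += segment           # progress persists at the checkpoint
            else:
                elapsed = next_failure    # work since the last checkpoint is lost
                next_failure += rng.expovariate(1.0 / mtbf_s)
        total += elapsed
    return total / trials

# With effectively no failures, cost is just work plus checkpoint overhead.
print(simulated_makespan(100, 5, mtbf_s=1e12, task_length_s=1000))   # ~1050.0
# Frequent failures inflate the makespan through redone work.
print(simulated_makespan(100, 5, mtbf_s=500, task_length_s=1000))
```

Sweeping `checkpoint_interval_s` in such a simulation exposes the optimum that analytical checkpoint models derive in closed form.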


Since mobile edge devices are prone to power supply failures, it is important to enhance the fault tolerance capability of edge devices in order to avoid power supply failures and thereby guarantee energy resiliency. Applying the above-mentioned approaches in the edge computing environment can maintain energy resiliency when a power emergency happens.

2.5. Power capping

Power capping allows system operators and administrators to limit the power consumed by servers, and it also allows operators to plan data center level power distribution more efficiently and aggressively to avoid the risk of overloading existing power supplies. Usually the cap is a definitive limit that the server will not exceed, regardless of its current workload. The cap has no effect until the server reaches its power consumption limit. At that point, a management processor adjusts CPU P-states and clock throttling to limit the power consumed. The power capping capability ensures that the demand for power during heavy load will not exceed the power available. In some cases, dynamic power capping modifies CPU behavior independently of the operating system.

Intel's Running Average Power Limit (RAPL) [62] allows software to set power limits on hardware. The main purpose of RAPL is to maintain an average power limit within a sliding time window. Memory RAPL mainly consists of three components. The first is the power measurement logic component, which is used to measure memory power. The second is the power limit algorithm component, which is used to track memory power consumption in a sliding time window. The third is the memory power limit control component, which is used to enforce limits on memory. The experiment results indicate that RAPL can set different power limits for different loads with reduced memory bandwidth and minimal impact on application performance.

Imes et al. [63] propose Control Performance with Power (CoPPer), a software tool that uses adaptive control theory and hardware power capping to meet application performance requirements. First, CoPPer works on applications without prior knowledge of their specific performance/power tradeoffs. Second, CoPPer uses a Kalman filter to adaptively control non-linearities in the power cap/performance relationship. Third, CoPPer introduces adaptive gain limits to prevent power from being over-allocated when applications cannot achieve additional speedup. While guaranteeing the performance, CoPPer saves energy in many cases compared with existing DVFS-based approaches. Evaluations of CoPPer using Intel RAPL on a dual-socket, 32-core server indicate that CoPPer can achieve better energy efficiency while guaranteeing performance similar to that of software DVFS control.

Lefurgy et al. [58] propose an approach for adaptive power capping on a chip with a plurality of cores in a processing system. They first observe the events of the cores and determine the active power demand. Next, in order to estimate the power leakage of the chip, an average temperature of the chip is calculated using one or more on-chip thermal sensors in the cores. Power capping is performed by throttling the cores once the power capping threshold is obtained and the active power demand of the chip exceeds that threshold.

Reda et al. [59] propose a novel power capping approach, Pack and Cap, which is designed for multi-thread workloads with thread packing technology. Pack and Cap can pack multiple threads onto a set of active cores for power capping of servers. Pack and Cap controls the number of active cores to manage all the nodes together under a specific power limit. Experiment results indicate that Pack and Cap can achieve minimal application runtime and high accuracy for power capping requirements.

However, the Pack and Cap approach ignores the differences among applications and sets the same power cap for all the servers. To improve Pack and Cap, Conoci et al. [60] design a novel Time Warp architecture, which classifies the applications into a set of types and separates the tasks that have different powers and frequencies. They set high frequency and power for the threads that run more time-intensive tasks, while for the threads that run less important tasks, the power and frequency are set lower. This Time Warp architecture allows time-intensive tasks to be executed in a timely manner while reducing the overall power. In addition, this architecture is developed with the open source ROOT-Sim package [61], and there is great potential to improve this architecture and to achieve higher energy efficiency.

The above-mentioned approaches manage only software or only hardware within a power cap. For software, however, power capping takes a long time to converge. Although hardware can converge quickly, only voltage and frequency can be controlled. To this end, a hybrid power capping that integrates hardware with software is required. PUPiL [64] is developed for high performance with low power. PUPiL maintains several navigation nodes, which represent different resource choices in the decision framework. According to the workload and power, PUPiL selects a navigation node to ensure a power cap with a quick reaction time.

The power capping technique adjusts CPU performance using Operating System-directed configuration and Power Management (OSPM) through the standard Advanced Configuration and Power Interface. When the user enforces OSPM driver changes to T-states, the driver makes corresponding changes to processor P-states. These changes happen automatically and require no further input from the operating system. By setting a power management policy, administrators can configure systems to consume less power during times when system loads are low, for example, at night or on weekends.

3. Energy-aware edge computing architecture

Energy aware architectural design is vital for edge computing, although hardware level energy reduction capabilities are available in current edge devices. Architecture design may harness and integrate different levels of energy aware approaches and provide functions and capabilities for energy awareness programming interface design and implementation. In this section, we survey the research work on energy-aware edge computing architecture, including the memory system, networking, compiler support and programmability & reconfiguration, benchmarking, and software defined storage.

The surveyed work on energy awareness in edge computing system architecture is listed in Fig. 3 and Table 4.

3.1. Memory cache systems

Computer systems use cache management policies to reduce memory access latency and thereby increase the data access rate. Cache access plays an important role in processor performance and power consumption. In an edge computing platform such as an SoC (system on chip), network parameters and wire properties can be very different from traditional cache models; communication happening in on-chip caches and heterogeneous interconnects may reduce communication overhead.

Cache management policies include RANDOM, First-In-First-Out (FIFO), Most Recently Used (MRU), Least Recently Used (LRU), and Clean-First LRU (CFLRU). Currently, Phase-Change Memory (PCM) is emerging as an attractive alternative for the memory sub-system of future microprocessor architectures, mainly because of its merits of large capacity and low power consumption. However, the write performance of PCM is worse than that of the prevalent DRAM. This necessitates the deployment of hybrid DRAM and PCM systems in order to achieve high overall system performance. Moreover, various DRAM/PCM hybrid configurations may affect system performance and energy consumption, and a novel architecture is needed to maximize the hybrid system's performance without adversely affecting power efficiency. Architectural design of the cache placement algorithm, cache discovery algorithm, and cache replacement algorithm can reduce the caching overhead

Table 4
State-of-the-art on energy aware edge architecture (topic: contributions: characteristic description).

Memory and cache systems:
  - Traditional cache policies: FIFO, RANDOM, MRU, CFLRU, LRU [69–71]: flexible; more selectivity; mature.
  - Edge cache policies [72–75]: suitable for edge computing; low memory access cost; low energy consumption.

Naming and routing:
  - Routing [76]: energy-aware; QoS-aware; real-time; reliable.
  - Naming, DNS [77]: suitable in fixed service scenarios.
  - Naming, NDN [78–85]: self-organized; self-learning; low bandwidth; data-driven.
  - Naming, SDN [86–89]: dynamic; flexible.

Compiler:
  - WCET-aware C compiler [90,91]; ALEA [92]; LLVM [93]: energy efficient compiler optimization.

Re-programmability & reconfiguration:
  - Support OS [94]; field-programmable device [95]: reconfigurable; re-programmable.

Benchmarking:
  - Traditional benchmarking, SD-VBS [96,97] and MEVBench [98,99]: for traditional computing scenarios and for computer vision related workloads in edge computing.
  - Edge computing benchmarking, SLAMBench [100]: accuracy and energy efficiency benchmarking.
  - Edge computing benchmarking, CAVBench [101]: quantitative evaluation.

Software defined storage:
  - Fog data [102]; PRC [103]; SDStorage [104,105]: data volume and capacity reduction.

Fig. 3. Energy awareness in edge architecture.

and provide an optimal replacement policy, and may improve network utilization and reduce search latency, bandwidth, and energy consumption. The architecture comprises the following algorithms.

LRU is the most widely used policy for increasing the cache hit rate; new data is inserted into the LRU cache when a cache miss occurs. Based on LRU, Xie et al. [69] propose another cache management policy: Promotion–Insertion Pseudo Partitioning (PIPP). PIPP divides the LRU cache line into several different cache lines that correspond to different insertion strategies. Compared to LRU, PIPP can prevent thrashing applications from causing cache pollution. To improve the efficiency of the traditional LRU policy, Kron et al. [70] develop the Double Dynamic Insertion Policy (DIP). DIP not only simultaneously considers insertion and promotion policies, but also dynamically adapts to both insertion and promotion policies.

Similarly, Xing et al. [71] propose a multiple-factors least frequently used (mLFU) algorithm, which adopts a distributed multi-level storage (DMLS) model. When the storage capacity is not sufficient, mLFU selects the data with the lowest value for replacement. In addition, the system can replicate important data and store the replicas in the cloud in case of data loss.

However, the above-mentioned cache management policies are unsuitable for the hybrid memory used in edge computing devices. The hybrid memory used in edge devices consists of two types of memories: DRAM and PCM, where DRAM has fast reads/writes and PCM has slow reads/writes. This hybrid memory makes uniform memory access unavailable. To this end, Jia et al. [72] propose the Maximum Cache Value (MCV) policy to manage the cache in edge computing devices. MCV is efficient in maximizing the cache value and reducing memory accesses. In MCV, firstly, a model for maximizing the cache hit rate in hybrid memory is proposed. Secondly, a hybrid cache management policy for PCM/DRAM hybrid memory is developed by considering the miss cost, reuse possibility, cache line criticality, and cache hit rate. MCV has three major functionalities. The first is partitioning the LRU cache line into two cache lines, for read-only and write-only data, respectively. The second is sampling cache sets for process characteristics. The third is using an algorithm to evict the cache line with the minimum value. MCV can greatly reduce memory accesses and achieve a higher cache hit rate.

Jia et al. [73] also propose another cache management policy, namely Hybrid-LRU, for another type of hybrid memory, PDRAM. PDRAM consists of PRAM (phase change memory) and DRAM. DRAM is a traditional storage medium, which has bottlenecks in power and storage density [74]. In order to improve the storage medium, PRAM is designed as non-volatile, high-density storage. However, the read and write performance of PRAM is asymmetric, and the life span of PRAM is limited. Hence, PDRAM is proposed to combine PRAM and DRAM. As traditional cache management policies are not effective for the hybrid memory of PDRAM, a Hybrid-LRU policy is proposed. This Hybrid-LRU policy can distinguish between DRAM and PRAM, and then applies a separate cache replacement policy to each of them. For instance, DRAM is made responsible for write operations, which reduces the PRAM usage rate for writing. This Hybrid-LRU policy can reduce the PRAM usage rate by 11.8%, and increase PDRAM performance by up to 4.6% along with an 88.2% energy consumption reduction.
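The eviction idea shared by MCV and mLFU, scoring each cached entry and evicting the one with the lowest value, can be sketched as follows. The scoring function and the DRAM/PCM cost weights below are illustrative assumptions, not the published formulas.

```python
class ValueCache:
    """Minimal value-based eviction in the spirit of MCV/mLFU: every
    entry gets a score (reuse count x assumed miss cost, where a
    PCM-backed line is costlier to refill than a DRAM-backed one),
    and the lowest-scoring entry is evicted. Weights are illustrative."""

    MISS_COST = {"dram": 1.0, "pcm": 3.0}    # assumed relative refill costs

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}                    # key -> [reuse_count, backing]

    def _score(self, key):
        count, backing = self.entries[key]
        return count * self.MISS_COST[backing]

    def access(self, key, backing="dram"):
        """Returns True on a cache hit, False on a miss."""
        if key in self.entries:
            self.entries[key][0] += 1
            return True
        if len(self.entries) >= self.capacity:
            del self.entries[min(self.entries, key=self._score)]
        self.entries[key] = [1, backing]
        return False

cache = ValueCache(capacity=2)
cache.access("a", "pcm"); cache.access("a", "pcm")   # "a" reused, PCM-backed
cache.access("b", "dram")
cache.access("c", "dram")   # cache full: "b" has the lowest score and is evicted
```

Keeping the expensive-to-refill PCM-backed entry while evicting the cheap DRAM-backed one is exactly the asymmetry that uniform policies such as plain LRU ignore.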

To collect, analyze, and proactively deal with the large amount of data, Zeydan et al. [75] propose a proactive caching architecture, which can optimize 5G wireless networks and can be used for caching at the network edge. This architecture can parallelize the cache placement at the Base Stations (BS), and the computation and execution of the content prediction algorithms at the core site.
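A toy version of this proactive idea (not Zeydan et al.'s actual prediction pipeline) is to rank contents by observed popularity and pre-place the top-k at a base station cache before peak hours:

```python
from collections import Counter

def proactive_placement(request_log, cache_slots):
    """Toy proactive caching: rank contents by observed request counts
    and pre-load the most popular ones at the edge. A simple stand-in
    for a learned popularity predictor."""
    popularity = Counter(request_log)
    return [content for content, _ in popularity.most_common(cache_slots)]

log = ["video1", "video2", "video1", "news", "video1", "video2"]
print(proactive_placement(log, 2))   # -> ['video1', 'video2']
```

Pre-placing popular content avoids repeated backhaul transfers, which is where the energy and bandwidth savings of proactive caching come from.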
In summary, an effective caching and memory architecture can reduce memory accesses and achieve a higher cache hit rate, and the energy consumption of memory access is highly correlated with the cache hit rate. Therefore, efficient cache and memory management policies are vital to saving energy in the edge computing environment.

3.2. Networking for energy efficient routing and naming

Fig. 4. Traditional TCP/IP architecture vs. the NDN architecture.

Edge computing brings computing and storage resources closer to the data source, and may perform the computation on the edge nodes near the data source and in the cloud data center. Computing and communications may be invoked back and forth between the edge nodes and cloud data centers. Therefore, it is necessary to adapt the existing network architectural implementation to the requirements of the edge computing paradigm.

To maintain high energy efficiency and data transmission reliability, Zhang et al. [76] propose an energy-aware and QoS-aware data transmission routing algorithm. In the proposed algorithm, data is classified into groups based on its attributes and priorities to guarantee that the most important data, such as industrial data, can be transmitted to the computing nodes in a timely manner, while less important data, such as public data, can also be effectively transmitted under certain conditions. The routing algorithm can guarantee that data is transmitted to the best node while meeting real-time and reliability requirements.

Traditional Domain Name System (DNS) based service discovery mechanisms [77] are effective when services are not changing frequently. However, dynamic services can cost DNS servers too much time in completing the synchronization of the domain naming service and can thereby lead to network jitter. Hence, traditional DNS is not suitable for inherently dynamic edge computing scenarios. To improve on traditional DNS, Named Data Networking (NDN) [78] has been proposed and applied for service discovery in the edge computing environment. NDN is a self-organized data network combining P2P with centralization, and it names the data and services in order to easily discover services. To some extent, the establishment of computing links can be regarded as data association. Therefore, the connection between the service name and the data name can be established via NDN for service discovery in the computing link.

The NDN architecture [79] is a content-centric network architecture where data contents replace the role of the IP (Internet Protocol) address in the TCP/IP network architecture. Besides, the storage function is built into the network and data packets can be cached in the forwarding routing nodes, so that the response time and network traffic can be greatly reduced. Compared with the IP-based architecture (in Fig. 4), the most prominent features of the NDN network are the strategy and the security layer. The upper layer of the content/data layer is the security

the names of applications without any modification in headers and contents. NDN summarizes the "application interfaces" and "network interfaces" as "faces". The faces have two functionalities: (1) unifying all transmission protocols and mechanisms for packet delivery under the NDN network layer, and (2) transferring the communication channels for local applications in the same packet processing logic of the NDN forwarding module.

Specifically, NDN has changed network communication from IP-driven to data-driven. NDN uses the name to identify and retrieve data packets instead of location information, such as an IP address. However, challenges still exist in NDN networks, such as security. Zhang et al. [80] demonstrate three challenges and three corresponding solutions for NDN security. The first challenge is how to build a trust anchor and guarantee that all cryptographic verifications terminate at a pre-established trust anchor; the solution should rely on a secure commercial certification organization to build temporary trust. The second challenge is to provide solutions to express trust management policies and automatically execute those policies; NDN may express trust management policies by defining the relationship between the data name and the signature key name. The third challenge is to provide feasible key management policies. Zhang et al. also suggest that NDN should use a new naming convention, which allows the applications to construct the required keys according to the given data.

NDN builds content-based authenticity into its architecture, which covers only one part of the content-based security model. Yu et al. [81] demonstrate how to use content-based confidentiality to protect content-sharing applications over NDN. To achieve content-based access control, the data owners provide two types of credentials. The first is the production credential, which is generated by producers and allows an authorized producer to authenticate itself to data receivers. The second is the consumption credential, which is a pair of public/private keys generated by the data owner. Yu et al. demonstrate that a well-designed naming convention can explicitly convey the access control policy. As such, a well-designed naming convention can significantly reduce the number of crypto operations and facilitate the distribution of encryption keys in certain scenarios.

Shi et al. [82] investigate the trust problem in NDN networks integrated with broadcast-based self-learning. Broadcast-based self-learning
layer which can ensure the data integrity and reliability. And below the can adjust the network according to fluctuating conditions and the
content/data layer is the strategy layer which is similar to the functions mobility of producers without any routing or control protocols, which
of the transportation layer and networking layer in the traditional makes it possible to be used in NDN [83–85]. However, broadcast-
TCP/IP architecture. Its main functions are forwarding, routing, and based self-learning is lack of authentication mechanism, which leads to
caching. And below the strategy layer, there are some physical links potential network attacks, such as ARP (Address Resolution Protocol)
which is similar with IP system. And the NDN network can be achieved spoofing. In order to prevent NDN’s data-centric security from such
base on the existing network architecture or new network architecture attacks, NDN self-learning is developed by applying the self-learning
because the lower layers of NDN are compatible with traditional UDP to local area networks and switched Ethernet. This approach builds a
(User Datagram Protocol) and IP protocols. forwarding table for low-expense quick recovery from link failure. In
In a NDN architecture, the host node functions at the applica- addition, this approach can reduce the load on the Internet. Evaluation
tion layer, the network layer, the data link layer, and the physical results indicate that NDN self-learning reduces the bandwidth by 68%
layer [79]. The applications that produce data sign the data directly at compared with RONR with file access traffic.
the time the data is produced. Moreover, for the delivery of application- The computation offloading and migration in edge computing oc-
created interests and data network layer packet, NDN directly uses curs more frequently than that in cloud computing. Such a large
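The name-based security conventions surveyed above can be sketched in a few lines of code. The name hierarchy, the data-name-to-key-name rule, and the prefix-based authorization check below are hypothetical illustrations in the spirit of [80,81], not the conventions of any deployed NDN system.

```python
# Sketch of a name-based access-control check in the spirit of the NDN
# naming conventions described above. The name hierarchy and the
# data-name -> key-name mapping rule are hypothetical examples.

def components(name: str):
    """Split an NDN-style hierarchical name into its components."""
    return [c for c in name.split("/") if c]

def key_name_for(data_name: str) -> str:
    """Derive the signing-key name from a data name: the convention
    assumed here keeps the two-component producer prefix and appends
    the marker component 'KEY'."""
    prefix = components(data_name)[:2]          # e.g. /smarthome/camera1
    return "/" + "/".join(prefix + ["KEY"])

def is_authorized(consumer_prefix: str, data_name: str) -> bool:
    """A consumer credential for a prefix grants access to every data
    name under that prefix (prefix containment)."""
    cp, dp = components(consumer_prefix), components(data_name)
    return dp[:len(cp)] == cp

data = "/smarthome/camera1/video/2020/01/15"
print(key_name_for(data))                        # /smarthome/camera1/KEY
print(is_authorized("/smarthome/camera1", data)) # True
print(is_authorized("/smarthome/sensor2", data)) # False
```

Because the key name is derived purely from the data name, a consumer can locate the verification key without any extra lookup, which is the property Yu et al. exploit to cut down crypto operations.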

C. Jiang, T. Fan, H. Gao et al. Computer Communications 151 (2020) 556–580

Such a large amount of data migration requires a more dynamic network architecture. Software-Defined Networking (SDN) [86,87] enables flexible network programming via a centralized network controller: it is a programmable network in which the data and control planes are separated. Thanks to this separation, network administrators can swiftly configure routers and switches to reduce network jitter and support faster migration of traffic, computational services, and data. Since SDN can adapt the network to the running environment on the fly, SDN-based technologies can meet the requirements of IoT from various perspectives, including edge network, access network, core network, and data center networking, as well as power and energy reduction [88,89].

Since network scale and topology in edge computing environments are highly heterogeneous, dynamic, and mission-oriented, NDN and SDN are promising for providing high-performance, security-driven solutions. In summary, the network itself and network services can be better organized and managed if NDN and SDN technologies are combined in edge computing. Moreover, power and energy consumption can be reduced by applying NDN and SDN in edge computing environments.

3.3. Compiler support for energy profiling

Currently, in a typical small or middle-sized data center, IT infrastructures run at as low as 10% utilization, while a server's idle power consumption is around 15%–40% of its peak power [106,107]. This situation leads to huge energy waste, especially in mobile and wireless devices with limited battery life. As such, it is necessary to reduce the energy consumption of IT equipment, including servers, storage disk arrays, and networking switches. Moreover, since the energy consumption of server systems can be directly attributed to program execution, source-code-level energy profiling plays an important role in reducing energy consumption. Specifically, the compiler estimates the energy consumption of source code in the development phase and applies subsequent energy-oriented code generation optimizations.

Roth et al. [90] propose an energy profiling approach for modeling and measuring software energy consumption. This modeling process is implemented in an optimizing compiler, the WCET-aware C Compiler (WCC), which is designed for application optimization during the compiling process [91]. The compiler generates a high-level intermediate representation (High-level IR) and uses the code selector to translate the High-level IR into a Low-level IR. Various standard assembly-level analyses and optimizations are performed at this stage. After the energy analysis and optimizations, the energy profiling information is attached to the Low-level IR for compilation.

Similarly, Mukhanov et al. [92] propose ALEA, a tool that uses probabilistic methods to measure the power and energy consumption of the basic blocks of a program. Using statistical sampling, ALEA provides fine-grained energy profiling that overcomes the limitations of power sensing tools. The profiling results are then fed into the compiler or binary instrumentation to track the execution time of code paths, iteration counts, and the events provided by both software and hardware. Sample-based profilers take samples of the execution state and then associate the events with the samples to make energy-efficient decisions. Georgiou et al. [93] propose a new approach for compiler-level energy profiling and visualization of a program's energy consumption at all abstraction levels. Machine-level instructions are mapped one-to-many (1:m) onto LLVM instructions, which lifts the energy information up to the LLVM intermediate representation level.

All the energy consumption of a server system is ultimately attributable to program execution. Therefore, source-code-level energy profiling in the compiler can support optimal, energy-efficient code generation. In a typical edge computing environment, edge devices are heterogeneous in instruction set, architectural design, and software stack, which makes compiler-level energy profiling tailored to the architecture of the underlying platform all the more important. Energy-aware compiler-level code profiling can lower the energy consumption of various applications running on edge devices. However, the main challenges are accurate energy profiling and optimal configuration of the compiler.

3.4. Reprogrammability & reconfiguration

Edge devices should be reconfigurable for different edge computing scenarios. Schoon et al. [94] propose a mobile device that supports multiple operating systems (OSes) without hardware modifications. The proposed device is equipped with configuration buttons, where each button corresponds to a different running OS. When a button is pressed, the hardware and software resources of the corresponding OS are provided to the system; when switching the OS, the CPU is responsible for reconfiguring the boot loader.

Field-programmable devices, such as the FPGA and the PROM, provide more flexible integrated circuits that can be reprogrammed to adapt to different types of devices, or to improve performance and fix bugs. Le et al. [95] propose a hardware/software platform based on an FPGA and an ARM processor that provides reconfigurable hardware, an IP stack, and virtual machine middleware to distributed sensor networks. This IP-based edge-centric platform, CaRDIN, targets efficient deployment and development, and uses the FPGA and a reconfigurable virtual machine to achieve high performance and low power consumption.

Reusable and reconfigurable devices require tedious, energy-consuming reconfiguration and complex designs, while edge computing requires reusable, low-energy, self-reprogrammable, and self-reconfigurable devices in heterogeneous environments. Reprogrammability and reconfigurability therefore provide flexible adaptation of energy consumption across different hardware and software platforms.

3.5. Benchmarking & measurements of energy efficiencies

Benchmarking software can provide quantitative measurements and performance insights for target systems. While there are many benchmarking programs for traditional computing systems, few of them can evaluate the different aspects of edge computing systems, including performance and energy efficiency.

For example, Parsec [108] and HPCC [109] are benchmark suites for traditional parallel and high-performance computing systems, while BigDataBench [110] is a benchmark suite for the performance evaluation of big data analytical systems. SPECpower [111] is the first industrial benchmark suite for the energy efficiency evaluation of computer systems; it is designed to evaluate the energy efficiency and performance of server-side Java applications for small and medium-sized servers at graduated utilization levels. In edge computing scenarios, data and computation are distributed among different network layers, including edge devices, edge servers, and cloud data centers, which results in complicated interactions and collaborations between them. Unfortunately, the existing benchmarks are unable to benchmark this scenario, which consists of heterogeneous edge devices.

The BenchCouncil proposed Edge AIBench [112] for edge AI benchmarking: a comprehensive end-to-end edge computing benchmark suite to measure and optimize AI systems and applications in edge computing scenarios. Specifically, Edge AIBench models four typical application scenarios: ICU patient monitoring, surveillance cameras, smart homes, and autonomous vehicles, with a focus on data distribution and workload collaboration.

Das et al. [113] compare two edge computing platforms, i.e., Amazon AWS Greengrass and Microsoft Azure IoT Edge, using a new benchmark comprising a suite of performance metrics.
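Before comparing platforms further, the SPECpower-style graduated-load scoring mentioned above can be made concrete: throughput and average power are measured at stepped utilization levels, and an overall efficiency score is computed in operations per watt. The load levels, throughput, and power figures below are invented for illustration, not measurements of any real system.

```python
# Illustrative SPECpower-style efficiency calculation: the overall
# score is the sum of operations across all graduated load levels
# divided by the sum of average power (ops per watt). All numbers
# below are made-up example values.

loads = [1.00, 0.80, 0.60, 0.40, 0.20, 0.00]    # target utilization
ops   = [10000, 8000, 6000, 4000, 2000, 0]      # throughput (ops/s)
watts = [100.0, 88.0, 74.0, 60.0, 45.0, 30.0]   # avg power incl. idle

score = sum(ops) / sum(watts)                   # overall ops per watt
print(round(score, 1))                          # 75.6
```

Including the 0% (active idle) level in the denominator is what penalizes systems with high idle power, which is exactly the waste highlighted at the start of Section 3.3.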


They also compare the performance of the edge frameworks to cloud-only implementations available in their respective cloud ecosystems. The benchmarking results show that the two edge platforms provide comparable performance, which nevertheless differs in important ways for key types of workloads used in edge applications, because they use different underlying technologies: edge Lambda functions vs. containers.

SD-VBS [96,97] and MEVBench [98,99] are designed to test the performance of computer vision workloads on mobile devices. SD-VBS contains various computer vision applications, while MEVBench provides a wide range of mobile computer vision applications, such as face detection, object tracking, feature extraction, and feature classification. In addition, Nardi et al. [100] introduce a benchmarking tool named SLAMBench, which can be used to evaluate the performance, accuracy, and energy efficiency of a dense RGB-D simultaneous localization and mapping system. SLAMBench can harness the ICL-NUIM dataset of synthetic RGB-D sequences with trajectory and scene ground truth.

CAVBench [101] is proposed for edge computing systems in the connected and autonomous vehicles (CAVs) scenario. It comprises six typical applications covering four dominant CAVs scenarios and takes four datasets as standard input. CAVBench provides quantitative evaluation results via application- and system-perspective output metrics.

In edge computing scenarios, different data analytics and processing techniques are widely used to augment the intelligence of end devices, edge servers, and cloud data centers, the three typical hardware platforms in edge computing. Different edge computing applications require different resources, such as computing power, data storage, and network. Edge computing scenarios are so heterogeneous that there is no uniform, standard benchmarking suite to quantitatively measure and evaluate both the hardware and the software systems in edge computing. The existing benchmarking suites try to evaluate typical application scenarios such as smart cities, smart homes, autonomous vehicles, surveillance cameras, smart medical and wearable devices, and so on. However, the heterogeneity of edge devices and the wide coverage of edge computing scenarios make it impossible to design a unified benchmark that adapts to all edge computing scenarios. Hence, for each edge computing scenario and device, a corresponding benchmarking suite is urgently needed for performance evaluation, especially energy efficiency evaluation.

3.6. Software-defined storage infrastructures for energy awareness

In edge computing environments, data storage and computation are performed on edge devices. Unlike cloud servers equipped with high storage capacity and stable infrastructure, edge devices are usually constrained in storage capacity and exposed to unstable environments. Meanwhile, a large amount of data is constantly produced by various sensors, cameras, and other devices. This makes it inefficient and impractical to use traditional architectures and algorithms for data storage and processing in edge computing. Efficient data storage can therefore improve not only system performance but also energy efficiency in edge computing.

Dubey et al. [102] propose Fog Data, a service-oriented architecture that can improve overall performance and reduce storage requirements for telemedicine. Fog Data can perform field data analysis and thereby reduce the amount of data that needs to be stored and transmitted to the cloud.

In public-resource computing (PRC), users donate the computing and storage resources of their devices to scientific projects. Inspired by the PRC paradigm, Alonso-Monsalve et al. [103] introduce the idea of using public-resource computing and storage techniques to process part of the workload of current cloud systems and thus avoid saturating the cloud. The idea is to use devices working as participants, which form a data center between the cloud service providers and the end clients. A participant can be any type of device, from a traditional PC (Personal Computer) to a smartphone, a tablet, or even a smart TV. They deploy an edge computing platform with PRC and storage techniques; the platform utilizes the computing resources of various edge devices to process workloads and thereby avoids saturation in the cloud.

Wu et al. [114] present an alternative, a cooperative storage algorithm named Meccas, which simultaneously considers node resource distribution, task scheduling, and edge and cloud information. To meet the requirements of a scale-out storage system, Al-Badarneh et al. [104] propose a wireless software-defined storage (SDStorage) simulation framework for MEC, where storage services are co-provided by the nearest MEC nodes. They integrate SDStorage with MEC in order to provide storage services to wirelessly connected nodes at the network edge.

To bridge the gap between local computing devices and the cloud data center, it is necessary to build a robust, low-power infrastructure for local computation and data storage. To this end, Pahl et al. [105] introduce a Raspberry Pi cluster architecture for a container-based edge cloud PaaS, which consists of 300 Raspberry Pi nodes; each node is a single board with an integrated 700 MHz ARM CPU, a GPU, and 256/512 MB of RAM. The proposed PaaS platform provides the functionality to design the cloud, deploy applications, build the runtime environment, and deliver the applications to the cloud.

Edge computing enables local data storage and computation rather than remote data storage and access in cloud data centers; it can perform field data analysis and thereby reduce the amount of data that needs to be stored and transmitted to the cloud. Moreover, efficient data storage can improve the energy efficiency of edge devices, because energy and bandwidth consumption can be greatly reduced once the amount of data is reduced. In this section, we have reviewed existing work on data storage in edge computing. However, there is little work on energy-efficient data storage, and various energy-aware storage techniques remain to be investigated in edge computing.

4. Energy-aware edge OS

A typical operating system (OS) manages computer hardware and software resources and provides common services for computer programs. With an operating system, developers can build applications without thinking too much about the device underneath; in most cases, applications take advantage of the services and APIs available in the OS without having to deal directly with hardware resources or even network connectivity. With advances in computer technology, smaller, lighter, and faster computing devices are emerging, and we are experiencing a proliferation of high-performance handheld, mobile, and embedded edge devices in edge computing. As these devices become faster, applications more complex, and microprocessors more powerful, it is becoming difficult to provide sufficient runtime with the available stored energy in these small devices. To address this problem, an energy-aware OS is vital for low-power, energy-conserving design in edge computing; it reduces the energy required for computation, generally trading off computational throughput against power consumption.

In this section, we review the existing literature on the design of edge OSes, from energy-aware resource management and data encryption and authentication to virtualization technologies and file systems. The related work on energy awareness in edge OSes is shown in Fig. 5, and Table 5 provides a reference link to the existing work.
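Returning briefly to the storage layer, the data-reduction argument of Section 3.6 can be made concrete with a few lines: instead of uploading every raw sensor reading, an edge node uploads one aggregate per window plus only the readings that cross an alert threshold. The window size, threshold, and synthetic readings below are arbitrary example values, not part of any cited system.

```python
# Minimal sketch of edge-side data reduction: one aggregate record per
# window, plus individual records only for threshold-crossing values.
# Window size and threshold are arbitrary illustrative parameters.

def reduce_readings(readings, window=10, threshold=80.0):
    """Return the records actually uploaded to the cloud."""
    uploads = []
    for i in range(0, len(readings), window):
        win = readings[i:i + window]
        uploads.append(("avg", sum(win) / len(win)))     # 1 aggregate
        uploads.extend(("alert", r) for r in win if r > threshold)
    return uploads

raw = [20.0 + (i % 7) for i in range(100)]   # 100 synthetic readings
sent = reduce_readings(raw)
print(len(raw), "->", len(sent))             # 100 -> 10
```

In the benign case the upload volume shrinks by the window factor, which translates directly into radio transmission energy saved; under anomalous conditions the alert records preserve the information the cloud actually needs.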


Table 5
State-of-the-art on energy aware edge OS.

Category                           | Contributions                                                            | Characteristic description
OS                                 | EdgeOSH [115]                                                            | A home operating system.
                                   | EdgeOSv [116]                                                            | Open vehicular data analytic platform for CAVs.
                                   | 𝛷OS [117]                                                                | For smart home devices.
Resource management and scheduling | Fog to Cloud [118]; OpenFog RA [119,120]                                 | Short execution time; Parallel services operation; Resource continuity.
                                   | Other related work [121–126]                                             | Short execution time; High resource utilization; Less power consumption.
Data security                      | Mechanism/Algorithm [127,128]; Smartphones [132]; WSNs [133–135]         | High security.
                                   | TEE [129–131]; Intel SGX [136]; IoT [137,138], IDCs [139]                | High security.
Infrastructure management          | Firework [140]; Nebula [141]; CoaaS [142]; Other related work [143–145]  | Energy efficiency; High performance.
Virtualization                     | Containerization [146–149]                                               | Lightweight; High speed; High performance; Low expense.
                                   | Virtualization [150–152]                                                 | Heavy; Slow speed; Weak performance.
File system                        | Distributed file system (HDFS, QFS, GlusterFS, AUFS [153])               | High efficiency; Reliable; Fault-tolerant.
                                   | Peer-to-Peer file system [154,155]                                       | Content-based address; Fast, safer and robust.
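To make the "Data security" row of Table 5 concrete, the sketch below authenticates sensor payloads with HMAC-SHA256 from the Python standard library. Symmetric authentication costs far less energy than public-key signatures, which is why it is common on constrained edge nodes. The key and payload are example values; real deployments would provision per-device keys through a secure channel.

```python
# Lightweight message authentication for edge payloads using only the
# standard library. The shared key below is an illustrative placeholder.

import hmac
import hashlib

KEY = b"per-device-shared-secret"             # provisioned out of band

def authenticate(payload: bytes) -> bytes:
    """Append a 32-byte HMAC tag so the receiver can verify
    integrity and origin."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def verify(message: bytes) -> bytes:
    """Return the payload if the tag checks out, else raise ValueError."""
    payload, tag = message[:-32], message[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # constant-time compare
        raise ValueError("authentication failed")
    return payload

msg = authenticate(b"temp=21.5")
print(verify(msg))                            # b'temp=21.5'
```

Note the use of `hmac.compare_digest`, which avoids timing side channels; a TEE, as discussed in Section 4.3, would additionally protect the key itself while this code runs.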

Fig. 5. Energy awareness in edge OS.

4.1. The edge OS examples

An operating system for edge computing must manage heterogeneous computing resources and process large amounts of heterogeneous data and applications. The edge OS is also responsible for deploying, scheduling, and migrating computing tasks to guarantee reliability and maximize energy utilization on edge nodes. Unlike the OS in traditional IoT devices, an edge OS should be able to manage data, tasks, and resources together.

Cao et al. [115] introduce EdgeOSH, a home operating system for the Internet of Everything (IoE). For clouds, EdgeOSH can upstream/downstream data and computing requests on behalf of devices. For house occupants, EdgeOSH collaborates with the occupants and the house. For service practitioners, EdgeOSH uses a unified programming interface to reduce development complexity. For smart homes, EdgeOSH is the central manager that connects data, devices, and services while ensuring data security and privacy. EdgeOSH combines the devices, house occupants, and the cloud in order to provide users with IoT services. EdgeOSH consists of four vertical layers and two extra components: communication, data management, self-management, programming interface, naming, and security & privacy.

Zhang et al. [116] propose an open vehicular data analytics platform (OpenVDAP) for CAVs, which is a full-stack edge-based platform. OpenVDAP is flexible in managing polymorphic services and is thereby able to meet the requirements of different networks and computing workloads. OpenVDAP can evaluate resources and then select appropriate execution paths to trade off QoS against user experience. The vehicle operating system, EdgeOSv, is an OS for networked cars; it is the key technology in OpenVDAP and can guarantee QoS along with user experience.

Xu et al. [117] propose a full stack for edge computing, referred to as 𝛷-Stack. 𝛷-Stack integrates 𝛷OS and extends the REST (Representational State Transfer) architecture and Lua in order to help users deploy their computing tasks on their home edge devices. Designed for smart home devices, 𝛷OS uses a multi-core architecture to parallelize real-time deep learning tasks and common computing tasks.

k3OS [118] is a lightweight Kubernetes distribution designed for the edge in resource-constrained environments, where the Kubernetes cluster configuration and the underlying OS configuration are defined with the same declarative syntax as other Kubernetes resources, meaning both can be managed together.

The edge OS must take a software-centric approach to the design of low-power edge computing systems and identify the key activities that must be performed in the operating system to create an energy-aware and energy-efficient system. Specifically, an energy-conserving OS is needed in current edge computing scenarios to reduce processing energy costs. However, current edge OSes lack energy-efficient real-time task scheduling that exploits energy-conserving hardware techniques. Moreover, dynamic voltage and frequency scaling (DVFS) has not been ported to edge OSes to exploit the software-controlled power modes of the hardware.
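A minimal sketch of the scheduling decision such DVFS support would enable: with dynamic power modeled as P = k * f^3 (voltage scaling roughly linearly with frequency), a task of C cycles costs E = P * (C / f) = k * C * f^2, so the lowest frequency that still meets the task deadline minimizes energy. The constant k and the frequency list below are illustrative values, not real hardware data.

```python
# Sketch of an energy-aware DVFS decision. Energy per task is modeled
# as E = k * C * f**2 (dynamic power k * f**3 times run time C / f),
# so the slowest deadline-meeting frequency is the cheapest. The model
# constant k and the frequency list are illustrative only.

def pick_frequency(cycles, deadline_s, freqs_hz, k=1e-27):
    """Lowest available frequency that finishes within the deadline."""
    feasible = [f for f in freqs_hz if cycles / f <= deadline_s]
    if not feasible:
        raise ValueError("deadline cannot be met at any frequency")
    f = min(feasible)                      # slowest feasible = cheapest
    energy_j = k * cycles * f ** 2
    return f, energy_j

freqs = [600e6, 1.0e9, 1.4e9, 1.8e9]       # 600 MHz .. 1.8 GHz
f, e = pick_frequency(cycles=5e8, deadline_s=0.75, freqs_hz=freqs)
print(f"{f / 1e6:.0f} MHz, {e:.3e} J")     # 1000 MHz, 5.000e-01 J
```

Under this model, running the same task at 1.8 GHz instead of 1 GHz would cost roughly 3.2 times the energy, which is why exposing the hardware's frequency levels to the edge OS scheduler matters.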


In complicated heterogeneous edge computing scenarios with multiple participants, an energy-aware QoS guarantee framework is also needed for task adaptation, providing differentiated service quality to different application tasks and weighing energy requirements against the value of computation and system runtime.

4.2. Energy aware resource management and scheduling

Different edge nodes require different types and amounts of resources. It is therefore important to select an optimal scheduling strategy that guarantees resource continuity and service availability. For instance, researchers have proposed resource continuity management strategies such as the Fog-to-Cloud (F2C) layered model and the OpenFog Reference Architecture (OpenFog RA) [119].

The F2C layered model is a novel architecture that achieves service parallelization and shorter service execution times based on data sharing and data collaboration; OpenFog RA is designed to achieve the same goal. In addition, Xavi et al. [120] propose a distributed management framework that combines edge and cloud resources to effectively achieve resource continuity in a generic hierarchical architecture.

Li et al. [123] present a lightweight scripting language, EveryLite, for resource-constrained and heterogeneous edge devices. Tasks that are limited in both time and space complexity and are invoked by interfaces in the migration task are referred to as micro tasks, and EveryLite can process such micro tasks in edge computing. Kang et al. [121] design Neurosurgeon, a lightweight scheduler that automatically schedules deep neural network (DNN) tasks at different layers between mobile devices and the cloud data center without profiling each application. Neurosurgeon adapts to a number of DNN architectures, hardware platforms, wireless networks, and server workloads for low latency and energy efficiency.

Zhang et al. [122] develop CoGTA, a task allocation framework that adopts cooperative competition and game theory. CoGTA targets delay-sensitive, social-sensing applications in edge computing systems and addresses critical issues such as Bottom-up Game-theoretic Task Allocation (BGTA). Similarly, Zhang et al. [124] use a Markovian stochastic channel to provide an optimal solution and offloading strategy for cooperative operation between the cloud and edge devices. This minimum-energy task scheduling problem can be formulated as a constrained shortest-path problem on directed acyclic graphs; the standard Lagrange Relaxation based Aggregated Cost (LARAC) algorithm is then used to solve it.

Kwak et al. [125] propose a dynamic CPU/network resource and task allocation algorithm for the mobile networking environment, using the Lyapunov optimization technique to scale the CPU/network speed. Meanwhile, Liang et al. [126] introduce a new resource allocation approach that takes both the bandwidth supply and the selection of sources into account.

Data in edge computing is distributed, which requires distributed computing, storage, and networking resources for data processing. Moreover, edge devices are usually heterogeneous, which leads to heterogeneous runtime environments and heterogeneous data on each edge device. Furthermore, resources on edge devices are limited. As such, it is complicated to devise an optimal, dynamic, and energy-aware scheduling strategy in edge computing. Resource abstraction and management in the edge OS that let developers concentrate on their applications, without thinking about hardware, connectivity, and code written for specific devices, would be a huge leap forward for the realization of complex projects. It would reduce complexity, simplify development, and make the edge computing ecosystem less expensive.

4.3. Data encryption and authentication

As computing and networking resources move closer to end users, edge computing can potentially reduce the likelihood of private data leaks when data is transmitted to the cloud [156]. However, the very proximity of edge devices to users can lead to a higher probability of security attacks on edge devices than in the cloud data center. In addition, the distributed and heterogeneous nature of edge nodes makes uniform resource management difficult and thereby leads to a series of security and privacy issues. Although traditional security approaches can be applied to edge computing, it is still preferable to adapt existing approaches to better fit edge computing. Over the past few years, several security technologies have been proposed to enhance security in edge computing.

Pang and Mollah et al. [127,128] adopt a security mechanism in which a trusted central database management system (DBMS) maintains a table to preserve and allocate verifiable B-trees (VB-trees). Edge servers then generate a result object (unmodified and always correct), and the result object generates a verification object (VO) according to the VB-tree.

A Trusted Execution Environment (TEE) [129–131] is a trusted, isolated, and independent execution environment on a device. In untrusted environments, a TEE offers a secure and confidential space for private data and sensitive computation, using hardware-based mechanisms to guarantee the code and data.

In edge computing, to preserve data security and avoid security breaches, much data is encrypted before it is transferred to servers for processing and storage. The major challenge is that edge devices are very small and short of resources, while data encryption requires extra energy. Therefore, it is necessary to minimize the energy requirements of data encryption [132–139]. By running the application in the TEE and encrypting/decrypting the external storage it uses, an edge application can guarantee data security and privacy even if the edge node is under attack [157].

The number of edge devices has increased exponentially in recent years, which has in turn caused exceptional growth in data generation. An energy-efficient and secure encrypted data transmission scheme can not only prolong the lifetime of edge nodes but also guarantee the security of data. The security of edge nodes remains a problem that cannot be ignored: as a new computing paradigm, edge computing faces many common problems in application security, network security, information security, and system security. Although the works above address edge security, more effective methods are still required.

4.4. Infrastructure management

Previous work indicates that edge computing can achieve maximum performance with minimal energy consumption through computation offloading to data sources. However, these benefits are achieved through ubiquitous infrastructure deployment. Ryden et al. [141] propose an edge-based cloud infrastructure named Nebula. By exploiting edge resources, Nebula is suitable for data-intensive computing and can improve overall application performance through workload balancing, locality awareness, and fault tolerance. Specifically, Nebula consists of four components: Nebula central, the datastore, the compute pool, and the Nebula monitor.

Zhang et al. [140] propose a new messaging and collaboration middleware, Firework, which can be applied in a collaborative edge environment (CEE) for big data processing. Firework targets data sharing and guarantees the data privacy and data integrity of users. By processing data near the data source, Firework can reduce the amount of data movement and the response time. According to the type of service a node provides, nodes are classified as either computing nodes or Firework managers.
568
C. Jiang, T. Fan, H. Gao et al. Computer Communications 151 (2020) 556–580

edge of the network. CoaaS uses lightweight containers instead of conventional virtual machines. CoaaS also contains a multi-objective function, which can reduce the energy consumption by considering various constraints, such as memory, CPU, and the user's budget.

Based on static bridging Message Queue Telemetry Transport (MQTT) systems, Rausch et al. [143] propose an efficient message transmission approach for geographically dispersed locations. An edge computing middleware, a message-oriented middleware, is proposed in [143] as well.

In-situ AI [144] is an automatic incremental computing framework and architecture for deep learning in IoT. With minimal data movement, In-situ AI can deploy deep learning tasks to IoT nodes through data diagnostics and the computing model.

Apart from dedicated computing hardware, researchers have studied the applications of FPGAs in edge computing. In order to accelerate speech recognition applications, the efficient speech recognition engine (ESE) is proposed in [145], which can improve the performance of the Long Short-Term Memory network (LSTM) on mobile devices with FPGAs. ESE performs pruning compression on the LSTM and balances the load while scheduling the LSTM data flow across multiple hardware computing units.

In summary, the distributed and heterogeneous edge nodes make it hard to perform unified management. Edge architecture design is still an emerging field with many open challenges, such as the efficient management and evaluation of heterogeneous hardware in edge computing.

4.5. Containerization & virtualization

Isolation is an important technology for edge computing. Edge devices require effective isolation to guarantee service reliability and quality. There are two types of isolation: (1) the isolation of computing resources, where different applications run separately without disrupting each other; and (2) the isolation of data, where different applications have different data access permissions. At present, virtual machine (VM) and container technologies are the most common isolation approaches in cloud computing. Edge computing can also develop appropriate isolation techniques based on VMs and containers.

Ismail et al. [146] have argued that Docker is a reliable technology for deploying isolated services in edge computing. They evaluate Docker on four topics: (1) service deployment and termination; (2) service and resource management; (3) fault tolerance; and (4) caching capability. The experimental results clarify that Docker is a good solution for achieving service isolation in the context of edge computing.

Ma et al. [150] propose an efficient service switching system. Based on Docker's layered file system, they design a container migration strategy to reduce system overheads in edge computing, such as the file system, binary memory image, and checkpoint costs. Ha et al. [147] propose a VM switching approach for the migration of VM computing tasks. Since VMs can be fully encapsulated and can support fast and transparent resource placement, this VM-based isolation can improve the immunity of applications and the availability of the edge computing system.

Similarly, Ismail et al. [151] deploy Docker containers as the edge platform. Due to its flexibility in location movement, Docker is suitable for quick service deployment and disassembly in edge computing.

In [152], the authors propose a Raspberry Pi cluster based architecture for a container-based edge cloud PaaS. The cluster consists of multiple host nodes, and each host node in turn runs several containers as service providers.

Petrolo et al. [148] propose a lightweight virtualization technology in the design of an IoT gateway. They adopt the Docker container to provide dense service deployment at the gateway level. It supports dynamic services and greatly improves gateway performance. Similarly, Morabito et al. [149] also propose a gateway which can be employed in the edge architecture flexibly and effectively. The gateway employs the container to customize an IoT platform to provide virtualization services such as: (1) the capability to manage different devices; (2) SDN (software-defined networking) support; and (3) the capability to allocate and manage data and resources.

The above studies use virtualization or container technology in edge computing scenarios. While employing isolation technology (virtualization and containers) in edge computing is possible, the negative impact on the devices' performance cannot be ignored. Usually the edge OS is packaged into a container as a single binary which is about 50 to 200 megabytes in size. Bundled into the single binary is everything needed to run containers, including the container runtime and important host utilities such as iptables, socat, and du.

4.6. File system

Data storage is a fundamental element of edge computing. However, an abstraction layer is necessary to make storage transparent to the application and as portable as possible, without hardware dependencies. Since traditional file systems can be unsuitable for edge computing, novel file systems are required for data distribution, version control, data sharing, data offloading, and data management in edge computing.

As mentioned before, Ma et al. [150] propose an efficient service switching system based on Docker's file system. The unique feature of Docker is its layered file system images, and the file system is implemented by Another Union File System (AUFS) [153]. AUFS can map all the directories to a single virtual file system in order to simplify file system management and reduce the space usage of the images. Confais et al. [154] propose a novel peer-to-peer distributed file system for fog/edge devices. The file system in [154] can deliver BitTorrent-based object storage services across the infrastructures by extending the InterPlanetary File System (IPFS) [155] instead of traditional file systems. IPFS is a peer-to-peer distributed file system that can connect all the computing devices with the same file system. IPFS replaces the IP-based address with a content-based address. Since the user searches for the content instead of the address, only the content hash is verified, with no verification of the user. Hence, this file management is faster, safer, and more robust. Since IPFS can be used as the file system for a file sharing system, IPFS can meet the requirements of edge computing.

5. Energy-aware edge middleware

Middleware provides an abstraction of the underlying resources to upper service applications. In order to provide energy efficient resource transparency and interpretation, edge middleware must have an elaborate system design for networking, storage, and computing. For example, it is usually assumed that in an edge computing scenario, IoT services depend on always-connected networking. However, the always-connected approach can generate large amounts of data, not all of which is necessary at the core. Since data has to be stored locally, optimized, and sent only when the network is available, the middleware must provide priority-based policies depending on application or user needs. Specifically, the edge middleware must be able to better understand data, organize it, and send it to the cloud data center for consolidation.

In this section, we focus on the topic of energy-aware edge middleware. The related work on energy awareness in edge middleware is listed in Fig. 6. We categorize the works into two categories: message interface and middleware. According to Fig. 6, we provide Table 6 as a reference to the existing work.
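The store-locally, send-when-connected behavior that Section 5 asks of energy-aware middleware can be sketched as a minimal priority buffer. The class name, priority levels, and per-flush budget below are our own illustrative assumptions, not the API of any surveyed system:

```python
import heapq

class EdgeBuffer:
    """Toy store-and-forward buffer for an edge middleware: data is kept
    locally and flushed toward the cloud in priority order, only when the
    uplink is available (hypothetical sketch, not a surveyed system)."""

    def __init__(self):
        self._heap = []  # (priority, seq, payload); lower value = more urgent
        self._seq = 0    # tie-breaker: FIFO within the same priority level

    def store(self, payload, priority):
        heapq.heappush(self._heap, (priority, self._seq, payload))
        self._seq += 1

    def flush(self, network_up, budget):
        """Send at most `budget` buffered items, highest priority first,
        and only when the network is up; otherwise keep everything local."""
        sent = []
        while network_up and self._heap and len(sent) < budget:
            _, _, payload = heapq.heappop(self._heap)
            sent.append(payload)
        return sent

buf = EdgeBuffer()
buf.store("bulk-log", priority=2)
buf.store("alarm", priority=0)
buf.store("telemetry", priority=1)
print(buf.flush(network_up=False, budget=10))  # [] - offline, data stays local
print(buf.flush(network_up=True, budget=2))    # ['alarm', 'telemetry']
```

The flush budget stands in for the energy or bandwidth cap an application- or user-level policy would impose; low-priority bulk data simply waits for a later connectivity window.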


Table 6
State-of-the-art on energy aware edge middleware.

  Message interface
    - Firework [140]: a cloud-edge collaborative data processing and sharing system.
    - OpenVDAP [116]: combines Firework with various vehicle devices for unified management of computing devices.
  Middleware
    - IoT middleware [158–167] (events-based, agents-based, tuples-based space, services-oriented, application-based, VM-based): with the middleware, upper-level applications can manipulate a variety of sensors and devices in a transparent manner, without knowledge of the underlying hardware.
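The message-interface row above (e.g., Firework [140]) boils down to nodes exchanging data through a shared messaging abstraction. A minimal publish/subscribe core might look as follows; this is an illustrative sketch only, and Firework itself layers data views, privacy guarantees, and distributed deployment on top of such a primitive:

```python
from collections import defaultdict

class MessageBus:
    """Minimal in-process publish/subscribe message interface between
    edge components (illustrative only; all names are made up)."""

    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, message):
        # Deliver to every subscriber of the topic; topics without
        # subscribers silently drop the message.
        for callback in self._subs[topic]:
            callback(message)

bus = MessageBus()
received = []
bus.subscribe("camera/frames", received.append)
bus.publish("camera/frames", {"frame_id": 1})
bus.publish("lidar/scans", {"scan_id": 7})  # no subscriber: dropped
print(received)  # [{'frame_id': 1}]
```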

Fig. 6. Energy awareness in edge middleware.

5.1. Message interface supporting energy awareness

Message communication management of devices is an important issue in edge computing. Existing approaches, such as Firework and OpenVDAP, provide message interfaces to facilitate data sharing between devices. However, the designs of these approaches are complicated and energy-consuming, which can lead to potential failures on energy-constrained devices.

Firework [140] is a cloud-edge collaborative data processing and sharing system. A Firework node enables interactions with other nodes to form a computation flow, even if different nodes perform separate tasks. As such, a Firework node can execute a series of computations along the data transmission path. In order to achieve independence from the underlying platform, Firework is designed as a three-tier structure, which consists of a service management module, a work management module, and an actuator management module. The service management module is mainly used for the management of the data view. The work management module is mainly responsible for starting the corresponding services. The actuator management module is an adaptor, which allows tasks to be implemented regardless of the underlying architecture; it can also enable data interaction with the work management module. Firework provides a simple programming interface and allows users to focus on development instead of the underlying implementation.

OpenVDAP [116] combines Firework with various vehicle devices for a unified management of computing devices and collected data. OpenVDAP uses the camera as the data producer to provide video data for other devices. The vehicle's built-in computing unit provides the computing and communication resources, while the mobile devices of drivers and passengers can also be connected to the vehicle platform in order to provide more resources. For the same service, OpenVDAP can provide multiple workflow paths with different resource consumption. Based on the current system status and service requirements, OpenVDAP selects the most suitable workflow path for execution.

Firework is a framework that can provide service deployment and management at the edge. IPFS is a file system that can support data sharing. In our future work, based on Firework and IPFS, we plan to propose a distributed edge computing platform to achieve efficient data storage, data sharing, data management, and collaborative computing at the edge.

5.2. Cloud adaptivity for more energy saving

Middleware provides an environment to run upper-level applications and assists users in developing complex applications [168]. With the middleware, upper-level applications can use a variety of sensors and devices in a transparent manner without information about the underlying hardware. Since edge devices are heterogeneous and unable to interact with each other, middleware is required to provide interoperability between heterogeneous devices and resources in edge computing.

Ngu et al. [158] review the existing IoT middleware and point out some research challenges and solutions. Razzaque et al. [159] classify IoT middleware into the following six categories: events-based, agent-based, tuples-based space, services-oriented, application-based, and VM-based. Some examples of middleware for IoT and edge computing include SensorBus [160], Sensorware [161], SensorWap [162], Hydra [149], and E2M [164]. Li et al. [165] propose a middleware layer to manage modern multi-core smartphones. In order to balance energy consumption, performance, and QoS, the middleware layer can schedule the optimal number of online cores and can dynamically adjust the best frequency for each core.

For video monitoring, Luo et al. [166] develop EdgeBox with computing and communication capabilities. EdgeBox can be used as middleware between the camera and the data center to pre-process the collected data.

Orsini et al. [167] propose CloudAware, a context-adaptive middleware for mobile edge computing. CloudAware can adapt to the context automatically by integrating mobile middleware distribution functionalities with context-aware approaches. As a flexible middleware for MEC, CloudAware provides distribution transparency and programming abstraction without modifying the underlying OS.

The research work performed in this field is consequently limited. However, certain research efforts have contributed to overcoming cloud adaptivity issues in order to make edge computing more energy efficient.

6. Energy aware edge services & applications

In this section, we focus on the topic of energy-aware edge services and applications. The related work on energy awareness in edge services and applications is listed in Fig. 7. According to Fig. 7, we provide Table 7 as a reference to the existing work.
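A decision of the kind OpenVDAP makes above — picking, per service, the workflow path that best fits the current status and requirements — can be viewed as minimizing energy subject to a latency constraint. The candidate paths, field names, and numbers below are invented for illustration:

```python
def pick_path(paths, latency_budget_ms):
    """Among candidate workflow paths (each annotated with an estimated
    latency and energy cost), pick the lowest-energy path that still meets
    the latency budget. Returns None if no path is feasible."""
    feasible = [p for p in paths if p["latency_ms"] <= latency_budget_ms]
    if not feasible:
        return None  # caller falls back, e.g., offloads to the cloud
    return min(feasible, key=lambda p: p["energy_mj"])

# Hypothetical paths for one in-vehicle service (made-up estimates).
paths = [
    {"name": "built-in-unit", "latency_ms": 40,  "energy_mj": 900},
    {"name": "driver-phone",  "latency_ms": 90,  "energy_mj": 350},
    {"name": "cloud",         "latency_ms": 220, "energy_mj": 120},
]
print(pick_path(paths, latency_budget_ms=100)["name"])  # driver-phone
```

Relaxing the latency budget makes the cheaper cloud path feasible, which is exactly the status-dependent trade-off such systems exploit.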


Table 7
State-of-the-art on energy aware edge services and applications.

  Energy-aware services and applications
    - Application specific [169–173]: application-specific approaches.
    - Services placement [174]: provide an optimal mapping between IoT applications and computing resources.
    - Data placement [175,176]: provide an optimal data placement strategy with minimal cost.
  Energy-aware services and applications based on machine learning
    - Methods for data centers, supervised [179–182]: forecasting, consolidating the resources, and shutting down servers.
    - Methods for data centers, unsupervised [177,178]: putting spare servers into sleep.
    - Methods for edge computing [183–209]: enabling machine learning applications at the edge.
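The data placement entry in the table above ([175,176]) treats placement as a cost-minimizing assignment of data objects to capacity-limited storage nodes. A brute-force toy version of that formulation — with invented sizes, capacities, and costs, standing in for the MILP and heuristic solvers the surveyed works actually use — looks like this:

```python
from itertools import product

def best_placement(objects, nodes, cost):
    """Exhaustively search the assignment of data objects to storage nodes
    that minimizes total cost, subject to per-node capacity. Illustrative
    only: real solvers (MILP, geographical heuristics) scale far better."""
    best, best_cost = None, float("inf")
    for assign in product(nodes, repeat=len(objects)):
        load = {n: 0 for n in nodes}
        for obj, node in zip(objects, assign):
            load[node] += objects[obj]
        if any(load[n] > nodes[n] for n in nodes):
            continue  # violates a capacity constraint
        total = sum(cost[(obj, node)] for obj, node in zip(objects, assign))
        if total < best_cost:
            best, best_cost = dict(zip(objects, assign)), total
    return best, best_cost

objects = {"o1": 2, "o2": 3}        # object -> size
nodes = {"edge": 3, "cloud": 10}    # node -> capacity
cost = {("o1", "edge"): 1, ("o1", "cloud"): 4,
        ("o2", "edge"): 1, ("o2", "cloud"): 5}
print(best_placement(objects, nodes, cost))  # ({'o1': 'cloud', 'o2': 'edge'}, 5)
```

The edge node cannot hold both objects, so the search places the larger one locally and pays the transfer cost for the other — the same capacity-versus-cost tension the GAP formulation captures.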

Fig. 7. Energy awareness in edge services and applications.

6.1. Application specific energy aware data analytics

Many applications run and are hosted in edge computing, and different types of applications have different requirements for low latency, high performance, and data privacy. No uniform approach is available to address all these issues in edge computing. Hence, an application-specific approach is more suitable in edge computing. For example, for video analytics, latency is more important than energy; although more energy will be consumed, it is still preferable to apply machine learning (ML) on edge nodes to pre-classify the video. Convolutional neural networks (CNNs) [169,170] are widely used in image recognition. While a CNN is efficient for identification, detection, partition, and retrieval, it requires a long time to train, which leads to a high energy consumption. As such, a lightweight ML framework that achieves low latency, high performance, and energy efficiency is desired.

Cao et al. [171] propose an edge computing platform that analyzes the transmitted data flow and monitors abnormal patterns in real-time. The edge devices in this platform are deployed on a transit bus. The data generated by these edge devices are analyzed by a Python program and are transported directly along with the bus to save bandwidth.

A key challenge for the smart city is real-time data analytics. Stefan et al. [172] propose a collaborative cloud and edge data analytics platform, which applies the concept of serverless computing to the network edge. In this platform, delay-sensitive tasks are processed at the edge for real-time response, while computing-intensive tasks are sent to the cloud for computing and storage.

Satyanarayanan et al. [173] propose GigaSight, a hybrid cloud architecture. GigaSight is a distributed cloudlet-cloud computing infrastructure, where a cloudlet is a VM-based edge server that is typically deployed at the edge of the network. In this architecture, all the data are sent to the cloudlets in real-time; only results and metadata are further transmitted to the cloud.

The variability of edge devices, requirements, and applications makes it difficult to propose a general energy aware data analytics strategy for edge computing, and the current works on energy aware data analytics are application-specific. Since there is a demand for generic data analytics methods, we look forward to solutions to this issue in the future.

6.2. Service placement

Due to the geo-distribution of edge devices, edge computing becomes an ideal energy efficient platform to leverage distributed green energy for energy efficient computing. However, the requirements of data processing and resources vary across time, space, and devices, so a careful service placement, task allocation, and resource scheduling scheme is demanded.

Skarlat et al. [174] present the Fog service placement problem (FSPP), which allows virtualized fog resources to run IoT services. Meanwhile, QoS limits are also taken into account in the FSPP to provide an optimal mapping between IoT applications and computing resources.

iFogStor [175] provides an optimal data placement strategy. The authors in [175] formulate the data placement problem as a Generalized Assignment Problem (GAP), and then propose two solutions. One is an integer programming solution using the CPLEX MILP (mixed-integer linear programming) solver, while the other is a heuristic solution based on geographical partitioning, which can reduce the time complexity.

Gu et al. [176] jointly consider task allocation, service migration, and energy scheduling to reduce energy consumption. They propose a system model which formulates the VM migration and energy scheduling problem as a MILP with the objective of minimizing the energy cost. They then design a low-complexity relaxation-based algorithm to solve this problem. The algorithm can schedule the service placement and resource allocation according to the fluctuation of tasks and green energy. The simulation results show that the algorithm can effectively reduce energy consumption and is close to the optimal solution.

As a matter of fact, IoT devices (e.g., gateways, sensors, or embedded end nodes) offer different computation, storage, and networking resources, and they are geographically distributed. How to exploit the ubiquitous presence of such devices at the edge to execute IoT services successfully with minimal energy overhead is a major research question in edge computing.

6.3. Energy efficient edge computing based on machine learning

Modern data centers are complicated interacting systems of multiple mechanical, electrical, and control subsystems, and it is a challenge to understand and optimize their energy efficiency [106,210,211]. To solve the problem, there are two typical solutions: workload consolidation and turning off spare servers. Because of its capabilities, machine learning plays an important role in various resource management problems in data centers. In recent


years, many related works have optimized energy efficiency in data centers with machine learning based solutions. In this section, we present a comprehensive introduction to the machine learning methods which aim to provide solutions for energy efficiency in the data center.

There are many factors affecting the energy consumption of data centers, such as power distribution, the heat generated by data center operations and the resulting cooling overhead, and computational load management [212].

Most of the current works on higher data center energy efficiency focus on allocating the computational load so that a minimum number of machines can meet the application requirements. Forecasting the resource demand is a critical issue in data center management, and an effective prediction technology can bring optimal allocation strategies that minimize energy consumption.

Vasic et al. [179] propose DejaVu, a cloud resource management system which leverages a supervised machine learning model to classify the workload for load consolidation. DejaVu spends about a week in the learning phase to determine the workloads and their corresponding resource allocations. Then, in actual use, the system classifies each workload automatically to check whether it matches a workload encountered before, and it either reuses the previous allocation or reconfigures itself according to the classification result. This mechanism of using machine learning to adapt to new workloads allows for load consolidation, which can lead to less energy consumption.

Besides, Duy et al. [180] use neural network-based predictors to perform load prediction in the cloud. With the prediction, they can schedule the workload and shut down the unused servers to save energy. The experimental results show that the scheduling algorithms can save more than 40% energy.

Similarly, Berral et al. [181,182] adopt supervised machine learning methods to predict the resource consumption of different tasks and SLA-related factors (such as the response time for a given workload). The system can then integrate the prediction results to perform power-aware task scheduling and consolidation. The authors first employ linear regression to predict the CPU usage of each host, and then employ a more sophisticated machine learning algorithm called M5P to predict power consumption. The experiments were carried out using actual workloads and demonstrated that the method can save substantial power while the performance is only slightly degraded.

All the works introduced above use supervised machine learning models; however, Dabbagh et al. [177] developed another framework employing an unsupervised learning model for predicting future virtual machine requests and their associated resource demands. The predictor uses k-means for clustering and a stochastic Wiener filter for workload prediction. The main idea of the framework is that it can put unneeded machines into sleep mode by analyzing the prediction results. The authors evaluated their framework using traces collected by Google over 29 days, and the results show that it can achieve near-optimal energy efficiency.

In recent years, heterogeneous data centers have become popular, and the previous works for homogeneous data centers are unsuitable in such scenarios. Fortunately, many researchers have proposed new solutions applicable to heterogeneous scenarios.

AlQerm et al. [178] develop a green resource allocation scheme using Q-learning in heterogeneous cloud radio access networks (H-CRAN). In this scenario, low-power remote radio heads (RRHs) are employed to provide high data rates for users with high QoS, while high-power macro base stations (BSs) are exploited for low-QoS user support and coverage maintenance. However, there are two major challenges in resource allocation in H-CRAN: one is the inter-layer interference between macro BSs and RRHs, and the other is how to maximize the energy efficiency. Therefore, the authors propose a centralized resource allocation mechanism leveraging Q-learning to ensure interference mitigation and maximize the network energy efficiency while maintaining the QoS requirements of all users. First, they propose a sophisticated frequency partitioning method to account for the inter-layer interference and support better online learning exploration. Then, they exploit online learning to perform a centralized joint resource block (RB) and power allocation scheme for the RRHs and their associated UEs, which relies on a single controller integrated into the BBUs. The controller can collect the network state information that is important for selecting the most appropriate operations and, consequently, enhance energy efficiency. Compared with standard methods that allocate fixed power to all RBs, this scheme contributes considerably to high energy efficiency.

While machine learning models are currently trained on customized data-center infrastructure and focus on optimizing task scheduling, resource management, and energy efficiency in data centers [213], it is of great significance to bring machine learning inference to the edge, even though edge devices are considered ill-suited to running large-scale machine learning because of their limited computation, storage, bandwidth, power budget, and so on. Therefore, many scholars are working to make machine learning applications available at the edge. By doing so, the user QoS can be improved and the energy consumption can be reduced.

For example, to address the challenge that large-scale machine learning algorithms requiring extensive computation and storage capability are unsuitable for IoT applications, Jiang et al. [183] provide an energy-efficient binary classification kernel for ML applications which is fast, energy harvesting, and accurate. It utilizes an approach to govern process, voltage, and temperature changes, and it employs a three-modal architecture with a hierarchical tree structure to further reduce power consumption.

In industry, Facebook has been proactive in developing tools and optimizations to run inference on the edge. Wu et al. [184] present some applicable directions, where the first option is to compile the machine learning applications to the object code of the platform specification using TVM [185], Glow [186], or XLA [187]. The second option is to adopt the specific vendor API directly from the operating system vendor, such as iOS CoreML [188]. Another approach is to compile the code and optimize the backend by deploying a generic interpreter such as Caffe2 or TF/TFLite. Furthermore, the authors present two case studies from Facebook to prove the efficiency of the machine learning approaches on the edge.

The first case is mobile inference used for image and video processing. They cut down the size of the machine learning model aggressively and optimize the performance before deploying the model to the mobile devices, applying features such as compact image representation and weight quantization and trimming with Caffe2; consequently, the model can be transferred to the edge quickly. Besides, to take advantage of the limited computational resources, the authors integrate two libraries into Caffe2, the Neural Networks PACKage (NNPACK) [189] and Quantized NNPACK (QNNPACK) [190], which provide customized optimizations tailored for mobile CPUs. For example, NNPACK [189] performs computations in 32-bit floating-point precision and NCHW layout, and achieves asymptotically fast convolution algorithms based on the fast Fourier transform and the Winograd transform. It can decrease the complexity of convolution computation with large kernels by several times.

On the other hand, QNNPACK [190] is designed to complement NNPACK for low-intensity convolutional networks; it performs computations in 8-bit fixed-point precision and NHWC layout. QNNPACK effectively eliminates the overhead of the im2col transformation and other memory layout conversions. With the choice of two mobile CPU backends, Caffe2 delivers a highly efficient implementation across a substantial range of mobile devices and real image and video processing use-cases.

Another case is mobile inference for the Oculus VR platform. The platform explores a variety of advanced DNN models which are programmed in PyTorch 1.0, and the weights are quantized using PyTorch 1.0's int8 feature for mobile inference. The DNN models adopt PyTorch 1.0's CPU backend and Facebook's BoltNN DSP backend for inference. AR/VR wearable devices must be designed with

long-term battery life in mind, consuming as little power as possible while staying within ergonomically acceptable platform temperatures. These power and thermal constraints, in addition to performance acceleration, determine where DNN models should be executed. With Facebook's DSP machine learning models, AR/VR devices can lower power consumption and operating temperature while delivering higher performance.

Facebook has demonstrated the possibility of deploying machine learning at the edge in order to achieve lower power consumption, lower latency, and higher performance. The principles it proposed give us a lesson in how to better design and apply machine learning at the edge.

In addition to the energy required to process the meaningful tasks, existing works [191,192] have shown that the tail energy of the 3G/4G network interfaces of mobile devices can also lower energy efficiency. The tail energy, which is caused by the interface remaining in a high-power state for a period of time after a data packet is transmitted, accounts for about 60% of the power consumption of data transmission. To reduce this unnecessary power consumption, Tang et al. [193] exploit machine learning and participatory sensing techniques to design a client–server architecture. The server is used for the training phase, and the prediction results are delivered to the mobile devices to make transmission decisions. In this architecture, the central server collects the transmission history records generated by many mobile devices and runs the machine learning algorithms to create predictors. Once the prediction model is built, each client downloads the newest predictor, updates its local record, and makes a transmission decision for each new request. With this architecture, the computing overhead on the client can be reduced significantly.

Similarly, Kumar et al. [194] propose Bonsai, a resource-efficient machine learning framework that fits in 2 KB of RAM on IoT devices. Firstly, Bonsai learns a shallow, single, and sparse tree with a small model size but a powerful capability to predict accurately. Secondly, both leaf and internal nodes in the tree perform non-linear prediction; Bonsai's path-based prediction can leverage parameter sharing along the path to further reduce the model size. Thirdly, Bonsai uses sparse matrices and can be deployed in a few KB of flash by projecting all the data points into the low-dimensional space of the learning tree. In addition, the sparse projection is implemented in a streaming manner, allowing Bonsai to handle IoT applications even if a single feature vector does not fit in 2 KB of RAM. Last, not all nodes learn bonsai tree nodes in a greedy way, but learn

data transmitted to the edge. Li et al. [198] introduce deep learning for IoT into the edge computing environment. With the help of acceleration engines [199], such as DeepX and DeepEar, deep learning can be deployed on IoT devices to extract features of interest from a large amount of data automatically.

In addition, many compression approaches for IoT devices have been proposed to reduce the data transmitted to the edge. Harb et al. [200] propose a data filtering approach using the Pearson coefficient metric, dividing the dataset into two equal subsets and aggregating the data according to the correlation between the two subsets. Azar et al. [201,202] take advantage of the temporal correlation in the collected data to propose their own compression approaches. These technologies perform well in data reduction and further save energy. However, the above compression methods operate on non-stationary multisensory data with a low compression ratio. Azar et al. [203] propose an energy efficient data reduction scheme for IoT applications based on an error-bounded lossy compressor. It is tested on a Polar M600 wearable, and the results show great data reduction and energy conservation.

In order to achieve both energy efficiency and bandwidth efficiency in edge computing networks, Wang et al. [204] propose a model that combines bandwidth efficiency and energy efficiency by employing game theory, and they design a distributed energy-harvesting sharing mechanism. It utilizes reinforcement learning techniques to learn user-equipment association and orthogonal frequency division multiple access (OFDMA) scheduling.

Similarly, Zeng et al. [205] take energy into consideration while employing machine learning at the edge. They propose two energy efficient RRM strategies for joint bandwidth allocation and scheduling with federated edge learning (FEEL). In learning experiments, energy consumption can be decreased significantly thanks to the proposed strategies.

Besides, Liu et al. [206] and He et al. [207] propose a new system model with low computational complexity for machine learning which can be employed in distributed antenna systems (DASs). The k-nearest neighbor (k-NN) algorithm, with a database built from the traditional sub-gradient iterative method, is used to obtain the power allocation scheme of the DAS. The simulation results show that the new model can achieve a power allocation scheme similar to that of traditional methods while consuming less energy.

In WSNs for IoT, intelligent routing plays an important role in improving the QoS of the network, and there is an urgent need for machine learning based routing and energy aware network performance optimization.

Thangaramya et al. [208] propose the neuro-fuzzy rule based cluster
together with the sparse projection matrix to optimize the prediction formation (FBCFP) routing protocol for energy efficient routing for
accuracy while assigning memory budget optimization to each node. WSNs. It incorporates the current energy level of the cluster head
This contribution enables the Bonsai to predict in milliseconds even on (CH), the distance between CH and sink nodes, the area changes
between nodes and CH in the cluster due to mobility and the degree
slow microcontrollers, adapt to a few KB of flash and prolong battery
of CH to perform network learning. Then the wireless network is
life beyond all other algorithms.
trained using convolutional neural networks and fuzzy rules to adjust
Although the CPU and GPU based machine learning approaches
weights. Finally, fuzzy inference is conducted to perform powerful
have been improved a lot, the overwhelming computation and memory
cluster formation and effective cluster-based routing.
consumption limit applicability of real-world deployments especially
Hu et al. [209] propose a machine-learning-based adaptive routing
the IoT implementations. Zhang et al. [195] decide to leverage the
protocol for energy-efficient and lifetime-extended underwater sensor
FPGA to accelerate the machine learning. The design of the opti-
networks (UWSN) QELAR. QELAR can be adjusted easily to balance the
mization of machine learning based on the FPGA has three major latency and energy for longer sensor lifetime.
phases First, they introduce a flexible HLS IP to construct the network Currently, machine learning becomes an essential tool for data
for a convolution layer. Then, they develop a model to estimate the inference and decision making in IoT and machine learning based
performance and resource accurately and effectively, and they use the applications can be deployed at different processing layers in edge
model to design an entire space exploration framework to among DNN computing environment. In this section, we review the application of
layers. Next, with the configuration of the HLS IPs, they can make machine learning in the data centers and the IoT devices. There is much
an efficient resource allocation. In the end, they set the parameters machine learning-based work focusing on data processing, resource
for a smaller network. The experiment results show that this FPGA- management, transmission optimization, energy efficient routing, etc.
based machine learning is promising in IoT scenarios to capture high However, there still exist some challenges for machine learning based
performance and low energy consumption. energy reduction in edge computing. For example, lower power con-
On the other hand, data transmission from the IoT node to the sumption and lower storage requirement and less computing power are
edge is an energy-consuming task in IoT nodes [196,197]. So, an required to deploy machine learning based energy aware computing on
important step towards energy efficiency in IoT nodes is reducing the mobile devices and edge nodes.
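As a concrete illustration of the error-bounded lossy reduction idea discussed above, the following minimal dead-band filter transmits a sample only when it deviates from the last transmitted value by more than a bound ε, so the reconstruction error never exceeds ε. This is our own sketch of the general technique, not the actual compressor of [203].

```python
def deadband_reduce(samples, epsilon):
    """Error-bounded lossy reduction: keep a sample only when it deviates
    from the last kept value by more than epsilon, so the reconstruction
    error is bounded by epsilon."""
    if not samples:
        return []
    kept = [(0, samples[0])]           # always keep the first sample
    last = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - last) > epsilon:
            kept.append((i, x))
            last = x
    return kept

def reconstruct(kept, n):
    """Rebuild an n-sample stream by holding the last kept value."""
    index = dict(kept)
    out, last = [], None
    for i in range(n):
        last = index.get(i, last)
        out.append(last)
    return out
```

For a slowly varying sensor stream, only a small fraction of samples is transmitted, which directly reduces radio-on time and hence transmission energy, at the cost of a bounded reconstruction error.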

C. Jiang, T. Fan, H. Gao et al. Computer Communications 151 (2020) 556–580

7. Energy aware computing offloading

Computing offloading from the edge to the cloud is a promising technique that saves battery while improving the computing capabilities of edge devices [214]. However, offloading applications to the cloud is not always energy efficient. Since the energy efficiencies of mobile devices, edge servers, and cloud servers differ, the energy consumption varies with the type of device/server where the tasks are executed. An inefficient offloading strategy can consume more energy and produce high electricity bills along with large carbon footprints [215].

Mao et al. [216] propose a MEC system with energy harvesting (EH) that minimizes execution energy consumption through an effective dynamic computation offloading algorithm. Bi et al. [217] study the binary offloading scheme in a wireless MEC network consisting of one server and multiple UEs, and adopt a binary strategy to maximize the weighted sum computation rate.

These previous works aim either at maximizing the total throughput or at minimizing the total energy consumption. However, an optimal offloading decision should guarantee performance while minimizing energy consumption, so the following issues should be addressed: (1) whether a task can be offloaded to the cloud to be processed; (2) which tasks can be offloaded to the cloud when multiple users request the offloading of their own tasks; (3) which time slot is suitable for computing offloading.

In addition to offloading from the edge to the cloud, computation can also be offloaded from the cloud to the edge. In cloud data centers, complex tasks can be partitioned and assigned to edge nodes for faster execution. Computation offloading from the cloud to edge nodes can utilize the resources of edge devices and reduce the overall latency along with the energy consumption.

We illustrate the overall architecture of computing offloading in Fig. 8. From Fig. 8, we can conclude that each offloading strategy should be designed according to the respective conditions, such as time, input task, and devices. Table 8 lists the terms used in this paper.

Fig. 8. Computing offloading architecture.

7.1. Computation partitioning before offloading

To decide which task can be offloaded, You et al. [218] propose a system model that uses a central controller to select the tasks to be offloaded. They model the MEC system with a single edge server, which can provide services for multiple users and allocate resources based on time-division multiple access (TDMA) and orthogonal frequency-division multiple access (OFDMA). The base station (BS) can obtain information about the multi-user channels, the local power consumption, and the data sizes. The different workloads on the mobile devices should be computed under the same latency constraints and then sent to the BS for the offloading decision. With minimal energy consumption, the BS can decide both the users and the data to be offloaded.

Lyu et al. [219] use a QoE-based utility function to measure the energy consumption and completion time of computation offloading. They propose a utility ratio relative to local execution and maximize the utility function by optimizing computing and network resources using quasiconvexity and convex optimization. The offloading decision is made based on the submodular set function optimization method.

Zhang et al. [220] propose to classify and assign proper priorities to the equipment in a 5G heterogeneous network to achieve three-level energy-efficient MEC computation offloading (EECO).

Fig. 9 illustrates the general workflow of deciding which tasks to offload. In Fig. 9, when an edge device sends an offloading request, the utility function assigns a priority value according to the type of the task, the computing demand, and the required resources. Then, the tasks with the highest priority are offloaded.

Fig. 9. Priority assignment for decision making on what to be offloaded.

Sun et al. [221] propose a new performance metric in MEC networks, namely the computation efficiency, which considers both maximizing the data throughput and minimizing the energy consumption. The computation efficiency is defined as the number of computed bits divided by the corresponding energy consumption, and they formulate the problem as a computation efficiency maximization problem. When the data size is relatively small, it is more suitable to process locally to obtain high computation efficiency with lower latency, because offloading to cloud servers may take longer and consume more energy than local processing.

In vehicular networks, offloading energy-hungry workloads from energy-constrained vehicles to vehicular edge computing nodes can provide timely responses and achieve energy savings. Zhou et al. [222] propose a computation offloading approach based on the joint optimization of the dynamic energy consumption and latency in local computing, data transmission, workload execution, and handover. They explore the consensus alternating direction method of multipliers (ADMM) to solve the optimization problem.

When making decisions on computation offloading, latency and energy consumption are the key considerations. From the perspective of edge devices, if a task can be offloaded to the edge server, energy is saved on the local edge devices themselves, except for the energy consumed during task partitioning and transmission. However, computation offloading to edge servers may increase the latency caused by the data transmission procedure. Tang et al. [223] propose an offloading strategy for mobile users based on a mixed overhead model of energy consumption and processing time.

In order to achieve low latency and low energy consumption, Wang et al. [224] propose a collaborative task offloading model built on the features of the tasks and the edge servers. First, they divide the tasks into several subtasks (k) to be offloaded to different servers using the Hungarian algorithm. Second, for each subtask, they model the latency (T_total) as the sum of the transmission latency, computation latency, and queuing latency, and the energy consumption (E_total) as the sum of the energy consumed by transmission and by processing in the local server. They define the maximal latency the devices can tolerate as T_tolerate. The optimization goal can then be defined as follows:

min Σ_k E_total    (2)

s.t. Σ_k T_total < T_tolerate    (3)
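The optimization in Eqs. (2) and (3) can be made concrete with a small sketch. For illustration we replace the Hungarian step of [224] with a brute-force search over one-to-one subtask–server assignments; the energy and latency matrices below are hypothetical inputs, not values from the paper.

```python
from itertools import permutations

def assign_subtasks(energy, latency, t_tolerate):
    """Brute-force one-to-one assignment of k subtasks to k servers that
    minimizes total energy (Eq. (2)) subject to the summed-latency budget
    (Eq. (3)). A stand-in for the Hungarian step of [224]."""
    k = len(energy)                      # energy[i][j]: subtask i on server j
    best_e, best_assign = None, None
    for perm in permutations(range(k)):  # perm[i] = server chosen for subtask i
        t = sum(latency[i][perm[i]] for i in range(k))
        if t >= t_tolerate:              # violates sum T_total < T_tolerate
            continue
        e = sum(energy[i][perm[i]] for i in range(k))
        if best_e is None or e < best_e:
            best_e, best_assign = e, perm
    return best_assign, best_e
```

The brute force is exponential in k and only serves to make the constraint structure explicit; the Hungarian algorithm solves the unconstrained assignment part in O(k³).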

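The computation-efficiency comparison of [221] can be sketched with a toy energy model: efficiency is computed bits per joule, and a fixed per-transmission cost (cf. the tail energy discussed earlier) makes local processing win for small payloads. All coefficients below are illustrative assumptions, not measured values.

```python
def computation_efficiency(bits, energy_joules):
    """Computation efficiency in the sense of [221]: computed bits per joule."""
    return bits / energy_joules

def local_energy(bits, local_j_per_bit):
    # Energy to process the whole payload on the device itself.
    return bits * local_j_per_bit

def offload_energy(bits, radio_j_per_bit, remote_j_per_bit, tail_j):
    # tail_j is a fixed cost per transmission, modeling the radio
    # lingering in its high-power state after the packet is sent.
    return tail_j + bits * (radio_j_per_bit + remote_j_per_bit)
```

With illustrative coefficients (5 µJ/bit locally, 2 µJ/bit for radio plus remote processing, and a 1 J tail), a 1 kbit payload is more efficient to process locally, while a 10 Mbit payload is more efficient to offload.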

Table 8
Terms and notations.

Terms                  Explanation
Computing offloading   Migrating a computing task from the edge to the edge cloud or the center cloud, or from the cloud to the edge devices.
Devices                Edge devices.
Computing task         The task offloaded from the edge to the cloud.
L_k                    Device k needs to complete its computation task within latency L_k.
Ta_k                   The data-arrival time instant.
Td_k                   The computation deadline.

Fig. 10. The offloading timeline according to the arriving time of the tasks with the same latency.
Fig. 11. The offloading timeline of tasks according to their latencies.
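Using the notation of Table 8, the ordering illustrated in Figs. 10 and 11 — tasks with smaller latency first, ties broken by earlier arrival time — can be sketched as follows (the Task fields are our own illustrative names):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    latency: float   # L_k: latency within which device k must finish
    arrival: float   # Ta_k: data-arrival time instant
    deadline: float  # Td_k: computation deadline

def offload_order(tasks):
    """Offload tasks with smaller latency first; among tasks with the
    same latency, the one that arrived earlier goes first."""
    return sorted(tasks, key=lambda t: (t.latency, t.arrival))
```

Python's sort is stable, so the (latency, arrival) key realizes exactly the two-level rule shown in the figures.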

The experiment results show that the algorithms can trade off between latency and energy.

For edge computing, many researchers focus on computation offloading. However, there are many issues to be considered in task offloading, such as latency, energy consumption, resource distribution, and the profit maximization of the service provider.

7.2. Gaming and cooperation between edge and the cloud

Economic models and game theoretic approaches have been proposed for resource management and scheduling in computing and communication systems [225,226]. In a gaming scenario, each user independently adjusts its individual offloading decision so that the whole system reaches the pure Nash equilibrium. However, it is difficult to reach the Nash equilibrium because of the duration of the jobs, user-specific channel bandwidth, and the competition on the shared communication channel. For example, in computation offloading for mobile-edge devices, task arrival time and acceptable latency should be calculated in advance, and both task latency and task arrival time are considered for decision-making: the task with less latency is offloaded first, and if tasks have the same latency, the task with the earlier arrival time is offloaded first. Figs. 10 and 11 illustrate the timeline of offloading according to task arrival time and task latency.

Since there are different workloads in both edge computing systems and cloud computing systems, different types of workload should be offloaded to different servers [107,227]. For instance, energy-intensive tasks should be offloaded to cloud servers, computing-intensive (CI) tasks should be offloaded to edge servers, and data-intensive (DI) tasks should be offloaded to servers that are close to the data source. Given this three-layer hierarchical offloading model, a thorough understanding of server energy proportionality is crucial to obtain a proper workload placement for energy saving in the hybrid cloud and edge computing architecture.

Fig. 12 illustrates workload-type-aware computing offloading considering latency and the task type. When a task is offloaded from the cloud to the edge, the most capable devices should be selected as offloading targets.

Fig. 12. Where to offload from the cloud.

8. Conclusions and future work

Edge computing is an emerging paradigm to meet the ever-increasing computing and communication demands of billions of edge devices. Edge computing is promising for various application scenarios because it uses less network bandwidth and thus relieves data-center-side processing pressure, while enhancing service responsiveness and data privacy protection. Edge computing is becoming a feasible way to handle the data deluge occurring at the network edge and to gain insights that assist in real-time decision-making. For example, computation offloading plays a crucial role in edge computing in terms of network packet transmission and system responsiveness through dynamic task


partitioning between cloud data centers, edge servers, and edge devices. When tasks require higher computing capabilities and higher storage capacities, they are suitable to be executed in cloud data centers.

In the edge computing environment, many devices are not as powerful as traditional desktop and server systems. Instead, they are resource constrained in terms of computing capability, storage capacity, and network connectivity. Moreover, since billions of edge devices are deployed in the edge computing environment, their energy consumption is crucial for both edge node lifetime and quality of service guarantees, especially for battery-powered devices or power-constrained edge nodes. Different from energy aware computing in server systems and cloud data centers, energy awareness in edge computing involves all operations conducted along the data's whole life cycle, including data generation, transmission, aggregation, storage, processing, etc. Therefore, energy aware computing is needed in all aspects of edge computing, including architecture, operating system, middleware, service provisioning, and computing offloading, which poses many challenges to edge computing.

In this paper, a thorough literature survey is conducted to reveal the state of the art of energy aware computing in edge computing. Various aspects of energy awareness are surveyed, including low power hardware design, architectural features oriented energy consumption optimization, middleware, and quality of service guarantee and enhancement. Moreover, energy aware computing offloading and resource scheduling approaches, and game theory based tradeoffs between system performance and overheads, are also reviewed.

Although energy aware edge computing has been investigated in various aspects and application domains, most of the existing work focuses on a single objective, such as low latency, data privacy, power saving, or energy efficiency. Therefore, there are many opportunities for the joint optimization of multiple objectives such as energy efficiency and low latency. For example, researchers have proposed novel architectures and middlewares to provide interoperability between different edge devices and resources in edge computing; however, operating system level energy awareness in edge computing is still challenging and open for research. Moreover, little work has been conducted on compiler level optimization for energy aware edge computing. Thirdly, energy efficient management of the heterogeneous hardware in edge computing is still an open challenge for systematic energy reduction. Last but not least, there is currently no usable benchmark that can evaluate the energy efficiency of heterogeneous edge computing architectures.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
[62] H. David, E. Gorbatov, U.R. Hanebutte, R. Khanna, C. Le, Rapl: memory software-defined networking: A survey, use cases, and future directions, IEEE
power estimation and capping, in: 2010 ACM/IEEE International Symposium Commun. Surv. Tutor. 19 (4) (2017) 2359–2391.
on Low-Power Electronics and Design (ISLPED), IEEE, 2010, pp. 189–194. [90] M. Roth, A. Luppold, H. Falk, Measuring and modeling energy consumption
[63] C. Imes, H. Zhang, K. Zhao, H. Hoffmann, Handing DVFS to Hardware: Using of embedded systems for optimizing compilers, in: Proceedings of the 21st
Power Capping to Control Software Performance, Technical Report TR-2018-03, International Workshop on Software and Compilers for Embedded Systems,
2018. ACM, 2018, pp. 86–89.
[64] H. Zhang, H. Hoffmann, Maximizing performance under a power cap: A [91] K. Muts, A. Luppold, H. Falk, Multi-criteria compiler-based optimization of
comparison of hardware, software, and hybrid techniques, ACM SIGARCH hard real-time systems, in: Proceedings of the 21st International Workshop on
Comput. Archit. News 44 (2) (2016) 545–559. Software and Compilers for Embedded Systems, SCOPES 2018, 2018, pp. 54–57.
C. Jiang, T. Fan, H. Gao et al. Computer Communications 151 (2020) 556–580
[92] L. Mukhanov, D.S. Nikolopoulos, B.R. de Supinski, Alea: fine-grain energy profiling with basic block sampling, in: 2015 International Conference on Parallel Architecture and Compilation (PACT), IEEE, 2015, pp. 87–98.
[93] K. Georgiou, S. Kerrison, Z. Chamski, K. Eder, Energy transparency for deeply embedded programs, ACM Transactions on Architecture and Code Optimization (TACO) 14 (1) (2017) 8.
[94] J.B. Schoon, M.W. Moeller, R.C. Redondo, R. McMahan, Adaptable interface for a mobile computing device, 2018, U.S. Patent 9,924,006.
[95] X.S. Le, J.-C. Le Lann, L. Lagadec, L. Fabresse, N. Bouraqadi, J. Laval, Cardin: An agile environment for edge computing on reconfigurable sensor networks, in: 2016 International Conference on Computational Science and Computational Intelligence (CSCI), IEEE, 2016, pp. 168–173.
[96] S.K. Venkata, I. Ahn, D. Jeon, A. Gupta, C. Louie, S. Garcia, S. Belongie, M.B. Taylor, Sd-vbs: The san diego vision benchmark suite, in: 2009 IEEE International Symposium on Workload Characterization (IISWC), IEEE, 2009, pp. 55–64.
[97] R.S. Wallace, M.D. Howard, Hba vision architecture: built and benchmarked, IEEE Trans. Pattern Anal. Mach. Intell. 11 (3) (1989) 227–232.
[98] J. Clemons, H. Zhu, S. Savarese, T. Austin, Mevbench: A mobile computer vision benchmarking suite, in: 2011 IEEE International Symposium on Workload Characterization (IISWC), IEEE, 2011, pp. 91–102.
[99] https://www.eembc.org/coremark/, Accessed on: 13 June 2019.
[100] L. Nardi, B. Bodin, M.Z. Zia, J. Mawer, A. Nisbet, P.H. Kelly, A.J. Davison, M. Luján, M.F. O'Boyle, G. Riley, et al., Introducing slambench, a performance and accuracy benchmarking methodology for slam, in: 2015 IEEE International Conference on Robotics and Automation (ICRA), IEEE, 2015, pp. 5783–5790.
[101] Y. Wang, S. Liu, X. Wu, W. Shi, Cavbench: A benchmark suite for connected and autonomous vehicles, in: 2018 IEEE/ACM Symposium on Edge Computing (SEC), IEEE, 2018, pp. 30–42.
[102] H. Dubey, J. Yang, N. Constant, A.M. Amiri, Q. Yang, K. Makodiya, Fog data: Enhancing telehealth big data through fog computing, in: Proceedings of the ASE Bigdata & Socialinformatics 2015, ACM, 2015, p. 14.
[103] S. Alonso-Monsalve, F. García-Carballeira, A. Calderón, Fog computing through public-resource computing and storage, in: 2017 Second International Conference on Fog and Mobile Edge Computing (FMEC), IEEE, 2017, pp. 81–87.
[104] J. Al-Badarneh, Y. Jararweh, M. Al-Ayyoub, M. Al-Smadi, R. Fontes, Software defined storage for cooperative mobile edge computing systems, in: 2017 Fourth International Conference on Software Defined Systems (SDS), IEEE, 2017, pp. 174–179.
[105] C. Pahl, S. Helmer, L. Miori, J. Sanin, B. Lee, A container-based edge cloud PAAS architecture based on raspberry pi clusters, in: 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), IEEE, 2016, pp. 117–124.
[106] C. Jiang, D. Ou, Y. Wang, Y. Li, J. Zhang, J. Wan, B. Luo, W. Shi, Energy efficiency comparison of hypervisors, Sustain. Comput.: Inf. Syst. 22 (2019) 311–321.
[107] C. Jiang, G. Han, J. Lin, G. Jia, W. Shi, J. Wan, Characteristics of co-allocated online services and batch jobs in internet data centers: A case study from alibaba cloud, IEEE Access 7 (2019) 22495–22508.
[108] C. Bienia, S. Kumar, J.P. Singh, K. Li, The parsec benchmark suite: Characterization and architectural implications, in: Proceedings of the 17th International Conference on Parallel Architectures and Compilation Techniques, ACM, 2008, pp. 72–81.
[109] P.R. Luszczek, D.H. Bailey, J.J. Dongarra, J. Kepner, R.F. Lucas, R. Rabenseifner, D. Takahashi, The HPC challenge (HPCC) benchmark suite, in: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, Vol. 213, Citeseer, 2006.
[110] L. Wang, J. Zhan, C. Luo, Y. Zhu, Q. Yang, Y. He, W. Gao, Z. Jia, Y. Shi, S. Zhang, et al., Bigdatabench: A big data benchmark suite from internet services, in: 2014 IEEE 20th International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2014, pp. 488–499.
[111] J.L. Henning, Spec cpu2006 benchmark descriptions, ACM SIGARCH Comput. Archit. News 34 (4) (2006) 1–17.
[112] Tianshu Hao, Yunyou Huang, Xu Wen, Wanling Gao, Fan Zhang, Chen Zheng, Lei Wang, Hainan Ye, Kai Hwang, Zujie Ren, Jianfeng Zhan, Edge AIBench: Towards comprehensive end-to-end edge computing benchmarking, in: 2018 BenchCouncil International Symposium on Benchmarking, Measuring and Optimizing (Bench18).
[113] Anirban Das, Stacy Patterson, Mike P. Wittie, EdgeBench: Benchmarking edge computing platforms, in: 2018 IEEE/ACM International Conference on Utility and Cloud Computing Companion (UCC Companion).
[114] G. Wu, J. Chen, W. Bao, X. Zhu, W. Xiao, J. Wang, L. Liu, Meccas: Collaborative storage algorithm based on alternating direction method of multipliers on mobile edge cloud, in: 2017 IEEE International Conference on Edge Computing (EDGE), IEEE, 2017, pp. 40–46.
[115] J. Cao, L. Xu, R. Abdallah, W. Shi, EdgeOSH: a home operating system for internet of everything, in: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), IEEE, 2017, pp. 1756–1764.
[116] Q. Zhang, Y. Wang, X. Zhang, L. Liu, X. Wu, W. Shi, H. Zhong, Openvdap: An open vehicular data analytics platform for CAVs, in: 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), IEEE, 2018, pp. 1310–1320.
[117] Z. Xu, X. Peng, L. Zhang, D. Li, N. Sun, The 𝛷-stack for smart web of things, in: Proceedings of the Workshop on Smart Internet of Things, ACM, 2017, p. 10.
[118] https://github.com/rancher/k3os.
[119] O. Consortium, et al., Openfog reference architecture for fog computing, in: Architecture Working Group, 2017.
[120] Cloud IoT core, 2019, https://cloud.google.com/iot-core/, accessed 13 June 2019.
[121] Y. Kang, J. Hauswald, C. Gao, A. Rovinski, T. Mudge, J. Mars, L. Tang, Neurosurgeon: Collaborative intelligence between the cloud and mobile edge, ACM SIGARCH Comput. Archit. News 45 (1) (2017) 615–629.
[122] D. Zhang, Y. Ma, C. Zheng, Y. Zhang, X.S. Hu, D. Wang, Cooperative-competitive task allocation in edge computing for delay-sensitive social sensing, in: 2018 IEEE/ACM Symposium on Edge Computing (SEC), IEEE, 2018, pp. 243–259.
[123] Z. Li, X. Peng, L. Chao, Z. Xu, Everylite: A lightweight scripting language for micro tasks in iot systems, in: 2018 IEEE/ACM Symposium on Edge Computing (SEC), IEEE, 2018, pp. 381–386.
[124] W. Zhang, Y. Wen, D.O. Wu, Energy-efficient scheduling policy for collaborative execution in mobile cloud computing, in: 2013 Proceedings IEEE Infocom, IEEE, 2013, pp. 190–194.
[125] J. Kwak, Y. Kim, J. Lee, S. Chong, Dream: Dynamic resource and task allocation for energy minimization in mobile cloud systems, IEEE J. Sel. Areas Commun. 33 (12) (2015) 2510–2523.
[126] C. Liang, Y. He, F.R. Yu, N. Zhao, Energy-efficient resource allocation in software-defined mobile networks with mobile edge computing and caching, in: 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), IEEE, 2017, pp. 121–126.
[127] H. Pang, K.-L. Tan, Authenticating query results in edge computing, in: Proceedings. 20th International Conference on Data Engineering, IEEE, 2004, pp. 560–571.
[128] M.B. Mollah, M.A.K. Azad, A. Vasilakos, Secure data sharing and searching at the edge of cloud-assisted internet of things, IEEE Cloud Comput. 4 (1) (2017) 34–42.
[129] M. Sabt, M. Achemlal, A. Bouabdallah, Trusted execution environment: what it is, and what it is not, in: 2015 IEEE Trustcom/BigDataSE/ISPA, Vol. 1, IEEE, 2015, pp. 57–64.
[130] Z. Ning, F. Zhang, W. Shi, W. Shi, Position paper: Challenges towards securing hardware-assisted execution environments, in: Proceedings of the Hardware and Architectural Support for Security and Privacy, ACM, 2017, p. 6.
[131] Guangxia Li, Peilin Zhao, Xiao Lu, Jia Liu, Yulong Shen, Data analytics for fog computing by distributed online learning with asynchronous update, in: Proceedings of IEEE International Conference on Communications (ICC 2019), Shanghai, China, 2019.
[132] Ghulam Mujtaba, Muhammad Tahir, Muhammad Hanif Soomro, Energy efficient data encryption techniques in smartphones, Wirel. Pers. Commun. 106 (4) (2019) 2023–2035.
[133] Jong Min Kim, Hong Sub Lee, Junmin Yi, Minho Park, Power adaptive data encryption for energy-efficient and secure communication in solar-powered wireless sensor networks, J. Sensors (2016) 2678269.
[134] Tao Zhang, Lele Zheng, Yongzhi Wang, Yulong Shen, Ning Xi, Jianfeng Ma, Jianming Yong, Trustworthy service composition with secure data transmission in sensor networks, World Wide Web 21 (1) (2018) 185–200.
[135] Xinbin Li, Chao Wang, Zijun Yang, Lei Yan, Song Han, Energy-efficient and secure transmission scheme based on chaotic compressive sensing in underwater wireless sensor networks, Digit. Signal Process. 81 (2018) 129–137.
[136] Yongzhi Wang, Yulong Shen, Cuicui Su, Ke Cheng, Yibo Yang, ANter Faree, Yao Liu, CFHider: Control flow obfuscation with Intel SGX, in: Proceedings of IEEE International Conference on Computer Communications (INFOCOM 2019), Paris, France, 2019.
[137] Yang Hu, John C.S. Lui, Wenjun Hu, Xiaobo Ma, Jianfeng Li, Xiao Liang, Taming energy cost of disk encryption software on data-intensive mobile devices, Future Gener. Comput. Syst. http://dx.doi.org/10.1016/j.future.2017.09.025.
[138] Yulong Shen, Tao Zhang, Yongzhi Wang, Hua Wang, Xiaohong Jiang, MicroThings: A generic IoT architecture for flexible data aggregation and scalable service cooperation, IEEE Commun. Mag. 55 (9) (2017) 86–93.
[139] Y. Qiu, C. Jiang, Y. Wang, D. Ou, Y. Li, J. Wan, Energy aware virtual machine scheduling in data centers, Energies 12 (4) (2019) 646.
[140] Q. Zhang, X. Zhang, Q. Zhang, W. Shi, H. Zhong, Firework: Big data sharing and processing in collaborative edge environment, in: 2016 Fourth IEEE Workshop on Hot Topics in Web Systems and Technologies (HotWeb), IEEE, 2016, pp. 20–25.
[141] M. Ryden, K. Oh, A. Chandra, J. Weissman, Nebula: Distributed edge cloud for data intensive computing, in: 2014 IEEE International Conference on Cloud Engineering, IEEE, 2014, pp. 57–66.
[142] K. Kaur, T. Dhand, N. Kumar, S. Zeadally, Container-as-a-service at the edge: Trade-off between energy efficiency and service availability at fog nano data centers, IEEE Wirel. Commun. 24 (3) (2017) 48–56.
[143] T. Rausch, Message-oriented middleware for edge computing applications, in: Proceedings of the 18th Doctoral Symposium of the 18th International Middleware Conference, ACM, 2017, pp. 3–4.
[144] M. Song, K. Zhong, J. Zhang, Y. Hu, D. Liu, W. Zhang, J. Wang, T. Li, In-situ AI: Towards autonomous and incremental deep learning for IoT systems, in: 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2018, pp. 92–103.
[145] S. Han, J. Kang, H. Mao, Y. Hu, X. Li, Y. Li, D. Xie, H. Luo, S. Yao, Y. Wang, et al., Ese: Efficient speech recognition engine with sparse lstm on FPGA, in: Proceedings of the 2017 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, ACM, 2017, pp. 75–84.
[146] B.I. Ismail, et al., Evaluation of Docker as edge computing platform, in: Proc. IEEE Conf. Open Syst. (ICOS), 2015, pp. 130–135.
[147] K. Ha, Y. Abe, T. Eiszler, Z. Chen, W. Hu, B. Amos, R. Upadhyaya, P. Pillai, M. Satyanarayanan, You can teach elephants to dance: agile VM handoff for edge computing, in: Proceedings of the Second ACM/IEEE Symposium on Edge Computing, ACM, 2017, p. 12.
[148] R. Petrolo, R. Morabito, V. Loscrì, N. Mitton, The design of the gateway for the cloud of things, Ann. Telecommun. 72 (1) (2016) 31–40.
[149] R. Morabito, N. Beijar, Enabling data processing at the network edge through lightweight virtualization technologies, in: Proc. 13th Annu. IEEE Int. Conf. Sens. Commun. Netw. Workshops (SECON Workshops), 2016, pp. 1–6.
[150] L. Ma, S. Yi, Q. Li, Efficient service handoff across edge servers via docker container migration, in: Proceedings of the Second ACM/IEEE Symposium on Edge Computing, ACM, 2017, p. 11.
[151] B.I. Ismail, E.M. Goortani, M.B. Ab Karim, W.M. Tat, S. Setapa, J.Y. Luke, O.H. Hoe, Evaluation of docker as edge computing platform, in: 2015 IEEE Conference on Open Systems (ICOS), IEEE, 2015, pp. 130–135.
[152] P. Bellavista, A. Zanni, Feasibility of fog computing deployment based on docker containerization over raspberry pi, in: Proceedings of the 18th International Conference on Distributed Computing and Networking, ACM, 2017, p. 16.
[153] Aufs, 2019, https://wiki.gentoo.org/wiki/Aufs, Accessed on: 13 June 2019.
[154] B. Confais, A. Lebre, B. Parrein, An object store service for a fog/edge computing infrastructure based on IPFS and a scale-out NAS, in: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), IEEE, 2017, pp. 41–50.
[155] J. Benet, Ipfs-content addressed, versioned, p2p file system, 2014, arXiv preprint arXiv:1407.3561.
[156] S. Yi, Z. Qin, Q. Li, Security and privacy issues of fog computing: A survey, in: International Conference on Wireless Algorithms, Systems, and Applications, Springer, 2015, pp. 685–695.
[157] Y. Wang, L. Liu, C. Su, J. Ma, L. Wang, Y. Yang, Y. Shen, G. Li, T. Zhang, X. Dong, Cryptsqlite: Protecting data confidentiality of SQLite with Intel SGX, in: 2017 International Conference on Networking and Network Applications (NaNA), IEEE, 2017, pp. 303–308.
[158] A.H. Ngu, M. Gutierrez, V. Metsis, S. Nepal, Q.Z. Sheng, IoT middleware: A survey on issues and enabling technologies, IEEE Internet Things J. 4 (1) (2016) 1–20.
[159] M.A. Razzaque, M. Milojevic-Jevric, A. Palade, S. Clarke, Middleware for internet of things: a survey, IEEE Internet Things J. 3 (1) (2015) 70–95.
[160] A.R. Ribeiro, F. Silva, L.C. Freitas, J.C. Costa, C.R. Frances, Sensorbus: a middleware model for wireless sensor networks, in: Proceedings of the 3rd International IFIP/ACM Latin American Conference on Networking, ACM, 2005, pp. 1–9.
[161] A. Boulis, C.-C. Han, R. Shea, M.B. Srivastava, Sensorware: Programming sensor networks beyond code update and querying, Pervasive Mob. Comput. 3 (4) (2007) 386–412.
[162] P. Evensen, H. Meling, Sensewrap: A service-oriented middleware with sensor virtualization and self-configuration, in: 2009 International Conference on Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), IEEE, 2009, pp. 261–266.
[163] M. Eisenhauer, P. Rosengren, P. Antolin, A development platform for integrating wireless devices and sensors into ambient intelligence systems, in: 2009 6th IEEE Annual Communications Society Conference on Sensor, Mesh and Ad Hoc Communications and Networks Workshops, IEEE, 2009, pp. 1–3.
[164] Liangkai Liu, Jiamin Chen, Marco Brocanelli, Weisong Shi, E2M: An energy-efficient middleware for computer vision applications on autonomous mobile robots, in: Proceedings of the Fourth ACM/IEEE Symposium on Edge Computing (SEC), November 7–9, 2019, Arlington, VA, USA.
[165] S. Li, S. Mishra, Optimizing power consumption in multicore smartphones, J. Parallel Distrib. Comput. 95 (2016) 124–137.
[166] B. Luo, S. Tan, Z. Yu, W. Shi, Edgebox: Live edge video analytics for near real-time event detection, in: 2018 IEEE/ACM Symposium on Edge Computing (SEC), IEEE, 2018, pp. 347–348.
[167] G. Orsini, D. Bade, W. Lamersdorf, Cloudaware: A context-adaptive middleware for mobile edge and cloud computing applications, in: 2016 IEEE 1st International Workshops on Foundations and Applications of Self* Systems (FAS*W), IEEE, 2016, pp. 216–221.
[168] B. Aiken, J. Strassner, B. Carpenter, I. Foster, C. Lynch, J. Mambretti, R. Moore, B. Teitelbaum, Network Policy and Services: A Report of a Workshop on Middleware, Tech. Rep., 2000.
[169] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, L. Fei-Fei, Large-scale video classification with convolutional neural networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp. 1725–1732.
[170] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, G. Toderici, Beyond short snippets: Deep networks for video classification, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 4694–4702.
[171] H. Cao, M. Wachowicz, S. Cha, Developing an edge computing platform for real-time descriptive analytics, in: 2017 IEEE International Conference on Big Data (Big Data), IEEE, 2017, pp. 4546–4554.
[172] S. Nastic, T. Rausch, O. Scekic, S. Dustdar, M. Gusev, B. Koteska, M. Kostoska, B. Jakimovski, S. Ristov, R. Prodan, A serverless real-time data analytics platform for edge computing, IEEE Internet Comput. 21 (4) (2017) 64–71.
[173] M. Satyanarayanan, P. Simoens, Y. Xiao, P. Pillai, Z. Chen, K. Ha, W. Hu, B. Amos, Edge analytics in the internet of things, IEEE Pervasive Comput. 14 (2) (2015) 24–31.
[174] O. Skarlat, M. Nardelli, S. Schulte, S. Dustdar, Towards QoS-aware fog service placement, in: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), IEEE, 2017, pp. 89–96.
[175] M.I. Naas, P.R. Parvedy, J. Boukhobza, L. Lemarchand, Ifogstor: an IoT data placement strategy for fog infrastructure, in: 2017 IEEE 1st International Conference on Fog and Edge Computing (ICFEC), IEEE, 2017, pp. 97–104.
[176] L. Gu, J. Cai, D. Zeng, Y. Zhang, H. Jin, W. Dai, Energy efficient task allocation and energy scheduling in green energy powered edge computing, Future Gener. Comput. Syst. 95 (2019) 89–99.
[177] M. Dabbagh, B. Hamdaoui, M. Guizani, A. Rayes, Energy-efficient cloud resource management, in: Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, IEEE, 2014, pp. 386–391.
[178] I. AlQerm, B. Shihada, Enhanced machine learning scheme for energy efficient resource allocation in 5G heterogeneous cloud radio access networks, in: 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), 2017, pp. 1–7.
[179] N. Vasić, D. Novaković, S. Miučin, D. Kostić, R. Bianchini, Dejavu: accelerating resource allocation in virtualized environments, in: ACM SIGARCH Computer Architecture News, Vol. 40, no. 1, ACM, 2012, pp. 423–436.
[180] T.V.T. Duy, Y. Sato, Y. Inoguchi, Performance evaluation of a green scheduling algorithm for energy savings in cloud computing, in: Parallel & Distributed Processing, Workshops and PhD Forum (IPDPSW), 2010 IEEE International Symposium on, IEEE, 2010, pp. 1–8.
[181] J.L. Berral, Í. Goiri, R. Nou, F. Julià, J. Guitart, R. Gavaldà, J. Torres, Towards energy-aware scheduling in data centers using machine learning, in: Proceedings of the 1st International Conference on Energy-Efficient Computing and Networking, ACM, 2010, pp. 215–224.
[182] J.L. Berral, R. Gavalda, J. Torres, Adaptive scheduling on power-aware managed data-centers using machine learning, in: Proceedings of the 2011 IEEE/ACM 12th International Conference on Grid Computing, IEEE Computer Society, 2011, pp. 66–73.
[183] S. Jiang, S.R. Priya, N. Elango, J. Clay, R. Sridhar, An energy efficient in-memory computing machine learning classifier scheme, in: 2019 32nd International Conference on VLSI Design and 2019 18th International Conference on Embedded Systems (VLSID), Delhi, NCR, India, 2019, pp. 157–162.
[184] C. Wu, D. Brooks, K. Chen, D. Chen, S. Choudhury, M. Dukhan, K. Hazelwood, E. Isaac, Y. Jia, B. Jia, Machine learning at facebook: Understanding inference at the edge, in: 2019 IEEE International Symposium on High Performance Computer Architecture (HPCA), 2019, pp. 331–344.
[185] T. Chen, T. Moreau, Z. Jiang, L. Zheng, E. Yan, M. Cowan, H. Shen, L. Wang, Y. Hu, L. Ceze, C. Guestrin, A. Krishnamurthy, TVM: An automated end-to-end optimizing compiler for deep learning, 2018, https://arxiv.org/abs/1802.04799.
[186] N. Rotem, J. Fix, S. Abdulrasool, S. Deng, J.H. Roman Dzhabarov, R. Levenstein, B. Maher, S. Nadathur, J. Olesen, J. Park, A. Rakhov, M. Smelyanskiy, Glow: Graph lowering compiler techniques for neural networks, 2018, https://arxiv.org/abs/1805.00907.
[187] Google, XLA is a compiler that optimizes TensorFlow computations. https://www.tensorflow.org/performance/xla/.
[188] Apple Core ML, Core ML: Integrate machine learning models into your app. https://developer.apple.com/documentation/coreml?changes=_8.
[189] NNPACK, Acceleration package for neural networks on multi-core cpus. https://github.com/Maratyszcza/NNPACK.
[190] M. Dukhan, Y. Wu, H. Lu, QNNPACK: open source library for optimized mobile deep learning. https://code.fb.com/ml-applications/qnnpack/.
[191] N. Balasubramanian, A. Balasubramanian, A. Venkataramani, Energy consumption in mobile phones: A measurement study and implications for network applications, in: Proc. ACM SIGCOMM Conf. Internet Meas. Conf., 2009, pp. 280–293.
[192] A. Sharma, V. Navda, R. Ramjee, V.N. Padmanabhan, E.M. Belding, Cool-Tether: Energy efficient on-the-fly wifi hot-spots using mobile phones, in: Proc. ACM Emerging Netw. Exp. Technol., 2009, pp. 109–120.
[193] Z. Tang, S. Guo, P. Li, T. Miyazaki, H. Jin, X. Liao, Energy-efficient transmission scheduling in mobile phones using machine learning and participatory sensing, IEEE Trans. Veh. Technol. 64 (7) (2015) 3167–3176.
[194] A. Kumar, S. Goyal, M. Varma, Resource-efficient machine learning in 2 KB RAM for the internet of things, in: Proceedings of the 34th International Conference on Machine Learning - Volume 70, 2017, pp. 1935–1944.
[195] X. Zhang, A. Ramachandran, C. Zhuge, D. He, W. Zuo, Z. Cheng, K. Rupnow, D. Chen, Machine learning on FPGAs to face the IoT revolution, in: Proceedings of the 36th International Conference on Computer-Aided Design, 2017, pp. 819–826.
[196] G. Anastasi, M. Conti, M.D. Francesco, A. Passarella, Energy conservation in wireless sensor networks: A survey, Ad Hoc Netw. 7 (2009) 537–568.
[197] M.A. Razzaque, C. Bleakley, S. Dobson, Compression in wireless sensor networks: A survey and comparative evaluation, ACM Trans. Sensor Netw. 10 (2013) 5.
[198] H. Li, K. Ota, M. Dong, Learning IoT in edge: Deep learning for the internet of things with edge computing, IEEE Netw. 32 (2018) 96–101.
[199] N.D. Lane, P. Georgiev, L. Qendro, Deepear: Robust smartphone audio sensing in unconstrained acoustic environments using deep learning, in: Proc. 2015 ACM Int'l. Joint Conf. Pervasive and Ubiquitous Computing, 2015, pp. 283–294.
[200] H. Harb, A. Makhoul, C.A. Jaoude, En-route data filtering technique for maximizing wireless sensor network lifetime, in: 2018 14th International Wireless Communications & Mobile Computing Conference (IWCMC), 2018, pp. 298–303.
[201] J. Azar, A. Makhoul, R. Darazi, J. Demerjian, R. Couturier, On the performance of resource-aware compression techniques for vital signs data in wireless body sensor networks, in: 2018 IEEE Middle East and North Africa Communications Conference (MENACOMM), 2018, pp. 1–6.
[202] J. Azar, R. Darazi, C. Habib, A. Makhoul, J. Demerjian, Using DWT lifting scheme for lossless data compression in wireless body sensor networks, in: Communications & Mobile Computing Conference (IWCMC), 2018, pp. 1465–1470.
[203] J. Azar, A. Makhoul, M. Barhamgi, R. Couturier, An energy efficient IoT data compression approach for edge machine learning, Future Gener. Comput. Syst. 96 (2019) 168–175.
[204] Y. Wang, X. Dai, J.M. Wang, B. Bensaou, A reinforcement learning approach to energy efficiency and QoS in 5G wireless networks, IEEE J. Sel. Areas Commun. 37 (6) (2019) 1413–1423.
[205] Q. Zeng, Y. Du, K.K. Leung, K. Huang, Energy-efficient radio resource allocation for federated edge learning, 2019, arXiv preprint arXiv:1907.06040.
[206] Y. Liu, C. He, X. Li, C. Zhang, C. Tian, Power allocation schemes based on machine learning for distributed antenna systems, IEEE Access 7 (2019) 20577–20584.
[207] C. He, Y. Zhou, G. Qian, X. Li, D. Feng, Energy efficient power allocation based on machine learning generated clusters for distributed antenna systems, IEEE Access 7 (2019) 59575–59584.
[208] K. Thangaramya, K. Kulothungan, R. Logambigai, M. Selvi, S. Ganapathy, A. Kannan, Energy aware cluster and neuro-fuzzy based routing algorithm for wireless sensor networks in IoT, Comput. Netw. 151 (2019) 211–223.
[209] T. Hu, Y. Fei, QELAR: A machine-learning-based adaptive routing protocol for energy-efficient and lifetime-extended underwater sensor networks, IEEE Trans. Mob. Comput. 9 (6) (2010) 796–809.
[210] C. Jiang, Y. Qiu, H. Gao, T. Fan, K. Li, J. Wan, An edge computing platform for intelligent operational monitoring in internet data centers, IEEE Access 7 (2019) 133375–133387.
[211] J. Gao, Machine learning applications for data center optimization, 2014.
[212] M. Demirci, A survey of machine learning applications for energy-efficient resource management in cloud computing environments, in: 2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA), 2015, pp. 1185–1190.
[213] C. Jiang, L. Duan, C. Liu, J. Wan, L. Zhou, VRAA: virtualized resource auction and allocation based on incentive and penalty, Clust. Comput. 16 (2013) 639–650.
[214] C. Jiang, X. Cheng, H. Gao, X. Zhou, J. Wan, Toward computation offloading in edge computing: A survey, IEEE Access 7 (2019) 131543–131558.
[215] C. Jiang, Y. Wang, D. Ou, B. Luo, W. Shi, Energy proportional servers: Where are we in 2016?, in: 2017 IEEE 37th International Conference on Distributed Computing Systems (ICDCS), IEEE, 2017, pp. 1649–1660.
[216] Y. Mao, J. Zhang, K.B. Letaief, Dynamic computation offloading for mobile-edge computing with energy harvesting devices, IEEE J. Sel. Areas Commun. 34 (12) (2016) 3590–3605.
[217] S. Bi, Y. Zhang, Computation rate maximization for wireless powered mobile-edge computing with binary computation offloading, IEEE Trans. Wirel. Commun. 17 (6) (2018) 4177–4190.
[218] C. You, Y. Zeng, R. Zhang, K. Huang, Asynchronous mobile-edge computation offloading: energy-efficient resource management, IEEE Trans. Wireless Commun. 17 (11) (2018) 7590–7605.
[219] X. Lyu, H. Tian, C. Sengul, P. Zhang, Multiuser joint task offloading and resource optimization in proximate clouds, IEEE Trans. Veh. Technol. 66 (4) (2016) 3435–3447.
[220] K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, Y. Zhang, Energy-efficient offloading for mobile edge computing in 5G heterogeneous networks, IEEE Access 4 (2016) 5896–5907.
[221] H. Sun, F. Zhou, R.Q. Hu, Joint offloading and computation energy efficiency maximization in a mobile edge computing system, IEEE Trans. Veh. Technol. 68 (3) (2019) 3052–3056.
[222] Z. Zhou, J. Feng, Z. Chang, X. Shen, Energy-efficient edge computing service provisioning for vehicular networks: A consensus ADMM approach, IEEE Trans. Veh. Technol. 68 (2019) 5087–5099.
[223] Q. Tang, H. Lyu, G. Han, J. Wang, K. Wang, Partial offloading strategy for mobile edge computing considering mixed overhead of time and energy, Neural Comput. Appl. (2019).
[224] J. Wang, W. Wu, Z. Liao, A.K. Sangaiah, R. Simon Sherratt, An energy-efficient off-loading scheme for low latency in collaborative edge computing, IEEE Access 7 (2019) 149182–149190.
[225] E. Meskar, T.D. Todd, D. Zhao, G. Karakostas, Energy aware offloading for competing users on a shared communication channel, IEEE Trans. Mob. Comput. 16 (1) (2016) 87–96.
[226] K. Zhang, Y. Mao, S. Leng, S. Maharjan, Y. Zhang, Optimal delay constrained offloading for vehicular edge computing networks, in: 2017 IEEE International Conference on Communications (ICC), IEEE, 2017, pp. 1–6.
[227] M.B. Terefe, H. Lee, N. Heo, G.C. Fox, S. Oh, Energy-efficient multisite offloading policy using Markov decision process for mobile cloud computing, Pervasive Mob. Comput. 27 (2016) 75–89.